R-…: Draw the compact representation of the suffix trie for the string "minimize minime".
R-…: What is the longest prefix of the string "cgtacgttcgtacg" that is also a suffix of this string?
R-…: Draw the frequency array and Huffman tree for the following string: "dogs do not spot hot pots or cats".
R-…: Show the longest common subsequence array for the two strings X = "skullandbones" and Y = "lullabybabies". What is a longest common subsequence between these strings?

Creativity

C-…: Give an example of a text T of length n and a pattern P of length m that force the brute-force pattern matching algorithm to have a running time that is Ω(nm).
C-…: Give a justification of why the KMPFailureFunction method (Code Fragment …) runs in O(m) time on a pattern of length m.
C-…: Show how to modify the KMP string pattern matching algorithm so as to find every occurrence of a pattern string P that appears as a substring in T, while still running in O(n + m) time. (Be sure to catch even those matches that overlap.)
C-…: Let T be a text of length n, and let P be a pattern of length m. Describe an O(n + m)-time method for finding the longest prefix of P that is a substring of T.
C-…: Say that a pattern P of length m is a circular substring of a text T of length n if there is an index 0 ≤ i < m such that P = T[n − m + i .. n − 1] + T[0 .. i − 1], that is, if P is a (normal) substring of T or P is equal to the concatenation of a suffix of T and a prefix of T. Give an O(n + m)-time algorithm for determining whether P is a circular substring of T.
C-…: The KMP pattern matching algorithm can be modified to run faster on binary strings by redefining the failure function so that it uses the complement of the kth bit of the pattern. Describe how to modify the KMP algorithm to be able to take advantage of this new failure function, and also give a method for computing this failure function. Show that this method makes at most n comparisons between the text and the pattern (as opposed to the 2n comparisons needed by the standard KMP algorithm given in Section …).
C-…: Modify the simplified BM algorithm presented in this chapter, using ideas from the KMP algorithm, so that it runs in O(n + m) time.
C-…: Given a string X of length n and a string Y of length m, describe an O(n + m)-time algorithm for finding the longest prefix of X that is a suffix of Y.
C-…: Give an efficient algorithm for deleting a string from a standard trie, and analyze its running time.
C-…: Give an efficient algorithm for deleting a string from a compressed trie, and analyze its running time.
C-…: Describe an algorithm for constructing the compact representation of a suffix trie, given its noncompact representation, and analyze its running time.
C-…: Let T be a text string of length n. Describe an O(n)-time method for finding the longest prefix of T that is a substring of the reversal of T.
C-…: Describe an efficient algorithm to find the longest palindrome that is a suffix of a string T of length n. Recall that a palindrome is a string that is equal to its reversal. What is the running time of your method?
C-…: Given a sequence S = (x_0, x_1, x_2, …, x_{n−1}) of numbers, describe an O(n²)-time algorithm for finding a longest subsequence T = (x_{i_0}, x_{i_1}, …, x_{i_{k−1}}) of numbers such that i_j < i_{j+1} and x_{i_j} > x_{i_{j+1}}. That is, T is a longest decreasing subsequence of S.
C-…: Define the edit distance between two strings X and Y of length n and m, respectively, to be the number of edits that it takes to change X into Y. An edit consists of a character insertion, a character deletion, or a character replacement. For example, the strings "algorithm" and "rhythm" have edit distance 6. Design an O(nm)-time algorithm for computing the edit distance between X and Y.
C-…: Design a greedy algorithm for making change after someone buys some candy costing x cents and the customer gives the clerk $1. Your algorithm should try to minimize the number of coins returned. Show that your greedy algorithm returns the minimum number of coins if the coins have denominations $0.25, $0.10, $0.05, and $0.01. Give a set of denominations for which your algorithm may not return the minimum number of coins, and include an example where your algorithm fails.
C-…: Give an efficient algorithm for determining if a pattern P is a subsequence (not substring) of a text T. What is the running time of your algorithm?
C-…: Let X and Y be strings of length n and m, respectively. Define B(i, j) to be the length of the longest common substring of the suffix of length i in X and the suffix of length j in Y. Design an algorithm for computing all the values of B(i, j) for i = 1, …, n and j = 1, …, m.
C-…: Anna has just won a contest that allows her to take n pieces of candy out of a candy store for free. Anna is old enough to realize that some candy is expensive, costing dollars per piece, while other candy is cheap, costing pennies per piece. The jars of candy are numbered so that jar j has n_j pieces in it, with a price of c_j per piece. Design an O(n + m)-time algorithm that allows Anna to maximize the value of the pieces of candy she takes for her winnings. Show that your algorithm produces the maximum value for Anna.
C-…: Let three integer arrays, A, B, and C, be given, each of size n. Given an arbitrary integer x, design an O(n² log n)-time algorithm to determine if there exist numbers a in A, b in B, and c in C, such that x = a + b + c.
C-…: Give an O(n²)-time algorithm for the previous problem.

Projects

P-…: Perform an experimental analysis, using documents found on the Internet, of the efficiency (number of character comparisons performed) of the brute-force and KMP pattern matching algorithms for varying-length patterns.
P-…: Perform an experimental analysis, using documents found on the Internet, of the efficiency (number of character comparisons performed) of the brute-force and BM pattern matching algorithms for varying-length patterns.
P-…: Perform an experimental comparison of the relative speeds of the brute-force, KMP, and BM pattern matching algorithms. Document the time taken for coding up each of these algorithms as well as their relative running times on documents found on the Internet that are then searched using varying-length patterns.
P-…: Implement a compression and decompression scheme that is based on Huffman coding.
P-…: Create a class that implements a standard trie for a set of ASCII strings. The class should have a constructor that takes as argument a list of strings, and the class should have a method that tests whether a given string is stored in the trie.
P-…: Create a class that implements a compressed trie for a set of ASCII strings. The class should have a constructor that takes as argument a list of strings, and the class should have a method that tests whether a given string is stored in the trie.
P-…: Create a class that implements a prefix trie for an ASCII string. The class should have a constructor that takes as argument a string and a method for pattern matching on the string.
P-…: Implement the simplified search engine described in Section … for the pages of a small Web site. Use all the words in the pages of the site as index terms, excluding stop words such as articles, prepositions, and pronouns.
P-…: Implement a search engine for the pages of a small Web site by adding a page-ranking feature to the simplified search engine described in Section …. Your page-ranking feature should return the most relevant pages first. Use all the words in the pages of the site as index terms, excluding stop words such as articles, prepositions, and pronouns.
P-…: Write a program that takes two character strings (which could be, for example, representations of DNA strands) and computes their edit distance, showing the corresponding pieces (see Exercise C-…).

Chapter Notes

The KMP algorithm is described by Knuth, Morris, and Pratt in their journal article […], and Boyer and Moore describe their algorithm in a journal article published the same year […]. In their article, however, Knuth et al. […] also prove that the BM algorithm runs in linear time. More recently, Cole […] shows that the BM algorithm makes at most 3n character comparisons in the worst case, and this bound is tight. All of the algorithms discussed above are also discussed in the book by Aho […], albeit in a more theoretical framework, including the methods for regular-expression pattern matching.
The reader interested in further study of string pattern matching algorithms is referred to the book by Stephen […] and the book by Aho […] and Crochemore and Lecroq […]. The trie was invented by Morrison […] and is discussed extensively in the classic Sorting and Searching book by Knuth […]. The name "Patricia" is short for "Practical Algorithm To Retrieve Information Coded In Alphanumeric" […]. McCreight […] shows how to construct suffix tries in linear time. An introduction to the field of information retrieval, which includes a discussion of search engines for the Web, is provided in the book by Baeza-Yates and Ribeiro-Neto […].

Graphs

Contents

The Graph Abstract Data Type
The Graph ADT
Data Structures for Graphs
The Edge List Structure
The Adjacency List Structure
The Adjacency Matrix Structure
Graph Traversals
Depth-First Search
Implementing Depth-First Search
Breadth-First Search
Directed Graphs
Transitive Closure
Directed Acyclic Graphs
Weighted Graphs
Shortest Paths
Dijkstra's Algorithm
Minimum Spanning Trees
Kruskal's Algorithm
The Prim-Jarnik Algorithm
Exercises
The Graph Abstract Data Type

A graph is a way of representing relationships that exist between pairs of objects. That is, a graph is a set of objects, called vertices, together with a collection of pairwise connections between them. By the way, this notion of a "graph" should not be confused with bar charts and function plots, as these kinds of "graphs" are unrelated to the topic of this chapter. Graphs have applications in a host of different domains, including mapping, transportation, electrical engineering, and computer networks.

Viewed abstractly, a graph G is simply a set V of vertices and a collection E of pairs of vertices from V, called edges. Thus, a graph is a way of representing connections or relationships between pairs of objects from some set V. Incidentally, some books use different terminology for graphs and refer to what we call vertices as nodes and what we call edges as arcs. We use the terms "vertices" and "edges."

Edges in a graph are either directed or undirected. An edge (u,v) is said to be directed from u to v if the pair (u,v) is ordered, with u preceding v. An edge (u,v) is said to be undirected if the pair (u,v) is not ordered. Undirected edges are sometimes denoted with set notation, as {u,v}, but for simplicity we use the pair notation (u,v), noting that in the undirected case (u,v) is the same as (v,u). Graphs are typically visualized by drawing the vertices as ovals or rectangles and the edges as segments or curves connecting pairs of ovals and rectangles. The following are some examples of directed and undirected graphs.

Example: We can visualize collaborations among the researchers of a certain discipline by constructing a graph whose vertices are associated with the researchers themselves, and whose edges connect pairs of vertices associated with researchers who have coauthored a paper or book (see Figure …). Such edges are undirected because coauthorship is a symmetric relation; that is, if A has coauthored something with B, then B necessarily has coauthored something with A.

Figure …: Graph of coauthorship among some authors.
Example: We can associate with an object-oriented program a graph whose vertices represent the classes defined in the program, and whose edges indicate inheritance between classes. There is an edge from a vertex v to a vertex u if the class for v extends the class for u. Such edges are directed because the inheritance relation only goes in one direction (that is, it is asymmetric).

If all the edges in a graph are undirected, then we say the graph is an undirected graph. Likewise, a directed graph, also called a digraph, is a graph whose edges are all directed. A graph that has both directed and undirected edges is often called a mixed graph. Note that an undirected or mixed graph can be converted into a directed graph by replacing every undirected edge (u,v) by the pair of directed edges (u,v) and (v,u). It is often useful, however, to keep undirected and mixed graphs represented as they are, for such graphs have several applications, such as that of the following example.

Example: A city map can be modeled by a graph whose vertices are intersections or dead-ends, and whose edges are stretches of streets without intersections. This graph has both undirected edges, which correspond to stretches of two-way streets, and directed edges, which correspond to stretches of one-way streets. Thus, in this way, a graph modeling a city map is a mixed graph.

Example: Physical examples of graphs are present in the electrical wiring and plumbing networks of a building. Such networks can be modeled as graphs, where each connector, fixture, or outlet is viewed as a vertex, and each uninterrupted stretch of wire or pipe is viewed as an edge. Such graphs are actually components of much larger graphs, namely the local power and water distribution networks. Depending on the specific aspects of these graphs that we are interested in, we may consider their edges as undirected or directed, for, in principle, water can flow in a pipe and current can flow in a wire in either direction.
The two vertices joined by an edge are called the end vertices (or endpoints) of the edge. If an edge is directed, its first endpoint is its origin and the other is the destination of the edge. Two vertices u and v are said to be adjacent if there is an edge whose end vertices are u and v. An edge is said to be incident on a vertex if the vertex is one of the edge's endpoints. The outgoing edges of a vertex are the directed edges whose origin is that vertex. The incoming edges of a vertex are the directed edges whose destination is that vertex. The degree of a vertex v, denoted deg(v), is the number of incident edges of v. The in-degree and out-degree of a vertex v are the number of the incoming and outgoing edges of v, and are denoted indeg(v) and outdeg(v), respectively.

Example: We can study air transportation by constructing a graph G, called a flight network, whose vertices are associated with airports, and whose edges are associated with flights (see Figure …). In graph G, the edges are directed because a given flight has a specific travel direction (from the origin airport to the destination airport). The endpoints of an edge e in G correspond respectively to the origin and destination for the flight corresponding to e. Two airports are adjacent in G if there is a flight that flies between them, and an edge e is incident upon a vertex v in G if the flight for e flies to or from the airport for v. The outgoing edges of a vertex v correspond to the outbound flights from v's airport, and the incoming edges correspond to the inbound flights to v's airport. Finally, the in-degree of a vertex v of G corresponds to the number of inbound flights to v's airport, and the out-degree of a vertex v in G corresponds to the number of outbound flights.

The definition of a graph refers to the group of edges as a collection, not a set, thus allowing for two undirected edges to have the same end vertices, and for two directed edges to have the same origin and the same destination. Such edges are called parallel edges or multiple edges. Parallel edges can be in a flight network (Example …), in which case multiple edges between the same pair of vertices could indicate different flights operating on the same route at different times of the day. Another special type of edge is one that connects a vertex to itself. Namely, we say that an edge (undirected or directed) is a self-loop if its two endpoints coincide. A self-loop may occur in a graph associated with a city map (Example …), where it would correspond to a "circle" (a curving street that returns to its starting point).

With few exceptions, graphs do not have parallel edges or self-loops. Such graphs are said to be simple. Thus, we can usually say that the edges of a simple graph are a set of vertex pairs (and not just a collection). Throughout this chapter we assume that a graph is simple unless otherwise specified.

Figure …: Example of a directed graph representing a flight network. The endpoints of edge UA 120 are LAX and ORD; hence, LAX and ORD are adjacent. The in-degree of DFW is 3, and the out-degree of DFW is 2.
Proposition: If G is a graph with m edges, then the sum of the degrees of its vertices is twice the number of edges; that is, Σ_v deg(v) = 2m, where the sum is taken over all vertices v of G.

Justification: An edge (u,v) is counted twice in the summation above, once by its endpoint u and once by its endpoint v. Thus, the total contribution of the edges to the degrees of the vertices is twice the number of edges.

Proposition: If G is a directed graph with m edges, then Σ_v indeg(v) = Σ_v outdeg(v) = m.

Justification: In a directed graph, an edge (u,v) contributes one unit to the out-degree of its origin u and one unit to the in-degree of its destination v. Thus, the total contribution of the edges to the out-degrees of the vertices is equal to the number of edges, and similarly for the in-degrees.

We next show that a simple graph with n vertices has O(n²) edges.

Proposition: Let G be a simple graph with n vertices and m edges. If G is undirected, then m ≤ n(n − 1)/2, and if G is directed, then m ≤ n(n − 1).

Justification: Suppose that G is undirected. Since no two edges can have the same endpoints and there are no self-loops, the maximum degree of a vertex in G is n − 1 in this case. Thus, by the first proposition above, m ≤ n(n − 1)/2. Now suppose that G is directed. Since no two edges can have the same origin and destination, and there are no self-loops, the maximum in-degree of a vertex in G is n − 1 in this case. Thus, by the second proposition above, m ≤ n(n − 1).

A path is a sequence of alternating vertices and edges that starts at a vertex and ends at a vertex such that each edge is incident to its predecessor and successor vertex. A cycle is a path with at least one edge that has its start and end vertices the same. We say that a path is simple if each vertex in the path is distinct, and we say that a cycle is simple if each vertex in the cycle is distinct, except for the first and last one. A directed path is a path such that all edges are directed and are traversed along their direction. A directed cycle is similarly defined. For example, in Figure …, (BOS, NW …, JFK, AA …, DFW) is a directed simple path, and (LAX, UA …, ORD, UA …, DFW, AA …, LAX) is a directed simple cycle. If a graph is simple, we may omit the edges in a path P or cycle C, as these are well defined, in which case P is a list of adjacent vertices and C is a cycle of adjacent vertices.

Example: Given a graph G representing a city map (see Example …), we can model a couple driving to dinner at a recommended restaurant as traversing a path through G. If they know the way, and don't accidentally go through the same intersection twice, then they traverse a simple path in G. Likewise, we can model the entire trip the couple takes, from their home to the restaurant and back, as a cycle. If they go home from the restaurant in a completely different way than how they went, not even going through the same intersection twice, then their entire round trip is a simple cycle. Finally, if they travel along one-way streets for their entire trip, we can model their night out as a directed cycle.

A subgraph of a graph G is a graph H whose vertices and edges are subsets of the vertices and edges of G, respectively. For example, in the flight network of Figure …, vertices BOS, JFK, and MIA, and edges AA … and DL … form a subgraph. A spanning subgraph of G is a subgraph of G that contains all the vertices of the graph G. A graph is connected if, for any two vertices, there is a path between them. If a graph G is not connected, its maximal connected subgraphs are called the connected components of G. A forest is a graph without cycles. A tree is a connected forest, that is, a connected graph without cycles. Note that this definition of a tree is somewhat different from the one given in Chapter …; namely, in the context of graphs, a tree has no root. Whenever there is ambiguity, the trees of Chapter … should be referred to as rooted trees, while the trees of this chapter should be referred to as free trees. The connected components of a forest are (free) trees. A spanning tree of a graph is a spanning subgraph that is a (free) tree.

Example: Perhaps the most talked about graph today is the Internet, which can be viewed as a graph whose vertices are computers and whose (undirected) edges represent connections between pairs of computers.
The computers and the connections between them in a single domain, like wiley.com, form a subgraph of the Internet. If this subgraph is connected, then two users on computers in this domain can send e-mail to one another without having their information packets ever leave their domain. Suppose the edges of this subgraph form a spanning tree. This implies that, if even a single connection goes down (for example, because someone pulls a communication cable out of the back of a computer in this domain), then this subgraph will no longer be connected.

There are a number of simple properties of trees, forests, and connected graphs.

Proposition: Let G be an undirected graph with n vertices and m edges.
- If G is connected, then m ≥ n − 1.
- If G is a tree, then m = n − 1.
- If G is a forest, then m ≤ n − 1.

We leave the justification of this proposition as an exercise.

The Graph ADT

As an abstract data type, a graph is a collection of elements that are stored at the graph's positions--its vertices and edges. Hence, we can store elements in a graph at either its edges or its vertices (or both). In Java, this means we can define Vertex and Edge interfaces that each extend the Position interface. Let us then introduce the following simplified graph ADT, which is suitable for vertex and edge positions in undirected graphs, that is, graphs whose edges are all undirected. Additional methods for dealing with directed edges are discussed in Section ….

vertices(): Return an iterable collection of all the vertices of the graph.
edges(): Return an iterable collection of all the edges of the graph.
incidentEdges(v): Return an iterable collection of the edges incident upon vertex v.
opposite(v, e): Return the end vertex of edge e distinct from vertex v; an error occurs if e is not incident on v.
endVertices(e): Return an array storing the end vertices of edge e.
areAdjacent(v, w): Test whether vertices v and w are adjacent.
replace(v, x): Replace the element stored at vertex v with x.
replace(e, x): Replace the element stored at edge e with x.
insertVertex(x): Insert and return a new vertex storing element x.
insertEdge(v, w, x): Insert and return a new undirected edge with end vertices v and w and storing element x.
removeVertex(v): Remove vertex v and all its incident edges and return the element stored at v.
removeEdge(e): Remove edge e and return the element stored at e.

There are several ways to realize the graph ADT. We explore three such ways in the next section.
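As a rough illustration of how this ADT might be captured in Java, the following is a minimal interface sketch; it is not the book's actual code, and the Position, Vertex, and Edge declarations shown here are assumptions made only so that the sketch is self-contained.

// Minimal position-style interfaces assumed for this sketch.
public interface Position<T> { T element(); }
public interface Vertex<V> extends Position<V> { }
public interface Edge<E> extends Position<E> { }

// A sketch of the simplified (undirected) graph ADT described above.
public interface Graph<V, E> {
  Iterable<Vertex<V>> vertices();                    // all vertices
  Iterable<Edge<E>> edges();                         // all edges
  Iterable<Edge<E>> incidentEdges(Vertex<V> v);      // edges incident on v
  Vertex<V> opposite(Vertex<V> v, Edge<E> e);        // end vertex of e other than v
  Vertex<V>[] endVertices(Edge<E> e);                // the two end vertices of e
  boolean areAdjacent(Vertex<V> v, Vertex<V> w);     // is there an edge (v,w)?
  V replace(Vertex<V> v, V x);                       // replace element at v
  E replace(Edge<E> e, E x);                         // replace element at e
  Vertex<V> insertVertex(V x);                       // add a vertex storing x
  Edge<E> insertEdge(Vertex<V> v, Vertex<V> w, E x); // add undirected edge (v,w)
  V removeVertex(Vertex<V> v);                       // remove v and its incident edges
  E removeEdge(Edge<E> e);                           // remove e
}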
Data Structures for Graphs

In this section, we discuss three popular ways of representing graphs, which are usually referred to as the edge list structure, the adjacency list structure, and the adjacency matrix. In all three representations, we use a collection to store the vertices of the graph. Regarding the edges, there is a fundamental difference between the first two structures and the latter. The edge list structure and the adjacency list structure only store the edges actually present in the graph, while the adjacency matrix stores a placeholder for every pair of vertices (whether there is an edge between them or not). As we will explain in this section, this difference implies that, for a graph G with n vertices and m edges, an edge list or adjacency list representation uses O(n + m) space, whereas an adjacency matrix representation uses O(n²) space.

The Edge List Structure

The edge list structure is possibly the simplest, though not the most efficient, representation of a graph G. In this representation, a vertex v of G storing an element o is explicitly represented by a vertex object. All such vertex objects are stored in a collection V, such as an array list or node list. If V is an array list, for example, then we naturally think of the vertices as being numbered.

Vertex Objects

The vertex object for a vertex v storing element o has instance variables for:
- A reference to o.
- A reference to the position (or entry) of the vertex object in collection V.

The distinguishing feature of the edge list structure is not how it represents vertices, however, but the way in which it represents edges. In this structure, an edge e of G storing an element o is explicitly represented by an edge object. The edge objects are stored in a collection E, which would typically be an array list or node list.

Edge Objects

The edge object for an edge e storing element o has instance variables for:
- A reference to o.
- References to the vertex objects associated with the endpoint vertices of e.
- A reference to the position (or entry) of the edge object in collection E.

Visualizing the Edge List Structure

We illustrate an example of the edge list structure for a graph G in Figure ….

Figure …: (a) A graph G; (b) schematic representation of the edge list structure for G. We visualize the elements stored in the vertex and edge objects with the element names, instead of with actual references to the element objects.

The reason this structure is called the edge list structure is that the simplest and most common implementation of the edge collection E is with a list. Even so, in order to be able to conveniently search for specific objects associated with edges, we may wish to implement E with a dictionary (whose entries store the element as the key and the edge as the value) in spite of our calling this the "edge list." We may also wish to implement the collection V as a dictionary for the same reason. Still, in keeping with tradition, we call this structure the edge list structure.
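To make the bookkeeping above concrete, here is a small, self-contained sketch of an edge list representation; it is not the book's implementation, and names such as EdgeListGraph are illustrative. It stores vertex and edge objects in two lists and finds incident edges by scanning the whole edge collection, which is exactly why incidentEdges and areAdjacent take O(m) time in this structure.

import java.util.ArrayList;
import java.util.List;

// A minimal edge list graph sketch: vertices and edges live in two lists,
// and each edge object keeps references to its two endpoint vertices.
public class EdgeListGraph<V, E> {
  public static class Vertex<V> {
    V element;
    Vertex(V element) { this.element = element; }
  }
  public static class Edge<E, V> {
    E element;
    Vertex<V> u, v;   // endpoint vertices
    Edge(Vertex<V> u, Vertex<V> v, E element) { this.u = u; this.v = v; this.element = element; }
  }

  private final List<Vertex<V>> vertices = new ArrayList<>();
  private final List<Edge<E, V>> edges = new ArrayList<>();

  public Vertex<V> insertVertex(V x) {                            // O(1)
    Vertex<V> v = new Vertex<>(x);
    vertices.add(v);
    return v;
  }

  public Edge<E, V> insertEdge(Vertex<V> u, Vertex<V> v, E x) {   // O(1)
    Edge<E, V> e = new Edge<>(u, v, x);
    edges.add(e);
    return e;
  }

  // O(m): we must scan every edge object to find those touching v.
  public List<Edge<E, V>> incidentEdges(Vertex<V> v) {
    List<Edge<E, V>> result = new ArrayList<>();
    for (Edge<E, V> e : edges)
      if (e.u == v || e.v == v) result.add(e);
    return result;
  }

  // O(m) for the same reason.
  public boolean areAdjacent(Vertex<V> v, Vertex<V> w) {
    for (Edge<E, V> e : edges)
      if ((e.u == v && e.v == w) || (e.u == w && e.v == v)) return true;
    return false;
  }
}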
The main feature of the edge list structure is that it provides direct access from the edges to the vertices they are incident upon. This allows us to define simple algorithms for methods endVertices(e) and opposite(v, e).

Performance of the Edge List Structure

One method that is inefficient for the edge list structure, however, is that of accessing the edges that are incident upon a vertex. Determining this set of edges requires an exhaustive inspection of all the edge objects in the collection E. That is, in order to determine which edges are incident to a vertex v, we must examine all the edges in the edge list and check, for each one, if it happens to be incident to v. Thus, method incidentEdges(v) runs in time proportional to the number of edges in the graph, not in time proportional to the degree of vertex v. In fact, even to check if two vertices v and w are adjacent, by the areAdjacent(v, w) method, requires that we search the entire edge collection looking for an edge with end vertices v and w. Moreover, since removing a vertex involves removing all of its incident edges, method removeVertex also requires a complete search of the edge collection E.

Table … summarizes the performance of the edge list structure implementation of a graph, under the assumption that collections V and E are realized with doubly linked lists (Section …).

Table …: Running times of the methods of a graph implemented with the edge list structure. The space used is O(n + m), where n is the number of vertices and m is the number of edges.

  Operation                               Time
  vertices                                O(n)
  edges                                   O(m)
  endVertices, opposite                   O(1)
  incidentEdges, areAdjacent              O(m)
  replace                                 O(1)
  insertVertex, insertEdge, removeEdge    O(1)
  removeVertex                            O(m)

Details for selected methods of the graph ADT are as follows:
- Methods vertices() and edges() are implemented by calling V.iterator() and E.iterator(), respectively.
- Methods incidentEdges and areAdjacent all take O(m) time, since to determine which edges are incident upon a vertex v we must inspect all edges.
- Since the collections V and E are lists implemented with a doubly linked list, we can insert vertices, and insert and remove edges, in O(1) time.
- The update method removeVertex(v) takes O(m) time, since it requires that we inspect all the edges to find and remove those incident upon v.

Thus, the edge list representation is simple but has significant limitations.

The Adjacency List Structure

The adjacency list structure for a graph G adds extra information to the edge list structure that supports direct access to the incident edges (and thus to the adjacent vertices) of each vertex. This approach allows us to use the adjacency list structure to implement several methods of the graph ADT much faster than what is possible with the edge list structure, even though both of these two representations use an amount of space proportional to the number of vertices and edges in the graph. The adjacency list structure includes all the structural components of the edge list structure plus the following:
- A vertex object v holds a reference to a collection I(v), called the incidence collection of v, whose elements store references to the edges incident on v.
- The edge object for an edge e with end vertices v and w holds references to the positions (or entries) associated with edge e in the incidence collections I(v) and I(w).

Traditionally, the incidence collection I(v) for a vertex v is a list, which is why we call this way of representing a graph the adjacency list structure. The adjacency list structure provides direct access both from the edges to the vertices and from the vertices to their incident edges. We illustrate the adjacency list structure of a graph G in Figure ….

Figure …: (a) A graph G; (b) schematic representation of the adjacency list structure of G. As in Figure …, we visualize the elements of the collections with names.

Performance of the Adjacency List Structure

All of the methods of the graph ADT that can be implemented with the edge list structure in O(1) time can also be implemented in O(1) time with the adjacency list structure, using essentially the same algorithms. In addition, being able to provide access between vertices and edges in both directions allows us to speed up the performance of a number of the graph methods by using an adjacency list structure instead of an edge list structure. Table … summarizes the performance of the adjacency list structure implementation of a graph, assuming that collections V and E and the incidence collections of the vertices are all implemented with doubly linked lists. For a vertex v, the space used by the incidence collection of v is proportional to the degree of v, that is, it is O(deg(v)). Thus, by Proposition …, the space requirement of the adjacency list structure is O(n + m).

Table …: Running times of the methods of a graph implemented with the adjacency list structure. The space used is O(n + m), where n is the number of vertices and m is the number of edges.

  Operation                               Time
  vertices                                O(n)
  edges                                   O(m)
  endVertices, opposite                   O(1)
  incidentEdges(v)                        O(deg(v))
  areAdjacent(v, w)                       O(min(deg(v), deg(w)))
  replace                                 O(1)
  insertVertex, insertEdge, removeEdge    O(1)
  removeVertex(v)                         O(deg(v))

In contrast to the edge-list way of doing things, the adjacency list structure provides improved running times for the following methods:
- Method incidentEdges(v) takes time proportional to the number of edges incident on v, that is, O(deg(v)) time.
- Method areAdjacent(u, v) can be performed by inspecting either the incidence collection of u or that of v. By choosing the smaller of the two, we get O(min(deg(u), deg(v))) running time.
- Method removeVertex(v) takes O(deg(v)) time.
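The following is a small sketch (again illustrative, not the book's implementation; names such as AdjListGraph are assumptions) of the extra bookkeeping the adjacency list structure adds: each vertex keeps its own incidence collection, so incidentEdges(v) simply returns that collection instead of scanning every edge.

import java.util.ArrayList;
import java.util.List;

// A minimal adjacency list graph sketch: each vertex stores its incidence
// collection I(v), giving O(deg(v))-time incidentEdges.
public class AdjListGraph<V, E> {
  public static class Vertex<V> {
    V element;
    final List<Edge<?, V>> incidence = new ArrayList<>();  // I(v)
    Vertex(V element) { this.element = element; }
  }
  public static class Edge<E, V> {
    E element;
    Vertex<V> u, v;
    Edge(Vertex<V> u, Vertex<V> v, E element) { this.u = u; this.v = v; this.element = element; }
  }

  private final List<Vertex<V>> vertices = new ArrayList<>();
  private final List<Edge<E, V>> edges = new ArrayList<>();

  public Vertex<V> insertVertex(V x) {
    Vertex<V> v = new Vertex<>(x);
    vertices.add(v);
    return v;
  }

  public Edge<E, V> insertEdge(Vertex<V> u, Vertex<V> v, E x) {
    Edge<E, V> e = new Edge<>(u, v, x);
    edges.add(e);
    u.incidence.add(e);   // register e in I(u) ...
    v.incidence.add(e);   // ... and in I(v)
    return e;
  }

  // O(deg(v)): just hand back v's own incidence collection.
  public List<Edge<?, V>> incidentEdges(Vertex<V> v) {
    return v.incidence;
  }

  // O(min(deg(v), deg(w))): scan the smaller incidence collection.
  public boolean areAdjacent(Vertex<V> v, Vertex<V> w) {
    Vertex<V> smaller = (v.incidence.size() <= w.incidence.size()) ? v : w;
    Vertex<V> other = (smaller == v) ? w : v;
    for (Edge<?, V> e : smaller.incidence)
      if (e.u == other || e.v == other) return true;
    return false;
  }
}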
The Adjacency Matrix Structure

Like the adjacency list structure, the adjacency matrix structure of a graph also extends the edge list structure with an additional component. In this case, we augment the edge list with a matrix (a two-dimensional array) A that allows us to determine adjacencies between pairs of vertices in constant time. In the adjacency matrix representation, we think of the vertices as being the integers in the set {0, 1, …, n − 1} and the edges as being pairs of such integers. This allows us to store references to edges in the cells of a two-dimensional array. Specifically, the adjacency matrix representation extends the edge list structure as follows (see Figure …):
- A vertex object stores a distinct integer in the range 0, 1, …, n − 1, called the index of the vertex.
- We keep a two-dimensional array A such that the cell A[i, j] holds a reference to the edge (v, w), if it exists, where v is the vertex with index i and w is the vertex with index j. If there is no such edge, then A[i, j] = null.

Figure …: (a) A graph G without parallel edges; (b) schematic representation of the simplified adjacency matrix structure for G.

For graphs with parallel edges, the adjacency matrix representation must be extended so that, instead of having A[i, j] store a pointer to an associated edge (v, w), it must store a pointer to an incidence collection I(v, w), which stores all the edges from v to w. Since most of the graphs we consider are simple, we will not consider this complication here.

The (simple) adjacency matrix A allows us to perform method areAdjacent(v, w) in O(1) time. We achieve this running time by accessing vertices v and w to determine their respective indices i and j, and then testing if A[i, j] is null or not. This performance achievement is traded off by an increase in space usage, however, which is now O(n²), and in the running time of other methods. For example, method incidentEdges(v) now requires that we examine an entire row or column of array A and thus runs in O(n) time. Moreover, any vertex insertions or deletions now require creating a whole new array A, of larger or smaller size, respectively, which takes O(n²) time.

Table … summarizes the performance of the adjacency matrix structure implementation of a graph. From this table, we observe that the adjacency list structure is superior to the adjacency matrix in space, and is superior in time for all methods except for the areAdjacent method.

Table …: Running times for a graph implemented with an adjacency matrix.

  Operation                               Time
  vertices                                O(n)
  edges                                   O(m)
  endVertices, opposite, areAdjacent      O(1)
  incidentEdges(v)                        O(n + deg(v))
  replace, insertEdge, removeEdge         O(1)
  insertVertex, removeVertex              O(n²)
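As a rough sketch (again illustrative, not the book's code; AdjMatrixGraph is an assumed name), the core of the adjacency matrix structure is just a two-dimensional array of edge references indexed by vertex indices, which is what makes areAdjacent a single array lookup.

// A minimal adjacency matrix sketch for a simple undirected graph with a
// fixed number of vertices n: cell a[i][j] holds the edge between vertices
// i and j, or null if there is none.
public class AdjMatrixGraph<E> {
  public static class Edge<E> {
    final int i, j;     // indices of the two end vertices
    final E element;
    Edge(int i, int j, E element) { this.i = i; this.j = j; this.element = element; }
  }

  private final Edge<E>[][] a;

  @SuppressWarnings("unchecked")
  public AdjMatrixGraph(int n) {
    a = (Edge<E>[][]) new Edge[n][n];   // O(n^2) space
  }

  public Edge<E> insertEdge(int i, int j, E x) {   // O(1)
    Edge<E> e = new Edge<>(i, j, x);
    a[i][j] = e;
    a[j][i] = e;      // undirected: mirror the entry
    return e;
  }

  public boolean areAdjacent(int i, int j) {       // O(1): one cell lookup
    return a[i][j] != null;
  }

  // O(n): scan row i of the matrix to collect incident edges.
  public java.util.List<Edge<E>> incidentEdges(int i) {
    java.util.List<Edge<E>> result = new java.util.ArrayList<>();
    for (int j = 0; j < a.length; j++)
      if (a[i][j] != null) result.add(a[i][j]);
    return result;
  }
}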
Historically, Boolean adjacency matrices were the first representations used for graphs (so that A[i, j] = true if and only if (i, j) is an edge). We should not find this fact surprising, however, for the adjacency matrix has a natural appeal as a mathematical structure (for example, an undirected graph has a symmetric adjacency matrix). The adjacency list structure came later, with its natural appeal in computing due to its faster methods for most algorithms (many algorithms do not use method areAdjacent) and its space efficiency. Most of the graph algorithms we examine will run efficiently when acting upon a graph stored using the adjacency list representation. In some cases, however, a trade-off occurs, where graphs with few edges are most efficiently processed with an adjacency list structure and graphs with many edges are most efficiently processed with an adjacency matrix structure.

Graph Traversals

Greek mythology tells of an elaborate labyrinth that was built to house the monstrous Minotaur, which was part bull and part man. This labyrinth was so complex that neither beast nor human could escape it. No human, that is, until the Greek hero, Theseus, with the help of the king's daughter, Ariadne, decided to implement a graph traversal algorithm. Theseus fastened a ball of thread to the door of the labyrinth and unwound it as he traversed the twisting passages in search of the monster. Theseus obviously knew about good algorithm design, for, after finding and defeating the beast, Theseus easily followed the string back out of the labyrinth to the loving arms of Ariadne. Formally, a traversal is a systematic procedure for exploring a graph by examining all of its vertices and edges.

Depth-First Search

The first traversal algorithm we consider in this section is depth-first search (DFS) in an undirected graph. Depth-first search is useful for testing a number of properties of graphs, including whether there is a path from one vertex to another and whether or not a graph is connected.

Depth-first search in an undirected graph G is analogous to wandering in a labyrinth with a string and a can of paint without getting lost. We begin at a specific starting vertex s in G, which we initialize by fixing one end of our string to s and painting s as "visited." The vertex s is now our "current" vertex--call our current vertex u. We then traverse G by considering an (arbitrary) edge (u, v) incident to the current vertex u. If the edge (u, v) leads us to an already visited (that is, painted) vertex v, we immediately return to vertex u. If, on the other hand, (u, v) leads to an unvisited vertex v, then we unroll our string, and go to v. We then paint v as "visited," and make it the current vertex, repeating the computation above. Eventually, we will get to a "dead end," that is, a current vertex v such that all the edges incident on v lead to vertices already visited. Thus, taking any edge incident on v will cause us to return to v. To get out of this impasse, we roll our string back up, backtracking along the edge that brought us to v, going back to a previously visited vertex u. We then make u our current vertex and repeat the computation above for any edges incident upon u that we have not looked at before. If all of u's incident edges lead to visited vertices, then we again roll up our string and backtrack to the vertex we came from to get to u.
We continue to backtrack along the path that we have traced so far until we find a vertex that has yet unexplored edges, take one such edge, and continue the traversal. The process terminates when our backtracking leads us back to the start vertex s, and there are no more unexplored edges incident on s. This simple process traverses all the edges of G (see Figure …).

Figure …: Example of depth-first search traversal on a graph starting at vertex A. Discovery edges are shown with solid lines and back edges are shown with dashed lines: (a) input graph; (b) path of discovery edges traced from A until back edge (B, A) is hit; (c) reaching F, which is a dead end; (d) after backtracking to C, resuming with edge (C, G), and hitting another dead end, J; (e) after backtracking to …; (f) after backtracking to ….

We can visualize a DFS traversal by orienting the edges along the direction in which they are explored during the traversal, distinguishing the edges used to discover new vertices, called discovery edges, or tree edges, from those that lead to already visited vertices, called back edges (see Figure …(f)). In the analogy above, discovery edges are the edges where we unroll our string when we traverse them, and back edges are the edges where we immediately return without unrolling any string. As we will see, the discovery edges form a spanning tree of the connected component of the starting vertex s. We call the edges not in this tree "back edges" because, assuming that the tree is rooted at the start vertex, each such edge leads back from a vertex in this tree to one of its ancestors in the tree.

The pseudo-code for a DFS traversal starting at a vertex v follows our analogy with string and paint. We use recursion to implement the string analogy, and we assume that we have a mechanism (the paint analogy) to determine if a vertex or edge has been explored or not, and to label the edges as discovery edges or back edges. This mechanism will require additional space and may affect the running time of the algorithm. A pseudo-code description of the recursive DFS algorithm is given in Code Fragment ….

Code Fragment …: The DFS algorithm.
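The original pseudo-code fragment is not reproduced in this excerpt; the following is a minimal recursive sketch of the same idea in Java. It uses plain integer neighbor lists rather than the book's decorable positions, and a boolean array as the "paint" for marking visited vertices; all class and method names here are illustrative.

import java.util.ArrayList;
import java.util.List;

// A minimal sketch of recursive DFS on an undirected graph: vertices are
// 0..n-1, adj.get(u) lists the neighbors of u, and we record the tree
// (discovery) edges we used.
public class DFSSketch {
  private final List<List<Integer>> adj;      // adjacency lists
  private final boolean[] visited;            // the "paint"
  private final List<int[]> discoveryEdges = new ArrayList<>();

  public DFSSketch(List<List<Integer>> adj) {
    this.adj = adj;
    this.visited = new boolean[adj.size()];
  }

  public void dfs(int u) {
    visited[u] = true;                         // paint u as visited
    for (int v : adj.get(u)) {
      if (!visited[v]) {
        discoveryEdges.add(new int[]{u, v});   // (u,v) is a discovery edge
        dfs(v);                                // recurse: "unroll the string"
      }
      // Otherwise (u,v) leads to an already visited vertex (a back edge,
      // or the edge we arrived on), so we simply stay at u.
    }
  }

  public List<int[]> discoveryEdges() { return discoveryEdges; }

  public static void main(String[] args) {
    // Small example: a triangle 0-1-2 plus a pendant vertex 3 attached to 0.
    List<List<Integer>> g = List.of(
        List.of(1, 2, 3), List.of(0, 2), List.of(0, 1), List.of(0));
    DFSSketch d = new DFSSketch(g);
    d.dfs(0);
    System.out.println("discovery edges: " + d.discoveryEdges().size()); // 3
  }
}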
There are a number of observations that we can make about the depth-first search algorithm, many of which derive from the way the DFS algorithm partitions the edges of the undirected graph G into two groups, the discovery edges and the back edges. For example, since back edges always connect a vertex v to a previously visited vertex u, each back edge implies a cycle in G, consisting of the discovery edges from u to v plus the back edge (u, v).

Proposition: Let G be an undirected graph on which a DFS traversal starting at a vertex s has been performed. Then the traversal visits all the vertices in the connected component of s, and the discovery edges form a spanning tree of the connected component of s.

Justification: Suppose there is at least one vertex v in s's connected component not visited, and let w be the first unvisited vertex on some path from s to v (we may have v = w). Since w is the first unvisited vertex on this path, it has a neighbor u that was visited. But when we visited u, we must have considered the edge (u, w); hence, it cannot be correct that w is unvisited. Therefore, there are no unvisited vertices in s's connected component. Since we only mark edges when we go to unvisited vertices, we will never form a cycle with discovery edges; that is, discovery edges form a tree. Moreover, this is a spanning tree because, as we have just seen, the depth-first search visits each vertex in the connected component of s.

In terms of its running time, depth-first search is an efficient method for traversing a graph. Note that DFS is called exactly once on each vertex, and that every edge is examined exactly twice, once from each of its end vertices. Thus, if ns vertices and ms edges are in the connected component of vertex s, a DFS starting at s runs in O(ns + ms) time, provided the following conditions are satisfied:
- The graph is represented by a data structure such that creating and iterating through the incidentEdges(v) iterable collection takes O(degree(v)) time, and the opposite(v, e) method takes O(1) time. The adjacency list structure is one such structure, but the adjacency matrix structure is not.
- We have a way to "mark" a vertex or edge as explored, and to test if a vertex or edge has been explored in O(1) time. We discuss ways of implementing DFS to achieve this goal in the next section.

Given the assumptions above, we can solve a number of interesting problems.

Proposition: Let G be a graph with n vertices and m edges represented with an adjacency list. A DFS traversal of G can be performed in O(n + m) time, and can be used to solve the following problems in O(n + m) time:
- Testing whether G is connected.
- Computing a spanning tree of G, if G is connected.
- Computing the connected components of G.
- Computing a path between two given vertices of G, if it exists.
- Computing a cycle in G, or reporting that G has no cycles.
The justification of the proposition above is based on algorithms that use slightly modified versions of the DFS algorithm as subroutines.

Implementing Depth-First Search

As we have mentioned above, the data structure we use to represent a graph impacts the performance of the DFS algorithm. For example, an adjacency list can be used to yield a running time of O(n + m) for traversing a graph with n vertices and m edges. Using an adjacency matrix, on the other hand, would result in a running time of O(n²), since each of the n calls to the incidentEdges method would take O(n) time. If the graph is dense, that is, it has close to O(n²) edges, then the difference between these two choices is minor, as they both would run in O(n²) time. But if the graph is sparse, that is, it has close to O(n) edges, then the adjacency matrix approach would be much slower than the adjacency list approach.

Another important implementation detail deals with the way vertices and edges are represented. In particular, we need to have a way of marking vertices and edges as visited or not. There are two simple solutions, but each has drawbacks:
- We can build our vertex and edge objects to contain an explored field, which can be used by the DFS algorithm for marking. This approach is quite simple, and supports constant-time marking and unmarking, but it assumes that we are designing our graph with DFS in mind, which will not always be valid. Furthermore, this approach needlessly restricts DFS to graphs with vertices having an explored field. Thus, if we want a generic DFS algorithm that can take any graph as input, this approach has limitations.
- We can use an auxiliary hash table to store all the explored vertices and edges during the DFS algorithm. This scheme is general, in that it does not require any special fields in the positions of the graph. But this approach does not achieve worst-case constant time for marking and unmarking of vertices and edges. Instead, such a hash table only supports the mark (insert) and test (find) operations in constant expected time (see Section …).

Fortunately, there is a middle ground between these two extremes.

The Decorator Pattern

Marking the explored vertices in a DFS traversal is an example of the decorator software engineering design pattern. This pattern is used to add decorations (also called attributes) to existing objects. Each decoration is identified by a key identifying this decoration and by a value associated with the key. The use of decorations is motivated by the need of some algorithms and data structures to add extra variables, or temporary scratch data, to objects that do not normally have such variables.
Hence, a decoration is a key-value pair that can be dynamically attached to an object. In our DFS example, we would like to have "decorable" vertices and edges with an explored decoration and a Boolean value.

Making Graph Vertices Decorable

We can realize the decorator pattern for any position by allowing it to be decorated. This allows us to add labels to vertices and edges, for example, without requiring that we know in advance the kinds of labels that we will need. We can simply require that our vertices and edges implement a decorable position ADT, which inherits from both the position ADT and the map ADT (Section …). Namely, the methods of the decorable position ADT are the union of the methods of the position ADT and of the map ADT, that is, in addition to the size() and isEmpty() methods, a decorable position would support the following:

element(): Return the element stored at this position.
put(k, x): Map the decoration value x to the key k, returning the old value for k, or null if this is a new value for k.
get(k): Get the decoration value x assigned to k, or null if there is no mapping for k.
remove(k): Remove the decoration mapping for k, returning the old value, or null if there is none.
entries(): Return all the key-decoration pairs for this position.

The map methods of a decorable position p provide a simple mechanism for accessing and setting the decorations of p. For example, we use p.get(k) to obtain the value of the decoration with key k and we use p.put(k, x) to set the value of the decoration with key k to x. Moreover, the key k can be any object, including a special explored object our DFS algorithm might create. We show a Java interface defining such an ADT in Code Fragment …. We can implement a decorable position with an object that stores an element and a map. In principle, the running times of the methods of a decorable position depend on the implementation of the embedded map. However, most algorithms use a small constant number of decorations; thus, the decorable position methods will run in O(1) worst-case time no matter how we implement the embedded map.

Code Fragment …: An interface defining an ADT for decorable positions. Note that we don't use generic parameterized types for the inherited map methods, since we don't know in advance the types of the decorations and we want to allow for objects of many different types as decorations.
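The book's interface itself is not reproduced in this excerpt; the following is a small sketch of what such a decorable position might look like, under the assumption that raw Object keys and values are acceptable for decorations. The class SimpleDecorablePosition shown here is purely illustrative: it pairs an element with an embedded java.util.HashMap of decorations.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of a decorable position: a position that also acts as a small map
// of decorations (key-value pairs such as an "explored" flag).
interface DecorablePosition<T> {
  T element();                       // the element stored at this position
  Object put(Object k, Object x);    // attach decoration x under key k
  Object get(Object k);              // look up the decoration for key k
  Object remove(Object k);           // detach the decoration for key k
  Set<Map.Entry<Object, Object>> entries();  // all key-decoration pairs
}

// One possible implementation: store the element plus an embedded HashMap.
class SimpleDecorablePosition<T> implements DecorablePosition<T> {
  private final T element;
  private final Map<Object, Object> decorations = new HashMap<>();

  SimpleDecorablePosition(T element) { this.element = element; }

  public T element() { return element; }
  public Object put(Object k, Object x) { return decorations.put(k, x); }
  public Object get(Object k) { return decorations.get(k); }
  public Object remove(Object k) { return decorations.remove(k); }
  public Set<Map.Entry<Object, Object>> entries() { return decorations.entrySet(); }
}

// Typical DFS-style usage: mark a vertex position as explored.
class DecorationDemo {
  static final Object EXPLORED = new Object();   // a private marker key
  public static void main(String[] args) {
    DecorablePosition<String> v = new SimpleDecorablePosition<>("JFK");
    v.put(EXPLORED, Boolean.TRUE);
    System.out.println(v.element() + " explored? " + v.get(EXPLORED));
  }
}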
Using decorable positions, the complete DFS traversal algorithm can be described in more detail, as shown in Code Fragment ….

Code Fragment …: DFS on a graph with decorable edges and vertices.

A Generic DFS Implementation in Java

In Code Fragments … through …, we show a Java implementation of a generic depth-first search traversal using a general class, DFS, which has a method, execute, which takes as input the graph, a start vertex, and any auxiliary information needed, and then initializes the graph and calls the recursive method, dfsTraversal, which activates the DFS traversal. Our implementation assumes that the vertices and edges are decorable positions, and it uses decorations to tell if vertices and edges have been visited or not. The DFS class contains the following methods to allow it to do special tasks during a DFS traversal:

setup(): Called prior to doing the DFS traversal call to dfsTraversal().
initResult(): Called at the beginning of the execution of dfsTraversal().
startVisit(v): Called at the start of the visit of v.
traverseDiscovery(e, v): Called when a discovery edge e out of v is traversed.
traverseBack(e, v): Called when a back edge e out of v is traversed.
isDone(): Called to determine whether to end the traversal early.
finishVisit(v): Called when we are finished exploring from v.
result(): Called to return the output of dfsTraversal.
finalResult(r): Called to return the output of the execute method, given the output, r, from dfsTraversal.

Code Fragment …: Instance variables and support methods of class DFS, which performs a generic DFS traversal. The methods visit, unvisit, and isVisited are implemented using decorable positions that are parameterized using the wildcard symbol, "?", which can match either the V or the E parameter used for decorable positions. (Continues in Code Fragment ….)
Code Fragment …: The method dfsTraversal of class DFS, which performs a generic DFS traversal of a graph. (Continued from Code Fragment ….)

The DFS class is based on the template method pattern (see Section …), which describes a generic computation mechanism that can be specialized by redefining certain steps. The way we identify vertices and edges that have already been visited during the traversal is in calls to methods isVisited, visit, and unvisit. For us to do anything interesting, we must extend DFS and redefine some of its auxiliary methods. This approach conforms to the template method pattern. In Code Fragments … through …, we illustrate some applications of DFS traversal.

Class ConnectivityDFS (Code Fragment …) tests whether the graph is connected. It counts the vertices reachable by a DFS traversal starting at a vertex and compares this number with the total number of vertices of the graph.

Code Fragment …: Specialization of class DFS to test if a graph is connected.

Class ComponentsDFS (Code Fragment …) finds the connected components of a graph. It labels each vertex with its connected component number, using the decorator pattern, and returns the number of connected components found.

Code Fragment …: Specialization of DFS to compute connected components.
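The ComponentsDFS code itself is not included in this excerpt. As a rough, self-contained illustration of the same idea (labeling each vertex with a component number via DFS), here is a sketch that works on plain integer adjacency lists rather than the book's DFS template class; all names are illustrative.

import java.util.*;

// Label every vertex of an undirected graph with a connected-component
// number by running DFS from each not-yet-labeled vertex.
public class ComponentsSketch {
  // adj.get(u) lists the neighbors of vertex u; returns component labels.
  public static int[] components(List<List<Integer>> adj) {
    int n = adj.size();
    int[] label = new int[n];
    Arrays.fill(label, -1);          // -1 means "not visited yet"
    int count = 0;
    for (int s = 0; s < n; s++) {
      if (label[s] == -1) {
        dfs(adj, s, count, label);   // everything reached gets this label
        count++;
      }
    }
    return label;
  }

  private static void dfs(List<List<Integer>> adj, int u, int comp, int[] label) {
    label[u] = comp;
    for (int v : adj.get(u))
      if (label[v] == -1) dfs(adj, v, comp, label);
  }

  public static void main(String[] args) {
    // Two components: {0,1,2} and {3,4}.
    List<List<Integer>> g = List.of(
        List.of(1), List.of(0, 2), List.of(1), List.of(4), List.of(3));
    System.out.println(Arrays.toString(components(g)));  // [0, 0, 0, 1, 1]
  }
}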
Class FindPathDFS (Code Fragment …) finds a path between a pair of given start and target vertices. It performs a depth-first search traversal beginning at the start vertex. We maintain the path of discovery edges from the start vertex to the current vertex. When we encounter an unexplored vertex, we add it to the end of the path, and when we finish processing a vertex, we remove it from the path. The traversal is terminated when the target vertex is encountered, and the path is returned as an iterable collection of vertices and edges (both kinds of positions in a graph). Note that the path found by this class consists of discovery edges.

Code Fragment …: Specialization of class DFS to find a path between start and target vertices.
Class FindCycleDFS (Code Fragment …) finds a cycle in the connected component of a given vertex v, by performing a depth-first search traversal from v that terminates when a back edge is found. It returns a (possibly empty) iterable collection of the vertices and edges in the cycle formed by the found back edge.

Code Fragment …: Specialization of class DFS to find a cycle in the connected component of the start vertex.
Breadth-First Search

In this section, we consider the breadth-first search (BFS) traversal algorithm. Like DFS, BFS traverses a connected component of a graph, and in so doing defines a useful spanning tree. BFS is less "adventurous" than DFS, however. Instead of wandering the graph, BFS proceeds in rounds and subdivides the vertices into levels. BFS can also be thought of in terms of the string-and-paint analogy, but with the string unrolled in a more conservative manner.

BFS starts at a vertex s, which is at level 0 and defines the "anchor" for our string. In the first round, we let out the string the length of one edge and we visit all the vertices we can reach without unrolling the string any farther. In this case, we visit, and paint as "visited," the vertices adjacent to the start vertex s--these vertices are placed into level 1. In the second round, we unroll the string the length of two edges and we visit all the new vertices we can reach without unrolling our string any farther. These new vertices, which are adjacent to level 1 vertices and not previously assigned to a level, are placed into level 2, and so on. The BFS traversal terminates when every vertex has been visited.

Pseudo-code for BFS starting at a vertex s is shown in Code Fragment …. We use auxiliary space to label edges, mark visited vertices, and store collections associated with levels. That is, the collections L0, L1, L2, and so on, store the vertices that are in level 0, level 1, level 2, and so on. These collections could, for example, be implemented as queues. They also allow BFS to be nonrecursive.

Code Fragment …: The BFS algorithm.
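The pseudo-code fragment is not reproduced in this excerpt; below is a minimal level-by-level BFS sketch in Java on plain integer adjacency lists, an illustration of the idea rather than the book's code, using one list per level.

import java.util.*;

// Level-by-level BFS sketch: returns the list of levels L0, L1, L2, ...
// starting from vertex s, where adj.get(u) lists the neighbors of u.
public class BFSSketch {
  public static List<List<Integer>> bfsLevels(List<List<Integer>> adj, int s) {
    boolean[] visited = new boolean[adj.size()];
    List<List<Integer>> levels = new ArrayList<>();
    List<Integer> current = new ArrayList<>(List.of(s));  // L0 = {s}
    visited[s] = true;
    while (!current.isEmpty()) {
      levels.add(current);
      List<Integer> next = new ArrayList<>();             // the next level
      for (int u : current) {
        for (int v : adj.get(u)) {
          if (!visited[v]) {        // (u,v) is a discovery (tree) edge
            visited[v] = true;
            next.add(v);
          }                          // otherwise (u,v) is a cross edge
        }
      }
      current = next;
    }
    return levels;
  }

  public static void main(String[] args) {
    // A triangle 0-1-2: the levels starting from 0 are {0} and {1, 2}.
    List<List<Integer>> g = List.of(List.of(1, 2), List.of(0, 2), List.of(0, 1));
    System.out.println(bfsLevels(g, 0));   // [[0], [1, 2]]
  }
}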
We illustrate a BFS traversal in Figure ….

Figure …: Example of breadth-first search traversal, where the edges incident on a vertex are explored by the alphabetical order of the adjacent vertices. The discovery edges are shown with solid lines and the cross edges are shown with dashed lines: (a) graph before the traversal; (b) discovery of level 1; (c) discovery of level 2; (d) discovery of level 3; (e) discovery of level 4; (f) discovery of level 5.

One of the nice properties of the BFS approach is that, in performing the BFS traversal, we can label each vertex by the length of a shortest path (in terms of the number of edges) from the start vertex s. In particular, if vertex v is placed into level i by a BFS starting at vertex s, then the length of a shortest path from s to v is i.

As with DFS, we can visualize the BFS traversal by orienting the edges along the direction in which they are explored during the traversal, and by distinguishing the edges used to discover new vertices, called discovery edges, from those that lead to already visited vertices, called cross edges (see Figure …). As with the DFS, the discovery edges form a spanning tree, which in this case we call the BFS tree. We do not call the nontree edges "back edges" in this case, however, for none of them connects a vertex to one of its ancestors. Every nontree edge connects a vertex to another vertex that is neither its ancestor nor its descendent.

The BFS traversal algorithm has a number of interesting properties, some of which we explore in the proposition that follows.

Proposition: Let G be an undirected graph on which a BFS traversal starting at vertex s has been performed. Then:
- The traversal visits all vertices in the connected component of s.
- The discovery edges form a spanning tree T, which we call the BFS tree, of the connected component of s.
- For each vertex v at level i, the path of the BFS tree T between s and v has i edges, and any other path of G between s and v has at least i edges.
- If (u, v) is an edge that is not in the BFS tree, then the level numbers of u and v differ by at most 1.

We leave the justification of this proposition as an exercise. The analysis of the running time of BFS is similar to the one of DFS, which implies the following.

Proposition: Let G be a graph with n vertices and m edges represented with the adjacency list structure. A BFS traversal of G takes O(n + m) time. Also, there exist O(n + m)-time algorithms based on BFS for the following problems:
- Testing whether G is connected.
- Computing a spanning tree of G, if G is connected.
- Computing the connected components of G.
- Given a start vertex s of G, computing, for every vertex v of G, a path with the minimum number of edges between s and v, or reporting that no such path exists.
- Computing a cycle in G, or reporting that G has no cycles.

Directed Graphs

In this section, we consider issues that are specific to directed graphs. Recall that a directed graph (digraph) is a graph whose edges are all directed.

Methods Dealing with Directed Edges

When we allow for some or all the edges in a graph to be directed, we should add the following two methods to the graph ADT in order to deal with edge directions:

isDirected(e): Test whether edge e is directed.
insertDirectedEdge(v, w, o): Insert and return a new directed edge with origin v and destination w and storing element o.

Also, if an edge e is directed, the method endVertices(e) should return an array A such that A[0] is the origin of e and A[1] is the destination of e. The running time for the method isDirected(e) should be O(1), and the running time of the method insertDirectedEdge(v, w, o) should match that of undirected edge insertion.

Reachability

One of the most fundamental issues with directed graphs is the notion of reachability, which deals with determining where we can get to in a directed graph. A traversal in a directed graph always goes along directed paths, that is, paths where all the edges are traversed according to their respective directions. Given vertices u and v of a digraph G, we say that u reaches v (and v is reachable from u) if G has a directed path from u to v. We also say that a vertex v reaches an edge (w, z) if v reaches the origin vertex w of the edge.

A digraph G is strongly connected if for any two vertices u and v of G, u reaches v and v reaches u. A directed cycle of G is a cycle where all the edges are traversed according to their respective directions. (Note that G may have a cycle consisting of two edges with opposite direction between the same pair of vertices.)
A digraph G is acyclic if it has no directed cycles (see Figure … for some examples). The transitive closure of a digraph G is the digraph G* such that the vertices of G* are the same as the vertices of G, and G* has an edge (u, v) whenever G has a directed path from u to v. That is, we define G* by starting with the digraph G and adding in an extra edge (u, v) for each u and v such that v is reachable from u (and there isn't already an edge (u, v) in G).

Figure …: Examples of reachability in a digraph: (a) a directed path from BOS to LAX is drawn in blue; (b) a directed cycle (ORD, MIA, DFW, LAX, ORD) is shown in blue; its vertices induce a strongly connected subgraph; (c) the subgraph of the vertices and edges reachable from ORD is shown in blue; (d) removing the dashed blue edges gives an acyclic digraph.
Interesting problems that deal with reachability in a digraph G include the following:
- Given vertices u and v, determine whether u reaches v.
- Find all the vertices of G that are reachable from a given vertex s.
- Determine whether G is strongly connected.
- Determine whether G is acyclic.
- Compute the transitive closure G* of G.

In the remainder of this section, we explore some efficient algorithms for solving these problems.

Traversing a Digraph

As with undirected graphs, we can explore a digraph in a systematic way with methods akin to the depth-first search (DFS) and breadth-first search (BFS) algorithms defined previously for undirected graphs (Sections … and …). Such explorations can be used, for example, to answer reachability questions. The directed depth-first search and breadth-first search methods we develop in this section for performing such explorations are very similar to their undirected counterparts. In fact, the only real difference is that the directed depth-first search and breadth-first search methods only traverse edges according to their respective directions.

The directed version of DFS starting at a vertex v can be described by the recursive algorithm in Code Fragment … (see Figure …).

Code Fragment …: The directed DFS algorithm.

Figure …: An example of DFS in a digraph: (a) intermediate step, where, for the first time, an already visited vertex (DFW) is reached; (b) the completed DFS. The tree edges are shown with solid blue lines, the back edges are shown with dashed blue lines, and the forward and cross edges are shown with dashed black lines. The order in which the vertices are visited is indicated by a label next to each vertex. The edge (ORD, DFW) is a back edge, but (DFW, ORD) is a forward edge. Edge (BOS, SFO) is a forward edge, and (SFO, LAX) is a cross edge.
A DFS on a digraph G partitions the edges of G reachable from the starting vertex into tree edges or discovery edges, which lead us to discover a new vertex, and nontree edges, which take us to a previously visited vertex. The tree edges form a tree rooted at the starting vertex, called the depth-first search tree, and there are three kinds of nontree edges:
- back edges, which connect a vertex to an ancestor in the DFS tree
- forward edges, which connect a vertex to a descendent in the DFS tree
- cross edges, which connect a vertex to a vertex that is neither its ancestor nor its descendent.

Refer back to Figure … to see an example of each type of nontree edge.

Proposition: Let G be a digraph. Depth-first search on G starting at a vertex s visits all the vertices of G that are reachable from s. Also, the DFS tree contains directed paths from s to every vertex reachable from s.

Justification: Let Vs be the subset of vertices of G visited by DFS starting at vertex s. We want to show that Vs contains s and every vertex reachable from s belongs to Vs. Suppose now, for the sake of contradiction, that there is a vertex w reachable from s that is not in Vs. Consider a directed path from s to w, and let (u, v) be the first edge on such a path taking us out of Vs, that is, u is in Vs but v is not in Vs. When DFS reaches u, it explores all the outgoing edges of u, and thus must reach also vertex v via edge (u, v). Hence, v should be in Vs, and we have obtained a contradiction. Therefore, Vs must contain every vertex reachable from s.

Analyzing the running time of the directed DFS method is analogous to that for its undirected counterpart. In particular, a recursive call is made for each vertex exactly once.
Hence, if ns vertices and ms edges are reachable from vertex s, a directed DFS starting at s runs in O(ns + ms) time, provided the digraph is represented with a data structure that supports constant-time vertex and edge methods. The adjacency list structure satisfies this requirement, for example.

By the proposition above, we can use DFS to find all the vertices reachable from a given vertex, and hence to find the transitive closure of G. That is, we can perform a DFS, starting from each vertex v of G, to see which vertices w are reachable from v, adding an edge (v, w) to the transitive closure for each such w. Likewise, by repeatedly traversing digraph G with DFS, starting in turn at each vertex, we can easily test whether G is strongly connected. Namely, G is strongly connected if each DFS visits all the vertices of G. Thus, we may immediately derive the proposition that follows.

Proposition: Let G be a digraph with n vertices and m edges. The following problems can be solved by an algorithm that traverses G n times using DFS, runs in O(n(n + m)) time, and uses O(n) auxiliary space:
- Computing, for each vertex v of G, the subgraph reachable from v.
- Testing whether G is strongly connected.
- Computing the transitive closure G* of G.

Testing for Strong Connectivity

Actually, we can determine if a directed graph G is strongly connected much faster than this, just using two depth-first searches. We begin by performing a DFS of our directed graph G starting at an arbitrary vertex s. If there is any vertex of G that is not visited by this DFS, and is not reachable from s, then the graph is not strongly connected. So, if this first DFS visits each vertex of G, we reverse all the edges of G (using the reverse direction method) and perform another DFS starting at s in this "reverse" graph. If every vertex of G is visited by this second DFS, then the graph is strongly connected, for each of the vertices visited in this DFS can reach s. Since this algorithm makes just two DFS traversals of G, it runs in O(n + m) time.
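Here is a compact sketch of this two-pass test; it is illustrative rather than the book's code, works directly on integer adjacency lists, and builds the reversed digraph explicitly instead of using a reverse-direction edge method.

import java.util.*;

// Strong-connectivity test by two DFS passes: one on G from vertex 0,
// and one on the reverse of G from vertex 0.
public class StrongConnectivitySketch {
  public static boolean isStronglyConnected(List<List<Integer>> adj) {
    int n = adj.size();
    if (n == 0) return true;
    if (!visitsAll(adj, 0)) return false;        // some vertex not reachable from 0
    // Build the reverse digraph: edge (u,v) becomes (v,u).
    List<List<Integer>> rev = new ArrayList<>();
    for (int i = 0; i < n; i++) rev.add(new ArrayList<>());
    for (int u = 0; u < n; u++)
      for (int v : adj.get(u)) rev.get(v).add(u);
    return visitsAll(rev, 0);                    // can every vertex reach 0?
  }

  // Iterative DFS from s; returns true if it visits every vertex.
  private static boolean visitsAll(List<List<Integer>> adj, int s) {
    boolean[] visited = new boolean[adj.size()];
    Deque<Integer> stack = new ArrayDeque<>();
    stack.push(s);
    visited[s] = true;
    int count = 1;
    while (!stack.isEmpty()) {
      int u = stack.pop();
      for (int v : adj.get(u)) {
        if (!visited[v]) { visited[v] = true; count++; stack.push(v); }
      }
    }
    return count == adj.size();
  }
}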
As with DFS, we can extend breadth-first search (BFS) to work for directed graphs. The algorithm still visits vertices level by level and partitions the set of edges into tree edges (or discovery edges), which together form a directed breadth-first search tree rooted at the start vertex, and nontree edges. Unlike the directed DFS method, however, the directed BFS method only leaves two kinds of nontree edges: back edges, which connect a vertex to one of its ancestors, and cross edges, which connect a vertex to another vertex that is neither its ancestor nor its descendent. There are no forward edges, which is a fact we explore in an exercise.

Transitive Closure

In this section, we explore an alternative technique for computing the transitive closure of a digraph. Let G be a digraph with n vertices and m edges. We compute the transitive closure of G in a series of rounds. We initialize G_0 = G, and we arbitrarily number the vertices of G as v_1, v_2, ..., v_n. We then begin the computation of the rounds, beginning with round 1. In a generic round k, we construct digraph G_k starting with G_k = G_(k-1) and adding to G_k the directed edge (v_i, v_j) if digraph G_(k-1) contains both the edges (v_i, v_k) and (v_k, v_j). In this way, we will enforce a simple rule embodied in the proposition that follows.

Proposition: For k = 1, ..., n, digraph G_k has an edge (v_i, v_j) if and only if digraph G has a directed path from v_i to v_j whose intermediate vertices (if any) are in the set {v_1, ..., v_k}. In particular, G_n is equal to G*, the transitive closure of G.

This proposition suggests a simple algorithm for computing the transitive closure of G that is based on the series of rounds described above. This algorithm is known as the Floyd-Warshall algorithm, and its pseudo-code is given in the code fragment below. From this pseudo-code, we can easily analyze the running time of the Floyd-Warshall algorithm, assuming that the data structure representing G supports the methods areAdjacent and insertDirectedEdge in O(1) time. The main loop is executed n times and the inner loop considers each of O(n^2) pairs of vertices, performing a constant-time computation for each one. Thus, the total running time of the Floyd-Warshall algorithm is O(n^3).

Code Fragment: Pseudo-code for the Floyd-Warshall algorithm. This algorithm computes the
transitive closure G* of G by incrementally computing a series of digraphs G_0, G_1, ..., G_n, where G_k is computed in round k. This description is actually an example of an algorithmic design pattern known as dynamic programming, which is discussed in more detail elsewhere in this book. From the description and analysis above we may immediately derive the following proposition.

Proposition: Let G be a digraph with n vertices, and let G be represented by a data structure that supports lookup and update of adjacency information in O(1) time. Then the Floyd-Warshall algorithm computes the transitive closure G* of G in O(n^3) time.

We illustrate an example run of the Floyd-Warshall algorithm in the figure described below.

Figure: Sequence of digraphs computed by the Floyd-Warshall algorithm: (a) the initial digraph G = G_0 and the numbering of its vertices; (b)-(f) the digraphs produced in the successive rounds. When a digraph in the sequence contains the edges (v_i, v_k) and (v_k, v_j) but not the edge (v_i, v_j), the drawing of the next digraph shows the newly added edge (v_i, v_j) with a thick blue line.
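Since the pseudo-code fragment itself is not reproduced above, here is a minimal Java sketch of the Floyd-Warshall transitive closure, under the assumption that the digraph is given as a boolean adjacency matrix rather than as the graph ADT used in the text.

// Floyd-Warshall transitive closure on a boolean adjacency matrix:
// adj[i][j] is true when the digraph has the edge (v_i, v_j).
public class TransitiveClosure {
    static boolean[][] floydWarshall(boolean[][] adj) {
        int n = adj.length;
        boolean[][] reach = new boolean[n][n];
        for (int i = 0; i < n; i++)
            reach[i] = adj[i].clone();                  // start from G_0 = G
        for (int k = 0; k < n; k++)                     // round k uses v_k as the intermediate vertex
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (reach[i][k] && reach[k][j])     // a path i -> k -> j exists
                        reach[i][j] = true;
        return reach;                                   // reach now describes the transitive closure
    }
}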
Performance of the Floyd-Warshall Algorithm

At first, the O(n^3) running time of the Floyd-Warshall algorithm might appear to be worse than that of
performing dfs of directed graph from each of its verticesbut this depends upon the representation of the graph if graph is represented using an adjacency matrixthen running the dfs method once on directed graph takes ( time (we explore the reason for this in exercise - thusrunning dfs times takes ( timewhich is no better than single execution of the floyd-warshall algorithmbut the floyd-warshall algorithm would be much simpler to implement neverthelessif the graph is represented using an adjacency list structurethen running the dfs algorithm times would take ( ( + )time to compute the transitive closure even soif the graph is densethat isif it has &( edgesthen this approach still runs in ( time and is more complicated than single instance of the floyd-warshall algorithm the only case where repeatedly calling the dfs method is better is when the graph is not dense and is represented using an adjacency list structure directed acyclic graphs directed graphs without directed cycles are encountered in many applications such digraph is often referred to as directed acyclic graphor dagfor short applications of such graphs include the followinginheritance between classes of java program prerequisites between courses of degree program scheduling constraints between the tasks of project example in order to manage large projectit is convenient to break it up into collection of smaller tasks the taskshoweverare rarely independentbecause scheduling constraints exist between them (for examplein house building projectthe task of ordering nails obviously precedes the task of nailing shingles to the roof deck clearlyscheduling constraints cannot have circularitiesbecause they would make the project impossible (for examplein order to get job you need to have work experiencebut in order to get work experience you need to have job the scheduling constraints impose restrictions on the order in which the tasks can be executed namelyif constraint says that task must be completed before task is startedthen must precede in the order of execution of the tasks thusif we model feasible set of tasks as vertices of directed graphand we place directed edge from tow whenever the task for must be executed before the task for wthen we define directed acyclic graph the example above motivates the following definition let be digraph with vertices topological ordering of is an ordering , of the vertices of such that for every edge (vivjof that isa topological ordering is an
figure note that digraph may have more than one topological ordering figure two topological orderings of the same acyclic digraph proposition has topological ordering if and only if it is acyclic justificationthe necessity (the "only ifpart of the statementis easy to demonstrate suppose is topologically ordered assumefor the sake of contradictionthat has cycle consisting of edges (vi vi )(vi vi )(vik vi because of the topological orderingwe must have ik which is clearly impossible thusmust be acyclic we now argue the sufficiency of the condition (the "ifpartsuppose is acyclic we will give an algorithmic description of how to build topological ordering for since is acyclicmust have vertex with no incoming edges (that iswith in-degree let be such vertex indeedif did not existthen in tracing directed path from an arbitrary start vertex we would eventually encounter previously visited vertexthus contradicting the acyclicity of if we remove from together with its outgoing edgesthe resulting digraph is still acyclic hencethe resulting digraph also has vertex with no incoming edgesand we let be such vertex by repeating this process until the digraph becomes
because of the construction aboveif (vi,vjis an edge of then vi must be deleted before vj can be deletedand thus < thusv vn is topological ordering proposition ' justification suggests an algorithm (code fragment )called topological sortingfor computing topological ordering of digraph code fragment pseudo-code for the topological sorting algorithm (we show an example application of this algorithm in figure proposition let be digraph with vertices andm edges the topological sorting algorithm runs in ( mtime using (nauxiliary spaceand either computes topological ordering of which indicates that or fails to number some verticeshas directed cycle
incounter variables can be done with simple traversal of the graphwhich takes ( mtime we use the decorator pattern to associate counter attributes with the vertices say that vertex is visited by the topological sorting algorithm when is removed from the stack vertex can be visited only when incounter ( which implies that all its predecessors (vertices with outgoing edges into uwere previously visited as consequenceany vertex that is on directed cycle will never be visitedand any other vertex will be visited exactly once the algorithm traverses all the outgoing edges of each visited vertex onceso its running time is proportional to the number of outgoing edges of the visited vertices thereforethe algorithm runs in ( mtime regarding the space usageobserve that the stack and the incounter variables attached to the vertices use (nspace as side effectthe topological sorting algorithm of code fragment also tests whether the input digraph is acyclic indeedif the algorithm terminates without ordering all the verticesthen the subgraph of the vertices that have not been ordered must contain directed cycle figure example of run of algorithm topologicalsort (code fragment )(ainitial configuration( -iafter each while-loop iteration the vertex labels show the vertex number and the current incounter value the edges traversed are shown with dashed blue arrows thick lines denote the vertex and edges examined in the current iteration
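The following is a minimal Java sketch of the topological sorting algorithm just analyzed, keeping an explicit in-degree counter for each vertex; the integer-indexed adjacency-list input is an assumption made so the example is self-contained.

import java.util.*;

// Topological sort by repeatedly removing vertices whose in-degree counter is zero.
public class TopologicalSort {
    // adj.get(v) lists the heads of edges leaving v. Returns a topological order,
    // or null if the digraph contains a directed cycle.
    static int[] topologicalSort(List<List<Integer>> adj) {
        int n = adj.size();
        int[] inCounter = new int[n];
        for (List<Integer> out : adj)
            for (int w : out) inCounter[w]++;
        Deque<Integer> stack = new ArrayDeque<>();
        for (int v = 0; v < n; v++)
            if (inCounter[v] == 0) stack.push(v);       // vertices with no incoming edges
        int[] order = new int[n];
        int next = 0;
        while (!stack.isEmpty()) {
            int u = stack.pop();
            order[next++] = u;                          // u is the next vertex in the ordering
            for (int w : adj.get(u))
                if (--inCounter[w] == 0) stack.push(w); // all of w's predecessors are now ordered
        }
        return next == n ? order : null;                // unordered vertices indicate a cycle
    }
}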
weighted graphs as we saw in section the breadth-first search strategy can be used to find shortest path from some starting vertex to every other vertex in connected graph this approach makes sense in cases where each edge is as good as any otherbut there are many situations where this approach is not appropriate for examplewe might be using graph to represent computer network (such as the internet)and we might be interested in finding the fastest way to route data packet between two
computers. In this case, it is probably not appropriate for all the edges to be equal to each other, for some connections in a computer network are typically much faster than others (for example, some edges might represent slow phone-line connections while others might represent high-speed, fiber-optic connections). Likewise, we might want to use a graph to represent the roads between cities, and we might be interested in finding the fastest way to travel cross-country. In this case, it is again probably not appropriate for all the edges to be equal to each other, for some intercity distances will likely be much larger than others. Thus, it is natural to consider graphs whose edges are not weighted equally.

A weighted graph is a graph that has a numeric (for example, integer) label w(e) associated with each edge e, called the weight of edge e. We show an example of a weighted graph in the figure described below.

Figure: A weighted graph whose vertices represent major airports and whose edge weights represent distances in miles. The minimum-weight path in this graph from JFK to LAX goes through ORD and DFW.

In the remaining sections of this chapter we study weighted graphs.

Shortest Paths

Let G be a weighted graph. The length (or weight) of a path P is the sum of the weights of the edges of P. That is, if P = ((v_0, v_1), (v_1, v_2), ..., (v_(k-1), v_k)), then the length of P, denoted w(P), is defined as

    w(P) = w((v_0, v_1)) + w((v_1, v_2)) + ... + w((v_(k-1), v_k)).

The distance from a vertex v to a vertex u in G, denoted d(v, u), is the length of a
minimum length path (also called shortest pathfrom to uif such path exists people often use the convention that (vuif there is no path at all from to in even if there is path from to in gthe distance from to may not be definedhoweverif there is cycle in whose total weight is negative for examplesuppose vertices in represent citiesand the weights of edges in represent how much money it costs to go from one city to another if someone were willing to actually pay us to go from say jfk to ordthen the "costof the edge (jfk,ordwould be negative if someone else were willing to pay us to go from ord to jfkthen there would be negative-weight cycle in and distances would no longer be defined that isanyone could now build path (with cyclesin from any city to another city that first goes to jfk and then cycles as many times as he or she likes from jfk to ord and backbefore going on to the existence of such paths would allow us to build arbitrarily low negative-cost paths (andin this casemake fortune in the processbut distances cannot be arbitrarily low negative numbers thusany time we use edge weights to represent distanceswe must be careful not to introduce any negative-weight cycles suppose we are given weighted graph gand we are asked to find shortest path from some vertex to each other vertex in gviewing the weights on the edges as distances in this sectionwe explore efficient ways of finding all such shortest pathsif they exist the first algorithm we discuss is for the simpleyet commoncase when all the edge weights in are nonnegative (that isw( > for each edge of )hencewe know in advance that there are no negative-weight cycles in recall that the special case of computing shortest path when all weights are equal to one was solved with the bfs traversal algorithm presented in section there is an interesting approach for solving this single-source problem based on the greedy method design pattern (section recall that in this pattern we solve the problem at hand by repeatedly selecting the best choice from among those available in each iteration this paradigm can often be used in situations where we are trying to optimize some cost function over collection of objects we can add objects to our collectionone at timealways picking the next one that optimizes the function from among those yet to be chosen dijkstra' algorithm the main idea in applying the greedy method pattern to the single-source shortestpath problem is to perform "weightedbreadth-first search starting at in particularwe can use the greedy method to develop an algorithm that iteratively grows "cloudof vertices out of vwith the vertices entering the cloud in order of
outside the cloud that is closest to the algorithm terminates when no more vertices are outside the cloudat which point we have shortest path from to every other vertex of this approach is simplebut nevertheless powerfulexample of the greedy method design pattern greedy method for finding shortest paths applying the greedy method to the single-sourceshortest-path problemresults in an algorithm known as dijkstra' algorithm when applied to other graph problemshoweverthe greedy method may not necessarily find the best solution (such as in the so-called traveling salesman problemin which we wish to find the shortest path that visits all the vertices in graph exactly onceneverthelessthere are number of situations in which the greedy method allows us to compute the best solution in this we discuss two such situationscomputing shortest paths and constructing minimum spanning tree in order to simplify the description of dijkstra' algorithmwe assumein the followingthat the input graph is undirected (that isall its edges are undirectedand simple (that isit has no self-loops and no parallel edgeshencewe denote the edges of as unordered vertex pairs ( ,zin dijkstra' algorithm for finding shortest pathsthe cost function we are trying to optimize in our application of the greedy method is also the function that we are trying to compute--the shortest path distance this may at first seem like circular reasoning until we realize that we can actually implement this approach by using "bootstrappingtrickconsisting of using an approximation to the distance function we are trying to computewhich in the end will be equal to the true distance edge relaxation let us define label [ufor each vertex in vwhich we use to approximate the distance in from to the meaning of these labels is that [uwill always store the length of the best path we have found so far from to initiallyd[ and [ufor each uvand we define the set cwhich is our "cloudof verticesto initially be the empty set at each iteration of the algorithmwe select vertex not in with smallest [ulabeland we pull into in the very first iteration we willof coursepull into once new vertex is pulled into cwe then update the label [zof each vertex that is adjacent to and is outside of cto reflect the fact that there may be new and better way to get to via this update operation is known as relaxation procedurefor it takes an old estimate and checks if it can be improved to get closer to its true value ( metaphor for why we call this relaxation comes from spring that is stretched out and then "relaxedback to its true resting shape in the case of dijkstra' algorithmthe relaxation is performed for an edge ( ,zsuch
that there may be a better value for D[z] using the edge (u, z). The specific edge relaxation operation is as follows:

Edge relaxation:
    if D[u] + w((u, z)) < D[z] then
        D[z] ← D[u] + w((u, z))

We give the pseudo-code for Dijkstra's algorithm in the code fragment below. Note that we use a priority queue Q to store the vertices outside of the cloud C.

Code Fragment: Dijkstra's algorithm for the single-source shortest path problem.

We illustrate several iterations of Dijkstra's algorithm in the figures described below.

Figure: An execution of Dijkstra's algorithm on a weighted graph. The start vertex is BWI. A box next to each vertex v stores the label D[v]. A special symbol is used in place of +∞. The edges of the shortest-path tree are drawn as thick blue arrows, and for each vertex outside the "cloud" we show the current best
in figure figure an example execution of dijkstra' algorithm (continued from figure
the interestingand possibly even little surprisingaspect of the dijkstra algorithm is thatat the moment vertex is pulled into cits label [ustores the correct length of shortest path from to thuswhen the algorithm terminatesit will have computed the shortest-path distance from to every vertex of that isit will have solved the single-source shortest path problem it is probably not immediately clear why dijkstra' algorithm correctly finds the shortest path from the start vertex to each other vertex in the graph why is it that the distance from to is equal to the value of the label [uat the time vertex is pulled into the cloud (which is also the time is removed from the priority queue )the answer to this question depends on there being no negative-weight edges in the graphfor it allows the greedy method to work correctlyas we show in the proposition that follows
Proposition: In Dijkstra's algorithm, whenever a vertex u is pulled into the cloud, the label D[u] is equal to d(v, u), the length of a shortest path from v to u.

Justification: Suppose that D[t] > d(v, t) for some vertex t in V, and let u be the first vertex the algorithm pulled into the cloud C (that is, removed from Q) such that D[u] > d(v, u). There is a shortest path P from v to u (for otherwise d(v, u) = +∞ = D[u]). Let us therefore consider the moment when u is pulled into C, and let z be the first vertex of P (when going from v to u) that is not in C at this moment. Let y be the predecessor of z in path P (note that we could have y = v); see the figure described below. We know, by our choice of z, that y is already in C at this point. Moreover, D[y] = d(v, y), since u is the first incorrect vertex. When y was pulled into C, we tested (and possibly updated) D[z], so that we had at that point

    D[z] ≤ D[y] + w((y, z)) = d(v, y) + w((y, z)).

But since z is the next vertex on the shortest path from v to u, this implies that

    D[z] = d(v, z).

But we are now at the moment when we are picking u, not z, to join C; hence,

    D[u] ≤ D[z].

It should be clear that a subpath of a shortest path is itself a shortest path. Hence, since z is on the shortest path from v to u,

    d(v, z) + d(z, u) = d(v, u).

Moreover, d(z, u) ≥ 0 because there are no negative-weight edges. Therefore,

    D[u] ≤ D[z] = d(v, z) ≤ d(v, z) + d(z, u) = d(v, u).

But this contradicts the definition of u; hence, there can be no such vertex u.

Figure: A schematic illustration for the justification of the proposition above.
in this sectionwe analyze the time complexity of dijkstra' algorithm we denote with and mthe number of vertices and edges of the input graph grespectively we assume that the edge weights can be added and compared in constant time because of the high level of the description we gave for dijkstra' algorithm in code fragment analyzing its running time requires that we give more details on its implementation specificallywe should indicate the data structures used and how they are implemented let us first assume that we are representing the graph using an adjacency list structure this data structure allows us to step through the vertices adjacent to during the relaxation step in time proportional to their number it still does not settle all the details for the algorithmhoweverfor we must say more about how to implement the other principle data structure in the algorithm--the priority queue an efficient implementation of the priority queue uses heap (section this allows us to extract the vertex with smallest label (call to the removemin methodin (logntime as noted in the pseudo-codeeach time we update [zlabel we need to update the key of in the priority queue thuswe actually need heap implementation of an adaptable priority queue (section if is an adaptable priority queue implemented as heapthen this key update canfor examplebe done using the replacekey(ek)where is the entry storing the key for the vertex if is location-awarethen we can easily implement such key updates in (logntimesince location-aware entry for vertex would allow to have immediate access to the entry storing in the heap (see section assuming this implementation of qdijkstra' algorithm runs in (( mlogntime
are as followsinserting all the vertices in with their initial key value can be done in ( logntime by repeated insertionsor in (ntime using bottom-up heap construction (see section at each iteration of the while loopwe spend (logntime to remove vertex from qand (degree( )log ntime to perform the relaxation procedure on the edges incident on the overall running time of the while loop is which is (( +mlog nby proposition note that if we wish to express the running time as function of onlythen it is ( log nin the worst case an alternative implementation for dijkstra' algorithm let us now consider an alternative implementation for the adaptable priority queue using an unsorted sequence thisof courserequires that we spend (ntime to extract the minimum elementbut it allows for very fast key updatesprovided supports location-aware entries (section specificallywe can implement each key update done in relaxation step in ( time--we simply change the key value once we locate the entry in to update hencethis implementation results in running time that is ( )which can be simplified to ( since is simple comparing the two implementations we have two choices for implementing the adaptable priority queue with location-aware entries in dijkstra' algorithma heap implementationwhich yields running time of (( )log )and an unsorted sequence implementationwhich yields running time of ( since both implementations would be fairly simple to code upthey are about equal in terms of the programming sophistication needed these two implementations are also about equal in terms of the constant factors in their worst-case running times looking only at these worst-case timeswe prefer the heap implementation when the number of edges in the graph is small (that iswhen /log )and we prefer the sequence implementation when the number of edges is large (that iswhen /log
vertices and edgessuch that the weight of each edge is nonnegativeand vertex of gdijkstra' algorithm computes the distance from to all other vertices of in (( +mlog nworst-case timeoralternativelyin ( worstcase time in exercise - we explore how to modify dijkstra' algorithm to output tree rooted at vsuch that the path in from to vertex is shortest path in from to programming dijkstra' algorithm in java having given pseudo-code description of dijkstra' algorithmlet us now present java code for performing dijkstra' algorithmassuming we are given an undirected graph with positive integer weights we express the algorithm by means of class dijkstra (code fragments )which uses weight decoration for each edge to extract ' weight class dijkstra assumes that each edge has weight decoration code fragment class dijkstra implementing dijkstra' algorithm (continues in code fragment
visit an adaptable priority queue supporting location-aware entries (section is used we insert vertex into with method insertwhich returns the location-aware entry of in we "attachto its entry in by means of method setentryand we retrieve the entry of by means of method getentry note that associating entries to the vertices is an instance of the decorator design pattern (section instead of using an additional data structure for the labels [ ]we exploit the fact that [uis the key of vertex in qand thus [ucan be retrieved given the entry for in changing the label of vertex to in the relaxation procedure corresponds to calling method replacekey( , )where is the location-aware entry for in
dijkstra (continued from code fragment
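Because the class Dijkstra code fragments are not reproduced above, the sketch below shows one self-contained way to code Dijkstra's algorithm in Java. It deliberately departs from the text's adaptable priority queue with location-aware entries and replaceKey: whenever a label improves, a fresh entry is added to java.util.PriorityQueue, and stale entries are simply skipped when they are removed. The adjacency-list input format is an assumption for the example.

import java.util.*;

// Dijkstra's algorithm with a lazy-deletion priority queue instead of replaceKey.
public class DijkstraSketch {
    // graph.get(v) holds {neighbor, weight} pairs with nonnegative weights.
    // Returns D[u] for every vertex u, with Long.MAX_VALUE meaning "unreachable".
    static long[] shortestPaths(List<List<int[]>> graph, int source) {
        int n = graph.size();
        long[] d = new long[n];
        Arrays.fill(d, Long.MAX_VALUE);
        d[source] = 0;
        PriorityQueue<long[]> pq = new PriorityQueue<>(Comparator.comparingLong(e -> e[0]));
        pq.add(new long[]{0, source});                  // queue entries are {D[u], u}
        while (!pq.isEmpty()) {
            long[] entry = pq.poll();
            long du = entry[0];
            int u = (int) entry[1];
            if (du > d[u]) continue;                    // stale entry: a better label exists
            for (int[] edge : graph.get(u)) {           // relax each edge (u, z)
                int z = edge[0], w = edge[1];
                if (du + w < d[z]) {
                    d[z] = du + w;
                    pq.add(new long[]{d[z], z});
                }
            }
        }
        return d;
    }
}

The lazy-deletion approach can hold up to m entries in the queue at once, so its running time is O(m log m), which is still O(m log n) for a simple graph; the adaptable priority queue described in the text avoids the duplicate entries at the cost of a more elaborate data structure.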
minimum spanning trees suppose we wish to connect all the computers in new office building using the least amount of cable we can model this problem using weighted graph whose vertices represent the computersand whose edges represent all the possible pairs (uvof computerswhere the weight ((vu)of edge (vuis equal to the amount of cable needed to connect computer to computer rather than computing shortest path tree from some particular vertex vwe are interested instead in finding (freetree that contains all the vertices of and has the minimum total weight over all such trees methods for finding such tree are the focus of this section problem definition given weighted undirected graph gwe are interested in finding tree that contains all the vertices in and minimizes the sum treesuch as thisthat contains every vertex of connected graph is said to be spanning treeand the problem of computing spanning tree with smallest total weight is known as the minimum spanning tree (or mstproblem the development of efficient algorithms for the minimum spanning tree problem predates the modern notion of computer science itself in this sectionwe discuss two classic algorithms for solving the mst problem these algorithms are both applications of the greedy methodwhichas was discussed briefly in the previous sectionis based on choosing objects to join growing collection by iteratively picking an object that minimizes some cost function the first algorithm we discuss is kruskal' algorithmwhich "growsthe mst in clusters by considering edges in order of their weights the second algorithm we discuss is the prim-jarnik algorithmwhich grows the mst from single root vertexmuch in the same way as dijkstra' shortest-path algorithm as in section in order to simplify the description of the algorithmswe assumein the followingthat the input graph is undirected (that isall its edges are undirectedand simple (that isit has no self-loops and no parallel edgeshencewe denote the edges of as unordered vertex pairs ( ,zbefore we discuss the details of these algorithmshoweverlet us give crucial fact about minimum spanning trees that forms the basis of the algorithms crucial fact about minimum spanning trees
case depends crucially on the following fact (see figure figure an illustration of the crucial fact about minimum spanning trees proposition let be weighted connected graphand let and be partition of the vertices of into two disjoint nonempty sets furthermorelete be an edge in with minimum weight from among those with one endpoint in and the other in there is minimum spanning tree that has as one of its edges justificationlet be minimum spanning tree of if does not contain edge ethe addition of to must create cycle thereforethere is some edge of this cycle that has one endpoint in and the other in moreoverby the choice of ew( < (fif we remove from }we obtain spanning tree whose total weight is no more than before since was minimum spanning treethis new tree must also be minimum spanning tree in factif the weights in are distinctthen the minimum spanning tree is uniquewe leave the justification of this less crucial fact as an exercise ( - in additionnote that proposition remains valid even if the graph contains negative-weight edges or negative-weight cyclesunlike the algorithms we presented for shortest paths
kruskal' algorithm the reason proposition is so important is that it can be used as the basis for building minimum spanning tree in kruskal' algorithmit is used to build the minimum spanning tree in clusters initiallyeach vertex is in its own cluster all by itself the algorithm then considers each edge in turnordered by increasing weight if an edge connects two different clustersthen is added to the set of edges of the minimum spanning treeand the two clusters connected by are merged into single cluster ifon the other hande connects two vertices that are already in the same clusterthen is discarded once the algorithm has added enough edges to form spanning treeit terminates and outputs this tree as the minimum spanning tree we give pseudo-code for kruskal' mst algorithm in code fragment and we show the working of this algorithm in figures and code fragment kruskal' algorithm for the mst problem as mentioned beforethe correctness of kruskal' algorithm follows from the crucial fact about minimum spanning treesproposition each time kruskal' algorithm adds an edge ( ,uto the minimum spanning tree twe can define partitioning of the set of vertices (as in the propositionby letting be the cluster containing and letting contain the rest of the vertices in this clearly defines disjoint partitioning of the vertices of andmore importantlysince we are extracting edges from in order by their weightse must be minimum-weight edge with one vertex in and the other in thuskruskal' algorithm always adds valid minimum spanning tree edge
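As a concrete illustration of the method just described, here is a minimal Java sketch of Kruskal's algorithm. Instead of a heap-based priority queue it sorts the edge list by weight up front, and it uses a small union-find structure to maintain the clusters; the {u, v, weight} edge-array input format is an assumption for the example.

import java.util.*;

// Kruskal's algorithm: process edges in nondecreasing weight order and accept an
// edge whenever its endpoints currently lie in different clusters.
public class Kruskal {
    static int[] parent;

    static int find(int v) {                            // representative of v's cluster
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];              // path compression by halving
            v = parent[v];
        }
        return v;
    }

    // edges are {u, v, weight} triples over vertices 0..n-1; returns the accepted MST edges.
    static List<int[]> mst(int n, int[][] edges) {
        parent = new int[n];
        for (int v = 0; v < n; v++) parent[v] = v;      // each vertex starts in its own cluster
        int[][] sorted = edges.clone();
        Arrays.sort(sorted, Comparator.comparingInt(e -> e[2]));
        List<int[]> tree = new ArrayList<>();
        for (int[] e : sorted) {
            int a = find(e[0]), b = find(e[1]);
            if (a != b) {                               // endpoints in different clusters
                tree.add(e);                            // accept e as a minimum spanning tree edge
                parent[a] = b;                          // merge the two clusters
                if (tree.size() == n - 1) break;        // the spanning tree is complete
            }
        }
        return tree;
    }
}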
mst algorithm on graph with integer weights we show the clusters as shaded regions and we highlight the edge being considered in each iteration (continues in figure
mst algorithm rejected edges are shown dashed (continues in figure
mst algorithm (continuedthe edge considered in (nmerges the last two clusterswhich concludes this
figure the running time of kruskal' algorithm we denote the number of vertices and edges of the input graph with and mrespectively because of the high level of the description we gave for kruskal' algorithm in code fragment analyzing its running time requires that we give more details on its implementation specificallywe should indicate the data structures used and how they are implemented we can implement the priority queue using heap thuswe can initialize in ( log mtime by repeated insertionsor in (mtime using bottom-up heap construction (see section in additionat each iteration of the while loopwe can remove minimum-weight edge in (log mtimewhich actually is (log )since is simple thusthe total time spent performing priority queue operations is no more than ( log nwe can represent each cluster using one of the union-find partition data structures discussed in section recall that the sequence-based union-find structure allows us to perform series of union and find operations in ( log ntimeand the tree-based version can implement such series of operations in ( logntime thussince we perform calls to method union and at most calls to findthe total time spent on merging clusters and determining the clusters that vertices belong to is no more than (mlognusing the sequencebased approach or (mlognusing the tree-based approach thereforeusing arguments similar to those used for dijkstra' algorithmwe conclude that the running time of kruskal' algorithm is ((nmlog )which can be simplified as (mlog )since is simple and connected
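A minimal sketch of a tree-based union-find (partition) structure with union-by-size, the kind of structure the cluster bookkeeping above relies on, is shown below. The class is an illustrative stand-in for the partition data structures discussed in the section cited; with union-by-size, each tree has logarithmic height, so each find takes O(log n) time.

// Tree-based union-find with union-by-size; no path compression is used here.
public class Partition {
    private final int[] parent;   // parent[v] is the parent of v in its cluster tree
    private final int[] size;     // size[r] is the number of elements under root r

    public Partition(int n) {
        parent = new int[n];
        size = new int[n];
        for (int v = 0; v < n; v++) { parent[v] = v; size[v] = 1; }
    }

    public int find(int v) {                 // walk up to the root of v's tree
        while (parent[v] != v) v = parent[v];
        return v;
    }

    public void union(int u, int v) {        // merge the clusters containing u and v
        int a = find(u), b = find(v);
        if (a == b) return;
        if (size[a] < size[b]) { int t = a; a = b; b = t; }
        parent[b] = a;                       // hang the smaller tree under the larger root
        size[a] += size[b];
    }
}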
the prim-jarnik algorithm in the prim-jarnik algorithmwe grow minimum spanning tree from single cluster starting from some "rootvertex the main idea is similar to that of dijkstra' algorithm we begin with some vertex vdefining the initial "cloudof vertices thenin each iterationwe choose minimum-weight edge ( , )connecting vertex in the cloud to vertex outside of the vertex is then brought into the cloud and the process is repeated until spanning tree is formed againthe crucial fact about minimum spanning trees comes to playfor by always choosing the smallest-weight edge joining vertex inside to one outside cwe are assured of always adding valid edge to the mst to efficiently implement this approachwe can take another cue from dijkstra' algorithm we maintain label [ufor each vertex outside the cloud cso that [ustores the weight of the best current edge for joining to the cloud these labels allow us to reduce the number of edges that we must consider in deciding which vertex is next to join the cloud we give the pseudo-code in code fragment code fragment the prim-jarnik algorithm for the mst problem
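The pseudo-code fragment itself is not reproduced above, so the following is a minimal Java sketch of the Prim-Jarnik algorithm. It substitutes java.util.PriorityQueue with lazily discarded stale entries for the adaptable priority queue with location-aware entries used in the text, and the adjacency-list input format is an assumption for the example.

import java.util.*;

// Prim-Jarnik: grow the cloud from vertex 0, always pulling in the vertex joined
// to the cloud by the lightest available edge.
public class PrimJarnik {
    // graph.get(v) holds {neighbor, weight} pairs; assumes the graph is connected.
    static long mstWeight(List<List<int[]>> graph) {
        int n = graph.size();
        boolean[] inCloud = new boolean[n];
        // queue entries are {D[u], u}, keyed by the best known connection weight for u
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        pq.add(new int[]{0, 0});                        // start the cloud at vertex 0
        long total = 0;
        while (!pq.isEmpty()) {
            int[] entry = pq.poll();
            int u = entry[1];
            if (inCloud[u]) continue;                   // stale entry, ignore it
            inCloud[u] = true;
            total += entry[0];                          // weight of the edge that pulled u in
            for (int[] edge : graph.get(u)) {
                int z = edge[0], w = edge[1];
                if (!inCloud[z]) pq.add(new int[]{w, z});  // a candidate connection for z
            }
        }
        return total;                                   // total weight of the minimum spanning tree
    }
}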
let and denote the number of vertices and edges of the input graph grespectively the implementation issues for the prim-jarnik algorithm are similar to those for dijkstra' algorithm if we implement the adaptable priority queue as heap that supports location-aware entries (section )then we can extract the vertex in each iteration in (log ntime in additionwe can update each [zvalue in (log ntimeas wellwhich is computation considered at most once for each edge ( ,zthe other steps in each iteration can be implemented in constant time thusthe total running time is (( +mlog )which is ( log nillustrating the prim-jarn ik algorithm we illustrate the prim-jarn ik algorithm in figures through figure an illustration of the prim-jarnik mst algorithm (continues in figure
algorithm (continued from figure
Exercises

For source code and help with exercises, please visit java.datastructures.net.
- draw simple undirected graph that has vertices edgesand connected components why would it be impossible to draw with connected components if had edgesr- let be simple connected graph with vertices and edges explain why (log mis (log nr- draw an adjacency list and adjacency matrix representation of the undirected graph shown in figure - draw simple connected directed graph with vertices and edges such that the in-degree and out-degree of each vertex is show that there is single (nonsimplecycle that includes all the edges of your graphthat isyou can trace all the edges in their respective directions without ever lifting your pencil (such cycle is called an euler tour - repeat the previous problem and then remove one edge from the graph show that now there is single (nonsimplepath that includes all the edges of your graph (such path is called an euler path - bob loves foreign languages and wants to plan his course schedule for the following years he is interested in the following nine language coursesla la la la la la la la and la the course prerequisites arela (nonela la
la la la :la ,la la la la la la la :la ,la la la find the sequence of courses that allows bob to satisfy all the prerequisites - suppose we represent graph having vertices and edges with the edge list structure whyin this casedoes the insert vertex method run in ( time while the remove vertex method runs in (mtimer- let be graph whose vertices are the integers through and let the adjacent vertices of each vertex be given by the table belowvertex adjacent vertices (
( ( ( ( ( ( , assume thatin traversal of gthe adjacent vertices of given vertex are returned in the same order as they are listed in the table above draw give the sequence of vertices of visited using dfs traversal starting at vertex give the sequence of vertices visited using bfs traversal starting at vertex - would you use the adjacency list structure or the adjacency matrix structure in each of the following casesjustify your choice
little space as possible the graph has , vertices and , , edgesand it is important to use as little space as possible you need to answer the query areadjacent as fast as possibleno matter how much space you use - explain why the dfs traversal runs in ( time on an -vertex simple graph that is represented with the adjacency matrix structure - draw the transitive closure of the directed graph shown in figure - compute topological ordering for the directed graph drawn with solid edges in figure - can we use queue instead of stack as an auxiliary data structure in the topological sorting algorithm shown in code fragment why or why notr- draw simpleconnectedweighted graph with vertices and edgeseach with unique edge weights identify one vertex as "startvertex and illustrate running of dijkstra' algorithm on this graph - show how to modify the pseudo-code for dijkstra' algorithm for the case when the graph may contain parallel edges and self-loops - show how to modify the pseudo-code for dijkstra' algorithm for the case when the graph is directed and we we want to compute shortest directed paths from the source vertex to all the other vertices
show how to modify dijkstra' algorithm to not only output the distance from to each vertex in gbut also to output tree rooted at such that the path in from to vertex is shortest path in from to - there are eight small islands in lakeand the state wants to build seven bridges to connect them so that each island can be reached from any other one via one or more bridges the cost of constructing bridge is proportional to its length the distances between pairs of islands are given in the following table
find which bridges to build to minimize the total construction cost - draw simpleconnectedundirectedweighted graph with vertices and edgeseach with unique edge weights illustrate the execution of kruskal' algorithm on this graph (note that there is only one minimum spanning tree for this graph - repeat the previous problem for the prim-jarnik algorithm - consider the unsorted sequence implementation of the priority queue used in dijkstra' algorithm in this casewhy is this the best-case running time of dijkstra' algorithm ( on an -vertex graphr- describe the meaning of the graphical conventions used in figure illustrating dfs traversal what do the colors blue and black refer towhat do the arrows signifyhow about thick lines and dashed linesr- repeat exercise - for figure illustrating bfs traversal - repeat exercise - for figure illustrating directed dfs traversal - repeat exercise - for figure illustrating the floyd-warshall algorithm -
algorithm - repeat exercise - for figures and illustrating dijkstra' algorithm - repeat exercise - for figures and illustrating kruskal' algorithm - repeat exercise - for figures and illustrating the primjarnik algorithm - how many edges are in the transitive closure of graph that consists of simple directed path of verticesr- given complete binary tree with nodesconsider directed graph having the nodes of as its vertices for each parent-child pair in tcreate directed edge in of from the parent to the child show that the transitive closure has ( log nedges - simple undirected graph is complete if it contains an edge between every pair of distinct vertices what does depth-first search tree of complete graph look liker- recalling the definition of complete graph from exercise - what does breadth-first search tree of complete graph look liker- say that maze is constructed correctly if there is one path from the start to the finishthe entire maze is reachable from the startand there are no loops around any portions of the maze given maze drawn in an gridhow can we
algorithmcreativity - say that an -vertex directed acyclic graph is compact if there is some way of numbering the vertices of with the integers from to such that contains the edge (ijif and only if jfor all ij in [ give an ( )time algorithm for detecting if is compact - justify proposition - describein pseudo-codean ( )-time algorithm for computing all the connected components of an undirected graph with vertices and edges - let be the spanning tree rooted at the start vertex produced by the depth-first search of connectedundirected graph argue why every edge of not in goes from vertex in to one of its ancestorsthat isit is back edge - suppose we wish to represent an -vertex graph using the edge list structureassuming that we identify the vertices with the integers in the set { , describe how to implement the collection to support (log )-time performance for the areadjacent method how are you implementing the method in this casec- tamarindo university and many other schools worldwide are doing joint project on multimedia computer network is built to connect these schools using communication links that form free tree the schools decide to install file server at one of the schools to share data among all the schools since the transmission time on link is dominated by the link setup and synchronizationthe cost of data transfer is proportional to the number of links used henceit is desirable to choose "centrallocation for the file server given free tree and node of tthe eccentricity of is the length of longest path from to
of design an efficient algorithm thatgiven an -node free tree tcomputes center of is the center uniqueif nothow many distinct centers can free tree havec- show thatif is bfs tree produced for connected graph gthenfor each vertex at level ithe path of between and has edgesand any other path of between and has at least edges - the time delay of long-distance call can be determined by multiplying small fixed constant by the number of communication links on the telephone network between the caller and callee suppose the telephone network of company named rt& is free tree the engineers of rt& want to compute the maximum possible time delay that may be experienced in long-distance call given free tree tthe diameter of is the length of longest path between two nodes of give an efficient algorithm for computing the diameter of - company named rt& has network of switching stations connected by high-speed communication links each customer' phone is directly connected to one station in his or her area the engineers of rt& have developed prototype video-phone system that allows two customers to see each other during phone call in order to have acceptable image qualityhoweverthe number of links used to transmit video signals between the two parties cannot exceed suppose that rt& ' network is represented by graph design an efficient algorithm that computesfor each stationthe set of stations it can reach using no more than links - explain why there are no forward nontree edges with respect to bfs tree constructed for directed graph -
traverses each edge of exactly once according to its direction such tour is connected and the in-degree equals the out-degree of each always exists if vertex in with vertices and edges is cycle that describe an ( )-time algorithm for finding an euler tour of such digraph - an independent set of an undirected graph ( ,eis subset of such that no two vertices in are adjacent that isif and are in ithen ( ,vis not in maximal independent set is an independent set such thatif we were to add any additional vertex to mthen it would not be independent any more every graph has maximal independent set (can you see thisthis question is not part of the exercisebut it is worth thinking about give an efficient algorithm that computes maximal independent set for graph what is this method' running timec- let be an undirected graph with vertices and edges describe an ( )-time algorithm for traversing each edge of exactly once in each direction - justify proposition - give an example of an -vertex simple graph that causes dijkstra' algorithm to run in ( log ntime when its implemented with heap - with negative-weight edgesgive an example of weighted directed graph but no negative-weight cyclesuch that dijkstra' algorithm incorrectly computes the shortest-path distances from some start vertex - consider the following greedy strategy for finding shortest path from vertex start to vertex goal in given connected graph initialize path to start
initialize visitedvertices to {start if start=goalreturn path and exit otherwisecontinue find the edge (start,vof minimum weight such that is adjacent to start and is not in visitedvertices add to path add to visitedvertices set start equal to and go to step does this greedy strategy always find shortest path from start to goaleither explain intuitively why it worksor give counter example - show that if all the weights in connected weighted graph are distinctthen there is exactly one minimum spanning tree for - design an efficient algorithm for finding longest directed path from vertex to vertex of an acyclic weighted digraph specify the graph representation used and any auxiliary data structures used alsoanalyze the time complexity of your algorithm - consider diagram of telephone networkwhich is graph whose vertices represent switching centersand whose edges represent communication lines joining pairs of centers edges are marked by their bandwidthand the bandwidth of path is the bandwidth of its lowest bandwidth edge give an algorithm thatgiven diagram and two switching centers and boutputs the maximum bandwidth of path between and
computer networks should avoid single points of failurethat isnetwork nodes that can disconnect the network if they fail we say connected graph is biconnected if it contains no vertex whose removal would divide into two or more connected components give an ( )-time algorithm for adding at most edges to connected graph gwith > vertices and > edgesto guarantee that is biconnected - nasa wants to link stations spread over the country using communication channels each pair of stations has different bandwidth availablewhich is known priori nasa wants to select channels (the minimum possiblein such way that all the stations are linked by the channels and the total bandwidth (defined as the sum of the individual bandwidths of the channelsis maximum give an efficient algorithm for this problem and determine its worstcase time complexity consider the weighted graph ( , )where is the set of stations and is the set of channels between the stations define the weight (eof an edge in as the bandwidth of the corresponding channel - suppose you are given timetablewhich consists ofa set of airportsand for each airport in aa minimum connecting time (aa set of flightsand the followingfor each flight in forigin airport (fin destination airport (fin departure time (farrival time (
problemwe are given airports and band time tand we wish to compute sequence of flights that allows one to arrive at the earliest possible time in when departing from at or after time minimum connecting times at intermediate airports should be observed what is the running time of your algorithm as function of and mc- inside the castle of asymptopia there is mazeand along each corridor of the maze there is bag of gold coins the amount of gold in each bag varies noble knightnamed sir paulwill be given the opportunity to walk through the mazepicking up bags of gold he may enter the maze only through door marked "enterand exit through another door marked "exit while in the maze he may not retrace his steps each corridor of the maze has an arrow painted on the wall sir paul may only go down the corridor in the direction of the arrow there is no way to traverse "loopin the maze given map of the mazeincluding the amount of gold in and the direction of each corridordescribe an algorithm to help sir paul pick up the most gold - let be weighted digraph with vertices design variation of floydwarshall' algorithm for computing the lengths of the shortest paths from each vertex to every other vertex in ( time - suppose we are given directed graph adjacency matrix corresponding to with verticesand let be the nx let the product of with itself ( be definedfor <ij <nas followsm (ij= ( ( ,jm( ,nm( , )where "wis the boolean or operator and "ais boolean and given this definitionwhat does (ij imply about the vertices and jwhat if (ij suppose is the product of with itself what do the entries of signifyhow about the entries of ( (min generalwhat information is contained in the matrix mp
now suppose that is weighted and assume the following for < < , ( , )= for < , <nm(ijweight(ijif (ijis in for <ij <nm(ijif (ijis not in alsolet be definedfor < , <nas followsm (ijmin{ ( + ( , ), ( , + ( , )if (ijkwhat may we conclude about the relationship between vertices and jc- graph is bipartite if its vertices can be partitioned into two sets and such that every edge in has one end vertex in and the other in design and analyze an efficient algorithm for determining if an undirected graph is bipartite (without knowing the sets and in advancec- an old mst methodcalled baruvka' algorithmworks as follows on graph having vertices and edges with distinct weightslet tbe subgraph of initially containing just the vertices in while has fewer than edges do for each connected component of do find the lowest-weight edge (vuin with in ci and not in ci add (vuto (unless it is already in treturn argue why this algorithm is correct and why it runs in (mlogntime
let be graph with vertices and edges such that all the edge weights in are integers in the range [ ,ngive an algorithm for finding minimum spanning tree for in (mlogntime projects - write class implementing simplified graph adt that has only methods relevant to undirected graphs and does not include update methodsusing the adjacency matrix structure your class should include constructor method that takes two collections (for examplesequences)-- collection of vertex elements and collection of pairs of vertex elements--and produces the graph that these two collections represent - implement the simplified graph adt described in project - using the adjacency list structure - implement the simplified graph adt described in project - using the edge list structure - extend the class of project - to support update methods - extend the class of project - to support all the methods of the graph adt (including methods for directed edgesp- implement generic bfs traversal using the template method pattern - implement the topological sorting algorithm - implement the floyd-warshall transitive closure algorithm
design an experimental comparison of repeated dfs traversals versus the floyd-warshall algorithm for computing the transitive closure of digraph - implement kruskal' algorithm assuming that the edge weights are integers - implement the prim-jarnik algorithm assuming that the edge weights are integers - perform an experimental comparison of two of the minimum spanning tree algorithms discussed in this (kruskal and prim-jarnikdevelop an extensive set of experiments to test the running times of these algorithms using randomly generated graphs - one way to construct maze starts with an nxn grid such that each grid cell is bounded by four unit-length walls we then remove two boundary unit-length wallsto represent the start and finish for each remaining unit-length wall not on the boundarywe assign random value and create graph gcalled the dualsuch that each grid cell is vertex in and there is an edge joining the vertices for two cells if and only if the cells share common wall the weight of each edge is the weight of the corresponding wall we construct the maze by finding minimum spanning tree for and removing all the walls corresponding to edges in write program that uses this algorithm to generate mazes and then solves them minimallyyour program should draw the maze andideallyit should visualize the solution as well - write program that builds the routing tables for the nodes in computer networkbased on shortest-path routingwhere path distance is measured by hop countthat isthe number of edges in path the input for this problem is the connectivity information for all the nodes in the networkas in the following example which indicates three network nodes that are connected to that isthree nodes that are one hop away the routing table for the node at address is set of pairs ( , )which indicates thatto route message from to bthe
should output the routing table for each node in the networkgiven an input list of node connectivity listseach of which is input in the syntax as shown aboveone per line notes the depth-first search method is part of the "folkloreof computer sciencebut hopcroft and tarjan [ are the ones who showed how useful this algorithm is for solving several different graph problems knuth [ discusses the topological sorting problem the simple linear-time algorithm that we describe for determining if directed graph is strongly connected is due to kosaraju the floyd-warshall algorithm appears in paper by floyd [ and is based upon theorem of warshall [ the mark-sweep garbage collection method we describe is one of many different algorithms for performing garbage collection we encourage the reader interested in further study of garbage collection to examine the book by jones [ to learn about different algorithms for drawing graphsplease see the book by tamassia [ ]the annotated bibliography of di battista et al [ ]or the book by di battista et al [ the first known minimum spanning tree algorithm is due to baruvka [ ]and was published in the prim-jarnik algorithm was first published in czech by jarnik [ in and in english in by prim [ kruskal published his minimum spanning tree algorithm in [ the reader interested in further study of the history of the minimum spanning tree problem is referred to the paper by graham and hell [ the current asymptotically fastest minimum spanning tree algorithm is randomized method of kargerkleinand tarjan [ that runs in (mexpected time dijkstra [ published his single-sourceshortest path algorithm in the reader interested in further study of graph algorithms is referred to the books by ahujamagnantiand orlin [ ]cormenleisersonand rivest [ ]even [ ]gibbons [ ]mehlhorn [ ]and tarjan [ ]and the book by van leeuwen [ incidentallythe running time for the prim-jarnik algorithmand also that of dijkstra' algorithmcan actually be improved to be ( log mby implementing the queue with either of two more sophisticated data structuresthe "fibonacci heap[ or the "relaxed heap[
memory contents memory management stacks in the java virtual machine allocating space in the memory heap garbage collection
external memory and caching the memory hierarchy caching strategies external searching and btrees ( ,btrees -trees external-memory sorting
exercises java datastructures net memory management in order to implement any data structure on an actual computerwe need to use computer memory computer memory is simply sequence of memory wordseach of which usually consists of or bytes (depending on the computerthese memory words are numbered from to where is the number of memory words available to the computer the number associated with each memory word is known as its address thusthe memory in computer can be viewed as basically one giant array of memory words using this memory to construct data structures (and run programsrequires that we manage the computer' memory to provide the space needed for data--including variablesnodespointersarraysand character strings-and the programs the computer is to run we discuss the basics of memory management in this section stacks in the java virtual machine java program is typically compiled into sequence of byte codes that are defined as "machineinstructions for well-defined model--the java virtual machine (jvmthe definition of the jvm is at the heart of the definition of the java language itself by compiling java code into the jvm byte codesrather than the machine language of specific cpua java program can be run on any computersuch as personal computer or serverthat has program that can emulate the jvm interestinglythe stack data structure plays central role in the definition of the jvm the java method stack stacks have an important application to the run-time environment of java programs running java program (more preciselya running java threadhas private stackcalled the java method stack or just java stack for shortwhich is used to keep track of local variables and other important information on methods as they are invoked during execution (see figure
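As a small, hypothetical illustration of the Java method stack at work, the program below recurses with no base case; each pending invocation of countDown keeps a frame, holding its parameter and return information, on the running thread's Java stack until the JVM throws a StackOverflowError. The class and method names are made up for this example.

// Each recursive call pushes one more frame onto the current thread's Java method
// stack; with no base case, the stack eventually overflows.
public class StackDepthDemo {
    static int depth = 0;

    static void countDown(int n) {
        depth++;              // roughly one frame per increment
        countDown(n - 1);     // recursive call: pushes another frame
    }

    public static void main(String[] args) {
        try {
            countDown(Integer.MAX_VALUE);
        } catch (StackOverflowError e) {   // catching an Error is acceptable only for a demo
            System.out.println("Stack overflowed after about " + depth + " frames");
        }
    }
}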