Watching the algorithm run with inversely sorted bars is especially instructive. The resulting patterns show clearly how each range is sorted individually and merged with its other half, and how the ranges grow larger and larger.

The mergeSort.java Program

In a moment we'll look at the entire mergeSort.java program. First, let's focus on the method that carries out the mergesort. Here it is:

private void recMergeSort(long[] workSpace, int lowerBound,
                          int upperBound)
   {
   if(lowerBound == upperBound)            // if range is 1,
      return;                              // no use sorting
   else
      {                                    // find midpoint
      int mid = (lowerBound+upperBound) / 2;
                                           // sort low half
      recMergeSort(workSpace, lowerBound, mid);
                                           // sort high half
      recMergeSort(workSpace, mid+1, upperBound);
                                           // merge them
      merge(workSpace, lowerBound, mid+1, upperBound);
      }  // end else
   }  // end recMergeSort()

As you can see, besides the base case, there are only four statements in this method. One computes the midpoint, there are two recursive calls to recMergeSort() (one for each half of the array), and finally a call to merge() to merge the two sorted halves. The base case occurs when the range contains only one element (lowerBound==upperBound) and results in an immediate return.

In the mergeSort.java program, the mergeSort() method is the one actually seen by the class user. It creates the array workSpace[] and then calls the recursive routine recMergeSort() to carry out the sort. The creation of the workspace array is handled in mergeSort() because doing it in recMergeSort() would cause the array to be created anew with each recursive call, an inefficiency.

The merge() method in the previous merge.java program operated on three separate arrays: two source arrays and a destination array. The merge() routine in the mergeSort.java program operates on a single array: the theArray member of the DArray class. The arguments to this merge() method are the starting point of the low-half subarray, the starting point of the high-half subarray, and the upper bound of the high-half subarray. The method calculates the sizes of the subarrays based on this information.
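Before looking at the full program, the approach just described can be restated as a compact, self-contained sketch. The class name MergeDemo is hypothetical (the book's version lives inside the DArray class shown below); the recursive structure and the merge logic are the same.

```java
// MergeDemo: a minimal recursive mergesort sketch (hypothetical class name).
public class MergeDemo
   {
   public static void mergeSort(long[] a)
      {
      long[] workSpace = new long[a.length];  // one workspace for all calls
      if(a.length > 0)
         recMergeSort(a, workSpace, 0, a.length-1);
      }

   private static void recMergeSort(long[] a, long[] workSpace,
                                    int lowerBound, int upperBound)
      {
      if(lowerBound == upperBound)            // base case: range is 1
         return;
      int mid = (lowerBound+upperBound) / 2;  // find midpoint
      recMergeSort(a, workSpace, lowerBound, mid);        // sort low half
      recMergeSort(a, workSpace, mid+1, upperBound);      // sort high half
      merge(a, workSpace, lowerBound, mid+1, upperBound); // merge them
      }

   private static void merge(long[] a, long[] workSpace,
                             int lowPtr, int highPtr, int upperBound)
      {
      int j = 0;                              // workspace index
      int lowerBound = lowPtr;
      int mid = highPtr-1;
      int n = upperBound-lowerBound+1;        // number of items

      while(lowPtr <= mid && highPtr <= upperBound)
         if(a[lowPtr] < a[highPtr])           // take the smaller item
            workSpace[j++] = a[lowPtr++];
         else
            workSpace[j++] = a[highPtr++];
      while(lowPtr <= mid)                    // copy any leftovers
         workSpace[j++] = a[lowPtr++];
      while(highPtr <= upperBound)
         workSpace[j++] = a[highPtr++];
      for(j=0; j<n; j++)                      // copy back into the array
         a[lowerBound+j] = workSpace[j];
      }

   public static void main(String[] args)
      {
      long[] data = {64, 21, 33, 70, 12, 85};
      mergeSort(data);
      System.out.println(java.util.Arrays.toString(data));
      // prints [12, 21, 33, 64, 70, 85]
      }
   }
```

Note that the workspace is allocated once in mergeSort(), for the same reason given above: allocating it in recMergeSort() would create a new array on every recursive call.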
Recursion

The listing below shows the complete mergeSort.java program, which uses a variant of the array classes from earlier chapters, adding the mergeSort() and recMergeSort() methods to the DArray class. The main() routine creates an array, inserts 12 items, displays the array, sorts the items with mergeSort(), and displays the array again.

Listing: The mergeSort.java Program

// mergeSort.java
// demonstrates recursive merge sort
// to run this program: C>java MergeSortApp
////////////////////////////////////////////////////////////////
class DArray
   {
   private long[] theArray;          // ref to array theArray
   private int nElems;               // number of data items
//-------------------------------------------------------------
   public DArray(int max)            // constructor
      {
      theArray = new long[max];      // create array
      nElems = 0;
      }
//-------------------------------------------------------------
   public void insert(long value)    // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//-------------------------------------------------------------
   public void display()             // displays array contents
      {
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");   // display it
      System.out.println("");
      }
//-------------------------------------------------------------
   public void mergeSort()           // called by main()
      {                              // provides workspace
      long[] workSpace = new long[nElems];
      recMergeSort(workSpace, 0, nElems-1);
      }
//-------------------------------------------------------------
   private void recMergeSort(long[] workSpace, int lowerBound,
                             int upperBound)
      {
      if(lowerBound == upperBound)   // if range is 1,
         return;                     // no use sorting
      else
         {                           // find midpoint
         int mid = (lowerBound+upperBound) / 2;
                                     // sort low half
         recMergeSort(workSpace, lowerBound, mid);
                                     // sort high half
         recMergeSort(workSpace, mid+1, upperBound);
                                     // merge them
         merge(workSpace, lowerBound, mid+1, upperBound);
         }  // end else
      }  // end recMergeSort()
//-------------------------------------------------------------
   private void merge(long[] workSpace, int lowPtr,
                      int highPtr, int upperBound)
      {
      int j = 0;                        // workspace index
      int lowerBound = lowPtr;
      int mid = highPtr-1;
      int n = upperBound-lowerBound+1;  // # of items

      while(lowPtr <= mid && highPtr <= upperBound)
         if( theArray[lowPtr] < theArray[highPtr] )
            workSpace[j++] = theArray[lowPtr++];
         else
            workSpace[j++] = theArray[highPtr++];

      while(lowPtr <= mid)
         workSpace[j++] = theArray[lowPtr++];

      while(highPtr <= upperBound)
         workSpace[j++] = theArray[highPtr++];

      for(j=0; j<n; j++)
         theArray[lowerBound+j] = workSpace[j];
      }  // end merge()
//-------------------------------------------------------------
   }  // end class DArray
////////////////////////////////////////////////////////////////
class MergeSortApp
   {
   public static void main(String[] args)
      {
      int maxSize = 100;             // array size
      DArray arr;                    // reference to array
      arr = new DArray(maxSize);     // create the array

      arr.insert(64);                // insert items
      arr.insert(21);
      arr.insert(33);
      arr.insert(70);
      arr.insert(12);
      arr.insert(85);
      arr.insert(44);
      arr.insert(3);
      arr.insert(99);
      arr.insert(0);
      arr.insert(108);
      arr.insert(36);

      arr.display();                 // display items
      arr.mergeSort();               // merge sort the array
      arr.display();                 // display items again
      }  // end main()
   }  // end class MergeSortApp
////////////////////////////////////////////////////////////////

The output from the program is simply the display of the unsorted and sorted arrays. If we put additional statements in the recMergeSort() method, we could generate a running commentary on what the program does during a sort. The following output shows how this might look for the four-item array {64, 21, 33, 70}. (You can think of this as the lower half of the array in the figure.)

entering 0-3
   will sort low half of 0-3
entering 0-1
   will sort low half of 0-1
entering 0-0
   base-case return 0-0
   will sort high half of 0-1
entering 1-1
   base-case return 1-1
   will merge halves into 0-1
return 0-1
   will sort high half of 0-3
entering 2-3
   will sort low half of 2-3
entering 2-2
   base-case return 2-2
   will sort high half of 2-3
entering 3-3
   base-case return 3-3
   will merge halves into 2-3
return 2-3
   will merge halves into 0-3
return 0-3
theArray=21 64 33 70
theArray=21 64 33 70
theArray=21 33 64 70

This is roughly the same content as would be generated by the mergeSort Workshop applet if it could sort four items. Study of this output, and comparison with the code for recMergeSort() and the figure, will reveal the details of the sorting process.

Efficiency of the mergeSort

As we noted, the mergesort runs in O(N*logN) time. How do we know this? Let's see how we can figure out the number of times a data item must be copied and the number of times it must be compared with another data item during the course of the algorithm. We assume that copying and comparing are the most time-consuming operations, and that the recursive calls and returns don't add much overhead.

Number of Copies

Consider the figure. Each cell below the top line represents an element copied from the array into the workspace. Adding up all the cells in the figure (the seven numbered steps) shows there are 24 copies necessary to sort 8 items. Log2 of 8 is 3, so 8*log2(8) equals 24. This shows that, for the case of 8 items, the number of copies is proportional to N*log2(N).

Another way to look at this calculation is that sorting 8 items requires 3 levels, each of which involves 8 copies. A level means all copies into subarrays of the same size. In the first level, there are four 2-element subarrays; in the second level, there are two 4-element subarrays; and in the third level, there is one 8-element subarray. Each level has 8 elements, so again there are 3*8 or 24 copies.
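The level-counting argument can be checked empirically. The sketch below (CopyCount is a hypothetical helper class, not part of the book's programs) instruments a mergesort to tally copies into the workspace; for N a power of 2, the tally matches N*log2(N).

```java
// CopyCount: counts copies into the workspace during a mergesort.
// Hypothetical helper; assumes n is a power of 2, as in the text.
public class CopyCount
   {
   static int copies;                           // copies into workspace

   static void sort(long[] a, int lo, int hi)
      {
      if(lo == hi) return;                      // base case: range is 1
      int mid = (lo+hi)/2;
      sort(a, lo, mid);                         // sort low half
      sort(a, mid+1, hi);                       // sort high half
      merge(a, lo, mid+1, hi);                  // merge them
      }

   static void merge(long[] a, int lowPtr, int highPtr, int hi)
      {
      int lo = lowPtr, mid = highPtr-1, n = hi-lo+1, j = 0;
      long[] ws = new long[n];                  // local workspace (for counting only)
      while(lowPtr <= mid && highPtr <= hi)
         ws[j++] = (a[lowPtr] < a[highPtr]) ? a[lowPtr++] : a[highPtr++];
      while(lowPtr <= mid)  ws[j++] = a[lowPtr++];
      while(highPtr <= hi)  ws[j++] = a[highPtr++];
      copies += n;                              // n items entered the workspace
      for(j = 0; j < n; j++) a[lo+j] = ws[j];
      }

   public static int copiesFor(int n)
      {
      long[] a = new long[n];
      for(int i = 0; i < n; i++) a[i] = n - i;  // inversely sorted data
      copies = 0;
      sort(a, 0, n-1);
      return copies;
      }

   public static void main(String[] args)
      {
      for(int n = 2; n <= 128; n *= 2)          // expect n * log2(n) each time
         System.out.println("n=" + n + "  copies=" + copiesFor(n));
      }
   }
```

For n=8 this prints copies=24, the same figure obtained by adding up the cells in the figure.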
In the figure, by considering only half the graph, you can see that 8 copies are necessary for an array of 4 items, and 2 copies are necessary for 2 items. Similar calculations provide the number of copies necessary for larger arrays. The following table summarizes this information.

Table: Number of Operations When N Is a Power of 2

N     log2(N)   Copies into      Total Copies     Comparisons   Comparisons
                Workspace        (2*N*log2(N))    (Max)         (Min)
                (N*log2(N))
2     1         2                4                1             1
4     2         8                16               5             4
8     3         24               48               17            12
16    4         64               128              49            32
32    5         160              320              129           80
64    6         384              768              321           192
128   7         896              1,792            769           448

Actually, the items are not only copied into the workspace; they're also copied back into the original array. This doubles the number of copies, as shown in the Total Copies column. The final column of the table shows comparisons, which we'll return to in a moment.

It's harder to calculate the number of copies and comparisons when N is not a power of 2, but these numbers fall between those for the nearest powers of 2.

Number of Comparisons

In the mergesort algorithm, the number of comparisons is always somewhat less than the number of copies. How much less? Assuming the number of items is a power of 2, for each individual merging operation the maximum number of comparisons is always one less than the number of items being merged, and the minimum is half the number of items being merged. You can see why this is true in the next figure, which shows two possibilities when trying to merge two arrays of four items each. In the first case, the items interleave, and seven comparisons must be made to merge them. In the second case, all the items in one array are smaller than all the items in the other, so only four comparisons need be made.

There are many merges for each sort, so we must add the comparisons for each one. Referring back to the earlier figure, you can see that seven merge operations are required to sort eight items. The number of items being merged and the resulting number of comparisons are shown in the table below.
[Figure: merging two four-item arrays. a) Worst-case scenario: the items interleave, requiring seven comparisons. b) Best-case scenario: all the items in one array are smaller than all the items in the other, requiring only four comparisons.]

Table: Comparisons Involved in Sorting 8 Items

Step number                      1    2    3    4    5    6    7    Totals
Number of items
being merged (n)                 2    2    4    2    2    4    8    24
Maximum comparisons (n-1)        1    1    3    1    1    3    7    17
Minimum comparisons (n/2)        1    1    2    1    1    2    4    12

For each merge, the maximum number of comparisons is one less than the number of items. Adding these figures for all the merges gives us a total of 17. The minimum number of comparisons is always half the number of items being merged, and adding these figures for all the merges results in 12 comparisons. Similar arithmetic results in the comparisons columns for the previous table. The actual
number of comparisons to sort a specific array depends on how the data is arranged, but it will be somewhere between the maximum and minimum values.

Eliminating Recursion

Some algorithms lend themselves to a recursive approach, some don't. As we've seen, the recursive triangle() and factorial() methods can be implemented more efficiently using a simple loop. However, various divide-and-conquer algorithms, such as mergesort, work very well as recursive routines.

Often an algorithm is easy to conceptualize as a recursive method, but in practice the recursive approach proves to be inefficient. In such cases, it's useful to transform the recursive approach into a non-recursive approach. Such a transformation can often make use of a stack.

Recursion and Stacks

There is a close relationship between recursion and stacks. In fact, most compilers implement recursion by using stacks. As we noted, when a method is called, the compiler pushes the arguments to the method and the return address (where control will go when the method returns) on the stack, and then transfers control to the method. When the method returns, it pops these values off the stack. The arguments disappear, and control returns to the return address.

Simulating a Recursive Method

In this section we'll demonstrate how any recursive solution can be transformed into a stack-based solution. Remember the recursive triangle() method from the first section in this chapter? Here it is again:

int triangle(int n)
   {
   if(n==1)
      return 1;
   else
      return( n + triangle(n-1) );
   }

We're going to break this algorithm down into its individual operations, making each operation one case in a switch statement. (You can perform a similar decomposition using goto statements in C++ and some other languages, but Java doesn't support goto.)

The switch statement is enclosed in a method called step(). Each call to step() causes one case section within the switch to be executed. Calling step() repeatedly will eventually execute all the code in the algorithm.
The triangle() method we just saw performs two kinds of operations. First, it carries out the arithmetic necessary to compute triangular numbers. This involves checking whether n is 1, and adding n to the results of previous recursive calls. However, triangle() also performs the operations necessary to manage the method itself, including transfer of control, argument access, and the return address. These operations are not visible by looking at the code; they're built into all methods. Here, roughly speaking, is what happens during a call to a method:

- When a method is called, its arguments and the return address are pushed onto a stack.
- A method can access its arguments by peeking at the top of the stack.
- When a method is about to return, it peeks at the stack to obtain the return address, and then pops both this address and its arguments off the stack and discards them.

The stackTriangle.java program contains three classes: Params, StackX, and StackTriangleApp. The Params class encapsulates the return address and the method's argument, n; objects of this class are pushed onto the stack. The StackX class is similar to those in other chapters, except that it holds objects of class Params. The StackTriangleApp class contains four methods: main(), recTriangle(), step(), and the usual getInt() method for numerical input.

The main() routine asks the user for a number, calls the recTriangle() method to calculate the triangular number corresponding to n, and displays the result. The recTriangle() method creates a StackX object and initializes codePart to 1. It then settles into a while loop, where it repeatedly calls step(). It won't exit from the loop until step() returns true by reaching case 6, its exit point. The step() method is basically a large switch statement in which each case corresponds to a section of code in the original triangle() method. The listing below shows the stackTriangle.java program.

Listing: The stackTriangle.java Program

// stackTriangle.java
// evaluates triangular numbers, stack replaces recursion
// to run this program: C>java StackTriangleApp
import java.io.*;                 // for I/O
////////////////////////////////////////////////////////////////
class Params                      // parameters to save on stack
   {
   public int n;
   public int returnAddress;
   public Params(int nn, int ra)
      {
      n = nn;
      returnAddress = ra;
      }
   }  // end class Params
////////////////////////////////////////////////////////////////
class StackX
   {
   private int maxSize;           // size of StackX array
   private Params[] stackArray;
   private int top;               // top of stack
//-------------------------------------------------------------
   public StackX(int s)           // constructor
      {
      maxSize = s;                // set array size
      stackArray = new Params[maxSize];  // create array
      top = -1;                   // no items yet
      }
//-------------------------------------------------------------
   public void push(Params p)     // put item on top of stack
      {
      stackArray[++top] = p;      // increment top, insert item
      }
//-------------------------------------------------------------
   public Params pop()            // take item from top of stack
      {
      return stackArray[top--];   // access item, decrement top
      }
//-------------------------------------------------------------
   public Params peek()           // peek at top of stack
      {
      return stackArray[top];
      }
   }  // end class StackX
////////////////////////////////////////////////////////////////
class StackTriangleApp
   {
   static int theNumber;
   static int theAnswer;
   static StackX theStack;
   static int codePart;
   static Params theseParams;
//-------------------------------------------------------------
   public static void main(String[] args) throws IOException
      {
      System.out.print("Enter a number: ");
      theNumber = getInt();
      recTriangle();
      System.out.println("Triangle="+theAnswer);
      }  // end main()
//-------------------------------------------------------------
   public static void recTriangle()
      {
      theStack = new StackX(10000);
      codePart = 1;
      while( step() == false )    // call step() until it's true
         ;                        // null statement
      }
//-------------------------------------------------------------
   public static boolean step()
      {
      switch(codePart)
         {
         case 1:                  // initial call
            theseParams = new Params(theNumber, 6);
            theStack.push(theseParams);
            codePart = 2;
            break;
         case 2:                  // method entry
            theseParams = theStack.peek();
            if(theseParams.n == 1)  // test
               {
               theAnswer = 1;
               codePart = 5;      // exit
               }
            else
               codePart = 3;      // recursive call
            break;
         case 3:                  // method call
            Params newParams = new Params(theseParams.n - 1, 4);
            theStack.push(newParams);
            codePart = 2;         // go enter method
            break;
         case 4:                  // calculation
            theseParams = theStack.peek();
            theAnswer = theAnswer + theseParams.n;
            codePart = 5;
            break;
         case 5:                  // method exit
            theseParams = theStack.peek();
            codePart = theseParams.returnAddress;  // (4 or 6)
            theStack.pop();
            break;
         case 6:                  // return point
            return true;
         }  // end switch
      return false;
      }  // end step()
//-------------------------------------------------------------
   public static String getString() throws IOException
      {
      InputStreamReader isr = new InputStreamReader(System.in);
      BufferedReader br = new BufferedReader(isr);
      String s = br.readLine();
      return s;
      }
//-------------------------------------------------------------
   public static int getInt() throws IOException
      {
      String s = getString();
      return Integer.parseInt(s);
      }
   }  // end class StackTriangleApp
////////////////////////////////////////////////////////////////

This program calculates triangular numbers, just as the triangle.java program at the beginning of the chapter did. Here's some sample output:

Enter a number: 4
Triangle=10
The figure shows how the sections of code in each case relate to the various parts of the algorithm.

[Figure: the cases and the step() method. Case 1: initial call; case 2: method entry and the test (done or not done); case 3: method call; case 4: do calculation; case 5: method exit; case 6: return point.]

The program simulates a method, but the simulated method has no name in the listing because it isn't a real Java method. Let's call this simulated method simMeth(). The initial call to simMeth() (at case 1) pushes the value entered by the user and a return address of 6 onto the stack, and moves to the entry point of simMeth() (case 2).

At its entry (case 2), simMeth() tests whether its argument is 1. It accesses the argument by peeking at the top of the stack. If the argument is 1, this is the base case,
and control goes to simMeth()'s exit (case 5). If not, it calls itself recursively (case 3). This recursive call consists of pushing n-1 and a return address of 4 onto the stack, and going to the method entry at case 2.

On the return from the recursive call, simMeth() adds its argument to the value returned from the call (case 4). Finally, it exits (case 5). When it exits, it pops the last Params object off the stack; this information is no longer needed.

The return address given in the initial call was 6, so case 6 is the place where control goes when the method returns. This code returns true to let the while loop in recTriangle() know that the loop is over.

Note that in this description of simMeth()'s operation we use terms like argument, recursive call, and return address to mean simulations of these features, not the normal Java versions.

If you inserted some output statements in each case to see what simMeth() was doing, you could arrange for output like this:

Enter a number: 4
case 1. theAnswer=0  Stack: (4, 6)
case 2. theAnswer=0  Stack: (4, 6)
case 3. theAnswer=0  Stack: (4, 6) (3, 4)
case 2. theAnswer=0  Stack: (4, 6) (3, 4)
case 3. theAnswer=0  Stack: (4, 6) (3, 4) (2, 4)
case 2. theAnswer=0  Stack: (4, 6) (3, 4) (2, 4)
case 3. theAnswer=0  Stack: (4, 6) (3, 4) (2, 4) (1, 4)
case 2. theAnswer=1  Stack: (4, 6) (3, 4) (2, 4) (1, 4)
case 5. theAnswer=1  Stack: (4, 6) (3, 4) (2, 4)
case 4. theAnswer=3  Stack: (4, 6) (3, 4) (2, 4)
case 5. theAnswer=3  Stack: (4, 6) (3, 4)
case 4. theAnswer=6  Stack: (4, 6) (3, 4)
case 5. theAnswer=6  Stack: (4, 6)
case 4. theAnswer=10 Stack: (4, 6)
case 5. theAnswer=10 Stack:
case 6. theAnswer=10 Stack:
Triangle=10

The case number shows what section of code is being executed. The contents of the stack (consisting of Params objects containing n followed by a return address) are also shown. The simMeth() method is entered four times (case 2) and returns four times (case 5). Only when it starts returning does theAnswer begin to accumulate the results of the calculations.
What does this prove? In stackTriangle.java we have a program that more or less systematically transforms a program that uses recursion into a program that uses a stack. This suggests that such a transformation is possible for any program that uses recursion, and in fact this is the case.

With some additional work, you can systematically refine the code we show here, simplifying it and even eliminating the switch statement entirely to make the code more efficient. In practice, however, it's usually more practical to rethink the algorithm from the beginning, using a stack-based approach instead of a recursive approach. The listing below shows what happens when we do that with the triangle() method.

Listing: The stackTriangle2.java Program

// stackTriangle2.java
// evaluates triangular numbers, stack replaces recursion
// to run this program: C>java StackTriangle2App
import java.io.*;                 // for I/O
////////////////////////////////////////////////////////////////
class StackX
   {
   private int maxSize;           // size of stack array
   private int[] stackArray;
   private int top;               // top of stack
//-------------------------------------------------------------
   public StackX(int s)           // constructor
      {
      maxSize = s;
      stackArray = new int[maxSize];
      top = -1;
      }
//-------------------------------------------------------------
   public void push(int p)        // put item on top of stack
      { stackArray[++top] = p; }
//-------------------------------------------------------------
   public int pop()               // take item from top of stack
      { return stackArray[top--]; }
//-------------------------------------------------------------
   public int peek()              // peek at top of stack
      { return stackArray[top]; }
//-------------------------------------------------------------
   public boolean isEmpty()       // true if stack is empty
      { return (top == -1); }
   }  // end class StackX
////////////////////////////////////////////////////////////////
class StackTriangle2App
   {
   static int theNumber;
   static int theAnswer;
   static StackX theStack;

   public static void main(String[] args) throws IOException
      {
      System.out.print("Enter a number: ");
      theNumber = getInt();
      stackTriangle();
      System.out.println("Triangle="+theAnswer);
      }  // end main()
//-------------------------------------------------------------
   public static void stackTriangle()
      {
      theStack = new StackX(10000);  // make a stack
      theAnswer = 0;                 // initialize answer
      while(theNumber > 0)           // until n is 0,
         {
         theStack.push(theNumber);   // push value
         --theNumber;                // decrement value
         }
      while( !theStack.isEmpty() )   // until stack empty,
         {
         int newN = theStack.pop();  // pop value,
         theAnswer += newN;          // add to answer
         }
      }
//-------------------------------------------------------------
   public static String getString() throws IOException
      {
      InputStreamReader isr = new InputStreamReader(System.in);
      BufferedReader br = new BufferedReader(isr);
      String s = br.readLine();
      return s;
      }
//-------------------------------------------------------------
   public static int getInt() throws IOException
      {
      String s = getString();
      return Integer.parseInt(s);
      }
   }  // end class StackTriangle2App
////////////////////////////////////////////////////////////////

Here two short while loops in the stackTriangle() method substitute for the entire step() method of the stackTriangle.java program. Of course, in this program you can see by inspection that you can eliminate the stack entirely and use a simple loop. However, in more complicated algorithms the stack must remain.

Often you'll need to experiment to see whether a recursive method, a stack-based approach, or a simple loop is the most efficient (or practical) way to handle a particular situation.

Some Interesting Recursive Applications

Let's look briefly at some other situations in which recursion is useful. You will see from the diversity of these examples that recursion can pop up in unexpected places. We'll examine three problems: raising a number to a power, fitting items into a knapsack, and choosing members of a mountain-climbing team. We'll explain the concepts and leave the implementations as exercises.

Raising a Number to a Power

The more sophisticated pocket calculators allow you to raise a number to an arbitrary power. They usually have a key labeled something like x^y, where the circumflex indicates that x is raised to the y power. How would you do this calculation if your calculator lacked this key?

You might assume you would need to multiply x by itself y times. That is, if x was 2 and y was 8 (2^8), you would carry out the arithmetic for 2*2*2*2*2*2*2*2. However, for large values of y, this approach might prove tedious. Is there a quicker way?

One solution is to rearrange the problem so you multiply by multiples of 2 whenever possible, instead of by 2. Take 2^8 as an example. Eventually, we must involve eight 2s in the multiplication process. Let's say we start with 2*2=4. We've used up two of the
2s, but there are still six to go. However, we now have a new number to work with: 4. So we try 4*4=16. This uses four 2s (because each 4 is two 2s multiplied together). We need to use up four more 2s, but now we have 16 to work with, and 16*16=256 uses exactly eight 2s (because each 16 has four 2s). So we've found the answer to 2^8 with only three multiplications instead of seven. That's O(log N) time instead of O(N).

Can we make this process into an algorithm that a computer can execute? The scheme is based on the mathematical equality x^y = (x^2)^(y/2). In our example, 2^8 = (2^2)^(8/2), or 2^8 = (2^2)^4. This is true because raising a power to another power is the same as multiplying the powers.

However, we're assuming our computer can't raise a number to a power, so we can't handle (2^2)^4. Let's see if we can transform this into an expression that involves only multiplication. The trick is to start by substituting a new variable for 2^2. Let's say that 2^2=a. Then 2^8 equals (2^2)^4, which is a^4. However, according to the original equality, a^4 can be written (a^2)^2, so 2^8 = (a^2)^2. Again we substitute a new variable for a^2, say a^2=c; then (a^2)^2 can be written c^2, which also equals 2^8. Now we have a problem we can handle with simple multiplication: c times c.

You can imbed this scheme in a recursive method, which we'll call power(), for calculating powers. The arguments are x and y, and the method returns x^y. We don't need to worry about variables like a and c anymore, because x and y get new values each time the method calls itself. Its arguments are x*x and y/2. For the arguments x=2 and y=8, the sequence of arguments and return values would be

x=2, y=8
x=4, y=4
x=16, y=2
x=256, y=1
returning 256, x=256, y=1
returning 256, x=16, y=2
returning 256, x=4, y=4
returning 256, x=2, y=8

When y is 1, we return x. The answer, 256, is passed unchanged back up the sequence of methods.

We've shown an example in which y is an even number throughout the entire sequence of divisions. This will not usually be the case. Here's how to revise the algorithm to deal with the situation where y is odd: use integer division on the way down, and don't worry about the remainder when dividing y by 2. However, during the
return process, whenever y is an odd number, do an additional multiplication by x. Here's the sequence for 3^18:

x=3, y=18
x=9, y=9
x=81, y=4
x=6561, y=2
x=43046721, y=1
returning 43046721, x=43046721, y=1
returning 43046721, x=6561, y=2
returning 43046721, x=81, y=4
returning 387420489, x=9, y=9     // y is odd, so multiply by x
returning 387420489, x=3, y=18

The Knapsack Problem

The knapsack problem is a classic in computer science. In its simplest form it involves trying to fit items of different weights into a knapsack so that the knapsack ends up with a specified total weight. You don't need to fit in all the items.

For example, suppose you want your knapsack to weigh exactly 20 pounds, and you have five items, with weights of 11, 8, 7, 6, and 5 pounds. For small numbers of items, humans are pretty good at solving this problem by inspection, so you can probably figure out that only the 8, 7, and 5 combination of items adds up to 20.

If we want a computer to solve this problem, we'll need to give it more detailed instructions. Here's the algorithm:

1. If at any point in this process the sum of the items you selected adds up to the target, you're done.
2. Start by selecting the first item. The remaining items must add up to the knapsack's target weight minus the first item; this is a new target weight.
3. Try, one by one, each of the possible combinations of the remaining items. Notice, however, that you don't really need to try all the combinations, because whenever the sum of the items is more than the target weight, you can stop adding items.
4. If none of the combinations work, discard the first item, and start the whole process again with the second item.
5. Continue this with the third item and so on until you've tried all the combinations, at which point you know there is no solution.
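The power-raising scheme described above, which the text leaves as an exercise, can be sketched in a few lines. PowerDemo and power() are hypothetical names, not the book's code; the base case and the odd-y adjustment follow the traced sequences exactly.

```java
// PowerDemo: recursive x^y using the equality x^y = (x*x)^(y/2).
// Hypothetical sketch; the book leaves the implementation as an exercise.
public class PowerDemo
   {
   public static long power(long x, int y)
      {
      if(y == 1)                       // base case: x^1 = x
         return x;
      long result = power(x*x, y/2);   // integer division on the way down
      if(y % 2 == 1)                   // y odd: one extra multiplication by x
         result *= x;
      return result;
      }

   public static void main(String[] args)
      {
      System.out.println(power(2, 8));    // prints 256
      System.out.println(power(3, 18));   // prints 387420489
      }
   }
```

Each call halves y, so the recursion depth (and the number of multiplications) is proportional to log y rather than y. Note that this sketch assumes y is at least 1.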
In the example just described, start with 11. Now we want the remaining items to add up to 9 (20 minus 11). Of these, we start with 8, which is too small. Now we want the remaining items to add up to 1 (9 minus 8). We start with 7, but that's bigger than 1, so we try 6 and then 5, which are also too big. We've run out of items, so we know that any combination that starts with 11 and 8 won't add up to 20. Next we try 7, so now we're looking for a target of 2 (9 minus 7). We continue in the same way, as summarized here:

Items               Comments
=========================================
11                  Target is 9: too small
11, 8               Target is 1: too small
11, 8, 7            Too big
11, 8, 6            Too big
11, 8, 5            Too big; no more items
11, 7               Target is 2: too small
11, 7, 6            Too big
11, 7, 5            Too big; no more items
11, 6               Target is 3: too small
11, 6, 5            Too big; no more items
11, 5               Target is 4: too small; no more items
8                   Target is 12: too small
8, 7                Target is 5: too small
8, 7, 6             Too big
8, 7, 5             Just right! Success

As you may recognize, a recursive routine can pick the first item and, if the item is smaller than the target, call itself with a new target to investigate the sums of all the remaining items' combinations.

Combinations: Picking a Team

In mathematics, a combination is a selection of things in which their order doesn't matter. For example, suppose there is a group of five mountain climbers, named A, B, C, D, and E. From this group you want to select a team of three to scale steep and icy Mount Anaconda. However, you're worried about how the team members will get along, so you decide to list all the possible teams; that is, all the possible combinations of three climbers. But then you think it would be nice to have a computer program print out all the combinations for you. Such a program would show you the 10 possible combinations:

ABC, ABD, ABE, ACD, ACE, ADE, BCD, BCE, BDE, CDE
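The knapsack search traced above, which the book leaves as an exercise, can be sketched as a recursive routine. KnapsackDemo and fits() are hypothetical names; the two recursive calls correspond to "select the first remaining item" and "discard it and try the rest".

```java
// KnapsackDemo: recursive exact-weight knapsack sketch.
// Hypothetical; the book leaves the implementation as an exercise.
public class KnapsackDemo
   {
   // Returns true if some subset of weights[start..] sums to target.
   public static boolean fits(int[] weights, int start, int target)
      {
      if(target == 0)                  // selected items add up to the target
         return true;
      if(target < 0 || start == weights.length)
         return false;                 // overshot the target, or out of items
      // Either select the first remaining item (giving a new, smaller
      // target) or discard it and try combinations of the rest.
      return fits(weights, start+1, target - weights[start])
          || fits(weights, start+1, target);
      }

   public static void main(String[] args)
      {
      int[] weights = {11, 8, 7, 6, 5};
      System.out.println(fits(weights, 0, 20));   // prints true (8+7+5)
      System.out.println(fits(weights, 0, 4));    // prints false
      }
   }
```

The target < 0 check implements the observation in step 3 above: as soon as the selected items exceed the target, there is no point adding more.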
How would you write such a program? It turns out there's an elegant recursive solution. It involves dividing these combinations into two groups: those that begin with A and those that don't. Suppose we abbreviate the idea of 3 people selected from a group of 5 as (5,3). Let's say n is the size of the group and k is the size of a team. A theorem says that

(n, k) = (n-1, k-1) + (n-1, k)

For our example of 3 people selected from a group of 5, we have

(5, 3) = (4, 2) + (4, 3)

We've broken a large problem into two smaller ones. Instead of selecting from a group of 5, we're selecting twice from a group of 4: first, all the ways to select 2 people from a group of 4; then, all the ways to select 3 people from a group of 4.

There are 6 ways to select 2 people from a group of 4. In the (4, 2) term, which we'll call the left term, these combinations are

BC, BD, BE, CD, CE, DE

A is the missing group member, so to make three-person teams we precede these combinations with A:

ABC, ABD, ABE, ACD, ACE, ADE

There are 4 ways to select 3 people from a group of 4. In the (4, 3) term, the right term, we have

BCD, BCE, BDE, CDE

When these 4 combinations from the right term are added to the 6 from the left term, we get the 10 combinations for (5, 3).

You can apply the same decomposition process to each of the groups of 4. For example, (4, 2) is (3, 1) added to (3, 2). As you can see, this is a natural place to apply recursion.

You can think of this problem as a tree, with (5,3) on the top row, (4,2) and (4,3) on the next row, and so on, where the nodes in the tree correspond to recursive function calls. The figure shows what this looks like for the (5,3) example. The base cases are combinations that make no sense: those with a 0 for either number, and those where the team size is greater than the group size. The combination (1,1) is valid, but there's no point trying to break it down further. In the figure, dotted lines show the base cases; you return rather than following them.
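The left-term/right-term decomposition just described can be sketched recursively. TeamDemo and showTeams() are hypothetical names (the book leaves the implementation as an exercise); the "left" call adds the current member to the running sequence and shrinks the team size, while the "right" call skips the member.

```java
// TeamDemo: lists all k-person teams from a group, using the theorem
// (n,k) = (n-1,k-1) + (n-1,k). Hypothetical sketch; the book leaves
// the implementation as an exercise.
public class TeamDemo
   {
   static StringBuilder out = new StringBuilder();

   // group: members not yet considered; sequence: members chosen so far
   static void showTeams(String group, int k, String sequence)
      {
      if(k == 0)                        // team complete: display it
         {
         out.append(sequence).append(' ');
         return;
         }
      if(group.length() < k)            // base case: too few members left
         return;
      // left term: the first member joins the team
      showTeams(group.substring(1), k-1, sequence + group.charAt(0));
      // right term: the first member is left out
      showTeams(group.substring(1), k, sequence);
      }

   public static String teams(String group, int k)
      {
      out.setLength(0);
      showTeams(group, k, "");
      return out.toString().trim();
      }

   public static void main(String[] args)
      {
      System.out.println(teams("ABCDE", 3));
      // prints ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
      }
   }
```

Running it on the five climbers produces exactly the 10 combinations listed above, with the A-teams (the left term) first and the teams without A (the right term) last.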
[Figure: picking a team of 3 from a group of 5, showing the tree of recursive calls from (5,3) down through combinations such as ABC, ABD, ABE, ACD, ACE, ADE, BCD, BCE, and BDE.]

The recursion depth corresponds to the group members: the node on the top row represents group member A, the two nodes on the next row represent group member B, and so on. If there are 5 group members, you'll have 5 levels.

As you descend the tree, you need to remember the sequence of members you visit. Here's how to do that: whenever you make a call to a left term, you record the node you're leaving by adding its letter to a sequence. These left calls, and the letters to add to the sequence, are shown by the darker lines in the figure. You'll need to roll the sequence back up as you return.

To record all the combinations, you can display them as you go along. You don't display anything when making left calls. However, when you make calls to the right, you check the sequence; if you're at a valid node, and adding one member will complete the team, then add the node to the sequence and display the complete team.

Summary

- A recursive method calls itself repeatedly, with different argument values each time.
- Some value of its arguments causes a recursive method to return without calling itself. This is called the base case.
- When the innermost instance of a recursive method returns, the process "unwinds" by completing pending instances of the method, going from the latest back to the original call.
- A triangular number is the sum of a number and all numbers smaller than it. (Number means integer in this context.) For example, the triangular number of 4 is 10, because 4+3+2+1 = 10.
- The factorial of a number is the product of the number and all numbers smaller than it. For example, the factorial of 4 is 4*3*2*1, or 24.
- Both triangular numbers and factorials can be calculated using either a recursive method or a simple loop.
- The anagrams of a word (all possible arrangements of its letters) can be found recursively by repeatedly rotating all its letters and anagramming the rightmost n-1 of them.
- A binary search can be carried out recursively by checking which half of a sorted range the search key is in, and then doing the same thing with that half.
- The Towers of Hanoi puzzle consists of three towers and an arbitrary number of rings.
- The Towers of Hanoi puzzle can be solved recursively by moving all but the bottom disk of a subtree to an intermediate tower, moving the bottom disk to the destination tower, and finally moving the subtree to the destination.
- Merging two sorted arrays means creating a third array that contains all the elements from both arrays, in sorted order.
- In mergesort, 1-element subarrays of a larger array are merged into 2-element subarrays, 2-element subarrays are merged into 4-element subarrays, and so on until the entire array is sorted.
- Mergesort requires O(N*logN) time.
- Mergesort requires workspace equal in size to the original array.
- For triangular numbers, factorials, anagrams, and the binary search, the recursive method contains only one call to itself. (There are two shown in the code for the binary search, but only one is used on any given pass through the method's code.)
- For the Towers of Hanoi and mergesort, the recursive method contains two calls to itself.
- Any operation that can be carried out with recursion can be carried out with a stack.
- A recursive approach may be inefficient. If so, it can sometimes be replaced with a simple loop or a stack-based approach.
Questions

These questions are intended as a self-test for readers. Answers may be found in the appendix.

1. If the user enters a number n in the triangle.java program, what is the maximum number of "copies" of the triangle() method (actually just copies of its argument) that exist at any one time?

2. Where are the copies of the argument, mentioned in question 1, stored?
   a. in a variable in the triangle() method
   b. in a field of the TriangleApp class
   c. in a variable of the getString() method
   d. on a stack

3. Assume the user enters n as in question 1. What is the value of the argument when the triangle() method first returns a value other than 1?

4. Assume the same situation as in question 3. What is the value of the argument when the triangle() method is about to return to main()?

5. True or false: in the triangle() method, the return values are stored on the stack.

6. In the anagram.java program, at a certain depth of recursion, a version of the doAnagram() method is working with the string "led". When this method calls a new version of itself, what letters will the new version be working with?

7. We've seen that recursion can take the place of a loop, as in the loop-oriented orderedArray.java program and the recursive binarySearch.java program. Which of the following is not true?
   a. Both programs divide the range repeatedly in half.
   b. If the key is not found, the loop version returns because the range bounds cross, but the recursive version returns because it reaches the bottom recursion level.
   c. If the key is found, the loop version returns from the entire method, whereas the recursive version returns from only one level of recursion.
   d. In the recursive version the range to be searched must be specified in the arguments, while in the loop version it need not be.
8. In the recFind() method in the binarySearch.java program, what takes the place of the loop in the non-recursive version?
   a. the recFind() method
   b. arguments to recFind()
   c. recursive calls to recFind()
   d. the call from main() to recFind()

9. The binarySearch.java program is an example of the _________ approach to solving a problem.

10. What gets smaller as you make repeated recursive calls in the recFind() method?

11. What becomes smaller with repeated recursive calls in the towers.java program?

12. The algorithm in the towers.java program involves
    a. "trees" that are data storage devices.
    b. secretly putting small disks under large disks.
    c. changing which columns are the source and destination.
    d. moving one small disk and then a stack of larger disks.

13. Which is not true about the merge() method in the merge.java program?
    a. Its algorithm can handle arrays of different sizes.
    b. It must search the target array to find where to put the next item.
    c. It is not recursive.
    d. It continuously takes the smallest item irrespective of what array it's in.

14. The disadvantage of mergesort is that
    a. it is not recursive.
    b. it uses more memory.
    c. although faster than the insertion sort, it is much slower than quicksort.
    d. it is complicated to implement.

15. Besides a loop, a _________ can often be used instead of recursion.
Experiments

Carrying out these experiments will help to provide insights into the topics covered in the chapter. No programming is involved.

1. In the triangle.java program, remove the code for the base case (the if(n==1), the return 1;, and the else). Then run the program and see what happens.

2. Use the Towers Workshop applet in manual mode to solve the puzzle with seven or more disks.

3. Rewrite the main() part of mergeSort.java so you can fill the array with hundreds of thousands of random numbers. Run the program to sort these numbers and compare its speed with the sorts in the "Simple Sorting" chapter.

Programming Projects

Writing programs that solve the Programming Projects helps to solidify your understanding of the material and demonstrates how the chapter's concepts are applied. (As noted in the Introduction, qualified instructors may obtain completed solutions to the Programming Projects on the publisher's Web site.)

1. Suppose you buy a budget-priced pocket PC and discover that the chip inside can't do multiplication, only addition. You program your way out of this quandary by writing a recursive method, mult(), that performs multiplication of x and y by adding x to itself y times. Its arguments are x and y, and its return value is the product of x and y. Write such a method and a main() program to call it. Does the addition take place when the method calls itself or when it returns?

2. In the "Binary Trees" chapter, we'll look at binary trees, where every branch has (potentially) exactly two sub-branches. If we draw a binary tree on the screen using X characters, we might have one X on the top row, two on the next row, then four, and so on. Here's what that looks like for a tree 32 characters wide:

[Tree diagram drawn with X and dash characters: one X on the top line, two on the next, then four, eight, and sixteen X's on the bottom line, with dashes marking the horizontal ranges.]
(Note that the bottom line should be shifted a half character-width right, but there's nothing we can do about that with character-mode graphics.)

You can draw this tree using a recursive makeBranches() method with arguments left and right, which are the endpoints of a horizontal range. When you first enter the routine, left is 0 and right is the number of characters (including dashes) in all the lines, minus 1. You draw an X in the center of this range. Then the method calls itself twice: once for the left half of the range and once for the right half. Return when the range gets too small. You will probably want to put all the dashes and X's into an array and display the array all at once, perhaps with a display() method.

Write a main() program to draw the tree by calling makeBranches() and display(). Allow main() to determine the line length of the display (32 or whatever). Ensure that the array that holds the characters for display is no larger than it needs to be. What is the relationship of the number of lines (five in the picture here) to the line width?

3. Implement the recursive approach to raising a number to a power, as described in the "Raising a Number to a Power" section near the end of this chapter. Write the recursive power() function and a main() routine to test it.

4. Write a program that solves the knapsack problem for an arbitrary knapsack capacity and series of weights. Assume the weights are stored in an array. Hint: the arguments to the recursive knapsack() function are the target weight and the array index where the remaining items start.

5. Implement a recursive approach to showing all the teams that can be created from a group (n things taken k at a time). Write the recursive showTeams() method and a main() method to prompt the user for the group size and the team size, to provide arguments for showTeams(), which then displays all the possible combinations.
Advanced Sorting

In this chapter: Shellsort; Partitioning; Quicksort; Radix Sort

We discussed simple sorting in the aptly titled "Simple Sorting" chapter. The sorts described there, the bubble, selection, and insertion sorts, are easy to implement but are rather slow. In the "Recursion" chapter we described mergesort. It runs much faster than the simple sorts but requires twice as much space as the original array; this is often a serious drawback.

This chapter covers two advanced approaches to sorting: Shellsort and quicksort. These sorts both operate much faster than the simple sorts: the Shellsort in about O(N*(logN)²) time, and quicksort in O(N*logN) time. Neither of these sorts requires a large amount of extra space, as mergesort does. The Shellsort is almost as easy to implement as mergesort, while quicksort is the fastest of all the general-purpose sorts. We'll conclude the chapter with a brief mention of the radix sort, an unusual and interesting approach to sorting.

We'll examine the Shellsort first. Quicksort is based on the idea of partitioning, so we'll then examine partitioning separately, before examining quicksort itself.

Shellsort

The Shellsort is named for Donald L. Shell, the computer scientist who discovered it in 1959. It's based on the insertion sort, but adds a new feature that dramatically improves the insertion sort's performance.

The Shellsort is good for medium-sized arrays, perhaps up to a few thousand items, depending on the particular implementation. It's not quite as fast as quicksort and other O(N*logN) sorts, so it's not optimum for very large files. However, it's much faster than the O(N²) sorts like the selection sort and the insertion sort, and it's very easy to implement: the code is short and simple.
The worst-case performance is not significantly worse than the average performance. (We'll see later in this chapter that the worst-case performance for quicksort can be much worse unless precautions are taken.) Some experts (see Sedgewick in the appendix, "Further Reading") recommend starting with a Shellsort for almost any sorting project and changing to a more advanced sort, like quicksort, only if Shellsort proves too slow in practice.

Insertion Sort: Too Many Copies

Because Shellsort is based on the insertion sort, you might want to review the section titled "Insertion Sort" in the "Simple Sorting" chapter. Recall that partway through the insertion sort the items to the left of a marker are internally sorted (sorted among themselves) and items to the right are not. The algorithm removes the item at the marker and stores it in a temporary variable. Then, beginning with the item to the left of the newly vacated cell, it shifts the sorted items right one cell at a time, until the item in the temporary variable can be reinserted in sorted order.

Here's the problem with the insertion sort. Suppose a small item is on the far right, where the large items should be. To move this small item to its proper place on the left, all the intervening items (between the place where it is and where it should be) must be shifted one space right. This step takes close to N copies, just for one item. Not all the items must be moved a full N spaces, but the average item must be moved N/2 spaces, which takes N times N/2 shifts for a total of N²/2 copies. Thus, the performance of insertion sort is O(N²).

This performance could be improved if we could somehow move a smaller item many spaces to the left without shifting all the intermediate items individually.

N-Sorting

The Shellsort achieves these large shifts by insertion-sorting widely spaced elements. After they are sorted, it sorts somewhat less widely spaced elements, and so on. The spacing between elements for these sorts is called the increment and is traditionally represented by the letter h. The figure below shows the first step in the process of sorting a 10-element array with an increment of 4. Here the elements 0 and 4 are sorted.

After 0 and 4 are sorted, the algorithm shifts over one cell and sorts 1 and 5. This process continues until all the elements have been 4-sorted, which means that all items spaced four cells apart are sorted among themselves. The process is shown, using a more compact visual metaphor, in the figure of the complete 4-sort. After the complete 4-sort, the array can be thought of as comprising four subarrays: (0,4,8), (1,5,9), (2,6), and (3,7), each of which is completely sorted. These subarrays are interleaved but otherwise independent.
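The copy problem described above can be made concrete with an instrumented insertion sort. This is a sketch, not one of the book's listings; the copies counter and the class name CopyCount are my additions. On an inversely sorted 8-item array, every item shifts the maximum distance, giving N*(N-1)/2 = 28 copies:

```java
// Insertion sort instrumented to count copies (shifts).
public class CopyCount {
    static int copies;

    static void insertionSort(long[] a) {
        for (int out = 1; out < a.length; out++) {
            long temp = a[out];            // remove marked item
            int in = out;
            while (in > 0 && a[in - 1] >= temp) {
                a[in] = a[in - 1];         // shift sorted item right
                copies++;
                in--;
            }
            a[in] = temp;                  // reinsert marked item
        }
    }

    public static void main(String[] args) {
        long[] worst = {8, 7, 6, 5, 4, 3, 2, 1};  // inversely sorted
        insertionSort(worst);
        System.out.println(copies);        // 28 = 8*7/2 shifts for N=8
    }
}
```

For random data the count would be roughly half this worst-case figure, matching the N²-proportional cost discussed above.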
[Figure: 4-sorting elements 0 and 4, showing the unsorted and sorted states.]

Notice that, in this particular example, at the end of the 4-sort no item is more than two cells from where it would be if the array were completely sorted. This is what is meant by an array being "almost" sorted and is the secret of the Shellsort. By creating interleaved, internally sorted sets of items, we minimize the amount of work that must be done to complete the sort.

Now, as we noted in the "Simple Sorting" chapter, the insertion sort is very efficient when operating on an array that's almost sorted. If it needs to move items only one or two cells to sort the file, it can operate in almost O(N) time. Thus, after the array has been 4-sorted, we can 1-sort it using the ordinary insertion sort. The combination of the 4-sort and the 1-sort is much faster than simply applying the ordinary insertion sort without the preliminary 4-sort.

Diminishing Gaps

We've shown an initial interval, or gap, of 4 cells for sorting a 10-cell array. For larger arrays the interval should start out much larger. The interval is then repeatedly reduced until it becomes 1.

For instance, an array of 1,000 items might be 364-sorted, then 121-sorted, then 40-sorted, then 13-sorted, then 4-sorted, and finally 1-sorted. The sequence of numbers used to generate the intervals (in this example 364, 121, 40, 13, 4, 1) is called the interval sequence or gap sequence. The particular interval sequence shown here,
attributed to Knuth (see the appendix), is a popular one. In reversed form, starting from 1, it's generated by the recursive expression

h = 3*h + 1

where the initial value of h is 1. The first two columns of the table below show how this formula generates the sequence.

[Figure: the complete 4-sort.]

Knuth's Interval Sequence

h        3*h + 1     (h - 1) / 3
1        4
4        13          1
13       40          4
40       121         13
121      364         40
364      1093        121
1093     3280        364
There are other approaches to generating the interval sequence; we'll return to this issue later. First, we'll explore how the Shellsort works using Knuth's sequence.

In the sorting algorithm, the sequence-generating formula is first used in a short loop to figure out the initial gap. A value of 1 is used for the first value of h, and the h = 3*h + 1 formula is applied to generate the sequence 1, 4, 13, 40, 121, 364, and so on. This process ends when the gap is larger than the array. For a 1,000-element array, the seventh number in the sequence, 1093, is too large. Thus, we begin the sorting process with the sixth number, creating a 364-sort. Then, each time through the outer loop of the sorting routine, we reduce the interval using the inverse of the formula previously given:

h = (h - 1) / 3

This is shown in the third column of the table. This inverse formula generates the reverse sequence 364, 121, 40, 13, 4, 1. Each of these numbers is used to h-sort the array. When the array has been 1-sorted, the algorithm is done.

The Shellsort Workshop Applet

You can use the Shellsort Workshop applet to see how this sort works. The figure shows the applet after all the bars have been 4-sorted, just as the 1-sort begins.

[Figure: the Shellsort Workshop applet.]
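The two loops just described, growing h with h = 3*h + 1 and then shrinking it with h = (h - 1) / 3, can be sketched on their own. The class and method names here are my own, for illustration:

```java
// Sketch of Knuth's gap-sequence computation for a given array size.
public class KnuthGaps {
    static java.util.List<Integer> gapsFor(int nElems) {
        int h = 1;
        while (h <= nElems / 3)          // find largest usable gap
            h = h * 3 + 1;               // 1, 4, 13, 40, 121, 364, ...
        java.util.List<Integer> gaps = new java.util.ArrayList<>();
        while (h > 0) {                  // descending sequence used by the sort
            gaps.add(h);
            h = (h - 1) / 3;
        }
        return gaps;
    }

    public static void main(String[] args) {
        System.out.println(gapsFor(1000));  // [364, 121, 40, 13, 4, 1]
    }
}
```

For a 1,000-element array this yields 364, 121, 40, 13, 4, 1, matching the example in the text: 1093 would exceed the array, so the sort begins with a 364-sort.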
As you single-step through the algorithm, you'll notice that the explanation we gave in the preceding discussion is slightly simplified. The sequence for the 4-sort is not actually (0,4,8), (1,5,9), (2,6), (3,7). Instead, the first two elements of each group of three are sorted first, then the first two elements of the second group, and so on. Once the first two elements of all the groups are sorted, the algorithm returns and sorts three-element groups. The actual sequence is (0,4), (1,5), (2,6), (3,7), (0,4,8), (1,5,9).

It might seem more obvious for the algorithm to 4-sort each complete subarray first, (0,4), (0,4,8), (1,5), (1,5,9), (2,6), (3,7), but the algorithm handles the array indices more efficiently using the first scheme.

The Shellsort is actually not very efficient with only 10 items, making almost as many swaps and comparisons as the insertion sort. However, with 100 bars the improvement becomes significant.

It's instructive to run the Workshop applet starting with 100 inversely sorted bars. (Remember that, as in the "Simple Sorting" chapter, the first press of New creates a random sequence of bars, while the second press creates an inversely sorted sequence.) The figure below shows how the bars look after the first pass, when the array has been completely 40-sorted. The next figure shows the situation after the following pass, when it is 13-sorted. With each new value of h, the array becomes more nearly sorted.

[Figure: after the 40-sort.]

Why is the Shellsort so much faster than the insertion sort, on which it's based? When h is large, the number of items per pass is small, and items move long distances. This is very efficient. As h grows smaller, the number of items per pass increases, but the items are already closer to their final sorted positions, which is
more efficient for the insertion sort. It's the combination of these trends that makes the Shellsort so effective.

[Figure: after the 13-sort.]

Notice that later sorts (small values of h) don't undo the work of earlier sorts (large values of h). An array that has been 40-sorted remains 40-sorted after a 13-sort, for example. If this weren't so, the Shellsort couldn't work.

Java Code for the Shellsort

The Java code for the Shellsort is scarcely more complicated than for the insertion sort. Starting with the insertion sort, you substitute h for 1 in appropriate places and add the formula to generate the interval sequence. We've made shellSort() a method in the ArraySh class, a version of the Array classes from the "Arrays" chapter. The listing below shows the complete shellSort.java program.

// shellSort.java
// demonstrates shell sort
// to run this program: C>java ShellSortApp
////////////////////////////////////////////////////////////////
class ArraySh
   {
   private long[] theArray;         // ref to array theArray
   private int nElems;              // number of data items
//--------------------------------------------------------------
   public ArraySh(int max)          // constructor
      {
      theArray = new long[max];     // create the array
      nElems = 0;                   // no items yet
      }
//--------------------------------------------------------------
   public void insert(long value)   // put element into array
      {
      theArray[nElems] = value;     // insert it
      nElems++;                     // increment size
      }
//--------------------------------------------------------------
   public void display()            // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)   // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void shellSort()
      {
      int inner, outer;
      long temp;

      int h = 1;                    // find initial value of h
      while(h <= nElems/3)
         h = h*3 + 1;               // (1, 4, 13, 40, 121, ...)

      while(h > 0)                  // decreasing h, until h=1
         {
                                    // h-sort the file
         for(outer=h; outer<nElems; outer++)
            {
            temp = theArray[outer];
            inner = outer;
                                    // one subpass (eg 0, 4, 8)
            while(inner > h-1 && theArray[inner-h] >= temp)
               {
               theArray[inner] = theArray[inner-h];
               inner -= h;
               }
            theArray[inner] = temp;
            }  // end for
         h = (h-1) / 3;             // decrease h
         }  // end while(h>0)
      }  // end shellSort()
//--------------------------------------------------------------
   }  // end class ArraySh
////////////////////////////////////////////////////////////////
class ShellSortApp
   {
   public static void main(String[] args)
      {
      int maxSize = 10;             // array size
      ArraySh arr;
      arr = new ArraySh(maxSize);   // create the array

      for(int j=0; j<maxSize; j++)  // fill array with
         {                          // random numbers
         long n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                // display unsorted array
      arr.shellSort();              // shell sort the array
      arr.display();                // display sorted array
      }  // end main()
   }  // end class ShellSortApp

In main() we create an object of type ArraySh, able to hold 10 items, fill it with random data, display it, Shellsort it, and display it again. In the output, the first A= line shows the random values unsorted, and the second shows the same values in ascending order.

You can change maxSize to higher numbers, but don't go too high; 10,000 items take a fraction of a minute to sort.

The Shellsort algorithm, although it's implemented in just a few lines, is not simple to follow. To see the details of its operation, step through a 10-item sort with the Workshop applet, comparing the messages generated by the applet with the code in the shellSort() method.
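For experimentation outside the ArraySh class, the same sort can be sketched as a standalone method on a long[]. The name shellSortArray is my own; the loop logic mirrors the shellSort() method above:

```java
// Standalone sketch of the Shellsort using Knuth's gap sequence.
public class ShellSortDemo {
    static void shellSortArray(long[] a) {
        int h = 1;
        while (h <= a.length / 3)
            h = h * 3 + 1;                 // Knuth sequence: 1, 4, 13, 40, ...
        while (h > 0) {                    // diminishing gaps down to 1
            for (int outer = h; outer < a.length; outer++) {
                long temp = a[outer];
                int inner = outer;
                while (inner > h - 1 && a[inner - h] >= temp) {
                    a[inner] = a[inner - h];   // shift item right by h
                    inner -= h;
                }
                a[inner] = temp;
            }
            h = (h - 1) / 3;
        }
    }

    public static void main(String[] args) {
        long[] data = {59, 20, 17, 13, 28, 14, 23, 83, 36, 98};
        shellSortArray(data);
        System.out.println(java.util.Arrays.toString(data));
        // [13, 14, 17, 20, 23, 28, 36, 59, 83, 98]
    }
}
```

Filling data with hundreds of thousands of random numbers is an easy way to compare this sort's speed against the simple sorts, as suggested in the experiments of the previous chapter.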
Other Interval Sequences

Picking an interval sequence is a bit of a black art. Our discussion so far used the formula h = 3*h + 1 to generate the interval sequence, but other interval sequences have been used with varying degrees of success. The only absolute requirement is that the diminishing sequence ends with 1, so the last pass is a normal insertion sort.

In Shell's original paper, he suggested an initial gap of N/2, which was simply divided in half for each pass. Thus, the descending sequence for N=100 is 50, 25, 12, 6, 3, 1. This approach has the advantage that you don't need to calculate the sequence before the sort begins to find the initial gap; you just divide N by 2. However, this turns out not to be the best sequence. Although it's still better than the insertion sort for most data, it sometimes degenerates to O(N²) running time, which is no better than the insertion sort.

A variation of this approach is to divide each interval by 2.2 instead of 2. For N=100 this leads to 45, 20, 9, 4, 1. This is considerably better than dividing by 2, as it avoids some worst-case circumstances that lead to O(N²) behavior. Some extra code is needed to ensure that the last value in the sequence is 1, no matter what N is. This gives results comparable to Knuth's sequence shown in the listing.

Another possibility for a descending sequence (from Flamig; see the appendix) is

if(h < 5)
   h = 1;
else
   h = (5*h - 1) / 11;

It's generally considered important that the numbers in the interval sequence are relatively prime; that is, they have no common divisors except 1. This constraint makes it more likely that each pass will intermingle all the items sorted on the previous pass. The inefficiency of Shell's original N/2 sequence is due to its failure to adhere to this rule.

You may be able to invent a gap sequence of your own that does just as well (or possibly even better) than those shown. Whatever it is, it should be quick to calculate so as not to slow down the algorithm.

Efficiency of the Shellsort

No one so far has been able to analyze the Shellsort's efficiency theoretically, except in special cases. Based on experiments, there are
various estimates, which range from O(N^(3/2)) down to O(N^(7/6)).
The table below shows some of these estimated O() values, compared with the slower insertion sort and the faster quicksort. The theoretical times corresponding to various values of N are shown. Note that N^(x/y) means the yth root of N raised to the xth power. Thus, if N is 100, N^(3/2) is the square root of 100, cubed, which is 1,000. Also, (logN)² means the log of N, squared. This is often written log²N, but that's easy to confuse with log₂N, the logarithm to the base 2 of N.

Estimates of Shellsort Running Time

O() value     Type of sort      10 items   100 items   1,000 items   10,000 items
N^2           insertion, etc.   100        10,000      1,000,000     100,000,000
N^(3/2)       Shellsort         32         1,000       32,000        1,000,000
N*(logN)^2    Shellsort         10         400         9,000         160,000
N^(5/4)       Shellsort         18         316         5,600         100,000
N^(7/6)       Shellsort         14         215         3,200         46,000
N*logN        quicksort, etc.   10         200         3,000         40,000

For most data, the higher estimates, such as N^(3/2), are probably more realistic.

Partitioning

Partitioning is the underlying mechanism of quicksort, which we'll explore next, but it's also a useful operation on its own, so we'll cover it here in its own section.

To partition data is to divide it into two groups, so that all the items with a key value higher than a specified amount are in one group, and all the items with a lower key value are in another.

You can easily imagine situations in which you would want to partition data. Maybe you want to divide your personnel records into two groups: employees who live within a certain distance of the office and those who live farther away. Or a school administrator might want to divide students into those with grade point averages higher and lower than a certain value, so as to know who deserves to be on the dean's list.

The Partition Workshop Applet

Our Partition Workshop applet demonstrates the partitioning process. The figure below shows 12 bars before partitioning, and the next figure shows them again after partitioning.
[Figure: twelve bars before partitioning.]

[Figure: twelve bars after partitioning.]

The horizontal line represents the pivot value, which is the value used to determine into which of the two groups an item is placed. Items with a key value less than the pivot value go in the left part of the array, and those with a greater (or equal) key go in the right part. (In the section on quicksort, we'll see that the pivot value can be the key value of an actual data item, called the pivot. For now, it's just a number.)

The arrow labeled partition points to the leftmost item in the right (higher) subarray. This value is returned from the partitioning method, so it can be used by other methods that need to know where the division is.
For a more vivid display of the partitioning process, set the Partition Workshop applet to 100 bars and press the Run button. The leftScan and rightScan pointers will zip toward each other, swapping bars as they go. When they meet, the partition is complete.

You can choose any value you want for the pivot value, depending on why you're doing the partition (such as choosing a particular grade point average). For variety, the Workshop applet chooses a random number for the pivot value (the horizontal black line) each time New or Size is pressed, but the value is never too far from the average bar height.

After being partitioned, the data is by no means sorted; it has simply been divided into two groups. However, it's more sorted than it was before. As we'll see in the next section, it doesn't take much more trouble to sort it completely.

Notice that partitioning is not stable. That is, each group is not in the same order it was originally. In fact, partitioning tends to reverse the order of some of the data in each group.

The partition.java Program

How is the partitioning process carried out? Let's look at some example code. The listing below shows the partition.java program, which includes the partitionIt() method for partitioning an array.

// partition.java
// demonstrates partitioning an array
// to run this program: C>java PartitionApp
////////////////////////////////////////////////////////////////
class ArrayPar
   {
   private long[] theArray;         // ref to array theArray
   private int nElems;              // number of data items
//--------------------------------------------------------------
   public ArrayPar(int max)         // constructor
      {
      theArray = new long[max];     // create the array
      nElems = 0;                   // no items yet
      }
//--------------------------------------------------------------
   public void insert(long value)   // put element into array
      {
      theArray[nElems] = value;     // insert it
      nElems++;                     // increment size
      }
//--------------------------------------------------------------
   public int size()                // return number of items
      { return nElems; }
//--------------------------------------------------------------
   public void display()            // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)   // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public int partitionIt(int left, int right, long pivot)
      {
      int leftPtr = left - 1;       // left of first elem
      int rightPtr = right + 1;     // right of last elem
      while(true)
         {
         while(leftPtr < right &&   // find bigger item
               theArray[++leftPtr] < pivot)
            ;  // (nop)

         while(rightPtr > left &&   // find smaller item
               theArray[--rightPtr] > pivot)
            ;  // (nop)
         if(leftPtr >= rightPtr)    // if pointers cross,
            break;                  //    partition done
         else                       // not crossed, so
            swap(leftPtr, rightPtr);   // swap elements
         }  // end while(true)
      return leftPtr;               // return partition
      }  // end partitionIt()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      long temp;
      temp = theArray[dex1];             // A into temp
      theArray[dex1] = theArray[dex2];   // B into A
      theArray[dex2] = temp;             // temp into B
      }  // end swap()
//--------------------------------------------------------------
   }  // end class ArrayPar
////////////////////////////////////////////////////////////////
class PartitionApp
   {
   public static void main(String[] args)
      {
      int maxSize = 16;             // array size
      ArrayPar arr;                 // reference to array
      arr = new ArrayPar(maxSize);  // create the array

      for(int j=0; j<maxSize; j++)  // fill array with
         {                          // random numbers
         long n = (int)(java.lang.Math.random()*199);
         arr.insert(n);
         }
      arr.display();                // display unsorted array

      long pivot = 99;              // pivot value
      System.out.print("Pivot is " + pivot);
      int size = arr.size();
                                    // partition array
      int partDex = arr.partitionIt(0, size-1, pivot);

      System.out.println(", Partition is at index " + partDex);
      arr.display();                // display partitioned array
      }  // end main()
   }  // end class PartitionApp

The main() routine creates an ArrayPar object that holds 16 items of type long. The pivot value is fixed at 99. The routine inserts 16 random values into the array, displays them, partitions them by calling the partitionIt() method, and displays them again. In a typical run the partition lands at index 8: the partition is successful in that the first eight numbers are all smaller than the pivot value of 99, and the last eight are all larger.
Notice that the partitioning process doesn't necessarily divide the array in half as it does in this example; that depends on the pivot value and the key values of the data. There may be many more items in one group than in the other.

The Partition Algorithm

The partitioning algorithm works by starting with two pointers, one at each end of the array. (We use the term pointers to mean indices that point to array elements, not C++ pointers.) The pointer on the left, leftPtr, moves toward the right, and the one on the right, rightPtr, moves toward the left. Notice that leftPtr and rightPtr in the partition.java program correspond to leftScan and rightScan in the Partition Workshop applet.

Actually, leftPtr is initialized to one position to the left of the first cell, and rightPtr to one position to the right of the last cell, because they will be incremented and decremented, respectively, before they're used.

Stopping and Swapping

When leftPtr encounters a data item smaller than the pivot value, it keeps going because that item is already on the correct side of the array. However, when it encounters an item larger than the pivot value, it stops. Similarly, when rightPtr encounters an item larger than the pivot, it keeps going, but when it finds a smaller item, it also stops. Two inner while loops, the first for leftPtr and the second for rightPtr, control the scanning process. A pointer stops because its while loop exits. Here's a simplified version of the code that scans for out-of-place items:

while( theArray[++leftPtr] < pivot )   // find bigger item
   ;  // (nop)
while( theArray[--rightPtr] > pivot )  // find smaller item
   ;  // (nop)
swap(leftPtr, rightPtr);               // swap elements

The first while loop exits when an item larger than the pivot is found; the second loop exits when an item smaller than the pivot is found. When both these loops exit, both leftPtr and rightPtr point to items that are on the wrong sides of the array, so these items are swapped.

After the swap, the two pointers continue on, again stopping at items that are on the wrong side of the array and swapping them. All this
activity is nested in an outer while loop, as can be seen in the partitionIt() method in the listing. When the two pointers eventually meet, the partitioning process is complete and this outer while loop exits.

You can watch the pointers in action when you run the Partition Workshop applet with 100 bars. These pointers, represented by blue arrows, start at opposite ends of
the array and move toward each other, stopping and swapping as they go. The bars between them are unpartitioned; those they've already passed over are partitioned. When they meet, the entire array is partitioned.

Handling Unusual Data

If we were sure that there was a data item at the right end of the array that was smaller than the pivot value, and an item at the left end that was larger, the simplified while loops previously shown would work fine. Unfortunately, the algorithm may be called upon to partition data that isn't so well organized.

If all the data is smaller than the pivot value, for example, the leftPtr variable will go all the way across the array, looking in vain for a larger item, and fall off the right end, creating an array index out of bounds exception. A similar fate will befall rightPtr if all the data is larger than the pivot value.

To avoid these problems, extra tests must be placed in the while loops to check for the ends of the array: leftPtr < right in the first loop, and rightPtr > left in the second. You can see these tests in context in the listing. In the section on quicksort, we'll see that a clever pivot-selection process can eliminate these end-of-array tests. Eliminating code from inner loops is always a good idea if you want to make a program run faster.

Delicate Code

The code in the while loops is rather delicate. For example, you might be tempted to remove the increment operators from the inner while loops and use them to replace the nop statements. (Nop refers to a statement consisting only of a semicolon, and means no operation.) For example, you might try to change this:

while(leftPtr < right && theArray[++leftPtr] < pivot)
   ;  // (nop)

to this:

while(leftPtr < right && theArray[leftPtr] < pivot)
   ++leftPtr;

and similarly for the other inner while loop. These changes would make it possible for the initial values of the pointers to be left and right, which is somewhat clearer than left-1 and right+1. However, these changes result in the pointers being incremented only when the condition is satisfied. The pointers must move in any case, so two extra statements within the outer while
loop would be required to bump the pointers. The nop version is the most efficient solution.
Equal Keys

Here's another subtle change you might be tempted to make in the partitionIt() code. If you run the partitionIt() method on items that are all equal to the pivot value, you will find that every comparison leads to a swap. Swapping items with equal keys seems like a waste of time. The < and > operators that compare pivot with the array elements in the while loops cause the extra swapping. However, suppose you try to fix this by replacing them with <= and >= operators. This indeed prevents the swapping of equal elements, but it also causes leftPtr and rightPtr to end up at the ends of the array when the algorithm has finished. As we'll see in the section on quicksort, it's good for the pointers to end up in the middle of the array, and very bad for them to end up at the ends. So if partitionIt() is going to be used for quicksort, the < and > operators are the right way to go, even if they cause some unnecessary swapping.

Efficiency of the Partition Algorithm

The partition algorithm runs in O(N) time. It's easy to see why this is so when running the Partition Workshop applet: the two pointers start at opposite ends of the array and move toward each other at a more or less constant rate, stopping and swapping as they go. When they meet, the partition is complete. If there were twice as many items to partition, the pointers would move at the same rate, but they would have twice as many items to compare and swap, so the process would take twice as long. Thus, the running time is proportional to N.

More specifically, for each partition there will be N+1 or N+2 comparisons. Every item will be encountered and used in a comparison by one or the other of the pointers, leading to N comparisons, but the pointers overshoot each other before they find out they've "crossed," or gone beyond each other, so there are one or two extra comparisons before the partition is complete. The number of comparisons is independent of how the data is arranged (except for the uncertainty between one or two extra comparisons at the end of the scan).

The number of swaps, however, does
depend on how the data is arranged if it' inversely orderedand the pivot value divides the items in halfthen every pair of values must be swappedwhich is / swaps (remember in the partition workshop applet that the pivot value is selected randomlyso that the number of swaps for inversely sorted bars won' always be exactly / for random datathere will be fewer than / swaps in partitioneven if the pivot value is such that half the bars are shorter and half are taller this is because some bars will already be in the right place (short bars on the lefttall bars on the rightif the pivot value is higher (or lowerthan most of the barsthere will be even fewer swaps because only those few bars that are higher (or lowerthan the pivot will need to be swapped on averagefor random dataabout half the maximum number of swaps take place
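The comparison and swap counts claimed above can be checked with a small instrumented version of the two-pointer partition. This is our own sketch, not the book's applet code; the class and method names are ours, and the guards are written out so each comparison of an element with the pivot can be counted.

```java
// Instrumented two-pointer partition: counts element-vs-pivot comparisons
// and swaps, to check the "N+1 or N+2 comparisons" and "at most N/2 swaps"
// claims from the text. Our own illustration, not the book's code.
public class PartitionCount {
    static long comparisons = 0;
    static long swaps = 0;

    static boolean less(long x, long pivot)    { comparisons++; return x < pivot; }
    static boolean greater(long x, long pivot) { comparisons++; return x > pivot; }

    // Partition a[left..right] around an arbitrary pivot value, as in the text.
    static int partitionIt(long[] a, int left, int right, long pivot) {
        int leftPtr = left - 1;                 // left of leftmost item
        int rightPtr = right + 1;               // right of rightmost item
        while (true) {
            // scan right for an item >= pivot, counting each comparison
            while (leftPtr < right && less(a[++leftPtr], pivot))
                ;  // (nop)
            // scan left for an item <= pivot
            while (rightPtr > left && greater(a[--rightPtr], pivot))
                ;  // (nop)
            if (leftPtr >= rightPtr)            // pointers crossed: done
                break;
            swaps++;                            // otherwise swap and continue
            long t = a[leftPtr]; a[leftPtr] = a[rightPtr]; a[rightPtr] = t;
        }
        return leftPtr;                         // start of the right partition
    }

    public static void main(String[] args) {
        long[] a = {5, 2, 9, 1, 7, 3, 8, 4};    // N = 8
        int p = partitionIt(a, 0, a.length - 1, 5);
        System.out.println("partition=" + p
            + " comparisons=" + comparisons + " swaps=" + swaps);
    }
}
```

For this particular 8-item array the run reports 10 comparisons (N+2) and only 2 swaps, well under the N/2 = 4 maximum, matching the analysis above.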
Although there are fewer swaps than comparisons, they are both proportional to N. Thus, the partitioning process runs in O(N) time. Running the Workshop applet, you can see that for random bars the number of swaps is roughly half the number of comparisons.

Quicksort

Quicksort is undoubtedly the most popular sorting algorithm, and for good reason: in the majority of situations, it's the fastest, operating in O(N*logN) time. (This is only true for internal or in-memory sorting; for sorting data in disk files, other algorithms may be better.) Quicksort was discovered by C.A.R. Hoare in 1962.

To understand quicksort, you should be familiar with the partitioning algorithm described in the preceding section. Basically, the quicksort algorithm operates by partitioning an array into two subarrays and then calling itself recursively to quicksort each of these subarrays. However, there are some embellishments we can make to this basic scheme. They have to do with the selection of the pivot and the sorting of small partitions. We'll examine these refinements after we've looked at a simple version of the main algorithm.

It's difficult to understand what quicksort is doing before you understand how it does it, so we'll reverse our usual presentation and show the Java code for quicksort before presenting the QuickSort Workshop applet.

The Quicksort Algorithm

The code for a basic recursive quicksort method is fairly simple. Here's an example:

public void recQuickSort(int left, int right)
   {
   if(right-left <= 0)            // if range is 1,
       return;                    //    no use sorting
   else                           // else find midpoint
      {                           // partition range
      int partition = partitionIt(left, right);
      recQuickSort(left, partition-1);   // sort low half
      recQuickSort(partition+1, right);  // sort high half
      }  // end else
   }  // end recQuickSort()

As you can see, there are three basic steps:

1. Partition the array or subarray into left (smaller keys) and right (larger keys) groups.
2. Call ourselves to sort the left group.
3. Call ourselves again to sort the right group.

After a partition, all the items in the left subarray are smaller than all those on the right. If we then sort the left subarray and sort the right subarray, the entire array will be sorted. How do we sort these subarrays? By calling ourself recursively.

The arguments to the recQuickSort() method determine the left and right ends of the array (or subarray) it's supposed to sort. The method first checks whether this array consists of only one element. If so, the array is by definition already sorted, and the method returns immediately. This is the base case in the recursion process.

If the array has two or more cells, the algorithm calls the partitionIt() method, described in the preceding section, to partition it. This method returns the index number of the partition: the left element in the right (larger keys) subarray. The partition marks the boundary between the subarrays.

[Figure: Recursive calls sort subarrays. An unpartitioned array is split at the pivot into a left subarray, sorted by the first recursive call to recQuickSort(), and a right subarray, sorted by the second recursive call.]

After the array is partitioned, recQuickSort() calls itself recursively: once for the left part of its array, from left to partition-1, and once for the right, from partition+1 to right. Note that the data item at the index partition is not included in either of the recursive calls. Why not? Doesn't it need to be sorted? The explanation lies in how the pivot value is chosen.
Choosing a Pivot Value

What pivot value should the partitionIt() method use? Here are some relevant ideas:

- The pivot value should be the key value of an actual data item; this item is called the pivot.
- You can pick a data item to be the pivot more or less at random. For simplicity, let's say we always pick the item on the right end of the subarray being partitioned.
- After the partition, if the pivot is inserted at the boundary between the left and right subarrays, it will be in its final sorted position.

This last point may sound unlikely, but remember that, because the pivot's key value is used to partition the array, following the partition the left subarray holds items smaller than the pivot, and the right subarray holds items larger. The pivot starts out on the right, but if it could somehow be placed between these two subarrays, it would be in the correct place, that is, in its final sorted position.

[Figure: The pivot and the subarrays. The pivot item belongs in the slot between the partitioned left subarray (smaller items) and right subarray (larger items).]

This figure is somewhat fanciful because you can't actually take an array apart as we've shown. So how do we move the pivot to its proper place? We could shift all the items in the right subarray to the right one cell to make room for the pivot. However, this is inefficient and unnecessary. Remember that all the
items in the right subarray, although they are larger than the pivot, are not yet sorted, so they can be moved around within the right subarray without affecting anything. Therefore, to simplify inserting the pivot in its proper place, we can simply swap the pivot and the left item in the right subarray. This swap places the pivot in its proper position between the left and right groups. The displaced item is switched to the right end, but because it remains in the right (larger) group, the partitioning is undisturbed.

[Figure: Swapping the pivot. The pivot on the right end is exchanged with the leftmost item of the right subarray.]

When it's swapped into the partition's location, the pivot is in its final resting place. All subsequent activity will take place on one side of it or on the other, but the pivot itself won't be moved (or indeed even accessed) again.

To incorporate the pivot selection process into our recQuickSort() method, let's make it an overt statement, and send the pivot value to partitionIt() as an argument. Here's how that looks:

public void recQuickSort(int left, int right)
   {
   if(right-left <= 0)            // if size <= 1,
       return;                    //    already sorted
   else                           // size is 2 or larger
      {
      long pivot = theArray[right];      // rightmost item
                                         // partition range
      int partition = partitionIt(left, right, pivot);
      recQuickSort(left, partition-1);   // sort left side
      recQuickSort(partition+1, right);  // sort right side
      }
   }  // end recQuickSort()

When we use this scheme of choosing the rightmost item in the array as the pivot, we'll need to modify the partitionIt() method to exclude this rightmost item from the partitioning process; after all, we already know where it should go after the partitioning process is complete: at the partition, between the two groups. Also, after the partitioning process is completed, we need to swap the pivot from the right end into the partition's location. The listing below shows the quickSort1.java program, which incorporates these features.

Listing: The quickSort1.java Program

// quickSort1.java
// demonstrates simple version of quick sort
// to run this program: C>java QuickSort1App
////////////////////////////////////////////////////////////////
class ArrayIns
   {
   private long[] theArray;          // ref to array theArray
   private int nElems;               // number of data items
//--------------------------------------------------------------
   public ArrayIns(int max)          // constructor
      {
      theArray = new long[max];      // create the array
      nElems = 0;                    // no items yet
      }
//--------------------------------------------------------------
   public void insert(long value)    // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//--------------------------------------------------------------
   public void display()             // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
(Listing continued)

//--------------------------------------------------------------
   public void quickSort()
      {
      recQuickSort(0, nElems-1);
      }
//--------------------------------------------------------------
   public void recQuickSort(int left, int right)
      {
      if(right-left <= 0)            // if size <= 1,
          return;                    //    already sorted
      else                           // size is 2 or larger
         {
         long pivot = theArray[right];      // rightmost item
                                            // partition range
         int partition = partitionIt(left, right, pivot);
         recQuickSort(left, partition-1);   // sort left side
         recQuickSort(partition+1, right);  // sort right side
         }
      }  // end recQuickSort()
//--------------------------------------------------------------
   public int partitionIt(int left, int right, long pivot)
      {
      int leftPtr = left-1;           // left    (after ++)
      int rightPtr = right;           // right-1 (after --)
      while(true)
         {                            // find bigger item
         while( theArray[++leftPtr] < pivot )
            ;  // (nop)
                                      // find smaller item
         while(rightPtr > 0 && theArray[--rightPtr] > pivot)
            ;  // (nop)
         if(leftPtr >= rightPtr)      // if pointers cross,
            break;                    //    partition done
         else                         // not crossed, so
            swap(leftPtr, rightPtr);  // swap elements
         }  // end while(true)
      swap(leftPtr, right);           // restore pivot
      return leftPtr;                 // return pivot location
      }  // end partitionIt()
(Listing continued)

//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      long temp = theArray[dex1];        // A into temp
      theArray[dex1] = theArray[dex2];   // B into A
      theArray[dex2] = temp;             // temp into B
      }  // end swap()
//--------------------------------------------------------------
   }  // end class ArrayIns
////////////////////////////////////////////////////////////////
class QuickSort1App
   {
   public static void main(String[] args)
      {
      int maxSize = 16;             // array size
      ArrayIns arr;
      arr = new ArrayIns(maxSize);  // create array
      for(int j=0; j<maxSize; j++)  // fill array with
         {                          // random numbers
         long n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                // display items
      arr.quickSort();              // quicksort them
      arr.display();                // display them again
      }  // end main()
   }  // end class QuickSort1App

The main() routine creates an object of type ArrayIns, inserts 16 random data items of type long in it, displays it, sorts it with the quickSort() method, and displays the results. Because the data is random, the actual values in the output vary from run to run: the first line shows the unsorted array, and the second shows the same values in sorted order.

An interesting aspect of the code in the partitionIt() method is that we've been able to remove the test for the end of the array in the first inner while loop. This test, seen in the earlier partitionIt() method in the partition.java program, was

leftPtr < right
It prevented leftPtr running off the right end of the array if no item there was larger than pivot. Why can we eliminate the test? Because we selected the rightmost item as the pivot, so leftPtr will always stop there. However, the test is still necessary for rightPtr in the second while loop. (Later we'll see how this test can be eliminated as well.) Choosing the rightmost item as the pivot is thus not an entirely arbitrary choice; it speeds up the code by removing an unnecessary test. Picking the pivot from some other location would not provide this advantage.

The QuickSort1 Workshop Applet

At this point you know enough about the quicksort algorithm to understand the nuances of the QuickSort1 Workshop applet.

The Big Picture

For the big picture, use the Size button to set the applet to sort 100 random bars, and press the Run button.

[Figure: The QuickSort1 Workshop applet with 100 bars, following the sorting process.]

Watch how the algorithm partitions the array into two parts, then sorts each of these parts by partitioning it into two parts, and so on, creating smaller and smaller subarrays. When the sorting process is complete, each dotted line provides a visual record of one of the sorted subarrays. The horizontal range of the line shows which bars were part of the subarray, and its vertical position is the pivot value (the height of the
pivot). The total length of all these lines on the display is a measure of how much work the algorithm has done to sort the array; we'll return to this topic later.

Each dotted line (except the shortest ones) should have a line below it (probably separated by other, shorter lines) and a line above it that together add up to the same length as the original line (less one bar). These are the two partitions into which each subarray is divided.

The Details

For a more detailed examination of quicksort's operation, switch to the 12-bar display in the QuickSort1 Workshop applet and step through the sorting process. You'll see how the pivot value corresponds to the height of the pivot on the right side of the array, and how the algorithm partitions the array, swaps the pivot into the space between the two sorted groups, sorts the shorter group (using many recursive calls), and then sorts the larger group.

[Figure: The quicksort process, showing all the steps involved in sorting 12 bars.]

The horizontal brackets under the arrays show which subarray is being partitioned at each step, and the circled numbers show the order in which these partitions are created. A pivot being swapped into place is shown with a dotted arrow. The final position of the pivot is shown as a dotted cell to emphasize that this cell contains a sorted item that will not be changed thereafter. Horizontal brackets under single cells are base case calls to recQuickSort(); they return immediately.

Sometimes the pivot ends up in its original position on the right side of the array being sorted. In this situation, there is only one subarray remaining to be sorted: the one to the left of the pivot. There is no second subarray to its right.

The different steps in the figure occur at different levels of recursion, as shown in the accompanying table. The initial call from main() to recQuickSort() is the first level, recQuickSort() calling two new instances of itself is the second level, these two instances calling four more instances is the third level, and so on.

[Table: Recursion levels for the figure. Each step number is listed with the recursion level at which its partition occurs.]
The order in which the partitions are created, corresponding to the step numbers, does not correspond with depth. It's not the case that all the first-level partitions are done first, then all the second-level ones, and so on. Instead, the left group at every level is handled before any of the right groups.

In theory there should be twice as many steps at each successive level, but in this small array we run out of items before all of these steps are necessary.

The number of levels in the table shows that with 12 data items, the machine stack needs enough space for one set of arguments and return values per recursion level. This is, as we'll see later, somewhat greater than the logarithm to the base 2 of the number of items, log2N. The size of the machine stack is determined by your particular system. Sorting very large numbers of data items using recursive procedures may cause this stack to overflow, leading to memory errors.

Things to Notice

Here are some details you may notice as you run the QuickSort1 Workshop applet.

You might think that a powerful algorithm like quicksort would not be able to handle subarrays as small as two or three items. However, this version of the quicksort algorithm is quite capable of sorting such small subarrays; leftScan and rightScan just don't go very far before they meet. For this reason we don't need to use a different sorting scheme for small subarrays. (Although, as we'll see later, handling small subarrays differently may have advantages.)

At the end of each scan, the leftScan variable ends up pointing to the partition, that is, the left element of the right subarray. The pivot is then swapped with the partition to put the pivot in its proper place, as we've seen. As noted, sometimes leftScan ends up pointing to the pivot itself, so the swap has no effect. This may seem like a wasted swap; you might decide that leftScan should stop one bar sooner. However, it's important that leftScan scan all the way to the pivot; otherwise, a swap would unsort the pivot and the partition.

Be aware that leftScan and rightScan start at left-1 and right. This may look peculiar on the display, especially if left is 0; then leftScan will start at -1. Similarly, rightScan initially points to the pivot, which is not included in the partitioning process. These pointers start outside the subarray being partitioned because they will be incremented and decremented, respectively, before they're used the first time.

The applet shows ranges as numbers in parentheses; for example, (3-6) means the subarray from index 3 to index 6. The range given in some of the messages may be negative, from a higher number to a lower one. A message might read, for instance:

Array partitioned: left (7-6), right (8-8)

The (8-8) range means a single cell, but what does (7-6) mean? This range isn't real; it simply reflects the values that left and right, the arguments to recQuickSort(), have when this method is called. Here's the code in question:
   int partition = partitionIt(left, right, pivot);
   recQuickSort(left, partition-1);   // sort left side
   recQuickSort(partition+1, right);  // sort right side

If partitionIt() is called with left = 7 and right = 8, for example, and happens to return 7 as the partition, then the range supplied in the first call to recQuickSort() will be (7-6) and the range to the second will be (8-8). This is normal. The base case in recQuickSort() is activated by array sizes less than 1 as well as by 1, so it will return immediately for negative ranges. Negative ranges are not shown in the figure, although they do cause (brief) calls to recQuickSort().

Degenerates to O(N²) Performance

If you use the QuickSort1 Workshop applet to sort 100 inversely sorted bars, you'll see that the algorithm runs much more slowly and that many more dotted horizontal lines are generated, indicating more and larger subarrays are being partitioned. What's happening here?

The problem is in the selection of the pivot. Ideally, the pivot should be the median of the items being sorted. That is, half the items should be larger than the pivot, and half smaller. This would result in the array being partitioned into two subarrays of equal size. Having two equal subarrays is the optimum situation for the quicksort algorithm. If it has to sort one large and one small array, it's less efficient because the larger subarray has to be subdivided more times.

The worst situation results when a subarray with N elements is divided into one subarray with 1 element and the other with N-1 elements. (This division into 1 cell and N-1 cells can also be seen in some steps in the figure.) If this 1 and N-1 division happens with every partition, then every element requires a separate partition step. This is in fact what takes place with inversely sorted data: in all the subarrays, the pivot is the smallest item, so every partition results in N-1 elements in one subarray and only the pivot in the other.

To see this unfortunate process in action, step through the QuickSort1 Workshop applet with 12 inversely sorted bars. Notice how many more steps are necessary than with random data. In this situation the advantage gained by the partitioning process is lost and the performance of the algorithm degenerates to O(N²).

Besides being slow, there's another potential problem when quicksort operates in O(N²) time. When the number of partitions increases, the number of recursive function calls also increases. Every function call takes up room on the machine stack. If there are too many calls, the machine stack may overflow and paralyze the system.

To summarize: in the QuickSort1 applet, we select the rightmost element as the pivot. If the data is truly random, this isn't too bad a choice, because usually the
pivot won't be too close to either end of the array. However, when the data is sorted or inversely sorted, choosing the pivot from one end or the other is a bad idea. Can we improve on our approach to selecting the pivot?

Median-of-Three Partitioning

Many schemes have been devised for picking a better pivot. The method should be simple but have a good chance of avoiding the largest or smallest value. Picking an element at random is simple but, as we've seen, doesn't always result in a good selection. However, we could examine all the elements and actually calculate which one was the median. This would be the ideal pivot choice, but the process isn't practical, as it would take more time than the sort itself.

A compromise solution is to find the median of the first, last, and middle elements of the array, and use this for the pivot. Picking the median of the first, last, and middle elements is called the median-of-three approach.

[Figure: The median of three. Of the left, center, and right elements, the one with the middle key value is chosen as the pivot.]

Finding the median of three items is obviously much faster than finding the median of all the items, and yet it successfully avoids picking the largest or smallest item in cases where the data is already sorted or inversely sorted. There are probably some pathological arrangements of data where the median-of-three scheme works poorly, but normally it's a fast and effective technique for finding the pivot.

Besides picking the pivot more effectively, the median-of-three approach has an additional benefit: we can dispense with the rightPtr > left test in the second inside while loop, leading to a small increase in the algorithm's speed. How is this possible? The test can be eliminated because we can use the median-of-three approach to not only select the pivot, but also to sort the three elements used in the selection process. When these three elements are sorted, and the median item is selected as the pivot, we are guaranteed that the element at the left end of the subarray is less than (or equal to) the pivot, and the element at the right end is greater than (or equal to) the
pivot. This means that the leftPtr and rightPtr indices can't step beyond the right or left ends of the array, respectively, even if we remove the leftPtr < right and rightPtr > left tests. (The pointer will stop, thinking it needs to swap the item, only to find that it has crossed the other pointer and the partition is complete.) The values at left and right act as sentinels to keep leftPtr and rightPtr confined to valid array values.

[Figure: Sorting the left, center, and right elements. After these three are sorted, the center element becomes the pivot.]

Another small benefit to median-of-three partitioning is that after the left, center, and right elements are sorted, the partition process doesn't need to examine these elements again. The partition can begin at left+1 and right-1, because left and right have in effect already been partitioned. We know that left is in the correct partition because it's on the left and it's less than the pivot, and right is in the correct place because it's on the right and it's greater than the pivot.

Thus, median-of-three partitioning not only avoids O(N²) performance for already-sorted data, it also allows us to speed up the inner loops of the partitioning algorithm and reduce slightly the number of items that must be partitioned.

The quickSort2.java Program

The listing below shows the quickSort2.java program, which incorporates median-of-three partitioning. We use a separate method, medianOf3(), to sort the left, center, and right elements of a subarray. This method returns the value of the pivot, which is then sent to the partitionIt() method.
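The payoff of median-of-three on ordered data can be demonstrated numerically. The sketch below is our own illustration (not the book's applet or listing): it uses a simple single-scan partition rather than the two-pointer version, since only the pivot choice matters here, and it records the deepest recursion level reached while sorting an already-sorted array of 64 items with each pivot strategy.

```java
// Compares recursion depth on already-sorted data: rightmost-item pivot
// versus median-of-three. Our own sketch; names are illustrative only.
public class DepthDemo {
    static int maxDepth;                       // deepest level reached

    static void swap(long[] a, int i, int j) {
        long t = a[i]; a[i] = a[j]; a[j] = t;
    }

    // Partition around the value at pivotIndex; returns its final position.
    static int partition(long[] a, int left, int right, int pivotIndex) {
        long pivot = a[pivotIndex];
        swap(a, pivotIndex, right);            // park pivot at the right end
        int store = left;
        for (int i = left; i < right; i++)
            if (a[i] < pivot)                  // smaller items go left
                swap(a, i, store++);
        swap(a, store, right);                 // pivot into its final place
        return store;
    }

    static void sort(long[] a, int left, int right, boolean medianOf3, int depth) {
        if (depth > maxDepth) maxDepth = depth;
        if (right - left <= 0) return;         // base case: size <= 1
        int pivotIndex = right;                // naive choice: rightmost item
        if (medianOf3) {                       // pick index of middle value
            int c = (left + right) / 2;
            long l = a[left], m = a[c], r = a[right];
            if ((l <= m && m <= r) || (r <= m && m <= l)) pivotIndex = c;
            else if ((m <= l && l <= r) || (r <= l && l <= m)) pivotIndex = left;
            // otherwise keep pivotIndex = right
        }
        int p = partition(a, left, right, pivotIndex);
        sort(a, left, p - 1, medianOf3, depth + 1);
        sort(a, p + 1, right, medianOf3, depth + 1);
    }

    static int depthFor(long[] data, boolean medianOf3) {
        long[] a = data.clone();               // don't disturb caller's data
        maxDepth = 0;
        sort(a, 0, a.length - 1, medianOf3, 1);
        return maxDepth;
    }

    public static void main(String[] args) {
        long[] sorted = new long[64];
        for (int i = 0; i < sorted.length; i++) sorted[i] = i;
        System.out.println("rightmost pivot depth: " + depthFor(sorted, false));
        System.out.println("median-of-3 depth:     " + depthFor(sorted, true));
    }
}
```

On this 64-item sorted array the rightmost-pivot version recurses 64 levels deep (the 1 and N-1 split at every step), while the median-of-three version needs only 7 levels, about log2 of the array size.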
Listing: The quickSort2.java Program

// quickSort2.java
// demonstrates quick sort with median-of-three partitioning
// to run this program: C>java QuickSort2App
////////////////////////////////////////////////////////////////
class ArrayIns
   {
   private long[] theArray;          // ref to array theArray
   private int nElems;               // number of data items
//--------------------------------------------------------------
   public ArrayIns(int max)          // constructor
      {
      theArray = new long[max];      // create the array
      nElems = 0;                    // no items yet
      }
//--------------------------------------------------------------
   public void insert(long value)    // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//--------------------------------------------------------------
   public void display()             // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void quickSort()
      {
      recQuickSort(0, nElems-1);
      }
//--------------------------------------------------------------
   public void recQuickSort(int left, int right)
      {
      int size = right-left+1;
      if(size <= 3)                  // manual sort if small
         manualSort(left, right);
      else                           // quicksort if large
         {
         long median = medianOf3(left, right);
         int partition = partitionIt(left, right, median);
         recQuickSort(left, partition-1);
         recQuickSort(partition+1, right);
         }
      }  // end recQuickSort()
//--------------------------------------------------------------
   public long medianOf3(int left, int right)
      {
      int center = (left+right)/2;
                                           // order left & center
      if( theArray[left] > theArray[center] )
         swap(left, center);
                                           // order left & right
      if( theArray[left] > theArray[right] )
         swap(left, right);
                                           // order center & right
      if( theArray[center] > theArray[right] )
         swap(center, right);

      swap(center, right-1);               // put pivot on right
      return theArray[right-1];            // return median value
      }  // end medianOf3()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      long temp = theArray[dex1];        // A into temp
      theArray[dex1] = theArray[dex2];   // B into A
      theArray[dex2] = temp;             // temp into B
      }  // end swap()
//--------------------------------------------------------------
   public int partitionIt(int left, int right, long pivot)
      {
      int leftPtr = left;             // right of first elem
      int rightPtr = right - 1;       // left of pivot
      while(true)
         {
         while( theArray[++leftPtr] < pivot )  // find bigger
            ;  // (nop)
         while( theArray[--rightPtr] > pivot ) // find smaller
            ;  // (nop)
         if(leftPtr >= rightPtr)      // if pointers cross,
            break;                    //    partition done
         else                         // not crossed, so
            swap(leftPtr, rightPtr);  // swap elements
         }  // end while(true)
      swap(leftPtr, right-1);         // restore pivot
      return leftPtr;                 // return pivot location
      }  // end partitionIt()
//--------------------------------------------------------------
   public void manualSort(int left, int right)
      {
      int size = right-left+1;
      if(size <= 1)
         return;                      // no sort necessary
      if(size == 2)
         {                            // 2-sort left and right
         if( theArray[left] > theArray[right] )
            swap(left, right);
         return;
         }
      else                            // size is 3
         {                            // 3-sort left, center, right
         if( theArray[left] > theArray[right-1] )
            swap(left, right-1);                // left, center
         if( theArray[left] > theArray[right] )
            swap(left, right);                  // left, right
         if( theArray[right-1] > theArray[right] )
            swap(right-1, right);               // center, right
         }
      }  // end manualSort()
//--------------------------------------------------------------
   }  // end class ArrayIns
////////////////////////////////////////////////////////////////
class QuickSort2App
   {
   public static void main(String[] args)
      {
      int maxSize = 16;             // array size
      ArrayIns arr;                 // reference to array
      arr = new ArrayIns(maxSize);  // create the array
      for(int j=0; j<maxSize; j++)  // fill array with
         {                          // random numbers
         long n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                // display items
      arr.quickSort();              // quicksort them
      arr.display();                // display them again
      }  // end main()
   }  // end class QuickSort2App

This program uses another new method, manualSort(), to sort subarrays of three or fewer elements. It returns immediately if the subarray is one cell (or less), swaps the cells if necessary if the range is 2, and sorts three cells if the range is 3. The recQuickSort() routine can't be used to sort ranges of 2 or 3 because median-of-three partitioning requires at least four cells. The main() routine and the output of quickSort2.java are similar to those of quickSort1.java.

The QuickSort2 Workshop Applet

The QuickSort2 Workshop applet demonstrates the quicksort algorithm using median-of-three partitioning. This applet is similar to the QuickSort1 Workshop applet, but starts off sorting the first, center, and last elements of each subarray and selecting the median of these as the pivot value. At least, it does this if the array size is greater than 3. If the subarray is two or three units, the applet simply sorts it "by hand," without partitioning or recursive calls.

Notice the dramatic improvement in performance when the applet is used to sort 100 inversely ordered bars. No longer is every subarray partitioned into 1 cell and N-1 cells; instead, the subarrays are partitioned roughly in half.

Other than this improvement for ordered data, the QuickSort2 Workshop applet produces results similar to QuickSort1. It is no faster when sorting random data; its advantages become evident only when sorting ordered data.

Handling Small Partitions

If you use the median-of-three partitioning method, it follows that the quicksort algorithm won't work for partitions of three or fewer items. The number 3 in this case is called a cutoff point. In the examples above we sorted subarrays of two or three items by hand. Is this the best way?
Using an Insertion Sort for Small Partitions

Another option for dealing with small partitions is to use the insertion sort. When you do this, you aren't restricted to a cutoff of 3. You can set the cutoff to 10, or any other number. It's interesting to experiment with different values of the cutoff to see where the best performance lies. Knuth (see Appendix B) recommends a cutoff of 9. However, the optimum number depends on your computer, operating system, compiler (or interpreter), and so on.

The quickSort3.java program, shown in the listing below, uses an insertion sort to handle subarrays of fewer than 10 cells.

Listing: The quickSort3.java Program

// quickSort3.java
// demonstrates quick sort; uses insertion sort for cleanup
// to run this program: C>java QuickSort3App
////////////////////////////////////////////////////////////////
class ArrayIns
   {
   private long[] theArray;          // ref to array theArray
   private int nElems;               // number of data items
//--------------------------------------------------------------
   public ArrayIns(int max)          // constructor
      {
      theArray = new long[max];      // create the array
      nElems = 0;                    // no items yet
      }
//--------------------------------------------------------------
   public void insert(long value)    // put element into array
      {
      theArray[nElems] = value;      // insert it
      nElems++;                      // increment size
      }
//--------------------------------------------------------------
   public void display()             // displays array contents
      {
      System.out.print("A=");
      for(int j=0; j<nElems; j++)    // for each element,
         System.out.print(theArray[j] + " ");  // display it
      System.out.println("");
      }
//--------------------------------------------------------------
   public void quickSort()
      {
(Listing continued)

      recQuickSort(0, nElems-1);
      // insertionSort(0, nElems-1);  // the other option
      }
//--------------------------------------------------------------
   public void recQuickSort(int left, int right)
      {
      int size = right-left+1;
      if(size < 10)                  // insertion sort if small
         insertionSort(left, right);
      else                           // quicksort if large
         {
         long median = medianOf3(left, right);
         int partition = partitionIt(left, right, median);
         recQuickSort(left, partition-1);
         recQuickSort(partition+1, right);
         }
      }  // end recQuickSort()
//--------------------------------------------------------------
   public long medianOf3(int left, int right)
      {
      int center = (left+right)/2;
                                           // order left & center
      if( theArray[left] > theArray[center] )
         swap(left, center);
                                           // order left & right
      if( theArray[left] > theArray[right] )
         swap(left, right);
                                           // order center & right
      if( theArray[center] > theArray[right] )
         swap(center, right);

      swap(center, right-1);               // put pivot on right
      return theArray[right-1];            // return median value
      }  // end medianOf3()
//--------------------------------------------------------------
   public void swap(int dex1, int dex2)  // swap two elements
      {
      long temp = theArray[dex1];        // A into temp
      theArray[dex1] = theArray[dex2];   // B into A
      theArray[dex2] = temp;             // temp into B
(Listing continued)

      }  // end swap()
//--------------------------------------------------------------
   public int partitionIt(int left, int right, long pivot)
      {
      int leftPtr = left;             // right of first elem
      int rightPtr = right - 1;       // left of pivot
      while(true)
         {
         while( theArray[++leftPtr] < pivot )  // find bigger
            ;  // (nop)
         while( theArray[--rightPtr] > pivot ) // find smaller
            ;  // (nop)
         if(leftPtr >= rightPtr)      // if pointers cross,
            break;                    //    partition done
         else                         // not crossed, so
            swap(leftPtr, rightPtr);  // swap elements
         }  // end while(true)
      swap(leftPtr, right-1);         // restore pivot
      return leftPtr;                 // return pivot location
      }  // end partitionIt()
//--------------------------------------------------------------
                                      // insertion sort
   public void insertionSort(int left, int right)
      {
      int in, out;
                                      // sorted on left of out
      for(out=left+1; out<=right; out++)
         {
         long temp = theArray[out];   // remove marked item
         in = out;                    // start shifts at out
                                      // until one is smaller,
         while(in>left && theArray[in-1] >= temp)
            {
            theArray[in] = theArray[in-1];  // shift item to right
            --in;                     // go left one position
            }
         theArray[in] = temp;         // insert marked item
         }  // end for
      }  // end insertionSort()
//--------------------------------------------------------------
   }  // end class ArrayIns
(Listing continued)

////////////////////////////////////////////////////////////////
class QuickSort3App
   {
   public static void main(String[] args)
      {
      int maxSize = 16;             // array size
      ArrayIns arr;                 // reference to array
      arr = new ArrayIns(maxSize);  // create the array
      for(int j=0; j<maxSize; j++)  // fill array with
         {                          // random numbers
         long n = (int)(java.lang.Math.random()*99);
         arr.insert(n);
         }
      arr.display();                // display items
      arr.quickSort();              // quicksort them
      arr.display();                // display them again
      }  // end main()
   }  // end class QuickSort3App

Using the insertion sort for small subarrays turns out to be the fastest approach on our particular installation, but it is not much faster than sorting subarrays of three or fewer cells by hand, as in quickSort2.java. The numbers of comparisons and copies are reduced substantially in the quicksort phase, but are increased by an almost equal amount in the insertion sort, so the time savings are not dramatic. However, this approach is probably worthwhile if you are trying to squeeze the last ounce of performance out of quicksort.

Insertion Sort Following Quicksort

Another option is to completely quicksort the array without bothering to sort partitions smaller than the cutoff. This is shown with a commented-out line in the quickSort() method. (If this call is used, the call to insertionSort() should be removed from recQuickSort().) When quicksort is finished, the array will be almost sorted. You then apply the insertion sort to the entire array. The insertion sort is supposed to operate efficiently on almost-sorted arrays, and this approach is recommended by some experts, but on our installation it runs very slowly. The insertion sort appears to be happier doing a lot of small sorts than one big one.

Removing Recursion

Another embellishment recommended by many writers is removing recursion from the quicksort algorithm. This involves rewriting the algorithm to store deferred
subarray bounds (left and right) on a stack, and using a loop instead of recursion to oversee the partitioning of smaller and smaller subarrays. The idea in doing this is to speed up the program by removing method calls. However, this idea arose with older compilers and computer architectures, which imposed a large time penalty for each method call. It's not clear that removing recursion is much of an improvement for modern systems, which handle method calls more efficiently.

Efficiency of Quicksort

We've said that quicksort operates in O(N*logN) time. As we saw in the discussion of mergesort, this is generally true of the divide-and-conquer algorithms, in which a recursive method divides a range of items into two groups and then calls itself to handle each group. In this situation the logarithm actually has a base of 2: the running time is proportional to N*log2N.

You can get an idea of the validity of this N*log2N running time for quicksort by running one of the quickSort Workshop applets with 100 random bars and examining the resulting dotted horizontal lines. Each dotted line represents an array or subarray being partitioned: the pointers leftScan and rightScan moving toward each other, comparing each data item and swapping when appropriate. We saw in the "Partitioning" section that a single partition runs in O(N) time. This tells us that the total length of all the dotted lines is proportional to the running time of quicksort. But how long are all the lines? Measuring them with a ruler on the screen would be tedious, but we can visualize them a different way.

There is always one line that runs the entire width of the graph, spanning N bars. This results from the first partition. There will also be two lines (one below and one above the first line) that have an average length of N/2 bars; together they are again N bars long. Then there will be four lines with an average length of N/4 that again total N bars, then 8 lines, 16 lines, and so on. The figure shows how this looks for 1, 2, 4, and 8 lines. In this figure, solid horizontal lines represent the dotted horizontal lines in the
quickSort applets, and captions like "N/4 cells long" indicate average, not actual, line lengths. The circled numbers on the left show the order in which the lines are created. Each series of lines (the eight N/8 lines, for example) corresponds to a level of recursion. The initial call to recQuickSort() is the first level and makes the first line; the two calls from within the first call (the second level of recursion) make the next two lines; and so on. If we assume we start with 100 cells, the results are shown in the following table.
[Figure: lines correspond to partitions — one line N cells long, two lines N/2 cells long, four lines N/4 cells long, eight lines N/8 cells long]

Table: Line Lengths and Recursion

Recursion Level   Step Numbers in Figure       Average Line Length (Cells)   Number of Lines   Total Length (Cells)
1                 1                            100                           1                 100
2                 2, 9                         50                            2                 100
3                 3, 6, 10, 13                 25                            4                 100
4                 4, 5, 7, 8, 11, 12, 14, 15   12                            8                 96
5                 not shown                    6                             16                96
6                 not shown                    3                             32                96
7                 not shown                    1                             64                64
                                                                             Total = 652
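The halving pattern in the table can be checked with a short loop. This is a hypothetical sketch, not one of the book's listings; it sums lines-per-level times average line length, halving the length (with integer division, as the table does) at each level of recursion.

```java
// Sketch: total the lengths of all partition lines for n cells,
// doubling the number of lines and halving their average length
// at each recursion level.
public class LineLengths {
    static int totalLength(int n) {
        int total = 0;
        int lines = 1;
        for (int len = n; len >= 1; len /= 2) {  // 100, 50, 25, 12, 6, 3, 1
            total += lines * len;                // each level contributes lines*len cells
            lines *= 2;                          // twice as many lines on the next level
        }
        return total;
    }
    public static void main(String[] args) {
        System.out.println(totalLength(100));    // prints 652, as in the table
    }
}
```

The result, 652, is close to 100 times log2(100), consistent with the informal N*log2N argument in the text.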
Where does this division process stop? If we keep dividing 100 by 2 and count how many times we do this, we get the series 100, 50, 25, 12, 6, 3, 1, which is about seven levels of recursion. This looks about right on the Workshop applets: if you pick some point on the graph and count all the dotted lines directly above and below it, there will be an average of approximately seven. (In the figure, because not all levels of recursion are shown, only four lines intersect any vertical slice of the graph.)

The table shows a total of 652 cells. This is only an approximation because of roundoff errors, but it's close to 100 times the logarithm to the base 2 of 100, which is 6.65. Thus, this informal analysis suggests the validity of the N*log2N running time for quicksort.

More specifically, in the section on partitioning, we found that there should be N+1 comparisons and fewer than N/2 swaps. Multiplying these quantities by log2N for various values of N gives the results shown in the following table.

Table: Swaps and Comparisons in Quicksort

N     log2N   N*log2N   Comparisons: (N+1)*log2N   Swaps: fewer than N/2 * log2N
12    3.59    43        47                         21
100   6.65    665       672                        332

The log2N quantity used in the table is actually true only in the best-case scenario, where each subarray is partitioned exactly in half. For random data the figure is slightly greater. Nevertheless, the quickSort1 and quickSort2 Workshop applets approximate these results for 12 and 100 bars, as you can see by running them and observing the Swaps and Comparisons fields. Because they have different cutoff points and handle the resulting small partitions differently, quickSort2 performs fewer swaps but more comparisons than quickSort1. The number of swaps shown in the table is the maximum (which assumes the data is inversely sorted). For random data the actual number of swaps turns out to be one-half to two-thirds of the figures shown.

Radix Sort

We'll close this chapter by briefly mentioning a sort that uses a different approach. The sorts we've looked at so far treat the key as a simple numerical value that is compared with other values to sort the data. The radix sort disassembles the key into digits and arranges the data items according to the
value of the digits. Amazingly, no comparisons are necessary.
Algorithm for the Radix Sort

We'll discuss the radix sort in terms of normal base-10 arithmetic, which is easier to visualize. However, an efficient implementation of the radix sort would use base-2 arithmetic to take advantage of the computer's speed in bit manipulation. We'll look at the radix sort rather than the similar but somewhat more complex radix-exchange sort.

The word radix means the base of a system of numbers. Ten is the radix of the decimal system and 2 is the radix of the binary system. The sort involves examining each digit of the key separately, starting with the 1s (least significant) digit.

1. All the data items are divided into 10 groups, according to the value of their 1s digit.
2. These 10 groups are then reassembled: all the keys ending with 0 go first, followed by all the keys ending in 1, and so on up to 9. We'll call these steps a sub-sort.
3. In the second sub-sort, all data is divided into 10 groups again, but this time according to the value of their 10s digit. This must be done without changing the order of the previous sort. That is, within each of the 10 groups, the ordering of the items remains the same as it was after step 2; the sub-sorts must be stable.
4. Again the groups are recombined, those with a 10s digit of 0 first, then those with a 10s digit of 1, and so on up to 9.

This process is repeated for the remaining digits. If some keys have fewer digits than others, their higher-order digits are considered to be 0.

Here's an example, using seven data items, each with three digits. Leading zeros are shown for clarity.

421 240 035 532 305 430 124              // unsorted array
(240 430)(421)(532)(124)(035 305)        // sorted on 1s digit
(305)(421 124)(430 532 035)(240)         // sorted on 10s digit
(035)(124)(240)(305)(421 430)(532)       // sorted on 100s digit
035 124 240 305 421 430 532              // sorted array

The parentheses delineate the groups. Within each group the digits in the appropriate position are the same. To convince yourself that this approach really works, try it on a piece of paper with some numbers you make up.

Designing a Program

In practice the original data probably starts out in an ordinary array. Where should the 10 groups go? There's a problem with using another array or an array of
arrays: it's not likely there will be exactly the same number of 0s, 1s, 2s, and so on in every digit position, so it's hard to know how big to make the arrays. One way to solve this problem is to use linked lists instead of arrays. Linked lists expand and contract as needed. We'll use this approach.

An outer loop looks at each digit of the keys in turn. There are two inner loops: the first takes the data from the array and puts it on the lists; the second copies it from the lists back to the array. You need to use the right kind of linked list. To keep the sub-sorts stable, you need the data to come out of each list in the same order it went in. Which kind of linked list makes this easy? We'll leave the coding details as an exercise.

Efficiency of the Radix Sort

At first glance the efficiency of the radix sort seems too good to be true. All you do is copy the original data from the array to the lists and back again. If there are 10 data items, this is 20 copies. You repeat this procedure once for each digit. If you assume, say, 5-digit numbers, then you'll have 20*5 equals 100 copies. If you have 100 data items, there are 200*5 equals 1,000 copies. The number of copies is proportional to the number of data items, which is O(N), the most efficient sorting behavior we've seen.

Unfortunately, it's generally true that if you have more data items, you'll need longer keys. If you have 10 times as much data, you may need to add another digit to the key. The number of copies is proportional to the number of data items times the number of digits in the key. The number of digits is the log of the key values, so in most situations we're back to O(N*logN) efficiency, the same as quicksort.

There are no comparisons, although it takes time to extract each digit from the number. This must be done once for every two copies. It may be, however, that a given computer can do the digit-extraction in binary more quickly than it can do a comparison. Of course, like mergesort, the radix sort uses about twice as much memory as quicksort.

Summary

- The Shellsort applies the insertion sort to widely spaced
elements, then less widely spaced elements, and so on. The expression n-sorting means sorting every nth element.
- A sequence of numbers, called the interval sequence or gap sequence, is used to determine the sorting intervals in the Shellsort.
- A widely used interval sequence is generated by the recursive expression h = 3*h + 1, where the initial value of h is 1.
- If an array holds 1,000 items, it could be 364-sorted, 121-sorted, 40-sorted, 13-sorted, 4-sorted, and finally 1-sorted.
- The Shellsort is hard to analyze, but runs in approximately O(N*(logN)^2) time. This is much faster than the O(N^2) algorithms like insertion sort, but slower than the O(N*logN) algorithms like quicksort.
- To partition an array is to divide it into two subarrays, one of which holds items with key values less than a specified value, while the other holds items with keys greater than or equal to this value.
- The pivot value is the value that determines into which group an item will go during partitioning; items smaller than the pivot value go in the left group, larger items go in the right group.
- In the partitioning algorithm, two array indices, each in its own while loop, start at opposite ends of the array and step toward each other, looking for items that need to be swapped.
- When an index finds an item that needs to be swapped, its while loop exits.
- When both while loops exit, the items are swapped. When both while loops exit and the indices have met or passed each other, the partition is complete.
- Partitioning operates in linear O(N) time, making N plus 1 or 2 comparisons and fewer than N/2 swaps.
- The partitioning algorithm may require extra tests in its inner while loops to prevent the indices running off the ends of the array.
- Quicksort partitions an array and then calls itself twice recursively to sort the two resulting subarrays.
- Subarrays of one element are already sorted; this can be a base case for quicksort.
- The pivot value for a partition in quicksort is the key value of a specific item, called the pivot.
- In a simple version of quicksort, the pivot can always be the item at the right end of the subarray.
- During the partition the pivot is placed out of the way on the right, and is not involved in the partitioning process.
- Later the pivot is swapped again, into the space between the two partitions. This is its final sorted position.
- In the simple version of quicksort, performance is only O(N^2) for already-sorted (or inversely sorted) data.
- In a more advanced version of quicksort, the pivot can be the median of the first, last, and center items in the subarray. This is called median-of-three partitioning.
- Median-of-three partitioning effectively eliminates the problem of O(N^2) performance for already-sorted data.
- In median-of-three partitioning, the left, center, and right items are sorted at the same time the median is determined. This sort eliminates the need for the end-of-array tests in the inner while loops in the partitioning algorithm.
- Quicksort operates in O(N*logN) time (except when the simpler version is applied to already-sorted data).
- Subarrays smaller than a certain size (the cutoff) can be sorted by a method other than quicksort. The insertion sort is commonly used to sort subarrays smaller than the cutoff.
- The insertion sort can also be applied to the entire array, after it has been sorted down to a cutoff point by quicksort.
- The radix sort is about as fast as quicksort but uses twice as much memory.

Questions

These questions are intended as a self-test for readers. Answers may be found in the appendix.

1. The Shellsort works by
   a. partitioning the array.
   b. swapping adjacent elements.
   c. dealing with widely separated elements.
   d. starting with the normal insertion sort.

2. If an array has 100 elements, then Knuth's algorithm would start with an interval of ________.
3. To transform the insertion sort into the Shellsort, which of the following do you not do?
   a. substitute h for 1.
   b. insert an algorithm for creating gaps of decreasing width.
   c. enclose the normal insertion sort in a loop.
   d. change the direction of the indices in the inner loop.

4. True or false: A good interval sequence for the Shellsort is created by repeatedly dividing the array size in half.

5. Fill in the Big O values: The speed of the Shellsort is more than ________ but less than ________.

6. Partitioning is
   a. putting all elements larger than a certain value on one end of the array.
   b. dividing an array in half.
   c. partially sorting parts of an array.
   d. sorting each half of an array separately.

7. When partitioning, each array element is compared to the ________.

8. In partitioning, if an array element is equal to the answer to question 7,
   a. it is passed over.
   b. it is passed over or not, depending on the other array element.
   c. it is placed in the pivot position.
   d. it is swapped.

9. True or false: In quicksort, the pivot can be an arbitrary element of the array.

10. Assuming larger keys on the right, the partition is
   a. the element between the left and right subarrays.
   b. the key value of the element between the left and right subarrays.
   c. the left element in the right subarray.
   d. the key value of the left element in the right subarray.

11. Quicksort involves partitioning the original array and then ________.
12. After a partition in a simple version of quicksort, the pivot may be
   a. used to find the median of the array.
   b. exchanged with an element of the right subarray.
   c. used as the starting point of the next partition.
   d. discarded.

13. Median-of-three partitioning is a way of choosing the ________.

14. In quicksort, for an array of N elements, the partitionIt() method will examine each element approximately ________ times.

15. True or false: You can speed up quicksort if you stop partitioning when the partition size is small and finish by using a different sort.

Experiments

Carrying out these experiments will help to provide insights into the topics covered in the chapter. No programming is involved.

1. Find out what happens when you use the Partition Workshop applet on 100 inversely sorted bars. Is the result almost sorted?

2. Modify the shellSort.java program so it prints the entire contents of the array after completing each n-sort. The array should be small enough so its contents fit on one line. Analyze these intermediate steps to see if the algorithm is operating the way you think it should.

3. Modify the shellSort.java and the quicksort programs to sort appropriately large arrays, and compare their speeds. Also, compare these speeds with those of the sorts covered earlier.

Programming Projects

Writing programs that solve the Programming Projects helps to solidify your understanding of the material and demonstrates how the chapter's concepts are applied. (As noted in the Introduction, qualified instructors may obtain completed solutions to the Programming Projects on the publisher's Web site.)

1. Modify the partition.java program so that the partitionIt() method always uses the highest-index (right) element as the pivot, rather than an arbitrary number. (This is similar to what happens in the quicksort program in this chapter.) Make sure your routine will work for arrays of three or fewer elements. To do so, you may need a few extra statements.
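As a point of reference for the partitioning projects, here is a sketch of partitioning around the rightmost element. The names (PartitionSketch, partition, arr) are illustrative, and this is not the book's solution code; it follows the two-pointer scheme described in the chapter, with the pivot parked at the right end and swapped into the gap at the end.

```java
// Sketch: partition arr[left..right] using arr[right] as the pivot.
// Smaller items end up left of the pivot's final position; the
// method returns that position.
public class PartitionSketch {
    static void swap(long[] arr, int a, int b) {
        long t = arr[a]; arr[a] = arr[b]; arr[b] = t;
    }
    static int partition(long[] arr, int left, int right) {
        long pivot = arr[right];
        int leftPtr = left - 1;       // right of first element
        int rightPtr = right;         // pivot's position
        while (true) {
            while (arr[++leftPtr] < pivot)                      // find bigger item
                ;                                               // (nop)
            while (rightPtr > left && arr[--rightPtr] > pivot)  // find smaller item
                ;                                               // (nop)
            if (leftPtr >= rightPtr)  // if pointers cross,
                break;                //    partition done
            else                      // not crossed, so
                swap(arr, leftPtr, rightPtr);
        }
        swap(arr, leftPtr, right);    // restore pivot into the gap
        return leftPtr;
    }
    public static void main(String[] args) {
        long[] a = {5, 9, 1, 7, 3};
        int p = partition(a, 0, a.length - 1);
        System.out.println(p);        // pivot 3's final index
    }
}
```

Because the pivot itself sits at arr[right], the left scan cannot run off the end, but the right scan still needs the rightPtr > left guard.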
2. Modify the quicksort program of this chapter to count the number of copies and comparisons it makes during a sort and then display the totals. This program should duplicate the performance of the corresponding Workshop applet, so the copies and comparisons for inversely sorted data should agree. (Remember that a swap is three copies.)

3. In an earlier exercise we suggested that you could find the median of a set of data by sorting the data and picking the middle element. You might think using quicksort and picking the middle element would be the fastest way to find the median, but there's an even faster way. It uses the partition algorithm to find the median without completely sorting the data.

To see how this works, imagine that you partition the data and, by chance, the pivot happens to end up at the middle element. You're done! All the items to the right of the pivot are larger (or equal), and all the items to the left are smaller (or equal), so if the pivot falls in the exact center of the array, then it's the median. The pivot won't end up in the center very often, but we can fix that by repartitioning the partition that contains the middle element.

Suppose your array has seven elements numbered from 0 to 6. The middle is element 3. If you partition this array and the pivot ends up at element 4, then you need to partition again from 0 to 3 (the partition that contains element 3), not from 5 to 6. If the pivot ends up at element 2, you need to partition from 3 to 6, not from 0 to 1. You continue partitioning the appropriate partitions recursively, always checking whether the pivot falls on the middle element. Eventually, it will, and you're done. Because you need fewer partitions than in quicksort, this algorithm is faster.

Extend Programming Project 1 to find the median of an array. You'll make recursive calls somewhat like those in quicksort, but they will only partition each subarray, not completely sort it. The process stops when the median is found, not when the array is sorted.

4. Selection means finding the kth largest or kth smallest element from an array. For example, you might want to select
the 7th largest element. Finding the median (as in Programming Project 3) is a special case of selection. The same partitioning process can be used, but you look for an element with a specified index number rather than the middle element. Modify the program from Programming Project 3 to allow the selection of an arbitrary element. How small an array can your program handle?

5. Implement a radix sort as described in the last section of this chapter. It should handle variable amounts of data and variable numbers of digits in the key. You could make the number base a variable as well (so it can be something other than 10), but it will be hard to see what's happening unless you develop a routine to print values in different bases.
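One possible shape for such a list-based radix sort, following the chapter's description (class and method names are made up; this is a sketch, not the book's solution): items are dealt into ten FIFO lists by one digit per pass, then copied back. A queue preserves insertion order, which keeps each sub-sort stable.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;

// Sketch: base-10 radix sort using ten FIFO lists, one pass per digit,
// starting with the 1s (least significant) digit.
public class RadixSort {
    static void radixSort(int[] a, int digits) {
        ArrayList<ArrayDeque<Integer>> lists = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            lists.add(new ArrayDeque<>());
        int divisor = 1;                              // selects the current digit
        for (int d = 0; d < digits; d++) {
            for (int x : a)                           // first inner loop: array -> lists
                lists.get((x / divisor) % 10).add(x);
            int i = 0;
            for (ArrayDeque<Integer> q : lists)       // second inner loop: lists -> array
                while (!q.isEmpty())
                    a[i++] = q.poll();                // FIFO order keeps the sub-sort stable
            divisor *= 10;                            // next pass uses the next digit
        }
    }
    public static void main(String[] args) {
        int[] a = {421, 240, 35, 532, 305, 430, 124}; // the chapter's example values
        radixSort(a, 3);
        System.out.println(Arrays.toString(a));
        // [35, 124, 240, 305, 421, 430, 532]
    }
}
```

Run on the chapter's example data, the three passes reproduce the groupings shown earlier and end with the array fully sorted.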
Binary Trees

In this chapter:
- Why Use Binary Trees?
- Tree Terminology
- An Analogy
- How Do Binary Search Trees Work?
- Finding a Node
- Inserting a Node
- Traversing the Tree
- Finding Maximum and Minimum Values
- Deleting a Node
- The Efficiency of Binary Trees
- Trees Represented as Arrays
- Duplicate Keys
- The Complete tree.java Program
- The Huffman Code

In this chapter we switch from algorithms, the focus of the preceding chapter, "Advanced Sorting," to data structures. Binary trees are one of the fundamental data storage structures used in programming. They provide advantages that the data structures we've seen so far cannot. In this chapter we'll learn why you would want to use trees, how they work, and how to go about creating them.

Why Use Binary Trees?

Why might you want to use a tree? Usually, because it combines the advantages of two other structures: an ordered array and a linked list. You can search a tree quickly, as you can an ordered array, and you can also insert and delete items quickly, as you can with a linked list. Let's explore these topics a bit before delving into the details of trees.

Slow Insertion in an Ordered Array

Imagine an array in which all the elements are arranged in order; that is, an ordered array, such as we saw in the chapter "Arrays." As we learned, you can quickly search such an array for a particular value, using a binary search. You check in the center of the array; if the object you're looking for is greater than what you find there, you narrow your search to the top half of the array; if it's less, you narrow your search to the bottom half. Applying this process repeatedly finds the object in O(logN) time. You can also quickly iterate through an ordered array, visiting each object in sorted order.

On the other hand, if you want to insert a new object into an ordered array, you first need to find where the object will go, and then move all the objects with greater keys up
one space in the array to make room for it. These multiple moves are time-consuming, requiring, on the average, moving half the items (N/2 moves). Deletion involves the same multimove operation and is thus equally slow. If you're going to be doing a lot of insertions and deletions, an ordered array is a bad choice.

Slow Searching in a Linked List

On the other hand, as we saw in the chapter "Linked Lists," insertions and deletions are quick to perform on a linked list. They are accomplished simply by changing a few references. These operations require O(1) time (the fastest Big O time). Unfortunately, however, finding a specified element in a linked list is not so easy. You must start at the beginning of the list and visit each element until you find the one you're looking for. Thus, you will need to visit an average of N/2 objects, comparing each one's key with the desired value. This process is slow, requiring O(N) time. (Notice that times considered fast for a sort are slow for data structure operations.)

You might think you could speed things up by using an ordered linked list, in which the elements were arranged in order, but this doesn't help. You still must start at the beginning and visit the elements in order, because there's no way to access a given element without following the chain of references to it. (Of course, in an ordered list it's much quicker to visit the nodes in order than it is in a non-ordered list, but that doesn't help to find an arbitrary object.)

Trees to the Rescue

It would be nice if there were a data structure with the quick insertion and deletion of a linked list, and also the quick searching of an ordered array. Trees provide both these characteristics, and are also one of the most interesting data structures.

What Is a Tree?

We'll be mostly interested in a particular kind of tree called a binary tree, but let's start by discussing trees in general before moving on to the specifics of binary trees. A tree consists of nodes connected by edges. The figure shows a tree. In such a picture of a tree (or in our Workshop applet) the nodes are represented as
circles, and the edges as lines connecting the circles. Trees have been studied extensively as abstract mathematical entities, so there's a large amount of theoretical knowledge about them. A tree is actually an instance of a more general category called a graph, but we don't need to worry about that here; we'll discuss graphs in the later chapters on graphs and weighted graphs.
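As a concrete illustration of nodes connected by edges, a general (multiway) tree node might be modeled with a list of child references. This is an illustrative sketch with made-up names; the chapter's own Node class, for binary trees specifically, appears later.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a node for a general (multiway) tree. Each edge is simply
// a reference from a parent node to one of its children.
public class GeneralTreeNode {
    long key;
    List<GeneralTreeNode> children = new ArrayList<>();

    GeneralTreeNode(long key) { this.key = key; }

    // Add an edge running downward to a new child node.
    GeneralTreeNode addChild(long childKey) {
        GeneralTreeNode child = new GeneralTreeNode(childKey);
        children.add(child);
        return child;
    }
    // Count this node plus all its descendants (its subtree).
    int size() {
        int n = 1;
        for (GeneralTreeNode c : children)
            n += c.size();
        return n;
    }
    public static void main(String[] args) {
        GeneralTreeNode root = new GeneralTreeNode(50);
        GeneralTreeNode left = root.addChild(25);
        root.addChild(75);
        left.addChild(12);
        System.out.println(root.size());   // 4 nodes in the whole tree
    }
}
```

Note that the only way to reach a node is to follow the chain of references downward from the root, just as the text describes.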
[Figure: a general (non-binary) tree, showing nodes and edges]

In computer programs, nodes often represent such entities as people, car parts, airline reservations, and so on; in other words, the typical items we store in any kind of data structure. In an OOP language like Java these real-world entities are represented by objects.

The lines (edges) between the nodes represent the way the nodes are related. Roughly speaking, the lines represent convenience: it's easy (and fast) for a program to get from one node to another if there is a line connecting them. In fact, the only way to get from node to node is to follow a path along the lines. Generally, you are restricted to going in one direction along edges: from the root downward. Edges are likely to be represented in a program by references, if the program is written in Java (or by pointers if the program is written in C or C++).

Typically, there is one node in the top row of a tree, with lines connecting to more nodes on the second row, even more on the third, and so on. Thus, trees are small on the top and large on the bottom. This may seem upside-down compared with real trees, but generally a program starts an operation at the small end of the tree, and it's (arguably) more natural to think about going from top to bottom, as in reading text.

There are different kinds of trees. The tree shown in the figure has more than two children per node. (We'll see what "children" means in a moment.) However, in this chapter we'll be discussing a specialized form of tree called a binary tree. Each node in a binary tree has a maximum of two children. More general trees, in which nodes can have more than two children, are called multiway trees. We'll see an example in the later chapter on 2-3-4 trees and external storage.

Tree Terminology

Many terms are used to describe particular aspects of trees. You need to know a few of them so our discussion will be comprehensible. Fortunately, most of these terms are related to real-world trees or to family relationships (as in parents and children),
so they're not hard to remember. The figure shows many of these terms applied to a binary tree.

[Figure: tree terms — the root; parent, left-child, and right-child nodes; a path (dashed line); levels; a subtree and its root; and leaf nodes]

Path
Think of someone walking from node to node along the edges that connect them. The resulting sequence of nodes is called a path.

Root
The node at the top of the tree is called the root. There is only one root in a tree. For a collection of nodes and edges to be defined as a tree, there must be one (and only one!) path from the root to any other node. The next figure shows a non-tree; you can see that it violates this rule.

[Figure: a non-tree]
Parent
Any node (except the root) has exactly one edge running upward to another node. The node above it is called the parent of the node.

Child
Any node may have one or more lines running downward to other nodes. These nodes below a given node are called its children.

Leaf
A node that has no children is called a leaf node or simply a leaf. There can be only one root in a tree, but there can be many leaves.

Subtree
Any node may be considered to be the root of a subtree, which consists of its children, and its children's children, and so on. If you think in terms of families, a node's subtree contains all its descendants.

Visiting
A node is visited when program control arrives at the node, usually for the purpose of carrying out some operation on the node, such as checking the value of one of its data fields or displaying it. Merely passing over a node on the path from one node to another is not considered to be visiting the node.

Traversing
To traverse a tree means to visit all the nodes in some specified order. For example, you might visit all the nodes in order of ascending key value. There are other ways to traverse a tree, as we'll see later.

Levels
The level of a particular node refers to how many generations the node is from the root. If we assume the root is Level 0, then its children will be Level 1, its grandchildren will be Level 2, and so on.

Keys
We've seen that one data field in an object is usually designated a key value. This value is used to search for the item or perform other operations on it. In tree diagrams, when a circle represents a node holding a data item, the key value of the item is typically shown in the circle. (We'll see many figures later on that show nodes containing keys.)
Binary Trees

If every node in a tree can have at most two children, the tree is called a binary tree. In this chapter we'll focus on binary trees because they are the simplest and the most common.

The two children of each node in a binary tree are called the left child and the right child, corresponding to their positions when you draw a picture of a tree. A node in a binary tree doesn't necessarily have the maximum of two children; it may have only a left child, or only a right child, or it can have no children at all (in which case it's a leaf).

The kind of binary tree we'll be dealing with in this discussion is technically called a binary search tree. The figure shows a binary search tree.

[Figure: a binary search tree]

NOTE: The defining characteristic of a binary search tree is this: a node's left child must have a key less than its parent, and a node's right child must have a key greater than or equal to its parent.

An Analogy

One commonly encountered tree is the hierarchical file structure in a computer system. The root directory of a given device (designated with the backslash, as in C:\, on many systems) is the tree's root. The directories one level below the root directory are its children. There may be many levels of subdirectories. Files represent leaves; they have no children of their own.
Clearly, a hierarchical file structure is not a binary tree, because a directory may have many children. A complete pathname, such as C:\SALES\EAST\NOVEMBER\SMITH.DAT, corresponds to the path from the root to the SMITH.DAT leaf. Terms used for the file structure, such as root and path, were borrowed from tree theory.

A hierarchical file structure differs in a significant way from the trees we'll be discussing here. In the file structure, subdirectories contain no data; they contain only references to other subdirectories or to files. Only files contain data. In a tree, every node contains data (a personnel record, car-part specifications, or whatever). In addition to the data, all nodes except leaves contain references to other nodes.

How Do Binary Search Trees Work?

Let's see how to carry out the common binary tree operations of finding a node with a given key, inserting a new node, traversing the tree, and deleting a node. For each of these operations we'll first show how to use the Binary Tree Workshop applet to carry it out; then we'll look at the corresponding Java code.

The Binary Tree Workshop Applet

Start up the Binary Tree Workshop applet. You'll see a screen something like that shown in the figure. However, because the tree in the Workshop applet is randomly generated, it won't look exactly the same as the tree in the figure.

[Figure: the Binary Tree Workshop applet]
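The defining characteristic of a binary search tree noted above (left child's key smaller than the parent's; right child's key greater than or equal) can be expressed as a small recursive check. This is an illustrative sketch with made-up names, not part of the chapter's tree.java.

```java
// Sketch: verify the binary-search-tree property on a hand-built tree.
// Each node's key must lie in a range narrowed as we descend: the
// left subtree gets keys strictly below the parent, the right subtree
// gets keys greater than or equal to the parent.
public class BstCheck {
    static class Node {
        int key;
        Node leftChild, rightChild;
        Node(int key) { this.key = key; }
    }
    // True if every key in the subtree lies within [lo, hi].
    static boolean isSearchTree(Node n, long lo, long hi) {
        if (n == null)
            return true;                              // an empty subtree is fine
        if (n.key < lo || n.key > hi)
            return false;
        return isSearchTree(n.leftChild, lo, (long) n.key - 1)  // strictly smaller
            && isSearchTree(n.rightChild, n.key, hi);           // greater or equal
    }
    public static void main(String[] args) {
        Node root = new Node(50);
        root.leftChild = new Node(25);
        root.rightChild = new Node(75);
        root.leftChild.rightChild = new Node(37);     // 37 < 50 but > 25: legal
        System.out.println(isSearchTree(root, Long.MIN_VALUE, Long.MAX_VALUE));
    }
}
```

Note that checking each node only against its own parent is not enough; the range bounds enforce the rule against every ancestor.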
Using the Workshop Applet

The key values shown in the nodes range from 0 to 99. Of course, in a real tree, there would probably be a larger range of key values. For example, if employees' Social Security numbers were used for key values, they would range up to 999,999,999.

Another difference between the Workshop applet and a real tree is that the Workshop applet is limited to a depth of five; that is, there can be no more than five levels from the root to the bottom. This restriction ensures that all the nodes in the tree will be visible on the screen. In a real tree the number of levels is unlimited (until you run out of memory).

Using the Workshop applet, you can create a new tree whenever you want. To do this, click the Fill button. A prompt will ask you to enter the number of nodes in the tree. This can vary from 1 to 31, but 15 will give you a representative tree. After typing in the number, press Fill twice more to generate the new tree. You can experiment by creating trees with different numbers of nodes.

Unbalanced Trees

Notice that some of the trees you generate are unbalanced; that is, they have most of their nodes on one side of the root or the other, as shown in the figure. Individual subtrees may also be unbalanced.

[Figure: an unbalanced tree (with an unbalanced subtree)]
Trees become unbalanced because of the order in which the data items are inserted. If these key values are inserted randomly, the tree will be more or less balanced. However, if an ascending sequence (like 10, 20, 30, 40, and so on) or a descending sequence is generated, all the values will be right children (if ascending) or left children (if descending), and the tree will be unbalanced. The key values in the Workshop applet are generated randomly, but of course some short ascending or descending sequences will be created anyway, which will lead to local imbalances. When you learn how to insert items into the tree in the Workshop applet, you can try building up a tree by inserting such an ordered sequence of items and see what happens.

If you ask for a large number of nodes when you use Fill to create a tree, you may not get as many nodes as you requested. Depending on how unbalanced the tree becomes, some branches may not be able to hold a full number of nodes. This is because the depth of the applet's tree is limited to five; the problem would not arise in a real tree.

If a tree is created by data items whose key values arrive in random order, the problem of unbalanced trees may not be too much of a problem for larger trees, because the chances of a long run of numbers in sequence is small. But key values can arrive in strict sequence; for example, when a data-entry person arranges a stack of personnel files into order of ascending employee number before entering the data. When this happens, tree efficiency can be seriously degraded. We'll discuss unbalanced trees and what to do about them in the chapter "Red-Black Trees."

Representing the Tree in Java Code

Let's see how we might implement a binary tree in Java. As with other data structures, there are several approaches to representing a tree in the computer's memory. The most common is to store the nodes at unrelated locations in memory, and connect them using references in each node that point to its children. You can also represent a tree in memory as an array, with nodes in specific positions stored in corresponding
positions in the array we'll return to this possibility at the end of this for our sample java code we'll use the approach of connecting the nodes using references note as we discuss individual operationswe'll show code fragments pertaining to that operation the complete program from which these fragments are extracted can be seen toward the end of this in listing
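The array representation mentioned above can be sketched in a few lines. This is not one of the chapter's listings; the class name and layout below are our own. A common convention is to put the root at index 0 and the children of the node at index i at indices 2*i+1 and 2*i+2, with null marking an empty position.

```java
// Sketch of the array-based tree representation mentioned above.
// The 2*i+1 / 2*i+2 child-index convention is a common textbook choice;
// this class is illustrative only, not part of the chapter's listings.
public class ArrayTree {
    private Integer[] nodes;   // null means "no node at this position"

    public ArrayTree(int size) { nodes = new Integer[size]; }

    public void setRoot(int value)          { nodes[0] = value; }
    public int leftIndex(int i)             { return 2 * i + 1; }
    public int rightIndex(int i)            { return 2 * i + 2; }
    public void setLeft(int parent, int v)  { nodes[leftIndex(parent)] = v; }
    public void setRight(int parent, int v) { nodes[rightIndex(parent)] = v; }
    public Integer get(int i)               { return nodes[i]; }

    public static void main(String[] args) {
        ArrayTree t = new ArrayTree(7);       // room for 3 full levels
        t.setRoot(50);
        t.setLeft(0, 25);                     // left child of root at index 1
        t.setRight(0, 75);                    // right child of root at index 2
        System.out.println(t.get(1) + " " + t.get(0) + " " + t.get(2));
    }
}
```

A drawback of this layout, and one reason the reference-based approach is used in this chapter, is that an unbalanced tree wastes many array slots.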
Binary Trees

The Node Class

First, we need a class of node objects. These objects contain the data representing the objects being stored (employees in an employee database, for example) and also references to each of the node's two children. Here's how that looks:

class Node
   {
   int iData;             // data used as key value
   double dData;          // other data
   Node leftChild;        // this node's left child
   Node rightChild;       // this node's right child

   public void displayNode()
      {
      // (see the listing later in this chapter for method body)
      }
   }  // end class Node

Some programmers also include a reference to the node's parent. This simplifies some operations but complicates others, so we don't include it. We do include a method called displayNode() to display the node's data, but its code isn't relevant here.

There are other approaches to designing class Node. Instead of placing the data items directly into the node, you could use a reference to an object representing the data item:

class Node
   {
   Person p1;             // reference to Person object
   Node leftChild;        // this node's left child
   Node rightChild;       // this node's right child
   }

class Person
   {
   int iData;
   double dData;
   }

This approach makes it conceptually clearer that the node and the data item it holds aren't the same thing, but it results in somewhat more complicated code, so we'll stick to the first approach.
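The idea of connecting nodes by references can be seen in a minimal, self-contained sketch. This is our own illustration, not the chapter's listing: displayNode() is simplified to a println, and the key values are arbitrary examples.

```java
// Minimal sketch of linking Node objects by references (not the chapter's
// listing; displayNode() is simplified, and the key values are examples).
public class NodeDemo {
    static class Node {
        int iData;          // data used as key value
        double dData;       // other data
        Node leftChild;     // this node's left child
        Node rightChild;    // this node's right child
        void displayNode() { System.out.println("{" + iData + ", " + dData + "}"); }
    }

    public static void main(String[] args) {
        // Link three nodes by hand: a root with one child on each side.
        Node root = new Node();
        root.iData = 50;
        Node left = new Node();
        left.iData = 25;
        Node right = new Node();
        right.iData = 75;
        root.leftChild = left;        // references, not copies, connect the nodes
        root.rightChild = right;

        root.leftChild.displayNode(); // reach a child through the root
        root.rightChild.displayNode();
    }
}
```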
The Tree Class

We'll also need a class from which to instantiate the tree itself: the object that holds all the nodes. We'll call this class Tree. It has only one field: a Node variable that holds the root. It doesn't need fields for the other nodes because they are all accessed from the root. The Tree class has a number of methods. They are used for finding, inserting, and deleting nodes, for different kinds of traverses, and for displaying the tree. Here's a skeleton version:

class Tree
   {
   private Node root;                    // the only data field in Tree

   public Node find(int key)             // find node with given key
      { /* body shown later */ }
   public void insert(int id, double dd) // insert new node
      { /* body shown later */ }
   public void delete(int id)            // delete node with given key
      { /* body shown later */ }
   // various other methods
   }  // end class Tree

The TreeApp Class

Finally, we need a way to perform operations on the tree. Here's how you might write a class with a main() routine to create a tree, insert three nodes into it, and then search for one of them. We'll call this class TreeApp:

class TreeApp
   {
   public static void main(String[] args)
      {
      Tree theTree = new Tree();        // make a tree

      theTree.insert(50, 1.5);          // insert 3 nodes
      theTree.insert(25, 1.7);          //   (example key/data values)
      theTree.insert(75, 1.9);

      Node found = theTree.find(25);    // find node with key 25
      if(found != null)
         System.out.println("Found the node with key 25");
      else
         System.out.println("Could not find node with key 25");
      }  // end main()
   }  // end class TreeApp

TIP: In the complete listing at the end of this chapter, the main() routine also provides a primitive user interface so you can decide from the keyboard whether you want to insert, find, delete, or perform other operations.

Next we'll look at individual tree operations: finding a node, inserting a node, traversing the tree, and deleting a node.

Finding a Node

Finding a node with a specific key is the simplest of the major tree operations, so let's start with that. Remember that the nodes in a binary search tree correspond to objects containing information. They could be Person objects, with an employee number as the key and also perhaps name, address, telephone number, salary, and other fields. Or they could represent car parts, with a part number as the key value and fields for quantity on hand, price, and so on. However, the only characteristics of each node that we can see in the Workshop applet are a number and a color. A node is created with these two characteristics, and keeps them throughout its life.

Using the Workshop Applet to Find a Node

Look at the Workshop applet, and pick a node, preferably one near the bottom of the tree (as far from the root as possible). The number shown in this node is its key value. We're going to demonstrate how the Workshop applet finds the node, given the key value. For purposes of this discussion we'll assume you've decided to find the node with a particular key value, as shown in the figure below. Of course, when you run the Workshop applet, you'll get a different tree and will need to pick a different key value. Click the Find button. The prompt will ask for the value of the node to find. Enter the number on the node you chose. Click Find twice more.
[Figure: Finding a node]

As the Workshop applet looks for the specified node, the prompt will display either "Going to left child" or "Going to right child," and the red arrow will move down one level to the right or left. In the figure the arrow starts at the root. The program compares the key value with the value at the root. The key is less, so the program knows the desired node must be on the left side of the tree: either the root's left child or one of this child's descendants. The comparison with the left child of the root shows that the desired node is in this child's right subtree, so the arrow goes to the root of that subtree. Here the key is again greater than the node, so we go to the right, and then to the left. This time the comparison shows the key equals the node's key value, so we've found the node we want.

The Workshop applet doesn't do anything with the node after finding it, except to display a message saying it has been found. A serious program would perform some operation on the found node, such as displaying its contents or changing one of its fields.

Java Code for Finding a Node

Here's the code for the find() routine, which is a method of the Tree class:

public Node find(int key)              // find node with given key
   {                                   // (assumes non-empty tree)
   Node current = root;                // start at root
   while(current.iData != key)         // while no match,
      {
      if(key < current.iData)          // go left?
         current = current.leftChild;
      else                             // or go right?
         current = current.rightChild;
      if(current == null)              // if no child,
         return null;                  //    didn't find it
      }
   return current;                     // found it
   }  // end find()

This routine uses a variable current to hold the node it is currently examining. The argument key is the value to be found. The routine starts at the root. (It has to; this is the only node it can access directly.) That is, it sets current to the root. Then, in the while loop, it compares the value to be found, key, with the value of the iData field (the key field) in the current node. If key is less than this field, current is set to the node's left child. If key is greater than (or equal to) the node's iData field, current is set to the node's right child.

Can't Find the Node

If current becomes equal to null, we couldn't find the next child node in the sequence; we've reached the end of the line without finding the node we were looking for, so it can't exist. We return null to indicate this fact.

Found the Node

If the condition of the while loop is not satisfied, so that we exit from the bottom of the loop, the iData field of current is equal to key; that is, we've found the node we want. We return the node, so that the routine that called find() can access any of the node's data.

Tree Efficiency

As you can see, the time required to find a node depends on how many levels down it is situated. In the Workshop applet there can be up to 31 nodes, but no more than five levels, so you can find any node using a maximum of only five comparisons. This is O(log N) time, or more specifically O(log2 N) time: the logarithm to the base 2. We'll discuss this further toward the end of this chapter.

Inserting a Node

To insert a node, we must first find the place to insert it. This is much the same process as trying to find a node that turns out not to exist, as described in the "Can't Find the Node" section. We follow the path from the root to the appropriate node, which will be the parent of the new node. When this parent is found, the new node is connected as its left or right child, depending on whether the new node's key is less or greater than that of the parent.

Using the Workshop Applet to Insert a Node

To insert a new node with the Workshop applet, press the Ins button. You'll be asked to type the key value of the node to be inserted. Type a number into the text field. The first step for the program in inserting a node is to find where it should be inserted. The figure below shows how this step looks.

[Figure: Inserting a node — (a) before insertion, (b) after insertion]

The new value is compared with the keys along the path from the root. At some point we want to go left (say), because the new value is less than the current node's key, but the current node has no left child: its leftChild field is null. When it sees this null, the insertion routine has found the place to attach the new node. The Workshop applet does this by creating a new node with the given value (and a randomly generated color) and connecting it as the left child of the node where the null was found, as shown in part (b) of the figure.

Java Code for Inserting a Node

The insert() function starts by creating the new node, using the data supplied as arguments.
Next, insert() must determine where to insert the new node. This is done using roughly the same code as finding a node, described in the section "Java Code for Finding a Node." The difference is that when you're simply trying to find a node and you encounter a null (non-existent) node, you know the node you're looking for doesn't exist, so you return immediately. When you're trying to insert a node, you insert it (creating it first, if necessary) before returning.

The value to be searched for is the data item passed in the argument id. The while loop uses true as its condition because it doesn't care if it encounters a node with the same value as id; it treats another node with the same key value as if it were simply greater than the key value. (We'll return to the subject of duplicate nodes later in this chapter.)

A place to insert a new node will always be found (unless you run out of memory); when it is, and the new node is attached, the while loop exits with a return statement. Here's the code for the insert() function:

public void insert(int id, double dd)
   {
   Node newNode = new Node();         // make new node
   newNode.iData = id;                // insert data
   newNode.dData = dd;
   if(root==null)                     // no node in root
      root = newNode;
   else                               // root occupied
      {
      Node current = root;            // start at root
      Node parent;
      while(true)                     // (exits internally)
         {
         parent = current;
         if(id < current.iData)       // go left?
            {
            current = current.leftChild;
            if(current == null)       // if end of the line,
               {                      // insert on left
               parent.leftChild = newNode;
               return;
               }
            }  // end if go left
         else                         // or go right?
            {
            current = current.rightChild;
            if(current == null)       // if end of the line
               {                      // insert on right
               parent.rightChild = newNode;
               return;
               }
            }  // end else go right
         }  // end while
      }  // end else not root
   }  // end insert()

We use a new variable, parent (the parent of current), to remember the last non-null node we encountered. This is necessary because current is set to null in the process of discovering that its previous value did not have an appropriate child. If we didn't save parent, we would lose track of where we were.

To insert the new node, change the appropriate child pointer in parent (the last non-null node you encountered) to point to the new node. If you were looking unsuccessfully for parent's left child, you attach the new node as parent's left child; if you were looking for its right child, you attach the new node as its right child.

Traversing the Tree

Traversing a tree means visiting each node in a specified order. This process is not as commonly used as finding, inserting, and deleting nodes. One reason for this is that traversal is not particularly fast. But traversing a tree is useful in some circumstances, and it's theoretically interesting. (It's also simpler than deletion, the discussion of which we want to defer as long as possible.)

There are three simple ways to traverse a tree. They're called preorder, inorder, and postorder. The order most commonly used for binary search trees is inorder, so let's look at that first and then return briefly to the other two.

Inorder Traversal

An inorder traversal of a binary search tree will cause all the nodes to be visited in ascending order, based on their key values. If you want to create a sorted list of the data in a binary tree, this is one way to do it.

The simplest way to carry out a traversal is the use of recursion (discussed in the chapter "Recursion"). A recursive method to traverse the entire tree is called with a node as an argument. Initially, this node is the root. The method needs to do only three things:
1. Call itself to traverse the node's left subtree.
2. Visit the node.
3. Call itself to traverse the node's right subtree.

Remember that visiting a node means doing something to it: displaying it, writing it to a file, or whatever.

Traversals work with any binary tree, not just with binary search trees. The traversal mechanism doesn't pay any attention to the key values of the nodes; it only concerns itself with whether a node has children.

Java Code for Traversing

The actual code for inorder traversal is so simple we show it before seeing how traversal looks in the Workshop applet. The routine, inOrder(), performs the three steps already described. The visit to the node consists of displaying the contents of the node. Like any recursive function, it must have a base case: the condition that causes the routine to return immediately, without calling itself. In inOrder() this happens when the node passed as an argument is null. Here's the code for the inOrder() method:

private void inOrder(Node localRoot)
   {
   if(localRoot != null)
      {
      inOrder(localRoot.leftChild);
      System.out.print(localRoot.iData + " ");
      inOrder(localRoot.rightChild);
      }
   }

This method is initially called with the root as an argument:

inOrder(root);

After that, it's on its own, calling itself recursively until there are no more nodes to visit.

Traversing a Three-Node Tree

Let's look at a simple example to get an idea of how this recursive traversal routine works. Imagine traversing a tree with only three nodes: a root (A), with a left child (B) and a right child (C), as shown in the figure below.
[Figure: The inOrder() method applied to a three-node tree. inOrder(A) calls inOrder(B), visits A, then calls inOrder(C); inOrder(B) and inOrder(C) each call inOrder(null) twice, and each inOrder(null) returns immediately.]

We start by calling inOrder() with the root A as an argument. This incarnation of inOrder() we'll call inOrder(A). inOrder(A) first calls inOrder() with its left child, B, as an argument. This second incarnation of inOrder() we'll call inOrder(B).

inOrder(B) now calls itself with its left child as an argument. However, it has no left child, so this argument is null. This creates an invocation of inOrder() we could call inOrder(null). There are now three instances of inOrder() in existence: inOrder(A), inOrder(B), and inOrder(null). However, inOrder(null) returns immediately when it finds its argument is null. (We all have days like that.)

Now inOrder(B) goes on to visit B; we'll assume this means to display it. Then inOrder(B) calls inOrder() again, with its right child as an argument. Again this argument is null, so the second inOrder(null) returns immediately. Now inOrder(B) has carried out steps 1, 2, and 3, so it returns (and thereby ceases to exist).
Now we're back to inOrder(A), just returning from traversing A's left child. We visit A and then call inOrder() again with C as an argument, creating inOrder(C). Like inOrder(B), inOrder(C) has no children, so step 1 returns with no action, step 2 visits C, and step 3 returns with no action. inOrder(C) now returns to inOrder(A). However, inOrder(A) is now done, so it returns and the entire traversal is complete.

The order in which the nodes were visited is A, B, C; they have been visited inorder. In a binary search tree this would be the order of ascending keys. More complex trees are handled similarly: the inOrder() function calls itself for each node, until it has worked its way through the entire tree.

Traversing with the Workshop Applet

To see what a traversal looks like with the Workshop applet, repeatedly press the Trav button. (You don't need to type in any numbers.) Here's what happens when you use the Tree Workshop applet to traverse inorder the tree shown in the figure below. This is slightly more complex than the three-node tree seen previously. The red arrow starts at the root. The table that follows shows the sequence of node keys and the corresponding messages. The key sequence is displayed at the bottom of the Workshop applet screen.

[Figure: Traversing a tree inorder]
Table: Workshop Applet Traversal

The table's columns are the step number, the node the red arrow is on, the message displayed, and the growing list of nodes visited. (The specific key values depend on the tree you get.) Starting at the root, the messages run in this sequence: "Will check left child" (repeated as the arrow descends to the leftmost node), "Will visit this node," "Will check right child," "Will go to root of previous subtree," and so on for each subtree, ending with "Done traversal."

It may not be obvious, but for each node, the routine traverses the node's left subtree, visits the node, and traverses the right subtree. The traversal algorithm isn't as complicated as it looks. The best way to get a feel for what's happening is to traverse a variety of different trees with the Workshop applet.

Preorder and Postorder Traversals

You can traverse the tree in two ways besides inorder; they're called preorder and postorder. It's fairly clear why you might want to traverse a tree inorder, but the motivation for preorder and postorder traversals is more obscure. However, these traversals are indeed useful if you're writing programs that parse or analyze algebraic expressions. Let's see why that should be true.

A binary tree (not a binary search tree) can be used to represent an algebraic expression that involves the binary arithmetic operators +, -, /, and *. The root node holds an operator, and the other nodes hold either a variable name (like A, B, or C) or another operator. Each subtree is a valid algebraic expression.
[Figure: A tree representing the algebraic expression A*(B+C). Infix: A*(B+C); prefix: *A+BC; postfix: ABC+*]

For example, the binary tree shown in the figure represents the algebraic expression

A*(B+C)

This is called infix notation; it's the notation normally used in algebra. (For more on infix and postfix, see the section "Parsing Arithmetic Expressions" in the chapter "Stacks and Queues.") Traversing the tree inorder will generate the correct inorder sequence A*B+C, but you'll need to insert the parentheses yourself.

What does all this have to do with preorder and postorder traversals? Let's see what's involved. For these other traversals the same three steps are used as for inorder, but in a different sequence. Here's the sequence for a preOrder() method:

1. Visit the node.
2. Call itself to traverse the node's left subtree.
3. Call itself to traverse the node's right subtree.

Traversing the tree shown in the figure using preorder would generate the expression

*A+BC

This is called prefix notation. One of the nice things about it is that parentheses are never required; the expression is unambiguous without them. Starting on the left, each operator is applied to the next two things in the expression. For the first operator, *, these two things are A and +BC. For the second operator, +, the two things are B and C, so this last expression is B+C in inorder notation. Inserting that into the original expression *A+BC (preorder) gives us A*(B+C) in inorder.

By using different traversals of the tree, we can transform one form of the algebraic expression into another.
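These transformations can be sketched concretely. The tiny ExprNode class and traversal methods below are our own minimal stand-ins, not the chapter's Tree class; they use char data so the tree can hold operators and variable names.

```java
// Minimal sketch (not the chapter's Tree class): traversing a hand-built
// expression tree for A*(B+C) in preorder, inorder, and postorder.
public class ExprTraversal {
    // A bare-bones node holding a character (operator or variable name).
    static class ExprNode {
        char data;
        ExprNode left, right;
        ExprNode(char d) { data = d; }
    }

    static String preOrder(ExprNode n) {                        // visit, left, right
        if (n == null) return "";
        return n.data + preOrder(n.left) + preOrder(n.right);
    }
    static String inOrder(ExprNode n) {                         // left, visit, right
        if (n == null) return "";
        return inOrder(n.left) + n.data + inOrder(n.right);
    }
    static String postOrder(ExprNode n) {                       // left, right, visit
        if (n == null) return "";
        return postOrder(n.left) + postOrder(n.right) + n.data;
    }

    public static void main(String[] args) {
        // Build the tree for A*(B+C): '*' at the root, '+' as its right child.
        ExprNode root = new ExprNode('*');
        root.left = new ExprNode('A');
        root.right = new ExprNode('+');
        root.right.left = new ExprNode('B');
        root.right.right = new ExprNode('C');

        System.out.println("prefix:  " + preOrder(root));   // *A+BC
        System.out.println("infix:   " + inOrder(root));    // A*B+C (parentheses lost)
        System.out.println("postfix: " + postOrder(root));  // ABC+*
    }
}
```

Note that the inorder result loses the parentheses, just as the text warns: the traversal sees only the tree's shape, not the grouping the parentheses expressed.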
The third kind of traversal, postorder, contains the three steps arranged in yet another way:

1. Call itself to traverse the node's left subtree.
2. Call itself to traverse the node's right subtree.
3. Visit the node.

For the tree in the figure, visiting the nodes with a postorder traversal would generate the expression

ABC+*

This is called postfix notation. It means "apply the last operator in the expression, *, to the first and second things." The first thing is A, and the second thing is BC+.

BC+ means "apply the last operator in the expression, +, to the first and second things." The first thing is B and the second thing is C, so this gives us (B+C) in infix. Inserting this in the original expression ABC+* (postfix) gives us A*(B+C) in infix.

NOTE: The listing at the end of this chapter contains methods for preorder and postorder traversals, as well as for inorder.

We won't show the details here, but you can fairly easily construct a tree like the one in the figure using a postfix expression as input. The approach is analogous to that of evaluating a postfix expression, which we saw in the postfix.java program in the chapter "Stacks and Queues." However, instead of storing operands on the stack, we store entire subtrees. We read along the postfix string as we did in postfix.java. Here are the steps when we encounter an operand:

1. Make a tree with one node that holds the operand.
2. Push this tree onto the stack.

Here are the steps when we encounter an operator:

1. Pop two operand trees B and C off the stack.
2. Create a new tree A with the operator in its root.
3. Attach B as the right child of A.
4. Attach C as the left child of A.
5. Push the resulting tree back on the stack.
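The steps above can be sketched as follows. This is our own minimal illustration, not the book's listing: it reuses a bare-bones char-holding node and java.util.ArrayDeque as the stack, and prints a fully parenthesized inorder rendering so the recovered structure is visible.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of building an expression tree from a postfix string using the
// steps above. The class and method names are our own, not the book's.
public class PostfixTreeBuilder {
    static class Node {
        char data;
        Node left, right;
        Node(char d) { data = d; }
    }

    static boolean isOperator(char c) {
        return c == '+' || c == '-' || c == '*' || c == '/';
    }

    static Node buildTree(String postfix) {
        Deque<Node> stack = new ArrayDeque<>();
        for (char ch : postfix.toCharArray()) {
            Node node = new Node(ch);
            if (isOperator(ch)) {
                node.right = stack.pop();  // first pop becomes the right child
                node.left = stack.pop();   // second pop becomes the left child
            }
            stack.push(node);              // operand leaf or assembled subtree
        }
        return stack.pop();                // the complete expression tree
    }

    // Inorder traversal, fully parenthesized so the structure is visible.
    static String inOrder(Node n) {
        if (n.left == null) return "" + n.data;  // leaf: an operand
        return "(" + inOrder(n.left) + n.data + inOrder(n.right) + ")";
    }

    public static void main(String[] args) {
        Node root = buildTree("ABC+*");
        System.out.println(inOrder(root));  // prints (A*(B+C))
    }
}
```

Popping into the right child first matters: the subtree pushed last is the operator's second operand, which keeps non-commutative expressions like AB- meaning A-B rather than B-A.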