Execution Modes
There are two ways to use the Python interpreter: (a) interactive mode and (b) script mode. Interactive mode allows execution of individual statements instantaneously, whereas script mode allows us to write more than one instruction in a file, called a Python source code file, that can be executed as a whole.

(a) Interactive mode
To work in interactive mode, we simply type a Python statement at the prompt. As soon as we press Enter, the interpreter executes the statement and displays the result, as shown in the figure. Working in interactive mode is convenient for testing a single line of code for instant execution, but in interactive mode we cannot save the statements for future use, and we have to retype the statements to run them again.

(b) Script mode
In script mode, we can write a Python program in a file, save it, and then use the interpreter to execute it. Python scripts are saved as files whose names have the extension .py. By default, Python scripts are saved in the Python installation folder. To execute a script, we can either type the file name along with its path at the prompt (for example, if the name of the file is prog-1.py, we type prog-1.py), or open the program directly from IDLE, as shown in the figure. While working in script mode, after saving the file, click Run -> Run Module from the menu. The output appears on the shell, as shown in the figure.
Program: Write a program to show a print statement in script mode.
Figure: Python source code file (prog-1.py). Figure: Execution of Python in script mode using IDLE. Figure: Output of the program executed in script mode.

Python Keywords
Keywords are reserved words. Each keyword has a specific meaning to the Python interpreter, and we can use a keyword in our program only for the purpose for which it has been defined. As Python is case sensitive, keywords must be written exactly as given below:
False  None  True  and  as  assert  break  class  continue  def  del
elif  else  except  finally  for  from  global  if  import  in  is
lambda  nonlocal  not  or  pass  raise  return  try  while  with  yield

Identifiers
In programming languages, identifiers are names used to identify a variable, function, or other entities in a program. The rules for naming an identifier in Python are as follows:
- The name should begin with an uppercase or lowercase alphabet or an underscore sign (_). This may be followed by any combination of characters a-z, A-Z, 0-9 or underscore (_). Thus, an identifier cannot start with a digit.
- It can be of any length, although it is preferred to keep it short and meaningful.
- It should not be a keyword or reserved word given in the table above.
- We cannot use special symbols like !, @, #, $, %, etc. in identifiers.
For example, to find the average of marks obtained by a student in three subjects, we can choose the identifiers marks1, marks2, marks3 and avg rather than a, b, c:
avg = (marks1 + marks2 + marks3)/3
Similarly, to calculate the area of a rectangle, we can use identifier names such as area, length and breadth instead of single alphabets, for clarity and more readability:
area = length * breadth

Variables
A variable in a program is uniquely identified by a name (identifier). A variable in Python refers to an object, that is, an item or element that is stored in memory. The value of a variable can be a string (e.g., 'global citizen'), a numeric value, or any combination of alphanumeric characters. In Python, we can use an assignment statement to create new variables and assign specific values to them.
gender = 'M'
price = 987.9
message = "Keep smiling"

Program: Write a program to display values of variables in Python.
#Program
#To display values of variables
message = "Keep smiling"
userNo = 101
print(message)
print('User number is', userNo)
Output:
Keep smiling
User number is 101

In this program, the variable message holds a string type value, and so its content is assigned within double quotes (it can also be within single quotes ' '), whereas the value of the variable userNo is not enclosed in quotes as it is a numeric value. Variable declaration is implicit in Python, which means variables are automatically declared and defined when they are assigned a value for the first time. Variables must always be assigned values before they are used in expressions, as otherwise this will lead to an error in the program. Wherever a variable name occurs in an expression, the interpreter replaces it with the value of that particular variable.

Program: Write a Python program to find the area of a rectangle, given its length and breadth.
#Program
#To find the area of a rectangle
length = 10
breadth = 5
area = length * breadth
print(area)
Output:
50

Comments
Comments are used to add a remark or a note in the source code. Comments are not executed by the interpreter.
They are added with the purpose of making the source code easier for humans to understand. They are used primarily to document the meaning and purpose of the source code and its input and output requirements, so that we can remember later how it functions and how to use it. Large and complex software may require programmers to work in teams, and sometimes a program written by one programmer is required to be used or maintained by another programmer. In such situations, documentation in the form of comments is needed to understand the working of the program. In Python, a comment starts with # (hash sign). Everything following the # till the end of that line is treated as a comment, and the interpreter simply ignores it while executing the statement.

Example:
#Variable amount is the total spending on
#grocery
amount = 3400
#totalMarks is the sum of marks in all the tests
#of Mathematics
totalMarks = test1 + test2 + finalTest

Program: Write a Python program to find the sum of two numbers.
#Program
#To find the sum of two numbers
num1 = 10
num2 = 20
result = num1 + num2
print(result)
Output:
30

Everything is an Object
Python treats every value or data item, whether numeric, string, or of any other type (discussed in the next section), as an object, in the sense that it can be assigned to some variable or passed to a function as an argument. Every object in Python is assigned a unique identity (id), which remains the same for the lifetime of that object. This id is akin to the memory address of the object. The function id() returns the identity of an object.

In the context of object oriented programming (OOP), objects are representations of the real world, such as an employee, student, vehicle, box or book. In any object oriented programming language like C++ or Java, each object has two things associated with it: (i) data or attributes and (ii) behaviour or methods. Further, there are concepts of class and class hierarchies from which objects can be instantiated. However, OOP concepts are not in the scope of our present discussion. Python also comes under the category of object oriented programming; however, in Python the definition of an object is loosely cast, as some objects may not have attributes and others may not have methods.
Example:
>>> num1 = 5
>>> id(num1)      #identity of num1
>>> num2 = num1
>>> id(num2)      #identity of num1 and num2 are the same,
                  #as both refer to the same object

Data Types
Every value belongs to a specific data type in Python. The data type identifies the type of data values a variable can hold and the operations that can be performed on that data. Figure: Different data types in Python.

Number
The number data type stores numerical values only. It is further classified into three different types: int, float and complex.
int: integer numbers, e.g., -12, -3, 0, 125
float: real or floating point numbers, e.g., -1.23, 0.0034, 451.65
complex: complex numbers, e.g., 1+4j, -2-3j

The boolean data type (bool) is a subtype of integer. It is a unique data type, consisting of two constants, True and False. A boolean True value is non-zero, non-null and non-empty. Boolean False is the value zero.
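As a small illustrative session (not from the textbook; the values are chosen only for illustration), the following shows how non-zero and non-empty values behave as True:

>>> bool(0)
False
>>> bool(-17)
True
>>> bool("")
False
>>> bool("hello")
True
>>> True + True        #bool is a subtype of int, so True behaves as 1
2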
let us now try to execute few statements in interactive mode to determine the data type of the variable using built-in function type(example num type(num num - type(num var true type(var float - type(float float - * ** print(float type(float )- var - + print(var type(var )(- + jvariables of simple data types like integersfloatbooleanetc hold single values but such variables are not useful to hold long list of informationfor examplenames of the months in yearnames of students in classnames and numbers in phone book or the list of artefacts in museum for thispython provides data types like tupleslistsdictionaries and sets sequence python sequence is an ordered collection of itemswhere each item is indexed by an integer the three types of sequence data types available in python are stringslists and tuples we will learn about each of them in detail in later brief introduction to these data types is as follows(astring string is group of characters these characters may be alphabetsdigits or special characters including spaces string values are enclosed either in single quotation - notes
computer science class xi notes marks ( 'hello'or in double quotation marks ( "hello"the quotes are not part of the stringthey are used to mark the beginning and end of the string for the interpreter for examplestr 'hello friendstr " we cannot perform numerical operations on stringseven when the string contains numeric valueas in str (blist list is sequence of items separated by commas and the items are enclosed in square brackets example #to create list list [ "new delhi"" " #print the elements of the list list print(list [ 'new delhi'' ' (ctuple tuple is sequence of items separated by commas and items are enclosed in parenthesis this is unlike listwhere values are enclosed in brackets once createdwe cannot change the tuple example #create tuple tuple tuple ( "apple" ' '#print the elements of the tuple tuple print(tuple ( "apple" ' 'set set is an unordered collection of items separated by commas and the items are enclosed in curly brackets set is similar to listexcept that it cannot have duplicate entries once createdelements of set cannot be changed example #create set set { , , ,"new delhi"print(type(set )print(set { "new delhi"#duplicate elements are not included in set -
set { , , , print(set { none none is special data type with single value it is used to signify the absence of value in situation none supports no special operationsand it is neither same as false nor (zeroexample myvar none print(type(myvar)print(myvarnone mapping mapping is an unordered data type in python currentlythere is only one standard mapping data type in python called dictionary (adictionary dictionary in python holds data items in key-value pairs items in dictionary are enclosed in curly brackets dictionaries permit faster access to data every key is separated from its value using colon (:sign the key value pairs of dictionary can be accessed using the key the keys are usually strings and their values can be any data type in order to access any value in the dictionarywe have to specify its key in square brackets example #create dictionary dict {'fruit':'apple''climate':'cold''price(kg)': print(dict {'fruit''apple''climate''cold''price(kg)' print(dict ['price(kg)'] mutable and immutable data types sometimes we may require to change or update the values of certain variables used in program howeverfor certain data typespython does not allow us to -
computer science class xi change the values once variable of that type has been created and assigned values variables whose values can be changed after they are created and assigned are called mutable variables whose values cannot be changed after they are created and assigned are called immutable when an attempt is made to update the value of an immutable variablethe old variable is destroyed and new variable is created by the same name in memory python data types can be classified into mutable and immutable as shown in figure figure classification of data types let us now see what happens when an attempt is made to update the value of variable num this statement will create an object with value and the object is referenced by the identifier num as shown in figure figure object and its identifier figure variables with same value have same identifier - num num the statement num num will make num refer to the value also being referred by num and stored at memory location numbersay sonum shares the referenced location with num as shown in figure
in this manner makes the assignment effective by copying only the referenceand not the datanum num figure variables with different values have different identifiers this statement num num links the variable num to new object stored at memory location number say having value as num is an integerwhich is an immutable typeit is rebuiltas shown in figure deciding usage of python data types it is preferred to use lists when we need simple iterable collection of data that may go for frequent modifications for exampleif we store the names of students of class in listthen it is easy to update the list when some new students join or some leave the course tuples are used when we do not need any change in the data for examplenames of months in year when we need uniqueness of elements and to avoid duplicacy it is preferable to use setsfor examplelist of artefacts in museum if our data is being constantly modified or we need fast lookup based on custom key or we need logical association between the key value pairit is advised to use dictionaries mobile phone book is good application of dictionary operators an operator is used to perform specific mathematical or logical operation on values the values that the operators work on are called operands for examplein the expression numthe value and the variable num are operands and the (plussign is an operator python supports several kinds of operators whose categorisation is briefly explained in this section - python compares strings lexicographicallyusing ascii value of the characters if the first character of both the strings are samethe second character is comparedand so on
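To complement the discussion of mutable and immutable data types above, here is a small illustrative sketch (the values are assumed) contrasting an immutable int with a mutable list using id():

num1 = 5
print(id(num1))        #identity of the object 5
num1 = num1 + 1        #rebinds num1 to a new object; the old object is not changed
print(id(num1))        #generally a different identity now

marks = [78, 92]       #lists are mutable
print(id(marks))
marks.append(85)       #the same list object is updated in place
print(id(marks))       #same identity as before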
Arithmetic Operators
Python supports arithmetic operators that are used to perform the four basic arithmetic operations as well as modular division, floor division and exponentiation. (Try the examples in the lab.)
+ (addition): adds the two numeric values on either side of the operator, e.g., num1 + num2. This operator can also be used to concatenate two strings on either side of the operator, e.g., str1 = "hello", str2 = "india", str1 + str2 gives 'helloindia'.
- (subtraction): subtracts the operand on the right from the operand on the left, e.g., num1 - num2.
* (multiplication): multiplies the two values on both sides of the operator, e.g., num1 * num2. It also repeats the item on the left of the operator if the first operand is a string and the second operand is an integer value, e.g., str1 = 'india', str1 * 2 gives 'indiaindia'.
/ (division): divides the operand on the left by the operand on the right and returns the quotient, e.g., num1 / num2.
% (modulus): divides the operand on the left by the operand on the right and returns the remainder, e.g., num1 % num2.
// (floor division): divides the operand on the left by the operand on the right and returns the quotient by removing the decimal part. It is sometimes also called integer division, e.g., num1 // num2.
** (exponent): performs exponential (power) calculation on the operands, that is, it raises the operand on the left to the power of the operand on the right, e.g., num1 ** num2.
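A brief illustrative session (operand values assumed, not taken from the textbook) exercising the arithmetic operators listed above:

>>> num1, num2 = 17, 5
>>> num1 + num2, num1 - num2, num1 * num2
(22, 12, 85)
>>> num1 / num2        #division returns the quotient with the decimal part
3.4
>>> num1 // num2       #floor division removes the decimal part
3
>>> num1 % num2        #modulus returns the remainder
2
>>> num1 ** 2          #exponentiation
289
>>> 'india' * 3        #repetition
'indiaindiaindia'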
Relational Operators
A relational operator compares the values of the operands on either side and determines the relationship among them. Assume, for the following examples, the Python variables num1 = 10, num2 = 0, num3 = 10, str1 = "good", str2 = "afternoon". (Try the examples in the lab.)
== (equals to): if the values of the two operands are equal, then the condition is True, otherwise it is False, e.g., num1 == num2 gives False; str1 == str2 gives False.
!= (not equal to): if the values of the two operands are not equal, then the condition is True, otherwise False, e.g., num1 != num2 gives True; num1 != num3 gives False.
> (greater than): if the value of the left-side operand is greater than the value of the right-side operand, then the condition is True, otherwise False, e.g., num1 > num2 gives True; str1 > str2 gives True.
< (less than): if the value of the left-side operand is less than the value of the right-side operand, then the condition is True, otherwise False, e.g., num1 < num2 gives False; str1 < str2 gives False.
>= (greater than or equal to): if the value of the left-side operand is greater than or equal to the value of the right-side operand, then the condition is True, otherwise False, e.g., num1 >= num3 gives True.
<= (less than or equal to): if the value of the left operand is less than or equal to the value of the right operand, then the condition is True, otherwise False, e.g., num2 <= num1 gives True.

Assignment Operators
An assignment operator assigns or changes the value of the variable on its left.
= : assigns the value of the right-side operand to the left-side operand, e.g., country = 'India' assigns the string 'India' to the variable country.
+= : adds the value of the right-side operand to the left-side operand and assigns the result to the left-side operand. Note: x += y is the same as x = x + y. It also works for strings, e.g., str1 = 'hello', str1 += 'india' makes str1 equal to 'helloindia'.
-= : subtracts the value of the right-side operand from the left-side operand and assigns the result to the left-side operand. Note: x -= y is the same as x = x - y.
*= : multiplies the value of the right-side operand with the value of the left-side operand and assigns the result to the left-side operand. Note: x *= y is the same as x = x * y, e.g., a = 'india', a *= 3 makes a equal to 'indiaindiaindia'.
/= : divides the value of the left-side operand by the value of the right-side operand and assigns the result to the left-side operand. Note: x /= y is the same as x = x / y.
%= : performs the modulus operation using the two operands and assigns the result to the left-side operand. Note: x %= y is the same as x = x % y.
//= : performs floor division using the two operands and assigns the result to the left-side operand. Note: x //= y is the same as x = x // y.
**= : performs exponential (power) calculation on the operands and assigns the result to the left-side operand. Note: x **= y is the same as x = x ** y.
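A short illustrative session (values assumed) for the relational and assignment operators described above:

>>> num1, num2 = 5, 6
>>> num1 == num2, num1 != num2, num1 < num2, num1 >= num2
(False, True, True, False)
>>> 'good' < 'goods'   #strings are compared lexicographically
True
>>> num3 = 10
>>> num3 += 5          #same as num3 = num3 + 5
>>> num3
15
>>> num3 //= 4         #same as num3 = num3 // 4
>>> num3
3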
Logical Operators
There are three logical operators supported by Python. These operators (and, or, not) are to be written in lower case only. A logical operator evaluates to either True or False based on the logical operands on either side. Every value is logically either True or False. By default, all values are True except None, False, 0 (zero), the empty collections "", (), [], {}, and a few other special values. So if we say num1 = 10 and num2 = -20, then both num1 and num2 are logically True.
and (logical AND): if both the operands are True, then the condition becomes True, e.g., with num1 = 10 and num2 = -20, bool(num1 and num2) gives True, and with num3 = 0, bool(num3 and num1) gives False. (True and True is True; True and False is False; False and False is False.)
or (logical OR): if any one of the two operands is True, then the condition becomes True, e.g., bool(num3 or num1) gives True. (True or True is True; True or False is True; False or False is False.)
not (logical NOT): used to reverse the logical state of its operand, e.g., bool(not num1) gives False.
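The following illustrative session (values assumed) shows how and, or and not work with operands that are logically True or False:

>>> num1, num2, num3 = 10, -20, 0
>>> bool(num1 and num2)    #both operands are logically True
True
>>> bool(num3 and num1)    #0 is logically False
False
>>> bool(num3 or num2)
True
>>> not num1
False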
Identity Operators
Identity operators are used to determine whether the value of a variable is of a certain type or not. Identity operators can also be used to determine whether two variables are referring to the same object or not. There are two identity operators.
is: evaluates to True if the variables on either side of the operator point to the same object (memory location) and False otherwise; var1 is var2 results in True if id(var1) is equal to id(var2). For example, with num1 = 5, type(num1) is int gives True, and after num2 = num1, id(num1) equals id(num2), so num1 is num2 gives True.
is not: evaluates to True if the variables on either side of the operator do not point to the same object (memory location) and False otherwise; var1 is not var2 results in True if id(var1) is not equal to id(var2). For example, num1 is not num2 gives False.

Membership Operators
Membership operators are used to check if a value is a member of the given sequence or not.
in: returns True if the variable/value is found in the specified sequence and False otherwise, e.g., 3 in [1, 2, 3, 4] gives True; 'a' in 'hello' gives False.
not in: returns True if the variable/value is not found in the specified sequence and False otherwise, e.g., 10 not in [1, 2, 3, 4] gives True; 'h' not in 'hello' gives False.
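A small illustrative session (values assumed) for the identity and membership operators above:

>>> num1 = 6
>>> num2 = num1
>>> num1 is num2           #both names refer to the same object
True
>>> type(num1) is int
True
>>> 3 in [1, 2, 3, 4]
True
>>> 'a' in 'india'
True
>>> 10 not in (2, 4, 6)
True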
Expressions
An expression is defined as a combination of constants, variables and operators. An expression always evaluates to a value. A value or a standalone variable is also considered an expression, but a standalone operator is not an expression. Some examples of valid expressions are given below:
(i) 100
(ii) num
(iii) num - 20.4
(iv) 3.0 + 3.14
(v) 23/3 - 5 * (7 - 2)
(vi) "global" + "citizen"

Precedence of Operators
Evaluation of an expression is based on the precedence of operators. When an expression contains different kinds of operators, precedence determines which operator should be applied first: a higher precedence operator is evaluated before a lower precedence operator. Most of the operators studied till now are binary operators, that is, operators with two operands. Unary operators need only one operand, and they have higher precedence than binary operators. The minus (-) as well as plus (+) operators can act as both unary and binary operators, but not is a unary logical operator.
#here - (minus) is used as a unary operator
value = -depth
#not is a unary operator; it negates True
print(not(True))

The precedence of operators, from highest to lowest, is:
** : exponentiation (raise to the power)
~, +, - : complement, unary plus and unary minus
*, /, %, // : multiply, divide, modulo and floor division
+, - : addition and subtraction
<=, <, >, >= : relational operators
==, != : comparison operators
=, %=, /=, //=, -=, +=, *=, **= : assignment operators
is, is not : identity operators
in, not in : membership operators
not, and, or : logical operators

Note: (a) Parentheses can be used to override the precedence of operators; the expression within ( ) is evaluated first. (b) For operators with equal precedence, the expression is evaluated from left to right.

Example: How will Python evaluate the following expression?
computer science class xi notes solution ( #step #step #step #precedence of is more than that of example how will python evaluate the following expression solutionthe two operators (-and (+have equal precedence thusthe first operatori subtraction is applied before the second operatori addition (left to right( #step - #step #step example how will python evaluate the following expressionsolution( step #using parenthesis()we have forced precedence of to be more than that of step step ( example how will the following expression be evaluated in pythonsolution ( ( #step #step #step #step statement in pythona statement is unit of code that the python interpreter can execute example #assignment statement cube * #assignment statement print (xcube#print statement -
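A runnable version of the statement example above; the value assigned to x is assumed, since it is not given in the text:

x = 4                  #assignment statement
cube = x * x * x       #assignment statement with an expression on the right
print(x, cube)         #print statement; displays 4 64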
notes input and output sometimesa program needs to interact with the user' to get some input data or information from the end user and process it to give the desired output in pythonwe have the input(function for taking the user input the input(function prompts the user to enter data it accepts all user input as string the user may enter number or string but the input(function treats them as strings only the syntax for input(isinput ([prompt]prompt is the string we may like to display on the screen prior to taking the inputand it is optional when prompt is specifiedfirst it is displayed on the screen after which the user can enter data the input(takes exactly what is typed from the keyboardconverts it into string and assigns it to the variable on left-hand side of the assignment operator (=entering data for the input function is terminated by pressing the enter key example fname input("enter your first name"enter your first namearnab age input("enter your age"enter your age type(agethe variable fname will get the string 'arnab'entered by the user similarlythe variable age will get the string ' we can typecast or change the datatype of the string data accepted from user to an appropriate numeric value for examplethe following statement will convert the accepted string to an integer if the user enters any non-numeric valuean error will be generated example #function int(to convert string to integer age intinput("enter your age:")enter your age type(agepython uses the print(function to output data to standard output device -the screen we will learn about function in the function print(evaluates the expression before displaying it on the screen the print( -
computer science class xi outputs complete line and then moves to the next line for subsequent output the syntax for print(isobserve that plus sign does not add any space between the two strings while comma inserts space between two strings in print statement print(value [sep 'end '\ ']septhe optional parameter sep is separator between the output values we can use characterinteger or string as separator the default separator is space endthis is also optional and it allows us to specify any string to be appended after the last value the default is new line example statement print("hello"print( * print(" "love"my"country"print(" ' " "years old"output hello ilovemycountry ' years old the third print function in the above example is concatenating stringsand we use (plusbetween two strings to concatenate them the fourth print function also appears to be concatenating strings but uses commas (,between strings actuallyhere we are passing multiple argumentsseparated by commas to the print function as arguments can be of different typeshence the print function accepts integer ( along with strings here but in case the print statement has values of different types and '+is used instead of commait will generate an error as discussed in the next section under explicit conversion type conversion consider the following program num input("enter number and 'll double it"num num print(num the program was expected to display double the value of the number received and store in variable num so if user enters and expects the program to display as the outputthe program displays the following resultenter number and 'll double it -
this is because the value returned by the input function is string (" "by default as resultin statement num num num has string value and acts as repetition operator which results in output as " to get as outputwe need to convert the data type of the value entered by the user to integer thuswe modify the program as followsnum input("enter number and 'll double it"num int(num #convert string input to #integer num num print(num nowthe program will display the expected output as followsenter number and 'll double it let us now understand what is type conversion and how it works as and when requiredwe can change the data type of variable in python from one type to another such data type conversion can happen in two wayseither explicitly (forcedwhen the programmer specifies for the interpreter to convert data type to another typeor implicitlywhen the interpreter understands such need by itself and does the type conversion automatically explicit conversion explicit conversionalso called type casting happens when data type conversion takes place because the programmer forced it in the program the general form of an explicit data type conversion is(new_data_type(expressionwith explicit type conversionthere is risk of loss of information since we are forcing an expression to be of specific type for exampleconverting floating value of into an integer typei int(xwill discard the fractional part following are some of the functions in python that are used for explicitly converting an expression or variable to different type table explicit type conversion functions in python function int(xfloat(xdescription converts to an integer converts to floating-point number - notes
computer science class xi str(xchr(xord(xconverts to string representation converts ascii value of to character returns the character associated with the ascii code program - program of explicit type conversion from int to float #program - #explicit type conversion from int to float num num num num num print(num print(type(num )num float(num num print(num print(type(num )output program - program of explicit type conversion from float to int #program - #explicit type conversion from float to int num num num (num num print(num print(type(num )num int(num num print(num print(type(num )output program - example of type conversion between numbers and strings #program - #type conversion between numbers and strings priceicecream pricebrownie totalprice priceicecream pricebrownie print("the total is rs totalprice -
figure output of program - on executionprogram - gives an error as shown in figure informing that the interpreter cannot convert an integer value to string implicitly it may appear quite intuitive that the program should convert the integer value to string depending upon the usage howeverthe interpreter may not decide on its own when to convert as there is risk of loss of information python provides the mechanism of the explicit type conversion so that one can clearly state the desired outcome program - works perfectly using explicit type castingprogram - program to show explicit type casting #program - #explicit type casting priceicecream pricebrownie totalprice priceicecream pricebrownie print("the total in rs str(totalprice)outputthe total in rs similarlytype casting is needed to convert float to string in pythonone can convert string to integer or float values whenever required program - program to show explicit type conversion #program - #explicit type conversion icecream ' brownie ' #string concatenation price icecream brownie print("total price rs price#explicit type conversion string to integer -
computer science class xi price int(icecream)+int(brownieprint("total price rs str(price)outputtotal price rs total price rs implicit conversion implicit conversionalso known as coercionhappens when data type conversion is done automatically by python and is not instructed by the programmer program - program to show implicit conversion from int to float #program - #implicit type conversion from int to float num num sum num num and an integer print(sum print(type(sum )#num is an integer #num is float #sum is sum of float output in the above examplean integer value stored in variable num is added to float value stored in variable num and the result was automatically converted to float value stored in variable sum without explicitly telling the interpreter this is an example of implicit data conversion one may wonder why was the float value not converted to an integer insteadthis is due to type promotion that allows performing operations (whenever possibleby converting data into wider-sized data type without any loss of information debugging programmer can make mistakes while writing programand hencethe program may not execute or may generate wrong output the process of identifying and removing such mistakesalso known as bugs or errorsfrom program is called debugging errors occurring in programs can be categorised asisyntax errors iilogical errors iiiruntime errors -
Syntax Errors
Like other programming languages, Python has its own rules that determine its syntax. The interpreter interprets a statement only if it is syntactically correct, that is, as per the rules of Python. If any syntax error is present, the interpreter shows error message(s) and stops the execution there. For example, parentheses must be in pairs, so the expression (10 + 12) is syntactically correct, whereas (7 + 11 is not, due to the absence of the right parenthesis. Such errors need to be removed before the execution of the program.

Logical Errors
A logical error is a bug in the program that causes it to behave incorrectly. A logical error produces an undesired output, but without abrupt termination of the execution of the program. Since the program interprets successfully even when logical errors are present in it, it is sometimes difficult to identify these errors. The only evidence of the existence of logical errors is the wrong output. By working backwards from the output of the program, one can identify what went wrong. For example, if we wish to find the average of two numbers 10 and 12 and we write the code as 10 + 12/2, it would run successfully and produce the result 16. Surely 16 is not the average of 10 and 12; the correct code to find the average should have been (10 + 12)/2, which gives the correct output 11. Logical errors are also called semantic errors, as they occur when the meaning of the program (its semantics) is not correct.

Runtime Errors
A runtime error causes abnormal termination of a program while it is executing. A runtime error occurs when the statement is syntactically correct, but the interpreter cannot execute it. Runtime errors do not appear until after the program starts running or executing. For example, suppose we have a statement with a division operation in the program; if, by mistake, the denominator entered is zero, then it will give a runtime error like "division by zero". Let us look at the program below, which shows two types of runtime errors: when the user enters a non-integer value
computer science class xi or value ' the program generates correct output when the user inputs an integer value for num program - example of program which generates runtime error #program - #runtime errors example num num int(input("num ")#if user inputs string or zeroit leads to runtime error print(num /num figure output of program - summary python is an open-sourcehigh levelinterpreterbased language that can be used for multitude of scientific and non-scientific computing purposes comments are non-executable statements in program an identifier is user defined name given to variable or constant in program the process of identifying and removing errors from computer program is called debugging trying to use variable that has not been assigned value gives an error there are several data types in python -integerbooleanfloatcomplexstringlisttuplesetsnone and dictionary -
datatype conversion can happen either explicitly or implicitly operators are constructs that manipulate the value of operands operators may be unary or binary an expression is combination of valuesvariables and operators python has input(function for taking user input python has print(function to output data to standard output device exercise which of the following identifier names are invalid and whyi ii iii iv serial_no st_room hundredtotal marks vi vii viii total_marks total-marks _percentage true write the corresponding statementspython assignment aassign to variable length and to variable breadth bassign the average of values of variables length and breadth to variable sum cassign list containing strings 'paper''gel pen'and 'eraserto variable stationery dassign the strings 'mohandas''karamchand'and 'gandhito variables firstmiddle and last eassign the concatenated value of string variables firstmiddle and last to variable fullname make sure to incorporate blank spaces appropriately between different parts of names write logical expressions corresponding to the following statements in python and evaluate the expressions (assuming variables num num num firstmiddlelast are already having meaningful values)athe sum of and - is less than bnum is not more than - notes
computer science class xi notes is between the values of integers num and num dthe string 'middleis larger than the string 'firstand smaller than the string 'lastelist stationery is empty add pair of parentheses to each expression so that it evaluates to true abc = = = = write the output of the followingabcnum num num num print (num num num num num num num num print (num num num num num num num num print (num num num which data type will be used to represent the following data values and whyanumber of months in year bresident of delhi or not cmobile number dpocket money evolume of sphere fperimeter of square ghname of the student address of the student give the output of the following when num num num abcdnum +num num print (num num num *(num num print (num num **num num num ' ' print(num -
notes print( /( + )num + *(( * )- )/ print(num efnum / / print(num gnum float( print (num num int(' 'print (num print('bye='bye'print( ! and > print( * ! // - and > / print( and < print(( or (not( = and ( < ))hijklmn categorise the following as syntax errorlogical error or runtime errorab num num num /num dartboard of radius units and the wall it is hanging on are represented using two-dimensional coordinate systemwith the board' center at coordinate ( , variables and store the -coordinate and the -coordinate of dart that hits the dartboard write python expression using variables and that evaluates to true if the dart hits (is withinthe dartboardand then evaluate the expression for these dart coordinatesabcd( , ( , ( ( , write python program to convert temperature in degree celsius to degree fahrenheit if water boils at degree and freezes as degree cuse the program to find out what is the boiling point and freezing point of water on the fahrenheit scale (hintt(degft(degcx / write python program to calculate the amount payable if money has been lent on simple interest -
computer science class xi notes principal or money lent prate of interest rper annum and time years then simple interest (si( ) amount payable principal si pr and are given as input to the program write program to calculate in how many days work will be completed by three persons ab and together abc take daysy days and days respectively to do the job alone the formula to calculate the number of days if they work together is xyz/(xy yz xzdays where xyand are given as input to the program write program to enter two integers and perform all arithmetic operations on them write program to swap two numbers using third variable write program to swap two numbers without using third variable write program to repeat the string ''good morningn times here 'nis an integer entered by the user write program to find average of three numbers the volume of sphere with radius is / pr write python program to find the volume of spheres with radius cm cm cmrespectively write program that asks the user to enter their name and age print message addressed to the user that tells the user the year in which they will turn years old the formula mc states that the equivalent energy (ecan be calculated as the mass (mmultiplied by the speed of light ( about /ssquared write program that accepts the mass of an object and determines its energy presume that ladder is put upright against wall let variables length and angle store the length of the ladder and the angle that it forms with the ground as it leans against the wall write python program to compute -
introduction these lecture notes cover the key ideas involved in designing algorithms we shall see how they depend on the design of suitable data structuresand how some structures and algorithms are more efficient than others for the same task we will concentrate on few basic taskssuch as storingsorting and searching datathat underlie much of computer sciencebut the techniques discussed will be applicable much more generally we will start by studying some key data structuressuch as arrayslistsqueuesstacks and treesand then move on to explore their use in range of different searching and sorting algorithms this leads on to the consideration of approaches for more efficient storage of data in hash tables finallywe will look at graph based representations and cover the kinds of algorithms needed to work efficiently with them throughoutwe will investigate the computational efficiency of the algorithms we developand gain intuitions about the pros and cons of the various potential approaches for each task we will not restrict ourselves to implementing the various data structures and algorithms in particular computer programming languages ( javac ocaml )but specify them in simple pseudocode that can easily be implemented in any appropriate language algorithms as opposed to programs an algorithm for particular task can be defined as " finite sequence of instructionseach of which has clear meaning and can be performed with finite amount of effort in finite length of timeas suchan algorithm must be precise enough to be understood by human beings howeverin order to be executed by computerwe will generally need program that is written in rigorous formal languageand since computers are quite inflexible compared to the human mindprograms usually need to contain more details than algorithms here we shall ignore most of those programming details and concentrate on the design of algorithms rather than programs the task of implementing the discussed algorithms as computer programs is importantof coursebut these notes will concentrate on the theoretical aspects and leave the practical programming aspects to be studied elsewhere having said thatwe will often find it useful to write down segments of actual programs in order to clarify and test certain theoretical aspects of algorithms and their data structures it is also worth bearing in mind the distinction between different programming paradigmsimperative programming describes computation in terms of instructions that change the program/data statewhereas declarative programming
will primarily be concerned with developing algorithms that map easily onto the imperative programming approach algorithms can obviously be described in plain englishand we will sometimes do that howeverfor computer scientists it is usually easier and clearer to use something that comes somewhere in between formatted english and computer program codebut is not runnable because certain details are omitted this is called pseudocodewhich comes in variety of forms often these notes will present segments of pseudocode that are very similar to the languages we are mainly interested innamely the overlap of and javawith the advantage that they can easily be inserted into runnable programs fundamental questions about algorithms given an algorithm to solve particular problemwe are naturally led to ask what is it supposed to do does it really do what it is supposed to do how efficiently does it do itthe technical terms normally used for these three aspects are specification verification performance analysis the details of these three aspects will usually be rather problem dependent the specification should formalize the crucial details of the problem that the algorithm is intended to solve sometimes that will be based on particular representation of the associated dataand sometimes it will be presented more abstractly typicallyit will have to specify how the inputs and outputs of the algorithm are relatedthough there is no general requirement that the specification is complete or non-ambiguous for simple problemsit is often easy to see that particular algorithm will always worki that it satisfies its specification howeverfor more complicated specifications and/or algorithmsthe fact that an algorithm satisfies its specification may not be obvious at all in this casewe need to spend some effort verifying whether the algorithm is indeed correct in generaltesting on few particular inputs can be enough to show that the algorithm is incorrect howeversince the number of different potential inputs for most algorithms is infinite in theoryand huge in practicemore than just testing on particular cases is needed to be sure that the algorithm satisfies its specification we need correctness proofs although we will discuss proofs in these notesand useful relevant ideas like invariantswe will usually only do so in rather informal manner (thoughof coursewe will attempt to be rigorousthe reason is that we want to concentrate on the data structures and algorithms formal verification techniques are complex and will normally be left till after the basic ideas of these notes have been studied finallythe efficiency or performance of an algorithm relates to the resources required by itsuch as how quickly it will runor how much computer memory it will use this will
of the algorithm indeedthis is what normally drives the development of new data structures and algorithms we shall study the general ideas concerning efficiency in and then apply them throughout the remainder of these notes data structuresabstract data typesdesign patterns for many problemsthe ability to formulate an efficient algorithm depends on being able to organize the data in an appropriate manner the term data structure is used to denote particular way of organizing data for particular types of operation these notes will look at numerous data structures ranging from familiar arrays and lists to more complex structures such as treesheaps and graphsand we will see how their choice affects the efficiency of the algorithms based upon them often we want to talk about data structures without having to worry about all the implementational details associated with particular programming languagesor how the data is stored in computer memory we can do this by formulating abstract mathematical models of particular classes of data structures or data types which have common features these are called abstract data typesand are defined only by the operations that may be performed on them typicallywe specify how they are built out of more primitive data types ( integers or strings)how to extract that data from themand some basic checks to control the flow of processing in algorithms the idea that the implementational details are hidden from the user and protected from outside access is known as encapsulation we shall see many examples of abstract data types throughout these notes at an even higher level of abstraction are design patterns which describe the design of algorithmsrather the design of data structures these embody and generalize important design concepts that appear repeatedly in many problem contexts they provide general structure for algorithmsleaving the details to be added as required for particular problems these can speed up the development of algorithms by providing familiar proven algorithm structures that can be applied straightforwardly to new problems we shall see number of familiar design patterns throughout these notes textbooks and web-resources to fully understand data structures and algorithms you will almost certainly need to complement the introductory material in these notes with textbooks or other sources of information the lectures associated with these notes are designed to help you understand them and fill in some of the gaps they containbut that is unlikely to be enough because often you will need to see more than one explanation of something before it can be fully understood there is no single best textbook that will suit everyone the subject of these notes is classical topicso there is no need to use textbook published recently books published or years ago are still goodand new good books continue to be published every year the reason is that these notes cover important fundamental material that is taught in all university degrees in computer science these days there is also lot of very useful information to be found on the internetincluding complete freely-downloadable books it is good idea to go to your library and browse the shelves of books on data structures and algorithms if you like any of themdownloadborrow or buy copy for yourselfbut make sure that most of the
reliable information on all the relevant topicsbut you hopefully shouldn' need reminding that not everything you read on the internet is necessarily true it is also worth pointing out that there are often many different equally-good ways to solve the same taskdifferent equally-sensible names used for the same thingand different equally-valid conventions used by different peopleso don' expect all the sources of information you find to be an exact match with each other or with what you find in these notes overview these notes will cover the principal fundamental data structures and algorithms used in computer scienceand bring together broad range of topics covered elsewhere into coherent framework data structures will be formulated to represent various types of information in such way that it can be conveniently and efficiently manipulated by the algorithms we develop throughoutthe recurring practical issues of algorithm specificationverification and performance analysis will be discussed we shall begin by looking at some widely used basic data structures (namely arrayslinked listsstacks and queues)and the advantages and disadvantages of the associated abstract data types then we consider the ubiquitous problem of searchingand how that leads on to the general ideas of computational efficiency and complexity that will leave us with the necessary tools to study three particularly important data structurestrees (in particularbinary search trees and heap trees)hash tablesand graphs we shall learn how to develop and analyse increasingly efficient algorithms for manipulating and performing useful operations on those structuresand look in detail at developing efficient processes for data storingsortingsearching and analysis the idea is that once the basic ideas and examples covered in these notes are understooddealing with more complex problems in the future should be straightforward
arraysiterationinvariants data is ultimately stored in computers as patterns of bitsthough these days most programming languages deal with higher level objectssuch as charactersintegersand floating point numbers generallywe need to build algorithms that manipulate collections of such objectsso we need procedures for storing and sequentially processing them arrays in computer sciencethe obvious way to store an ordered collection of items is as an array array items are typically stored in sequence of computer memory locationsbut to discuss themwe need convenient way to write them down on paper we can just write the items in orderseparated by commas and enclosed by square brackets thus[ is an example of an array of integers if we call this array awe can write it asa [ this array has itemsand hence we say that its size is in everyday lifewe usually start counting from when we work with arrays in computer sciencehoweverwe more often (though not alwaysstart from thusfor our array aits positions are the element in the th position is and we use the notation [ to denote this element more generallyfor any integer denoting positionwe write [ito denote the element in the ith position this position is called an index (and the plural is indicesthenin the above examplea[ [ [ and so on it is worth noting at this point that the symbol is quite overloaded in mathematicsit stands for equality in most modern programming languagesdenotes assignmentwhile equality is expressed by =we will typically use in its mathematical meaningunless it is written as part of code or pseudocode we say that the individual items [iin the array are accessed using their index iand one can move sequentially through the array by incrementing or decrementing that indexor jump straight to particular item given its index value algorithms that process data stored as arrays will typically need to visit systematically all the items in the arrayand apply appropriate operations on them
loops and iteration the standard approach in most programming languages for repeating process certain number of timessuch as moving sequentially through an array to perform the same operations on each iteminvolves loop in pseudocodethis would typically take the general form for ,ndo something and in programming languages like and java this would be written as the for-loop fori +/do something in which counter keep tracks of doing "the somethingn times for examplewe could compute the sum of all items in an array using fori sum +sum + [ ]we say that there is iteration over the index the general for-loop structure is forinitialization condition update repeated process in which any of the four parts are optional one way to write this out explicitly is initialization if not condition go to loop finished loop start repeated process update if condition go to loop start loop finished in these noteswe will regularly make use of this basic loop structure when operating on data stored in arraysbut it is important to remember that different programming languages use different syntaxand there are numerous variations that check the condition to terminate the repetition at different points invariants an invariantas the name suggestsis condition that does not change during execution of given program or algorithm it may be simple inequalitysuch as " "or something more abstractsuch as "the items in the array are sortedinvariants are important for data structures and algorithms because they enable correctness proofs and verification in particulara loop-invariant is condition that is true at the beginning and end of every iteration of the given loop consider the standard simple example of procedure that finds the minimum of numbers stored in an array
float min [ ]/min equals the minimum item in [ ], [ for(int ! ++/min equals the minimum item in [ ], [ - if ( [iminmin [ ]/min equals the minimum item in [ ], [ - ]and == return minat the beginning of each iterationand end of any iterations beforethe invariant "min equals the minimum item in [ ] [ ]is true it starts off trueand the repeated process and update clearly maintain its truth hencewhen the loop terminates with " = "we know that "min equals the minimum item in [ ] [ ]and hence we can be sure that min can be returned as the required minimum value this is kind of proof by inductionthe invariant is true at the start of the loopand is preserved by each iteration of the looptherefore it must be true at the end of the loop as we noted earlierformal proofs of correctness are beyond the scope of these notesbut identifying suitable loop invariants and their implications for algorithm correctness as we go along will certainly be useful exercise we will also see how invariants (sometimes called inductive assertionscan be used to formulate similar correctness proofs concerning properties of data structures that are defined inductively
listsrecursionstacksqueues we have seen how arrays are convenient way to store collections of itemsand how loops and iteration allow us to sequentially process those items howeverarrays are not always the most efficient way to store collections of items in this sectionwe shall see that lists may be better way to store collections of itemsand how recursion may be used to process them as we explore the details of storing collections as liststhe advantages and disadvantages of doing so for different situations will become apparent linked lists list can involve virtually anythingfor examplea list of integers [ ] shopping list [applesbutterbreadcheese]or list of web pages each containing picture and link to the next web page when considering listswe can speak about-them on different levels on very abstract level (on which we can define what we mean by list)on level on which we can depict lists and communicate as humans about themon level on which computers can communicateor on machine level in which they can be implemented graphical representation non-empty lists can be represented by two-cellsin each of which the first cell contains pointer to list element and the second cell contains pointer to either the empty list or another two-cell we can depict pointer to the empty list by diagonal bar or cross through the cell for instancethe list [ can be represented as abstract data type "liston an abstract level list can be constructed by the two constructorsemptylistwhich gives you the empty listand
using thoseour last example list can be constructed as makelist( makelist( makelist( makelist( makelist( emptylist))))and it is clearly possible to construct any list in this way this inductive approach to data structure creation is very powerfuland we shall use it many times throughout these notes it starts with the "base case"the emptylistand then builds up increasingly complex lists by repeatedly applying the "induction step"the makelist(elementlistoperator it is obviously also important to be able to get back the elements of listand we no longer have an item index to use like we have with an array the way to proceed is to note that list is always constructed from the first element and the rest of the list soconverselyfrom non-empty list it must always be possible to get the first element and the rest this can be done using the two selectorsalso called accessor methodsfirst(list)and rest(listthe selectors will only work for non-empty lists (and give an error or exception on the empty list)so we need condition which tells us whether given list is emptyisempty(listthis will need to be used to check every list before passing it to selector we call everything list that can be constructed by the constructors emptylist and makelistso that with the selectors first and rest and the condition isemptythe following relationships are automatically satisfied ( true)isempty(emptylistnot isempty(makelist(xl)(for any and lfirst(makelist(xl) rest(makelist(xl) in addition to constructing and getting back the components of listsone may also wish to destructively change lists this would be done by so-called mutators which change either the first element or the rest of non-empty listreplacefirst(xlreplacerest(rlfor instancewith [ ]applying replacefirst( lchanges to [ and then applying replacerest([ ]lchanges it to [ we shall see that the concepts of constructorsselectors and conditions are common to virtually all abstract data types throughout these noteswe will be formulating our data representations and algorithms in terms of appropriate definitions of them
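One possible realisation of these constructors, selectors and conditions is a minimal immutable linked-list sketch in Java; the class and method names below are illustrative choices, not prescribed by the text:

    // A cons-cell style list: each node holds an element and the rest of the list.
    class IntList {
        final int first;       // the first element
        final IntList rest;    // the rest of the list (null represents the empty list)

        private IntList(int first, IntList rest) { this.first = first; this.rest = rest; }

        static final IntList EMPTY = null;                                      // emptyList
        static IntList makeList(int x, IntList l) { return new IntList(x, l); } // makeList(element, list)
        static boolean isEmpty(IntList l) { return l == null; }                 // isEmpty(list)
        static int first(IntList l) { return l.first; }                         // first(list), non-empty lists only
        static IntList rest(IntList l) { return l.rest; }                       // rest(list), non-empty lists only
    }

With these definitions the automatically satisfied relationships above hold by construction, for example first(makeList(x, l)) == x and rest(makeList(x, l)) == l.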
in order to communicate data structures between different computers and possibly different programming languagesxml (extensible markup languagehas become quasi-standard the above list could be represented in xml as howeverthere are usually many different ways to represent the same object in xml for instancea cell-oriented representation of the above list would be emptylist while this looks complicated for simple listit is notit is just bit lengthy xml is flexible enough to represent and communicate very complicated structures in uniform way implementation of lists there are many different implementations possible for listsand which one is best will depend on the primitives offered by the programming language being used the programming language lisp and its derivatesfor instancetake lists as the most important primitive data structure in some other languagesit is more natural to implement
lists as arrays. An array, however, has a fixed size, which means an array based implementation with fixed-sized arrays can only approximate the general concept. For many applications this is not a problem, because a maximal number of list members can be determined a priori (e.g. the maximum number of students taking one particular module is limited by the total number of students in the university). More general purpose implementations follow a pointer based approach, which is close to the diagrammatic representation given above. We will not go into the details of all the possible implementations of lists here, but such information is readily available in the standard textbooks.

Recursion

We previously saw how iteration based on for-loops was a natural way to process collections of items stored in arrays. When items are stored as linked-lists, there is no index for each item, and recursion provides the natural way to process them. The idea is to formulate procedures which involve at least one step that invokes (or calls) the procedure itself. We will now look at how to implement two important derived procedures on lists, last and append, which illustrate how recursion works.

To find the last element of a list l we can simply keep removing the first remaining item till there are no more left. This algorithm can be written in pseudocode as:

    last(l) {
      if ( isEmpty(l) )
        error('Error: empty list in last')
      elseif ( isEmpty(rest(l)) )
        return first(l)
      else
        return last(rest(l))
    }

The running time of this depends on the length of the list, and is proportional to that length, since last is called as often as there are elements in the list. We say that the procedure has linear time complexity, that is, if the length of the list is increased by some factor, the execution time is increased by the same factor. Compared to the constant time complexity which access to the last element of an array has, this is quite bad. It does not mean, however, that lists are inferior to arrays in general; it just means that lists are not the ideal data structure when a program has to access the last element of a long list very often.

Another useful procedure allows us to append one list l2 to another list l1. Again, this needs to be done one item at a time, and that can be accomplished by repeatedly taking the first remaining item of l1 and adding it to the front of the remainder appended to l2:

    append(l1, l2) {
      if ( isEmpty(l1) )
        return l2
      else
        return makeList(first(l1), append(rest(l1), l2))
    }

The time complexity of this procedure is proportional to the length of the first list, l1, since we have to call append as often as there are elements in l1.
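Using the IntList sketch given earlier, these two recursive procedures could be written in Java roughly as follows; again this is a sketch with illustrative names:

    class ListAlgorithms {
        // last(l): linear time, follows rest(l) until only one element remains
        static int last(IntList l) {
            if (IntList.isEmpty(l)) throw new IllegalStateException("Error: empty list in last");
            if (IntList.isEmpty(IntList.rest(l))) return IntList.first(l);
            return last(IntList.rest(l));
        }
        // append(l1, l2): rebuilds l1 in front of l2, so its cost is proportional to the length of l1
        static IntList append(IntList l1, IntList l2) {
            if (IntList.isEmpty(l1)) return l2;
            return IntList.makeList(IntList.first(l1), append(IntList.rest(l1), l2));
        }
    }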
Stacks

Stacks are, on an abstract level, equivalent to linked lists. They are the ideal data structure to model a First-In-Last-Out (FILO), or Last-In-First-Out (LIFO), strategy in search.

Graphical Representation

Their relation to linked lists means that their graphical representation can be the same, but one has to be careful about the order of the items. For instance, the stack created by inserting a sequence of numbers in that order would be represented like the corresponding linked list, but with the items appearing in reverse order of insertion, since the most recently pushed item sits at the front.

Abstract Data Type "Stack"

Despite their relation to linked lists, their different use means the primitive operators for stacks are usually given different names. The two constructors are:

    emptyStack, the empty stack, and
    push(element, stack), which takes an element and pushes it on top of an existing stack,

and the two selectors are:

    top(stack), which gives back the top most element of a stack, and
    pop(stack), which gives back the stack without the top most element.

The selectors will work only for non-empty stacks, hence we need a condition which tells whether a stack is empty:

    isEmpty(stack)

We have equivalent automatically-true relationships to those we had for the lists:

    isEmpty(emptyStack)
    not isEmpty(push(x, s))   (for any x and s)
    top(push(x, s)) = x
    pop(push(x, s)) = s

In summary, we have the direct correspondences:

                  list         stack
    constructors  emptyList    emptyStack
                  makeList     push
    selectors     first        top
                  rest         pop
    condition     isEmpty      isEmpty

So, stacks and linked lists are the same thing, apart from the different names that are used for their constructors and selectors.
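Given this direct correspondence, a stack can be sketched in Java by essentially renaming the list operations; as before, the names are illustrative:

    // A functional (non-destructive) stack built from linked nodes.
    class IntStack {
        final int top;
        final IntStack below;
        private IntStack(int top, IntStack below) { this.top = top; this.below = below; }

        static final IntStack EMPTY = null;                                      // emptyStack
        static IntStack push(int x, IntStack s) { return new IntStack(x, s); }   // push(element, stack)
        static boolean isEmpty(IntStack s) { return s == null; }                 // isEmpty(stack)
        static int top(IntStack s) { return s.top; }                             // top(stack), non-empty only
        static IntStack pop(IntStack s) { return s.below; }                      // pop(stack), non-empty only
    }

This sketch takes the functional view discussed next: push builds a new stack and leaves the original one unchanged.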
there are two different ways we can think about implementing stacks so far we have implied functional approach that ispush does not change the original stackbut creates new stack out of the original stack and new element that isthere are at least two stacks aroundthe original one and the newly created one this functional view is quite convenient if we apply top to particular stackwe will always get the same element howeverfrom practical point of viewwe may not want to create lots of new stacks in programbecause of the obvious memory management implications instead it might be better to think of single stack which is destructively changedso that after applying push the original stack no longer exitsbut has been changed into new stack with an extra element this is conceptually more difficultsince now applying top to given stack may give different answersdepending on how the state of the system has changed howeveras long as we keep this difference in mindignoring such implementational details should not cause any problems queues queue is data structure used to model first-in-first-out (fifostrategy conceptuallywe add to the end of queue and take away elements from its front graphical representation queue can be graphically represented in similar way to list or stackbut with an additional two-cell in which the first element points to the front of the list of all the elements in the queueand the second element points to the last element of the list for instanceif we insert the elements [ into an initially empty queuewe get this arrangement means that taking the first element of the queueor adding an element to the back of the queuecan both be done efficiently in particularthey can both be done with constant efforti independently of the queue length abstract data type "queueon an abstract levela queue can be constructed by the two constructorsemptyqueuethe empty queueand push(elementqueue)which takes an element and queue and returns queue in which the element is added to the original queue at the end for instanceby applying push( qwhere is the queue abovewe get
the two selectors are the same as for stackstop(queue)which gives the top element of queuethat is in the exampleand pop(queue)which gives the queue without the top element andas with stacksthe selectors only work for non-empty queuesso we again need condition which returns whether queue is emptyisempty(queuein later we shall see practical examples of how queues and stacks operate with different effect doubly linked lists doubly linked list might be useful when working with something like list of web pageswhich has each page containing picturea link to the previous pageand link to the next page for simple list of numbersa linked list and doubly linked list may look the samee [ howeverthe doubly linked list also has an easy way to get the previous elementas well as to the next element graphical representation non-empty doubly linked lists can be represented by three-cellswhere the first cell contains pointer to another three-cell or to the empty listthe second cell contains pointer to the list element and the third cell contains pointer to another three-cell or the empty list againwe depict the empty list by diagonal bar or cross through the appropriate cell for instance[ would be represented as doubly linked list as abstract data type "doubly linked liston an abstract level doubly linked list can be constructed by the three constructorsemptylistthe empty listand
returns new doubly linked list with the element added to the left of the original doubly linked list makelistright(elementlist)which takes an element and doubly linked list and returns new doubly linked list with the element added to the right of the original doubly linked list it is clear that it may possible to construct given doubly linked list in more that one way for examplethe doubly linked list represented above can be constructed by either ofmakelistleft( makelistleft( makelistleft( makelistleft( makelistleft( emptylist))))makelistleft( makelistleft( makelistright( makelistright( makelistleft( emptylist))))in the case of doubly linked listswe have four selectorsfirstleft(list)restleft(list)firstright(list)and restright(listthensince the selectors only work for non-empty listswe also need condition which returns whether list is emptyisempty(listthis leads to automatically-true relationships such asisempty(emptylistnot isempty(makelistleft(xl)(for any and lnot isempty(makelistright(xl)(for any and lfirstleft(makelistleft(xl) restleft(makelistleft(xl) firstright(makelistright(xl) restright(makelistright(xl) circular doubly linked list as simple extension of the standard doubly linked listone can define circular doubly linked list in which the left-most element points to the right-most elementand vice versa this is useful when we might need to move efficiently through whole list of itemsbut might not be starting from one of two particular end points
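A minimal Java sketch of the three-cell structure described above is given below; the names are illustrative, and the constructors and selectors would be built on top of this node type:

    // Each node is a "three-cell": a pointer to the previous node, a value, and a pointer to the next node.
    class DNode {
        DNode prev;   // left neighbour, or null for the empty list
        int value;    // the list element stored in this node
        DNode next;   // right neighbour, or null for the empty list
        DNode(DNode prev, int value, DNode next) {
            this.prev = prev; this.value = value; this.next = next;
        }
    }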
advantage of abstract data types it is clear that the implementation of the abstract linked-list data type has the disadvantage that certain useful procedures may not be directly accessible for instancethe standard abstract data type of list does not offer an efficient procedure last(lto give the last element in the listwhereas it would be trivial to find the last element of an array of known number of elements one could modify the linked-list data type by maintaining pointer to the last itemas we did for the queue data typebut we still wouldn' have an easy way to access intermediate items while last(land getitem(ilprocedures can easily be implemented using the primitive constructorsselectorsand conditionsthey are likely to be less efficient than making use of certain aspects of the underlying implementation that disadvantage leads to an obvious questionwhy should we want to use abstract data types when they often lead to less efficient algorithmsahohopcroft and ullman ( provide clear answer in their book"at firstit may seem tedious writing procedures to govern all accesses to the underlying structures howeverif we discipline ourselves to writing programs in terms of the operations for manipulating abstract data types rather than making use of particular implementations detailsthen we can modify programs more readily by reimplementing the operations rather than searching all programs for places where we have made accesses to the underlying data structures this flexibility can be particularly important in large software effortsand the reader should not judge the concept by the necessarily tiny examples found in this book this advantage will become clearer when we study more complex abstract data types and algorithms in later
searching an important and recurring problem in computing is that of locating information more succinctlythis problem is known as searching this is good topic to use for preliminary exploration of the various issues involved in algorithm design requirements for searching clearlythe information to be searched has to first be represented (or encoded somehow this is where data structures come in of coursein computereverything is ultimately represented as sequences of binary digits (bits)but this is too low level for most purposes we need to develop and study useful data structures that are closer to the way humans thinkor at least more structured than mere sequences of bits this is because it is humans who have to develop and maintain the software systems computers merely run them after we have chosen suitable representationthe represented information has to be processed somehow this is what leads to the need for algorithms in this casethe process of interest is that of searching in order to simplify matterslet us assume that we want to search collection of integer numbers (though we could equally well deal with strings of charactersor any other data type of interestto begin withlet us consider the most obvious and simple representation two potential algorithms for processing with that representation as we have already notedarrays are one of the simplest possible ways of representing collections of numbers (or stringsor whatever)so we shall use that to store the information to be searched later we shall look at more complex data structures that may make storing and searching more efficient supposefor examplethat the set of integers we wish to search is { , , , , , , , , we can write them in an array as [ if we ask where is in this arraythe answer is the index of that element if we ask where isthe answer is nowhere it is useful to be able to represent nowhere by number that is not used as possible index since we start our index counting from any negative number would do we shall follow the convention of using the number - to represent nowhere other (perhaps betterconventions are possiblebut we will stick to this here
Specification of the Search Problem

We can now formulate a specification of our search problem using that data structure:

Given an array a and integer x, find an integer i such that
    1. if there is no j such that a[j] is x, then i is -1,
    2. otherwise, i is any j for which a[j] is x.

The first clause says that if x does not occur in the array a then i should be -1, and the second says that if it does occur then i should be a position where it occurs. If there is more than one position where x occurs, then this specification allows you to return any of them; for example, this would be the case if x occurred more than once in a. Thus, the specification is ambiguous. Hence different algorithms with different behaviours can satisfy the same specification: for example, one algorithm may return the smallest position at which x occurs, and another may return the largest. There is nothing wrong with ambiguous specifications; in fact, in practice, they occur quite often.

A Simple Algorithm: Linear Search

We can conveniently express the simplest possible algorithm in a form of pseudocode which reads like English, but resembles a computer program without some of the precision or detail that a computer usually requires:

    // This assumes we are given an array a of size n and a key x.
    For i = 0, 1, ..., n-1,
      if a[i] is equal to x,
      then we have a suitable i and can terminate returning i.
    If we reach this point,
      then x is not in a and hence we must terminate returning -1.

Some aspects, such as the ellipsis "...", are potentially ambiguous, but we, as human beings, know exactly what is meant, so we do not need to worry about them. In a programming language such as C or Java, one would write something that is more precise, like:

    for ( i = 0 ; i < n ; i++ ) {
      if ( a[i] == x ) return i;
    }
    return -1;

In the case of Java, this would be within a method of a class, and more details are needed, such as the parameter a for the method and a declaration of the auxiliary variable i. In the case of C, this would be within a function, and similar missing details are needed. In either, there would need to be additional code to output the result in a suitable format.

In this case, it is easy to see that the algorithm satisfies the specification (assuming n is the correct size of the array): we just have to observe that, because we start counting from zero, the last position of the array is its size minus one. If we forget this, and let i run from 0 to n instead, we get an incorrect algorithm. The practical effect of this mistake is that the execution of this algorithm gives rise to an error when the item to be located in the array is not there, because a non-existent array element is then accessed. Depending
on the particular language, operating system and machine you are using, the actual effect of this error will be different. For example, in C running under UNIX, you may get execution aborted followed by the message "segmentation fault", or you may be given the wrong answer as the output. In Java, you will always get an error message.

A More Efficient Algorithm: Binary Search

One always needs to consider whether it is possible to improve upon the performance of a particular algorithm, such as the one we have just created. In the worst case, searching an array of size n takes n steps. On average, it will take n/2 steps. For a large collection of data, such as all web-pages on the internet, this will be unacceptable in practice. Thus, we should try to organize the collection in such a way that a more efficient algorithm is possible. As we shall see later, there are many possibilities, and the more we demand in terms of efficiency, the more complicated the data structures representing the collections tend to become. Here we shall consider one of the simplest: we still represent the collections by arrays, but now we enumerate the elements in ascending order. The problem of obtaining an ordered list from any given list is known as sorting and will be studied in detail in a later chapter.

Thus, instead of working with the previous unsorted array, we would work with an array that contains the same items but lists them in ascending order. Then we can use an improved algorithm, which in English-like pseudocode form is:

    // This assumes we are given a sorted array a of size n and a key x.
    // Use integers left and right (initially set to 0 and n-1) and mid.
    While left is less than right,
      set mid to the integer part of (left+right)/2, and
      if x is greater than a[mid],
      then set left to mid+1,
      otherwise set right to mid.
    If a[left] is equal to x,
      then terminate returning left,
      otherwise terminate returning -1.

and this would correspond to a segment of C or Java code like:

    /* DATA */
    int a[] = { /* the items to be searched, in ascending order */ };
    int n = /* the size of the array */ ;
    int x = /* the key searched for */ ;

    /* PROGRAM */
    int left = 0, right = n-1, mid;
    while ( left < right ) {
      mid = ( left + right ) / 2;
      if ( x > a[mid] ) left = mid+1;
      else right = mid;
    }
    if ( a[left] == x ) return left;
    else return -1;

The idea of this algorithm is to split the array into two segments, one going from left
to midand the other going from mid to rightwhere mid is the position half way from lef to rightand whereinitiallylef and right are the leftmost and rightmost positions of the array because the array is sortedit is easy to see which of each pair of segments the searched-for item is inand the search can then be restricted to that segment moreoverbecause the size of the sub-array going from locations lef to right is halved at each iteration of the while-loopwe only need log steps in either the average or worst case to see that this runtime behaviour is big improvementin practiceover the earlier linear-search algorithmnotice that log is approximately so that for an array of size only iterations are needed in the worst case of the binary-search algorithmwhereas are needed in the worst case of the linear-search algorithm with the binary search algorithmit is not so obvious that we have taken proper care of the boundary condition in the while loop alsostrictly speakingthis algorithm is not correct because it does not work for the empty array (that has size zero)but that can easily be fixed apart from thatis it correcttry to convince yourself that it isand then try to explain your argument-for-correctness to colleague having done thattry to write down some convincing argumentsmaybe one that involves loop invariant and one that doesn' most algorithm developers stop at the first stagebut experience shows that it is only when we attempt to write down seemingly convincing arguments that we actually find all the subtle mistakes moreoverit is not unusual to end up with better/clearer algorithm after it has been modified to make its correctness easier to argue it is worth considering whether linked-list versions of our two algorithms would workor offer any advantages it is fairly clear that we could perform linear search through linked list in essentially the same way as with an arraywith the relevant pointer returned rather than an index converting the binary search to linked list form is problematicbecause there is no efficient way to split linked list into two segments it seems that our array-based approach is the best we can do with the data structures we have studied so far howeverwe shall see later how more complex data structures (treescan be used to formulate efficient recursive search algorithms notice that we have not yet taken into account how much effort will be required to sort the array so that the binary search algorithm can work on it until we know thatwe cannot be sure that using the binary search algorithm really is more efficient overall than using the linear search algorithm on the original unsorted array that may also depend on further detailssuch as how many times we need to performa search on the set of items just onceor as many as times we shall return to these issues later first we need to consider in more detail how to compare algorithm efficiency in reliable manner
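As noted above, the given binary search does not work for the empty array, but that is easily fixed with a small guard. A Java sketch following the same left/right/mid scheme (class and method names illustrative) might be:

    class BinarySearch {
        // returns an index i with a[i] == x, or -1 if x does not occur in the sorted array a
        static int search(int[] a, int x) {
            if (a.length == 0) return -1;        // the extra check for the empty array
            int left = 0, right = a.length - 1;
            while (left < right) {
                int mid = (left + right) / 2;
                if (x > a[mid]) left = mid + 1;  // x can only be in the right segment
                else right = mid;                // x, if present, is in the left segment (including mid)
            }
            return (a[left] == x) ? left : -1;
        }
    }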
efficiency and complexity we have already noted thatwhen developing algorithmsit is important to consider how efficient they areso we can make informed choices about which are best to use in particular circumstances sobefore moving on to study increasingly complex data structures and algorithmswe first look in more detail at how to measure and describe their efficiency time versus space complexity when creating software for serious applicationsthere is usually need to judge how quickly an algorithm or program can complete the given tasks for exampleif you are programming flight booking systemit will not be considered acceptable if the travel agent and customer have to wait for half an hour for transaction to complete it certainly has to be ensured that the waiting time is reasonable for the size of the problemand normally faster execution is better we talk about the time complexity of the algorithm as an indicator of how the execution time depends on the size of the data structure another important efficiency consideration is how much memory given program will require for particular taskthough with modern computers this tends to be less of an issue than it used to be here we talk about the space complexity as how the memory requirement depends on the size of the data structure for given taskthere are often algorithms which trade time for spaceand vice versa for examplewe will see thatas data storage devicehash tables have very good time complexity at the expense of using more memory than is needed by other algorithms it is usually up to the algorithm/program designer to decide how best to balance the trade-off for the application they are designing worst versus average complexity another thing that has to be decided when making efficiency considerations is whether it is the average case performance of an algorithm/program that is importantor whether it is more important to guarantee that even in the worst case the performance obeys certain rules for many applicationsthe average case is more importantbecause saving time overall is usually more important than guaranteeing good behaviour in the worst case howeverfor time-critical problemssuch as keeping track of aeroplanes in certain sectors of air spaceit may be totally unacceptable for the software to take too long if the worst case arises
of the worst case for examplethe most efficient algorithm on average might have particularly bad worst case efficiency we will see particular examples of this when we consider efficient algorithms for sorting and searching concrete measures for performance these dayswe are mostly interested in time complexity for thiswe first have to decide how to measure it something one might try to do is to just implement the algorithm and run itand see how long it takes to runbut that approach has number of problems for oneif it is big application and there are several potential algorithmsthey would all have to be programmed first before they can be compared so considerable amount of time would be wasted on writing programs which will not get used in the final product alsothe machine on which the program is runor even the compiler usedmight influence the running time you would also have to make sure that the data with which you tested your program is typical for the application it is created for againparticularly with big applicationsthis is not really feasible this empirical method has another disadvantageit will not tell you anything useful about the next time you are considering similar problem therefore complexity is usually best measured in different way firstin order to not be bound to particular programming language or machine architectureit is better to measure the efficiency of the algorithm rather than that of its implementation for this to be possiblehoweverthe algorithm has to be described in way which very much looks like the program to be implementedwhich is why algorithms are usually best expressed in form of pseudocode that comes close to the implementation language what we need to do to determine the time complexity of an algorithm is count the number of times each operation will occurwhich will usually depend on the size of the problem the size of problem is typically expressed as an integerand that is typically the number of items that are manipulated for examplewhen describing search algorithmit is the number of items amongst which we are searchingand when describing sorting algorithmit is the number of items to be sorted so the complexity of an algorithm will be given by function which maps the number of items to the (usually approximatenumber of time steps the algorithm will take when performed on that many items in the early days of computersthe various operations were each counted in proportion to their particular 'time cost'and added upwith multiplication of integers typically considered much more expensive than their addition in today' worldwhere computers have become much fasterand often have dedicated floating-point hardwarethe differences in time costs have become less important howeverwe still we need to be careful when deciding to consider all operations as being equally costly applying some functionfor examplecan take much longer than simply adding two numbersand swaps generally take many times longer than comparisons just counting the most costly operations is often good strategy big- notation for complexity class very oftenwe are not interested in the actual function (nthat describes the time complexity of an algorithm in terms of the problem size nbut just its complexity class this ignores any constant overheads and small constant factorsand just tells us about the principal growth
the algorithm on large numbers of items if an algorithm is such that we may consider all steps equally costlythen usually the complexity class of the algorithm is simply determined by the number of loops and how often the content of those loops are being executed the reason for this is that adding constant number of instructions which does not change with the size of the problem has no significant effect on the overall complexity for large problems there is standard notationcalled the big- notationfor expressing the fact that constant factors and other insignificant details are being ignored for examplewe saw that the procedure last(lon list had time complexity that depended linearly on the size of the listso we would say that the time complexity of that algorithm is (nsimilarlylinear search is (nfor binary searchhoweverthe time complexity is (log nbefore we define complexity classes in more formal mannerit is worth trying to gain some intuition about what they actually mean for this purposeit is useful to choose one function as representative of each of the classes we wish to consider recall that we are considering functions which map natural numbers (the size of the problemto the set of nonnegative real numbers rso the classes will correspond to common mathematical functions such as powers and logarithms we shall consider later to what degree representative can be considered 'typicalfor its class the most common complexity classes (in increasing orderare the followingo( )pronounced 'oh of one'or constant complexityo(log log )'oh of log log en' (log )'oh of log en'or logarithmic complexityo( )'oh of en'or linear complexityo(nlog )'oh of en log en' ( )'oh of en squared'or quadratic complexityo( )'oh of en cubed'or cubic complexityo( )'oh of two to the en'or exponential complexity as representativewe choose the function which gives the class its name for (nwe choose the function (nnfor (log nwe choose (nlog nand so on so assume we have algorithms with these functions describing their complexity the following table lists how many operations it will take them to deal with problem of given sizef ( log log log nlog =
time span they describe hence the following table gives time spans rather than instruction countsbased on the assumption that we have computer which can operate at speed of mipwhere one mip million instructions per secondf ( log log log nlog = usec usec usec usec usec usec usec usec usec usec usec usec usec usec msec msec usec usec usec usec msec msec sec yr usec usec usec msec msec sec min yr usec usec usec sec sec wk yr yr it is clear thatas the sizes of the problems get really bigthere can be huge differences in the time it takes to run algorithms from different complexity classes for algorithms with exponential complexityo( )even modest sized problems have run times that are greater than the age of the universe (about yr)and current computers rarely run uninterrupted for more than few years this is why complexity classes are so important they tell us how feasible it is likely to be to run program with particular large number of data items typicallypeople do not worry much about complexity for sizes below or maybe but the above numbers make it clear why it is worth thinking about complexity classes where bigger applications are concerned another useful way of thinking about growth classes involves considering how the compute time will vary if the problem size doubles the following table shows what happens for the various complexity classesf ( log log log nlog if the size of the problem doubles then (nwill be the samef ( nf (nalmost the samelog (log ( )log (log ( more by log ( nf ( twice as big as beforef ( (na bit more than twice as big as before nlog ( (nlog four times as big as beforef ( (neight times as big as beforef ( (nthe square of what it was beforef ( ( ( )) this kind of information can be very useful in practice we can test our program on problem that is half or quarter or one eighth of the full sizeand have good idea of how long we will have to wait for the full size problem to finish moreoverthat estimate won' be affected by any constant factors ignored in computing the growth classor the speed of the particular computer it is run on the following graph plots some of the complexity class functions from the table note that although these functions are only defined on natural numbersthey are drawn as though they were defined for all real numbersbecause that makes it easier to take in the information presented
It is clear from these plots why the non-principal growth terms can be safely ignored when computing algorithm complexity.

Formal Definition of Complexity Classes

We have noted that complexity classes are concerned with growth, and the tables and graph above have provided an idea of what different behaviours mean when it comes to growth. There we have chosen a representative for each of the complexity classes considered, but we have not said anything about just how 'representative' such an element is. Let us now consider a more formal definition of a 'big O' class:

Definition: A function g belongs to the complexity class O(f) if there is a number n0 and a constant c > 0 such that for all n >= n0 we have that g(n) <= c.f(n). We say that the function g is 'eventually smaller' than the function c.f.

It is not totally obvious what this implies. First, we do not need to know exactly when g becomes smaller than c.f. We are only interested in the existence of some n0 such that, from then on, g is smaller than c.f. Second, we wish to consider the efficiency of an algorithm independently of the speed of the computer that is going to execute it. This is why f is multiplied by a constant c. The idea is that when we measure the time of the steps of a particular algorithm, we are not sure how long each of them takes. By definition, g belonging to O(f) means that eventually (namely beyond the point n0) the growth of g will be at most as much as the growth of c.f. This definition also makes it clear that constant factors do not change the growth class (or O-class) of a function. Hence C(n) = n^2 is in the same growth class as any constant multiple or fraction of n^2, so we can write, for example, O(n^2) = O(c.n^2) for any constant c > 0. Typically, however, we choose the simplest representative, as we did in the tables above. In this case it is f(n) = n^2.
The complexity classes we have listed form a hierarchy:

    O(1) < O(log log n) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n)

where each class is contained in the next. We only consider the principal growth class, so when adding functions from different growth classes, their sum will always be in the larger growth class. This allows us to simplify terms. For example, the growth class of a sum such as C(n) = a.n^2 + b.n.log n (with positive constants a and b) can be determined as follows: the summand with the largest growth class is a.n^2 (we say that this is the 'principal sub-term' or 'dominating sub-term' of the function), and we are allowed to drop constant factors, so this function is in the class O(n^2).

When we say that an algorithm 'belongs to' some class O(f), we mean that it is at most as fast growing as f. We have seen that 'linear searching' (where one searches in a collection of data items which is unsorted) has linear complexity, i.e. it is in growth class O(n). This holds for the average case as well as the worst case. The operations needed are comparisons of the item we are searching for with the items appearing in the data collection. In the worst case, we have to check all n entries until we find the right one, which means we make n comparisons. On average, however, we will only have to check n/2 entries until we hit the correct one, leaving us with n/2 operations. Both those functions, C(n) = n and C(n) = n/2, belong to the same complexity class, namely O(n). However, it would be equally correct to say that the algorithm belongs to O(n^2), since that class contains all of O(n), but this would be less informative, and we would not say that an algorithm has quadratic complexity if we know that, in fact, it is linear. Sometimes it is difficult to be sure what the exact complexity is (as is the case with the famous NP = P problem), in which case one might say that an algorithm is 'at most', say, quadratic.

The issue of efficiency and complexity class, and their computation, will be a recurring feature throughout the chapters to come. We shall see that concentrating only on the complexity class, rather than finding exact complexity functions, can render the whole process of considering efficiency much easier. In most cases, we can determine the time complexity by a simple counting of the loops and tree heights. However, we will also see at least one case where that results in an overestimate, and a more exact computation is required.
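As a small illustration of determining a complexity class by counting how often the loops execute, consider the following Java sketch (purely illustrative): the inner statement runs n x n times, so whatever constant-time work it stands for, the time complexity is O(n^2).

    class OperationCounts {
        // counts how many times the inner statement executes: n * n times, hence O(n^2)
        static long quadratic(int n) {
            long count = 0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    count++;   // stands for some constant-time repeated process
            return count;
        }
    }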
trees in computer sciencea tree is very general and powerful data structure that resembles real tree it consists of an ordered set of linked nodes in connected graphin which each node has at most one parent nodeand zero or more children nodes with specific order general specification of trees generallywe can specify tree as consisting of nodes (also called vertices or pointsand edges (also called linesorin order to stress the directednessarcswith tree-like structure it is usually easiest to represent trees pictoriallyso we shall frequently do that simple example is given in figure figure example of tree more formallya tree can be defined as either the empty treeor node with list of successor trees nodes are usuallythough not alwayslabelled with data item (such as number or search keywe will refer to the label of node as its value in our exampleswe will generally use nodes labelled by integersbut one could just as easily choose something elsee strings of characters in order to talk rigorously about treesit is convenient to have some terminologythere always has to be unique 'top levelnode known as the root in figure this is the node labelled with it is important to note thatin computer sciencetrees are normally displayed upside-downwith the root forming the top level thengiven nodeevery node on the next level 'down'that is connected to the given node via branchis child of that node in
oneconnected to the given node (via an edgeon the level aboveis its parent for instancenode is the parent of node (and of node as wellnodes that have the same parent are known as siblings siblings areby definitionalways on the same level if node is the child of child of of another node then we say that the first node is descendent of the second node converselythe second node is an ancestor of the first node nodes which do not have any children are known as leaves ( the nodes labelled with and in figure path is sequence of connected edges from one node to another trees have the property that for every node there is unique path connecting it with the root in factthat is another possible definition of tree the depth or level of node is given by the length of this path hence the root has level its children have level and so on the maximal length of path in tree is also called the height of the tree path of maximal length always goes from the root to leaf the size of tree is given by the number of nodes it contains we shall normally assume that every tree is finitethough generally that need not be the case the tree in figure has height and size tree consisting of just of one node has height and size the empty tree obviously has size and is defined (convenientlythough somewhat artificiallyto have height - like most data structureswe need set of primitive operators (constructorsselectors and conditionsto build and manipulate the trees the details of those depend on the type and purpose of the tree we will now look at some particularly useful types of tree quad-trees quadtree is particular type of tree in which each leaf-node is labelled by value and each non-leaf node has exactly four children it is used most often to partition two dimensional space ( pixelated imageby recursively dividing it into four quadrants formallya quadtree can be defined to be either single node with number or value ( in the range to )or node without value but with four quadtree childrenlullruand rl it can thus be defined "inductivelyby the following rulesdefinition quad tree is either (rule root node with valueor (rule root node without value and four quad tree childrenlullruand rl in which rule is the "base caseand rule is the "induction stepwe say that quadtree is primitive if it consists of single node/numberand that can be tested by the corresponding conditionisvalue(qt)which returns true if quad-tree qt is single node to build quad-tree we have two constructorsbaseqt(value)which returns single node quad-tree with label value makeqt(luqtruqtllqtrlqt)which builds quad-tree from four constituent quadtrees luqtllqtruqtrlqt
lu(qt)which returns the left-upper quad-tree ru(qt)which returns the right-upper quad-tree ll(qt)which returns the left-lower quad-tree rl(qt)which returns the right-lower quad-tree which can be applied whenever isvalue(qtis false for cases when isvalue(qtis truewe could define an operator value(qtthat returns the valuebut conventionally we simply say that qt itself is the required value quad-trees of this type are most commonly used to store grey-value pictures (with representing black and whitea simple example would be we can then create algorithms using the operators to perform useful manipulations of the representation for examplewe could rotate picture qt by usingrotate(qtif isvalue(qtreturn qt else return makeqtrotate(rl(qt))rotate(ll(qt))rotate(ru(qt))rotate(lu(qt)or we could compute average values by recursively averaging the constituent sub-trees there exist numerous variations of this general ideasuch coloured quadtrees which store value-triples that represent colours rather than grey-scaleand edge quad-trees which store lines and allow curves to be represented with arbitrary precision binary trees binary trees are the most common type of tree used in computer science binary tree is tree in which every node has at most two childrenand can be defined "inductivelyby the following rules
(rule the empty tree emptytreeor (rule it consists of node and two binary treesthe left subtree and right subtree againrule is the "base caseand rule is the "induction stepthis definition may appear circularbut actually it is notbecause the subtrees are always simpler than the original oneand we eventually end up with an empty tree you can imagine that the (infinitecollection of (finitetrees is created in sequence of days day is when you "get off the groundby applying rule to get the empty tree on later daysyou are allowed to use any trees that you have created on earlier days to construct new trees using rule thusfor exampleon day you can create exactly trees that have root with valuebut no children ( both the left and right subtrees are the empty treecreated at day on day you can use new node with valuewith the empty tree and/or the one-node treeto create more trees thusbinary trees are the objects created by the above two rules in finite number of steps the height of treedefined aboveis the number of days it takes to create it using the above two ruleswhere we assume that only one rule is used per dayas we have just discussed (exercisework out the sequence of steps needed to create the tree in figure and hence prove that it is in fact binary tree primitive operations on binary trees the primitive operators for binary trees are fairly obvious we have two constructors which are used to build treesemptytreewhich returns an empty treemaketree(vlr)which builds binary tree from root node with label and two constituent binary trees and ra condition to test whether tree is emptyisempty( )which returns true if tree is the emptytreeand three selectors to break non-empty tree into its constituent partsroot( )which returns the value of the root node of binary tree tleft( )which returns the left sub-tree of binary tree tright( )which returns the right sub-tree of binary tree these operators can be used to create all the algorithms we might need for manipulating binary trees for convenience thoughit is often good idea to define derived operators that allow us to write simplermore readable algorithms for examplewe can define derived constructorleaf(vmaketree(vemptytreeemptytreethat creates tree consisting of single node with label vwhich is the root and the unique leaf of the tree at the same time then the tree in figure can be constructed as
maketree( ,maketree( ,emptytree,leaf( )),maketree( ,leaf( ),leaf( )))which is much simpler than the construction using the primitive operatorst maketree( maketree( ,maketree( ,emptytree,emptytree)maketree( ,emptytree,maketree( ,emptytree,emptytree)))maketree( ,maketree( ,emptytree,maketree( ,emptytree,emptytree))maketree( ,maketree( ,emptytree,emptytree)maketree( ,emptytree,emptytree)))note that the selectors can only operate on non-empty trees for examplefor the tree defined above we have root(left(left( ) but the expression root(left(left(left( )))does not make sense because left(left(left( ))emptytree and the empty tree does not have root in language such as javathis would typically raise an exception in language such as this would cause an unpredictable behaviourbut if you are luckya core dump will be produced and the program will be aborted with no further harm when writing algorithmswe need to check the selector arguments using isempty(tbefore allowing their use the following equations should be obvious from the primitive operator definitionsroot(maketree( , , ) left(maketree( , , ) right(maketree( , , ) isempty(emptytreetrue isempty(maketree( , , )false the following makes sense only under the assumption that is non-empty treemaketree(root( ),left( ),right( ) it just says that if we break apart non-empty tree and use the pieces to build new treethen we get an identical tree back it is worth emphasizing that the above specifications of quad-trees and binary trees are further examples of abstract data typesdata types for which we exhibit the constructors and destructors and describe their behaviour (using equations such as defined above for listsstacksqueuesquad-trees and binary trees)but for which we explicitly hide the implementational details the concrete data type used in an implementation is called data structure for examplethe usual data structures used to implement the list and tree data types are records and pointers but other implementations are possible the important advantage of abstract data types is that we can develop algorithms without having to worry about the details of the representation of the data or the implementation of courseeverything will ultimately be represented as sequences of bits in computerbut we clearly do not generally want to have to think in such low level terms
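Sticking with these conventions, a minimal immutable Java sketch of the binary tree abstract data type might look as follows; the class and method names are illustrative choices:

    // An immutable binary tree of integers; null represents the empty tree.
    class BTree {
        final int value;
        final BTree left, right;
        private BTree(int value, BTree left, BTree right) {
            this.value = value; this.left = left; this.right = right;
        }

        static final BTree EMPTY = null;                                              // emptyTree
        static BTree makeTree(int v, BTree l, BTree r) { return new BTree(v, l, r); } // maketree(v, l, r)
        static boolean isEmpty(BTree t) { return t == null; }                         // isempty(t)
        static int root(BTree t) { return t.value; }                                  // root(t), non-empty only
        static BTree left(BTree t) { return t.left; }                                 // left(t), non-empty only
        static BTree right(BTree t) { return t.right; }                               // right(t), non-empty only
        static BTree leaf(int v) { return makeTree(v, EMPTY, EMPTY); }                // derived constructor
    }

With this sketch the equations above hold by construction, for example root(makeTree(v, l, r)) == v, left(makeTree(v, l, r)) == l, and isEmpty(EMPTY) is true.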
The Height of a Binary Tree

Binary trees don't have a simple relation between their size n and height h. The maximum height of a binary tree with n nodes is n - 1, which happens when all non-leaf nodes have precisely one child, forming something that looks like a chain. On the other hand, suppose we have n nodes and want to build from them a binary tree with minimal height. We can achieve this by 'filling' each successive level in turn, starting from the root. It does not matter where we place the nodes on the last (bottom) level of the tree, as long as we don't start adding to the next level before the previous level is full. Terminology varies, but we shall say that such trees are perfectly balanced or height balanced, and we shall see later why they are optimal for many of our purposes. Basically, if done appropriately, many important tree-based operations (such as searching) take as many steps as the height of the tree, so minimizing the height minimizes the time needed to perform those operations.

We can easily determine the maximum number of nodes that can fit into a binary tree of a given height h. Calling this size function s(h), we obtain:

    h     0   1   2   3
    s(h)  1   3   7   15

In fact, it seems fairly obvious that s(h) = 2^(h+1) - 1. This hypothesis can be proved by induction using the definition of a binary tree as follows:

(a) The base case applies to the empty tree that has height h = -1, which is consistent with s(-1) = 2^0 - 1 = 0 nodes being stored.

(b) Then for the induction step: a tree of height h consists of a root node plus two subtrees of height h - 1. By the induction hypothesis, each subtree can store s(h-1) = 2^h - 1 nodes, so the total number of nodes that can fit in a height h tree is 1 + 2 x (2^h - 1) = 2^(h+1) - 1 = s(h).

It follows that if s(h) is correct for the empty tree, which it was shown to be in the base case above, then it is correct for all h.

An obvious potential problem with any proof by induction like this, however, is the need to identify an induction hypothesis to start with, and that is not always easy. Another way to proceed here would be to simply sum the series s(h) = 2^0 + 2^1 + ... + 2^h algebraically to get the answer. Sometimes, however, the relevant series is too complicated to sum easily. An alternative is to try to identify two different expressions for s(h) as a function of s(h-1), and solve them for s(h). Here, since the last level of a height h tree clearly has 2^h nodes, we can explicitly add in those 2^h nodes to give s(h) = s(h-1) + 2^h. Also, since a height h tree is made up of a root node plus two trees of height h - 1, we have s(h) = 1 + 2s(h-1). Then subtracting the second equation from the first gives s(h-1) = 2^h - 1, that is s(h) = 2^(h+1) - 1 as before.
A perfectly balanced tree of height h therefore contains n = s(h) = 2^(h+1) - 1 nodes, so its height is h = log2(n + 1) - 1, which is approximately log2(n), in which the approximation is valid for large n. Hence a perfectly balanced tree consisting of n nodes has height approximately log2(n). This is good, because log2(n) is very small even for relatively large n: for example, log2(1000) is about 10 and log2(1000000) is about 20. We shall see later how we can use binary trees to hold data in such a way that any search has at most as many steps as the height of the tree. Therefore, for perfectly balanced trees, we can reduce the search time considerably. However, it is not always easy to create perfectly balanced trees, as we shall also see later.

The Size of a Binary Tree

Usually a binary tree will not be perfectly balanced, so we will need an algorithm to determine its size, i.e. the number of nodes it contains. This is easy if we use recursion. The terminating case is very simple: the empty tree has size 0. Otherwise, any binary tree will always be assembled from a root node, a left sub-tree l, and a right sub-tree r, and its size will be the sum of the sizes of its components, i.e. 1 for the root, plus the size of l, plus the size of r. We have already defined the primitive operator isempty(t) to check whether a binary tree t is empty, and the selectors left(t) and right(t) which return the left and right sub-trees of binary tree t. Thus we can easily define the procedure size(t), which takes a binary tree t and returns its size, as follows:

    size(t) {
      if ( isempty(t) )
        return 0
      else return (1 + size(left(t)) + size(right(t)))
    }

This recursively processes the whole tree, and we know it will terminate because the trees being processed get smaller with each call, and will eventually reach an empty tree, which returns a simple value.

Implementation of Trees

The natural way to implement trees is in terms of records and pointers, in a similar way to how linked lists were represented as two-cells consisting of a pointer to a list element and a pointer to the next two-cell. Obviously, the details will depend on how many children each node can have, but trees can generally be represented as data structures consisting of a pointer to the root-node content (if any) and pointers to the children sub-trees. The inductive definition of trees means that a whole tree can be passed to an algorithm simply as a
pointer to the relevant root-noderather than having to pass complete copies of whole trees how data structures and pointers are implemented in different programming languages will varyof coursebut the general idea is the same binary tree can be implemented as data record for each node consisting simply of the node value and two pointers to the children nodes then maketree simply creates new data record of that formand rootleft and right simply read out the relevant contents of the record the absence of child node can be simply represented by null pointer recursive algorithms some people have difficulties with recursion source of confusion is that it appears that "the algorithm calls itselfand it might therefore get confused about what it is operating on this way of putting thingsalthough suggestivecan be misleading the algorithm itself is passive entitywhich actually cannot do anything at alllet alone call itself what happens is that processor (which can be machine or personexecutes the algorithm so what goes on when processor executes recursive algorithm such as the size(talgorithm abovean easy way of understanding this is to imagine that whenever recursive call is encounterednew processors are given the task with copy of the same algorithm for examplesuppose that john (the first processor in this taskwants to compute the size of given tree using the above recursive algorithm thenaccording to the above algorithmjohn first checks whether it is empty if it ishe simply returns zero and finishes his computation if it isn' emptythen his tree must have left and right subtrees and (which mayor may notbe emptyand he can extract them using the selectors left(tand right(the can then ask two of his studentssay steve and maryto execute the same algorithmbut for the trees and when they finishsay returning results and respectivelyhe computes and returns + +nbecause his tree has root node in addition to the left and right sub-trees if steve and mary aren' given empty treesthey will themselves have to delegate executions of the same algorithmwith their sub-treesto other people thusthe algorithm is not calling itself what happensis that there are many people running their own copies of the same algorithm on different trees in this examplein order to make things understandablewe assumed that each person executes single copy of the algorithm howeverthe same processorwith some difficultycan impersonate several processorsin such way that it achieves the same result as the execution involving many processors this is achieved via the use of stack that keeps track of the various positions of the same algorithm that are currently being executed but this knowledge is not needed for our purposes note that there is nothing to stop us keeping count of the recursions by passing integers along with any data structures being operated onfor examplefunction(int ntree /terminating condition and return /procedure details return function( -
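A concrete Java sketch of this pattern of passing an integer counter alongside the data structure being operated on might look as follows; the names and details are illustrative, and the BTree type is the one sketched earlier:

    class CountedRecursion {
        // visits at most n levels of the tree, counting the nodes seen on the way down
        static int countDown(int n, BTree t) {
            // terminating condition and return
            if (n == 0 || BTree.isEmpty(t)) return 0;
            // procedure details: here we just count the current node
            return 1 + countDown(n - 1, BTree.left(t));
        }
    }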
A simple example is the standard recursive factorial function:

    factorial(int n) {
      // base case
      if ( n == 0 ) return 1;
      // recursive case
      return n * factorial(n-1);
    }

Another example, with two termination or base-case conditions, is a direct implementation of the recursive definition of Fibonacci numbers (see Appendix A):

    F(int n) {
      // base cases
      if ( n == 0 ) return 0;
      if ( n == 1 ) return 1;
      // recursive case
      return F(n-1) + F(n-2);
    }

though this is an extremely inefficient algorithm for computing these numbers. Exercise: show that the time complexity of this algorithm is O(2^n), and that there exists a straightforward iterative algorithm that has only O(n) time complexity. Is it possible to create an O(n) recursive algorithm to compute these numbers?

In most cases, however, we won't need to worry about counters, because the relevant data structure has a natural end point condition, such as isempty(l), that will bring the recursion to an end.
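The exercise above mentions that a straightforward iterative algorithm with only O(n) time complexity exists; one possible Java sketch (names illustrative) is:

    class Fibonacci {
        // iterative Fibonacci: O(n) time, keeping just the two most recent values
        static long fib(int n) {
            if (n == 0) return 0;
            long previous = 0, current = 1;   // F(0) and F(1)
            for (int i = 2; i <= n; i++) {
                long next = previous + current;
                previous = current;
                current = next;
            }
            return current;
        }
    }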
binary search trees we now look at binary search treeswhich are particular type of binary tree that provide an efficient way of storing data that allows particular items to be found as quickly as possible then we consider further elaborations of these treesnamely avl trees and -treeswhich operate more efficiently at the expense of requiring more sophisticated algorithms searching with arrays or lists as we have already seen in many computer science applications involve searching for particular item in collection of data if the data is stored as an unsorted array or listthen to find the item in questionone obviously has to check each entry in turn until the correct one is foundor the collection is exhausted on averageif there are itemsthis will take / checksand in the worst caseall items will have to be checked if the collection is largesuch as all items accessible via the internetthat will take too much time we also saw that if the items are sorted before storing in an arrayone can perform binary search which only requires log checks in the average and worst cases howeverthat involves an overhead of sorting the array in the first placeor maintaining sorted array if items are inserted or deleted over time the idea here is thatwith the help of binary treeswe can speed up the storing and search process without needing to maintain sorted array search keys if the items to be searched are labelled by comparable keysone can order them and store them in such way that they are sorted already being 'sortedmay mean different things for different keysand which key to choose is an important design decision in our examplesthe search keys willfor simplicityusually be integer numbers (such as student id numbers)but other choices occur in practice for examplethe comparable keys could be words in that casecomparability usually refers to the alphabetical order if and are wordswe write to mean that precedes in the alphabetical order if bed and sky then the relation holdsbut this is not the case if bed and abacus classic example of collection to be searched is dictionary each entry of the dictionary is pair consisting of word and definition the definition is sequence of words and punctuation symbols the search keyin this exampleis the word (to which definition is attached in the dictionary entrythusabstractlya dictionary is sequence of
from the point of view of the search algorithms we are going to consider in what followswe shall concentrate on the search keysbut should always bear in mind that there is usually more substantial data entry associated with it notice the use of the word "abstracthere what we mean is that we abstract or remove any details that are irrelevant from the point of view of the algorithms for examplea dictionary usually comes in the form of bookwhich is sequence of pages but for usthe distribution of dictionary entries into pages is an accidental feature of the dictionary all that matters for us is that the dictionary is sequence of entries so "abstractionmeans "getting rid of irrelevant detailsfor our purposesonly the search key is importantso we will ignore the fact that the entries of the collection will typically be more complex objects (as in the example of dictionary or phone booknote that we should always employ the data structure to hold the items which performs best for the typical application there is no easy answer as to what the best choice is the particular circumstances have to be inspectedand decision has to be made based on that howeverfor many applicationsthe kind of binary trees we studied in the last are particularly useful here binary search trees the solution to our search problem is to store the collection of data to be searched using binary tree in such way that searching for particular item takes minimal effort the underlying idea is simpleat each tree nodewe want the value of that node to either tell us that we have found the required itemor tell us which of its two subtrees we should search for it in for the momentwe shall assume that all the items in the data collection are distinctwith different search keysso each possible node value occurs at most oncebut we shall see later that it is easy to relax this assumption hence we definedefinition binary search tree is binary tree that is either empty or satisfies the following conditionsall values occurring in the left subtree are smaller than that of the root all values occurring in the right subtree are larger than that of the root the left and right subtrees are themselves binary search trees so this is just particular type of binary treewith node values that are the search keys this means we can inherit many of the operators and algorithms we defined for general binary trees in particularthe primitive operators maketree(vlr)root( )left( )right(tand isempty(tare the same we just have to maintain the additional node value ordering building binary search trees when building binary search treeone naturally starts with the root and then adds further new nodes as needed soto insert new value vthe following cases ariseif the given tree is emptythen simply assign the new value to the rootand leave the left and right subtrees empty
If v is smaller than the value of the root, insert v into the left sub-tree.

If v is larger than the value of the root, insert v into the right sub-tree.

If v is equal to the value of the root, report a violated assumption.

Thus, using the primitive binary tree operators, we have the procedure:

   insert(v, bst) {
      if ( isempty(bst) )
         return maketree(v, emptytree, emptytree)
      elseif ( v < root(bst) )
         return maketree(root(bst), insert(v, left(bst)), right(bst))
      elseif ( v > root(bst) )
         return maketree(root(bst), left(bst), insert(v, right(bst)))
      else error('Error: violated assumption in procedure insert.')
   }

which inserts a node with value v into an existing binary search tree bst. Note that the node added is always a leaf. The resulting tree is once again a binary search tree; this can be proved rigorously via an inductive argument.

Note that this procedure creates a new tree out of a given tree bst and a new value v, with the new value inserted at the right position. The original tree bst is not modified, it is merely inspected. However, when the tree represents a large database, it would clearly be more efficient to modify the given tree, rather than to construct a whole new tree. That can easily be done by using pointers, similar to the way we set up linked lists. For the moment, though, we shall not concern ourselves with such implementational details.

Searching a binary search tree

Searching a binary search tree is not dissimilar to the process performed when inserting a new item. We simply have to compare the item being looked for with the root, and then keep 'pushing' the comparison down into the left or right subtree depending on the result of each root comparison, until a match is found or a leaf is reached.

Algorithms can be expressed in many ways. Here is a concise description in words of the search algorithm that we have just outlined:

In order to search for a value v in a binary search tree t, proceed as follows. If t is empty, then v does not occur in t, and hence we stop with false. Otherwise, if v is equal to the root of t, then v does occur in t, and hence we stop returning true. If, on the other hand, v is smaller than the root, then, by the definition of a binary search tree, it is enough to search the left sub-tree of t; hence replace t by its left sub-tree and carry on in the same way. Similarly, if v is bigger than the root, replace t by its right sub-tree and carry on in the same way.

Notice that such a description of an algorithm embodies both the steps that need to be carried out and the reason why this gives a correct solution to the problem. This way of describing algorithms is very common when we do not intend to run them on a computer.
When we do want to run them on a computer, we would normally write the algorithm in pseudocode, such as the following recursive procedure:

   isin(value v, tree t) {
      if ( isempty(t) )
         return false
      elseif ( v == root(t) )
         return true
      elseif ( v < root(t) )
         return isin(v, left(t))
      else
         return isin(v, right(t))
   }

Each recursion restricts the search to either the left or right subtree as appropriate, reducing the search tree height by one, so the algorithm is guaranteed to terminate eventually.

In this case, the recursion can easily be transformed into a while-loop:

   isin(value v, tree t) {
      while ( (not isempty(t)) and (v != root(t)) ) {
         if ( v < root(t) )
            t = left(t)
         else
            t = right(t)
      }
      return not isempty(t)
   }

Here, each iteration of the while-loop restricts the search to either the left or right subtree as appropriate. The only way to leave the loop is to have found the required value, or to only have an empty tree remaining, so the procedure only needs to return whether or not the final tree is empty.

In practice, we often want more than a simple true/false returned. For example, if we are searching for a student id, we usually want a pointer to the full record for that student, not just confirmation that they exist. In that case, we could store a record pointer associated with the search key (id) at each tree node, and return the record pointer or a null pointer, rather than a simple true or false, when an item is found or not found. Clearly, the basic tree structures we have been discussing can be elaborated in many different ways like this to form whatever data structure is most appropriate for the problem at hand, but, as noted above, we can abstract out such details for current purposes.

Time complexity of insertion and search

As always, it is important to understand the time complexity of our algorithms. Both item insertion and search in a binary search tree will take at most as many comparisons as the height of the tree plus one. At worst, this will be the number of nodes in the tree. But how many comparisons are required on average? To answer this question, we need to know the average height of a binary search tree. This can be calculated by taking all possible binary search trees of a given size n and measuring each of their heights, which is by no means an easy task.
by successive insertions as we have seen aboveperfectly balanced trees achieve minimal height for given number of nodesand it turns out that the more balanced treethe more ways there are of building it this is demonstrated in the figure below the only way of getting the tree on the left hand side is by inserting into the empty tree in that order the tree on the righthowevercan be reached in two waysinserting in the order or in the order ideallyof courseone would only use well-balanced trees to keep the height minimalbut they do not have to be perfectly balanced to perform better than binary search trees without restrictions carrying out exact tree height calculations is not straightforwardso we will not do that here howeverif we assume that all the possible orders in which set of nodes might be inserted into binary search tree are equally likelythen the average height of binary search tree turns out to be (log nit follows that the average number of comparisons needed to search binary search tree is (log )which is the same complexity we found for binary search of sorted array howeverinserting new node into binary search tree also depends on the tree height and requires (log nstepswhich is better than the (ncomplexity of inserting an item into the appropriate point of sorted array interestinglythe average height of binary search tree is quite bit better than the average height of general binary tree consisting of the same nodes that have not been built into binary search tree the average height of general binary tree is actually onthe reason for that is that there is relatively large proportion of high binary trees that are not valid binary search trees deleting nodes from binary search tree supposefor some reasonan item needs to be removed or deleted from binary search tree it would obviously be rather inefficient if we had to rebuild the remaining search tree again from scratch for items that would require steps of (log ncomplexityand hence have overall time complexity of (nlog nby comparisondeleting an item from sorted array would only have time complexity ( )and we certainly want to do better than that insteadwe need an algorithm that produces an updated binary search tree more efficiently this is more complicated than one might assume at first sightbut it turns out that the following algorithm works as desiredif the node in question is leafjust remove it if only one of the node' subtrees is non-empty'move upthe remaining subtree if the node has two non-empty sub-treesfind the 'left-mostnode occurring in the right sub-tree (this is the smallest item in the right subtreeuse this node to overwrite the
one that is to be deleted, and then remove that left-most node from the right sub-tree: if it has a right subtree, 'move up' that subtree; otherwise just delete it.

The last part works because the left-most node in the right sub-tree is guaranteed to be bigger than all nodes in the left sub-tree, smaller than all the other nodes in the right sub-tree, and have no left sub-tree itself. For instance, if we delete such a node from the tree in the first figure, we get the tree displayed in the second figure (Figure: example of node deletion in a binary search tree).

In practice, we need to turn the above algorithm (specified in words) into a more detailed algorithm specified using the primitive binary tree operators:

   delete(value v, tree t) {
      if ( isempty(t) )
         error('Error: given item is not in given tree')
      else
         if ( v < root(t) )                  // delete from left sub-tree
            return maketree(root(t), delete(v, left(t)), right(t))
         else if ( v > root(t) )             // delete from right sub-tree
            return maketree(root(t), left(t), delete(v, right(t)))
         else                                // the item to be deleted is root(t)
            if ( isempty(left(t)) )
               return right(t)
            elseif ( isempty(right(t)) )
               return left(t)
            else                             // difficult case with both subtrees non-empty
               return maketree(smallestnode(right(t)), left(t),
                               removesmallestnode(right(t)))
   }

If the empty tree condition is met, it means the search item is not in the tree, and an appropriate error message should be returned.

The delete procedure uses two sub-algorithms to find and remove the smallest item of a given sub-tree. Since the relevant sub-trees will always be non-empty, these sub-algorithms can be written with that precondition. However, it is always the responsibility of the programmer to ensure that any preconditions are met whenever a given procedure is used, so it is important to say explicitly what the preconditions are. It is often safest to start each procedure with a check that the preconditions are satisfied, with an appropriate error message
produced when they are not, but that may have a significant time cost if the procedure is called many times.

First, to find the smallest node, we have:

   smallestnode(tree t) {
      // precondition: t is a non-empty binary search tree
      if ( isempty(left(t)) )
         return root(t)
      else
         return smallestnode(left(t))
   }

which uses the fact that, by the definition of a binary search tree, the smallest node of t is the left-most node. It recursively looks in the left sub-tree till it reaches an empty tree, at which point it can return the root.

The second sub-algorithm uses the same idea:

   removesmallestnode(tree t) {
      // precondition: t is a non-empty binary search tree
      if ( isempty(left(t)) )
         return right(t)
      else
         return maketree(root(t), removesmallestnode(left(t)), right(t))
   }

except that the remaining tree is returned rather than the smallest node.

These procedures are further examples of recursive algorithms. In each case, the recursion is guaranteed to terminate, because every recursive call involves a smaller tree, which means that we will eventually find what we are looking for or reach an empty tree.

It is clear from the algorithm that the deletion of a node requires the same number of steps as searching for a node, or inserting a new node, i.e. the average height of the binary search tree, or O(log n) where n is the total number of nodes in the tree.

Checking whether a binary tree is a binary search tree

Building and using binary search trees as discussed above is usually enough. However, another thing we sometimes need to do is check whether or not a given binary tree is a binary search tree, so we need an algorithm to do that. We know that an empty tree is a (trivial) binary search tree, and also that all nodes in the left sub-tree must be smaller than the root and themselves form a binary search tree, and all nodes in the right sub-tree must be greater than the root and themselves form a binary search tree. Thus the obvious algorithm is:

   isbst(tree t) {
      if ( isempty(t) )
         return true
      else
         return ( allsmaller(left(t), root(t)) and isbst(left(t))
                  and allbigger(right(t), root(t)) and isbst(right(t)) )
   }
with the two sub-procedures given by:

   allsmaller(tree t, value v) {
      if ( isempty(t) )
         return true
      else
         return ( root(t) < v and allsmaller(left(t), v) and allsmaller(right(t), v) )
   }

   allbigger(tree t, value v) {
      if ( isempty(t) )
         return true
      else
         return ( root(t) > v and allbigger(left(t), v) and allbigger(right(t), v) )
   }

However, the simplest or most obvious algorithm is not always the most efficient.

Exercise: identify what is inefficient about this algorithm, and formulate a more efficient algorithm.

Sorting using binary search trees

Sorting is the process of putting a collection of items in order. We shall formulate and discuss many sorting algorithms later, but we are already able to present one of them. The node values stored in a binary search tree can be printed in ascending order by recursively printing each left sub-tree, root, and right sub-tree in the right order as follows:

   printinorder(tree t) {
      if ( not isempty(t) ) {
         printinorder(left(t))
         print(root(t))
         printinorder(right(t))
      }
   }

Then, if the collection of items to be sorted is given as an array a of known size n, they can be printed in sorted order by the algorithm:

   sort(array a of size n) {
      t = emptytree
      for i = 0, 1, ..., n-1
         insert(a[i], t)
      printinorder(t)
   }

which starts with an empty tree, inserts all the items into it using insert(v, t) to give a binary search tree, and then prints them in order using printinorder(t).

Exercise: modify this algorithm so that instead of printing the sorted values, they are put back into the original array in ascending order.
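To tie the binary search tree operations together, here is a minimal runnable Python sketch of insertion, search, deletion and in-order traversal. The tuple representation (value, left, right) with None as the empty tree, and all function names, are illustrative assumptions rather than the notation of the notes:

   # Illustrative Python sketch of the BST operations described above.

   def insert(v, t):
       if t is None:
           return (v, None, None)
       value, left, right = t
       if v < value:
           return (value, insert(v, left), right)
       elif v > value:
           return (value, left, insert(v, right))
       raise ValueError("duplicate value violates the BST assumption")

   def isin(v, t):
       while t is not None and v != t[0]:
           t = t[1] if v < t[0] else t[2]      # go left or right as appropriate
       return t is not None

   def smallest_node(t):                       # precondition: t is non-empty
       return t[0] if t[1] is None else smallest_node(t[1])

   def remove_smallest_node(t):                # precondition: t is non-empty
       if t[1] is None:
           return t[2]
       return (t[0], remove_smallest_node(t[1]), t[2])

   def delete(v, t):
       if t is None:
           raise ValueError("given item is not in the given tree")
       value, left, right = t
       if v < value:
           return (value, delete(v, left), right)
       if v > value:
           return (value, left, delete(v, right))
       if left is None:
           return right
       if right is None:
           return left
       return (smallest_node(right), left, remove_smallest_node(right))

   def in_order(t):                            # the stored values in ascending order
       return [] if t is None else in_order(t[1]) + [t[0]] + in_order(t[2])

   t = None
   for x in [8, 3, 11, 1, 6]:
       t = insert(x, t)
   print(isin(6, t), isin(7, t))               # True False
   print(in_order(delete(8, t)))               # [1, 3, 6, 11]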
balancing binary search trees if the items are added to binary search tree in random orderthe tree tends to be fairly well balanced with height not much more than log howeverthere are many situations where the added items are not in random ordersuch as when adding new student ids in the extreme case of the new items being added in ascending orderthe tree will be one long branch off to the rightwith height log if all the items to be inserted into binary search tree are already sortedit is straightforward to build perfectly balanced binary tree from them one simply has to recursively build binary tree with the middle ( medianitem as the rootthe left subtree made up of the smaller itemsand the right subtree made up of the larger items this idea can be used to rebalance any existing binary search treebecause the existing tree can easily be output into sorted array as discussed in section exercisewrite an algorithm that rebalances binary search tree in this wayand work out its time complexity another way to avoid unbalanced binary search trees is to rebalance them from time to time using tree rotations such tree rotations are best understood as followsany binary search tree containing at least two nodes can clearly be drawn in one of the two formswhere and are the required two nodes to be rotatedand ac and are binary search sub-trees (any of which may be emptythe two forms are related by left and right tree rotations which clearly preserve the binary search tree property in this caseany nodes in sub-tree would be shifted up the tree by right rotationand any nodes in sub-tree would be shifted up the tree by left rotation for exampleif the left form had consisting of two nodesand and consisting of one nodethe height of the tree would be reduced by one and become perfectly balanced by right tree rotation typicallysuch tree rotations would need to be applied to many different sub-trees of full tree to make it perfectly balanced for exampleif the left form had consisting of two nodesand and consisting of one nodethe tree would be balanced by first performing left rotation of the - - sub-treefollowed by right rotation of the whole tree in practicefinding suitable sequences of appropriate tree rotations to rebalance an arbitrary binary search tree is not straightforwardbut it is possible to formulate systematic balancing algorithms that are more efficient than outputting the whole tree and rebuilding it self-balancing avl trees self-balancing binary search trees avoid the problem of unbalanced trees by automatically rebalancing the tree throughout the insertion process to keep the height close to log at each stage obviouslythere will be cost involved in such rebalancingand there will be
a trade-off between the time taken for the rebalancing and the time saved by the reduced height of the tree, but generally it is worthwhile.

The earliest type of self-balancing binary search tree was the AVL tree (named after its inventors Adelson-Velskii and Landis). These maintain the difference in heights of the two sub-trees of all nodes to be at most one. This requires the tree to be periodically rebalanced by performing one or more tree rotations as discussed above, but the complexity of insertion, deletion and search remain at O(log n).

The general idea is to keep track of the balance factor for each node, which is the height of the left sub-tree minus the height of the right sub-tree. By definition, all the nodes in an AVL-tree will have a balance factor in the integer range [-1, 1]. However, insertion or deletion of a node could leave that in the wider range [-2, 2], requiring a tree rotation to bring it back into AVL form.

Exercise: find some suitable algorithms for performing efficient AVL tree rotations. Compare them with other self-balancing approaches such as red-black trees.

B-trees

A B-tree is a generalization of a self-balancing binary search tree in which each node can hold more than one search key and have more than two children. The structure is designed to allow more efficient self-balancing, and offers particular advantages when the node data needs to be kept in external storage such as disk drives. The standard (Knuth) definition is:

Definition. A B-tree of order m is a tree which satisfies the following conditions:

1. Every node has at most m children.
2. Every non-leaf node (except the root node) has at least m/2 children.
3. The root node, if it is not a leaf node, has at least two children.
4. A non-leaf node with c children contains c-1 search keys which act as separation values to divide its sub-trees.
5. All leaf nodes appear in the same level, and carry information.

There appears to be no definitive answer to the question of what the "B" in "B-tree" stands for. It is certainly not "binary", but it could equally well be "balanced", "broad" or "bushy", or even "Boeing", because they were invented by people at Boeing Research Labs.

In the standard representation of a simple example, the search keys held in each node are ordered, and the non-leaf nodes' search keys act as separation values to divide
the contents of its sub-trees, in much the same way that the root key of a binary search tree separates the values held in its two sub-trees. For example, if a node has three child nodes (or sub-trees) then it must have two separation values, say v1 and v2. All values in the leftmost subtree will be less than v1, all values in the middle subtree will be between v1 and v2, and all values in the rightmost subtree will be greater than v2. That allows insertion and searching to proceed from the root down in a similar way to binary search trees.

The restriction on the number of children to lie between m/2 and m means that the best case height of an order m B-tree containing n search keys is log_m n and the worst case height is log_(m/2) n. Clearly the costs of insertion, deletion and searching will all be proportional to the tree height, as in a binary search tree, which makes them very efficient. The requirement that all the leaf nodes are at the same level means that B-trees are always balanced and thus have minimal height, though rebalancing will often be required to restore that property after insertions and deletions.

The order m of a B-tree is typically chosen to optimize a particular application and implementation. To maintain the conditions of the B-tree definition, non-leaf nodes often have to be split or joined when new items are inserted into or deleted from the tree (which is why there is a factor of two between the minimum and maximum number of children), and rebalancing is often required. This renders the insertion and deletion algorithms somewhat more complicated than for binary search trees. An advantage of B-trees over self-balancing binary search trees, however, is that the range of child nodes means that rebalancing is required less frequently. A disadvantage is that there may be more space wastage, because nodes will rarely be completely full. There is also the cost of keeping the items within each node ordered, and having to search among them, but for reasonably small orders m, that cost is low.

Exercise: find some suitable insertion, deletion and rebalancing algorithms for B-trees.
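As an illustration of how the separation values guide a search from the root down, here is a minimal Python sketch of B-tree search only (insertion and rebalancing are not shown). The node representation, a dict with ordered 'keys' and 'children' lists, is an illustrative assumption:

   # Illustrative Python sketch: searching a B-tree.
   from bisect import bisect_left

   def btree_search(node, v):
       if node is None:
           return False
       keys, children = node["keys"], node["children"]
       i = bisect_left(keys, v)             # first position with keys[i] >= v
       if i < len(keys) and keys[i] == v:   # found the search key in this node
           return True
       if not children:                     # leaf node: nowhere further to look
           return False
       return btree_search(children[i], v)  # descend between the separation values

   leaf = lambda *ks: {"keys": list(ks), "children": []}
   root = {"keys": [20, 40], "children": [leaf(5, 10), leaf(25, 30), leaf(45, 50)]}
   print(btree_search(root, 30), btree_search(root, 22))   # True False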
Priority queues and heap trees

Trees stored in arrays

It was noted earlier that binary trees can be stored with the help of pointer-like structures, in which each item contains references to its children. If the tree in question is a complete binary tree, there is a useful array-based alternative.

Definition. A binary tree is complete if every level, except possibly the last, is completely filled, and all the leaves on the last level are placed as far to the left as possible.

Intuitively, a complete binary tree is one that can be obtained by filling the nodes starting with the root, and then each next level in turn, always from the left, until one runs out of nodes. Complete binary trees always have minimal height for their size n, namely log2 n, and are always perfectly balanced (but not every perfectly balanced tree is complete in the sense of the above definition). Moreover, and more importantly, it is possible for them to be stored straightforwardly in arrays, top-to-bottom left-to-right, as in the example of a seven-node tree stored in array positions a[1] to a[7].

For complete binary trees, such arrays provide very tight representations. Notice that this time we have chosen to start the array with index 1 rather than 0. This has several computational advantages: the nodes on level i then have indices 2^i, ..., 2^(i+1) - 1; the level of a node with index i is log2 i rounded down; the children of a node with index i, if they exist, have indices 2i and 2i+1; and the parent of a child with index i has index i/2 (using integer division). This allows the following simple algorithms:

   boolean isroot(int i) {
      return i == 1
   }
   int level(int i) {
      return log(i)
   }

   int parent(int i) {
      return i / 2
   }

   int left(int i) {
      return 2*i
   }

   int right(int i) {
      return 2*i + 1
   }

which make the processing of these trees much easier.

This way of storing a binary tree as an array, however, will not be efficient if the tree is not complete, because it involves reserving space in the array for every possible node in the tree. Since keeping binary search trees balanced is a difficult problem, it is therefore not really a viable option to adapt the algorithms for binary search trees to work with them stored as arrays. Array-based representations will also be inefficient for binary search trees because node insertion or deletion will usually involve shifting large portions of the array. However, we shall now see that there is another kind of binary tree for which array-based representations allow very efficient processing.

Priority queues and binary heap trees

While most queues in every-day life operate on a first come, first served basis, it is sometimes important to be able to assign a priority to the items in the queue, and always serve the item with the highest priority next. An example of this would be a hospital casualty department, where life-threatening injuries need to be treated first. The structure of a complete binary tree in array form is particularly useful for representing such priority queues.

It turns out that these queues can be implemented efficiently by a particular type of complete binary tree known as a binary heap tree. The idea is that the node labels, which were the search keys when talking about binary search trees, are now numbers representing the priority of each item in question (with higher numbers meaning higher priority in our examples). With heap trees, it is possible to insert and delete elements efficiently without having to keep the whole tree sorted like a binary search tree. This is because we only ever want to remove one element at a time, namely the one with the highest priority present, and the idea is that the highest priority item will always be found at the root of the tree.

Definition. A binary heap tree is a complete binary tree which is either empty or satisfies the following conditions:

1. The priority of the root is higher than (or equal to) that of its children.
2. The left and right subtrees of the root are heap trees.
Equivalently, the priority of every node is higher than (or equal to) that of all its descendants, or, as a complete binary tree for which the priorities become smaller along every path down through the tree.

The most obvious difference between a binary heap tree and a binary search tree is that the biggest number now occurs at the root rather than at the right-most node. Secondly, whereas with binary search trees the left and right sub-trees connected to a given parent node play very different roles, they are interchangeable in binary heap trees.

Three examples of binary trees that are valid heap trees, and three which are not, are shown in the figures: the first invalid example violates the required priority ordering, the second is not perfectly balanced and hence not complete, and the third is not complete because the node on the last level is not as far to the left as possible.

Basic operations on binary heap trees

In order to develop algorithms using an array representation, we need to allocate memory and keep track of the largest position that has been filled so far, which is the same as the current number of nodes in the heap tree. This will involve something like:

   int MAX          // maximum number of nodes allowed
   int heap[MAX+1]  // stores the priority values of the nodes of the heap tree
   int n            // largest position that has been filled so far

For heap trees to be a useful representation of priority queues, we must be able to insert new nodes (or customers) with a given priority, delete unwanted nodes, and identify and remove the top-priority node, i.e. the root (that is, 'serve' the highest priority customer). We also need to be able to determine when the queue/tree is empty. Thus, assuming the priorities are given by integers, we need a constructor, mutators/selectors, and a condition:

   insert(int p, array heap, int n)
   delete(int i, array heap, int n)
   int root(array heap, int n)
   boolean heapempty(array heap, int n)

Identifying whether the heap tree is empty, and getting the root and last leaf, is easy:
   boolean heapempty(array heap, int n) {
      return n == 0
   }

   int root(array heap, int n) {
      if ( heapempty(heap,n) )
         error('Heap is empty')
      else return heap[1]
   }

   int lastleaf(array heap, int n) {
      if ( heapempty(heap,n) )
         error('Heap is empty')
      else return heap[n]
   }

Inserting and deleting heap tree nodes is also straightforward, but not quite so easy.

Inserting a new heap tree node

Since we always keep track of the last position n in the tree which has been filled so far, we can easily insert a new element at position n+1, provided there is still room in the array, and increment n. The tree that results will still be a complete binary tree, but the heap tree priority ordering property might have been violated. Hence we may need to 'bubble up' the new element into a valid position. This can be done easily by comparing its priority with that of its parent, and if the new element has higher priority, then it is exchanged with its parent. We may have to repeat this process, but once we reach a parent that has higher or equal priority, we can stop, because we know there can be no lower priority items further up the tree. Hence an algorithm which inserts a new heap tree node with priority p is:

   insert(int p, array heap, int n) {
      if ( n == MAX )
         error('Heap is full')
      else {
         heap[n+1] = p
         bubbleup(n+1, heap, n+1)
      }
   }

   bubbleup(int i, array heap, int n) {
      if ( isroot(i) )
         return
      elseif ( heap[i] > heap[parent(i)] ) {
         swap heap[i] and heap[parent(i)]
         bubbleup(parent(i), heap, n)
      }
   }
Note that this insert algorithm does not increment the heap size n; that has to be done separately by whatever algorithm calls it.

Inserting a node takes at most O(log n) steps, because the maximum number of times we may have to 'bubble up' the new element is the height of the tree, which is log2 n.

Deleting a heap tree node

To use a binary heap tree as a priority queue, we will regularly need to delete the root, i.e. remove the node with the highest priority. We will then be left with something which is not a binary tree at all. However, we can easily make it into a complete binary tree again by taking the node at the 'last' position and using that to fill the new vacancy at the root. However, as with the insertion of a new item, the heap tree (priority ordering) property might be violated. In that case, we will need to 'bubble down' the new root by comparing it with both its children and exchanging it with the largest. This process is then repeated until the new root element has found a valid place. Thus, a suitable algorithm is:

   deleteroot(array heap, int n) {
      if ( n < 1 )
         error('Node does not exist')
      else {
         heap[1] = heap[n]
         bubbledown(1, heap, n-1)
      }
   }

A similar process can also be applied if we need to delete any other node from the heap tree, but in that case we may need to 'bubble up' the shifted last node rather than bubble it down. Since the original heap tree is ordered, items will only ever need to be bubbled up or down, never both, so we can simply call both, because neither procedure changes anything if it is not required. Thus, an algorithm which deletes any node i from a heap tree is:

   delete(int i, array heap, int n) {
      if ( n < i )
         error('Node does not exist')
      else {
         heap[i] = heap[n]
         bubbleup(i, heap, n-1)
         bubbledown(i, heap, n-1)
      }
   }

The bubble down process is more difficult to implement than bubble up, because a node may have none, one or two children, and those three cases have to be handled differently. In the case of two children, it is crucial that when both children have higher priority than the given node, it is the highest priority one that is swapped up, or their priority ordering will be violated. Thus we have:
   bubbledown(int i, array heap, int n) {
      if ( left(i) > n )                       // no children
         return
      elseif ( right(i) > n ) {                // only left child
         if ( heap[i] < heap[left(i)] )
            swap heap[i] and heap[left(i)]
      }
      else {                                   // two children
         if ( heap[left(i)] > heap[right(i)] and heap[i] < heap[left(i)] ) {
            swap heap[i] and heap[left(i)]
            bubbledown(left(i), heap, n)
         }
         elseif ( heap[i] < heap[right(i)] ) {
            swap heap[i] and heap[right(i)]
            bubbledown(right(i), heap, n)
         }
      }
   }

In the same way that the insert algorithm does not increment the heap size, this delete algorithm does not decrement the heap size n; that has to be done separately by whatever algorithm calls it. Note also that this algorithm does not attempt to be fair in the sense that if two or more nodes have the same priority, it is not necessarily the one that has been waiting longest that will be removed first. However, this factor could easily be fixed, if required, by keeping track of arrival times and using that in cases of equal priority.

As with insertion, deletion takes at most O(log n) steps, because the maximum number of times it may have to bubble down or bubble up the replacement element is the height of the tree, which is log2 n.

Building a new heap tree from scratch

Sometimes one is given a whole set of new items in one go, and there is a need to build a binary heap tree containing them. In other words, we have a set of items that we wish to heapify. One obvious possibility would be to insert the items one by one into a heap tree, starting from an empty tree, using the O(log n) 'bubble up' based insert algorithm discussed earlier. That would clearly have an overall time complexity of O(nlog n). It turns out, however, that rearranging an array of items into heap tree form can be done more efficiently using 'bubble down'.

First note that, if we have the n items in an array a in positions 1, ..., n, then all the items with an index greater than n/2 will be leaves, and not need bubbling down. Therefore, if we just bubble down all the non-leaf items a[n/2], ..., a[1] by exchanging them with the larger of their children until they either are positioned at a leaf, or until their children are both smaller, we obtain a valid heap tree.

Consider a simple example array of items from which a heap tree must be built. We can start by simply drawing the array as a tree, and see that the last entries (those with indices greater than n/2) are leaves of the tree.
The rearrangement algorithm then bubbles down the non-leaf entries in turn, starting with a[n/2] and working back to a[1]. Some of these bubble downs turn out not to be necessary, so the array remains the same; for the others, the entry is swapped with its larger child, possibly several times, until it reaches a valid position. After the final entry a[1] has been bubbled down, the array has been rearranged as the required heap tree.

Thus, using the above bubbledown procedure, the algorithm to build a complete binary heap tree from any given array a of size n is simply:

   heapify(array a, int n) {
      for( i = n/2 ; i > 0 ; i-- )
         bubbledown(i, a, n)
   }

The time complexity of this heap tree creation algorithm might be computed as follows: it potentially bubbles down n/2 items, namely those with indices 1, ..., n/2. The maximum number of bubble down steps for each of those items is the height of the tree, which is log2 n, and each step involves two comparisons - one to find the highest priority child node, and one to compare the item with that child node. So the total number of comparisons involved is at most (n/2) * 2 * log2 n = n log2 n, which is the same as we would have by inserting the array items one at a time into an initially empty tree.

In fact, this is a good example of a situation in which naive counting of loops and tree heights over-estimates the time complexity. This is because the number of bubble down steps
is usually much smaller than the height of the tree: lower down the tree, there are more nodes, and fewer potential bubble down steps, so the total number of operations will actually be much less than n log2 n.

To be sure of the complexity class, we need to perform a more accurate calculation. At each level i of a tree of height h there will be 2^i nodes, with at most h-i bubble down steps, each with 2 comparisons, so the total number of comparisons for a tree of height h will be at most

   C(h) = sum_{i=0,...,h} 2^i * 2(h-i) = 2^(h+1) * sum_{j=0,...,h} j/2^j

The final sum converges to 2 as h increases (see appendix), so for large n we have

   C(h) ~ 2^(h+1) * 2 = 2^(h+2) = 4 * 2^h ~ 4n

and the worst case will be no more than twice that. Thus, the total number of operations is O(4n) = O(n), meaning that the complexity class of heapify is actually O(n), which is better than the O(nlog n) complexity of inserting the items one at a time.

Merging binary heap trees

Frequently one needs to merge two existing priority queues based on binary heap trees into a single priority queue. To achieve this, there are three obvious ways of merging two binary heap trees s and t of similar size n into a single binary heap tree:

1. Move all the items from the smaller heap tree one at a time into the larger heap tree using the standard insert algorithm. This will involve moving O(n) items, and each of them will need to be bubbled up at cost O(log n), giving an overall time complexity of O(nlog n).

2. Repeatedly move the last items from one heap tree to the other using the standard insert algorithm, until the new binary tree maketree(0, t, s) is complete. Then move the last item of the new tree to replace the dummy root "0", and bubble down that new root. How this is best done will depend on the sizes of the two trees, so this algorithm is not totally straightforward. On average, around half the items in the last level of one tree will need moving and bubbling, so that will be O(n) moves, each with a cost of O(log n), again giving an overall time complexity of O(nlog n). However, the actual number of operations required will, on average, be a lot less than the previous approach, by something like a factor of four, so this approach is more efficient, even though the algorithm is more complex.

3. Simply concatenate the array forms of the heap trees s and t and use the standard heapify algorithm to convert that array into a new binary heap tree. The heapify algorithm has time complexity O(n), and the concatenation need be no more than that, so this approach has O(n) overall time complexity, making it in general the best approach of all three.

Thus, the merging of binary heap trees generally has O(n) time complexity.
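Before considering the remaining implementation trade-offs, here is a minimal runnable Python sketch of the bubbledown and heapify procedures used above (and by the third merging approach). It uses a 1-indexed list (index 0 unused) so that the index arithmetic, children 2i and 2i+1 and parent i/2, matches the text; the representation and names are illustrative choices:

   # Illustrative Python sketch: bubble-down based heapify on a 1-indexed array.

   def bubbledown(i, heap, n):
       left, right = 2 * i, 2 * i + 1
       if left > n:                                   # no children
           return
       if right > n:                                  # only a left child
           if heap[i] < heap[left]:
               heap[i], heap[left] = heap[left], heap[i]
           return
       # two children: swap with the larger child if it beats the current item
       if heap[left] > heap[right] and heap[i] < heap[left]:
           heap[i], heap[left] = heap[left], heap[i]
           bubbledown(left, heap, n)
       elif heap[i] < heap[right]:
           heap[i], heap[right] = heap[right], heap[i]
           bubbledown(right, heap, n)

   def heapify(a):
       """Rearrange the 1-indexed array a[1..n] into max-heap order in place."""
       n = len(a) - 1
       for i in range(n // 2, 0, -1):                 # bubble down the non-leaf entries
           bubbledown(i, a, n)

   a = [None, 5, 8, 3, 9, 1, 4, 7, 6, 2]              # a[0] is unused padding
   heapify(a)
   print(a[1])                                        # 9: highest priority at the root
   print(all(a[i // 2] >= a[i] for i in range(2, len(a))))   # True: heap property holds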
If maketree were a cheap operation, as it would be for a pointer-based representation, the second approach might look like a better choice of approach than the third approach. However, maketree will itself generally be an O(n) procedure if the trees are array-based, rather than pointer-based, which they usually are for binary heap trees. So, for array-based similarly-sized binary heaps, the third approach is usually best. If the heap trees to be merged have very different sizes m and n > m, the first approach will have overall time complexity O(mlog n), which could be more efficient than an O(n) approach if m << n. In practice, a good general purpose merge algorithm would check the sizes of the two trees and use them to determine the best approach to apply.

Binomial heaps

A binomial heap is similar to a binary heap as described above, but has the advantage of more efficient procedures for insertion and merging. Unlike a binary heap, which consists of a single binary tree, a binomial heap is implemented as a collection of binomial trees.

Definition. A binomial tree is defined recursively as follows:

1. A binomial tree of order 0 is a single node.
2. A binomial tree of order k has a root node with children that are roots of binomial trees of orders k-1, k-2, ..., 1, 0 (in that order).

Thus, a binomial tree of order k has height k, contains 2^k nodes, and is trivially constructed by attaching one order k-1 binomial tree as the left-most child of another order k-1 binomial tree. Binomial trees of the first few orders take the forms shown in the figures, and it is clear from these what higher order trees will look like.

A binomial heap is constructed as a collection of binomial trees with a particular structure and node ordering properties:

1. There can only be zero or one binomial tree of each order.
2. Each constituent binomial tree must satisfy the priority ordering property, i.e. each node must have priority less than or equal to its parent.
Since a binomial tree of order k contains exactly 2^k nodes, and a binomial heap can only contain zero or one binomial tree of each order, the total number of nodes in a binomial heap must be

   n = sum_{k=0,...,log2 n} b_k 2^k ,   b_k in {0, 1}

where b_k specifies the number of trees of order k. Thus there is a one-to-one mapping between the binomial heap structure and the standard binary representation of the number n, and since the binary representation is clearly unique, so is the binomial heap structure. The maximum number of trees in a heap with n nodes therefore equals the number of digits when n is written in binary without leading zeros, i.e. log2 n + 1. The heap can be stored efficiently as a linked list of root nodes ordered by increasing tree order.

The most important operation for binomial heaps is merge, because that can be used as a sub-process for most other operations. Underlying that is the merge of two binomial trees of order k into a binomial tree of order k+1. By definition, that is achieved by adding one of those trees as the left-most sub-tree of the root of the other, and preservation of the priority ordering simply requires that it is the tree with the highest priority root that provides the root of the combined tree. This clearly has O(1) time complexity. Then merging two whole binomial heaps is achieved by merging the constituent trees whenever there are two of the same order, in a sequential manner analogous to the addition of two binary numbers. In this case, the O(1) insert complexity will be multiplied by the number of trees, which is O(log n), so the overall time complexity of merge is O(log n). This is better than the O(n) complexity of merging binary heaps that can be achieved by concatenating the heap arrays and using the O(n) heapify algorithm.

Insertion of a new element into an existing binomial heap can easily be done by treating the new element as a binomial heap consisting of a single node (i.e. an order zero tree), and merging that using the standard merge algorithm. The average time complexity of that insert is given by computing the average number of O(1) tree combinations required: the probability of needing the order zero combination is 1/2, the probability of needing a second combination is 1/4, the third 1/8, and so on, which sum to one. So insertion has O(1) overall time complexity. That is better than the O(log n) complexity of insertion into a standard binary heap.

Creating a whole new binomial heap from scratch can be achieved by using the O(1) insert process for each of the n items, giving an overall time complexity of O(n). In this case, there is no better process, so heapify here has the same time complexity as the heapify algorithm for binary heaps.

Another important heap operation in practice is that of updating the heap after increasing a node priority. For standard binary heaps, that simply requires application of the usual bubble-up process with O(log n) complexity. Clearly, a similar process can be used in binomial heaps, and that will also be of O(log n) complexity.

The highest priority node in a binomial heap will clearly be the highest priority root node, and a pointer to that can be maintained by each heap update operation without increasing the complexity of the operation. Serving the highest priority item requires deleting the highest priority node from the order k tree it appears in, and that will break it up into another binomial heap consisting of trees of all orders from 0 to k-1. However, those trees can easily be merged back into the original heap using the standard merge algorithm, with O(log n) complexity. An arbitrary node can be deleted by combining
the existing operations by increasing the relevant node priority to infinitybubbling-upand using the root delete operationagain with (log ncomplexity overall sothe complexity of delete is always (log nexercisefind pseudocode versions of the mergeinsert and delete algorithms for binomial heapsand see exactly how their time complexities arise fibonacci heaps fibonacci heap is another collection of trees that satisfy the standard priority-ordering property it can be used to implement priority queue in similar way to binary or binomial heapsbut the structure of fibonacci heaps are more flexible and efficientwhich allows them to have better time complexities they are named after the fibonacci numbers that restrict the tree sizes and appear in their time complexity analysis the flexibility and efficiency of fibonacci heaps comes at the cost of more complexitythe trees do not have fixed shapeand in the extreme cases every element in the heap can be in separate tree normallythe roots of all the trees are stored using circular doubly linked listand the children of each node are handled in the same way pointer to the highest priority root node is maintainedmaking it trivial to find the highest priority node in the heap the efficiency is achieved by performing many operations in lazy mannerwith much of the work postponed for later operations to deal with fibonacci heaps can easily be merged with ( complexity by simply concatenating the two lists of root nodesand then insertion can be done by merging the existing heap with new heap consisting only of the new node by inserting items one at timea whole heap can be created from scratch with (ncomplexity obviouslyat some pointorder needs to be introduced into the heap to achieve the overall efficiency this is done by keeping the number of children of all nodes to be at most (log )and the size of subtree rooted in node with children is at least fk+ where fk is the kth fibonacci number the number of trees in the heap is decreased as part of the delete operation that is used to remove the highest priority node and update the pointer to the highest priority root this delete algorithm is quite complex first it removes the highest priority rootleaving its children to become roots of new trees within the heapthe processing of which will be (log nthen the number of trees is reduced by linking together trees that have roots with the same number of childrensimilar to binomial heapuntil every root has different number of childrenleaving at most (log ntrees finally the roots of those trees are checked to reset the pointer to the highest priority it can be shown that all the required processes can be completed with (log naverage time complexity for each nodea record is kept of its number of children and whether it is marked the mark indicates that at least one of its children has been separated since the node was made child of another nodeso all roots are unmarked the mark is used by the algorithm for increasing node prioritywhich is also complexbut can be achieved with ( complexity this gives fibonacci heaps an important advantage over both binary and binomial heaps for which this operation has (log ntime complexity finallyan arbitrary node can be deleted from the heap by increasing its node priority to infinity and applying the delete highest priority algorithmresulting in an overall time complexity of (log
n).

Exercise: work out how Fibonacci numbers are involved in computing their time complexities.

Comparison of heap time complexities

It is clear that the more complex binomial and Fibonacci heaps offer average time complexity advantages over simple binary heap trees. The following table summarizes the average time complexities of the crucial heap operations:

   Heap type      Binary      Binomial    Fibonacci
   insert         O(log n)    O(1)        O(1)
   delete         O(log n)    O(log n)    O(log n)
   merge          O(n)        O(log n)    O(1)
   heapify        O(n)        O(n)        O(n)
   up priority    O(log n)    O(log n)    O(1)

Obviously, it will depend on the application in question whether using a more complicated heap is worth the effort. We shall see later that Fibonacci heaps are important in practice because they are used in the most efficient versions of many algorithms that can be implemented using priority queues, such as Dijkstra's algorithm for finding shortest routes, and Prim's algorithm for finding minimal spanning trees.
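For everyday programming, it is worth knowing that Python's standard library heapq module maintains a binary min-heap in a plain list. Since the heaps in this chapter serve the highest priority first, one common trick (an implementation choice, not something from the notes) is to negate the priorities:

   # Illustrative use of Python's built-in binary heap as a priority queue.
   import heapq

   queue = []
   for priority, item in [(3, "check-up"), (9, "cardiac arrest"), (5, "broken arm")]:
       heapq.heappush(queue, (-priority, item))   # O(log n) insert / bubble up

   while queue:
       priority, item = heapq.heappop(queue)      # O(log n) removal of the root
       print(-priority, item)                     # 9, then 5, then 3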
sorting the problem of sorting in computer science'sortingusually refers to bringing set of items into some well-defined order to be able to do thiswe first need to specify the notion of order on the items we are considering for examplefor numbers we can use the usual numerical order (that isdefined by the mathematical 'less thanor '<relationand for strings the so-called lexicographic or alphabetic orderwhich is the one dictionaries and encyclopedias use usuallywhat is meant by sorting is that once the sorting process is finishedthere is simple way of 'visitingall the items in orderfor example to print out the contents of database this may well mean different things depending on how the data is being stored for exampleif all the objects are sorted and stored in an array of size nthen for , - print( [ ]would print the items in ascending order if the objects are stored in linked listwe would expect that the first entry is the smallestthe next the second-smallestand so on oftenmore complicated structures such as binary search trees or heap trees are used to sort the itemswhich can then be printedor written into an array or linked listas desired sorting is important because having the items in order makes it much easier to find given itemsuch as the cheapest item or the file corresponding to particular student it is thus closely related to the problem of searchas we saw with the discussion of binary search tress if the sorting can be done beforehand (off-line)this enables faster access to the required itemwhich is important because that often has to be done on the fly (on-linewe have already seen thatby having the data items stored in sorted array or binary search treewe can reduce the average (and worst casecomplexity of searching for particular item to (log nstepswhereas it would be (nsteps without sorting soif we often have to look up itemsit is worth the effort to sort the whole collection first imagine using dictionary or phone book in which the entries do not appear in some known logical order it follows that sorting algorithms are important tools for program designers different algorithms are suited to different situationsand we shall see that there is no 'bestsorting algorithm for everythingand therefore number of them will be introduced in these notes it is worth noting that we will be far from covering all existing sorting algorithms in factthe field is still very much aliveand new developments are taking place all the time however
new algorithms tend to be derived by simply tweaking existing principlesalthough we still do not have accurate measures of performance for some sorting algorithms common sorting strategies one way of organizing the various sorting algorithms is by classifying the underlying ideaor 'strategysome of the key strategies areenumeration sorting consider all items if we know that there are items which are smaller than the one we are currently consideringthen its final position will be at number exchange sorting if two items are found to be out of orderexchange them repeat till all items are in order selection sorting find the smallest itemput it in the first positionfind the smallest of the remaining itemsput it in the second position insertion sorting take the items one at time and insert them into an initially empty data structure such that the data structure continues to be sorted at each stage divide and conquer recursively split the problem into smaller sub-problems till you just have single items that are trivial to sort then put the sorted 'partsback together in way that preserves the sorting all these strategies are based on comparing items and then rearranging them accordingly these are known as comparison-based sorting algorithms we will later consider other noncomparison-based algorithms which are possible when we have specific prior knowledge about the items that can occuror restrictions on the range of items that can occur the ideas above are based on the assumption that all the items to be sorted will fit into the computer' internal memorywhich is why they are often referred to as being internal sorting algorithms if the whole set of items cannot be stored in the internal memory at one timedifferent techniques have to be used these daysgiven the growing power and memory of computersexternal storage is becoming much less commonly needed when sortingso we will not consider external sorting algorithms in detail suffice to saythey generally work by splitting the set of items into subsets containing as many items as can be handled at one timesorting each subset in turnand then carefully merging the results how many comparisons must it takean obvious way to compute the time complexity of sorting algorithms is to count the number of comparisons they need to carry outas function of the number of items to be sorted there is clearly no general upper bound on the number of comparisons usedsince particularly stupid algorithm might compare the same two items indefinitely we are more interested in having lower bound for the number of comparisons needed for the best algorithm in the worst case in other wordswe want to know the minimum number of comparisons required
in the worst case, so that we can judge how well particular sorting algorithms compare against that theoretical lower bound.

In general, questions of this kind are rather hard, because of the need to consider all possible algorithms. In fact, for some problems, optimal lower bounds are not yet known. One important example is the so-called Travelling Salesman Problem (TSP), for which all algorithms which are known to give the correct shortest route solution are extremely inefficient in the worst case (many to the extent of being useless in practice). In these cases, one generally has to relax the problem to find solutions which are probably approximately correct. For the TSP, it is still an open problem whether there exists a feasible algorithm that is guaranteed to give the exact shortest route.

For sorting algorithms based on comparisons, however, it turns out that a tight lower bound does exist. Clearly, even if the given collection of items is already sorted, we must still check all the items one at a time to see whether they are in the correct order. Thus, the lower bound must be at least n, the number of items to be sorted, since we need at least n steps to examine every element. If we already knew a sorting algorithm that works in n steps, then we could stop looking for a better algorithm: n would be both a lower bound and an upper bound to the minimum number of steps, and hence an exact bound. However, as we shall shortly see, no algorithm can actually take fewer than O(nlog n) comparisons in the worst case. If, in addition, we can design an algorithm that works in O(nlog n) steps, then we will have obtained an exact bound. We shall start by demonstrating that every algorithm needs at least O(nlog n) comparisons.

To begin with, let us assume that we only have three items, i, j and k. If we have found that i <= j and j <= k, then we know that the sorted order is i, j, k. So it took us two comparisons to find this out. In some cases, however, it is clear that we will need as many as three comparisons. For example, if the first two comparisons tell us that i > j and j <= k, then we know that j is the smallest of the three items, but we cannot say from this information how i and k relate. A third comparison is needed. So what is the average and worst number of comparisons that are needed? This can best be determined from the so-called decision tree, where we keep track of the information gathered so far and count the number of comparisons needed.

The decision tree for the three item example we were discussing has the comparison i <= j at its root, the remaining comparisons (j <= k or i <= k, depending on the answers so far) at its internal nodes, branches labelled yes and no, and the six possible sorted orders of i, j and k at its leaves.

So what can we deduce from this about the general case? The decision tree will obviously always be a binary tree. It is also clear that its height will tell us how many comparisons will be needed in the worst case, and that the average length of a path from the root to a leaf will give us the average number of comparisons required. The leaves of the decision tree are all the possible outcomes, that is, all the possible orderings of the n
items, so we are asking how many ways there are of arranging n items. The first item can be any of the n items, the second can be any of the remaining n-1 items, and so forth, so their total number is n(n-1)(n-2)...3*2*1 = n!. Thus we want to know the height h of a binary tree that can accommodate as many as n! leaves. The number of leaves of a tree of height h is at most 2^h, so we want to find h such that

   2^h >= n!   or   h >= log2(n!)

There are numerous approximate expressions that have been derived for log2(n!) for large n, but they all have the same dominant term, namely n log2 n. (Remember that, when talking about time complexity, we ignore any sub-dominant terms and constant factors.) Hence, no sorting algorithm based on comparing items can have better average or worst case performance than using a number of comparisons that is approximately n log2 n for large n. It remains to be seen whether this O(nlog n) complexity can actually be achieved in practice. To do this, we would have to exhibit at least one algorithm with this performance behaviour (and convince ourselves that it really does have this behaviour). In fact, we shall shortly see that there are several algorithms with this behaviour.

We shall proceed now by looking in turn at a number of sorting algorithms of increasing sophistication, that involve the various strategies listed above. The way they work depends on what kind of data structure contains the items we wish to sort. We start with approaches that work with simple arrays, and then move on to using more complex data structures that lead to more efficient algorithms.

Bubble sort

Bubble sort follows the exchange sort approach. It is very easy to implement, but tends to be particularly slow to run. Assume we have an array a of size n that we wish to sort. Bubble sort starts by comparing a[n-1] with a[n-2] and swaps them if they are in the wrong order. It then compares a[n-2] and a[n-3] and swaps those if need be, and so on. This means that once it reaches a[0], the smallest entry will be in the correct place. It then starts from the back again, comparing pairs of 'neighbours', but leaving the zeroth entry alone (which is known to be correct). After it has reached the front again, the second-smallest entry will be in place. It keeps making 'passes' over the array until it is sorted. More generally, at the ith stage bubble sort compares neighbouring entries 'from the back', swapping them as needed. The item with the lowest index that is compared to its right neighbour is a[i-1]. After the ith stage, the entries a[0], ..., a[i-1] are in their final position.

At this point it is worth introducing a simple 'test-case' of size 4 to demonstrate how the various sorting algorithms work. Bubble sort starts by comparing a[3] with a[2]; since they are not in order, it swaps them. It then compares a[2] with a[1]; since those are in order, it leaves them where they are. Then it compares a[1] with a[0], and those are not in order once again, so they have to be swapped. Note that the smallest entry has reached its final place; this will always happen after bubble sort has done its first 'pass' over the array.
The second pass again starts at the back, comparing a[3]=3 with a[2]=2. These entries are in order, so nothing happens. (Note that these numbers have been compared before; there is nothing in bubble sort that prevents it from repeating comparisons, which is why it tends to be pretty slow!) Then it compares a[2]=2 and a[1]=4. These are not in order, so they have to be swapped, giving:

    1  2  4  3

Since we already know that a[0] contains the smallest item, we leave it alone, and the second pass is finished. Note that now the second-smallest entry is in place, too. The algorithm now starts the third and final pass, comparing a[3]=3 and a[2]=4. Again these are out of order and have to be swapped, giving:

    1  2  3  4

Since it is known that a[0] and a[1] contain the correct items already, they are not touched. Furthermore, the third-smallest item is in place now, which means that the fourth-smallest has to be correct, too. Thus the whole array is sorted.

It is now clear that bubble sort can be implemented as follows:

    for ( i = 1 ; i < n ; i++ )
       for ( j = n-1 ; j >= i ; j-- )
          if ( a[j] < a[j-1] )
             swap a[j] and a[j-1]

The outer loop goes over all n-1 positions that may still need to be swapped to the left, and the inner loop goes from the end of the array back to that position.

As is usual for comparison-based sorting algorithms, the time complexity will be measured by counting the number of comparisons that are being made. The outer loop is carried out n-1 times, and for each value of i the inner loop is carried out n-i times, so the number of comparisons is the same in every case (best, average and worst), namely

    ∑_{i=1}^{n-1} ∑_{j=i}^{n-1} 1 = ∑_{i=1}^{n-1} (n - i) = (n - 1) + (n - 2) + ··· + 1 = n(n - 1)/2.

Thus the worst case and average case numbers of comparisons are both proportional to n², and hence the average and worst case time complexities are both O(n²).
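Translating that pseudocode into runnable Python (a sketch added here for illustration, not code from the original text), with a counter so that the n(n-1)/2 figure can be checked directly:

    def bubble_sort(a):
        """Sort the list a in place and return the number of comparisons made."""
        n = len(a)
        comparisons = 0
        for i in range(1, n):
            # After this pass, a[0..i-1] hold their final values.
            for j in range(n - 1, i - 1, -1):
                comparisons += 1
                if a[j] < a[j - 1]:
                    a[j], a[j - 1] = a[j - 1], a[j]
        return comparisons

    a = [4, 1, 3, 2]
    print(bubble_sort(a), a)   # 6 comparisons (= 4*3/2), a == [1, 2, 3, 4]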
Insertion Sort

Insertion sort is (not surprisingly) a form of insertion sorting. It starts by treating the first entry a[0] as an already sorted array, then checks the second entry a[1] and compares it with the first. If they are in the wrong order, it swaps the two. That leaves a[0], a[1] sorted. Then it takes the third entry and positions it in the right place, leaving a[0], a[1], a[2] sorted, and so on. More generally, at the beginning of the ith stage, insertion sort has the entries a[0], ..., a[i-1] sorted and inserts a[i], giving sorted entries a[0], ..., a[i].

For the example starting array

    4  1  3  2

insertion sort starts by considering a[0]=4 as sorted. Then it picks up a[1] and 'inserts it' into the already sorted array, increasing the size of it by 1. Since a[1]=1 is smaller than a[0]=4, it has to be inserted in the zeroth slot, but that slot is still holding the old a[0]. So we first move a[0] 'up' one slot to a[1] (care being taken to remember a[1] first!), and then we can move the old a[1] to a[0], giving:

    1  4  3  2

At the next step, the algorithm treats a[0], a[1] as an already sorted array and tries to insert a[2]=3. This value obviously has to fit between a[0]=1 and a[1]=4. This is achieved by moving a[1] 'up' one slot to a[2] (the value of which we assume we have remembered), allowing us to move the current value into a[1], giving:

    1  3  4  2

Finally, a[3]=2 has to be inserted into the sorted array a[0], ..., a[2]. Since a[2]=4 is bigger than 2, it is moved 'up' one slot, and the same happens for a[1]=3. Comparison with a[0]=1 shows that a[1] was the slot we were looking for, giving:

    1  2  3  4

The general algorithm for insertion sort can therefore be written:

    for ( i = 1 ; i < n ; i++ )
       for ( j = i ; j > 0 ; j-- )
          if ( a[j] < a[j-1] )
             swap a[j] and a[j-1]
          else break

The outer loop goes over the n-1 items to be inserted, and the inner loop takes each next item and swaps it back through the currently sorted portion till it reaches its correct position. However, this typically involves swapping each next item many times to get it into its right position, so it is more efficient to store each next item in a temporary variable t and only insert it into its correct position when that has been found and its content moved:

    for ( i = 1 ; i < n ; i++ ) {
       j = i
       t = a[j]
       while ( j > 0 && t < a[j-1] ) {
          a[j] = a[j-1]
          j--
       }
       a[j] = t
    }

The outer loop again goes over n-1 items, and the inner loop goes back through the currently sorted portion till it finds the correct position for the next item to be inserted.

The time complexity is again taken to be the number of comparisons performed. The outer loop is always carried out n-1 times. How many times the inner loop is carried out depends on the items being sorted: in the worst case, it will be carried out i times; on average, it will be half that often. Hence the number of comparisons in the worst case is

    ∑_{i=1}^{n-1} i = 1 + 2 + ··· + (n - 1) = n(n - 1)/2;

in the average case it is half that, namely n(n - 1)/4. Hence the worst and average case numbers of steps for insertion sort are both proportional to n², and the average and worst case time complexities are both O(n²).
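As a hedged Python sketch (again added here, not code from the original text), the more efficient temporary-variable version, instrumented with a comparison counter, looks like this:

    def insertion_sort(a):
        """Sort the list a in place and return the number of comparisons made."""
        n = len(a)
        comparisons = 0
        for i in range(1, n):
            t = a[i]                  # remember the item to be inserted
            j = i
            while j > 0:
                comparisons += 1
                if t < a[j - 1]:
                    a[j] = a[j - 1]   # shift the larger item up one slot
                    j -= 1
                else:
                    break
            a[j] = t                  # drop the remembered item into place
        return comparisons

    a = [4, 1, 3, 2]
    print(insertion_sort(a), a)   # 6 comparisons, a == [1, 2, 3, 4]

Note that on an already sorted list the inner loop stops after a single comparison for each i, so only n-1 comparisons are made in total; this is the behaviour referred to in the comparison section below.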
Selection Sort

Selection sort is (not surprisingly) a form of selection sorting. It first finds the smallest item and puts it into a[0] by exchanging it with whichever item is in that position already. Then it finds the second-smallest item and exchanges it with the item in a[1]. It continues this way until the whole array is sorted. More generally, at the ith stage, selection sort finds the ith-smallest item and swaps it with the item in a[i-1]. Obviously, there is no need to check for the ith-smallest item in the first i-1 elements of the array.

For the example starting array

    4  1  3  2

selection sort first finds the smallest item in the whole array, which is a[1]=1, and swaps this value with that in a[0], giving:

    1  4  3  2

Then, for the second step, it finds the smallest item in the reduced array a[1], a[2], a[3], that is a[3]=2, and swaps that into a[1], giving:

    1  2  3  4

Finally, it finds the smallest of the reduced array a[2], a[3], that is a[2]=3, and swaps that into a[2], or recognizes that a swap is not needed, giving:

    1  2  3  4

The general algorithm for selection sort can be written:

    for ( i = 0 ; i < n-1 ; i++ ) {
       k = i
       for ( j = i+1 ; j < n ; j++ )
          if ( a[j] < a[k] )
             k = j
       swap a[i] and a[k]
    }

The outer loop goes over the first n-1 positions to be filled, and the inner loop goes through the currently unsorted portion to find the next smallest item to fill the next position. Note that, unlike with bubble sort and insertion sort, there is exactly one swap for each iteration of the outer loop.

The time complexity is again the number of comparisons carried out. The outer loop is carried out n-1 times. In the inner loop, which is carried out (n - 1 - i) times for each i, one comparison occurs per iteration. Hence the total number of comparisons is

    ∑_{i=0}^{n-2} ∑_{j=i+1}^{n-1} 1 = ∑_{i=0}^{n-2} (n - 1 - i) = (n - 1) + (n - 2) + ··· + 1 = n(n - 1)/2.

Therefore the number of comparisons for selection sort is proportional to n² in the worst case as well as in the average case, and hence the average and worst case time complexities are both O(n²).

Note that bubble sort, insertion sort and selection sort all involve two nested for loops over O(n) items, so it is easy to see that their overall complexities will be O(n²) without having to compute the exact number of comparisons.
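A corresponding Python sketch of selection sort (added for illustration, not code from the original text), counting comparisons and swaps to make the "one swap per pass" point visible:

    def selection_sort(a):
        """Sort the list a in place and return (comparisons, swaps)."""
        n = len(a)
        comparisons = swaps = 0
        for i in range(n - 1):
            k = i                      # index of the smallest item found so far
            for j in range(i + 1, n):
                comparisons += 1
                if a[j] < a[k]:
                    k = j
            if k != i:                 # skip the swap when it is not needed
                a[i], a[k] = a[k], a[i]
                swaps += 1
        return comparisons, swaps

    a = [4, 1, 3, 2]
    print(selection_sort(a), a)   # (6, 2) and a == [1, 2, 3, 4]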
Comparison of O(n²) Sorting Algorithms

We have now seen three different array-based sorting algorithms, all based on different sorting strategies, and all with O(n²) time complexity. So one might imagine that it does not make much difference which of these algorithms is used. However, in practice, it can actually make a big difference which algorithm is chosen. This can be seen by measuring the running times of bubble sort, insertion sort and selection sort on arrays of integers of various sizes, including one test array whose entries are already sorted and one that is sorted in the reverse order, that is, from biggest to smallest, with all the other arrays filled randomly. Warning: tables of measurements like this are always dependent on the random ordering used, the implementation of the programming language involved, and on the machine the program was run on, and so will never be exactly the same.

So where exactly do these differences come from? For a start, selection sort always makes n(n-1)/2 comparisons, but carries out at most n-1 swaps. Each swap requires three assignments and takes, in fact, more time than a comparison. Bubble sort, on the other hand, does a lot of swaps. Insertion sort does particularly well on data which is sorted already: in such a case, it only makes n-1 comparisons. It is worth bearing this in mind for some applications, because if only a few entries are out of place, insertion sort can be very quick.

These comparisons serve to show that complexity considerations can be rather delicate, and require good judgement concerning what operations to count. It is often a good idea to run some experiments to test the theoretical considerations and see whether any simplifications made are realistic in practice. For instance, we have assumed here that all comparisons cost the same, but that may not be true for big numbers or strings of characters. What exactly to count when considering the complexity of a particular algorithm is always a judgement call; you will have to gain experience before you feel comfortable with making such decisions yourself. Furthermore, when you want to improve the performance of an algorithm, you may want to determine the biggest user of computing resources and focus on improving that.

Something else to be aware of when making these calculations is that it is not a bad idea to keep track of any constant factors, in particular those that go with the dominating sub-term. In the above examples, the factor applied to the dominating sub-term, namely n², varies: it is 1/2 for the average case of bubble sort and selection sort, but only 1/4 for insertion sort. It is certainly useful to know that an algorithm that is linear will perform better than a quadratic one provided the size of the problem is large enough, but if you know that your problem only ever has a fairly small size, then a quadratic algorithm with a small constant factor may well be preferable to a linear one with a large constant factor. If you know that your program is only ever used on fairly small samples, then using the simplest algorithm you can find might be beneficial overall: it is easier to program, and there is not a lot of compute time to be saved.

Finally, the above considerations give you some idea why, for program designers, the general rule is to never use bubble sort. It is certainly easy to program, but that is about all it has going for it. You are better off avoiding it altogether.
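Such timing experiments are easy to reproduce; the following Python harness is a minimal sketch (added here, and assuming the bubble_sort, insertion_sort and selection_sort functions from the earlier sketches are in scope), with the caveat that the absolute numbers will differ from machine to machine:

    import random
    import time

    def measure(sort, n, order="random"):
        """Return the run time in seconds of sort() on one array of size n."""
        a = list(range(n))
        if order == "random":
            random.shuffle(a)
        elif order == "reversed":
            a.reverse()
        # order == "sorted": leave the array as it is
        start = time.perf_counter()
        sort(a)
        return time.perf_counter() - start

    for sort in (bubble_sort, insertion_sort, selection_sort):
        times = [measure(sort, 2000, order)
                 for order in ("random", "sorted", "reversed")]
        print(sort.__name__, ["%.3fs" % t for t in times])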
Sorting Algorithm Stability

One often wants to sort items which might have identical keys (e.g. ages in years) in such a way that items with identical keys are kept in their original order, particularly if the items have already been sorted according to a different criterion (e.g. alphabetical). So, if we denote the original order of an array of items by subscripts, we want the subscripts to end up in order for each set of items with identical keys. For example, starting from an array such as [5, 3₁, 4, 3₂], it should be sorted to [3₁, 3₂, 4, 5] and not to [3₂, 3₁, 4, 5]. Sorting algorithms which satisfy this useful property are said to be stable.

The easiest way to determine whether a given algorithm is stable is to consider whether the algorithm can ever swap identical items past each other. In this way, the stability of the sorting algorithms studied so far can easily be established:

Bubble sort: This is stable because no item is swapped past another unless they are in the wrong order. So items with identical keys will have their original order preserved.

Insertion sort: This is stable because no item is swapped past another unless it has a smaller key. So items with identical keys will have their original order preserved.

Selection sort: This is not stable, because there is nothing to stop an item being swapped past another item that has an identical key. For example, an array such as [3₁, 3₂, 1] would be sorted to [1, 3₂, 3₁], which has items 3₁ and 3₂ in the wrong order.

The issue of sorting stability needs to be considered when developing more complex sorting algorithms. Often there are stable and non-stable versions of the algorithms, and one has to consider whether the extra cost of maintaining stability is worth the effort.

Treesort

Let us now consider a way of implementing an insertion sorting algorithm using a data structure better suited to the problem. The idea here, which we have already seen before, involves inserting the items to be sorted into an initially empty binary search tree. Then, when all the items have been inserted, we know that we can traverse the binary search tree to visit all the items in the right order. This sorting algorithm is called treesort, and for the basic version, we require that all the search keys be different.

Obviously, the tree must be kept balanced in order to minimize the number of comparisons, since that depends on the height of the tree. For a balanced tree, that is O(log₂ n); if the tree is not kept balanced, it will be more than that, and potentially O(n).

Treesort can be difficult to compare with other sorting algorithms, since it returns a tree, rather than an array, as the sorted data structure. It should be chosen if it is desirable to have the items stored in a binary search tree anyway. This is usually the case if items are frequently deleted or inserted, since a binary search tree allows these operations to be implemented efficiently, with time complexity O(log₂ n) per item. Moreover, as we have seen before, searching for items is also efficient, again with time complexity O(log