

GILLES BRASSARD
PAUL BRATLEY

ALGORITHMICS

ALGORITHMICS
Theory and Practice

Gilles Brassard and Paul Bratley
Département d'informatique et de recherche opérationnelle
Université de Montréal

PRENTICE HALL, Englewood Cliffs, New Jersey

07632

Library of Congress Cataloging-in-Publication Data

Brassard, Gilles
  Algorithmics : theory and practice.
  1. Recursion theory.  2. Algorithms.  I. Bratley, Paul.  II. Title.
  QA9.6.B73  1987     511'.3     88-2326
  ISBN 0-13-023243-2

Editorial/production supervision: Editing, Design & Production, Inc.
Cover design: Lundgren Graphics, Ltd.
Manufacturing buyer: Cindy Grant

© 1988 by Prentice-Hall, Inc.
A division of Simon & Schuster
Englewood Cliffs, New Jersey 07632

All rights reserved. No part of this book may be
reproduced, in any form or by any means,
without permission in writing from the publisher.

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

ISBN 0-13-023243-2

PRENTICE-HALL INTERNATIONAL (UK) LIMITED. London
PRENTICE-HALL OF AUSTRALIA PTY. LIMITED, Sydney
PRENTICE-HALL CANADA INC., Toronto
PRENTICE-HALL HISPANOAMERICANA, S.A., Mexico

PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo
SIMON & SCHUSTER ASIA PTE. LTD., Singapore
EDITORA PRENTICE-HALL DO BRASIL, LTDA., Rio de Janeiro

for Isabelle and Pat

Contents

Preface  xiii

1  Preliminaries  1
   1.1   What Is an Algorithm?  1
   1.2   Problems and Instances  4
   1.3   The Efficiency of Algorithms  5
   1.4   Average and Worst-Case Analysis  7
   1.5   What Is an Elementary Operation?  9
   1.6   Why Do We Need Efficient Algorithms?  11
   1.7   Some Practical Examples  12
         1.7.1  Sorting, 13
         1.7.2  Multiplication of Large Integers, 13
         1.7.3  Evaluating Determinants, 14
         1.7.4  Calculating the Greatest Common Divisor, 15
         1.7.5  Calculating the Fibonacci Sequence, 16
         1.7.6  Fourier Transforms, 19
   1.8   When Is an Algorithm Specified?  19
   1.9   Data Structures  20
         1.9.1  Lists, 20
         1.9.2  Graphs, 21
         1.9.3  Rooted Trees, 23
         1.9.4  Heaps, 25
         1.9.5  Disjoint Set Structures, 30
   1.10  References and Further Reading  35

2  Analysing the Efficiency of Algorithms  37
   2.1   Asymptotic Notation  37
         2.1.1  A Notation for "the order of", 37
         2.1.2  Other Asymptotic Notation, 41
         2.1.3  Asymptotic Notation with Several Parameters, 43
         2.1.4  Operations on Asymptotic Notation, 43
         2.1.5  Conditional Asymptotic Notation, 45
         2.1.6  Asymptotic Recurrences, 47
         2.1.7  Constructive Induction, 48
         2.1.8  For Further Reading, 51
   2.2   Analysis of Algorithms  52
   2.3   Solving Recurrences Using the Characteristic Equation  65
         2.3.1  Homogeneous Recurrences, 65
         2.3.2  Inhomogeneous Recurrences, 68
         2.3.3  Change of Variable, 72
         2.3.4  Range Transformations, 75
         2.3.5  Supplementary Problems, 76
   2.4   References and Further Reading  78

3  Greedy Algorithms  79
   3.1   Introduction  79
   3.2   Greedy Algorithms and Graphs  81
         3.2.1  Minimal Spanning Trees, 81
         3.2.2  Shortest Paths, 87
   3.3   Greedy Algorithms for Scheduling  92
         3.3.1  Minimizing Time in the System, 92
         3.3.2  Scheduling with Deadlines, 95
   3.4   Greedy Heuristics  100
         3.4.1  Colouring a Graph, 101
         3.4.2  The Travelling Salesperson Problem, 102
   3.5   References and Further Reading  104

4  Divide and Conquer  105
   4.1   Introduction  105
   4.2   Determining the Threshold  107
   4.3   Binary Searching  109
   4.4   Sorting by Merging  115
   4.5   Quicksort  116
   4.6   Selection and the Median  119
   4.7   Arithmetic with Large Integers  124
   4.8   Exponentiation: An Introduction to Cryptology  128
   4.9   Matrix Multiplication  132
   4.10  Exchanging Two Sections of an Array  134
   4.11  Supplementary Problems  136
   4.12  References and Further Reading  140

5  Dynamic Programming  142
   5.1   Introduction  142
   5.2   The World Series  144
   5.3   Chained Matrix Multiplication  146
   5.4   Shortest Paths  150
   5.5   Optimal Search Trees  154
   5.6   The Travelling Salesperson Problem  159
   5.7   Memory Functions  162
   5.8   Supplementary Problems  164
   5.9   References and Further Reading  167

6  Exploring Graphs  169
   6.1   Introduction  169
   6.2   Traversing Trees  170
   6.3   Depth-First Search: Undirected Graphs  171
         6.3.1  Articulation Points, 174
   6.4   Depth-First Search: Directed Graphs  176
         6.4.1  Acyclic Graphs: Topological Sorting, 178
         6.4.2  Strongly Connected Components, 179
   6.5   Breadth-First Search  182
   6.6   Implicit Graphs and Trees  184
         6.6.1  Backtracking, 185
         6.6.2  Graphs and Games: An Introduction, 189
         6.6.3  Branch-and-Bound, 199
   6.7   Supplementary Problems  202
   6.8   References and Further Reading  204

7  Preconditioning and Precomputation  205
   7.1   Preconditioning  205
         7.1.1  Introduction, 205
         7.1.2  Ancestry in a rooted tree, 207
         7.1.3  Repeated Evaluation of a Polynomial, 209
   7.2   Precomputation for String-Searching Problems  211
         7.2.1  Signatures, 211
         7.2.2  The Knuth-Morris-Pratt Algorithm, 213
         7.2.3  The Boyer-Moore Algorithm, 216
   7.3   References and Further Reading  222

8  Probabilistic Algorithms  223
   8.1   Introduction  223
   8.2   Classification of Probabilistic Algorithms  226
   8.3   Numerical Probabilistic Algorithms  228
         8.3.1  Buffon's Needle, 228
         8.3.2  Numerical Integration, 230
         8.3.3  Probabilistic Counting, 232
         8.3.4  More Probabilistic Counting, 235
         8.3.5  Numerical Problems in Linear Algebra, 237
   8.4   Sherwood Algorithms  238
         8.4.1  Selection and Sorting, 238
         8.4.2  Stochastic Preconditioning, 240
         8.4.3  Searching an Ordered List, 242
         8.4.4  Universal Hashing, 245
   8.5   Las Vegas Algorithms  247
         8.5.1  The Eight Queens Problem Revisited, 248
         8.5.2  Square Roots Modulo p, 252
         8.5.3  Factorizing Integers, 256
         8.5.4  Choosing a Leader, 260
   8.6   Monte Carlo Algorithms  262
         8.6.1  Majority Element in an Array, 268
         8.6.2  Probabilistic Primality Testing, 269
         8.6.3  A Probabilistic Test for Set Equality, 271
         8.6.4  Matrix Multiplication Revisited, 274
   8.7   References and Further Reading  274

9  Transformations of the Domain  277
   9.1   Introduction  277
   9.2   The Discrete Fourier Transform  279
   9.3   The Inverse Transform  280
   9.4   Symbolic Operations on Polynomials  284
   9.5   Multiplication of Large Integers  286
   9.6   References and Further Reading  290

10 Introduction to Complexity  292
   10.1  Decision Trees  292
   10.2  Reduction  300
         10.2.1  Reductions Among Matrix Problems, 302
         10.2.2  Reductions Among Graph Problems, 304
         10.2.3  Reductions Among Arithmetic and Polynomial Problems, 308
   10.3  Introduction to NP-Completeness  315
         10.3.1  The Classes P and NP, 316
         10.3.2  NP-Complete Problems, 324
         10.3.3  Cook's Theorem, 325
         10.3.4  Some Reductions, 328
         10.3.5  Nondeterminism, 332
   10.4  References and Further Reading  336

Table of Notation  338

Bibliography  341

Index  353


Preface

The explosion in computing we are witnessing is arousing extraordinary interest at every level of society. As the power of computing machinery grows, calculations once infeasible become routine. Another factor, however, has had an even more important effect in extending the frontiers of feasible computation: the use of efficient algorithms. For instance, today's typical medium-sized computers can easily sort 100,000 items in 30 seconds using a good algorithm, whereas such speed would be impossible, even on a machine a thousand times faster, using a more naive algorithm. There are other examples of tasks that can be completed in a small fraction of a second, but that would require millions of years with less efficient algorithms (read Section 1.7.3 for more detail).

The Oxford English Dictionary defines algorithm as an "erroneous refashioning of algorism" and says about algorism that it "passed through many pseudo-etymological perversions, including a recent algorithm". (This situation is not corrected in the OED Supplement.) Although the Concise Oxford Dictionary offers a more up-to-date definition for the word algorithm, quoted in the opening sentence of Chapter 1, the definition does not correspond to modern usage. We chose the word algorithmics to translate the more common French term algorithmique. (Although this word appears in some French dictionaries, we are aware of no dictionary of the English language that has an entry for algorithmics.) The same word was coined independently by several people, sometimes with slightly different meanings. For instance, Harel (1987) calls algorithmics "the spirit of computing", adopting the wider perspective that it is "the area of human study, knowledge and expertise that concerns algorithms". In a nutshell, algorithmics is the systematic study of the fundamental techniques used to design and analyse efficient algorithms; it is the subject matter of this book.

Thus we concentrate on the techniques used to design and analyse efficient algorithms, in whatever field of application they may be required. Our book is neither a programming manual nor an account of the proper use of data structures. Still less is it a "cookbook" containing a long catalogue of programs ready to be used directly on a machine to solve certain specific problems, but giving at best a vague idea of the principles involved in their design. On the contrary, the aim of our book is to give the reader some basic tools needed to develop his or her own algorithms.

Each technique is first presented in full generality. Thereafter it is illustrated by concrete examples of algorithms taken from such different applications as optimization, linear algebra, cryptography, operations research, symbolic computation, artificial intelligence, numerical analysis, computing in the humanities, and so on. Although our approach is rigorous and theoretical, we do not neglect the needs of practitioners: besides illustrating the design techniques employed, most of the algorithms presented also have real-life applications.

To profit fully from this book, you should have some previous programming experience. However, we use no particular programming language, nor are the examples for any particular machine. This and the general, fundamental treatment of the material ensure that the ideas presented here will not lose their relevance. On the other hand, you should not expect to be able to use the algorithms we give directly: you will always be obliged to make the necessary effort to transcribe them into some appropriate programming language. The use of Pascal or a similarly structured language will help reduce this effort to the minimum necessary.

Some basic mathematical knowledge is required to understand this book. Generally speaking, an introductory undergraduate course in algebra and another in calculus should provide sufficient background. We take it for granted that the reader is familiar with such notions as mathematical induction, set notation, and the concept of a graph. A certain mathematical maturity is more important still. From time to time a passage requires more advanced mathematical knowledge, but such passages can be skipped on the first reading with no loss of continuity.

Our book is intended as a textbook for an upper-level undergraduate or a lower-level graduate course in algorithmics. We have used preliminary versions at both the Université de Montréal and the University of California, Berkeley. If used as the basis for a course at the graduate level, we suggest that the material be supplemented by attacking some subjects in greater depth, perhaps using the excellent texts by Garey and Johnson (1979) or Tarjan (1983). Our book can also be used for independent study: anyone who needs to write better, more efficient algorithms can benefit from it. Some of the chapters, in particular the one concerned with probabilistic algorithms, contain original material.

It is unrealistic to hope to cover all the material in this book in an undergraduate course with 45 hours or so of classes. In making a choice of subjects, the teacher should bear in mind that the first two chapters are essential to understanding the rest of the book, although most of Chapter 1 can probably be assigned as independent reading.

An elementary course should certainly cover the first five chapters, without necessarily going over each and every example given there of how the techniques can be applied. The choice of the remaining material to be studied depends on the teacher's preferences and inclinations. The last three chapters, however, deal with more advanced topics; the teacher may find it interesting to discuss these briefly in an undergraduate class, perhaps to lay the ground before going into detail in a subsequent graduate class. The other chapters are to a great extent independent of one another.

Almost 500 exercises are dispersed throughout the text. It is crucial to read the problems: their statements form an integral part of the text. Their level of difficulty is indicated as usual either by the absence of an asterisk (immediate to easy), or by the presence of one asterisk (takes a little thought) or two asterisks (difficult, maybe even a research project). Several problems call for an algorithm to be implemented on a computer so that its efficiency may be measured experimentally and compared to the efficiency of alternative solutions. It would be a pity to study this material without carrying out at least one such experiment. The solutions to many of the difficult problems can be found in the references. No solutions are provided for the other problems, nor do we think it advisable to provide a solutions manual. We hope the serious teacher will be pleased to have available this extensive collection of unsolved problems from which homework assignments can be chosen.

Each chapter ends with suggestions for further reading. The references from each chapter are combined at the end of the book in an extensive bibliography including well over 200 items. Although we give the origin of a number of algorithms and ideas, our primary aim is not historical. You should therefore not be surprised if information of this kind is sometimes omitted. Our goal is to suggest supplementary reading that can help you deepen your understanding of the ideas we introduce.

We originally wrote our book in French; in this form it was published by Masson, Paris. The first printing of this book by Prentice Hall is already in a sense a second edition. Although less than a year separates the first French and English printings, the experience gained in using the French version, in particular at an international summer school in Bayonne, was crucial in improving the presentation of some topics, and in spotting occasional errors. The numbering of problems and sections, however, is not always consistent between the French and English versions.

Writing this book would have been impossible without the help of many people. Our thanks go first to the students who have followed our courses in algorithmics over the years since 1979, both at the undergraduate and graduate levels. The comments and suggestions we received were most valuable. Particular thanks are due to those who kindly allowed us to copy their course notes: Denis Fortin, Laurent Langlois, and Sophie Monet in Montréal, and Luis Miguel and Dan Philip in Berkeley. We are also grateful to those people who used the preliminary versions of our book, whether they were our own students, or colleagues and students at other universities. Our warmest thanks, however, must go to those who carefully read and reread several chapters of the book and who suggested many improvements and corrections: Pierre Beauchemin, André Chartier, Claude Crépeau, Bennett Fox, Claude Goutier, Pierre L'Ecuyer, Pierre McKenzie, Santiago Miro, Jean-Marc Robert, and Alan Sherman.

We are also grateful to those who made it possible for us to work intensively on our book during long periods spent away from Montréal. Paul Bratley thanks Georges Stamon and the Université de Franche-Comté. Gilles Brassard thanks Manuel Blum and the University of California, Berkeley, David Chaum and the CWI, Amsterdam, and Jean-Jacques Quisquater and Philips Research Laboratory, Bruxelles. He also thanks John Hopcroft, who taught him so much of the material included in this book, and Lise DuPlessis, who so many times made her country house available; its sylvan serenity provided the setting and the inspiration for writing a number of chapters.

We thank Eugene L. Lawler for mentioning our French manuscript to Prentice Hall's representative in northern California even before we plucked up the courage to work on an English version. The heads of the laboratories at the Université de Montréal's Département d'informatique et de recherche opérationnelle, Michel Maksud and Robert Gérin-Lajoie, provided unstinting support. We thank the entire team at Prentice Hall for their exemplary efficiency and friendliness; we particularly appreciate the help we received from James Fegen and Dan Joraanstad. Denise St.-Michel deserves our special thanks: it was her misfortune to help us struggle with the text editing system through one translation and countless revisions. Annette Hall, of Editing, Design, and Production, Inc., was no less misfortuned to help us struggle with the last stages of production. The Natural Sciences and Engineering Research Council of Canada provided generous support.

Last but not least, we owe a considerable debt of gratitude to our wives, Isabelle and Pat, for their encouragement, understanding, and exemplary patience, in short, for putting up with us while we were working on the French and English versions of this book.

Gilles Brassard
Paul Bratley

ALGORITHMICS .


1 Preliminaries

1.1 WHAT IS AN ALGORITHM?

The Concise Oxford Dictionary defines an algorithm as a "process or rules for (esp. machine) calculation". The execution of an algorithm must not include any subjective decisions, nor must it require the use of intuition or creativity (although we shall see an important exception to this rule in Chapter 8). When we talk about algorithms, we shall mostly be thinking in terms of computers. Nonetheless, other systematic methods for solving problems could be included. For example, the methods we learn at school for multiplying and dividing integers are also algorithms. The most famous algorithm in history dates from the time of the Greeks: this is Euclid's algorithm for calculating the greatest common divisor of two integers. It is even possible to consider certain cooking recipes as algorithms, provided they do not include instructions like "Add salt to taste".

When we set out to solve a problem, it is important to decide which algorithm for its solution should be used. The answer can depend on many factors: the size of the instance to be solved, the way in which the problem is presented, the speed and memory size of the available computing equipment, and so on. Take elementary arithmetic as an example. Suppose you have to multiply two positive integers using only pencil and paper. If you were raised in North America, the chances are that you will multiply the multiplicand successively by each figure of the multiplier, taken from right to left, that you will write these intermediate results one beneath the other shifting each line one place left, and that finally you will add all these rows to obtain your answer. This is the "classic" multiplication algorithm.

However, here is quite a different algorithm for doing the same thing, sometimes called "multiplication a la russe". Write the multiplier and the multiplicand side by side. Make two columns, one under each operand, by repeating the following rule until the number under the multiplier is 1: divide the number under the multiplier by 2, ignoring any fractions, and double the number under the multiplicand by adding it to itself. Finally, cross out each row in which the number under the multiplier is even, and then add up the numbers that remain in the column under the multiplicand. For example, multiplying 19 by 45 proceeds as in Figure 1.1.1. In this example we get 19 + 76 + 152 + 608 = 855. Although this algorithm may seem funny at first, it is essentially the method used in the hardware of many computers. To use it, there is no need to memorize any multiplication tables: all we need to know is how to add up, and how to double a number or divide it by 2. We shall see in Section 4.7 that there exist more efficient algorithms when the integers to be multiplied are very large. However, these more sophisticated algorithms are in fact slower than the simple ones when the operands are not sufficiently large.

    45     19     19
    22     38
    11     76     76
     5    152    152
     2    304
     1    608    608
                 ----
                  855

    Figure 1.1.1.  Multiplication a la russe.

At this point it is important to decide how we are going to represent our algorithms. If we try to describe them in English, we rapidly discover that natural languages are not at all suited to this kind of thing. Even our description of an algorithm as simple as multiplication a la russe is not completely clear. We did not so much as try to describe the classic multiplication algorithm in any detail. To avoid confusion, we shall in future specify our algorithms by giving a corresponding program. However, we shall not confine ourselves to the use of one particular programming language: in this way, the essential points of an algorithm will not be obscured by the relatively unimportant programming details.

We shall use phrases in English in our programs whenever this seems to make for simplicity and clarity. These phrases should not be confused with comments on the program, which will always be enclosed within braces. Declarations of scalar quantities (integer, real, or Boolean) are usually omitted. Scalar parameters of functions and procedures are passed by value unless a different specification is given explicitly, and arrays are passed by reference. The notation used to specify that a function or a procedure has an array parameter varies from case to case. Sometimes we write, for instance

    procedure proc1(T : array)

or even

    procedure proc2(T)

if the type and the dimensions of the array T are unimportant or if they are evident from the context. In such a case #T denotes the number of elements in the array T. If the bounds or the type of T are important, we write

    procedure proc3(T [1 .. n ])

or more generally

    procedure proc4(T [a .. b ] : integers) .

In such cases n, a, and b should be considered as formal parameters, and their values are determined by the bounds of the actual parameter corresponding to T when the procedure is called. These bounds can be specified explicitly, or changed, by a procedure call of the form proc3(T [1 .. m ]).

To avoid proliferation of begin and end statements, the range of a statement such as if, while, or for, as well as that of a declaration such as procedure, function, or record, is shown by indenting the statements affected. The statement return marks the dynamic end of a procedure or a function, and in the latter case it also supplies the value of the function. The operators div and mod represent integer division (discarding any fractional result) and the remainder of a division, respectively. We assume that the reader is familiar with the concepts of recursion and of pointers; the latter are denoted by the symbol "↑".

A reader who has some familiarity with Pascal, for example, will have no difficulty understanding the notation used to describe our algorithms. For instance, here is a formal description of multiplication a la russe.

    function russe (A, B)
      arrays X, Y
      { initialization }
      X[1] ← A ; Y[1] ← B
      i ← 1
      { make the two columns }
      while X[i] > 1 do
        X[i+1] ← X[i] div 2
        Y[i+1] ← Y[i] + Y[i]
        i ← i + 1
      { add the appropriate entries }
      prod ← 0
      while i > 0 do
        if X[i] is odd then prod ← prod + Y[i]
        i ← i - 1
      return prod
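For readers who want to experiment, here is a rough transcription of the function russe into Python; it is our sketch, not part of the book's notation, and the two lists mirror the columns X and Y of the pseudocode above.

    def russe(a, b):
        """Multiplication a la russe, following the two-column description above."""
        x, y = [a], [b]              # the two columns
        while x[-1] > 1:             # halve the multiplier, double the multiplicand
            x.append(x[-1] // 2)
            y.append(y[-1] + y[-1])
        # add the entries of the second column whose first-column value is odd
        return sum(yi for xi, yi in zip(x, y) if xi % 2 == 1)

    print(russe(45, 19))   # 855, as in Figure 1.1.1

As in the pseudocode, the only operations needed are halving, doubling, parity testing, and addition.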

If you are an experienced programmer, you will probably have noticed that the arrays X and Y are not really necessary, and that this program could easily be simplified. However, we preferred to follow blindly the preceding description of the algorithm. The following APL program describes exactly the same algorithm (although you might reasonably object to a program using logarithms, exponentiation, and multiplication by powers of 2 to describe an algorithm for multiplying two integers).

    ∇ R ← A RUSAPL B ; T
    [1]  R ← +/(2|⌊A÷T)/B×T ← 1,2*⍳⌈2⍟A
    ∇

On the other hand, the following program, despite a superficial resemblance to the one given previously, describes quite a different algorithm.

    function not-russe (A, B)
      arrays X, Y
      { initialization }
      X[1] ← A ; Y[1] ← B
      i ← 1
      { make the two columns }
      while X[i] > 1 do
        X[i+1] ← X[i] - 1
        Y[i+1] ← B
        i ← i + 1
      { add the appropriate entries }
      prod ← 0
      while i > 0 do
        if X[i] > 0 then prod ← prod + Y[i]
        i ← i - 1
      return prod

It is important not to lose sight of the fact that in this book we are interested in algorithms, not in the programs used to describe them.

1.2 PROBLEMS AND INSTANCES

Multiplication a la russe is not just a way to multiply 45 by 19. It gives a general solution to the problem of multiplying positive integers. We say that (45, 19) is an instance of this problem. Most interesting problems include an infinite collection of instances. Nonetheless, we shall occasionally consider finite problems such as that of playing a perfect game of chess. An algorithm must work correctly on every instance of the problem it claims to solve. To show that an algorithm is incorrect, we need only find one instance of the problem for which it is unable to find a correct answer.

On the other hand, it is usually more difficult to prove the correctness of an algorithm. When we specify a problem, it is important to define its domain of definition, that is, the set of instances to be considered. Although multiplication a la russe will not work if the first operand is negative, this does not invalidate the algorithm, since (-45, 19) is not an instance of the problem being considered. Once again we see that there is an essential difference between programs and algorithms.

1.3 THE EFFICIENCY OF ALGORITHMS

When we have a problem to solve, it is obviously of interest to find several algorithms that might be used, so we can choose the best. This raises the question of how to decide which of several algorithms is preferable. The empirical (or a posteriori) approach consists of programming the competing algorithms and trying them on different instances with the help of a computer. The theoretical (or a priori) approach, which we favour in this book, consists of determining mathematically the quantity of resources (execution time, memory space, etc.) needed by each algorithm as a function of the size of the instances considered.

The size of an instance x, denoted by |x|, corresponds formally to the number of bits needed to represent the instance on a computer, using some precisely defined and reasonably compact encoding. To make our analyses clearer, however, we often use the word "size" to mean any integer that in some way measures the number of components in an instance. For example, when we talk about sorting (see Section 1.7.1), an instance involving n items is generally considered to be of size n, even though each item would take more than one bit when represented on a computer. When we talk about numerical problems, we sometimes give the efficiency of our algorithms in terms of the value of the instance being considered, rather than its size (which is the number of bits needed to represent this value in binary). Any real computing device has a limit on the size of the instances it can handle; however, this limit cannot be attributed to the algorithm we choose to use.

The advantage of the theoretical approach is that it depends on neither the computer being used, nor the programming language, nor even the skill of the programmer. It saves both the time that would have been spent needlessly programming an inefficient algorithm and the machine time that would have been wasted testing it. It also allows us to study the efficiency of an algorithm when used on instances of any size. This is often not the case with the empirical approach, where practical considerations often force us to test our algorithms only on instances of moderate size. This last point is particularly important since often a newly discovered algorithm may only begin to perform better than its predecessor when both of them are used on large instances.

It is also possible to analyse algorithms using a hybrid approach, where the form of the function describing the algorithm's efficiency is determined theoretically, and then any required numerical parameters are determined empirically for a particular program and machine, usually by some form of regression. This approach allows predictions to be made about the time an actual implementation will take to solve an instance much larger than those used in the tests. If such an extrapolation is made solely on the basis of empirical tests, ignoring all theoretical considerations, it is likely to be less precise, if not plain wrong.

It is natural to ask at this point what unit should be used to express the theoretical efficiency of an algorithm. There can be no question of expressing this efficiency in seconds, say, since we do not have a standard computer to which all measurements might refer. An answer to this problem is given by the principle of invariance, according to which two different implementations of the same algorithm will not differ in efficiency by more than some multiplicative constant. More precisely, if two implementations take t1(n) and t2(n) seconds, respectively, to solve an instance of size n, then there always exists a positive constant c such that t1(n) <= c t2(n) whenever n is sufficiently large. This principle remains true whatever the computer used (provided it is of a conventional design), regardless of the programming language employed and regardless of the skill of the programmer (provided that he or she does not actually modify the algorithm!). Thus a change of machine may allow us to solve a problem 10 or 100 times faster, but only a change of algorithm will give us an improvement that gets more and more marked as the size of the instances being solved increases.

Coming back to the question of the unit to be used to express the theoretical efficiency of an algorithm, there will be no such unit: we shall only express this efficiency to within a multiplicative constant. We say that an algorithm takes a time in the order of t(n), for a given function t, if there exist a positive constant c and an implementation of the algorithm capable of solving every instance of the problem in a time bounded above by c t(n) seconds, where n is the size (or occasionally the value, for numerical problems) of the instance considered. The use of seconds in this definition is obviously quite arbitrary, since we only need change the constant to bound the time by a t(n) years or b t(n) microseconds. By the principle of invariance any other implementation of the algorithm will have the same property, although the multiplicative constant may change from one implementation to another. In the next chapter we give a more rigorous treatment of this important concept known as the asymptotic notation. It will be clear from the formal definition why we say "in the order of" rather than the more usual "of the order of".

Certain orders occur so frequently that it is worth giving them a name. For example, if an algorithm takes a time in the order of n, where n is the size of the instance to be solved, we say that it takes linear time; in this case we also talk about a linear algorithm. Similarly, an algorithm is quadratic, cubic, polynomial, or exponential if it takes a time in the order of n^2, n^3, n^k, or c^n, respectively, where k and c are appropriate constants. Sections 1.6 and 1.7 illustrate the important differences between these orders of magnitude.

The hidden multiplicative constant used in these definitions gives rise to a certain danger of misinterpretation. Consider, for example, two algorithms whose implementations on a given machine take respectively n^2 days and n^3 seconds to solve an instance of size n. It is only on instances requiring more than 20 million years to solve that the quadratic algorithm outperforms the cubic algorithm!
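The crossover point quoted above is easy to recompute. The following few lines are our own back-of-the-envelope check, not part of the book: the quadratic algorithm only wins once n^2 days is smaller than n^3 seconds, that is, once n exceeds the number of seconds in a day.

    # n^2 days versus n^3 seconds: the quadratic algorithm wins only when
    # n^2 * 86400 < n^3, i.e. when n > 86400.
    SECONDS_PER_DAY = 86_400
    n = SECONDS_PER_DAY + 1                       # smallest size where it wins
    years = n**2 * SECONDS_PER_DAY / (365.25 * 24 * 3600)
    print(f"crossover at n = {n}, where both algorithms need about {years:.1e} years")
    # prints roughly 2e+07 years, the "20 million years" quoted in the text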

Nevertheless, from a theoretical point of view, the former is asymptotically better than the latter, that is to say, its performance is better on all sufficiently large instances.

The other resources needed to execute an algorithm, memory space in particular, can be estimated theoretically in a similar way. It may also be interesting to study the possibility of a trade-off between time and memory space: using more space sometimes allows us to reduce the computing time. In this book, however, as is more usual, we concentrate on execution time.

Finally, note that logarithms to the base 2 are so frequently used in the analysis of algorithms that we give them their own special notation: thus "lg n" is an abbreviation for log2 n, while "ln" and "log" denote natural logarithms and logarithms to the base 10, respectively.

1.4 AVERAGE AND WORST-CASE ANALYSIS

The time taken by an algorithm can vary considerably between two different instances of the same size. To illustrate this, consider two elementary sorting algorithms: insertion and selection.

    procedure insert (T [1 .. n ])
      for i ← 2 to n do
        x ← T[i] ; j ← i - 1
        while j > 0 and x < T[j] do
          T[j+1] ← T[j]
          j ← j - 1
        T[j+1] ← x

    procedure select (T [1 .. n ])
      for i ← 1 to n-1 do
        minj ← i ; minx ← T[i]
        for j ← i+1 to n do
          if T[j] < minx then
            minj ← j
            minx ← T[j]
        T[minj] ← T[i]
        T[i] ← minx

Problem 1.4.1. Simulate these two algorithms on the arrays T = [3, 1, 4, 1, 5, 9, 2, 6], U = [1, 2, 3, 4, 5, 6], and V = [6, 5, 4, 3, 2, 1]. Make sure you understand how they work.

Let U and V be two arrays of n elements, such that U is already sorted in ascending order, whereas V is in descending order. V represents the worst case for these two algorithms: no array of n elements requires more work.
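If you would rather run the two procedures than simulate them by hand, here is a direct Python transcription (ours); the function names and the 0-based indexing are the only departures from the pseudocode above.

    def insert_sort(t):
        """Insertion sort, following procedure insert above (in place)."""
        for i in range(1, len(t)):
            x, j = t[i], i - 1
            while j >= 0 and x < t[j]:
                t[j + 1] = t[j]
                j -= 1
            t[j + 1] = x

    def select_sort(t):
        """Selection sort, following procedure select above (in place)."""
        for i in range(len(t) - 1):
            minj = min(range(i, len(t)), key=lambda k: t[k])
            t[i], t[minj] = t[minj], t[i]

    for f in (insert_sort, select_sort):
        t = [3, 1, 4, 1, 5, 9, 2, 6]
        f(t)
        print(f.__name__, t)      # both print the array in ascending order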

Nonetheless, the time required by the selection sorting algorithm is not very sensitive to the original order of the array to be sorted: the test "if T[j] < minx" is executed exactly the same number of times in every case. The variation in execution time is only due to the number of times the assignments in the then part of this test are executed. To verify this, we programmed this algorithm in Pascal on a DEC VAX 780. We found that the time required to sort a given number of elements using selection sort does not vary by more than 15% whatever the initial order of the elements to be sorted. As Example 2.2.1 will show, the time required by select(T) is quadratic, regardless of the initial order of the elements.

The situation is quite different if we compare the times taken by the insertion sort algorithm on the arrays U and V. On the one hand, insert(U) is very fast, because the condition controlling the while loop is always false at the outset; the algorithm therefore performs in linear time. On the other hand, insert(V) takes quadratic time, because the while loop is executed i - 1 times for each value of i (see Example 2.2.3). The variation in time is therefore considerable, and moreover, it increases with the number of elements to be sorted. An implementation in Pascal on the DEC VAX 780 shows that insert(U) takes less than one-fifth of a second if U is an array of 5,000 elements already in ascending order, whereas insert(V) takes three and a half minutes when V is an array of 5,000 elements in descending order.

If such large variations can occur, how can we talk about the time taken by an algorithm solely in terms of the size of the instance to be solved? We usually consider the worst case of the algorithm, that is, for each size we only consider those instances of that size on which the algorithm requires the most time. Thus we say that insertion sorting takes quadratic time in the worst case. Worst-case analysis is appropriate for an algorithm whose response time is critical. For example, if it is a question of controlling a nuclear power plant, it is crucial to know an upper limit on the system's response time, regardless of the particular instance to be solved.

On the other hand, in a situation where an algorithm is to be used many times on many different instances, it may be more important to know the average execution time on instances of size n. We saw that the time taken by the insertion sort algorithm varies between the order of n and the order of n^2. If we can calculate the average time taken by the algorithm on the n! different ways of initially ordering n elements (assuming they are all distinct), we shall have an idea of the likely time taken to sort an array initially in random order. We shall see in Example 2.2.3 that this average time is also in the order of n^2. The insertion sorting algorithm thus takes quadratic time both on the average and in the worst case, although in certain cases it can be much faster. In Section 4.5 we shall see another sorting algorithm that also takes quadratic time in the worst case, but that requires only a time in the order of n log n on the average. Even though this algorithm has a bad worst case, it is among the fastest algorithms known on the average.

It is usually harder to analyse the average behaviour of an algorithm than to analyse its behaviour in the worst case. Also, such an analysis of average behaviour can be misleading if in fact the instances to be solved are not chosen randomly when the algorithm is used in practice.

For example, it could happen that a sorting algorithm might be used as an internal procedure in some more complex algorithm, and that for some reason it might mostly be asked to sort arrays whose elements are already nearly ordered. In this case, the hypothesis that each of the n! ways of initially ordering n elements is equally likely fails. A useful analysis of the average behaviour of an algorithm therefore requires some a priori knowledge of the distribution of the instances to be solved, and this is normally an unrealistic requirement. In Chapter 8 we shall see how this difficulty can be circumvented for certain algorithms, and their behaviour made independent of the specific instances to be solved. In what follows we shall only be concerned with worst-case analyses unless stated otherwise.

1.5 WHAT IS AN ELEMENTARY OPERATION?

An elementary operation is an operation whose execution time can be bounded above by a constant depending only on the particular implementation used (machine, programming language, and so on). Since we are only concerned with execution times of algorithms defined to within a multiplicative constant, it is only the number of elementary operations executed that matters in the analysis, not the exact time required by each of them. Equivalently, we say that elementary operations can be executed at unit cost.

In the description of an algorithm it may happen that a line of program corresponds to a variable number of elementary operations. For example, if T is an array of n elements, the time required to compute

    x ← min { T[i] | 1 <= i <= n }

increases with n, since it is an abbreviation for

    x ← T[1]
    for i ← 2 to n do
      if T[i] < x then x ← T[i] .

Similarly, some mathematical operations are too complex to be considered elementary. If we allowed ourselves to count the evaluation of a factorial and a test for divisibility at unit cost, regardless of the operand's size, Wilson's theorem would let us test an integer for primality with astonishing efficiency.

    function Wilson (n)
      { returns true if and only if n is prime }
      if n divides ((n-1)! + 1) exactly then return true
      else return false

Can we consider addition and multiplication to be unit cost operations? In theory these operations are not elementary, since the time needed to execute them increases with the length of the operands.
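A direct Python transcription of the Wilson test (ours, not the book's) makes the hidden cost visible: the factorial (n-1)! has on the order of n log n bits, so neither computing it nor testing divisibility by n is remotely a unit-cost operation.

    from math import factorial

    def wilson_is_prime(n):
        """Wilson's theorem: an integer n > 1 is prime iff n divides (n-1)! + 1."""
        return n > 1 and (factorial(n - 1) + 1) % n == 0

    print([p for p in range(2, 30) if wilson_is_prime(p)])   # 2, 3, 5, 7, 11, ...
    print(factorial(999).bit_length())    # already several thousand bits for n = 1000

The test gives correct answers, but the intermediate operand grows so fast that counting the factorial or the divisibility test at unit cost would make the analysis meaningless.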

In practice, however, it may be sensible to consider them as elementary operations so long as the operands concerned are of a reasonable size in the instances we expect to encounter. Two examples will illustrate what we mean.

    function Not-Gauss (n)
      { calculates the sum of the integers from 1 to n }
      sum ← 0
      for i ← 1 to n do sum ← sum + i
      return sum

    function Fibonacci (n)
      { calculates the n th term of the Fibonacci sequence (see Section 1.7.5) }
      i ← 1 ; j ← 0
      for k ← 1 to n do
        j ← i + j
        i ← j - i
      return j

In the algorithm called Not-Gauss the value of sum stays quite reasonable for all the instances that the algorithm can realistically be expected to meet in practice. If we are using a 32-bit machine, all the additions can be executed directly provided that n is no greater than 65,535. In theory, however, the algorithm should work for all possible values of n.

The situation is quite different in the case of Fibonacci. It suffices to take n = 47 to have the last addition "j ← i + j" cause arithmetic overflow on a 32-bit machine. As many as 45,496 bits are needed to hold the result corresponding to n = 65,535. It is therefore not realistic, as a practical matter, to consider that these additions can be carried out at unit cost; rather, we must attribute to them a cost proportional to the length of the operands concerned. In Example 2.2.8 this algorithm (there called fib2) is shown to take quadratic time under this assumption, even though at first glance its execution time appears to be linear. The analysis of the algorithm must therefore depend on its intended domain of application.

In the case of multiplication, it is even more important to ensure that arithmetic operations do not overflow: it is easier to produce large operands by repeated multiplication than by addition. The following problem illustrates this danger.

** Problem 1.5.1. Use Wilson's theorem (n is prime if and only if it is a divisor of (n-1)! + 1), Newton's binomial theorem, and the divide-and-conquer technique discussed in Chapter 4 to design an algorithm capable of deciding, given an integer n, in a time in the order of log n, whether or not n is prime. In your analysis you may assume that additions, multiplications, and tests of divisibility by an integer (but not calculations of factorials or exponentials) can be carried out in unit time.
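The overflow claim made for Fibonacci is easy to verify. The short computation below is our own check, not part of the text; it relies on Python's unbounded integers to measure how large the terms really are.

    def fib(n):
        """Same computation as the function Fibonacci above."""
        i, j = 1, 0
        for _ in range(n):
            j = i + j
            i = j - i
        return j

    print(fib(46) < 2**31 <= fib(47))   # True: overflow first occurs at n = 47
    print(fib(65535).bit_length())      # 45496 bits, the figure quoted in the text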

A similar problem can arise when we analyse algorithms involving real numbers if the required precision increases with the size of the instances to be solved. One typical example of this phenomenon is the use of De Moivre's formula to calculate values in the Fibonacci sequence (see Section 1.7.5). In most practical situations, however, the use of single precision floating point arithmetic proves satisfactory despite the inevitable loss of precision, and it is then reasonable to count such arithmetic operations at unit cost, regardless of the size of the operands involved.

To sum up, even deciding whether an instruction as apparently innocent as "j ← i + j" can be considered as elementary or not calls for the use of judgement. In what follows we count additions, subtractions, multiplications, divisions, modulo operations, Boolean operations, comparisons, and assignments at unit cost unless explicitly stated otherwise.

1.6 WHY DO WE NEED EFFICIENT ALGORITHMS?

As computing equipment gets faster and faster, it may seem hardly worthwhile to spend our time trying to design more efficient algorithms. Would it not be easier simply to wait for the next generation of computers? The remarks made in the preceding sections show that this is not true. Suppose, to illustrate the argument, that to solve a particular problem you have available an exponential algorithm and a computer capable of running this algorithm on instances of size n in 10^-4 x 2^n seconds. Your program can thus solve an instance of size 10 in one-tenth of a second. Solving an instance of size 20 will take nearly two minutes. To solve an instance of size 30, even a whole day's computing will not be sufficient. Supposing you were able to run your computer without interruption for a year, you would only just be able to solve an instance of size 38.

Since you need to solve bigger instances than this, you buy a new computer one hundred times faster than the first. With the same algorithm you can now solve an instance of size n in only 10^-6 x 2^n seconds. You may feel you have wasted your money, however, when you figure out that now, when you run your new machine for a whole year, you cannot even solve an example of size 45. In general, if you were previously able to solve an instance of size n in some given time, your new machine will solve instances of size at best n + 7 in the same time.

Suppose you decide instead to invest in algorithmics, and you find a cubic algorithm that can solve your problem. Imagine, for example, that using the original machine this new algorithm can solve an instance of size n in 10^-2 x n^3 seconds. In one day you can now solve instances whose size is greater than 200; with one year's computation you can almost reach size 1,500.
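The instance sizes quoted in this argument can be reproduced with a few lines of arithmetic. The snippet below is our own illustration; the constants 10^-4, 10^-6 and 10^-2 are the ones used in the text, and largest_solvable is a helper name we made up.

    def largest_solvable(time_for, budget_seconds):
        """Largest instance size n with time_for(n) <= budget_seconds."""
        n = 0
        while time_for(n + 1) <= budget_seconds:
            n += 1
        return n

    DAY, YEAR = 86_400, 365 * 86_400
    print(largest_solvable(lambda n: 1e-4 * 2**n, YEAR))   # 38: exponential algorithm, old machine
    print(largest_solvable(lambda n: 1e-6 * 2**n, YEAR))   # 44: even 100 times faster, size 45 is out of reach
    print(largest_solvable(lambda n: 1e-2 * n**3, DAY))    # 205: cubic algorithm, old machine, one day
    print(largest_solvable(lambda n: 1e-2 * n**3, YEAR))   # 1465: cubic algorithm, one year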

Not only does the new algorithm offer a much greater improvement than the purchase of new machinery, it will also, supposing you are able to afford both, make such a purchase much more profitable. In fact, thanks to your new algorithm, a machine one hundred times faster than the old one will allow you to solve instances four or five times bigger in the same length of time. This is illustrated by Figure 1.6.1.

    Figure 1.6.1.  Algorithmics versus hardware.

Naturally, the new algorithm should not be used uncritically on all instances of the problem, in particular on the rather small ones. On the original machine the new algorithm takes 10 seconds to solve an instance of size 10, which is one hundred times slower than the old algorithm; the new algorithm is faster only for instances of size 20 or greater. However, it is possible to combine the two algorithms into a third one that looks at the size of the instance to be solved before deciding which method to use.

1.7 SOME PRACTICAL EXAMPLES

Maybe you are wondering whether it is really possible in practice to accelerate an algorithm to the extent suggested in the previous section. In fact, there have been cases where even more spectacular improvements have been made, even for well-established algorithms. Some of the following examples use large integers or real arithmetic. Unless we explicitly state the contrary, we shall simplify our presentation by ignoring the problems that may arise because of arithmetic overflow or loss of precision on a particular machine. Such problems can always be solved by using multiple-precision arithmetic (see Sections 1.7.2 and 4.7). Additions and multiplications are therefore generally taken to be elementary operations in the following paragraphs (except, of course, for Section 1.7.2).

1.7.1 Sorting

The sorting problem is of major importance in computer science. We are required to arrange in ascending order a collection of n objects on which a total ordering is defined. Sorting problems are often found inside more complex algorithms. We have already seen two classic sorting algorithms in Section 1.4: insertion sorting and selection sorting. Both these algorithms, as we saw, take quadratic time both in the worst case and on the average. Although these algorithms are excellent when n is small, other sorting algorithms are more efficient when n is large. Among others, we might use Williams's heapsort algorithm (see Example 2.2.4 and Problem 2.2.3), mergesort (see Section 4.4), or Hoare's quicksort algorithm (see Section 4.5). All these algorithms take a time in the order of n log n on the average; the first two take this same amount of time even in the worst case.

To have a clearer idea of the practical difference between a time in the order of n^2 and a time in the order of n log n, we programmed insertion sort and quicksort in Pascal on a DEC VAX 780. The difference in efficiency between the two algorithms is marginal when the number of elements to be sorted is small. Quicksort is already almost twice as fast as insertion when sorting 50 elements, and three times as fast when sorting 100 elements. To sort 1,000 elements, insertion takes more than three seconds, whereas quicksort requires less than one-fifth of a second. When we have 5,000 elements to sort, the inefficiency of insertion sorting becomes still more pronounced: one and a half minutes are needed on average, compared to little more than one second for quicksort. In 30 seconds, quicksort can handle 100,000 elements; our estimate is that it would take nine and a half hours to carry out the same task using insertion sorting.

1.7.2 Multiplication of Large Integers

When a calculation requires very large integers to be manipulated, it can happen that the operands become too long to be held in a single word of the computer in use. When this occurs, we can use a representation such as FORTRAN's "double precision", or, more generally, multiple-precision arithmetic. In this case, we must ask ourselves how the time necessary to multiply two large integers increases with the size of the operands. We can measure this size by either the number of computer words needed to represent the operands on a machine or the length of their representation in decimal or binary. Since these measures differ only by a multiplicative constant, this choice does not alter our analysis of the order of efficiency of the algorithms in question. (This last remark would be false should we be considering exponential time algorithms; can you see why?)

Suppose two large integers, of sizes m and n, respectively, are to be multiplied. The classic algorithm of Section 1.1 can easily be transposed to this context. We see that it multiplies each word of one of the operands by each word of the other, and that it executes approximately one elementary addition for each of these multiplications. The time required is therefore in the order of mn. Multiplication a la russe also takes a time in the order of mn, provided we choose the smaller operand as the multiplier and the larger as the multiplicand, so there is no reason for preferring it to the classic algorithm, particularly as the hidden constant is likely to be larger.
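To make the order-mn behaviour of the classic algorithm concrete, here is a sketch (ours) of schoolbook multiplication on numbers represented as little-endian arrays of digits in some base; every digit of one operand meets every digit of the other exactly once, which is where the mn count comes from.

    def schoolbook_multiply(a, b, base=10):
        """a and b are little-endian digit lists; returns the product's digits.
        Each of the len(a) * len(b) digit pairs costs one multiplication and
        roughly one addition, so the time is in the order of m * n."""
        result = [0] * (len(a) + len(b))
        for i, ai in enumerate(a):
            carry = 0
            for j, bj in enumerate(b):
                total = result[i + j] + ai * bj + carry
                result[i + j] = total % base
                carry = total // base
            result[i + len(b)] += carry
        return result

    # 981 * 1234 = 1210554, digits stored least-significant first
    print(schoolbook_multiply([1, 8, 9], [4, 3, 2, 1]))   # [4, 5, 5, 0, 1, 2, 1]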

Problem 1.7.1. How much time does multiplication a la russe take if the multiplier is longer than the multiplicand?

As we mentioned in Section 1.1, more efficient algorithms exist to solve this problem. The algorithm which we shall study in Section 4.7 takes a time in the order of nm^lg(3/2), where n is the size of the larger operand and m is the size of the smaller. If both operands are of size n, the algorithm thus takes a time in the order of n^1.59, which is preferable to the quadratic time taken by both the classic algorithm and multiplication a la russe.

The difference between the order of n^2 and the order of n^1.59 is less spectacular than that between the order of n^2 and the order of n log n, which we saw in the case of sorting algorithms. To verify this, we programmed the classic algorithm and the algorithm of Section 4.7 in Pascal on a CDC CYBER 835 and tested them on operands of different sizes. To take account of the architecture of the machine, we carried out the calculations in base 2^20 rather than in base 10. Accordingly, the size of an operand is measured in terms of the number of 20-bit segments in its binary representation. Integers of 20 bits are thus multiplied directly by the hardware of the machine, yet at the same time space is used quite efficiently (the machine has 60-bit words). The theoretically better algorithm of Section 4.7 gives little real improvement on operands of size 100 (equivalent to about 602 decimal digits): it takes about 300 milliseconds, whereas the classic algorithm takes about 400 milliseconds. For operands ten times this length, however, the fast algorithm is some three times more efficient than the classic algorithm: they take about 15 seconds and 40 seconds, respectively. The gain in efficiency continues to increase as the size of the operands goes up. As we shall see in Chapter 9, even more sophisticated algorithms exist for much larger operands.
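The algorithm studied in Section 4.7 is of the divide-and-conquer type. The sketch below is our own illustration of the underlying idea, not the book's presentation: by replacing four half-size multiplications by three, the recursion leads to a time in the order of n^lg 3, roughly n^1.59. Python integers stand in here for the multiple-precision representation discussed above.

    def divide_and_conquer_multiply(a, b):
        """Multiply two non-negative integers using three recursive half-size
        products instead of four (a rough sketch of the idea of Section 4.7)."""
        if a < 10 or b < 10:
            return a * b
        half = max(len(str(a)), len(str(b))) // 2
        base = 10 ** half
        a_hi, a_lo = divmod(a, base)
        b_hi, b_lo = divmod(b, base)
        low = divide_and_conquer_multiply(a_lo, b_lo)
        high = divide_and_conquer_multiply(a_hi, b_hi)
        # (a_lo + a_hi)(b_lo + b_hi) - low - high = a_lo*b_hi + a_hi*b_lo
        mid = divide_and_conquer_multiply(a_lo + a_hi, b_lo + b_hi) - low - high
        return high * base * base + mid * base + low

    print(divide_and_conquer_multiply(5678, 1234) == 5678 * 1234)   # True

As the text warns, the hidden constant is larger than for the classic algorithm, so the gain only shows up on sufficiently long operands.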

1.7.3 Evaluating Determinants

Let

    M = ( a11  a12  ...  a1n )
        ( a21  a22  ...  a2n )
        (  .    .          . )
        ( an1  an2  ...  ann )

be an n x n matrix. The determinant of the matrix M, denoted by det(M), is often defined recursively: if M[i, j] denotes the (n-1) x (n-1) submatrix obtained from M by deleting the i th row and the j th column, then

    det(M) = sum from j = 1 to n of (-1)^(j+1) a1j det(M[1, j]) .

If n = 1, the determinant is defined by det(M) = a11. Determinants are important in linear algebra, and we need to know how to calculate them efficiently.

If we use the recursive definition directly, we obtain an algorithm that takes a time in the order of n! to calculate the determinant of an n x n matrix (see Example 2.2.5). This is even worse than exponential. On the other hand, another classic algorithm, Gauss-Jordan elimination, does the computation in cubic time. We programmed the two algorithms in Pascal on a CDC CYBER 835. The Gauss-Jordan algorithm finds the determinant of a 10 x 10 matrix in one-hundredth of a second; it takes about five and a half seconds on a 100 x 100 matrix. On the other hand, the recursive algorithm takes more than 20 seconds on a 5 x 5 matrix and 10 minutes on a 10 x 10 matrix; we estimate that it would take more than 10 million years to calculate the determinant of a 20 x 20 matrix, a task accomplished by the Gauss-Jordan algorithm in about one-twentieth of a second!

You should not conclude from this example that recursive algorithms are necessarily bad. On the contrary, Chapter 4 describes a technique where recursion plays a fundamental role in the design of efficient algorithms. In particular, Strassen discovered in 1969 a recursive algorithm that can calculate the determinant of an n x n matrix in a time in the order of n^lg 7, or about n^2.81, thus proving that Gauss-Jordan elimination is not optimal.

1.7.4 Calculating the Greatest Common Divisor

Let m and n be two positive integers. The greatest common divisor of m and n, denoted by gcd(m, n), is the largest integer that divides both m and n exactly. For example, gcd(6, 15) = 3 and gcd(10, 21) = 1. When gcd(m, n) = 1, we say that m and n are coprime. The obvious algorithm for calculating gcd(m, n) is obtained directly from the definition.

    function gcd (m, n)
      i ← min(m, n) + 1
      repeat i ← i - 1 until i divides both m and n exactly
      return i

The time taken by this algorithm is in the order of the difference between the smaller of the two arguments and their greatest common divisor. When m and n are of similar size and coprime, it therefore takes a time in the order of n.

A classic algorithm for calculating gcd(m, n) consists of first factorizing m and n, and then taking the product of the prime factors common to m and n, each prime factor being raised to the lower of its powers in the two arguments. For example, to calculate gcd(120, 700) we first factorize 120 = 2^3 x 3 x 5 and 700 = 2^2 x 5^2 x 7. The common factors of 120 and 700 are therefore 2 and 5, and their lower powers are 2 and 1. The greatest common divisor of 120 and 700 is therefore 2^2 x 5^1 = 20.
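Both methods are easy to try out. The sketch below is ours; it checks the obvious algorithm against the factorization method on the 120, 700 example, using plain trial division for the factorization (which is precisely the expensive step the text goes on to criticize).

    def gcd_by_definition(m, n):
        """Decrease from min(m, n) until a common divisor is found."""
        i = min(m, n)
        while m % i != 0 or n % i != 0:
            i -= 1
        return i

    def prime_factors(n):
        """Multiset of prime factors, found by trial division."""
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def gcd_by_factorization(m, n):
        fm, fn = prime_factors(m), prime_factors(n)
        g = 1
        for p in fm.keys() & fn.keys():          # common prime factors
            g *= p ** min(fm[p], fn[p])          # raised to the lower power
        return g

    print(gcd_by_definition(120, 700), gcd_by_factorization(120, 700))   # 20 20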

Even though this algorithm is better than the one given previously, it requires us to factorize m and n, an operation we do not know how to do efficiently. Nevertheless, there exists a much more efficient algorithm for calculating greatest common divisors. This is Euclid's famous algorithm.

    function Euclid (m, n)
      while m > 0 do
        t ← n mod m
        n ← m
        m ← t
      return n

Considering the arithmetic operations to have unit cost, this algorithm takes a time in the order of the logarithm of its arguments, even in the worst case (see Example 2.2.6), which is much faster than the preceding algorithms. To be historically exact, Euclid's original algorithm works using successive subtractions rather than by calculating a modulo. It is when applied to two consecutive terms of the Fibonacci sequence that Euclid's algorithm takes the longest time among all instances of comparable size.
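Euclid's algorithm transcribes directly into Python; the small experiment below (ours) also counts loop iterations, which illustrates the closing remark about consecutive Fibonacci numbers being the slow case.

    def euclid(m, n):
        """Euclid's algorithm; returns the gcd and the number of iterations."""
        steps = 0
        while m > 0:
            m, n = n % m, m
            steps += 1
        return n, steps

    print(euclid(120, 700))   # (20, 3)
    print(euclid(610, 987))   # (1, 14): consecutive Fibonacci numbers, the slow case
    print(euclid(611, 987))   # (47, 6): operands of similar size, far fewer iterations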

The algorithm fib1 is very inefficient because it recalculates the same values many times. For instance, to calculate fib1(5) we need the values of fib1(4) and fib1(3); but fib1(4) also calls for the calculation of fib1(3). We see that fib1(3) will be calculated twice, fib1(2) three times, fib1(1) five times, and fib1(0) three times. In fact, the time required to calculate f_n using this algorithm is in the order of the value of f_n itself, that is to say, in the order of φ^n (see Example 2.2.7).

To avoid wastefully recalculating the same values over and over, it is natural to proceed instead as follows, calculating the terms in order.

function fib2(n)
    i ← 1; j ← 0
    for k ← 1 to n do
        j ← i + j
        i ← j − i
    return j

This second algorithm takes a time in the order of n, assuming we count each addition as an elementary operation (see Example 2.2.8). This is much better than the first algorithm. However, there exists a third algorithm that gives as great an improvement over the second algorithm as the second does over the first. This third algorithm, which at first sight appears quite mysterious, takes a time in the order of the logarithm of n (see Example 2.2.9). It will be explained in Chapter 4.

function fib3(n)
    i ← 1; j ← 0; k ← 0; h ← 1
    while n > 0 do
        if n is odd then t ← jh
                         j ← ih + jk + t
                         i ← ik + t
        t ← h^2
        h ← 2kh + t
        k ← k^2 + t
        n ← n div 2
    return j

Once again, we programmed the three algorithms in Pascal on a CDC CYBER 835 in order to compare their execution times empirically. To avoid problems caused by arithmetic overflow (the Fibonacci sequence grows very rapidly: f_100 is a number with 21 decimal digits), we carried out all the computations modulo 10^7, which is to say that we only obtained the seven least significant figures of the answer. Table 1.7.1 eloquently illustrates the difference that the choice of an algorithm can make. (All these times are approximate; times greater than two minutes were estimated using the hybrid approach.) The time required by fib1 for n > 50 is so long that we did not bother to estimate it, with the exception of the case n = 100, on which fib1 would take well over 10^9 years! Note that fib2 is more efficient than fib3 on small instances.
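For readers who would like to experiment, the three algorithms transcribe directly into Python (this sketch is ours; unlike the original Pascal programs it computes the exact values rather than working modulo 10^7, since Python integers do not overflow).

    # Sketch: the three Fibonacci algorithms in Python.

    def fib1(n):
        # Time in the order of phi^n.
        if n < 2:
            return n
        return fib1(n - 1) + fib1(n - 2)

    def fib2(n):
        # Time in the order of n, counting each addition as one elementary operation.
        i, j = 1, 0
        for _ in range(n):
            j = i + j
            i = j - i
        return j

    def fib3(n):
        # Time in the order of log n; the mystery is dispelled in Chapter 4.
        i, j, k, h = 1, 0, 0, 1
        while n > 0:
            if n % 2 == 1:
                t = j * h
                j = i * h + j * k + t
                i = i * k + t
            t = h * h
            h = 2 * k * h + t
            k = k * k + t
            n //= 2
        return j

    print([fib1(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    print(fib2(100) == fib3(100))         # True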

892 29.941 0.011 0.020 118.021 n 100 500 1.664 298.553 * All times are in seconds.177 3.000.132 0.457 0.348 76.007 0.000 times larger to make fib3 take one extra millisecond of computing time. and Problem 2.107 7. PERFORMANCE COMPARISON BETWEEN EXACT FIBONACCI ALGORITHMS* n 5 10 15 20 fib l fib2 fib3 0.109 1. 25 .000 fib2 fib3 0.8.7.000 3 5 n 100 10.2.2 compares the times taken by these three algorithms when they are used in conjunction with an efficient implementation of the classic algorithm for multiplying large integers (see Problem 2. and t3(n) = 1logn milliseconds.Preliminaries 18 Chap.005 0.021 0.766 0. t2(n) = 15n microseconds.11).017 0.000 10.087 0.msec 4 2 2 25 min 15 sec msec 1 2 msec Z Using the hybrid approach. In this case the advantage that fib3 enjoys over fib2 is less marked. (n) for the time taken by fibi on the instance n. Writing t.581 0. We could also have calculated all the figures in the answer using multipleprecision arithmetic.2.019 10.-20 seconds.000 5. TABLE 1.1 PERFORMANCE COMPARISON BETWEEN MODULO 107 FIBONACCI ALGORITHMS n 10 fibl 8 msec I msec I msec fib2 fib3 6 3 20 30 50 I sec 2 min I msec 21 days ! msec 2 msec I msec i msec 1.017 0.2. but their joint advantage over fibi remains just as striking.7. we find t1(n) ^ 0.000 100.9. It takes a value of n 10.009 0.2.013 0.7.000 fib2 fib3 1 i msec 150 msec msec I msec 2 1.041 0. Example 2. we can estimate approximately the time taken by our implementations of these three algorithms.000. 1 TABLE 1. Table 1.

the choice may be considered a subjective decision. The "discovery" by Cooley and Tukey in 1965 of a fast algorithm revolutionized the situation : problems previously considered to be infeasible could now at last be tackled. all the necessary theoretical groundwork for Danielson and Lanczos's algorithm had already been published by Runge and Konig in 1924! . and signal processing including speech recognition. That this distinction is not merely academic is illustrated by Problems 2.2. For years progress in these areas was limited by the fact that the known algorithms for calculating Fourier transforms all took far too long.5 that it is possible to get by with a time in the order of n to set up not just one. acoustics. this added subtlety does not alter the fact that the algorithm takes a time in the order of n! to calculate the determinant of an n x n matrix. probably by using a program package allowing arithmetic operations on very large integers.2. but all the n recursive calls.Sec. However. 1.3 is another example of an incompletely presented algorithm. . Thus the development of numerous applications had been hindered for no good reason for almost a quarter of a century.6 Fourier Transforms The Fast Fourier Transform algorithm is perhaps the one algorithmic discovery that had the greatest practical impact in history. Although the classic algorithm took more than 26 minutes of computation.7.7. telecommunications. And if that were not sufficient. In one early test of the "new" algorithm the Fourier transform was used to analyse data from an earthquake that had taken place in Alaska in 1964. the "new" algorithm was able to perform the same task in less than two and a half seconds. quantum physics. Since the exact way in which these multiplications are to be carried out is not specified in fib3. We shall see in Problem 2. Ironically it turned out that an efficient algorithm had already been published in 1942 by Danielson and Lanczos.7.6. nor must it require the use of intuition or creativity".7.11 and 4. We shall come back to this subject in Chapter 9. and hence fib3 is not formally speaking an algorithm. In this case. 1. And what should we say about De Moivre's formula used as an algorithm ? Calculation of a determinant by the recursive method of Section 1. which show that indeed the order of time taken by fib3 depends on the multiplication algorithm used. How are the recursive calls to be set up? The obvious approach requires a time in the order of n 2 to be used before each recursive call.8 When Is an Algorithm Specified ? 19 1. For the moment let us only mention that Fourier transforms are of fundamental importance in such disparate applications as optics. systems theory.5 describes an algorithm ? The problem arises because it is not realistic to consider that the multiplications in fib3 are elementary operations. can we reasonably maintain that fib3 of Section 1. Any practical implementation must take this into account.8 WHEN IS AN ALGORITHM SPECIFIED? At the beginning of this book we said that "the execution of an algorithm must not include any subjective decisions.

9.. to copy a list. pointers. Using this implementation.2. which is the first node in the structure. and lists.9 DATA STRUCTURES The use of well-chosen data structures is often a crucial factor in the design of efficient algorithms.9.1. for example. and Example 2. and which are the predecessor and the successor (if they exist) of any given node.Preliminaries 20 Chap. we shall continue to use the word algorithm for certain incomplete descriptions of this kind. as in Figure 1. alpha l-il beta as gamma 31 delta Figure 1. structures.2..1 Lists A list is a collection of nodes or elements of information arranged in a certain order. The different computer implementations that are commonly used differ in the quantity of memory required. We also suppose that he or she has already come across the mathematical concepts of directed and undirected graphs. We suppose that the reader already has a good working knowledge of such basic notions as arrays. and the order of the elements is given by the order of their indices in the array. and knows how to represent these objects efficiently on a computer.9. maxlength value[ I. Such lists are subject to a number of operations : we might want to insert an additional node. to delete a node. this book is not intended to be a manual on data structures. Such a structure is frequently represented graphically by boxes and arrows.3. After a brief review of some important points. these two structures also offer interesting examples of the analysis of algorithms (see Example 2. and in the greater or less ease of carrying out certain operations. Implemented as an array by the declaration type tablist = record counter : 0 . The details will be filled in later should our analyses require them. A list. maxlength ] : information the elements of a list occupy the slots value [ 1 ] to value [counter ]. 1 To make life simple. . to count the number of elements it contains. 1.4. which is the last. The corresponding data structure must allow us to determine efficiently. Chosen because they will be used in subsequent chapters. The information attached to a node is shown inside the corresponding box and the arrows show transitions from a node to its successor.1. Problem 2.10). 1. and so on.2. this section concentrates on the less elementary notions of heaps and disjoint sets. Here we content ourselves with mentioning the best-known techniques. Nevertheless.
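As a small illustration of the array implementation just described (this sketch is ours, in Python rather than Pascal; note that it indexes slots from 0 instead of 1, and the operation names are illustrative choices).

    # Sketch: a list kept in an array with an explicit counter, as in the tablist record.

    class TabList:
        def __init__(self, maxlength):
            self.maxlength = maxlength
            self.counter = 0                  # number of slots currently in use
            self.value = [None] * maxlength   # value[0 .. counter-1] hold the elements

        def insert(self, index, x):
            # Shifting the tail makes insertion cost proportional to the current size.
            assert self.counter < self.maxlength and 0 <= index <= self.counter
            for i in range(self.counter, index, -1):
                self.value[i] = self.value[i - 1]
            self.value[index] = x
            self.counter += 1

        def kth(self, k):
            # Direct access to the k-th element takes constant time.
            assert 0 <= k < self.counter
            return self.value[k]

    lst = TabList(8)
    for word in ["alpha", "beta", "gamma", "delta"]:
        lst.insert(lst.counter, word)   # append at the end
    lst.insert(1, "epsilon")            # insertion in the middle shifts the tail
    print([lst.kth(k) for k in range(lst.counter)])
    # ['alpha', 'epsilon', 'beta', 'gamma', 'delta']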

inserting a new element or deleting one of the existing elements requires a worst-case number of operations in the order of the current size of the list. provided a suitably powerful programming language is used. the edges may form paths and cycles. once an element has been found.2 for instance. On the other hand. for arbitrary k. whereas an edge joining nodes a and b in an undirected graph is denoted by the set { a. Consider Figure 1. the nodes are joined by lines with no direction indicated. the space needed to represent the list can be allocated and recovered dynamically as the program proceeds. also called edges. This implementation is particularly efficient for the important structure known as the stack. There are never more than two arrows joining any two given nodes of a directed graph (and if there are two arrows. In the case of an undirected graph. However. Even if additional pointers are used to ensure rapid access to the first and last elements of the list. In the example of Figure 1. but not in the other. An edge from node a to node b of a directed graph is denoted by the ordered pair (a. 1. then they must go in opposite directions). A > where N is a set of nodes and A c N x N is a set of edges. . a graph is therefore a pair G = < N. In the case of a directed graph the nodes are joined by arrows called edges. On the other hand.9. which we obtain by restricting the permitted operations on a list : addition and deletion of elements are allowed only at one particular end of the list.Sec.2 there exists an edge from alpha to gamma and another from gamma to alpha. We distinguish directed and undirected graphs. however.9. if pointers are used to implement a list structure.2 Graphs Intuitively speaking. inserting new nodes or deleting an existing node can be done rapidly. Formally speaking. In every case. it suffices to add a second pointer to each node to allow the list to be traversed rapidly in either direction.9 Data Structures 21 we can find the first and the last elements of the list rapidly. 1. it is difficult when this representation is used to examine the k th element. However. If a higher memory overhead is acceptable. the nodes are usually represented by some such structure as type node = record value : information next : T node . a single pointer is used in each node to designate its successor : it is therefore easy to traverse the list in one direction. are joined only in the direction indicated. In this case. b). it presents the major disadvantage of requiring that all the memory space potentially required be reserved from the outset of a program. a graph is a set of nodes joined by a set of lines or arrows. beta and delta. as we can the predecessor and the successor of a given node. where each node includes an explicit pointer to its successor.9. In our example. without having to follow k pointers and thus to take a time in the order of k. and there is never more than one line joining any two given nodes of an undirected graph. b } .

j ] = true . nbnodes ] : information adjacent [ 1 .. Here we attach to each node i a list of its neighbours. The memory space required is quadratic in the number of nodes. The first is illustrated by type adjgraph = record value [ 1 . On the other hand. too).9. should we wish to examine all the nodes connected to some given node. On the other hand. nbnodes ] of record value : information neighbours : list . Equivalently. to determine whether or not two given nodes i and j are connected directly.. j ] = false.. the matrix is necessarily symmetric. If the number of edges in the graph is small. A directed graph. which is less efficient than looking up a Boolean value in an array. then adjacent [i . independently of the number of edges that exist involving this particular node. .Preliminaries 22 Chap.1 . this representation is preferable from the point of view of the memory space used. This takes a time in the order of nbnodes. A tree is an acyclic.2. we have to scan a complete row in the matrix. nbnodes. There are at least two obvious ways to represent a graph on a computer. undirected graph. nbnodes ] : Booleans If there exists an edge from node i of the graph to node j. A second possible representation is as follows : type lisgraph = array[ L. that is to say of those nodes j such that an edge from i to j (in the case of a directed graph) or between i and j (in the case of an undirected graph) exists. we have to scan the list of neighbours of node i (and possibly of node j. With this representation it is easy to see whether or not two nodes are connected. The same representations used to implement graphs can be used to implement trees. It may also be possible in this case to examine all the neighbours of a given node in less than nbnodes operations on the average. otherwise adjacent [i. In the case of an undirected graph. connected. 1 alpha gamma delta Figure 1. a tree may be defined as an undirected graph in which there exists exactly one path between any given pair of nodes. the number of nodes in the graph.
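To make the trade-off between the two representations concrete, here is a brief sketch (ours, in Python; the graph, with nodes numbered 0 to 3, and its edges are illustrative).

    # Sketch: the two graph representations discussed above, for a small directed graph.

    nbnodes = 4
    edges = [(0, 2), (2, 0), (0, 1), (1, 3)]

    # Adjacency matrix: quadratic space, but testing "is there an edge i -> j ?"
    # takes constant time.
    adjacent = [[False] * nbnodes for _ in range(nbnodes)]
    for (i, j) in edges:
        adjacent[i][j] = True

    # Adjacency lists: space proportional to the number of edges, but testing a
    # particular edge means scanning the list of neighbours of node i.
    neighbours = [[] for _ in range(nbnodes)]
    for (i, j) in edges:
        neighbours[i].append(j)

    print(adjacent[0][2], adjacent[2][1])   # True False
    print(neighbours[0])                    # [2, 1]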

the use of additional pointers (for example. any rooted tree may be represented using nodes of the following type : type treenode = record value : information eldest-child. as in Figure 1. If there exists in G a vertex r such that every other vertex can be reached from r by a unique path. like a family tree. Figure 1.3 Rooted Trees Let G be a directed graph. 1. and (by analogy with a family tree once again) delta is the eldest sibling of epsilon and zeta. but rather the pointers used in the computer representation.3 would be represented as in Figure 1.9.9. In this example alpha is at the root of the tree.9.) Extending the analogy with a family tree.3.4. As in the case of lists.5. we say that beta is the parent of delta and the child of alpha. (When there is no danger of confusion. next-sibling :T treenode The rooted tree shown in Figure 1.3. On a computer. It is usual to represent a rooted tree with the root at the top. Two distinct rooted trees. . the branches of a rooted tree are often considered to be ordered : in the previous example beta is situated to the left of gamma. Any rooted tree with n nodes contains exactly n -1 edges. The two trees in Figure 1.9. we shall use the simple term "tree" instead of the more correct "rooted tree". Although nothing in the definition indicates this. A rooted tree. and so on. lambda lambda Figure 1. that epsilon and zeta are the siblings of delta. that alpha is an ancestor of epsilon.4 may therefore be considered as different.9. A leaf of a rooted tree is a node with no children. then G is a rooted tree and r is its root.9. the other nodes are called internal nodes.9 23 Data Structures 1. to the parent or the eldest sibling of a given node) may speed up certain operations at the price of an increase in the memory space needed.9. where now the arrows no longer represent the edges of the rooted tree.Sec.
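A rough Python sketch (ours) of the eldest-child / next-sibling representation described above; the node names reproduce the example tree, while add_child and preorder are illustrative helpers that are not part of the original text.

    # Sketch: a rooted tree represented with eldest_child / next_sibling pointers.

    class TreeNode:
        def __init__(self, value):
            self.value = value
            self.eldest_child = None
            self.next_sibling = None

    def add_child(parent, value):
        # Append a new node as the youngest child of parent.
        child = TreeNode(value)
        if parent.eldest_child is None:
            parent.eldest_child = child
        else:
            sibling = parent.eldest_child
            while sibling.next_sibling is not None:
                sibling = sibling.next_sibling
            sibling.next_sibling = child
        return child

    def preorder(node):
        # Visit a node, then each of its subtrees from the eldest child onwards.
        result = [node.value]
        child = node.eldest_child
        while child is not None:
            result += preorder(child)
            child = child.next_sibling
        return result

    alpha = TreeNode("alpha")
    beta = add_child(alpha, "beta")
    add_child(alpha, "gamma")
    for name in ["delta", "epsilon", "zeta"]:
        add_child(beta, name)
    print(preorder(alpha))   # ['alpha', 'beta', 'delta', 'epsilon', 'zeta', 'gamma']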

the level of a node is equal to the height of the tree minus the depth of the node concerned..9. and level 1 in the tree of Figure 1.5.6. we naturally tend to talk about the left-hand child and the right-hand child. the binary trees of Figure 1. zeta VV M Possible computer representation of a rooted tree. One obvious representation uses nodes of the type type n-ary-node = record value : information child [ 1 . height 0. For instance. In the important case of a binary tree.9. 1 alpha beta delta gamma epsilon 1A Figure 1. n ] : T n-ary-node Figure 1. whereas in the second case b is the younger child of a and the elder child is missing.9. There are several ways of representing an n-ary tree on a computer. .9. although the metaphor becomes somewhat strained. The height of a rooted tree is the height of its root. If each node of a rooted tree can have up to n children. the positions occupied by the children are significant. gamma has depth 1. Two distinct binary trees. The height of a node is the number of edges in the longest path from the node in question to a leaf.3.Preliminaries 24 Chap. The depth of a node in a rooted tree is the number of edges that need to be traversed to arrive at the node starting from the root. and thus also the depth of its deepest leaf. Finally. In this case. For example. we say it is an n-ary tree.6 are not the same : in the first case b is the elder child of a and the younger child is missing.

A binary tree is a search tree if the value contained in every internal node is larger than or equal to the values contained in its left-hand descendants. which possesses only a left-hand child and no right-hand child. The unique special node. A binary tree is essentially complete if each of its internal nodes possesses exactly two children. and . here we only mention their existence. This interesting structure lends itself to numerous applications. called heapsort (see Problem 2.. T[2 k+1_11 (with the possible exception of level 0. It is possible to update a search tree. right-child : T binary-node It is also sometimes possible. to delete nodes or to add new values. Figure 1. For instance. in the sense that the height of the tree is in the order of the number of nodes it contains. all the leaves are either on level 0. T [2k+1]. These structures also allow the efficient implementation of several additional operations. is to the right of all the other level 1 internal nodes. such as the use of AVL trees or 2-3 trees. The parent of the node represented in T [i] is found in T [i div 2] for i > 1.2.1.7 shows how to represent an essentially complete binary tree containing 10 nodes.9.1. This kind of tree can be represented using an array T by putting the nodes of depth k. with the possible exception of a unique special node situated on level 1.4 Heaps A heap is a special kind of rooted tree that can be implemented efficiently in an array without any explicit pointers. to represent a rooted tree using an array without any explicit pointers. that is. and less than or equal to the values contained in its right-hand descendants. Problem 1. . it can happen that the resulting tree becomes badly unbalanced.9 Data Structures 25 In the case of a binary tree we can also define type binary-node = record value : information left-child.3). or else they are on levels 0 and 1. An example of a search tree is given in Figure 5. Suppose the value sought is held in a node at depth p in a search tree.Sec. if it exists. 1. allow such operations as searches and the addition or deletion of nodes in a time in the order of the logarithm of the number of nodes in the tree in the worst case.9.. in the positions T [2k]. from left to right. This structure is interesting because it allows efficient searches for values in the tree. Moreover. including a remarkable sorting technique. 1.9. However. as well as the efficient implementation of certain dynamic priority lists. More sophisticated methods. Design an algorithm capable of finding this node starting at the root in a time in the order of p. as we shall see in the following section. without destroying the search tree property. . if this is done in an unconsidered fashion. and no leaf is found on level 1 to the left of an internal node at the same level. one on the left and one on the right. which may be incomplete).5. Since these concepts are not used in the rest of this book.
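In the spirit of the problem just posed, here is a sketch (ours, in Python; the node type and the sample values are assumptions) of the standard descent in a search tree, which reaches a value stored at depth p after following only p edges.

    # Sketch: searching a binary search tree starting at the root.

    class BinaryNode:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left     # left-hand descendants hold values <= value
            self.right = right   # right-hand descendants hold values >= value

    def search(node, x):
        # Each iteration moves one level deeper, so the time is in the order of the depth of x.
        while node is not None and node.value != x:
            node = node.left if x < node.value else node.right
        return node

    # A small search tree built by hand.
    root = BinaryNode(10,
                      BinaryNode(5, BinaryNode(2), BinaryNode(7)),
                      BinaryNode(15, None, BinaryNode(20)))
    print(search(root, 7).value)   # 7
    print(search(root, 13))        # None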

on the contrary. each of whose nodes includes an element of information called the value of the node. If the value of the node increases.9. If.8.Preliminaries 26 T[7] T(6] T[5] T[4] Chap.9.8 gives an example of a heap. whenever they exist. it suffices to exchange the modified value with Figure 1. 1 T[I0] Figure 1. The heap property is that the value of each internal node is greater than or equal to the values of its children. the children of the node represented in T[i] are found in T[2i] and T [2i + 1 ]. A heap is an essentially complete binary tree. The subtree whose root is in T [i] is also easy to identify. Figure 1. . it suffices to exchange these two values and then to continue the same process upwards in the tree until the heap property is restored.9. so that it becomes greater than the value of its parent. This same heap can be represented by the following array : 10 7 9 4 7 5 2 2 1 6 The fundamental characteristic of this data structure is that the heap property can be restored efficiently after modification of the value of a node. We say that the modified value has been percolated up to its new position (one often encounters the rather strange term sift-up for this process). An essentially complete binary tree.7. the value of a node is decreased so that it becomes less than the value of at least one of its children. A heap.

adding a new node.. procedure alter-heap (T [ 1 . These are exactly the operations we need to implement dynamic priority lists efficiently : the value of a node gives the priority of the corresponding event. n ]. n ] . they are written so as to reflect as closely as possible the preceding discussion. we encourage him or her to figure out how to avoid the inefficiency resulting from our use of the "exchange" instruction. we suppose that T would be a heap if T [i] were sufficiently small . We say that the modified value has been sifted down to its new position. n ].2j if2j <n andT[2j+1]>T[k]thenk -2j+1 exchange T [ j ] and T [k] { if j = k.. n ]. v ) { T [I . i. we suppose that 1<_ i<_ n } x .. The following procedures describe more formally the basic heap manipulation process. The event with highest priority is always found at the root of . or modifying a node. If the reader wishes to make use of heaps for a "real" application.. and then to continue this process downwards in the tree until the heap property is restored. n] is a heap .. we suppose that T would be a heap if T [i ] were sufficiently large . then the node has arrived at its final position) until j = k procedure percolate (T [1 . i ) else percolate (T. i ) {this procedure sifts node i down so as to re-establish the heap property in T [I .. the parameter n is not used here } k-i repeat j *-k if j > 1 and T [j div 2] < T [k] then k . the value of T[i] is set to v and the heap property is re-established . we also suppose that 1 <_ i <_ n .T[i] T[i]<-v if v < x then sift-down (T. 1.9 Data Structures 27 the larger of the values in the children.Sec.j div 2 exchange T [ j ] and T [k] until j = k The heap is an ideal data structure for finding the largest element of a set. i) procedure sift-down (T [1 . n I. i ) { this procedure percolates node i so as to re-establish the heap property in T [I . we also suppose that 1 <_ i 5 n } k F-i repeat j *-k { find the larger child of node j } if 2j <_ n and T [2j ] > T [k] then k . For the purpose of clarity. removing it.
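The basic repairs percolate and sift-down can be transcribed into Python as follows (a sketch of ours, not the book's code; the array is indexed from 0 here, so the children of position i are at 2i+1 and 2i+2 and the parent is at (i-1) div 2).

    # Sketch: the two basic heap repairs on a 0-indexed array.

    def percolate(T, i):
        # Move T[i] up while it is larger than its parent.
        while i > 0 and T[(i - 1) // 2] < T[i]:
            parent = (i - 1) // 2
            T[i], T[parent] = T[parent], T[i]
            i = parent

    def sift_down(T, n, i):
        # Move T[i] down while it is smaller than its larger child; n is the heap size.
        while True:
            k = i
            if 2 * i + 1 < n and T[2 * i + 1] > T[k]:
                k = 2 * i + 1
            if 2 * i + 2 < n and T[2 * i + 2] > T[k]:
                k = 2 * i + 2
            if k == i:
                return
            T[i], T[k] = T[k], T[i]
            i = k

    def alter_heap(T, i, v):
        # Change T[i] to v, then repair in the appropriate direction.
        old = T[i]
        T[i] = v
        if v < old:
            sift_down(T, len(T), i)
        else:
            percolate(T, i)

    T = [10, 7, 9, 4, 7, 5, 2, 2, 1, 6]   # the heap shown above, written 0-indexed
    alter_heap(T, 9, 12)                   # raise the value of the last node
    print(T[0])                            # 12: the new maximum has percolated to the root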

that our starting point is the following array : 1 6 9 2 7 5 2 7 4 10 represented by the tree of Figure 1. n -1 ].. for example. but rather inefficiently) for i . n ] into a heap.. n + 1) It remains to be seen how we can create a heap starting from an array T [I . and the priority of an event can be changed dynamically at all times...9b.Preliminaries 28 Chap. n ] and restores the heap property in T [I . i ) However.. 1 the heap. n ]. n ]) { this procedure makes the array T [I .... n + 1]. as illustrated in Figure 1.. n -1 ] } T[1]-T[n] sift-down (T [ 1 . n + 1 ] } T[n+1] .2).2 to n do percolate (T [1 . This results in an essentially complete binary tree corresponding to the array : 1 10 9 7 7 5 2 2 4 6 It only remains to sift down its root in order to obtain the desired heap. There exists a cleverer algorithm for making a heap. this approach is not particularly efficient (see Problem 2. i ]. by sifting down those roots.v percolate (T [1 . The subtrees at the next higher level are then transformed into heaps.1) procedure insert-node (T [ 1 .9... n return T[1] procedure delete-max (T [1 . This is particularly useful in computer simulations. also by sifting down their roots.9c shows the process for the left-hand subtree. v) { adds an element whose value is v to the heap T [ 1 . The obvious solution is to start with an empty heap and to add elements one by one. The final process thus goes as follows : . procedure slow-make-heap (T [ 1 .9. n ]) { removes the largest element of the heap T [I . n ] of elements in an undefined order.. Suppose. n ]) { returns the largest element of the heap T [ 1 . We begin by making each of the subtrees whose roots are at level 1 into a heap.2.. function find-max (T [1 .9.9a.. The other subtree at level 2 is already a heap. Figure 1. n ] and restores the heap property in T [I .

2.Data Structures Sec. This algorithm can be described formally as follows.8.9. i) We shall see in Example 2. (c) One level 2 subtree is made into a heap (the other already is a heap). (a) The starting situation.4 that this algorithm allows the creation of a heap in linear time.. procedure make-heap (T [ 1 . 1.9.(n div 2) downto 1 do sift-down (T. . Making a heap. (b) The level t subtrees are made into heaps. n ] into a heap } for i ..9 10 1 9 10 7 10 7 29 7 7 5 9 1 7 5 9 4 7 5 2 2 2 2 4 6 2 4 6 2 1 6 whose tree representation is shown in the previous example as Figure 1. Figure 1.9. n ]) { this procedure makes the array T [ 1 .
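A self-contained sketch (ours, in 0-indexed Python) of the bottom-up construction just described; sift_down is repeated here so that the fragment runs on its own.

    # Sketch: building a heap by sifting down each internal node,
    # from the last internal node back to the root.

    def sift_down(T, n, i):
        while True:
            k = i
            if 2 * i + 1 < n and T[2 * i + 1] > T[k]:
                k = 2 * i + 1
            if 2 * i + 2 < n and T[2 * i + 2] > T[k]:
                k = 2 * i + 2
            if k == i:
                return
            T[i], T[k] = T[k], T[i]
            i = k

    def make_heap(T):
        n = len(T)
        for i in range(n // 2 - 1, -1, -1):
            sift_down(T, n, i)

    T = [1, 6, 9, 2, 7, 5, 2, 7, 4, 10]   # the starting array used in the example above
    make_heap(T)
    print(T)   # [10, 7, 9, 4, 7, 5, 2, 2, 1, 6]: the heap pictured earlier in this section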

the basic heap operations needed to implement a dynamic priority list can also be handled by data structures completely different from the heap we have considered so far.2. As an application.Preliminaries 30 Chap. 8) .9. For applications that need percolate more often than sift-down (Problems 3. 5.find-max (T) delete-max (T) insert-node (T [l .2. We wish to group these objects into disjoint sets.) 1. Problem 1. 6) alter-heap (T. and each delete-max in logarithmic time.2. 1. The basic concept of a heap can be improved in several ways.9. find-max. (To be precise.. treating this node as an empty location. Experiments have shown this approach to yield an improvement in the classic heapsort algorithm (Problem 2.12 and 3. m) Draw the heap after each operation. 12] be an array such that T [i ] = i for each i <-12. Let T [I . it pays to ignore temporarily the new value stored at the root. such that the following sequence results in a different heap : m . For applications that have a tendency to sift down the (updated) root almost to the bottom level. an operation beyond the (efficient) reach of classic heaps.2.5 Disjoint Set Structures Suppose we have N objects numbered from 1 to N. Exhibit the state of the array after each of the following procedure calls : make-heap (T) alter-heap (T. and percolate operation in constant time.13) it pays to have more than two children per internal node : this speeds up percolate (because the heap is shallower) at the cost of slowing down any operation that must consider every child at each level. In each set ..3). Exhibit a heap T [I . The Fibonacci heap allows also the merging of priority lists in constant time.3. 12. this data structure allows the implementation of Dijkstra's algorithm in a time in the order of a+ n log n for finding the length of the shortest path from a designated source node to each of the other nodes of a graph with n nodes and a edges (Section 3.2. but care is needed to do it correctly. It is still possible to represent such heaps in an array without explicit pointers. 10) alter-heap (T. Finally. In particular. n ] containing distinct values. n -1].2).9. rather than two with the usual procedure (the children are compared to each other but not to their father). the preceding times for Fibonacci heaps are correct in the amortized sense-a concept not discussed here. The advantage of this procedure is that it requires only one comparison at each level in order to sift down the empty node. At this point.. and to sift it all the way down to a leaf. 1 Problem 1. each object being in exactly one set at any given time. put back the relevant value into the empty leaf and percolate it to its proper position. the Fibonacci heap (or lazy binomial queue) can process each insert-node.

Sec. merge the two corresponding sets. function find 1(x) [ finds the label of the set containing object x } return set [x] procedure merge 1(a . which in turn represent the sets 11. Initially. 16. it is clear that find l takes constant time and that mergel takes a time in the order of N. Suppose we decide to use the smallest member of each set as the label : thus the set (7. we execute a series of operations of two kinds : for a given object.I to N do if set [k] = j then set [k] F.i We wish to know the time required to execute an arbitrary series of n operations of the type find and merge. Still using a single array.8.10) and (3. If consulting or modifying one element of an array counts as an elementary operation. N ]. If we now declare an array set [ 1 . then j is the parent of i in some tree. then i is both the label of a set and the root of the corresponding tree . 1. find which set contains it and return the label of this set . (2. which is necessarily the label for its set. it suffices to place the label of the set corresponding to each object in the appropriate array element. if set [i] =j# i. and given two distinct labels. b) {merges the sets labelled a and b } i f.a.10.4. We can do better than this.6. each containing exactly one object. The two operations can be implemented by two procedures. . We adopt the following scheme : if set [i ] = i. on the other hand.9. Thereafter. How can we represent this situation efficiently on a computer? One possible representation is obvious.9 Data Structures 31 we choose a canonical object. we can represent each set as an "inverted" rooted tree. which will serve as a label for the set..3. we need now only change a single value in the array . it is harder to find the set to which an object belongs.91 will be called "set 3". A series of n operations therefore takes a time in the order of nN in the worst case. j -b if i > j then exchange i and j fork E.7. 51. The array 1 2 3 2 1 3 4 3 4 therefore represents the trees given in Figure 1. the N objects are in N different sets. To merge two sets. starting from the initial situation.9).

(The figure shows the direction of the pointers in the array. Tree representation for disjoint sets. If each consultation or modification of an array element counts as an elementary operation. prove that the time needed to execute an arbitrary sequence of n operations find2 or merge2 starting from the initial situation is in the order of n 2 in the worst case.10. In this way the height of the resulting merged tree will be max(h 1. we have chosen arbitrarily to use the smallest member of a set as its label.Preliminaries 32 Chap.9. So far. not of the edges in the tree. Problem 1.9. h 2) if h 1 #h2. then after an arbitrary sequence of merge operations starting from the initial situation.b Problem 1. the trees do not grow as rapidly.set [i ] return i procedure merge2(a. so that each subsequent call on find2 takes a time in the order of k.5.a else set [a] . we have not gained anything over the use of findl and mergel.i do i .Using this technique. . The problem arises because after k calls on merge2. we may find ourselves confronted by a tree of height k. 1 Figure 1. Prove by mathematical induction that if this tactic is used. Let us therefore try to limit the height of the trees produced. b) [ merges the sets labelled a and b } if a < b then set [b] . In the case when n is comparable to N. When we merge two trees whose heights are respectively h 1 and h2. it would be better to arrange matters so that it is always the root of the tree whose height is least that becomes a child of the other root.) function find2(x) { finds the label of the set containing object x } i -x while set [i ] 91. or h 1+1 if h 1=h2. a tree containing k nodes will have a height at most Llg kJ .4.9.

when we execute the operation find (20) on the tree of Figure 1. 1. the result is the tree of 4 11 4 9 10 1 8 11 9 1 G )a (b) after 20 12 21 16 (a) before Figure 1. we can make our operations faster still. By modifying find2.. Prove that the time needed to execute an arbitrary sequence of n operations find2 and merge3 starting from the initial situation is in the order of n log n in the worst case. we first traverse the edges of the tree leading up from x to the root. For example. modifying each node encountered on the way to set its pointer directly to the root. 20 16 .9. Whenever a is the label of a set.9.height [a] + 1 set [b] <.11. When we are trying to determine the set that contains a certain object x. Once we know the root. Initially. height [a] therefore gives the height of the corresponding tree.a else set [a] F. This technique is called path compression. Path compression.9 33 The height of the trees can be maintained in an additional array height [1 .6. we can now traverse the same edges again.1 la. The procedure find2 is still relevant but we must modify merge accordingly.9. N] so that height [i] gives the height of node i in its current tree.Data Structures Sec. procedure merge3(a. b) { merges the sets labelled a and b we suppose that a # b } if height [a] = height [b] then height [a] .a else if height [a] > height [b] then set [b] F. height [i] is set to zero for each i.b Problem 1.

Problem 1.9. On the other hand.1lb: nodes 20. Our function becomes function find3(x ) { finds the label of the set containing object x } r Fx while set [r ] r do r .set { r is the root of the tree } [r ] i <-. the new find operation takes about twice as long as before. so that it is easy to store this value exactly (whereas we could not efficiently keep track of the exact height of a tree after path compression).9. this remains an upper bound on the height. Use this remark to implement a disjoint set structure that uses only one length N array rather than the two set and rank. We call this value the rank of the tree. Problem 1. which lay on the path from node 20 to the root.set [i] set [i ] . A canonical object has no parent.7.5. Path compression does not change the number of nodes in a tree. when we use this combination of an array and of the procedures find3 and merge3 to deal with disjoint sets of objects. 10. now point directly to the root. and 9.Preliminaries 34 Chap.9.9. This technique obviously tends to diminish the height of a tree and thus to accelerate subsequent find operations.r i F-j return r From now on. The pointers of the remaining nodes have not changed.8. and we make no use of the rank of any object that is not canonical.9.x while i # r do j E. Is path compression a good idea? The answer is given when we analyse it in Example 2.2.10. A second possible tactic for merging two sets is to ensure that the root of the tree containing the smaller number of nodes becomes the child of the other root. 1 Figure 1. and change the name of the array accordingly. ** Problem 1. Analyse the combined efficiency of find3 together with your merge4 from the previous problem. However. and give a result corresponding to the one in Problem 1.) . Using path compression. we say we are using a disjoint set structure. it is no longer true that the height of a tree whose root is a is given by height [a]. Write a procedure merge4 to implement this tactic.9. (Hint : use negative values for the ranks.
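Putting the pieces together, a disjoint set structure combining find3 (path compression) with merging by rank might look as follows; this is an illustrative Python sketch of ours, with objects numbered from 0 rather than from 1, and it calls find on its arguments before merging, whereas merge3 above assumes it is given the labels directly.

    # Sketch: disjoint sets with union by rank and path compression.

    class DisjointSets:
        def __init__(self, N):
            self.parent = list(range(N))   # each object starts alone in its own set
            self.rank = [0] * N            # rank: an upper bound on the tree height

        def find(self, x):
            # Locate the root, then compress the path leading to it.
            r = x
            while self.parent[r] != r:
                r = self.parent[r]
            while self.parent[x] != r:
                self.parent[x], x = r, self.parent[x]
            return r

        def merge(self, a, b):
            # The root of smaller rank becomes a child of the other root.
            a, b = self.find(a), self.find(b)
            if a == b:
                return
            if self.rank[a] < self.rank[b]:
                a, b = b, a
            self.parent[b] = a
            if self.rank[a] == self.rank[b]:
                self.rank[a] += 1

    s = DisjointSets(10)
    s.merge(0, 1); s.merge(2, 3); s.merge(1, 3)
    print(s.find(0) == s.find(2), s.find(0) == s.find(4))   # True False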

a remarkable little French book of popular mathematics. Nilsson (1971). Horowitz and Sahni (1976). graph theory. Borodin and Munro (1975).5. Several other well-known general books are worth mentioning : in chronological order Aho. see Knuth (1977) and Lewis and Papadimitriou (1978). we encourage the reader to look at Bentley (1984). Sedgewick (1983). Graphs and trees are presented from a mathematical standpoint in Berge (1958. Even (1980). Finally. they are described in detail in Knuth (1973) . 1970). Reingold.10 REFERENCES AND FURTHER READING We distinguish between three kinds of books on algorithm design. and Tarjan (1983). 1984c). consult Knuth (1968. 2-3 trees come from Aho. To reinforce our remarks in Sections 1. AVL trees come from Adel'son-Vel'skii and Landis (1962) . Although we do not use any specific programming language in the present book. General books cover several application areas : they give algorithms useful in each area. originally intended to consist of seven volumes. searching.6 and 1. The heap was introduced as a data structure for sorting in . For a more popular account of algorithms. Aho. Standish (1980). we may mention. We give more references on this subject in Chapter 9. For more information about data structures. Stone (1972). Hopcroft. and Ullman (1974). Specific books will be referred to in the following chapters whenever they are relevant to our discussion . and so on. Papadimitriou and Steiglitz (1982). The most complete collection of algorithms ever proposed is no doubt found in Knuth (1968. read Demars (1981). Besides our own book and Harel (1987). The algorithm capable of calculating the determinant of an n x n matrix in a time in the order of n 2'81 is given in Strassen (1969) and Bunch and Hopcroft (1974). however. Knuth (1973) contains a large number of sorting algorithms. 1973). 1. Specific books cover algorithms that are useful in a particular application area : sorting. Nievergelt.7. The excellent book of Harel (1987) takes a broader view at algorithmics and considers it as no less than "the spirit of computing". For an introduction to Fourier transforms. Hop- croft. Fourier transforms. Baase (1978). Christofides (1975). which offers experimental proof that intelligent use of algorithmics may allow a TRs-80 to run rings round a CRAY-1. The fast algorithm for calculating the Fibonacci sequence is explained in Gries and Levin (1980) and Urbanek (1980). and Melhorn (1984a. we are aware of two more works on algorithmics : Horowitz and Sahni (1978) and Stinson (1985).Sec. Dromey (1982). books on algorithmics concentrate on the techniques of algorithm design : they illustrate each technique by examples of algorithms taken from various applications areas. and Ullman (1974). we suggest that a reader unfamiliar with Pascal would do well to look at one of the numerous books on this language.10 References and Further Reading 35 1. such as Jensen and Wirth (1985) or Lecarme and Nebut (1985). Brigham (1974). 1969. and Gonnet (1984). Gondran and Minoux (1979). Hopcroft. Lawler (1976). Multiplication a la russe is described in Warusfel (1961). and Ullman (1983). 1973). 1984b. and Deo (1977). Problem 1.1 comes from Shamir (1979).

Gonnet and Munro (1986).36 Preliminaries Chap. Fredman and Tarjan (1984). we give only some of the possible uses of disjoint set structures . Hopcroft. For ideas on building heaps faster. and Carlsson (1986. The improvements suggested at the end of the sub-section on heaps are described in Johnson (1975). 1987). for more applications see Hopcroft and Karp (1971) and Aho. 1 Williams (1964). which he calls the double-ended heap. consult McDiarmid and Reed (1987). In this book. Carlsson (1986) also describes a data structure. . and Ullman (1974. that allows finding efficiently the largest and the smallest elements of a set. or deap. 1976).

2

Analysing the Efficiency of Algorithms

2.1 ASYMPTOTIC NOTATION

As we mentioned in Chapter 1, theoretical analyses of the efficiency of algorithms are carried out to within a multiplicative constant so as to take account of possible variations introduced by a change of implementation, of programming language, or of computer. To this end, we now introduce formally the asymptotic notation that will be used throughout the present book.

2.1.1 A Notation for "the order of"

Let IN and IR represent the set of natural numbers (positive or zero) and the set of real numbers, respectively. We denote the set of strictly positive natural numbers by IN+, the set of strictly positive real numbers by IR+, and the set of nonnegative real numbers by IR* (the latter being a nonstandard notation). The set { true, false } of Boolean constants is denoted by B. Let f : IN → IR* be an arbitrary function. We define

    O(f(n)) = { t : IN → IR* | (∃c ∈ IR+) (∃n_0 ∈ IN) (∀n ≥ n_0) [t(n) ≤ cf(n)] }.

In other words, O(f(n)) (read as "the order of f(n)") is the set of all functions t(n) bounded above by a positive real multiple of f(n), provided that n is sufficiently large (greater than some threshold n_0). For convenience, we allow ourselves to misuse the notation from time to time. For instance, we say that t(n) is in the order of f(n) even if t(n) is negative or undefined for some values n < n_0. Similarly, we talk about the order of f(n) even

for any function f : IN -* IR*. (Such behaviour is unlikely since t(n) decreases as n increases for n <.) Find the simplest possible function f : IN -4 lR* such that the algorithm takes a time in the order of f (n). (n + I)! E 0 (n!) v. n3EO(n2) iii. f (n) E O (n) = [f (n)]2 E O (n 2) vi. the preceding one. it is allowable to talk about the order of n / log n . Problem 2. In particular. Prove that t(n) E O (f (n)).3. then any other implementation of the same algorithm takes a time in the order of t(n) seconds. and it is correct to write n3-3n2-n -8 E 0(n).1.Analysing the Efficiency of Algorithms 38 Chap. Problem 2. for any function f : IN -+ IR*. provided that f (n) is strictly positive for all n E IN: O(f(n))={t:IN -4 ft*1(3cjR)(b'nEIN)[t(n)<_cf(n)]}. it takes a time in the order of t (n) itself.1. We say that such an algorithm takes a time in the order of f (n) for any function f : IN -4 IR* such that t(n) E O (f (n)). 2 when f (n) is negative or undefined for a finite number of values of n . however. we try to express the order of the algorithm's running time using the simplest possible function f such that t(n) E O (f (n)). The principle of invariance mentioned in the previous chapter assures us that if some implementation of a given algorithm never takes more than t(n) seconds to solve an instance of size n. f (n) E O (n) 2f (n) E O (2n) Prove that the following definition of 0 (f (n)) is equivalent to Problem 2. in this case we must choose no sufficiently large to be sure that such behaviour does not happen for n ? no.2. . Some implementation of a certain algorithm takes a time bounded above by t(n) = 3 seconds . Which of the following statements are true? Prove your answers. n2E0(n3) ii.333. since t(n) E O (t(n)). i.1.1. even though this function is not defined when n =0 or n = 1. In general.l8n milliseconds + 27n2 microseconds to solve any instance of size n. 2 ' ' Q ( 2 ) iv. For example.

g(n))). However. 3n2 + n +8)#n 3 when 0 <.6.1.7.4. even though it is often useful in practice.n <. that Problem 2.1. Find two functions f and g : IN -* IN+ such f (n) 0 0 (g(n)) and g(n) 0 0 (f (n)). The last equality holds despite the fact that max(n 3. the threshold no is not necessary in principle. 2. n 2. The result of the preceding problem is useful for simplifying asymptotic calculations. For instance. where the sum and the maximum are to be taken pointwise. n3+3n2+n +8 E O(n3+(3n2+n +8)) = 0 (max(n 3. 0 (f (n)) = 0 (g(n)) if and only if f (n) E 0 (g(n)) and g(n) E O (f (n)). however. A little manipulation is sufficient. to allow us to conclude that n3-3n2-n -8 E O(n3) because .3.Sec. then 0 (g(n)) c 0 (h (n)) This asymptotic notation provides a way to define a partial order on functions and consequently on the relative efficiency of different algorithms to solve a given problem. we do have to ensure that f (n) and g(n) only take nonnegative values (possibly with a finite number of exceptions) to avoid false arguments like the following : 0 (n2)=O(n3+(n2-n3)) = 0 (max(n 3. Problem 2. Prove that the relation " E 0 " is transitive : if f (n) E 0 (g(n)) and g(n) E 0 (h (n)).n 3)) = O (n 3). and ii. Problem 2. prove that 0 (f (n) + g(n)) = 0 (max(f (n).5. Problem 2.1 Asymptotic Notation 39 In other words. 0 (f (n)) C 0 (g(n)) if and only if f (n) E 0 (g(n)) but g(n) e 0 (f (n)). because the asymptotic notation only applies when n is sufficiently large. then f (n) E 0 (h (n)). prove that i. Conclude that if g(n) e 0 (h (n)).1. Prove your answer. For arbitrary functions f and g : IN -4 IR*. as suggested by the following exercises.1. 3n 2 + n + 8)) = 0 (n 3). For arbitrary functions f and g : IN -> IR*.

10.1. De l'Hopital's rule is often useful in order to apply the preceding problem.n . Given f and g : IN -* R+.11. Here again. and iv. Problem 2. is never zero for x E [no.3n 2 . n i = 1+2+ Find the error in the following argument : +n E 0(1+2+ +n) = 0(max(1.9 to but that e O (log n). prove that i. X-->- provided that this last limit exists.. lim f (n)/g(n) E R+ 0 (f (n)) = 0 (g(n)).8. . then lira f (n)lg(n) = lim f '(x)lg'(x).9. n)) = O(n). n3-3n2-n . Let a be an arbitrary real constant. prove that log n e 0 Use de l'Hopital's rule and Problems 2. or if both these limits are infinite.1.8 is negative when 0 <. Problem 2. the fact that n 3 . +oo) in such a way that the corresponding new functions f and g are differentiable on this interval and also that g'(x). Recall that if lim f (n) = lim g(n) = 0..1. 2 n3-3n2-n -8 E 0(n3-3n2-n -8) = O(Zn3+(Zn3-3n2-n -8)) = 0(max(fn3. then provided na- n-+ - that the domains off and g can be extended to some real interval [no.2. tions " c " and "_ " to put the orders of the following functions into a sequence : . but n-i- iii. and n->- ii.6 is of no concern.8)) = O(In3)=0 (n3). The notion of a limit is a powerful and versatile tool for comparing functions.40 Analysing the Efficiency of Algorithms Chap.1.5 and 2. Problem 2. it can happen that 0 (f (n)) C 0 (g(n)) when the limit of f (n)lg(n) does not exist as n tends to infinity and when it is also not true that 0 (g(n)) = 0 (g(n) -f(n)). the derivative of g(x). 0 < e < 1. False arguments ofzthe kind illustrated in the following problem are also to be avoided. Use the relaProblem 2. lim f (n)/g(n) = 0 = O (f (n)) c O (g(n)) = O (g(n) ± f (n)). it can happen that 0 (f (n)) = 0 (g(n)) although the limit of f (n)/g(n) does not exist as n tends to infinity.1. +oo). .n <.1.

n s. For arbitrary functions f and g : IN -a IR*.12.13.1. (1+E)" .R+ be two increasing functions. a fundamental asymmetry between the notation 0 and Q. In other words. Prove that iii. On the other hand. f (n) e O (g(n)). which we read unimaginatively as omega of f (n) is the set of all the functions t(n) bounded below by a real positive multiple of f(n). there can thus be only a finite number of instances. n 41 1+e. since they cannot take more time than the worst case. Assuming only a finite number of instances of each given size exists.Sec.1. The following exercise brings out the symmetry between the notation 0 and Q. This time is obviously also sufficient to solve all the other instances of size n. 4g(n) g(2n) <_ 8g(n) and ii. on which the algorithm takes a time greater than cf (n). if an algorithm takes a time in Q(f (n)) in the worst case. for each sufficiently large n. however. by using a bigger constant. If an algorithm takes a time in 0 (f (n)) in the worst case. (n 2+8n +Iog3n)4.1 Asymptotic Notation n log n. In a worst-case analysis there is. and let c be a strictly positive real constant such that for every integer n i. and n 2/ log n. Prove your answer. Q(f (n)). prove that f (n) E O (g(n)) if and only if g(n) e Q(f (n)). 2.1.2 Other Asymptotic Notation The notation we have just seen is useful for estimating an upper limit on the time that some algorithm will take on a given instance. provided n is sufficiently large. there exists a real positive constant c such that a time of cf (n) is sufficient for the algorithm to solve the worst instance of size n. all of size less than the threshold.3. f (2n) 2f (n) + cg(n). Let f and g : IN -. there exists a real positive constant d such that the algorithm takes a time longer than . It is also sometimes interesting to estimate a lower limit on this time.1. Problem 2. The following notation is proposed to this end : Q(f(n))= {t:IN --> IR* I (2ce1R+)(3noEIN)(Vn ?no)[t(n)?cf(n)] }. These instances can all be taken care of as suggested by Problem 2. 2. * Problem 2.

which we saw in Section 1. are two arbitrary functions. O(f (n)) = O(g(n)). We shall be happiest if. prove that if f and g : IN -* IR Problem 2. The following problem shows that the O notation is no more powerful than the 0 notation for comparing the respective orders of two functions.16.4. prove that the following statements are equivalent : i. For arbitrary functions f and g : IN -4 R*.15. and n-4iii. Continuing Problem 2. and iii. when we analyse the asymptotic behaviour of an algorithm. 0 (f (n)) = 0 (g(n)). Prove that f (n) E O(g(n)) if and only if (3c. called the exact order of f (n). For this reason we introduce a final notation O(f(n)) = O(f(n)) n f (f(n)). lim f (n)/g(n) a 1R+ = f (n) E O(g(n)). lim f (n)/g(n) = 0 = f (n) E O (g(n)) but f (n) 0 O(g(n)). Problem 2. despite the fact that a time in the order of n is sufficient to solve arbitrarily large instances in which the items are already sorted. provides a typical example of such behaviour : it takes a time in S2(n 2) in the worst case. 2 df (n) to solve the worst instance of size n. na00 . for each sufficiently large n.1. This in no way rules out the possibility that a much shorter time might suffice to solve some other instances of size n. then I. Problem 2.1.9.42 Analysing the Efficiency of Algorithms Chap. Insertion sort. ii.1. its execution time is bounded simultaneously both above and below by positive real multiples (possibly different) of the same function. d e R+)(3n0EIN)(Vn ? n0) [cg(n) 5 f(n) 5 dg(n)].= f (n) E f (g(n)) but f (n) 0 O(g(n)). lim f (n)/g(n) = +. Thus there can exist an infinity of instances for which the algorithm takes a time less than df (n).1. H.14. f (n) E O(g(n)).

where the time depends on both the number of vertices and the number of edges.3. but 210g°n 0 O(21°g'n) if a # b. This situation is typical of certain algorithms for problems involving graphs. n (=_O(logb n) whatever the values of a. there are in general an infinite number of pairs < m .1.1 Asymptotic Notation Problem 2. * Problem 2.4 Operations on Asymptotic Notation To simplify some calculations. n ik E O(nk+') for any given integer k >_ 0 (this works even for real k > -1. iv. For instance. and n i -' E O(log n).1.n)5cf(m. log(n!) E O(n log n). For this reason the asymptotic notation is generalized in a natural way to functions of several variables. There is nevertheless an essential difference between an asymptotic notation with only one parameter and one with several : unlike the result obtained in Problem 2.Sec.n)<_cf(m.nEIN)[t(m. 2. n > such that m >. 0 (f (n))+O (g(n)) represents the set of functions .. b > 1 (so that we generally do not iii. iii. This is explained by the fact that while there are never more than a finite number of values of n >.n)l } Other generalizations are defined similarly.17.0 and n >.noE N) (Vn?n0)(Vm ?mo)[t(m. log. bother to specify the base of a logarithm in an asymptotic expression).0 yet such that m >. r=i the hidden constant in the 0 notation may depend on the value of k). We define O(f(m.m o and n >.18.1.n))= [t:INxJ-4JR' I(3cElR+)(3mo.1. it can happen that the thresholds mo and no are indispensable. In such cases the notion of the "size of the instance" that we have used so far may lose much of its meaning.no are not both true. Give an example of a function f : INxIN 1R+ such that O(f(m. 2.3 Asymptotic Notation with Several Parameters It may happen when we analyse an algorithm that its execution time depends simultaneously on more than one parameter of the instance in question. 0 2. Let f : INxIN -* 1R* be an arbitrary function.0 such that n >_ no is not true.1. v.n)l }.n))# {t:INx1N-4 1R* 1(3cER+)(`dm. we can manipulate the asymptotic notation using arithmetic operators. 43 Prove the following assertions : i.

including pointwise subtraction of functions. On the other hand. Still.19(i)). +O(f (n)). To understand the first notation. some function in O(f (n)) . however. think of it as O(f (n)) exp { Id 2 } . where Ida : N -f IR* is the constant function Ida (n) = a for every integer n.44 Analysing the Efficiency of Algorithms Chap. a genuinely ambiguous situation would arise : what would O (n 3) .19.1. if X and Y are sets of functions from IN into IR* and if op is any binary operator. . it suffices for g(n) to be the pointwise product of two possibly different functions. g(n))) = max(O(f (n)). if a E IR*. to belong to O(f (n)) x O(f (n)). Intuitively this represents the order of the time taken by an algorithm composed of a first stage taking a time in the order of f (n) followed by a second stage taking a time in the order of g(n). where " exp " denotes the binary exponentiation operator and " Id 2 " is the constant function 1d2(n)=2 for all n . but this is of no consequence (see Problem 2. The hidden constants that multiply f (n) and g(n) may well be different. 2 obtained by adding pointwise any function in 0 (f (n)) to any function in 0 (g(n)). and all this theory of operations on sets is extended in the obvious way to operators other than binary. reserving the symbol "\' to denote set difference : A \ B = { x E A I x e B 1. O(f (n))+O(g(n)) = O(f (n) + g(n)) = O(max(f (n).O (n 2) mean. Furthermore. Let f and g be arbitrary functions from IN into IR*. Prove the following identities : i. if N is the set of vertices of a directed graph. We also use the symmetrical notation g op X and a op X. we stretch the notation by writing X op g to denote X op { g } .19(ii)). Similarly. Similarly. In every case but one the context removes any potential ambiguity. 0 (f (n)) x 0 (g(n)) does not denote the Cartesian product of O (f (n)) and 0 (g(n)). we use X op a to denote X op Ida.1. a function g(n) must be the pointwise square of (n)]2). If g is a function from IN into IR*. This notation occasionally conflicts with the usual mathematical conventions. each a member of O(f (n)). for example ? To avoid this ambiguity. there is one case that must be treated cautiously. O(g(n))) . [0 (f (n))]2 does not denote the set of pairs of functions chosen from the set 0 (f (n)). Notice the subtle difference between [O(f (n))]2 and O(f (n)) x O(f (n)). Although they both denote the same set as 0([f this requires a proof (Problem 2. For instance. More formally. we use "-" only to denote arithmetic subtraction.1. then "X op Y " denotes { t :IN IR* I (3f EX )(3gEY)(3n0EIN)(b'n>no)[t(n) = f(n) op g(n)] }. To belong to [O(f (n))]2. then N x N denotes as usual the set of possible edges between these vertices. which is not at all the same as yj 1 O(f (n)) = O(f (n))+O(f (n)) + Problem 2. n x O(f (n)) denotes { t :IN -4 JR I (3g(n)EO(f(n)))(3noEIN)(Vn ?no) [t(n) = n xg(n)] }. If the symbol "-" were used to denote the differ- ence of two sets.

such as being a power of 2. and iv. f(n) E Hi=o0(1) Another kind of operation on the asymptotic notation allows it to be nested.1.1. A function f : IN -* R* is . You probably used this idea for solving Problem 2.5. The notation S2(f (n) I P(n)) and O(f (n) I P(n)) is defined similarly. but [0(1)]n # 2°(") . as is the notation with several parameters.R* I (3f (=.Sec. The principal reason for using this conditional notation is that it can generally be eliminated once it has been used to make the analysis of an algorithm easier. possibly defined by some asymptotic notation.1. 0 (f (n) I P(n)). Example 2. is the set of all functions t(n) bounded above by a real positive multiple of f (n) whenever n is sufficiently large and provided the condition P(n) holds. [I +0(1)]n = 2°(n). Now 0 (X) denotes LJ0(f(n)) = { t :IN . 12(X) and O(X) are defined similarly. Conditional asymptotic notation handles this situation. Let X be a set of functions from IN into R*. 2. the natural way to express the execution time required by Dixon's integer factorization algorithm (Section 8. Let f : IN -4 R* be any function and let P : IN -* 113 be a predicate. which we read as the order of f (n) when P(n).12. 2.3) is 0(eo(mi )) where n is the value of the integer to be factorized.X)[te0(f(n))] }.5 Conditional Asymptotic Notation Many algorithms are easier to analyse if initially we only consider instances whose size satisfies a certain condition. The notation 0 (f (n)) defined previously is thus equivalent to 0 (f (n) I P(n)) where P(n) is the predicate whose value is always true.1 Asymptotic Notation 45 H.1. We define O(f(n) I P(n)) t :lN -4 R* I (3cER+)(3noeIN)(Vn ? no) [P(n) = t(n) <_ cf (n)] }. iii. In other words. O([f(n)]2) _ [0(f(n))]2=O(.f(n))xO(f(n)). Although this expression can be simplified.

<_ t (L(n + l )/2]) and t ([n / 21) <_ t (((n + 1)12 ). in particular Problem 2.1. if we only consider the cases when n is a power of 2. or e. Let b ? 2 be any integer. Example 2. By the induction particular. 2 eventually nondecreasing if (3 n 0 E IN) ('d n > n 0) [f (n) <_ f (n + 1)]. First. One might be tempted to claim that t(n) is eventually nondecreasing because such is obviously the case with n log n .6. The proof that t(n) is nondecreasing must use its recursive definition.4. note that t (1) = a <_ 2(a +b) = t (2). we shall therefore in future simply refer to such functions as being smooth. where a and b are arbitrary real positive constants.20. Let t (n) be defined by the following equation : t(n) _ Ja ifn = 1 t(Ln/2J)+t([n/21)+bn otherwise. This argumentation is irrelevant and fallacious because the relation between t(n) and n log n has only been demonstrated thus far when n is a power of 2. 0 We illustrate this principle using an example suggested by the algorithm for merge sorting given in Section 4. Furthermore. the equation becomes t (n) Ja ifn=l 2t (n / 2) + bn if n > I is a power of 2.2. Give two specific examples to illustrate that the conditions "t(n) is eventually nondecreasing" and "f (bn ) E O (f (n))" are both necessary to obtain these results. as well as being eventually nondecreasing. .3.Analysing the Efficiency of Algorithms 46 Chap. assume that (V m < n) [t(m) <_ t (m + 1)]. The presence of floors and ceilings makes this equation hard to analyse exactly. S2. However. where X stands for one of 0. In order to apply the result of the previous problem to conclude that t(n) E O(n log n). Let n be greater than hypothesis. let f : IN -4 IR* be a smooth func- tion. The following problem assembles these ideas.1. t (Ln / 2j) t(n)=t(Ln/2j)+t([n/2])+bn St(L(n+1)12j)+t([(n+l)/21)+b(n+l)=t(n+l). it satisfies the condition f (bn ) E O (f (n)). * Problem 2.3. The proof that (Vn >_ 1) [t(n) <_t(n + 1)] is by mathematical induction. It turns out that any function that is b-smooth for some integer b >_ 2 is also c-smooth for every integer c ? 2 (prove it!). Such a function is b-smooth if. if 1(n) E O(f (n)). Prove that t(n) E X (f (n)). allow us to infer immediately that t (n) E O(n log n { n is a power of 2). The techniques discussed in Section 2. and let t : IN --> IR* be an eventually nondecreasing function such that t(n) EX (f (n) I n is a power of b). which implies by mathematical induction that (3 n0 E l) (`d n >_ n0) (V m >_ n) [f (n) <_ f (m)]. Let b ? 2 be any integer. A word of caution is important here. prove that t (n) is also smooth. Therefore In 1. we need only show that t(n) is an eventually nondecreasing function and that n log n is smooth.

d E 1R+. max I t I (n) I n Sn0}) and v =min(d.6 Asymptotic Recurrences When analysing algorithms. To solve such inequalities. since they do not allow us to conclude that t(n) is eventually nondecreasing. we do not always find ourselves faced with equations as precise as those in Example 2. We saw in the previous section that f (n) E O(n log n).20. it allows us to confine our analysis in the initial stages to the easier case where n is a power of 2.2 in the preceding section. n log n)) = O (n log n). More often we have to deal with inequalities such as t(n) < tI(n) ifn :5 no t (Ln / 2]) + t (Fn / 21) + cvi otherwise t2(n) ifn <_n0 t (Ln / 2]) + t ([n / 21) + do otherwise. it is convenient to convert them first to equalities. define f : IN -* IR by f(n)= 1 ifn = 1 f (Ln / 2]) + f ([n / 21) + n otherwise. We immediately conclude that t(n) E O(f (n)) = O(n log n).1 Asymptotic Notation 47 Finally. however.min {t7(n)/f(n) I n Sn0}).1.20.Sec.1. and simultaneously t(n) for some constants c. This could not have been done directly with the original inequalities. 2. 2. It is then possible. It is easy to prove by mathematical induction that v <_ t(n)/f(n) <_ u for every integer n. More importantly. n 0 E IN. let u = max(c.1. which in turn prevents us from applying Problem 2. This change from the original inequalities to a parametrized equation is useful from two points of view. Our asymptotic notation allows these constraints to be expressed succinctly as t(n) E t(Ln/2])+t([n121)+0(n). .1. and for appropriate initial functions t 1. n log n is smooth because it is clearly eventually nondecreasing and 2n log(2n) = 2n (log2 + log n) = (21og2)n + 2n log n E O (n + n log n) = O (max(n. t2: IN -f IR+. to generalize our results automatically to the case where n is an arbitrary integer. Coming back now to the function t(n) satisfying the preceding inequalities. To this end. using the conditional asymptotic notation and the technique explained in Problem 2. Obviously it saves having to prove independently both t (n) c O (n log n) and t (n) E Q (n log n).
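To see the behaviour concretely, here is a minimal Python sketch (ours, not the book's; the choice t(1) = 1 is arbitrary) that evaluates the recurrence t(n) = t(⌊n/2⌋) + t(⌈n/2⌉) + n and compares it with n lg n. The ratio remains bounded as n grows, which is consistent with the conclusion t(n) ∈ Θ(n log n) obtained above.

    from functools import lru_cache
    from math import log2

    @lru_cache(maxsize=None)
    def t(n):
        # t(n) = t(floor(n/2)) + t(ceil(n/2)) + n, with the arbitrary initial value t(1) = 1
        if n == 1:
            return 1
        return t(n // 2) + t((n + 1) // 2) + n

    for n in (10, 100, 1000, 10000, 100000):
        print(n, t(n) / (n * log2(n)))    # the ratio stays bounded as n grows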

1) is true for some n ? 1. 2 2.1. mathematical induction is a tool sufficiently powerful to allow us to discover not merely the proof of a theorem. This suggests that we formulate a hypothesis that f (n) might be a quadratic polynomial. we obtain two nontrivial equations for our three unknowns : 1 + b . we know that f(n)=n +f(n-1)=n +a(n-1)2+b(n-1)+c =an2+(1+b-2a)n +(a-b+c).1. From these it follows that a = b = '-f. Obviously f (n) = YJ%o i <_ J70n = n 2. As we shall see in Examples 2. this technique of constructive induction is especially useful for solving certain recurrences that occur in the context of the analysis of algorithms. It remains to establish the truth of HI (0) in order to conclude by mathematical induction that HI (n) is true for every integer n. Too often it is employed to prove assertions that seem to have been produced from nowhere like a rabbit out of a hat. 1. We now have therefore a new. The technique of constructive induction consists of trying to prove this incomplete hypothesis by mathematical induction. However. then so is HI (n). By applying this technique. This hypothesis is partial in the sense that a.Analysing the Efficiency of Algorithms 48 Chap. and c are not yet known. but that you are looking for some such formula.3. and 2. we conclude that c = 0 and that f (n) = n 2/ 2 + n / 2 is true for every integer n .2. Let the function f : IN -> IN be defined by the following recurrence : 0 . more complete hypothesis. If we are to conclude HI (n). We have just shown that if HI (n -1) is true for some n ? 1.f (n) = ifn =0 n + f (n . b. we can simultaneously prove the truth of a partially specified assertion and discover the missing specifications thanks to which the assertion is correct. b. and so f (n) E O (n 2 ). We therefore try the partially specified induction hypothesis HI (n) according to which f (n) = an 2 + bn + c. the value of c being as yet unconstrained. Knowing that f (O) = 0. By equating the coefficients of each power of n. Supposing that HI (n . Along the way we hope to pick up sufficient information about the constants to determine their values.4 and 2. It is clear that f (n) = Yi"_o i. . In the case of the preceding example it would have been simpler to determine the values of a. their origin remains mysterious. but also its exact statement.1. it must be the case that f (n) = an 2 + bn + c. While the truth of these assertions is thus established.1) otherwise. which we continue to call HI (n) : f (n) = n 2/ 2 + n / 2 + c.2a = b and a . and c by constructing three linear equations using the values of HI (n) forn =0.7 Constructive Induction Mathematical induction is used primarily as a proof technique. Pretend for a moment that you do not know that f (n) = n (n + 1)/ 2. Now HI (0) says precisely that f (O) = a 02 + b 0 + c = c. We begin with a simple example.5.b + c = c. Example 2.

1 Asymptotic Notation 49 c=0 a+b+c=1 4a +2b +c=3 Solving this system gives us immediately a = 2. we have established that t(n) ? an! for every positive integer n . regardless of the value of u. However. .bn2 + vn!.1.4.1) otherwise. Unfortunately no positive value of v allows us to conclude that t(n) <. Thus we see that t(n) ? un! is always true. since we are only interested in establishing an upper bound on the quantity of interest.1) ? u (n .1)! for some n > 1. In this setting constructive induction can be exploited to the hilt.Sec.3.u (n -1)! for some n > 1. The technique of constructive induction is useful in both cases.a. For simplicity. To establish this. we know that t(n) = bn2 + nt(n -1) >.vn!. t (l) >. that is. Although this equation is not easy to solve exactly. it suffices to show that this is true for n = 1. As usual. or perhaps even that the hypothesis to the effect that t(n) EO (n!) is false.un! for every positive integer n.un! for every positive integer n. Encouraged by this success.bn2 + nu (n -1)! = bn2 + un! >. However our aim is to show that t(n) :.3 will prove insufficient on occasion. Since t (l) = a. b = Z. It seems then that constructive induction has nothing to offer in this context.W. Some recurrences are more difficult to solve than the one given in Example 2. there exists a real positive constant u such that t (n) >. Thus once the constants are determined we must in any case follow this with a proof by mathematical induction. we now try to show that t (n) E O (n!) by proving the existence of a real positive constant v such that t(n) 5 vn! for every positive integer n. this allows us to affirm that t(n) = bn2 + nt(n -1) <. However. Let the function t : IN+ -> IR+ be given by the recurrence t(n) _ (a ifn=1 jl bn z + nt(n .u.vn! given that t(n) :. using this approach does not prove that f (n) = n 2/ 2 + n / 2.v(n . this is the same as saying that u <. 2. In order to conclude that t(n) >. Suppose by the partially specified induction hypothesis that t(n -1) >.1. we shall prove independently that t (n) E O (n!) and that t(n) E S2(n!). Suppose by the partially specified induction hypothesis that t(n -1) <.1)!.bn2 + vn!. Taking u = a. Even the techniques we shall see in Section 2. provided that t(n . it is sufficiently similar to the recurrence that characterizes the factorial (n! = n x(n -1)!) that it is natural to conjecture that t (n) E O(n!). since nothing allows us to assert a priori that f (n) is in fact given by a quadratic polynomial. By definition of t(n). and thus that t (n) E U(M). in the context of asymptotic notation. where a and b are arbitrary real positive constants. and c = 0. we begin by proving that t(n)ES2(n!). Example 2. an exact solution of the recurrence equations is generally unnecessary. that is.

vn! .a + 5b. Determine in terms of a and b the real positive constant c such that t(n) lim n-*= n! = c. solve exactly the recurrence * Problem 2. When n = 1.wn for any positive integer n. Verify that a <. When n = 2.vn!. independently of the value of v. This inequality holds if and only if n >. which we were unable to prove.w (n -1). we can apply the recurrence definition of t(n) to find t(2) = 4b + 2t(1) = 4b + 2a . 2 In fact. To conclude that t(n) <. we may in particular choose w = 3b to ensure that t(n) <.bn 2 + n (v (n -1)! .1.vn! .wn. defining t(n).3b. Using the definition of t(n). it is necessary and sufficient that v >.3.vn! .22. Rather than trying to prove directly that t(n) <. it illustrates well the advantage obtained by using constructive induction.a + 5b. We may hope for success.wn is a consequence of the hypothesis t(n -1) <.3. To complete Example 2.21.1)! .2v .1.v . Problem 2. we know that t(l) = a. you may wish to prove this assertion.1.vn!. by straightforward mathematical induction. we may choose v = a + 5b.vn! . If we are to conclude that t(n) <. b E IR+ be arbitrary constants.-w. Suppose then by the partially specified induction hypothesis that t(n -1) < v (n -1)! .3 and w > bn /(n -2).4. we use constructive induction to determine real positive constants v and w such that t(n) <. Since n /(n -2):5 3 for every n >. since t(n) <. then so too is the induction hypothesis it allows us to use.v (n .2. This idea may seem odd. thanks to which we were able to prove that t(n) E O(n!) without ever finding an exact expression for t(n).c <.6b . which is stronger than the previous condition. provided that n >.a + 3b. 11 The following problem is not so easy . we conclude that t (n) = bn 2 + nt (n -1) <. All that remains is to adjust the constant v to take care of the cases n <. The conclusion from all this is that t(n) E 9(n!) since an! S t(n) S (a + 5b)n! .w (n -1) for some n > 1. Let IR+ be the function defined by the recurrence .Analysing the Efficiency of Algorithms 50 Chap. g : IN+ Let k E IN and a. which is now completely specified.wn is a stronger statement than t(n) <. If we are to conclude that t(n) <. If you got lost in the preceding argument.w (n -1)) = vn! + ((b -w )n + w )n. it is necessary and sufficient that v >. however. it is necessary and sufficient that (b -w )n + w <. In particular. on the grounds that if the statement to be proved is stronger.3bn for every positive integer n. it is possible to obtain the result we hoped for.

g(n))) is no longer true. 11 . The third difference concerns the definition of 0. the asymmetry between 0 and Q noted after Problem 2. With this definition it suffices that there exist an infinite number of instances x that force some algorithm to take at least cf ( I x I) steps in order to conclude that this algorithm takes a time in Q (f (n)).1. Some authors define Q(f(n)) = { t:IN -> IR* I (3ce1R+)(VnoeIN)(3n ?no) [t(n)?cf(n)] I.n 2 E n 3 + O (n 2). 1 2. Of course. a statement such as O(f (n)) + O(g(n)) = O(max(f (n). We often find 0(. the meaning of "such-and-such an algorithm takes a time in 0 (n 2)" does not change since algorithms cannot take negative time. The second difference is less striking but more important.8 For Further Reading The notation used in this chapter is not universally accepted. Unfortunately.Sec. since it can lead an incautious reader astray. the notation becomes difficult to handle. Why ? When we want the equivalent of this definition of 0 (f (n)) we write ±O (f (n)). and because it makes O asymmetric. Using this definition.1. Problem 2. Notice the quantifier reversal.23. with this definition. On the other hand. Furthermore.IR 1 (3cclR+)(2noEIN)(Vn ? n0)[It(n)I <_ cf(n)] 1 where It(n)I denotes (here only) the absolute value of t(n). 2. The most striking is the widespread use of statements such as n 2 = O (n 3) where we would write n 2 E O (n 3). Prove that g(n) e O(n!).13 is neatly avoided. With this definition we say that the execution time of some algorithm is of the order of f (n) (or is 0 (f (n))) rather than saying it is in the order of f (n). This corresponds more closely to our intuitive idea of what a lower bound on the performance of an algorithm should look like.1. Use of such "one-way equalities" (for one would not write O (n3) = n2) is hard to defend except on historical grounds.1 Asymptotic Notation g(n) = 51 a ifn=1 bnk + ng (n . You may encounter three major differences in other books. in particular because Q thus defined is not transitive. one would write n 3 .1) otherwise.f(n)) _ { t :IN .

where T is an array of n integers such that 0 <. Consider the selection sorting algorithm given in Section 1. the complete algorithm takes a time not greater than d + 7-11 [c + b + a (n -i )]. intuition. This figure gives us the exact order of the execution time of the complete algorithm. and finally. The following example shows. Often a first analysis gives rise to a complicated-looking function.Analysing the Efficiency of Algorithms 52 Chap. It is often sufficient to choose some instruction in the algorithm as a barometer and to count how many times this instruction is executed. Details like the initialization of the loops are rarely considered explicitly. Choosing a barometer. The next step is to simplify this function using the asymptotic notation as well as the techniques explained in Section 2. where b is a second constant introduced to take account of the time spent initializing the loop. for a fourth constant d. We can simplify this expression to 2 n 2 + (b +c -a /2)n + (d -c -b ). one possible barometer is the test in the inner loop. where c is a third constant.1)/ 2 times when n items are sorted.i for every i <. which is executed exactly n (n . Let s be the sum of the elements of T.0 for i . from which we conclude that the algorithm takes a time in 0(n2). One trip round the outer loop is therefore bounded above by c +b + a (n-i ).2. When an algorithm includes several nested loops. as is the case with selection sort. It is largely a question of judgement.2 ANALYSIS OF ALGORITHMS There is no magic formula for analysing the efficiency of an algorithm. Most of the execution time is spent carrying out the instructions in the inner loop.1 to n do for j f. Consider the following algorithm (which is reminiscent of the countsort algorithm discussed in Section 10. 0 In this first example we gave all the details of our argument. Example 2.2. involving summations or recurrences. including the implicit control statements for this loop.2.3. and experience. provided that the time taken to execute the chosen instruction can itself be bounded above by a constant. that such simplifications should not be made incautiously. Selection sort.4. A similar analysis for the lower bound shows that in fact it takes a time in O(n 2). 2 2. Example 2. Here are some examples.1 to T [i ] do k +-k +T[j] .1. How much time does the algorithm take? .1): k E. However. The time taken by each trip round the inner loop can be bounded above by a constant a. The complete execution of the inner loop for a given value of i therefore takes at most a time b + a (n -i ). In the selection sort example. any instruction of the inner loop can usually be used as a barometer.T[i] <. however. there are cases where it is necessary to take account of the implicit control of the loops.n.
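Returning for a moment to selection sort, the barometer idea is easy to check experimentally. The following Python sketch (our own instrumentation, not part of the text) counts the comparisons between elements; the counter always finishes at n(n-1)/2, the figure used in the analysis above.

    def selection_sort_comparisons(T):
        # selection sort instrumented with a counter for the barometer instruction,
        # namely the comparison between elements in the inner loop
        n = len(T)
        count = 0
        for i in range(n - 1):
            minj = i
            for j in range(i + 1, n):
                count += 1
                if T[j] < T[minj]:
                    minj = j
            T[i], T[minj] = T[minj], T[i]
        return count

    print(selection_sort_comparisons([5, 2, 9, 1, 7]))    # 10, that is, 5*4/2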

We use the comparison "x < T [ j ] " as our barometer. where the execution time often depends on both the number of vertices and the number of edges.3. As Section 1. The time that this algorithm takes to sort n elements depends on their original order.2 Analysis of Algorithms 53 For each value of i the instruction " k . The worst case arises when x is less than T [ j ] for every j between 1 and i . The total number of times it is executed is therefore E° T [i] = s times. However. The number of comparisons carried out between elements is a good measure of the complexity of most sorting algorithms..1. and they presuppose that we know a priori the probability distribution of the instances to be solved. When simplified. where the constant b represents the initialization time for the loop. This situa- tion is typical of algorithms for handling graphs. as we shall see in Chapter 10.k + T [ j ] " is executed T [i ] times. This can happen for . To execute the inner loop completely for a given value of i therefore takes a time b + aT [i].Sec. where c is a new constant. for yet another constant d.1 ] . Insertion sort. analyses of average behaviour are usually harder than analyses of the worst case. Let a be the time taken by one trip round the inner loop. With a little experience the same algorithm can be analysed more succinctly. T [i -2] .2. this expression yields (c +b )n + as + d . We now give an example of analysis of the average behaviour of an algorithm. The detailed analysis of this algorithm is as follows. T [ 1 ] before we leave the while loop because j = 0. the complete algorithm takes a time d + J:i" 1(c + b + aT [i]). Coming back to our algorithm. The time therefore depends on two independent parameters n and s and cannot be expressed as a function of just one of these variables. Finally. we would conclude that the algorithm takes a time in the exact order of s. If indeed this instruction could serve as a barometer. . The instruction in the inner loop is executed exactly s times. given in Section 1. The problem arises because we can only neglect the time spent initializing and controlling the loops provided we include something each time the loop is executed.4 points out. Thus the algorithm makes i -1 comparisons.4.1. nor the comparison j>0". Let x = T[i]. 2. Suppose that T [i] = 1 whenever i is a perfect square and that T [i] = 0 otherwise. Problem 2. the time taken to execute one trip round the outer loop is c + b + aT [i ]. In this case s = [I J. To this we must add n to take account of the control of the outer loop and of the fact that the inner loop is initialized n times. This time is not zero when T[i]=O.) Suppose for a moment that i is fixed.7 says that we can express its execution time in asymptotic notation in two ways : O(n +s) or O(max(n . the algorithm clearly takes a time in 11(n) since each element of T is considered at least once. Consider the insertion sorting algorithm Example 2. (Here we do not count the implicit comparisons involved in control of the for loop.. A simple example is sufficient to convince us that this is not the case. Next. s )). since in this case we have to compare x to T [i . as in the algorithm. . including the time spent on loop control. The total time taken by the algorithm is therefore in O(n+s).

the n th term of the harmonic series. k k=I _ (i -1)(i+2) _ i + I _ I 2i 2 i These events are independent for different values of i. Notice that selection sorting systematically makes the same number of comparisons between elements that insertion sorting makes in the worst case. The average number of comparisons made for a given value of i is therefore i-2 ci = .. . T [2]. as shown in Problem 2. For a given value of i. The average number of comparisons made by the algorithm when sorting n items is therefore " " Eci=z i=2 i+1 2 i=2 - -1 i n2+3n -H" 4 E 0(n2).17. Although the algorithm takes a time in c2(n2) both on the average and in the worst case.1 comparisons will be carried out. suppose that the n items to be sorted are all distinct and that each permutation of these items has the same probability of occurrence.Analysing the Efficiency of Algorithms 54 Chap. T[i] < T [i -1 ] is false at the outset. .k <. 2 every value of i from 2 to n when the array is initially sorted in descending order. since this happens both when x < T [ 1 ] and when T [ 1 ] S x < T [2].1 . but this number is still in O(n2). When we analyse an algorithm.1)/ 2 E O(n 2). If i and k are such that 1 <.i. In the worst case insertion sorting thus takes a time in O(n2). The total number of comparisons is therefore It 2(i . T [i] is 1/i because this happens for [n] (i -1)! (n -i )! = n!/i of the n! possible permutations of n elements. . we often have to evaluate the sum of arithmetic. . a time in 0 (n) is sufficient for an infinite number of instances.1. Here H" = E i "_ I i. is negligible compared to the dominant term n 2/4 because H" E O(log n). and other series. . To determine the time needed by the insertion sorting algorithm on the average. On the other hand. geometric.. the probability that T[i] is the kth largest element among T [1].1) = n (n . The insertion sorting algorithm makes on the average about half the number of comparisons that it makes in the worst case. and the first comparison x < T [j ] gets us out of the while loop...2(i-1)+ Y. T [i -1]. T [2]. The same probability applies to any given number of comparisons up to i-2 included. With probability 1/i . the probability is 2/i that i . T[i] can therefore be situated with equal probability in any position relative to the items T [I].

E2 k lg(n/2k) < 2dllg(n/2d i=1 1) k=O (by Problem 2.1.2. n.Sec.lg n and d . therefore k >.2j. ? 2jr_1 for 1 < t <. Consequently n >_Jm >2Jm-I?4Jm-2? . notice first that for any k ? 0 2--.. But it is impossible for k (and thus j ) to exceed n.1 + lg (n / i ). = i. which implies that m <. But d = Llg (n / 2)] implies that d + 1 <. Denote by j.4. 55 Prove that for any positive integers n and d d k=o Rather than simply proving this formula by mathematical induction. Obviously j.2 Analysis of Algorithms Problem 2.. This shows that j. X lg(n/i)<-2klg(n/2k).k " on the t th trip round the loop. As a barometer we use the instructions in the repeat loop of the algorithm used to sift down a node. Making a heap.2. so that this can be done in a time in 0 (n).4: this algorithm constructs a heap starting from an array T of n items. the value of j after execution of the assignment "j . 2. The total number of trips round the repeat loop when constructing a heap can now be bounded above by Ln/2] (I +lg(n/i)). Example 2. % =GA The interesting part of the sum (*) can therefore be decomposed into sections corresponding to powers of 2. then at the end of the (t -1)st trip round the loop we had j # k .m. Let m be the largest number of trips round the loop that can be caused by calling sift-down (T. (*) To simplify this expression. Moreover.m. >2m-li Thus 2m n / i.9. Ln/2i d E lg(n/i) <. Since any algorithm for constructing a heap must look at each element of the array at least once. Hence Ln/2J lg(n/i)<-3n From (*) we thus conclude that Ln / 2j + 3n trips round the repeat loop are enough to construct a heap. Consider the "make-heap " algorithm given at the end of Section 1. Let d = Llg (n / 2)]. we obtain our . try to see how you might have discovered it for yourself.1).I > lg (n / 8). i). if 1 < t <.2.

additions and multiplications are considered to be elementary operations. can be used to conclude that t (k) E O (2k).4 we saw another algorithm for making a heap (slow-make-heap ). In Section 1. most of the work done by the algorithm consists of calling itself recursively n times to work on (n -1) x (n -1) matrices.3).2. what are the best and the worst ways to arrange the elements initially insofar as the execution time of the algorithm is concerned? Example 2. Assume k >.. When n is greater than 1.56 Analysing the Efficiency of Algorithms Chap.2. 1) { T is sorted) What is the order of the execution time required by this algorithm in the worst case ? * Problem 2. Problem 2. i -1].3.5.. Analyse the worst case for this algorithm and compare it to the algorithm analysed in Example 2. In order to construct the heap. which is in O(n) since 2UgnJ <. Let t(n) be the time taken by some implementation of this algorithm working on an n x n matrix.2. n ]) IT is an array to be sorted) make-heap (T) for i F n step -1 to 2 do exchange T [1] and T [i ] sift-down (T [1. Analysis of heapsort. The algorithm then sifts the root down a path whose length is at most k. We now analyse the algorithm derived from the recursive definition of a determinant (Section 1.7. A different approach yields the same result. . We therefore ignore for the time being the problems posed by the fact that the size of the operands can become very large during the execution of the algorithm. the algorithm first transforms each of the two subtrees attached to the root into heaps of height at most k . In our analysis. both in the worst case and on the average.2. hence it can be built in at most t (Llg n J) steps. Williams invented the heap to serve as the underlying data structure for the following sorting algorithm.9.3.2.2. But a heap containing n elements is of height Llg nj. We thus obtain the asymptotic recurrence t (k) E 2t (k -1) +O (k). Recursive calculation of determinants.5. Find the exact order of the execution time for Williams's heapsort. For a given number of elements.n. Let t (k) stand for the time needed to build a heap of height at most k in the worst case.2. which takes a time in the order of k in the worst case. The techniques of Section 2.3. procedure heapsort (T [ 1 .1 (the right hand subtree could be of height k . Problem 2.2). in particular Example 2.4.4. 2 final result that the construction of a heap of size n can be carried out in a time in O(n).

2. n) while m > 0 do t . Analyse the algorithm again. Analysis of Euclid's algorithm. By Problem 2. this does not affect the fact that the complete algorithm takes a time in O(n!). For each integer i <.k. n >.5). is in O(n3). then 1 <_ n / m < 2. Assume that you know how to add two integers of size n in a time in O(n) and that you can multiply an integer of size m by an integer of size n in a time in O(mn). The values of mi and ni are defined by the following equations for 1 <. it is always true that n modm <n12. Ifm<_n/2. * Problem 2.n mod m n <.2.22 the algorithm therefore takes a time in O(n!) to calculate the determinant of an n x n matrix.Sec.2.k.m m -t return n We first show that for any two integers m and n such that n >_ in. excluding the time taken by the recursive calls. If m > n / 2.7.2. function Euclid (m. where mo and no are the initial values of m and n : ni =mi_i mi =ni_i modmi_1. 2.2 Analysis of Algorithms 57 Besides this.2. however.6. Recall that Euclid's algorithm calculates the greatest common divisor of two integers (Section 1. the matrices for the recursive calls have to be set up and some other housekeeping done. Example 2. This gives us the following asymptotic recurrence : t (n) E nt(n .6.1) + O(n 3 ).i <. In particular. Let k be the number of trips round the loop made by the algorithm working on the instance < m. let ni and mi be the values of n and m at the end of the i th trip round the loop. Mk = 0 causes the algorithm to ter- minate and mi > 1 for every i < k. and so Ln / m] = 1. .5 supposes that the time needed to compute a determinant.5.1.1. Example 2. By Problem 2.then (n mod m)<m <-n/2. which means that n modm =n -m < n -n/2=n12.4). Show that this time can be reduced to O(n).22. taking account this time of the fact that the operands may become very large during execution of the algorithm. which takes a time in O(n2) for each of the n recursive calls if we do this without thinking too much about it (but see Problem 2. Problem 2.

still not taking account of the large size of the operands involved.b. the recurrence looks so like the one used to define the Fibonacci sequence that it is tempting to suppose that t(n) must be in 0(f ). * Problem 2. . The algorithm fib 1 therefore takes a time in n th term of the Fibonacci sequence. 0(4") to calculate the Using constructive induction. Let t(n) be the time taken by some implementation of this algorithm working on the integer n.bf . and so m o >. we have mi =n. On the other hand. for appropriate constants a and b. its size is in 0(n lg o) = E) (n). This time is therefore in 0(n).2d. The case when k is even is handled similarly. Suppose for the moment that k is odd.c Problem 2.2. Once again.1. and therefore the time required by the algorithm. Finally. .8. It is clear that the algorithm fib2 takes a time equal to a + bn on any instance n . We give without explanation the corresponding asymptotic recurrence: t(n) E t(n-1)+t(n-2)+O(1). In conclusion.2._1 mod m. b.2. (Since the value of f is in O(4").9. . the number of trips round the loop made by Euclid's algorithm working on the integers in and n. for appropriate constants a. Define d by k = 2d + 1. and give values for these constants. take into account that we need a time in O(n) to add two integers of size n.5. Prove that the algorithm fibl takes a time in even if we Problem 2.7. it is easy to use this technique to find three real positive constants a. > m. But recall that Mk-1 ? 1. and c.. < mo/2d. Prove that the worst case for Euclid's algorithm arises when we calculate the greatest common divisor of two consecutive numbers from the Fibonaccisequence.8. constructive induction cannot be used directly to find a constant d such that t(n) <. provided the instructions in the loop can be considered as elementary. However. Using the preceding observation.) Example 2.1 + 21g m o . and c such that a f. are in 0 (log m). Analysis of the algorithm fibl. Then mk_1 < mk_3/2 < Mk-5/4 < . remembering that m 1 = no mod m o < m 0. Analysis of the algorithm fib2.2. We now analyse the algoExample 2. b. k = 2d + 1 <._1 <n1_1/2=m1_2/2 for every i >.. for each i Chap.d.t(n) <.2.4. . as in Example 2. where 0 = (1+' )/ 2. . <.7.c for any positive integer n.Analysing the Efficiency of Algorithms 58 Clearly n.7. prove that af <. rithm fib l of Section 1.2. 2 1.t (n) <.

and so nn = 0.2 Analysis of Algorithms 59 What happens.7. which takes a time bounded above by ab (2k . It is obvious that nr = Lnr_1/ 2]<. For each of the first two trips round the loop.2.fk-2 .11. Let d be an appropriate constant to account for necessary initializations. < n 1/ 2t-1 <. Assume that addition of two integers of size n takes a time in O(n) and that multiplication of an integer of size n by an integer of size m takes a time in O(mn). Analysis of the algorithm fib3. of course. Example 2.2. The analysis of fib3 is relatively easy if we do not take account of the size of the operands. 2.n. be the value of n at the end of the t th trip . Then the time taken by ffb2 on an integer n > 2 is bounded above by n d + 2(c + 2a) + Y. in particular n 1 = Ln / 2j. To see this. Compare your result to that obtained in Example 2.n12' . Notice first that the values of i and j at the beginning of the k th trip round the for loop are respectively fk_2 and fk.1 (where we take f _1 = 1).7 we shall see a multiplication algorithm that can be used to improve the performance of the algorithm fib3 (Problem 4.5). But nn is a nonnegative integer.. plus some constant time c to carry out the assignments and the loop control. We conclude that the loop is executed at most m times. To evaluate the number of trips round the loop. ab (2k -1) = abn 2 + (d + 2c +4a . . Consequently nt 5 nr-1/2 5 nr-2/4 <_ .4ab) ..Sec.3. but not. which is the condition for ending the loop. that of fib2 (why not?). k=3 which is in 0 (n 2). if we take account of the size of the operands involved? Let a be a constant such that the time to add two numbers of size n is bounded above by an.7.t <_ m. Problem 2. If you find the result disappointing. take the instructions in the while loop as our barometer.2. Determine the exact order of the execution time of the algorithm fib3 used on an integer n. It is easy to see by symmetry that the algorithm takes a time in 0(n 2). Prove that the execution time of the algorithm fib3 on an integer n is in @(log n) if no account is taken of the size of the operands.10.-1/2 for every 2 <. the time is bounded above by c +2a. The k th trip round the loop therefore consists of calculating fk_2 + fk _ 1 and fk . look back at the table at the end of Section 1. Let m = I + LlgnJ. and let b be a constant such that the size of fn is bounded above by bn for every integer n ? 2. ** Problem 2. The preceding equation shows that nn 5 n/2m < 1. however. let n.9.8.5 and remember that the hidden constants can have practical importance! In Section 4.2. which implies that the algorithm fib3 takes a time in 0 (log n).1) for k >.

2.1 to N do set [i] F. Example 2.5).i while j is odd do j F. Find the exact order of its execution time. this for i -0ton do j F.2.. except when set [i ] = i.13. * Problem 2.10..0 . define the group of an element of rank r as G (r).9. 2 Consider the following algorithm : for i F-0ton do j F. it is clear that this algorithm takes a time in U(n) n 0 (n log n). and loop control can all be carried out at unit cost. N ] keeps the meaning given to it in algorithms find3 and merge3: set [i ] gives the parent of node i in its tree.2.j div 2 Show a relationship between this algorithm and the act of counting from 0 to n + 1 in binary. Prove your answer. N ] in algorithm merge3 : rank [i] denotes the rank of node i (see Section 1. Chap. which indicates that i is the root of its tree. Analysis of disjoint set structures. The array set [1 . Their purpose will be explained later.Analysing the Efficiency of Algorithms 60 Problem 2.12.j div 2 Supposing that integer division by 2.9. The analysis of these algorithms is the most complicated case we shall see in this book. this is so when we look at the algorithms find3 and merge3 used to handle the disjoint set structures introduced in Section 1. assignments. We also introduce a strictly increasing function F : IN -* IN (specified later) and its "inverse" G : IN -* N defined by G (n) = min { m E IN I F (m) >_ n ). For instance.i rank [i] 0 cost [i ] .. time for the algorithm Answer the same question as in the preceding problem.i while j# 0 do j E.. It can happen that the analysis of an algorithm is facilitated by the addition of extra instructions and counters that have nothing to do with the execution of the algorithm proper. N ] plays the role of height [ 1 . Finally.0 fori F. The algorithms become procedure init (initializes the trees) global <. N J. The array rank [ 1 . We begin by introducing a counter called global and a new array cost [ 1 .5.

a else if rank [a] > rank [b] then set [b] F a else set (a] b With these modifications the time taken by a call on the procedure find can be reckoned to be in the order of I plus the increase of global + EN cost [i] occasioned by the call.r i-j return r procedure merge (a. . is in N O (N + n + global + cost [i ]). it never becomes a root thereafter and its rank no longer changes . 2.set [i ] set [i] F. once an element ceases to be the root of a tree.x while i * r do if G (rank [i]) < G (rank [set [i]]) or r = set [i] then global E. the following remarks are relevant : 1. In order to obtain an upper bound on these values. including initialization. The time required for a call on the procedure merge can be bounded above by a constant. 2.cost [i ] + 1 j . b) { merges the sets labelled a and b we suppose that a b [ if rank [a] = rank [b] then rank [a] t.rank [a] + 1 set[b] f. the rank of a node that is not a root is always strictly less than the rank of its parent .2 61 function find (x) { finds the label of the set containing object x I r -x while set [r r do r F.global + 1 else cost [i] F. i=1 where global and cost [i] refer to the final values of these variables after execution of the sequence. Therefore the total time required to execute an arbitrary sequence of n calls on find and merge.set [r] { r is the root of the tree) i E.Analysis of Algorithms Sec.


3. the rank of an element never exceeds the logarithm (to the base 2) of the number of elements in the corresponding tree;

4. at every moment and for every value of k, there are not more than N/2^k elements of rank k; and

5. at no time does the rank of an element exceed ⌊lg N⌋, nor does its group ever exceed G(⌊lg N⌋).

Remarks (1) and (2) are obvious if one simply looks at the algorithms. Remark (3) has a simple proof by mathematical induction, which we leave to the reader. Remark (5) derives directly from remark (4). To prove the latter, define subk(i) for each element i and rank k : if node i never attains rank k, subk(i) is the empty set; otherwise subk(i) is the set of nodes that are in the tree whose root is i at that precise moment when the rank of i becomes k. (Note that i is necessarily a root at that moment, by remark (1).) By remark (3), subk(i) ≠ ∅ implies #subk(i) ≥ 2^k. By remark (2), i ≠ j implies subk(i) ∩ subk(j) = ∅. Hence, if there were more than N/2^k elements i such that subk(i) ≠ ∅, there would have to be more than N elements in all, which proves remark (4).

The fact that G is nondecreasing allows us to conclude, using remarks (2) and (5), that the increase in the value of global caused by a call on the procedure find cannot exceed 1 + G(⌊lg N⌋). Consequently, after the execution of a sequence of n operations, the final value of this variable is in O(1 + nG(⌊lg N⌋)). It only remains to find an upper bound on the final value of cost[i] for each element i in terms of its final rank.

Note first that cost[i] remains at zero while i is a root. What is more, the value of cost[i] only increases when a path compression causes the parent of node i to be changed. In this case the rank of the new parent is necessarily greater than the rank of the old parent by remark (2). But the increase in cost[i] stops as soon as i becomes the child of a node whose group is greater than its own. Let r be the rank of i at the instant when i stops being a root, should this occur. By remark (1) this rank does not change subsequently. Using all the preceding observations, we see that cost[i] cannot increase more than F(G(r)) - F(G(r) - 1) - 1 times. We conclude from this that the final value of cost[i] is less than F(G(r)) for every node i ∈ final(r), where final(r) denotes the set of elements that cease to be a root when they have rank r ≥ 1 (while, on the other hand, cost[i] remains at zero for those elements that never cease to be a root or that do so when they have rank zero). Let K = G(⌊lg N⌋) - 1. The rest is merely manipulation.
Σ(i=1..N) cost[i]  =  Σ(g=0..K) Σ(r=F(g)+1..F(g+1)) Σ(i ∈ final(r)) cost[i]
                   ≤  Σ(g=0..K) Σ(r=F(g)+1..F(g+1)) Σ(i ∈ final(r)) F(G(r))
                   ≤  Σ(g=0..K) Σ(r=F(g)+1..F(g+1)) (N/2^r) F(g+1)
                   ≤  N Σ(g=0..K) F(g+1) / 2^F(g)

It suffices therefore to put F(g+1) = 2^F(g) to balance global and Σ(i=1..N) cost[i], and so to obtain Σ(i=1..N) cost[i] ≤ N G(⌊lg N⌋). The time taken by the sequence of n calls on find and merge with a universe of N elements, including the initialization time, is therefore in

O(N + n + global + Σ(i=1..N) cost[i]) ⊆ O(N + n + nG(⌊lg N⌋) + NG(⌊lg N⌋))
                                      = O(max(N, n)(1 + G(⌊lg N⌋))).
Now that we have decided that F(g+1) = 2^F(g), with the initial condition F(0) = 0, what can we say about the function G? This function, which is often denoted by lg*, can be defined by

G(N) = lg*N = min{ k | lg lg ... lg N ≤ 0 },  where the function lg is applied k times.

The function lg* increases very slowly: lg*N ≤ 5 for every N ≤ 65,536 and lg*N ≤ 6 for every N ≤ 2^65,536. Notice also that lg*N - lg*(⌊lg N⌋) ≤ 2, so that lg*(⌊lg N⌋) ∈ O(lg*N). The algorithms that we have just analysed can therefore execute a sequence of n calls on find and merge with a universe of N elements in a time in O(n lg*N), provided n ≥ N, which is to most intents and purposes linear.
This bound can be improved by refining the argument in a way too complex to
give here. We content ourselves with mentioning that the exact analysis involves the
use of Ackermann's function (Problem 5.8.7) and that the time taken by the algorithm
is not linear in the worst case.
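The following Python sketch (ours, not the book's code; the array set is renamed parent because set is a Python built-in, and the small driver at the end is purely illustrative) implements find with path compression and merge by rank as described above.

    import random

    def init(N):
        # parent[i] is the parent of node i; a root is its own parent
        return list(range(N)), [0] * N            # parent, rank

    def find(parent, x):
        # locate the root, then compress the path: every node met is hung directly on the root
        r = x
        while parent[r] != r:
            r = parent[r]
        while x != r:
            parent[x], x = r, parent[x]
        return r

    def merge(parent, rank, a, b):
        # union by rank of the two roots a and b (a != b assumed)
        if rank[a] == rank[b]:
            rank[a] += 1
            parent[b] = a
        elif rank[a] > rank[b]:
            parent[b] = a
        else:
            parent[a] = b

    # illustrative driver only: random unions over a universe of N elements
    N = 1000
    parent, rank = init(N)
    for _ in range(5 * N):
        x, y = random.randrange(N), random.randrange(N)
        rx, ry = find(parent, x), find(parent, y)
        if rx != ry:
            merge(parent, rank, rx, ry)
    print(len({find(parent, i) for i in range(N)}), "tree(s) remain")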

Figure 2.2.1. The towers of Hanoi. [illustration not reproduced]

Example 2.2.11. The towers of Hanoi.


It is said that after creating the

world, God set on Earth three rods made of diamond and 64 rings of gold. These rings
are all different in size. At the creation they were threaded on one of the rods in order

of size, the largest at the bottom and the smallest at the top. God also created a
monastery close by the rods. The monks' task in life is to transfer all the rings onto
another rod. The only operation permitted consists of moving a single ring from one
rod to another, in such a way that no ring is ever placed on top of another smaller one.
When the monks have finished their task, according to the legend, the world will come
to an end. This is probably the most reassuring prophecy ever made concerning the
end of the world, for if the monks manage to move one ring per second, working night
and day without ever resting nor ever making a mistake, their work will still not be
finished 500,000 million years after they began!

The problem can obviously be generalized to an arbitrary number of rings. For
example, with n = 3, we obtain the solution given in Figure 2.2.1. To solve the general problem, we need only realize that to transfer the m smallest rings from rod i to
rod j (where 1 ≤ i ≤ 3, 1 ≤ j ≤ 3, i ≠ j, and m ≥ 1), we can first transfer the smallest m - 1 rings from rod i to rod 6 - i - j, next transfer the mth ring from rod i to rod j, and finally retransfer the m - 1 smallest rings from rod 6 - i - j to rod j. Here is a
formal description of this algorithm ; to solve the original instance, all you have to do
(!) is to call it with the arguments (64, 1, 2).

procedure Hanoi(m, i, j)
{ moves the m smallest rings from rod i to rod j }
    if m > 0 then Hanoi(m - 1, i, 6 - i - j)
                  write i "→" j
                  Hanoi(m - 1, 6 - i - j, j)
To analyse the execution time of this algorithm, let us see how often the instruction write, which we use as a barometer, is executed. The answer is a function of m,
which we denote e (m). We obtain the following recurrence :

e(m) =   1                 if m = 1
         2e(m - 1) + 1     if m > 1,

from which we find that e(m) = 2^m - 1 (see Example 2.3.4). The algorithm therefore takes a time in the exact order of 2^n to solve the problem with n rings.
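A direct Python transcription of the procedure (a sketch under the same conventions; the list of moves stands in for the write instruction) confirms the count e(m) = 2^m - 1 for small m.

    def hanoi(m, i, j, moves):
        # moves the m smallest rings from rod i to rod j, recording each move
        if m > 0:
            hanoi(m - 1, i, 6 - i - j, moves)
            moves.append((i, j))          # corresponds to the instruction: write i -> j
            hanoi(m - 1, 6 - i - j, j, moves)

    for m in range(1, 6):
        moves = []
        hanoi(m, 1, 2, moves)
        print(m, len(moves))              # prints 1, 3, 7, 15, 31, that is, 2^m - 1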
Problem 2.2.14. Prove that the algorithm of Example 2.2.11 is optimal in the sense that it is impossible with the given constraints to move n rings from one rod to another in less than 2^n - 1 operations.
* Problem 2.2.15.
Give a nonrecursive algorithm to solve this problem. (It is
cheating simply to rewrite the above algorithm using an explicit stack to simulate the
recursive calls.)


2.3 SOLVING RECURRENCES USING
THE CHARACTERISTIC EQUATION

We have seen that the indispensable last step when analysing an algorithm is often to
solve a system of recurrences. With a little experience and intuition such recurrences
can often be solved by intelligent guesswork. This approach, which we do not illustrate here, generally proceeds in four stages : calculate the first few values of the
recurrence, look for regularity, guess a suitable general form, and finally, prove by
mathematical induction that this form is correct. Fortunately there exists a technique
that can be used to solve certain classes of recurrence almost automatically.

2.3.1 Homogeneous Recurrences
Our starting point is the resolution of homogeneous linear recurrences with constant
coefficients, that is, recurrences of the form

a0 t_n + a1 t_{n-1} + ... + ak t_{n-k} = 0        (*)

where

i. the t_i are the values we are looking for. The recurrence is linear because it does not contain terms of the form t_i t_{i+j}, t_i^2, and so on;

ii. the coefficients a_i are constants; and

iii. the recurrence is homogeneous because the linear combination of the t_i is equal to zero.

After a while intuition may suggest we look for a solution of the form

t_n = x^n

where x is a constant as yet unknown. If we try this solution in (*), we obtain

a0 x^n + a1 x^(n-1) + ... + ak x^(n-k) = 0.

This equation is satisfied if x = 0, a trivial solution of no interest, or else if

a0 x^k + a1 x^(k-1) + ... + ak = 0.

This equation of degree k in x is called the characteristic equation of the recurrence (*).
Suppose for the time being that the k roots r1, r2, ..., rk of this characteristic equation are all distinct (they could be complex numbers). It is then easy to verify that any linear combination

t_n = Σ(i=1..k) c_i r_i^n

of terms r_i^n is a solution of the recurrence (*), where the k constants c1, c2, ..., ck
are determined by the initial conditions. (We need exactly k initial conditions to determine the values of these k constants.) The remarkable fact, which we do not prove
here, is that (*) has only solutions of this form.
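When the roots are distinct, this recipe can be carried out numerically. The sketch below (ours; it assumes NumPy is available and that the roots are simple) finds the roots of the characteristic polynomial and then solves the k × k linear system given by the k initial conditions. It is shown on the recurrence of Example 2.3.1, which follows.

    import numpy as np

    def solve_homogeneous(a, t0):
        # a  = [a0, a1, ..., ak] : coefficients of a0 t_n + a1 t_{n-1} + ... + ak t_{n-k} = 0
        # t0 = [t_0, ..., t_{k-1}] : the k initial conditions
        # returns the roots r_i and the constants c_i such that t_n = sum_i c_i * r_i**n
        r = np.roots(a)                               # roots of a0 x^k + a1 x^(k-1) + ... + ak
        V = np.vander(r, len(r), increasing=True).T   # row n holds (r_1^n, ..., r_k^n)
        c = np.linalg.solve(V, np.array(t0, dtype=complex))
        return r, c

    r, c = solve_homogeneous([1, -3, -4], [0, 1])     # t_n - 3 t_{n-1} - 4 t_{n-2} = 0, t_0 = 0, t_1 = 1
    print(r, c)    # the root 4 gets the constant 1/5 and the root -1 gets -1/5 (order may vary)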
Example 2.3.1.

Consider the recurrence

t_n - 3t_{n-1} - 4t_{n-2} = 0        n ≥ 2

subject to t0 = 0, t1 = 1.

The characteristic equation of the recurrence is

x^2 - 3x - 4 = 0
whose roots are -1 and 4. The general solution therefore has the form
t_n = c1 (-1)^n + c2 4^n.

The initial conditions give

c1 + c2 = 0          (n = 0)
-c1 + 4c2 = 1        (n = 1)

that is, c1 = -1/5, c2 = 1/5. We finally obtain

t_n = (1/5)[4^n - (-1)^n].
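A short numerical check of this formula (ours, not part of the text):

    t = [0, 1]
    for n in range(2, 12):
        t.append(3 * t[n - 1] + 4 * t[n - 2])          # t_n = 3 t_{n-1} + 4 t_{n-2}
    print(all(t[n] == (4 ** n - (-1) ** n) // 5 for n in range(12)))    # True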

Example 2.3.2. Fibonacci.

Consider the recurrence

t_n = t_{n-1} + t_{n-2}        n ≥ 2

subject to t0 = 0, t1 = 1.

(This is the definition of the Fibonacci sequence ; see Section 1.7.5.)

The recurrence can be rewritten in the form t_n - t_{n-1} - t_{n-2} = 0, so the characteristic equation is

x^2 - x - 1 = 0

whose roots are

r1 = (1 + √5)/2   and   r2 = (1 - √5)/2.

The general solution is therefore of the form

t_n = c1 r1^n + c2 r2^n.
The initial conditions give

c1 + c2 = 0              (n = 0)
r1 c1 + r2 c2 = 1        (n = 1)

from which it is easy to obtain

c1 = 1/√5,   c2 = -1/√5.

Thus t_n = (1/√5)(r1^n - r2^n). To show that this is the same as the result obtained by De Moivre mentioned in Section 1.7.5, we need to note only that r1 = φ and r2 = -φ^(-1).
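A quick numerical comparison of this closed form with the recurrence (our check, not in the text):

    from math import sqrt

    r1 = (1 + sqrt(5)) / 2
    r2 = (1 - sqrt(5)) / 2

    t = [0, 1]
    for n in range(2, 20):
        t.append(t[n - 1] + t[n - 2])
    print(all(round((r1 ** n - r2 ** n) / sqrt(5)) == t[n] for n in range(20)))    # True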

* Problem 2.3.1.

Consider the recurrence

t_n = 2t_{n-1} - 2t_{n-2}        n ≥ 2

subject to t0 = 0, t1 = 1.

Prove that t_n = 2^(n/2) sin(nπ/4), not by mathematical induction but by using the characteristic equation.

Now suppose that the roots of the characteristic equation are not all distinct. Let

p(x) = a0 x^k + a1 x^(k-1) + ... + ak

be the polynomial in the characteristic equation, and let r be a multiple root. For every n ≥ k, consider the nth degree polynomial defined by

h(x) = x [x^(n-k) p(x)]' = a0 n x^n + a1 (n-1) x^(n-1) + ... + ak (n-k) x^(n-k).

Let q(x) be the polynomial such that p(x) = (x-r)^2 q(x). We have that

h(x) = x [(x-r)^2 x^(n-k) q(x)]' = x [2(x-r) x^(n-k) q(x) + (x-r)^2 [x^(n-k) q(x)]'].

In particular, h(r) = 0. This shows that

a0 n r^n + a1 (n-1) r^(n-1) + ... + ak (n-k) r^(n-k) = 0,

that is, t_n = n r^n is also a solution of (*). More generally, if m is the multiplicity of the root r, then t_n = r^n, t_n = n r^n, t_n = n^2 r^n, ..., t_n = n^(m-1) r^n are all possible solutions of (*). The general solution is a linear combination of these terms and of the terms contributed by the other roots of the characteristic equation. Once again there are k constants to be determined by the initial conditions.
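A two-line check (ours) illustrates the claim for a double root: the recurrence t_n = 4t_{n-1} - 4t_{n-2} has characteristic equation (x - 2)^2 = 0, and n 2^n does satisfy it.

    ok = all(n * 2 ** n - 4 * (n - 1) * 2 ** (n - 1) + 4 * (n - 2) * 2 ** (n - 2) == 0
             for n in range(2, 20))
    print(ok)    # True: t_n = n * 2^n solves t_n - 4 t_{n-1} + 4 t_{n-2} = 0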
Example 2.3.3.

Consider the recurrence

t_n = 5t_{n-1} - 8t_{n-2} + 4t_{n-3}        n ≥ 3

subject to t0 = 0, t1 = 1, t2 = 2.

The recurrence can be written

t_n - 5t_{n-1} + 8t_{n-2} - 4t_{n-3} = 0
and so the characteristic equation is

x^3 - 5x^2 + 8x - 4 = 0

or (x-1)(x-2)^2 = 0.
The roots are 1 (of multiplicity 1) and 2 (of multiplicity 2). The general solution
is therefore
t_n = c1 1^n + c2 2^n + c3 n 2^n.

The initial conditions give

c1 +  c2           = 0        (n = 0)
c1 + 2c2 + 2c3 = 1        (n = 1)
c1 + 4c2 + 8c3 = 2        (n = 2)

from which we find c1 = -2, c2 = 2, c3 = -1/2. Therefore

t_n = 2^(n+1) - n 2^(n-1) - 2.
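A short check of this solution against the recurrence (ours, not part of the text):

    def closed(n):
        # t_n = 2^(n+1) - n * 2^(n-1) - 2, the solution just obtained
        return 2 ** (n + 1) - n * 2 ** (n - 1) - 2

    t = [0, 1, 2]
    for n in range(3, 15):
        t.append(5 * t[n - 1] - 8 * t[n - 2] + 4 * t[n - 3])
    print(all(closed(n) == t[n] for n in range(15)))    # True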

2.3.2 Inhomogeneous Recurrences
We now consider recurrences of a slightly more general form.

a0 t_n + a1 t_{n-1} + ... + ak t_{n-k} = b^n p(n)        (**)

The left-hand side is the same as (*), but on the right-hand side we have b^n p(n), where

i. b is a constant; and

ii. p(n) is a polynomial in n of degree d.
For example, the recurrence might be

t_n - 2t_{n-1} = 3^n.

In this case b = 3 and p(n) = 1, a polynomial of degree 0. A little manipulation allows us to reduce this example to the form (*). To see this, we first multiply the recurrence by 3, obtaining

3t_n - 6t_{n-1} = 3^(n+1).

If we replace n by n + 1 in the original recurrence, we get

t_{n+1} - 2t_n = 3^(n+1).
Finally, subtracting these two equations, we have

which can be solved by the method of Section 2. whereas the factor (x-3)2 is the result of our manipulation.2tn+l = -6th+I + 12th (n+5)3n+2 (n+7)3' 2 = -6(n+6)3n+1 .1. replace n in the recurrence by n +2.6. proceed as in the homogeneous case. Here is a second example. and c. replace n in the recurrence by n + 1 and then multiply by . we can show that to solve (**) it is sufficient to take the following characteristic equation : (aoxk+alxk-I+ .3. we obtain th+2 . Once this equation is obtained. The characteristic equation of this new recurrence is X3-8X2+21x-18=0 that is. Generalizing this approach.11) is given by .2t. The characteristic equation is x2-5x +6=0 that is.8th+1 + 21tn .-I = (n+5)3n The necessary manipulation is a little more complicated : we must a.Sec..4. The number of movements of a ring required in the Towers of Hanoi problem (see Example 2. Adding these three equations. th .18tH-I = 0.18tn_I = to+2 . whereas the factor (x-3) has appeared as a result of our manipulation to get rid of the right-hand side. (x-2)(x-3) = 0. Example 2. (x-2)(x-3)2 = 0.3. 2.3 Solving Recurrences Using the Characteristic Equation 69 to+1 -5th +6th_I =0. +ak)(x-b)d+1=0. we can see that the factor (x-2) comes from the left-hand side of the original recurrence. obtaining respectively 9th .2. Once again.. multiply the recurrence by 9 b. Intuitively we can see that the factor (x-2) corresponds to the left-hand side of the original recurrence.

The characteristic equation is therefore (x-2)(x-1) = 0 where the factor (x-2) comes from the left-hand side and the factor (x-1) comes from the right-hand side. to find a second initial condition we use the recurrence itself to calculate t1=2to+I=1. The roots of this equation are 1 and 2. Therefore C2 > 0. once we know that to = C I In + C22n we can already conclude that to E 8(2n). In the previous example.2t. -I = 1. it is therefore always the case that c 1 must be equal to -1. which is of the form (**) with b = 1 and p (n) = 1. 2 n>?1 subject to to = 0. For this it is sufficient to notice that to . there is no need to calculate the constants in the general solution. We finally have c1+ c2=0 n =0 c1+2c2=1 n=1 from which we obtain the solution to=2n-1.2tn -I =c1+C22n -2(c1+C22n-') Whatever the initial condition. we find 1 = to . and the conclusion follows.Analysing the Efficiency of Algorithms 70 tn=2tn_1+I Chap. so the general solution of the recurrence is =Ciln +C22n. We know that to = 0. In fact we can obtain a little more. is certainly neither negative nor a constant. to We need two initial conditions. If all we want is the order of to . a polynomial of degree 0. Substituting the general solution back into the original recurrence. since clearly to >.n. The recurrence can be written to . . the number of movements of a ring required..

. . . Example 2. we are always looking for a solution where to >. Solve tn =2tn_I+n +2" n >. +ak)(X-b1)d'+I(x-b2)°.3. we can conclude immediately that to must be in 0 (2"). 2..3 Solving Recurrences Using the Characteristic Equation 71 Problem 2. on the contrary! Why? Example 2. which is of the form (**) with b = 1 and p(n) = n.Sec. (***) where the b.2.3. and hence that they are all in O(2").. =0 which contains one factor corresponding to the left-hand side and one factor corresponding to each term on the right-hand side. Consider the recurrence to = 2tn -I + n. The general solution is to =c12" +C2In +C3nIn.+I . .1 subject to to = 0. .3.2tn -I = n. Conclude that all the interesting solutions of the recurrence must have c i > 0. A further generalization of the same type of argument allows us finally to solve recurrences of the form aotn +aitn . are distinct constants and the p. In the problems that interest us. If this is so. and to solve the problem as before. It suffices to write the characteristic equation (aoxk +aIxk . prove that in the preceding example C2 = -2 and C3 = -1 whatever the initial condition.3.5. This can be written to . The characteristic equation is therefore (x-2)(x-1)2 = 0 with roots 2 (multiplicity 1) and 1 (multiplicity 2).I + . +aktn-k = b?p1(n)+bnP2(n)+ .. The recurrence can be written t -2tn_I =n +2n. Problem 2. a polynomial of degree 1.0 for every n. . There is nothing surprising in the fact that we can determine one of the constants in the general solution without looking at the initial condition .6.3. (n) are polynomials in n respectively of degree d. By substituting the general solution back into the recurrence..I+ .

In the following examples we write T (n) for the term of a general recurrence. C2. We could obviously have concluded that t o e O (n 2") without calculating the constants.3. cm . How many constraints on these constants can be obtained without using the initial conditions ? (See Problems 2.) 2.3. The characteristic equation is (x-2)(x-1)2(x-2) = 0. Prove that all the solutions of this recurrence are in fact in O(n 2").3. p2(n) = 1. This can be written . which has roots 1 and 2.3 and 2. pi(n) = n .1) + 2k .Analysing the Efficiency of Algorithms 72 Chap. The degree of p 1(n) is 1.4.3. =-2-n +2n+1+n2".3 Change of Variable It is sometimes possible to solve more complicated recurrences by making a change of variable. If the characteristic equation of the recurrence (***) is of degree m =k +(d1+1)+(d2+1)+ . C3 and C4 from =0 n =0 c1+ c2+2c3+ 2c4= 3 n=1 c1 +2c2+4c3+ 8c4= 12 c1+3c2+8c3+24c4=35 n =2 n =3 + c3 Cl arriving finally at t. 2 which is of the form (***) with b 1 = 1. and tk for the term of a new recurrence obtained by a change of variable. then the general solution contains m constants C 1 . b2 = 2. Problem 2.5. t3 = 35. Here is how we can find the order of T (n) if n is a power of 2 and if T(n)=4T(n12)+n n > 1. Using the recurrence. C 2 . and p2(n) is of degree 0.3.4. Example 2.3. we can calculate t 1 = 3. Problem 2. regardless of the initial condition.7. both of multiplicity 2. The general solution of the recurrence is therefore of the form to =C1I" +C2nI" +C32" +Cgn2". Replace n by 2k (so that k = lg n) to obtain T (2k) = 4T(2'. We can now determine C1 . t2 = 12.

Example 2. 2. Example 2. Hence. and so tk =c12k +C2k2k +C3k22k T(n)=cin +c2nlgn +c3nlg2n.8. The characteristic equation is (x-4)2 = 0.Sec.3. Proceeding in the same way. We know how to solve this new recurrence : the characteristic equation is (x-4)(x-2) = 0 and hence tk = c 14k + c 22k . T (n) is therefore in 0 (n 2 1 n is a power of 2). Here is how to find the order of T (n) if n is a power of 2 and if T (n) = 2T (n / 2) + n lg n n > 1. Putting n back instead of k. we obtain successively T(2k)=4T(2k-1)+4k tk =4tk-I +4k. we find T(n)=C1n2+C2n.3 Solving Recurrences Using the Characteristic Equation 73 tk =4tk-1+2k if tk = T (2k) = T (n). As before. and so tk =c14k +c2k4k T(n) =c1n2+C2n2lgn. we obtain T(2k)=2T(2k-1)+k2k tk =2tk-I+k2k The characteristic equation is (x-2)3 = 0. T (n) E 0 (n log2n I n is a power of 2).3. . Thus T (n) e O (n 2log n I n is a power of 2).9. Here is how to find the order of T (n) if n is a power of 2 and if T(n)=4T(n/2)+n2 n > 1.

Example 2.3.10. We want to find the order of T(n) if n is a power of 2 and if

    T(n) = 3T(n/2) + cn    (c is constant, n = 2^k > 1).

We obtain successively

    T(2^k) = 3T(2^{k-1}) + c 2^k
    t_k = 3t_{k-1} + c 2^k.

The characteristic equation is (x - 3)(x - 2) = 0, and so

    t_k = c1 3^k + c2 2^k
    T(n) = c1 3^{lg n} + c2 n

and hence, since a^{lg b} = b^{lg a},

    T(n) = c1 n^{lg 3} + c2 n.

Finally, T(n) is in O(n^{lg 3} | n is a power of 2).

Remark. In Examples 2.3.7 to 2.3.10 the recurrence given for T(n) only applies when n is a power of 2. It is therefore inevitable that the solution obtained should be in conditional asymptotic notation. In each of these four cases, however, it is sufficient to add the condition that T(n) is eventually nondecreasing to be able to conclude that the asymptotic results obtained apply unconditionally for all values of n. This follows from Problem 2.1.20 since the functions n^2, n^2 log n, n log^2 n and n^{lg 3} are smooth.

* Problem 2.3.6. Let T : IN -> IR+ be an eventually nondecreasing function such that

    T(n) = aT(n/b) + cn^k    n > n0

when n/n0 is a power of b. The constants n0 >= 1, b >= 2 and k >= 0 are integers, whereas a and c are positive real numbers. Show that the exact order of T(n) is given by

    T(n) in Θ(n^k)           if a < b^k
    T(n) in Θ(n^k log n)     if a = b^k
    T(n) in Θ(n^{log_b a})   if a > b^k.

Rather than proving this result by constructive induction, obtain it using the techniques of the characteristic equation and change of variable. This result is generalized in Problem 2.3.13.

Problem 2.3.7. Solve the following recurrence exactly for n a power of 2:

    T(n) = 2T(n/2) + lg n    n >= 2

subject to T(1) = 1. Express your solution as simply as possible using the Θ notation.

Problem 2.3.8. Solve the following recurrence exactly for n of the form 2^{2^k}:

    T(n) = 2T(sqrt(n)) + lg n    n >= 4

subject to T(2) = 1. Express your solution as simply as possible using the Θ notation.

2.3.4 Range Transformations

When we make a change of variable, we transform the domain of the recurrence. It is sometimes useful to transform the range instead in order to obtain something of the form (***). We give just one example of this approach. We want to solve

    T(n) = n T^2(n/2)    n > 1

subject to T(1) = 6 for the case when n is a power of 2. At first glance, none of the techniques we have seen applies to this recurrence since it is not linear, and furthermore, one of the coefficients is not constant. The first step is a change of variable: put t_k = T(2^k), which gives

    t_k = 2^k t^2_{k-1}    k > 0

subject to t_0 = 6. To transform the range, we create a new recurrence by putting V_k = lg t_k, which yields

    V_k = k + 2V_{k-1}    k > 0

subject to V_0 = lg 6. The characteristic equation is (x - 2)(x - 1)^2 = 0, and so

    V_k = c1 2^k + c2 1^k + c3 k 1^k.

From V_0 = 1 + lg 3, V_1 = 3 + 2 lg 3, and V_2 = 8 + 4 lg 3 we obtain c1 = 3 + lg 3, c2 = -2, and c3 = -1, and hence

    V_k = (3 + lg 3) 2^k - k - 2.

Finally, using t_k = 2^{V_k} and T(n) = t_{lg n}, we obtain

    T(n) = 2^{3n-2} 3^n / n.
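The closed form is easily confirmed by evaluating the original recurrence directly; here is a short Python check (an illustration only).

def T(n):
    # the recurrence T(n) = n * T(n/2)^2 with T(1) = 6, for n a power of 2
    return 6 if n == 1 else n * T(n // 2) ** 2

for k in range(6):
    n = 2 ** k
    assert T(n) == 2 ** (3 * n - 2) * 3 ** n // n    # the closed form obtained above
print("closed form agrees up to n = 32")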

9. Problem 2. 4. Define the function T : X -* R* by the recurrence T(n)= d if n =no aT (nlb)+ f(n) if nEX.3.7.2 are integers. a more general result is required (and the technique of the characteristic equation does not always apply).7).1. In some cases. and 4.Analysing the Efficiency of Algorithms 76 Chap.5 Supplementary Problems Problem 2.3. Prove that T (n) E O (n ig 3 ). 2 2. Recurrences arising from the analysis of divide-and-conquer algorithms (Chapter 4) can usually be handled by Problem 2. Let T AN 4 R* be an eventually nondecreasing function such that T(n) ST(Ln/2])+T([n/2])+T(1+[n/21)+cn n > no. Express your answer as simply as possible using the 0 notation. use Example 2.3. Let f : X . Problem 2. however.6. whereas a and d are real positive constants.1.3. (Multiplication of large integers: see Sections 1.R* be an arbitrary function.2.2 subject to T(1) = 1. Hint : observe that T (n) < 3 T (1 + [n / 21) + cn for n > n 0. * Problem 2. Solve the following recurrence exactly : tn =tn_I +tn_3-tn_4 n >>>4 subject to to = n for 0 5 n < 3.3. Solve the following recurrence exactly : to = to -1 + 2tn-2 .20 to conclude for T (n). Consider any constants c E R+ and no E N.11. Problem 2.3. and use problem 2.3. Solve the following recurrence exactly for n a power of 2: T (n) = 5T (n / 2) + (n lg n)2 n >.n>no . make the change of variable T '(n) = T (n+2). Express your answer as simply as possible using the O notation. Define X={n ENIlogb(n/no)EIN}={nENNI(3iEIN)[n=nob`I).10. Express your answer as simply as possible using the 0 notation.12.15n + 106 for 0!5 n :5 2.2tn-3 n >3 subject to to = 9n 2 .3.10 to solve for T '(n) when n is a power of 2.13. The constants no >-1 and b >.

Note that the third alternative includes f (n) E O(nP) by choosing q=1. Let q be any strictly positive real constant . If we set f (n o) = d (which is of no consequence for the definition of T ). iv. Prove or disprove that the third alternative can be generalized as follows : T(n) E O(f (n) log n) whenever there exist two strictly positive real constants ). Prove that i. It turns out that the simplest way to express T (n) in asymptotic notation depends on how f (n) compares to n P .3.3 Solving Recurrences Using the Characteristic Equation 77 Let p = loge a . The last alternative can be generalized to include cases such as f (n) E O(n P+q log n) or f (n) E O(n P+q / log n) . As a special case of the first alternative. Problem 2.15. n >2 . In what follows. then T (n) E < O(no) if f(n) E O(nP/(logn)1+q ) O(f (n) log n log log n) if f (n) E O(n P / log n) O(f(n)logn) if f(n) E O(nP(log n)q Off (n)) if f (n) E O(n p+q) 1) .3. Solve the following recurrence exactly : t = 1/(4-tn_1) n>1 subject tot 1 = 1/4. T (n) e O(n P) whenever f (n) E O (n' ) for some real constant r < p .Sec. If you disprove it. we also get T (n) E O(f (n)) if there exist a function g : X -* IR* and a real constant a> a such that f(n)EO(g(n)) and g(bn)>-ag(n) for all v. initial conditions a and b : T(n+2)=(1+T(n+l))/T(n) subject to T (O) = a. all asymptotic notation is implicitly conditional on n E=X. T (I) = b. 2. find the simplest but most general additional constraint on f (n) that q 1 <_ q 2 such that f (n) E O (n P (log n)q' 1) and f (n) E Q(n P (log n)' suffices to imply T (n) E O(f (n) log n).14. Solve the following recurrence exactly as a function of the Problem 2. W.) T (n) = E a' f (n l b') i=o ii. the value of T (n) is given by a simple summation when n EX logn(n In.

consult any book on mathematical analysis. for instance.20 are introduced by Brassard (1985).3. The more precise analysis making use of Ackermann's function can be found in Tarjan (1975. The analysis of disjoint set structures given in Example 2. 2 Solve the following recurrence exactly : T(n)='T(n/2)-zT(n/4)-1/n subject to T (1) = 1 and T (2) = 3/2. The book by Purdom and Brown (1985) presents a number of techniques for ana- lysing algorithms. The paper by Bentley. are explained in Lueker (1980). Haken.3 comes from Williams (1964). . including the characteristic equation and change of variable.2.2.10 is adapted from Hopcroft and Ullman (1973). Several techniques for solving recurrences. who also suggests that "one-way inequalities" should be abandoned in favour of a notation based on sets.16. Buneman and Levy (1980) and Dewdney (1984) give a solution to Problem 2. 1983). and Saxe (1980) is particularly relevant for recurrences occurring from the analysis of divide-and-conquer algorithms (see Chapter 4).15.Analysing the Efficiency of Algorithms 78 Problem 2. For information on calculating limits and on de l'Hopital's rule. Example 2. Knuth (1976) gives an account of its history and proposes a standard form for it.1.2. For a more rigorous mathematical treatment see Knuth (1968) or Purdom and Brown (1985). The main mathematical aspects of the analysis of algorithms can also be found in Greene and Knuth (1981).1 corresponds to the algorithm of Dixon (1981). Chap. Rudin (1953).1. Problem 2.4 REFERENCES AND FURTHER READING The asymptotic notation has existed for some while in mathematics: see Bachmann (1894) and de Bruijn (1961). n?3 El 2. Conditional asymptotic notation and its use in Problem 2.

the length of the path we have found. and an objective function that gives the value of a solution (the time needed to execute all the jobs in the given order. They are typically used to solve optimization problems : find the best order to execute a certain set of jobs on a computer. In the most common situation we have a set (or a list) of candidates : the jobs to be executed. a function that checks whether a particular set of candidates provides a solution to our problem.1 INTRODUCTION Greedy algorithms are usually quite simple. To solve our optimization problem. we look for a set of candidates constituting a solution that optimizes (minimizes or maximizes. the set of candidates that have already been used . ignoring questions of optimality for the time being . find the shortest route in a graph. a selection function that indicates at any time which is the most promising of the candidates not yet used . and so on). as the case may be) the value of the 79 . whether or not it is possible to complete the set in such a way as to obtain at least one solution (not necessarily optimal) to our problem (we usually expect that the problem has at least one solution making use of candidates from the set initially available).3 Greedy Algorithms 3. a function that checks whether a set of candidates is feasible. that is. or whatever . the nodes of the graph. and so on. this is the function we are trying to optimize.

we shall see in the following examples that at times there may be several plausible selection functions. and containing at least one coin of each type . if the enlarged set is still feasible. 0 . without worrying about the future. the candidate we tried and removed is never considered again. The selection function is usually based on the objective function . If the enlarged set of chosen candidates is no longer feasible. then the candidate we just added stays in the set of chosen candidates from now on. It never changes its mind : once a candidate is included in the solution. A greedy algorithm proceeds step by step. a solution : the total value of the chosen set of coins is exactly the amount we have to pay . representing for instance 1. sible number of coins. function greedy (C : set) : set { C is the set of all the candidates) S f. Then at each step.an element of C maximizing select(x) C-C\{x} if feasible (S u {x }) then S . the first solution found in this way is always optimal. Initially. we check whether the set now constitutes a solution to our problem. the procedure chooses the best morsel it can swallow.1. 5. However. We want to give change to a customer using the smallest posExample 3. they may even be identical.0 (S is a set in which we construct the solution) while not solution (S) and C # 0 do x . we try to add to this set the best remaining candidate. However. a feasible set : the total value of the chosen set does not exceed the amount to be paid the selection function : choose the highest-valued coin remaining in the set of candidates. it is never reconsidered. our choice being guided by the selection function.1. and 25 units.S u {x } if solution (S) then return S else return "there are no solutions" It is easy to see why such algorithms are called "greedy": at every step. 10. so that we have to choose the right one if we want our algorithm to work properly. once a candidate is excluded from the solution. When a greedy algorithm works correctly. it is there for good .Greedy Algorithms 80 Chap. 3 objective function. Each time we enlarge the set of chosen candidates. we remove the candidate we just added . The elements of the problem are the candidates: a finite set of coins. and the objective function : the number of coins used in the solution. the set of chosen candidates is empty.
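To make the schema concrete, here is one possible Python rendering of it, specialized to the coin-changing example (a sketch for illustration; it assumes an unlimited supply of coins worth 1, 5, 10 and 25 units).

def make_change(amount, coins=(25, 10, 5, 1)):
    # candidate coins are considered from the most valuable down (the selection
    # function); a partial solution is feasible as long as it does not exceed
    # the amount to be paid, and it is a solution when it reaches it exactly
    chosen = []
    for coin in coins:
        while sum(chosen) + coin <= amount:
            chosen.append(coin)
    return chosen if sum(chosen) == amount else None   # None: no solution found

print(make_change(63))    # [25, 25, 10, 1, 1, 1]: six coins

With the suggested coin values the greedy choice of the largest usable coin always leads to an optimal solution, but, as the following problems show, this is a property of these particular values rather than of the method.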

that the greedy algorithm no longer gives an optimal solution in every case if there also exist 12-unit coins. Moreover. Prove. Using integer division is also more efficient than proceeding by successive subtractions.1 Minimal Spanning Trees Let G = < N. 3.Sec. and the sum of the lengths of the edges in T is as small as possible.) Problem 3. a feasible set of edges is promising if it can be completed so as to form an optimal solution. 3. Prove that the partial graph < N.1. The graph < N. Let B C N be a strict subset of the nodes of G. We give two greedy algorithms to solve this problem. or if one type of coin is missing from the initial set. if the nodes of G represent towns. A > be a connected undirected graph where N is the set of nodes and A is the set of edges.1. Prove that with the values suggested for the coins in the preceding example the greedy algorithm will always find an optimal solution provided one exists.1.2. Let G = < N. (Instead of talking about length. In particular.2 GREEDY ALGORITHMS AND GRAPHS 3. T > formed by the nodes of G and the edges in T is a tree. an edge touches a given set of nodes if exactly one end of the edge is in the set. For instance. A > be a connected undirected graph where the length of each edge is given. Finally. and the cost of an edge (a. Show that it can even happen that the greedy algorithm fails to find a solution at all despite the fact that one exists. The following lemma is crucial for proving the correctness of the forthcoming algorithms.2 Greedy Algorithms and Graphs 81 * Problem 3. T > is called a minimal spanning tree for the graph G. a set of edges is a solution if it constitutes a spanning tree.2. Each edge has a given non-negative length. and it is feasible if it does not include a cycle. Obviously. Lemma 3. b } is the cost of building a road from a to b.1. by giving specific counterexamples. In this case the problem is to find a subset T whose total cost is as small as possible. The problem is to find a subset T of the edges of G such that all the nodes remain connected when only the edges in T are used. on the other hand. then a minimal spanning tree of G shows us how to construct at the lowest possible cost a road system linking all the towns in question. we can associate a cost to each edge. This problem has many applications. this change of terminology does not affect the way we solve the problem.2. It is obviously more efficient to reject all the remaining 25-unit coins (say) at once when the remaining amount to be represented falls below this value. the empty set is always promising since G is connected. In the terminology we have used for greedy algorithms. Let .

the total length of the edges in U' does not exceed the total length in U. there necessarily exists at least one other edge.2. when T is empty. (Initially. since e touches B. A cycle is created if we add edge e to U. cannot be in T. Let e be the shortest edge that touches B (or any one of the shortest if ties exist). we create exactly one cycle (this is one of the properties of a tree). A greedy algorithm selects the edges one by one in some given order.2.) The elements of T that are included in a given connected component form a minimal spanning tree for the nodes in this component. But since the length of e is by definition no greater than the length of e'. Let U be a minimal spanning tree of G such that T E_ U (such a U must exist since T is promising by assumption). so that T is then a minimal spanning tree for all the nodes of G. N\B B Figure 3. Kruskal's algorithm. The main difference between the various greedy algorithms to solve this problem lies in the order in which the edges are selected. In this cycle. If we now remove e'. each node of G forms a distinct trivial connected component. Each edge is either included in the set that will eventually form the solution or eliminated from further consideration. At the end of the algorithm only one connected component remains. Proof.1. which touches B. 3 T c A be a promising set of edges such that no edge in T touches B.see Figure 3. At every instant the partial graph formed by the nodes of G and the edges in T consists of several connected components. If e e U. say. edges are added to T. As the algorithm progresses.Greedy Algorithms 82 Chap. e'. Therefore U' is also a minimal spanning tree.1). there is nothing to prove. The set T of edges is initially empty. the cycle disappears and we obtain a new tree U' that spans G. that also touches B (otherwise the cycle could not close . we note that T c U' because the edge e'. Then T v f e } is promising. The initial set of candidates is the set of all the edges. and it includes e. Otherwise when we add the edge e to U. To complete the proof. .

31. we add it to T.3. (1.7).2.7) (1.4} (1.3} (41 (5} (6) 171 (1. (6. This minimal spanning tree is shown by the heavy lines in Figure 3.2. (3.2.2.5) rejected 7 14.21 11. . The proof. we examine the edges of G in order of increasing length.4}.7).2 Greedy Algorithms and Graphs 83 To build bigger and bigger connected components. the two connected components now form only one component. (4.Sec. (4. (2.6}.5. In increasing order of length the edges are : (1. Problem 3. 13.2.2. Step Edge considered Initialization - Connected components (11 (2) (3) (4) 151 (6) f7) 11.5 } . If an edge joins two nodes in different connected components.2). 3 } . and consequently. (1.7) 1 2 T contains the chosen edges (1.4).3. (5.5} (6. The algorithm stops when only one connected component remains.5).7) (1.2. 14. The algorithm proceeds as follows.3) (4.4. El Figure 3.5) (1. (5.2.6.3} (4.2.4. 2) .71. (2.5} (6.2.2.2.6).21 (3} (4} 151 (61 (71 3 (2. 5) . consider the graph in figure 3.2.4). Prove that Kruskal's algorithm works correctly. which uses lemma 3. Otherwise the edge is rejected : it joins two nodes in the same connected component and cannot therefore be added to T without forming a cycle since the edges in T form a minimal spanning tree for each component. A graph and its minimal spanning tree. To illustrate how this algorithm works.71 5 (1.71. 12.2. and (4.7). 3.5).7) 6 t2. is by induction on the number of edges selected until now. (6.51 (61 (7) 4 16. its total length is 17. (2.3} (4.1.

0 (n) to initialize the n disjoint sets . which tells us in which component the node x is to be found. O (a) for the remaining operations. Here is the algorithm.find (u ) vcomp f. and merge (A . Is this the case in our example.find (v) if ucomp # vcomp then merge (ucomp. We therefore use disjoint set structures (Section 1.v}} until #T = n -1 return T Problem 3. We have to carry out rapidly the two operations find (x). v) f. since there are at most 2a find operations and n -1 merge operations on a universe containing n elements . Although this does not .n .Greedy Algorithms 84 Chap.0 1 will contain the edges of the minimal spanning tree) initialize n sets.shortest edge not yet considered ucomp F-.10.3. A graph may have several different minimal spanning trees.1. 3 Problem 3.2. we have to handle a certain number of sets : the nodes in each connected component. On a graph with n nodes and a edges the number of operations is in 0 (a log a) to sort the edges. We conclude that the total time for the algorithm is in 0 (a log n) because 0 (lg*n) c 0 (log n).4. For this algorithm it is preferable to represent the graph as a vector of edges with their associated lengths rather than as a matrix of distances. function Kruskal (G = < N. and at worst.2.2. where is this possibility reflected in the algo- 0 rithm ? To implement the algorithm. B) to merge two disjoint sets. that is not connected? What happens if. length : A -i IR*) : set of edges 1 initialization) Sort A by increasing length n -#N T .5). vcomp) T -T u {{u. by mistake. For a connected graph we know that a >. each containing one distinct element of N { greedy loop } repeat { u . which is equivalent to 0 (a log n) since n-1 <-a Sn(n-1)/2. in the worst case 0 ((2a+n -1)lg*n) for all the find and merge operations. we run the algorithm on a graph We can estimate the execution time of the algorithm as follows. by the analysis given in example 2.9. and if so. A > : graph .
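For comparison with the Pascal-like code above, here is a compact Python sketch of Kruskal's algorithm (one possible implementation; it uses a simple disjoint set structure with path compression, and the five-edge graph at the end is invented for the illustration rather than taken from Figure 3.2.2).

def kruskal(n, edges):
    # edges: list of (length, u, v) triples; nodes are numbered 0 .. n-1
    parent = list(range(n))
    def find(x):
        # find the representative of x's component, halving the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for length, u, v in sorted(edges):      # edges in order of increasing length
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: keep the edge
            parent[ru] = rv                 # merge the two components
            tree.append((u, v, length))
            if len(tree) == n - 1:
                break
    return tree

example = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
print(kruskal(4, example))    # [(0, 1, 1), (1, 2, 2), (2, 3, 4)]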

which again uses Lemma 3. Initially.0 (will contain the edges of the minimal spanning tree } B (-.v}} B F-B U (u) return T Problem 3. To illustrate how the algorithm works. Here is an informal statement of the algorithm. This is particularly advantageous in cases when the minimal spanning tree is found at a moment when a considerable number of edges remain to be tried.here the heap property should be inverted so that the value of each internal node is less than or equal to the values of its children). At each step Prim's algorithm looks for the shortest possible edge { u . the original algorithm wastes time sorting all these useless edges. We continue thus as long as B # N. starting from an arbitrary root. function Prim (G = < N.4 . At each stage we add a new branch to the tree already constructed. is by induction on the number of nodes in B. the user supplies a matrix of distances. What can you say about the time required by Kruskal's algoProblem 3. v) to T . length : A -> IR*) : set of edges ( initialization) T F. instead of providing a list of edges.2. Prove that Prim's algorithm works correctly. In such cases. This allows the initialization to be carried out in a time in 0 (a). and the set T of edges is empty.2. except that we are careful never to form a cycle.9.2. on the other hand.2. In Prim's algorithm.6.2. leaving to the algorithm the job of working out which edges exist ? Prim's algorithm. 3. In this way the edges in T form at any instant a minimal spanning tree for the nodes in B. . v } of minimum length such that u E N \ B and v E B T 6-T u {(u. although each search for a minimum in the repeat loop will now take a time in 0 (log a) = 0 (log n).(an arbitrary member of N } while B # N do find { u . consider once again the graph in Figure 3. it is preferable to keep the edges in a heap (Section 1.2 Greedy Algorithms and Graphs 85 change the worst-case analysis. A > : graph . The proof.5. the set B of nodes contains a single arbitrary node. rithm if. We arbitrarily choose node I as the starting node. v J such that u E N \ B and v E B . It then adds u to B and (u. the minimal spanning tree grows "naturally". There results a forest of trees that grows somewhat haphazardly. In Kruskal's algorithm we choose promising edges without worrying too much about their connection to previously chosen edges. and the algorithm stops when all the nodes have been reached.Sec.1.
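Here is one possible Python rendering of Prim's algorithm as just described (a sketch only; it assumes the graph is connected and supplied as a symmetric matrix of distances, with inf marking the absence of an edge).

from math import inf

def prim(L):
    # nearest[v] is the tree node closest to v; mindist[v] is that edge's length
    n = len(L)
    mindist = [L[0][v] for v in range(n)]
    nearest = [0] * n
    in_tree = [True] + [False] * (n - 1)        # the tree starts at node 0
    tree = []
    for _ in range(n - 1):
        v = min((x for x in range(n) if not in_tree[x]), key=lambda x: mindist[x])
        tree.append((nearest[v], v, mindist[v]))  # shortest edge leaving the tree
        in_tree[v] = True
        for w in range(n):
            if not in_tree[w] and L[v][w] < mindist[w]:
                mindist[w], nearest[w] = L[v][w], v
    return tree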


5) [50. The modifications to the algorithm are simple : Figure 3.10] 2 4 12. 10] Initialization Clearly.min(D[w]. function Dijkstra (L [1 .31 3 3 {2) [40. .2 times.2.3.30.20.3. To find the complete path.S u { v } } for each w e C do D[w] f.2. j) exists and L [i. n ].30. D [v] +L[v. D would not change if we did one more iteration to remove the last element of C... . n ]) : array[2. j ] ? 0 if the edge (i. it suffices to add a second array P [2. Step V - C D (2.30. simply follow the pointers P backwards from a destination to the source...{ 2.100.3.w]) return D The algorithm proceeds as follows on the graph in Figure 3.some element of C minimizing D [v ] C (-..3 A directed graph.Greedy Algorithms 88 Chap.20. n. n ) { S = N \ C exists only by implication ) for i -2tondoD[i]*-L[l.i] { greedy loop } repeat n -2 times v . where P [v ] contains the number of the node that precedes v in the shortest path. If we want not only to know the length of the shortest paths but also where they pass. Here is the algorithm. 3.30. which is why the main loop is only repeated n .4) [50. j ] = 00 otherwise. n ] (initialization) C F.C \ { v) { and implicitly S .. . 10] (35.4.20. I .10] 1 5 (2. 3 gives the length of each directed edge : L [i.
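One possible Python rendering of Dijkstra's algorithm, including the array P of predecessors, is sketched below (an illustration only; the four-node directed graph used to exercise it is invented and is not the graph of the figure).

from math import inf

def dijkstra(L):
    # L: matrix of direct distances (inf where there is no edge); node 0 is the source
    n = len(L)
    D = [L[0][i] for i in range(n)]     # length of the shortest special path to i
    P = [0] * n                         # P[v]: node preceding v on that path
    C = set(range(1, n))                # candidates not yet transferred to S
    for _ in range(n - 2):              # one further iteration would change nothing
        v = min(C, key=lambda x: D[x])  # the closest remaining candidate
        C.remove(v)
        for w in C:
            if D[v] + L[v][w] < D[w]:
                D[w] = D[v] + L[v][w]
                P[w] = v
    return D, P

L = [[0,   4,   1, inf],
     [inf, 0, inf,   1],
     [inf, 2,   0,   6],
     [inf, inf, inf, 0]]
print(dijkstra(L))    # ([0, 3, 1, 4], [0, 2, 0, 1]); the shortest path to node 3 is 0, 2, 1, 3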

consider the inductive step.4). if a node i is in S. As for node v.. replace the contents of the inner for loop by ifD[w]>D[v]+L[v. . the first node encountered that does not belong to S is some node x distinct from v (see Figure 3. then D [i] gives the length of the shortest special path from the source to i. .10. We prove by mathematical induction that i.2. the total distance to v via x is distance to x (since edge lengths are non-negative) D [x] (by part (ii) of the induction) (because the algorithm chose v before x) D [v] and the path via x cannot be shorter than the special path leading to v. Show how the modified algorithm works on the graph of Figure 3. Next. . or else it now passes through v. By the induction hypothesis D [v ] certainly gives the length of the shortest special path. then D [i] gives the length of the shortest path from the source to i H. We therefore have to verify that the shortest path from the source to v does not pass through a node that does not belong to S. it will now belong to S. The initial section of the path. as far as x. Consider now a node w 0 S different from v When v is added to S. part (i) of the induction remains true. I. In the latter case it seems at first .2 89 initialize P [i] to 1 for i = 2. Look at the initialization of D and S to convince yourself that these two conditions hold at the outset .D[v] +L[v.w] then D[w] <-. We have thus verified that when v is added to S.w] P[w]-v. Problem 3. Suppose the contrary : when we follow the shortest path from the source to v. We must therefore check that D [v ] gives the length of the shortest path from the source to v. if a node i is not in S. This follows immediately from the induction hypothesis for each node i that was already in S before the addition of v. Proof of correctness. 3.2. is a special path. Consequently.. there are two possibilities for the shortest special path from the source to w : either it does not change.Greedy Algorithms and Graphs Sec. ii.2.3. n . and suppose by the induction hypothesis that these two conditions hold just before we add a new node v to S. the base for our induction is thus obtained. 3.

Show by giving an explicit example that if the edge lengths Problem 3. then Dijkstra's algorithm does not always work correctly. we can ignore the possibility (see Figure 3. The shortest path from the source to v cannot go through node x.4. We have to compare explicitly the length of the old special path leading to w and the length of the special path that visits v just before arriving at w . Thus the algorithm ensures that part (ii) of the induction also remains true when a new node v is added to S. However. can be negative. but not just before arriving at w : a path of this type cannot be shorter than the path of length D [x] + L [x. we need only note that when its execution stops all the nodes but one are in S (even though the set S is not constructed explicitly).11.2.Greedy Algorithms 90 Chap. glance that there are again two possibilities : either v is the last node in S visited before arriving at w or it is not.5) that v is visited. because D [x] <_ D [v]. Is it still sensible to talk about shortest paths if negative distances are allowed? S Figure 3. the algorithm does this.5 The shortest path from the source to w cannot visit x between v and w. At this point it is clear that the shortest path from the source to the remaining node is a special path.2. w] that we examined at a previous step when x was added to S.2.2. 3 The shortest path The shortest special path Figure 3. To complete the proof that the algorithm works. .

1 . .. * Problem 3. for a total also in 0 (n 2). a > n -1 and this time is in 0 (a log n). choosing v in the repeat loop requires all the elements of C to be examined.2). With this in mind.9. Initialization of the heap takes a time in 0 (n). i. . This is interesting when we remember that eliminating the root has for effect to sift down the node that takes its place. This does not happen more than once for each edge of the graph. giving for each node its direct distance to adjacent nodes (like the type lisgraph of Section 1.. If we remember to invert the heap.in the matrix L. 2 values of D on successive iterations. If a << n 2. In the preceding analysis.. Initialization takes a time in O (n). . If so. To sum up. n . whereas it is preferable to use a heap if the graph is sparse. Using the representation suggested up to now. The instruction " C .Sec. which takes a time in O (log n). it could be preferable to represent the graph by an array of n lists. Suppose Dijkstra's algorithm is applied to a graph having n nodes and a edges. whereas less than n roots are eliminated.2 times and to percolate at most a nodes.. Let k = max(2. the instance is given in the form of a matrix L [1 ..1. n -2. In a straightforward implementation. . La In j ). the element v of C that minimizes D [v] will always be found at the root. As for the inner for loop. but how are we to avoid taking a time in c(n2) to determine in succession the n -2 values taken by v ? The answer is to use a heap containing one node for each element v of C. giving a total time in 0 ((a +n) log n).. We might therefore consider modifying the definition of a heap slightly to allow percolation to run faster still. The straightforward implementation is therefore preferable if the graph is dense.2. since we only have to consider those nodes w adjacent to v. at the cost of slowing down sifting. we saw that up to a nodes can be percolated. 3. If the graph is connected. the choice of algorithm may depend on the specific implementation. we must modify D [w] and percolate w up the heap. n . The inner for loop does n . ordered by the value of D [v]. and that percolating up is somewhat quicker than sifting down (at each level.1 . Show how your modification allows you to calculate the shortest paths from a source to all the other nodes of a graph in a time in . This allows us to save time in the inner for loop. w ] < D [w]. What modification to the definition of a heap do you suggest ? ii. The time required by this version of the algorithm is therefore in O (n 2).2 Greedy Algorithms and Graphs 91 Analysis of the algorithm.C \ { v } " consists of eliminating the root from the heap. which again takes a time in 0 (log n ). it seems we might be able to avoid looking at the many entries containing . so that we look at n . we have to remove the root of the heap exactly n . n J. it now consists of looking. giving a total time in 0 (n 2). to see whether D [v] + L [v.12. 1 iterations.2. we compare the value of a node to the value of its parent rather than making comparisons with both children). for each element w of C adjacent to v. If a E O(n 2/ log n).

. Note that this gives 0 (n 2) if a = n 2 and 0 (a log n) if a = n . For example.1. Order T 123 : 5 + (5+10) + (5+10+3)=38 5 + (5+3) + (5+3+10)=31 10 + (10+5) + (10+5+3)=43 132: 213: 312: 10 + (10+3) + (10+3+5) = 41 3 + (3+5) + (3 + 5 + 10) = 29 E. 3 O (a logk n). just as Kruskal's algorithm would. customer 2 waits while customer 1 is served and then gets his turn. then six orders of service are possible.13. and customer 3 waits while both 1 and 2 are served and then is served himself: the total time passed in the system by the three customers is 38.2. if we have three customers with t1=5. t2=10. show that the modification suggested in the previous problem applies just as well to Prim's algorithm.. Since the number of customers is fixed. minimizing the total time in the system is equivalent to minimizing the average time. and so on) has n customers to serve.) Problem 3. .. Show that Prim's algorithm to find minimal spanning trees can also be implemented through the use of heaps.. 3. customer 1 is served immediately. Show that it then takes a time in 0(a log n ). 1 <_ i <_ n. The service time required by each customer is known in advance : customer i will take time t. we add customer j. Problem 2. t3=3. Finally. The increase in T at this stage is . ..3 GREEDY ALGORITHMS FOR SCHEDULING 3. a petrol pump.3.17(i) does not apply here since k is not a constant. We want to minimize n T= (time in system for customer i ).optimal 321: 3 + (3+10) + (3+10+5)=34 231 : In the first case.Greedy Algorithms 92 Chap. Suppose that after scheduling customers i I . (Still faster algorithms exist. Imagine an algorithm that builds the optimal schedule step by step.1 Minimizing Time in the System A single server (a processor. i. it therefore gives the best of both worlds. i 2. a cashier in a bank.

. k=1 Suppose now that I is such that we can find two integers a and b with a < b and t.1). .. > t... 2.3.1 .. we need only minimize tj . in other words. This suggests a simple greedy algorithm : at each step..3.. If customers are served in the order 1. t.+(n-2)ti + n _ k(n-k+1)t. .. 3.+(ti.. k=1 kta. tin After exchange of is and !b I Service duration Served customer (from I') Figure 3.+ti.+(n-b+1)ti + E (n-k+l)t.. because n T(I')=(n-a+1)ti. b a n i. 2.Sec. 1..+ =nti +(n-1)ti..+ To minimize this increase.b Service order 1 Served customer .1 l tab 1o Exchanging the positions of two customers. t' ..+ti)+(ti.. fin . t' . 11 (from I) Service duration 2 t.. we obtain a new order of service I' obtained from I by interchanging the items is and ib This new order is preferable : . . We now prove that this algorithm is always optimal. If we exchange the positions of these two customers.3 Greedy Algorithms for Scheduling 93 +ti +tj . in ) be any permutation of the integers { 1 . add to the end of the schedule the customer requiring the least service among those who remain.. In the preceding example this algorithm gives the correct answer 3. Let I = (i I i 2 . . t" . tea . the total time passed in the system by all the customers is T(I)=ti. the a th customer is served before the b th customer even though the former needs more service time than the latter (see Figure 3. n 1. +ti..

Without Problem 3. l2 Problem 3. 3 and thus (b -a )(ti. server i.. i. the tape is rewound to the beginIn ning. The only schedules that remain are those obtained by putting the customers in nondecreasing order of service time.-. as can the algorithm. A magnetic tape contains n programs of length l 1 .. Prove that T is minimized if the programs are held in order of decreasing pi /li . 1 i9 i +s . In this context.. .3.3. the average time required to load a program is n I T =cE Pr Eli - j=1 k=1 where the constant c depends on the recording density and the speed of the drive.1. . We want to minimize T.Greedy Algorithms 94 Chap. We can therefore improve any schedule in which a customer is served before someone else who requires less service. in that order. . We know how often each program is used : a fraction pi of requests to load a program concern program i. suppose the customers are numbered so that s . .2. 1 2 .3. i. ii. Prove that this algorithm always yields an optimal schedule.. Prove by giving an explicit example that it is not necessarily optimal to hold the programs in order of decreasing values of pi iii.) Information is recorded along the tape at constant density. i + 2s . If the programs are held in the order i 1.tin ) >0.i <. Each time a program has been loaded.3. i 2. Prove by giving an explicit example that it is not necessarily optimal to hold the programs in order of increasing values of 1i . 1 <. and the speed of the tape drive is also constant. n ] as data and produces an optimal schedule ? The problem can be generalized to a system with s servers. All such schedules are clearly equivalent. loss of generality. How much time (use the 0 notation) is required by a greedy algorithm that accepts n and t [I .. (This of course implies that Ji 1 pi = 1.n. must serve customers number t 1 <.. Problem 3. and therefore they are all optimal.t2 <S to . .
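In Python the rule just shown to be optimal takes only a few lines (a sketch; customers are numbered from 0 and their service times are given as a list).

def serve_in_order(t):
    # serve customers in nondecreasing order of service time t[i]
    order = sorted(range(len(t)), key=lambda i: t[i])
    total = elapsed = 0
    for i in order:
        elapsed += t[i]            # the moment at which customer i leaves the system
        total += elapsed
    return order, total

print(serve_in_order([5, 10, 3]))  # ([2, 0, 1], 29): the order 3, 1, 2 of the example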

3.. we try the set 11.3 45. At first glance it seems we might have to try all the k ! possible permutations of these jobs to see whether J is feasible. Next. we should execute the schedule 4. each of which takes unit time. 3. which in fact can only be done in the order 4. so job 2 is also rejected. . d.3. with n = 4 and the following values : i 50 2 10 15 4 30 2 1 2 1 1 g. An obvious greedy algorithm consists of constructing the schedule step by step. Finally we try (1. which turns out not to be feasible .1 65 60 2. after its deadline d2 = 1. It remains to prove that this algorithm always finds an optimal schedule and to find an efficient way of implementing it. is not considered because job 2 would be executed at time t = 2.optimal in this case . we can execute exactly one job. 1 80 . The sequence 3.2 Scheduling with Deadlines We have a set of n jobs to execute. 1 <_ i <_ n . Let J be a set of k jobs. among those not yet considered. 41. . To maximize our profit in this example.. for instance. A set of jobs is feasible if there exists at least one sequence (also called feasible) that allows all the jobs in the set to be executed in time for their respective deadlines. . provided that the chosen set of jobs remains feasible.3 2. earns us a profit g.1 65 4. job 3 is therefore rejected.3 25 3. 1. if and only if it is executed no later than time d.optimum 4. adding at each step the job with the highest value of g. 1. 2.is therefore to execute the set of jobs 11. 41 is feasible because it can be executed in the order 4. At any instant t = 1. In the preceding example we first choose job 1. . Next. Happily this is not the case.3 Greedy Algorithms for Scheduling 95 3. 1. Job i. For example. 41.41. which is also infeasible . 2. 2. 3 the schedules to consider and the corresponding profits are Sequence: 1 2 Profit: 50 10 3 15 4 30 1.Sec. we choose job 4: the set 11. Our solution .

this implies that a does not appear in J and that b does not appear in I. it is clear that b > a .. . the set I u { b } is feasible..1. at the time when rb is at present scheduled. ? d. The "if " is obvious. we obtain a itself. Also dr. whereas there is a gap in S. Let us consider an arbitrary time when the task scheduled in Sj is different from that scheduled in S.ds2 ds. The result is a new feasible sequence.3. If gQ > gb . .. and S are distinct sequences since I # J. a. to know whether a set of jobs is or is not feasible. Then the set J is feasible if and only if the sequence a is feasible. The only remaining possibility is that some task a is scheduled in S. each having at least one more position in agreement with a. If some task a is scheduled in S. and S. (by the construction of a and the minimality of a) (by the definition of b). whereas a different task b is scheduled in S. 2. If some task b is scheduled in S. Proof. there exists at least one sequence of jobs p = (r 1r2 that dr.. and S. . hence the greedy algorithm should have included b in I. sk) be a permutation of these jobs such that ds <.) S. in order of increasing deadlines. whereas there is a gap in S.3. We now prove that the greedy algorithm outlined earlier always finds an optimal schedule.. which is the same as a at least in positions 1.i S k. This is also impossible since it did not do so. and S. (We may have to leave gaps in the schedule. 3 Lemma 3. By making appropriate interchanges of jobs in S. we can interchange the items ra and rb in p. after a maximum of k -1 steps of this type. Let J be a set of k jobs. one could substitute a for b in J and improve it. . Suppose a # p. Continuing thus. Let a be the smallest index such that Sa # rQ and let b be defined by rb = sQ . Suppose that the greedy algorithm chooses to execute a set of jobs I whereas in fact the set J # I is optimal. from right to left. Finally. we obtain a series of feasible sequences. for the two sets of jobs in question. This is not possible since J is optimal by assumption. 1 <. Consider two feasible sequences S. For the "only if " : rk) such If J is feasible. . This goes against the optimality of J.Greedy Algorithms 96 Chap. and S..2 for an example. 11 This shows that it suffices to check a single sequence. the set J u { a } is feasible and would be more profitable than J. such that every job common to both I and J is scheduled for execution at the same time in the two sequences. and let a = (s 1s 2 . . > i . Since rb can certainly be executed earlier than planned. Proof of optimality.. .. The job ra could therefore be executed later. See Figure 3. . The necessary interchanges are easily found if we scan the two sequences S. which is therefore feasible. = dr. . we can obtain two feasible sequences S. (and therefore task a does not belong to J ). Again.

and thus I is optimal as well. k ] array j [0. array[ 1 .i k4-k+I return k. j [ l ] . either schedule no tasks. the same task.r - ifd[j[r]]<-d[i]andd[i]>r then for/ F-kstep-] tor+lldoj[I+1]E-j[l] j[r+I] . This is not possible either since it did not include h in I.If g. < gl. j[I . .The only remaining possibility is therefore that go = gn In conclusion. n ]) : k..g . and S. S" . 1 <_ i <.Sec.3 Greedy Algorithms for Scheduling 97 .2 to n do { in decreasing order of g I r-k while d [ j Ir ]] > max(d [i]..g 2 ? >. Rearranging schedules to bring identical tasks together. To allow us to use sentinels. . sequences S.3. for each time slot. V q w Si after reorganization.1 { task 1 is always chosen } {greedy loop I for i <-. the greedy algorithm should have chosen h before even considering a since (I \ { a }) u { h } would be feasible. n ] d [0].0 (sentinels) k. For our first implementation of the algorithm suppose without loss of generality that the jobs are numbered so that g I >. r) do r F. function sequence (d [0.n . j [0] f. This implies that the total worth of I is identical with that of the optimal set J.k] P Y q x r r s r P u S. q V q w that one will be b common tasks Figure 3.2. suppose further that n > 0 and that di > 0.. or two distinct tasks yielding the same profit.. if this task is a x Y P r u S P r S.. 3.

Clearly. more efficient algorithm is obtained if we use a different technique to verify whether a given set of jobs is feasible. It may happen that there are gaps in the schedule thus constructed when no job is to be executed. but no later than its deadline. the schedule can be compressed once all the jobs are included. which is always free.3. as we assign new jobs to vacant positions. 11 The lemma suggests that we should consider an algorithm that tries to fill one by one the positions in a sequence of length l = min(n. then the set J is infeasible.3. Prove Lemma 3.3. This obviously does not affect the feasibility of a set of jobs. Problem 3. . disjoint set structures are intended for just this purpose. A set of jobs J is feasible if and only if we can construct a feasible sequence including all the jobs in J as follows : for each job i e J. define a fictitious position 0. and show that it requires quadratic time in the worst case.2.3). For a given set K of positions.3.4.Greedy Algorithms 98 Chap. define n. = max{k <_ t I position k is free }. 3 The exact values of the g. consider each job i e J :n turn. If a job cannot be executed in time for its deadline. A second. We obtain an algorithm whose essential steps are the following : I Positions of the same set i n. Finally. If we so wish. Also define certain sets of positions : two positions i and j are in the same set if ni = nj (see Figure 3. are unnecessary provided the jobs are correctly numbered in order of decreasing profit. Verify that the algorithm works. di) and the job to be executed at time t has not yet been decided. In other words. For any position t. = nj j Free position Occupied position Figure 3.3. Problem 3.3. execute i at time t.3.5. max({ di 1 <_ i <_ n 1)). and add it to the schedule being built as late as possible. let F (K) be the smallest member of K.2. where t is the largest integer such that 0 < t <_ min(n. Sets of positions. these sets will merge to form larger sets . Lemma 3.

k ] array j..1. To simplify the descrip- tion.merge K and L .i initialize set { i } {greedy loop } for i F. Example 3. .3..1 to n do { in decreasing order of g } k .. 1 is in a different set and F({i})=i..assign the job to position F (K) . 99 2. reject the job . 3 1 1 3 1 3. F [0. Consider a problem involving six jobs : i 1 2 3 4 5 6 gi 20 15 10 7 5 3 d.find the set that contains F (K) .0<-i <<-l. j[l . function sequence 2(d [ i n ]) : k. .3.5 illustrate the workings of the slow and fast algorithms. the value of F for this new set is the old value of F (L).0 to l do j [i] E.j[i] . 1 ) { it remains to compress the solution } k -0 for i F.k] j[k] E.find (min(n. . 1.i 1 <--find (m -1) F[k] F-F[1] merge (k. 1 ] 1 -min(n.I to 1 do if j[i]>0then k-k+1 return k.3 Greedy Algorithms for Scheduling i. let this be set K . 3. d [i])) m . if F (K) = 0.4 and 3.Sec.1. addition of a job with deadline d : find the set that contains min(n. Figures 3. ii.0 F[i] .F [k] if m * 0 then j[m]F. array[1 .. Here is a more precise statement of the fast algorithm. respectively.3.max{d[i]I1<i <n1) { initialization) for i F. d ). . let this be set L (it is necessarily different from K) . initialization : each position 0. . we have assumed that the label of the set produced by a merge operation is necessarily the label of one of the sets that were merged. ifF(K) 0.
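Here, for comparison, is a Python sketch of the straightforward quadratic version of the algorithm (jobs are numbered from 0, so the instance with profits 50, 10, 15, 30 and deadlines 2, 1, 2, 1 is the one used at the beginning of this section, and the result corresponds to executing jobs 4 and 1 for a total profit of 80).

def schedule_with_deadlines(g, d):
    # g[i]: profit of job i, d[i]: deadline of job i; every job takes unit time
    order = sorted(range(len(g)), key=lambda i: -g[i])   # decreasing profit
    horizon = min(len(g), max(d))
    slot = [None] * (horizon + 1)        # slot[t]: job to execute at time t (1-based)
    for i in order:
        t = min(d[i], horizon)
        while t > 0 and slot[t] is not None:
            t -= 1                       # schedule job i as late as possible
        if t > 0:
            slot[t] = i
    chosen = [i for i in slot[1:] if i is not None]
    return chosen, sum(g[i] for i in chosen)

print(schedule_with_deadlines([50, 10, 15, 30], [2, 1, 2, 1]))   # ([3, 0], 80)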

If the instance is given to us with the jobs already ordered by decreasing profit. the required time is in 0 (n lg*1). on the other hand. 3 dU[il] 3 Initialization: Ad k 3 Try 2: 2 Try 3: unchanged 3 Try 4: 2 3 4 Try 5: unchanged Try 6: unchanged Optimal sequence: 2. 4.3. 1.Greedy Algorithms 100 Chap. we need a time in 0 (n log n) to obtain the initial sequence. These examples also serve to illustrate that the greedy approach does not always yield an optimal solution. the jobs are given to us in arbitrary order. so that we have to begin by sorting them.4 GREEDY HEURISTICS Because they are so simple. which is essentially linear. most of the time will be spent manipulating disjoint sets. We content ourselves with giving two examples of this technique. If. and since n > 1. Since there are at most n+1 find operations and I merge operations to execute.4. greedy algorithms are often used as heuristics in situations where we can (or must) accept an approximate solution instead of an exact optimal solution. so that an optimal sequence can be obtained merely by calling the preceding algorithm. value = 42 Figure 3. . 3. Illustration of the slow algorithm.

4. no free position available Try 6: d6 = 3. A > be an undirected graph whose nodes are to be coloured. Our aim is to use as few different colours as possible.3. the graph in Figure 3. An obvious greedy algorithm consists of choosing a colour and an arbitrary starting node. assign task 4 to position 2 F= 0 Try 5: d5 = 1.1 can be coloured using only two colours: red for nodes 1.4 Greedy Heuristics 101 Initialization: I = min(6. Illustration of the fast algorithm. When no further nodes can be painted. then they must be of different colours. and blue for nodes 2 and 5.4. 3.5. painting it with this colour if possible. = 3. 3. assign task 1 to position 3 F= 0 1 2 Fry 2: d2 = 1. value = 42 Figure 3. assign task 2 to position 1 F= 0 2 Try 3: d3 = 1.Sec. 1. 3 and 4. we choose a new colour and a new . max(d)) = 3 F= 0 1 2 3 Try 1: d. For instance.1 Colouring a Graph Let G = < N. no free position available Optimal sequence: 2.4. If two nodes are joined by an edge. no free position available since the Fvalue is 0 Try 4: d4 = 3. and then considering each other node in turn.

but not certainly. we get a different answer: nodes I and 5 are painted red. R. For a graph G and an ordering a of the nodes of G. 2. starting node that has not yet been painted.4. this is an optimal solution. Let C (G) be the optimal (smallest) number of colours. 4. nodes 3 and 4 can be red. we paint as many nodes as we can with this second colour. and lastly node 5 may not be painted. In our example if node 1 is painted red. but now nodes 3 and 4 require us to use a third colour. be the number of colours used by the greedy algorithm. (VG)(3(. and we are forced to make do with an approximate method (of which this greedy heuristic is among the least effective). find a "good" solution. then node 2 is painted blue. terms of the graph colouring problem. (t/(XEIR+)(3G)(3o)[ca(G)1c(G) > a]. Find two or three practical problems that can be expressed in Problem 3:4. Chap.2.1. to visit each other town exactly once. If we start again at node 2 using blue paint.)[c6(G)=c(G)]. but it may also give an arbitrarily bad answer. and to arrive back at the starting point.1. This is an example of the NP-complete problems that we shall study in Chapter 10. 3. We . in this case the result is not optimal.4. 3. having travelled the shortest total distance possible. The algorithm is therefore no more than a heuristic that may possibly. For a large-scale instance these algorithms cannot be used in practice. Why should we be interested by such algorithms? For the colouring problem and many others the answer is that all the exact algorithms known require exponential computation time.2 The Travelling Salesperson Problem We know the distances between a certain number of towns. The travelling salesperson wants to leave one of these towns.Greedy Algorithms 102 Figure 3. we can colour nodes 2 and 5 and finish the job using only two colours . 5. the greedy heuristic may find the optimal solution. However. Prove the following assertions : i. In other words.4. we are not allowed to paint node 2 with the same colour. if we systematically consider the nodes in the order 1. and so on. let c(G) Problem 3. 3 A graph to be coloured.
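In Python the sequential colouring heuristic can be sketched as follows (the six-node graph at the end is invented for the illustration; it shows how the order in which the nodes are considered can change the number of colours used).

def sequential_colouring(adj, order):
    # adj: dictionary mapping each node to the set of its neighbours
    colour = {}
    for v in order:
        used = {colour[w] for w in adj[v] if w in colour}
        c = 1
        while c in used:                  # smallest colour unused by any neighbour
            c += 1
        colour[v] = c
    return colour

adj = {1: {5, 6}, 2: {4, 6}, 3: {4, 5}, 4: {2, 3}, 5: {1, 3}, 6: {1, 2}}
print(max(sequential_colouring(adj, [1, 2, 3, 4, 5, 6]).values()))   # 2 colours suffice
print(max(sequential_colouring(adj, [1, 4, 2, 5, 3, 6]).values()))   # this order needs 3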

(1.6.5).4 Greedy Heuristics 103 assume that the distance between two towns is never negative. The problem can be represented using a complete undirected graph with n nodes. 1) has a total length of only 56. In this instance the greedy algorithm does not find an optimal tour since the tour (1. 4.3. (2. 1) whose total length is 58. if chosen. Give a heuristic greedy algorithm to solve the travelling salesperson problem in the case when the distance matrix is Euclidean. 2. Hence they are impractical for large instances. In some instances it is possible to find a shorter optimal tour if the salesperson is allowed to pass through the same town several times.4. it is true that distance (i . 6. if our problem concerns six towns with the following distance matrix : From 1 2 3 To : 2 3 4 5 6 3 10 11 7 25 6 12 8 26 9 4 20 5 15 4 5 18 edges are chosen in the order (1. (Hint : start by constructing a minimal spanning tree and then use the answer to the previous problem. (4.4. What happens to this greedy algorithm if the graph is not complete. 3. 5 ). Problem 3. Your algorithm must find a solution whose length is not more than double the length of an optimal tour. and k. j) <_ distance (i .3). 4. all the known exact algorithms for this problem require exponential time (it is also NP-complete).4. Show that in this case it is never advantageous to pass through the same town several times. (4. that is. if it is not possible to travel directly between certain pairs of towns ? Problem 3. Edge (1. 1). was not kept when we looked at it because it would have completed a circuit (1.2). (The graph can also be directed if the distance matrix is not symmetric: see Section 5. 3.5). 2. 5.) One obvious greedy algorithm consists of choosing at each step the shortest remaining edge provided that it does not form a cycle with the edges already chosen (except for the very last edge chosen. which completes the salesperson's tour) . 6. 3. 3. and also because it would have been the third edge incident on node 5. (3. i. For example.) .Sec. 5. 2. a distance matrix is said to be Euclidean if the triangle inequality holds : for any towns i. k) + distance (k . it will not be the third chosen edge incident on some node. On the other hand. j.5. Give an explicit example illustrating this.6) to make the circuit (1. Problem 3.4. 5. ii.6). j ). for example. As for the previous problem.
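Here is a Python sketch of this greedy heuristic (towns are numbered from 0; once n - 1 edges have been accepted, the unique edge joining the two ends of the resulting path closes the tour). On the distance matrix of the example it reproduces the tour of length 58 found above.

def greedy_tour(dist):
    # dist: symmetric matrix of distances between the n towns
    n = len(dist)
    degree = [0] * n
    comp = list(range(n))                          # connected-component labels
    chosen = []
    for d, i, j in sorted((dist[i][j], i, j)
                          for i in range(n) for j in range(i + 1, n)):
        # keep the edge if neither end already has two chosen edges and it
        # does not close a cycle prematurely
        if degree[i] < 2 and degree[j] < 2 and comp[i] != comp[j]:
            chosen.append((i, j, d))
            degree[i] += 1
            degree[j] += 1
            old, new = comp[i], comp[j]
            comp = [new if c == old else c for c in comp]
            if len(chosen) == n - 1:
                break
    a, b = [v for v in range(n) if degree[v] == 1]  # the two ends of the path
    chosen.append((a, b, dist[a][b]))               # the very last edge closes the tour
    return chosen, sum(d for _, _, d in chosen)

D = [[ 0,  3, 10, 11,  7, 25],
     [ 3,  0,  6, 12,  8, 26],
     [10,  6,  0,  9,  4, 20],
     [11, 12,  9,  0,  5, 15],
     [ 7,  8,  4,  5,  0, 18],
     [25, 26, 20, 15, 18,  0]]
print(greedy_tour(D)[1])    # 58, against an optimal tour of length 56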

4. Faster algorithms for both these problems are given in Fredman and Tarjan (1984). see also Problem 5. in particular. see Schwartz (1964). and Tarjan (1983).6. The algorithm to which Prim's name is attached was invented by Jarnik (1930) and rediscovered by Prim (1957) and Dijkstra (1959).8. 3 Invent a heuristic greedy algorithm for the case when the disProblem 3.12 can be found in Johnson (1977). An important greedy algorithm that we have not discussed is used to derive optimal Huffman codes. Prove that if a directed graph is complete (that is. Problem 3. . and give an algorithm for finding such a path in this case. Other greedy algorithms for a variety of problems are described in Horowitz and Sahni (1978).1 can be found in Wright (1975) and Chang and Korsh (1976) .4.4.Greedy Algorithms 104 Chap. Kruskal's algorithm comes from Kruskal (1956). Cheriton and Tarjan (1976). Similar improvement for the minimal spanning tree problem (Problem 3. There exists a greedy algorithm (which perhaps would better be called "abstinent") for solving the problem of the knight's tour on a chessboard : at each step move the knight to the square that threatens the least possible number of squares not yet visited. In a directed graph a path is said to be Hamiltonian if it passes exactly once through each node of the graph.8. 3.13) is from Johnson (1975).2. tance matrix is not symmetric.5 of this book.1.4. Other more sophisticated algorithms are described in Yao (1975). The first algorithm proposed (which we have not described) is due to Boruvka (1926). Other ideas concerning shortest paths can be found in Tarjan (1983). but without coming back to the starting node. The implementation of Dijkstra's algorithm that takes a time in 0 (n 2) is from Dijkstra (1959). The details of the improvement suggested in Problem 3. The solution to Problem 3. Try it ! Problem 3.5 is given in Christofides (1976) . The problem of minimal spanning trees has a long history.2. the same reference gives an efficient heuristic for finding a solution to the travelling salesperson problem with a Euclidean distance matrix that is not more than 50% longer than the optimal tour.7. use of the Fibonacci heap allows them to implement Dijkstra's algorithm in a time in 0 (a + n log n ).5 REFERENCES AND FURTHER READING A discussion of topics connected with Problem 3. which is discussed in Graham and Hell (1985). if each pair of nodes is joined in at least one direction) then it has a Hamiltonian path.

solving successively and independently each of these subinstances. Although this improvement is not to be sneezed at. The first question that springs to mind is naturally "How should we solve the subinstances ?". 105 . Suppose you already have some algorithm A that requires quadratic time. and combining the results. You discover that it would be possible to solve such an instance by decomposing it into three subinstances each of size Fn/21.1 INTRODUCTION Divide-and-conquer is a technique for designing algorithms that consists of decomposing the instance to be solved into a number of smaller subinstances of the same problem. you obtain a new algorithm B whose implementation takes time tB(n)=3tA([n/21)+t(n)<_3c((n+l)/2)2+dn =4cn2+(4c+d)n+4c The term 4cn 2 dominates the others when n is sufficiently large. Let d be a constant such that the time needed to carry out the decomposition and the recombination is t(n) <_ dn. which means that algorithm B is essentially 25% faster than algorithm A. The efficiency of the divide-and-conquer technique lies in the answer to this question. solving these subinstances.4 Divide-and-Conquer 4. nevertheless. you have not managed to change the order of the time required : algorithm B still takes quadratic time. Let c be a constant such that your particular implementation requires a time tA (n) <_ cn2 to solve an instance of size n. and then combining the subsolutions thus obtained in such a way as to obtain the solution of the original instance. By using both your old algorithm and your new idea.

and 4. we come back to the question posed in the opening paragraph : how should the subinstances be solved? If they are small. this chapter shows how divide-and-conquer is used to solve a variety of important problems and how the resulting algorithms can be analysed. We shall see in the following section how to choose no in practice.10). but instead.59 The improvement compared to the order of n 2 is therefore quite substantial. After looking at the question of how to choose the optimal threshold. We shall see that it is sometimes possible to replace the recursivity inherent in divide-and-conquer by an iterative loop. the basic subalgorithm. 4. and in this case it goes by the name of simplification (see sections 4. xk fori . 's to obtain a solution y for x return y . it is possible that algorithm A may still be the best way to proceed.8. the decision when to use the basic subalgorithm rather than to make recursive calls must be taken judiciously. and the subinstances should be as far as possible of about the same size. . Although this choice does not affect the order of the execution time of our algorithm. we are also concerned to make the hidden constant that multiplies nlg3 as small as possible.) recombine the y. When implemented in a conventional language such as Pascal on a conventional .6). might it not be better to use our new algorithm recursively? The idea is analogous to profiting from a bank account that compounds interest payments ! We thus obtain a third algorithm C whose implementation runs in time tc (n) _ tA(n) ifn <no 3tc( [ n/2 1 )+t(n) otherwise where no is the threshold above which the algorithm is called recursively. the more this improvement is worth having.l tokdoy1 E-DQ(x. We should also mention that some divide-and-conquer algorithms do not follow the preceding outline exactly.3. which is similar to the one in Example 2. it is hard to justify calling the technique divide-and-conquer. However. This equation.3. k. and the bigger n is. 4 To do better than this.10. x 2. where ADHOC. is used to solve small instances of the problem in question. The number of subinstances.106 Divide-and-Conquer Chap. is usually both small and also independent of the particular instance to be solved. gives us a time in the order of ntg3 which is approximately n 1. when the subinstances are sufficiently large. For this approach to be worthwhile a number of conditions are usually required : it must be possible to decompose an instance into subinstances and to recombine the subsolutions fairly efficiently. When k = 1. they require that the first subinstance be solved even before the second subinstance is formulated (Section 4. Here then is the general outline of the divide-and-conquer method: function DQ (x) { returns a solution to instance x ) if x is sufficiently small or simple then return ADHOC (x) decompose x into smaller subinstances x 1.

When implemented in a conventional language such as Pascal on a conventional machine, an iterative algorithm is likely to be somewhat faster than the recursive version, although only by a constant multiplicative factor. On the other hand, it may be possible to save a substantial amount of memory space in this way: for an instance of size n, the recursive algorithm uses a stack whose depth is often in Ω(log n) and in bad cases even in Ω(n).

4.2 DETERMINING THE THRESHOLD

An algorithm derived by divide-and-conquer must avoid proceeding recursively when the size of the subinstances no longer justifies this. In this case, it is better to apply the basic subalgorithm. To illustrate this, consider once again algorithm C from the previous section, whose execution time is given by

    tC(n) = tA(n)                     if n ≤ n0
            3 tC(⌈n/2⌉) + t(n)        otherwise ,

where tA(n) is the time required by the basic subalgorithm, and t(n) is the time taken to do the decomposition and recombination.

To determine the value of the threshold n0 that minimizes tC(n), it is not sufficient to know that tA(n) ∈ Θ(n²) and that t(n) ∈ Θ(n). For instance, consider an implementation for which the values of tA(n) and t(n) are given respectively by n² and 16n milliseconds. Suppose we have an instance of size 1024 to solve. If the algorithm proceeds recursively until it obtains subinstances of size 1, that is, if n0 = 1, it takes more than half an hour to solve this instance. This is ridiculous, since the instance can be solved in little more than a quarter of an hour by using the basic subalgorithm directly, that is, by setting n0 = ∞.

Problem 4.2.1. Prove that if we set n0 = 2^k for some given integer k ≥ 0, then for all l ≥ k the implementation considered previously takes 2^k 3^(l−k) (32 + 2^k) − 2^(l+5) milliseconds to solve an instance of size 2^l.

Must we conclude that divide-and-conquer allows us to go from a quadratic algorithm to an algorithm whose execution time is in O(n^lg 3), but only at the cost of an increase in the hidden constant so enormous that the new algorithm is never economic on instances that can be solved in a reasonable time? Fortunately, the answer is no: in our example, the instance of size 1024 can be solved in less than 8 minutes, provided we choose the threshold n0 intelligently.

Problem 4.2.2. Find all the values of the threshold that allow an instance of size 1024 to be solved in less than 8 minutes.

This example shows that the choice of threshold can have a considerable influence on the efficiency of a divide-and-conquer algorithm.
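The figures just quoted are easy to check by evaluating the recurrence for tC directly. The following Python sketch assumes, as in the example, that tA(n) = n² milliseconds and t(n) = 16n milliseconds; trying other thresholds with it is one way to explore Problem 4.2.2 numerically.

    from functools import lru_cache

    def t_C(n, n0):
        """Exact value (in milliseconds) of the recurrence for algorithm C,
        assuming tA(m) = m*m ms and t(m) = 16*m ms as in the example."""
        @lru_cache(maxsize=None)
        def rec(m):
            if m <= n0:
                return m * m                        # basic subalgorithm
            return 3 * rec((m + 1) // 2) + 16 * m   # 3 tC(ceil(m/2)) + t(m)
        return rec(n)

    # The instance of size 1024 discussed above, times reported in minutes:
    print(t_C(1024, 1) / 60000)       # threshold 1: about 32 minutes
    print(t_C(1024, 1024) / 60000)    # basic subalgorithm only: about 17.5 minutes
    print(t_C(1024, 64) / 60000)      # threshold 64: a little under 8 minutes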

Choosing the threshold is complicated by the fact that the best value does not generally depend only on the algorithm concerned, but also on the particular implementation. Obviously, a purely theoretical calculation of the optimal threshold is rarely possible, given that it varies from one implementation to another. Moreover, as the preceding problem makes clear, it is not usually enough simply to vary the threshold for an instance whose size remains fixed: over a certain range, changes in the value of the threshold may have no effect on the efficiency of the algorithm when only instances of some specific size are considered. Furthermore, there is in general no uniformly best value of the threshold: in our example, a threshold larger than 66 is optimal for instances of size 67, whereas it is best to use a threshold between 33 and 65 for instances of size 66. We shall in future abuse the term "optimal threshold" to mean nearly optimal.

Given a particular implementation, the optimal threshold can be determined empirically. We vary the value of the threshold and the size of the instances used for our tests and time the implementation on a number of cases. The optimal threshold can then be estimated by finding the value of n at which it makes no difference, for an instance of size n, whether we apply the basic subalgorithm directly or whether we go on for one more level of recursion. It is often possible to estimate an optimal threshold simply by tabulating the results of these tests or by drawing a few diagrams. This approach may require considerable amounts of computer time. We once asked the students in an algorithmics course to implement the algorithm for multiplying large integers given in Section 4.7, in order to compare it with the classic algorithm from Section 1.1. Several groups of students tried to estimate the optimal threshold empirically, each group using in the attempt more than 5,000 (1982) Canadian dollars worth of machine time!

On the other hand, the optimal threshold can be found by solving tA(n) = 3 tA(⌈n/2⌉) + t(n). Coming back to our example, we find n = 70. The presence of a ceiling in this equation complicates things, because tC(⌈n/2⌉) = tA(⌈n/2⌉) if ⌈n/2⌉ ≤ n0. If we neglect this difficulty, that is, if we systematically replace ⌈n/2⌉ by (n+1)/2, corresponding to the fact that the average value of ⌈n/2⌉ is (2n+1)/4, we obtain n = 64. There is nothing surprising in this, since we saw earlier that in fact no uniformly optimal threshold exists.

So how shall we choose n0? One easy condition is that we must have n0 ≥ 1 to avoid the infinite recursion that results if the solution of an instance of size 1 requires us first to solve a few other instances of the same size. This remark may appear trivial, but Section 4.6 describes an algorithm for which the ultimate threshold is less obvious; obviously, we must avoid thresholds below the ultimate threshold. A reasonable compromise is to choose n0 = 67 for our threshold. The hybrid approach, which we recommend, consists of determining theoretically the form of the recurrence equations, and then finding empirically the values of the constants used in these equations for the implementation at hand.
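In the spirit of this hybrid approach, once the form of tA(n) is known — say tA(n) ≈ an² + bn + c — the constants can be estimated from a few timings by least squares. The following Python sketch uses NumPy; the measurements shown are invented purely for illustration.

    import numpy as np

    # Hypothetical timings of the basic subalgorithm (size, milliseconds).
    ns = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
    times = np.array([1.1e4, 4.2e4, 1.7e5, 6.6e5, 2.6e6])

    # Fit tA(n) ~ a*n^2 + b*n + c by ordinary least squares.
    M = np.vstack([ns ** 2, ns, np.ones_like(ns)]).T
    (a, b, c), *_ = np.linalg.lstsq(M, times, rcond=None)
    print(a, b, c)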

Notice that this u is chosen so that au 2 = 3a (u / 2)2 + bu . measure to (n) a number of times for several different values of n .2.Sec.) In practice. we wish to find the index i such that 0 <_ i <_ n and T [i] S x < T [i + I]. and let x be some item. then instead we want to find the position where it might be inserted.1R* defined by the recurrence x2 f(x)= if X <_ s 3fs (x / 2) + bx otherwise. with the logical convention that T [0] _ . If the item we are looking for is not in the array. and then estimate all the necessary constants.T [ j ]. the problem can be solved without recourse to transfinite induction.. (By logical convention. (x) for every real number x. probably using a regression technique. Supposing. 4. 1 < j 5 n = T [i] <. we mean that these values are not in fact present as sentinels in the array. It is probably the simplest application of divide-and-conquer. (For purists : even if the domain of f. Formally. 4. Let a and b be real positive constants. it may happen that tA (n) = an 2 + bn + c for some constants a. and f is not countable.) Problem 4. Prove by mathematical induction that if u = 4b /a and if v is an arbitrary positive real number. Let T [I .4. Although bn + c becomes negligible compared to an 2 when n is large. precisely because infinite recursion is not a worry. Instead. The problem consists of finding x in the array T if indeed it is there. b.3 BINARY SEARCHING Binary searching predates computers.3 Binary Searching 109 * Problem 4. that is. prove that there are no instances that take more than I % longer with threshold 67 than they would with any other threshold. For each positive real number s. it is the algorithm used to look up a word in a dictionary or a name in a telephone directory. In essence.3. if instances of fractional size were allowed. consider the function f5 : IR* -. It is therefore usually insufficient merely to estimate the constant a.2. (Notice that this would not cause an infinite recursion because the threshold is strictly larger than zero. Show that this choice of no = 67 has the merit of being suboptimal for only two values of n in the neighbourhood of the threshold. one more complication arises. and c depending on the implementation. for instance.oo and T [n + 1] _ + oo. n] be an array sorted into increasing order.) The obvious . that is. the basic subalgorithm is used in fact precisely on instances of moderate size. then fu (x) 5 f. that to (n) is quadratic. Furthermore. The following problem shows that a threshold of 64 would be optimal were it always possible to decompose an instance of size n into three subinstances exactly of size n / 2.

approach to this problem is to look sequentially at each element of T until we either come to the end of the array or find an item bigger than x.

    function sequential(T[1..n], x)
      { sequential search for x in array T }
      for i ← 1 to n do
        if T[i] > x then return i − 1
      return n

This algorithm clearly takes a time in Θ(1 + r), where r is the index returned: this is Ω(n) in the worst case and O(1) in the best case. If we assume that all the elements of T are distinct, that x is indeed somewhere in the array, and that it is to be found with equal probability at each possible position, then the average number of trips round the loop is (n² + 3n − 2)/2n. On the average, therefore, as well as in the worst case, sequential search takes a time in Θ(n).

To speed up the search, divide-and-conquer suggests that we should look for x either in the first half of the array or in the second half. To find out which of these searches is appropriate, we compare x to an element in the middle of the array: if x < T[1 + ⌊n/2⌋], then the search for x can be confined to T[1..⌊n/2⌋]; otherwise it is sufficient to search T[1 + ⌊n/2⌋..n]. We obtain the following algorithm.

    function binsearch(T[1..n], x)
      { binary search for x in array T }
      if n = 0 or x < T[1] then return 0
      return binrec(T[1..n], x)

    function binrec(T[i..j], x)
      { binary search for x in subarray T[i..j];
        this procedure is only called if T[i] ≤ x < T[j+1] and i ≤ j }
      if i = j then return i
      k ← (i + j + 1) div 2
      if x < T[k] then return binrec(T[i..k−1], x)
                  else return binrec(T[k..j], x)

The algorithm in fact executes only one of the two recursive calls, so that technically it is an example of simplification rather than of divide-and-conquer.

Problem 4.3.1. Prove that the function binrec is never called on T[i..j] with j < i. Prove too that when binrec(T[i..j], x) makes a recursive call binrec(T[u..v], x), it is always true that v − u < j − i. Conclude from these two results that a call on binsearch always terminates. Show finally that the values T[0] and T[n+1] are never used (except in the comments!).

Problem 4.3.2. Show that the algorithm takes a time in O(log n) to find x in T[1..n], whatever the position of x in T.

Because the recursive call is situated dynamically at the very end of the algorithm, it is easy to produce an iterative version.
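Before looking at the iterative versions that follow, it may be useful to have a directly runnable transcription for experimenting with Problems 4.3.1 and 4.3.2. The Python sketch below keeps the book's 1-based convention in its result (index 0 plays the role of the logical sentinel T[0]).

    def binsearch(T, x):
        """Return the index i, 0 <= i <= n, such that T[i] <= x < T[i+1]
        in 1-based terms, with the conventions T[0] = -infinity, T[n+1] = +infinity."""
        def binrec(i, j):
            # only called when T[i] <= x < T[j+1] and i <= j (1-based indices)
            if i == j:
                return i
            k = (i + j + 1) // 2
            if x < T[k - 1]:            # T[k] of the pseudocode, shifted to 0-based storage
                return binrec(i, k - 1)
            return binrec(k, j)
        if len(T) == 0 or x < T[0]:
            return 0
        return binrec(1, len(T))

    print(binsearch([1, 3, 5, 7], 4))   # 2, since T[2] = 3 <= 4 < T[3] = 5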

    function iterbin(T[1..n], x)
      { iterative binary search for x in array T }
      if n = 0 or x < T[1] then return 0
      i ← 1; j ← n
      while i < j do
        { T[i] ≤ x < T[j+1] }
        k ← (i + j + 1) div 2
        if x < T[k] then j ← k − 1
                    else i ← k
      return i

Problem 4.3.3. It is easy to go wrong when programming the concept of binary searching, simple though this is. Show by examples that the preceding algorithm would be incorrect if we replaced
  i. "k ← (i + j + 1) div 2" by "k ← (i + j) div 2";
  ii. "j ← k − 1" by "j ← k"; or
  iii. "i ← k" by "i ← k + 1".

A first inspection of this algorithm shows what is apparently an inefficiency. Suppose T contains 17 distinct elements and that x = T[13]. On the first trip round the loop, i = 1, j = 17, and k = 9. The comparison between x and T[9] causes the assignment i ← 9 to be executed. On the second trip round the loop i = 9, j = 17, and k = 13. A comparison is made between x and T[13]. This comparison could allow us to end the search immediately, but no test is made for equality, and so the assignment i ← 13 is carried out. Two more trips round the loop are necessary before we leave with i = j = 13. The following algorithm leaves the loop immediately after we find the element we are looking for.

    function iterbin2(T[1..n], x)
      { variant on iterative binary search }
      if n = 0 or x < T[1] then return 0
      i ← 1; j ← n
      while i < j do
        { T[i] ≤ x < T[j+1] }
        k ← (i + j) div 2
        case x < T[k]    : j ← k − 1
             x ≥ T[k+1]  : i ← k + 1
             otherwise   : i, j ← k
      return i

Which of these algorithms is better? The first systematically makes a number of trips round the loop in Θ(log n), regardless of the position of x in T, while it is possible that the variant will only make one or two trips round the loop if x is favourably

On the other hand. n >-2 b(n)=n +b([n/21-1)+b(Ln/2j). which causes the assignment j . which causes the assignment i E. We proceed by constructive induction. n >-3 a(1)=b(1)=0. (*) The first equation is easy in the case when n is a power of 2. guessing the likely form of the answer and determining the missing parameters in the course of a tentative proof by mathematical induction. It seems reasonable to hope that . we shall analyse exactly the average number of trips round the loop that each version makes. Analysis of the Second Version. which yields a (n) = n lg n using the techniques of Section 2. x < T [k]. occupying each possible position with equal probability. Let A (n) and B (n) be the average number of trips round the loop made by the first and the second iterative versions. Llg n] = lg n *. we obtain the recurrence B(n)= 1+ Fn/ 21 . B(2)=I. after which the algorithm starts over on an instance reduced to k -1 elements. In a similar way.B(Ln/2j). b(2)=2.a (n) <. respectively. What might we add to n Llg n j to arrive at a (n) ? Let n * denote the largest power of 2 that is less than or equal to n. A likely hypothesis. With probability (k -1)/n . respectively.1)/n. Analysis of the First Version. since it then reduces to a(n)=2a(n/2)+n. n >-3 B(1)=0. x > T [k]. In particular. Define a (n) and b (n) as n A (n) and n B (n).Divide-and-Conquer 112 Chap. n ?2 a(1)=0. already shown to hold when n is a power of 2.n Fig n 1. n?2 A(1)=0. With probability I .3.k . after which the algorithm starts over on an instance reduced to n -k + 1 elements.k. 4 situated. Suppose to make life simpler that T contains n distinct elements and that x is indeed somewhere in T.1. One trip round the loop is carried out before the algorithm starts over. Exact analysis for arbitrary n is harder. To compare them. The equations then become a(n)=n +a(Ln/2])+a([n/2l). is that n Llg n j <. a trip round the loop in the variant will take a little longer to execute on the average than a trip round the loop in the first algorithm.IB(Fn/21-1)+ Ln/2. Let k = 1 + Ln / 2j.(k . taking k = In / 21. so the average number of trips round the loop is given by the recurrence A(n)=1+ Ln/2]A(Ln/2])+ rn/21A(Ln/21).

c and d being still unconstrained. that is A (n) = Llg n] + 2(1.n */ n) In particular. we therefore know a (n) = n lg n * + cn + do * . d. it is necessary and sufficient that n1gn+1 +(c+d)n+d = nlgn+l +(c+3d+!)n+(3d+ i).3 113 Binary Searching a (n) = n lg n * + cn + do * + e lg n * +f for appropriate constants c. 2 4 2 2 2 2 4 2 that is 4c + 2d = 4c + 3d + 2 and 2d = 3d+2. To prove HI(n) in this case. and f .2n * true for the base n = 1. we need e = 0 and f = 0. To prove HI(n) in this case. Denote this hypothesis by HI (n). then Ln / 2] _ (n -1)/2.rig n I . They allow us to conclude that d = .A (n) <. When n > 1 is not of the form 2'-1. when looking for an element that is in fact present with uniform probability distribution among the n different elements of an array sorted into increasing order. using the recurrence (*).2. . then we shall have proved by mathematical induction that it is true for every positive integer n. These two equations are not linearly independent. When n > 1 is of the form 2'-1. there still being no constraints on c. which gives c = 2 and implies that the general solution of the recurrence (*) for a (n) is a(n)=nlgn*+2(n-n*). is given by A (n) = a (n)l n . e. our initial guess holds : Llg nJ <. HI (Ln / 2]) and HI (rn / 2 1 ). (Ln / 2] )* =(n+1)14 and Fn/21 = (rn / 21)* = n * = (n + 1)/2. then (Ln / 2j)* _ (rn / 21)* = n */ 2. 4. If our hypothesis is correct. Our final constraint is therefore 0=a(1)=c -2. At this point we know that if only we can make the hypothesis a (n) = n lg n * + cn . The average number of trips round the loop executed by the first iterative algorithm for binary searching.Sec. it is thus necessary and sufficient that n lgn*+cn +dn*+e lgn*+f = n lgn*+cn +dn*+2e lgn*+(2f -2e) that is.

e . * Problem 4. and such that the exact solution of the recurrence is b (n) = n lg n * + n .n(n). d.2n* + lg n * + (1 +c).n(n) <. we obtain A(n)-B(n)=1+ lt(n)-LngnJ-2 3 Thus we see that the first algorithm makes on the average less than one and a half trips round the loop more than the second. 1N+ such that Show that there exists a function it : IN+ (n + 1)/3 <. Nonetheless. The problem arises because the two basic cases are incompatible in the sense that it is impossible to obtain them both from the recurrence starting from some artificial definition of b (0).Divide-and-Conquer 114 Chap. Given that the first algorithm takes less time on the average than the variant to execute one trip round the loop. thus obtaining four linear equations in four unknowns. Show that the function n(n) of the previous exercise is given by lt(n -1) = jl n */ 2 if 2n < 3n * n-n * otherwise for all n > 2. It seems reasonable to formulate the same incompletely specified induction hypothesis : b (n) = n lg n * + cn + do * + e lg n * + f for some new con- stants c. which can easily be solved to give the same c. The general solution of the recurrence for b (n) is more difficult than the one we have just obtained. and 4 for n in the hypothesis HI (n).2. solve equation (*) exactly for b (n) when n is of the form 2' -1. 2.3. b (1) = 0 = c = 2 . and f .3. Stated more elegantly : * Problem 4. The case when n ? 4 is a power of 2 obliges us to choose d = .3 is not a power of 2. 3.nlgn*+2n/3-2n*+lgn*+5/3.3.6. A simple modification of the argument allows us to conclude that nlgn*+n/2-2n*+lgn*+3/2 <.2n * + lg n * + 2 . Problem 4. hypothesis was wrong. we conclude that the . d.3. The hypothesis therefore becomes b (n) = n lg n * + cn .4. A seemingly simpler approach to determine the constants would be to substitute the values 1. Equivalently lt(n -1) = [n * + On / n *j -2)(2n -3n*)] / 2 We are finally in a position to answer the initial question : which of the two algorithms for binary searching is preferable? By combining the preceding analysis of the function a (n) with the solution to Problem 4. Unfortunately.(n + 1)/2 for every positive integer n.5. Explain why this is insufficient. e.7.6.3. Constructive induction yields e = 1 and f = 1 + c to take account of the case when n >. and f .b(n) <. 4 Problem 4. is whereas b (2) = 2 which inconsistent and shows that the original c=3. our efforts were not entirely in vain. Using the techniques presented in Section 2.3.

first algorithm is more efficient than the second on the average whenever n is sufficiently large. However, the threshold beyond which the first algorithm is preferable to the variant can be very high for some implementations. The situation is similar if the element we are looking for is not in fact in the array.

4.4 SORTING BY MERGING

Let T[1..n] be an array of n elements for which there exists a total ordering. We are interested in the problem of sorting these elements into ascending order. We have already seen that the problem can be solved by selection sorting and insertion sorting (Section 1.4), or by heapsort (Example 2.2.4 and Problem 2.2.3). Recall that an analysis both in the worst case and on the average shows that the latter method takes a time in O(n log n), whereas both the former methods take quadratic time.

The obvious divide-and-conquer approach to this problem consists of separating the array T into two parts whose sizes are as nearly equal as possible, sorting these parts by recursive calls, and then merging the solutions for each part, being careful to preserve the order. When the number of elements to be sorted is small, a relatively simple algorithm is used instead; the recursive decomposition is applied only when this is justified by the number of elements. To do this, we need an efficient algorithm for merging two sorted arrays.

Problem 4.4.1. Give an algorithm capable of merging two sorted arrays U and V in linear time, that is, in a time in the exact order of the sum of the lengths of U and V.

** Problem 4.4.2. Repeat the previous problem, but without using an auxiliary array: the sections T[1..k] and T[k+1..n] of an array are sorted independently, and you wish to sort the whole array T[1..n]. You may only use a fixed number of working variables to solve the problem, and your algorithm must work in linear time.

We obtain the following algorithm:

    procedure mergesort(T[1..n])
      { sorts array T into increasing order }
      if n is small then insert(T)
      else
        arrays U[1..n div 2], V[1..(n+1) div 2]
        U ← T[1..n div 2]
        V ← T[1 + (n div 2)..n]
        mergesort(U); mergesort(V)
        merge(T, U, V) ,

where insert(T) is the algorithm for sorting by insertion from Section 1.4, and merge(T, U, V) merges into a single sorted array T two arrays U and V that are already sorted.
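A Python sketch along the same lines may be convenient for experimentation; the built-in sorted() stands in for the simple basic subalgorithm below the threshold, and the threshold value chosen here is arbitrary.

    def merge(U, V):
        """Merge two sorted lists into one sorted list, in time linear
        in len(U) + len(V)."""
        result, i, j = [], 0, 0
        while i < len(U) and j < len(V):
            if U[i] <= V[j]:
                result.append(U[i]); i += 1
            else:
                result.append(V[j]); j += 1
        return result + U[i:] + V[j:]

    def mergesort(T, threshold=16):
        """Sort T by merging: split into two halves of nearly equal size,
        sort each recursively, then merge the two sorted halves."""
        if len(T) <= threshold:
            return sorted(T)
        mid = len(T) // 2
        return merge(mergesort(T[:mid], threshold), mergesort(T[mid:], threshold))

    print(mergesort([3, 1, 4, 1, 5, 9, 2, 6], threshold=2))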

Problem 4. have two points in common. The fact that the sum of the sizes of the subinstances is equal to the size of the original instance is not typical of algorithms derived using divide-and-conquer.1) + t'(1) + O(n). and then to merge the three sorted arrays. this algorithm chooses one of the items in the array to be sorted as the pivot. and analyse its execution time.5. look at what happens if instead we decide to separate T into an array U with n -1 elements and an array V containing only 1 element. This poor sorting algorithm is very like one we have already seen in this book. 4 mergesort separates the instance into two subinstances half the size. and those suggested by the two previous problems.1. Problem 4. as we shall see in several subsequent examples. the fact that the original instance is divided into subinstances whose sizes are as nearly as possible equal is crucial if we are to arrive at an efficient algorithm. the nonrecursive part of the work to be done is spent constructing the subinstances rather than combining their solutions.5 QUICKSORT The sorting algorithm invented by Hoare. This equation.1. respectively. and analyse its performance. to sort each of these recursively. and why? 4.6.4. solves each of these recursively. Simply forgetting to balance the sizes of the subinstances can therefore be disastrous for the efficiency of an algorithm obtained using divide-and-conquer. and then combines the two sorted half-arrays to obtain the solution to the original instance.4. L(n + 1)/3j. we might choose to separate T into about LI arrays. On the other hand. By the result of Problem 4.4.116 Divide-and-Conquer Chap. Let t'(n) be the time required by this variant to sort n items. we might choose to separate it into three arrays of size Ln/3]. usually known as "quicksort". The array is then partitioned on either side of the pivot : elements . Show that t'(n) E O(n 2).4. Which one. Consequently. Let t(n) be the time taken by this algorithm to sort an array of n elements. The merge sorting algorithm we gave. ** Problem 4.3. Rather than separate T into two half-size arrays. which we analysed in Section 2. is also based on the idea of divide-and-conquer. Separating T into U and V takes linear time. allows us to conclude that the time required by the algorithm for sorting by merging is in O(n log n). Following up the previous problem.4. Unlike sorting by merging.6.4. We obtain t'(n) E t'(n . and L(n + 2)/3]. each containing approximately L' elements. To see why. t(n) E t(Ln / 2j) + t (Fn / 21) + 0(n). Problem 4. the final merge also takes linear time. Develop this idea. Give a more formal description of this algorithm. As a first step.

Finally.1 until T [1] p interchange T [i] and T [1] Invent several examples representing different situations that Problem 4. j ] into increasing order } if j . no subsequent merge step being necessary. Now T [k] and T [1] are interchanged. respectively. . j ] just once. however.Sec. crucial in practice that the hidden constant be small..j. the final result is a completely sorted array. T[11 =p. Pointer k is then incremented until T [k] >p. i <. procedure pivot (T [i .j repeat I E.1 until T [1] <. To balance the sizes of the two subinstances to be sorted. j ].. j ] . 1-1] are not greater than p.. finding the median takes more time than it is worth. might arise.k + 1 until T [k] > p repeat 1 f.) Unfortunately. Quicksort is inefficient if it happens systematically on most recursive calls that the subinstances T [i ..1.. and the elements of T J1+1 .6. var 1) { permutes the elements in array T [i .1<-j+1 repeat k. j ] are greater than p.4 } else pivot (T [i . see Section 4. j ]) = T[k]>T[1]} Designing a linear time pivoting algorithm is no challenge..1.p. (For a definition of the median. we would like to use the median element as the pivot.k + 1 until T [k] > p or k >. One good way of pivoting consists of scanning the array T [i .. I . Pointers k and 1 are initialized to i and j + 1.. j ]) { sorts array T [i . and pointer I is decremented until T [1] <.1.i is small then insert (T [i j ]) { Section 1.k < 1 1 <k 5j and quicksort(T [i ..5.l <. T [i] and T [1] are interchanged to put the pivot in its correct position. whereas all the others are moved to its left.p while k < I do interchange T [k] and T [1] repeat k F. If now the two sections of the array on either side of the pivot are sorted independently by recursive calls of the algorithm. and simulate the pivoting algorithm on these examples. It is.. the elements of T [i . i <. where p is the initial value of T [i ] } p -T[i] kF-i.. Here is the algorithm. at the end. For this reason we simply use the first element of the array as the pivot. procedure quicksort (T [i . This process continues as long as k < 1.1) T [k] 5 T [l] { after pivoting.5 117 Quicksort are moved in such a way that those greater than the pivot are placed on its right. I -1]) quicksort (T [1 +1 . j ] to be sorted are severely unbalanced. j ] in such a way that. but starting at both ends. 4. Let p = T [i ] be the pivot..1] and T [1 + 1 .

and let f : [a. n ]. each value having probability 11n.1) can therefore be any integer between 1 and n. This pivoting operation takes a time in O(n). On the other hand. Give an explicit example of an array to be sorted that causes such behaviour. It remains to sort recursively two subarrays of size 1 -1 and n -1.) Let a and b be real numbers.j < k <. we assume that all the elements of T are distinct and that each of the n! possible initial permutations of the elements has the same probability of occurring. j = no and k = n . reasonable to hope that t(n) will be in 0 (n log n) and to apply constructive induction to look for a constant c such that t (n) S cn lg n . we obtain n-1 E i lg i i=no+1 n <- f x lg x dx x=no+1 . if the array to be sorted is initially in random order.do + n t (k) for n > n 0 k=0 An equation of this type is more difficult to analyse than the linear recurrences we saw in Section 2.. Let t (m) be the average time taken by a call on quicksort (T [a + 1 . respectively. b ] . taking f (x) = x lg x. To determine the average time required by quicksort to sort an array of n items. it is likely that most of the time the subinstances to be sorted will be sufficiently well balanced.n -m. Consequently.m <. Then k-1 k f(i) < J f(x) dx . a < b . (We suggest you find a graphical interpretation of the lemma.Divide-and-Conquer 118 Chap. it is. By analogy with sorting by merging. a + m ]) for 0 <.b. The value of I returned by the pivoting algorithm after the initial call pivot (T [I . (t(1-1) + t(n -1)) ..3. nevertheless. Show that in the worst case quicksort requires quadratic time. let d and no be two constants such that n-1 t (n) <.2. t(n) E O(n) + n 1=1 A little manipulation yields t(n)eO(n)+ n n-1 t(k) k =O To make this more explicit.IR be a nondecreasing function. The average time required to execute these recursive calls is t(I -1) + t(n -1).5. To use this approach we need an upper bound on i =no+ 1 i lg i. In particular. 4 Problem 4. Let j and k be two integers such that a <. The pivot chosen by the algorithm is situated with equal probability in any position with respect to the other elements of T.n and 0:5 a <. This is obtained with the help of a simple lemma.

Igex2 4 2 n . n ] I T [ i ] < m } <n/2 and # { i E [ I . 4.cn lg n for all n > n o >_ 1. 4. Outline a modification to the algorithm to avoid this. Intuitively. The hidden constant is in practice smaller than those involved in heapsort or in merge sort. Thus we define m to be the median of T if and only if m is in T and # I i E [ I . at the price of a small increase in the hidden constant. by choosing as pivot the median of T [i]. We mention this possibility only to point out that it should be shunned : the hidden constant associated with the "improved" version of quicksort is so large that it results in an algorithm worse than heapsort in every case.Ige n2 provided no > 1. T[(i+j)div21 andT[j]. whatever the choice of pivot.5. Complete the Problem 4. where c= 2d lge + proof by 4 (no+l)2lge mathematical induction that t (k) . it is not obvious that the median can be found so easily. n ] be an array of integers. Problem 4. .3. quicksort as described here always takes a time in S2(n2) in the worst case. . n ] I T [ i ] < m f >_n/2. If an occasional long execution time can be tolerated..6 SELECTION AND THE MEDIAN Let T [I . k=o Quicksort can therefore sort an array of n distinct elements in an average time in O (n log n).x=no+1 2 < 2 lgn . Show by a simple argument that. or that the elements of T may not all be distinct. What could be easier than to find the smallest element or to calculate the mean of all the elements ? However. this is the sorting algorithm to be preferred among all those presented in this book.Sec. the median of T is that element m in T such that there are as many items in T smaller than m as there are items larger than m.6 Selection and the Median 119 [-lx . t (n) <. By combining the modification hinted at in the previous problem with the linear algorithm from the following section.5. The probability of suffering an execution time in U(n 2) can be greatly diminished.4. we can obtain a version of quicksort that takes a time in 0 (n log n) even in the worst case. . The formal definition takes care of the possibility that n may be even.

Divide-and-Conquer 120 Chap.2.6. containing the elements of T that are smaller than p. Problem 4.n]IT[i]Sp } if k S u then array U[1 . n ]. this algorithm assumes that 1 k <_ n } if n is small then sort T return T RI p .# { i e [l . give a nonrecursive version of the selection algorithm.. n ].. and T [j + 1 . n i smallest element of T is that l T[i] <m } <k. function selection(T [ 1 . If we use heapsort or merge sort. 4 The naive algorithm for determining the median of T consists of sorting the array into ascending order and then extracting the In / 21th entry. and let k be an integer between 1 and n. Your algorithm should scan T once only. not calculated beforehand. k) { finds the k th smallest element of T ..m } >_ k . In other words. . the median of T is its In / 21th smallest element. which is not yet completely specified. Using ideas from the iterative version of binary searching seen in Section 4.1. Let T be an array of n elements. this algorithm takes a time in 0 (n log n) to determine the median of n ele- ments. Can we do better? To answer this question. k -v ) Problem 4. T [i . and greater than p. T [I . n] element m such that I T [i ] <. j]. and no auxiliary arrays should be used. respectively. k) if k <.some element of T [I ...5 to partition the array T into three sections. equal to p. For instance. n ] I T [i] < p } v #{i a [1.the elements of T larger than p { the k th smallest element of T is also the (k-v )th smallest of V } return selection(V.. Do not use any auxiliary arrays.v then { got it ! } return p otherwise { k > v } array V[1 .6. n -v ] V . n ] { to be specified later) u . solves the selection problem in a way suggested by quicksort... it is the k th item in T if the array is sorted into ascending order.3 and the pivoting procedure of the previous problem. The kth # { i e [ l .. The following algorithm. Your algorithm is allowed to alter the initial order of the elements of T. whereas # { i e [ I . i -1 ]. Generalize the notion of pivoting from Section 4. u ] U F the elements of T smaller than p { the k th smallest element of T is also the k th smallest element of U } return selection(U.. we consider a more general problem : selection.. The values i and j should be returned by the pivoting procedure.

4. the arrays U and V therefore contain a maximum of Ln / 2] elements. n ]. In this problem.6.6. consider the selection algorithm obtained by choosing "p f. n . and k > 1. Show that tm (n) is in O (n).) Note that the technique hinted at in this exercise only aprlies because the average time turns out to be linear : the average time taken on several instances is not otherwise equal to the time taken on an average instance. k) stand for the expected size of the subarray involved in the first recursive call produced by a call on selection (T [I . which causes a recursive call on n -1 elements.Ln / 2]. independently of the value of k.. When we do this.3. u < [n / 21 and v ? [n / 21. For the time being. v = 1. by definition of the median. the algorithm works by simplification.k) < n + k (n-k) < 3n 2 n 4 Assuming that the pivoting algorithm preserves random ordering in the subarrays it produces for the recursive calls. We have tm(n)E0(n)+max{ tm(i) I i 5 Ln/2] }. Prove that E(n. * Problem 4. (The hidden constant must not depend on k. (n + 1) div 2) "? Suppose first that the median can be obtained by magic at unit cost.selection (T.6 Selection and the Median 121 Which element of T should we use as the pivot p ? The natural choice is surely the median of T. still supposing that the median can be obtained magically. we can once again borrow an idea from quicksort and choose simply p F-T[1]. But what shall we do if there is no magic way of getting the median? If we are willing to sacrifice speed in the worst case in order to obtain an algorithm reasonably fast on tf. Thus we have an algorithm capable of finding the k th smallest element of an array in a time linear in the size of the array. Let E (n . Consequently.T [I] ". taking the size to be zero if there is no recursive call. prove that this selection algorithm takes linear time on the average.6. The remaining operations. 4. To analyse the efficiency of the selection algorithm. we hope that on the average the sizes of the arrays U and V will not be too unbalanced. notice first that. What happens to the selection algorithm if the choice of p is Problem 4. therefore. so that the sizes of the arrays U and V will be as similar as possible (even if at most one of these two arrays will actually be used in a recursive call).e average. not by divide-and-conquer. k). take a time in O (n). Problem 4. Assume the n elements of the array T are distinct and that each of the n! permutations of the elements has the same probability of occurring. whatever the value of k. even if occasionally we meet an instance where u = 0.v <. made using "p <.5. Let tm (n) be the time required by this method in the worst case to find the k th smallest element of an array of at most n elements. If there is a recursive call.Sec. .

n div 5 array S [I . Show similarly that #{iE[I. s ] for i E. imagine that all the elements of T are arranged in five rows. . Notice that the box contains approximately three-fifths of one-half of the elements of T. Therefore # { i E [ 1 .6. The conclusion is that although m is perhaps not the exact median of T.. Similarly. about 3n / 10 elements. I T[i]m }>31s/21=31Ln/5]/21>(3n-12)/10. yet its rank is approximately between 3n / 10 and 7n / 10.s]IS[i]<-m }> Is/21. that is. i2. the element in the circle corresponds to the median of this array. Assuming n ? 5. although nothing in the execution of the algorithm pseudomed really corresponds to this illustration. We now look at the efficiency of the selection algorithm given at the beginning of this section when we use p E.. the smallest elements going to the left and to the top. as is each of the Ln / 5] columns.n]IT[i]<m }<(7n-3)/10. This can be done with a little cunning. i3 between Si . consider the following algorithm : function pseudomed (T [1 .6.m . each of the elements in the box is less than or equal to m. (s + 1) div 2) . To visualize how these factors arise. respectively.. Consequently. We look first at the value of the approximation to the median found by the algo- rithm pseudomed. to the value of m returned by the algorithm. with the possible exception of one to four elements left aside (Figure 4.6. n ] Problem 4. n ]) { finds an approximation to the median of array T) s . This quadratic worst case can be avoided without sacrificing linear behaviour on the average : the idea is to find quickly a good approximation to the median.1 to s do S [i] t. Note that the time taken by adhocmed5 is bounded above by a constant.1).7. we have #{ie[l. But each element of S is the median of five elements of T. there are three i 1.Divide-and-Conquer 122 Problem 4. Since m is the exact median of the array S.6. Chap.adhocmed5(T [5i-4 . By the transitivity of ":5 ".pseudomed(T) .. where adhocmed 5 is an algorithm specially designed to find the median of exactly five elements.. 4 Show.4 and 5i such that T [i 1] <_ T [i 2] <_ T [i 3] = S [i ] m. that is. Now suppose that the middle row is sorted by magic. for every i such that S [i ] <. that in the worst case this algorithm requires quadratic time.. 5i ]) return selection(S. Let m be this approximation. however. The middle row corresponds to the array S in the algorithm.

6. yet simpler problem of the same type. because the array S can be constructed in linear time. l7 Problem 4.(7n + 12)/10 } . calculating pseudomed (T) takes a time in O (n) + t( Ln / 5]).10.f (LpnJ) +.12)/10. Let n be the number of elements in T. Let f : IN -* IR* be any function such that f(n) =. Problem 4. 4. Problem 4.f (Lqn]) + bn for every n > n 0. Let p and q be two positive real constants such that p + q < 1. Argue that t(n)ES2(n). Let t (n) be the time required in the worst case to find the k th smallest element in an array of n elements using the selection algorithm discussed earlier. Calculating u and v also takes linear time. * Problem 4.6.9 (with p =1/5 and q = 3/4) such that t(n) <_ f(n) for every integer n.6 Selection and the Median Figure 4. and let t (n) be the time required in the worst case by this algorithm to find the k th smallest element of T.6. What is it in the algorithm that makes this restriction necessary? This equation looks quite complicated. and let b be some positive real constant.(7n . We have to ensure that n > 4 in the preceding equation. Give explicitly a nondecreasing function f(n) defined as in Problem 4.(7n + 12)/10. The recursive call that may follow therefore takes a time bounded above by max { t (i) I i <.9. still independently of the value of k. it is possible to find the median of an array of n elements in linear time.1. 123 Visualization of the pseudomedian. so n -v <. Use constructive induction to prove that f (n) E O(n). let no be a positive integer. First let us solve a more general.6.3)/ 10 and v >.6.Sec. and thus t(n)EO(n). At the first step. The version of the algorithm suggested by Problem 4. there exists a constant c such that t(n)<-t(Ln/5j)+max{t(i)Ii <-(7n+12)/10}+cn for every sufficiently large n.2 is preferable in prac- .6. The initial preparation of the arrays U and V takes linear time. Hence.6. Conclude that t(n)eO(n).7 and the preceding discussion show that u <. In particular.8.(3n .

It must also allow negative numbers to be represented. From a more practical point of view. and it must be possible to carry out in linear time multiplications and integer divisions by positive powers of 10 (or another base if you prefer). Problem 4. 4 tice. the classic algorithm and multiplication a la russe both take quadratic time to multiply these same operands. large integers are of crucial importance in cryptology (Section 4. that is. Problem 4.7. Can we do better? Let u and v be two integers of n decimal digits to be multiplied. On the other hand.8).3. For some applications we have to consider very large integers. 4. Your representation should use a number of bits in O (n) for an integer that can be expressed in n decimal digits. the time required to execute these operations is bounded above by a constant that depends only on the speed of the circuits in the computer being used.1). This is only reasonable if the size of the operands is such that they can be handled directly by the hardware. for instance. we can also construct the array S (needed to calculate the pseudomedian) by exchanging elements inside the array T itself. sufficiently efficient to be used with such operands (see Chapter 9 for more on this). Your solution to Problem 4. we are obliged to implement the arithmetic operations in software. Representing these numbers in floating-point is not useful unless we are concerned solely with the order of magnitude and a few of the most significant figures of our results. Design a good data structure for representing large integers on Problem 4.2.1. as well as additions and subtractions.7.7 ARITHMETIC WITH LARGE INTEGERS In most of the preceding analyses we have taken it for granted that addition and multiplication are elementary operations.2 shows how to add two integers in linear time. To use still less auxiliary space. If results have to be calculated exactly and all the figures count. this feat constitutes an excellent aerobic exercice for the computer !) The algorithm developed in this section is not.2 on a computer using the representation you invented for Problem 4. This was necessary.124 Divide-and-Conquer Chap. this is no longer true when the operands involved are very large.1.7.7. Give an algorithm able to add an integer with m digits and an integer with n digits in a time in O(m + n). Divide-and-conquer suggests that we should separate . a computer. alas. Implement your solution to Problem 4.7.7. when the Japanese calculated the first 134 mil- lion digits of It in early 1987. Also implement the classic algorithm for multiplying large integers (see Section 1. Although an elementary multiplication takes scarcely more time than an addition on most computers. (At the very least. even though this does not constitute an iterative algorithm : it avoids calculating u and v beforehand and using two auxiliary arrays U and V.

each of these operands into two parts of as near the same size as possible: u = 10^s w + x and v = 10^s y + z, where 0 ≤ x < 10^s, 0 ≤ z < 10^s, and s = ⌊n/2⌋. (For convenience, we say that an integer has j digits if it is smaller than 10^j, even if it is not greater than or equal to 10^(j−1).) The integers w and y therefore both have ⌈n/2⌉ digits. See Figure 4.7.1.

Figure 4.7.1. Splitting the operands for large integer multiplication: u is split into w (its ⌈n/2⌉ high-order digits) and x (its ⌊n/2⌋ low-order digits), and v is split into y and z in the same way.

The product that interests us is

    uv = 10^(2s) wy + 10^s (wz + xy) + xz .

We obtain the following algorithm.

    function mult(u, v : large-integers) : large-integer
      n ← smallest integer so that u and v are of size n
      if n is small then multiply u by v using the classic algorithm
                         return the product thus computed
      s ← n div 2
      w ← u div 10^s ;  x ← u mod 10^s
      y ← v div 10^s ;  z ← v mod 10^s
      return mult(w, y) × 10^(2s) + (mult(w, z) + mult(x, y)) × 10^s + mult(x, z)

Let tb(n) be the time required by this algorithm in the worst case to multiply two n-digit integers. If we use the representation suggested in Problem 4.7.1 and the algorithms of Problem 4.7.2, the integer divisions and multiplications by 10^(2s) and 10^s, as well as the additions, are executed in linear time. The same is true of the modulo operations, since these are equivalent to an integer division, a multiplication, and a subtraction. The last statement of the algorithm consists of four recursive calls, each of which serves to multiply two integers whose size is about n/2. Thus

    tb(n) ∈ 3 tb(⌈n/2⌉) + tb(⌊n/2⌋) + O(n) .

This equation becomes tb(n) ∈ 4 tb(n/2) + O(n) when n is a power of 2, and the techniques of Chapter 2 show that the time taken by the preceding algorithm is therefore quadratic, so we have not made any improvement compared to the classic algorithm. In fact, we have only managed to increase the hidden constant! The trick that allows us to speed things up consists of calculating wy, wz + xy, and xz by executing less than four half-length multiplications, even if this means that

This resembles the equation tc (n) E 3tc J n / 21) + 0 (n). A good implementation will probably not use base 10.608 -1.541-4. It is thus possible to multiply two n digit integers in a time in 0 (n 1g3). We require to multiply u = 2.410. which is unacceptable if m is very much smaller . Of course. and z = 89.345 and v = 6.7. the algorithm we have just seen can multiply u and v. y = 67. Recall that the performance of this algorithm and of the classic algorithm are compared empirically at the end of Section 1. our example is so small that it would be quicker to use the classic multiplication algorithm in this case. but rather.t(Ln12. This gives an execution time in 0 (n'3). Let u and v be integers of exactly m and n digits. This suggests we should replace the last statement of the algorithm by r E. p = 23 x 67 = 1541. Using Problem 2.J)+t([n/21)+t(l+Fn121)+cn for every n ? no. the required product uv is obtained by calculating 1.920. However.200 + 405 = 15. Problem 4.mult(w. Suppose without loss of generality that m <. Two other multiplications are needed to isolate these terms.608. Consider the product r =(w+x)(y+z)=wy +(wz+xy)+xz .4.11 (notice that t(n) is nondecreasing by definition). which is in 0 (n 1.789.59). Taking account of the fact that w + x and y + z may have up to 1 + Fn / 21 digits. This is sensible because addition is much faster than multiplication when the operands are large. the hidden constants are such that this algorithm only becomes interesting in practice when n is quite large.7. Even when m # n. y +z) p <. we conclude that t (n) E 0 (n 1g3). decomposition of the operands gives n = 4. we find that there exist constants c E IIt+ and no E IN such that t(n) <. q = 45 x 89 = 4. After only one multiplication this includes the three terms we need in order to calculate uv.z) return 102sp + l0s (r -p -q) + q Let t(n) be the time required by the modified algorithm to multiply two integers of size at most n .Divide-and-Conquer 126 Chap. We obtain successively and r = (23 + 45)(67 + 89) = 68 x 156 = 10. The classic algorithm obtains the product of u by v in a time in 0 (mn). s = 2. 4 we have to do more additions.205 . since it will simply treat u as though it too were of size n. The initial Example 4.000 + 506.q -mult(x.7. w = 23.2.541x104 + (10.y). the largest base for which the hardware allows two "digits" to be multiplied directly. Finally. x = 45.n.1.005 = 15. which we met at the beginning of the chapter.005)x102 + 4.005. respectively.mult (w +x.3.

8.7. Generalize the algorithm suggested by Problem 4. * Problem 4.11 (analysis of the algorithm fib3) in the context of our new multiplication algorithm. Following up Problem 4. and compare your answer once again to Example 2. Problem 4.7.7. (Remark: Integer division of an n digit integer by an arbitrary constant k can be carried out in a time in O(n). Analyse the efficiency of the algorithm suggested by this idea.7.7.3. Following up Problem 4. Find two fundamental errors in this analysis of supermul.7. that is.7.7.) ** Problem 4.2. so as to obtain the required product using five multiplications of integers about one-third as long (not nine as would seem necessary at first sight). consider the following algorithm for multiplying large integers. although the value of the hidden constant may depend on k.8. Show by a simple argument that the preceding problem is impossible if we insist that the algorithm A a must take a time in the exact order of n a. Your algorithm must allow for the possibility that the operands may differ in size.5.8.1+(lglgn)/lgn multiply u and v using algorithm A a return the product thus computed At first glance this algorithm seems to multiply two n digit numbers in a time in the order of n a. Show that it is possible to separate each of the operands to be multiplied into three parts rather than two. it is nevertheless possible to multiply two n digit integers in a time in 0 (n log n log log n) by separating each operand to be multiplied into about parts of about the same size and using Fast Fourier Transforms (Section 9.7 Arithmetic with Large Integers 127 than n. Problem 4. Show that it is possible in this case to multiply u and v in a time in O (nm lg(3i2)) ** Problem 4.2. .10.6. function supermul (u.10 does not work. 4. an algorithm Aa.7. * Problem 4.7. implement on your machine the algorithm we have just discussed. Compare empirically this implementation and the one you made for Problem 4. that can multiply two n digit integers in a time in the order of n a.Sec.smallest integer so that u and v are of size n if n is small then multiply u by v using the classic algorithm elsea .9.7. by showing that there exists. v : large-integers) : large-integer n .3. Rework Problem 2.7. where a = 1 + (lg lg n) / lg n. Although the idea tried in Problem 4. for every real number a > 1.5). in a time in 0 (n log n).7.

x < p } under multiplication modulo p.3).Divide-and-Conquer 128 Chap. although Eve can overhear conversations. The cyclic multiplicative group ZP is defined as { x E N I 1 <.8 EXPONENTIATION : AN INTRODUCTION TO CRYPTOLOGY Alice and Bob do not initially share any common secret information. They do not want Eve to be privy to their newly exchanged secret. Clearly.2. Let p be an odd prime number. the condition that g be a generator is necessary if Alice and Bob require that the secret .8.1. neither of them can control directly what this value will be. An integer g in this group is called a generator if each member of the group can be obtained as some integral power of g. Such a generator always exists. a malevolent third party. Their problem is complicated by the fact that the only way they can communicate is by using a telephone. Bob sends Alice the value b = g B mod p. The security of the secret they intend to establish is not compromised should Eve learn these two numbers. such as calculating the greatest common divisor. Several other protocols have been proposed since. 4. 4 Multiplication is not the only interesting operation involving large integers. and the calculation of the integer part of a square root can all be carried out in a time whose order is the same as that required for multiplication (Section 10. At the second step Alice and Bob choose randomly and independently of each other two positive integers A and B less than p.1. which they suspect is being tapped by Eve. Find a protocol by which Alice and Bob can attain their ends.8.2. delay reading the rest of this section!) A first solution to this problem was given in 1976 by Diffie and Hellman. Some other important operations. To simplify the problem. modulo operations. Finally. similarly. and on some other integer g between 2 and p . the secret value exchanged can now be used as the key in a conventional cryptographic system. they are not treated here. Alice computes x = b A mod p and Bob calculates y = a B mod p. ** Problem 4. As a first step. we assume that. For some reason they wish to establish such a secret. This value is therefore a piece of information shared by Alice and Bob. Now x = y since both are equal to g' mod p. Alice and Bob agree openly on some integer p with a few hundred decimal digits.g A mod p and transmits this result to Bob . They cannot therefore use this protocol to exchange directly a message chosen beforehand by one or the other of them. (If you wish to think about the problem. Next Alice computes a =. she can neither add nor modify messages on the communications line. Clearly. * Problem 4. Integer division. may well be inherently harder to compute . Nevertheless.
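The exchange described above can be simulated in a few lines of Python using the built-in modular exponentiation pow (the efficient computation of such exponentiations is discussed below). The prime used in the demonstration is of course far too small to offer any security, and whether the chosen base is a generator is not checked here: the two parties still end up with the same value, a generator merely ensures that every value between 1 and p − 1 is possible.

    import random

    def diffie_hellman_demo(p, g):
        """One run of the key exchange: A and B are the secret exponents,
        a and b the values exchanged over the insecure line."""
        A = random.randint(1, p - 1)      # chosen by Alice, kept secret
        B = random.randint(1, p - 1)      # chosen by Bob, kept secret
        a = pow(g, A, p)                  # sent by Alice; Eve may see it
        b = pow(g, B, p)                  # sent by Bob; Eve may see it
        x = pow(b, A, p)                  # computed by Alice
        y = pow(a, B, p)                  # computed by Bob
        assert x == y                     # both equal g**(A*B) mod p
        return x

    print(diffie_hellman_demo(7919, 7))   # 7919 is prime; these tiny values are illustrative only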

a. There exists an obvious algorithm to solve it. and 1 <. For the time being. g. g. Calculating A' from p.Sec. and b only. it seems therefore that this method of providing Alice and Bob with a shared secret is sound.2 hold. Prove. there is no integer A such that 3=2Amod7. the algorithm returns the value p. although no one has yet been able to prove this.A' < p . g and a is called the problem of the discrete logarithm. and then to proceed like Alice to calculate x' = b A' mod p .xg until (a = x mod p) or (A = p) return A This algorithm takes an unacceptable amount of time.O.8 Exponentiation : An Introduction to Cryptology 129 exchanged by the protocol could take on any value between 1 and p . and sox =x and the secret is correctly computed by Eve in this case. The value x' calculated by Eve in this way is therefore always equal to the value x shared by Alice and Bob.8. none of them is able to solve a randomly chosen instance in a reasonable amount of time when p is a prime with several hundred decimal digits. At the end of the exchange Eve has been able to obtain directly the values of p. p) A <. Problem 4. this average time is more than the age of Earth even if p only has two dozen decimal digits. that regardless of the choice of p and g some secrets are more likely to be chosen than others. g a generator of ZP . Show that even if A #A'. An attentive reader may wonder whether we are pulling his (or her) leg. .) function dlog (g.8. The obvious algorithm for this is no more subtle or efficient than the one for discrete logarithms. there is no known way of recovering x from p. and b that does not involve calculating a discrete logarithm. For instance.l repeat A<-A+1 x 4. If Eve needs to be able to calculate discrete logarithms efficiently to discover the secret shared by Alice and Bob. then A' is necessarily equal to A. a.3. Furthermore. If p is an odd prime. however. 4. since it makes p 12 trips round the loop on the average when the conditions of Problem 4. a. Although there exist other more efficient algorithms for calculating discrete logarithms.x 4. One way for her to deduce x would be to find an integer A' such that a = g A'mod p . even if A and B are chosen randomly with uniform probability between 1 andp -1. it is equally true that Alice and Bob must be able to calculate efficiently exponentiations of the form a = g A mod p. (If the logarithm does not exist. If each trip round the loop takes 1 microsecond. still b A mod p =b A' mod p provided that g A mod p = g A' mod p and that there exists a B such that b = g B mod p.1.

4. Repeat the problem using the divide-and-conquer algorithm from Section 4. A . (The same improvement can be made in dlog. and so on. y.A/2. which is necessary if we hope to execute each trip round the loop in 1 microsecond. The preceding formula for x 25 arises because x 25 = x 24x . where M (p) is an upper bound on the time required to multiply two positive integers less than p and to reduce the result modulo p. there exists a more efficient algorithm for computing the exponentiation. x25 = (((x2x)2)2)2x Thus x25 can be obtained with just two multiplications and four squarings. A . Use the classic algorithm for multiplying large integers.7 for the multiplications. p ). This idea can be generalized to obtain a divide-and-conquer algorithm.p) return (a2 mod p) Let h (A) be the number of multiplications modulo p carried out when we calculate dexpo (g . p ) a 4.p) return (ag mod p) else a f.1 for i . z.ag return a mod p The fact that x y z mod p = ((x y mod p) x z ) mod p for every x. 4 function dexpo l (g . 0 Happily for Alice and Bob.ag mod p return a Analyse and compare the execution times of dexpol and * Problem 4. A . p ) if A = 0 then return 1 if A is odd then a . which consequently takes a time in O (h (A) x M (p )).dexpo(g.A-l.) function dexpo2(g. suppose that g is approximately equal to p/2. p ) a<-. By inspection of the algorithm we find .dexpo(g.1 for i F. We leave the reader to work out the connection between 25 and the sequence of bits 11001 obtained from the expression (((x2x)2x 1)2x1)2x by replacing every x by a 1 and every 1bya0.8.1 to A do a F. An example will make the basic idea clear. For simplicity. These operations dominate the execution time of the algorithm. function dexpo (g .Divide-and-Conquer 130 Chap. dexpo2 as a function of the value of A and of the size of p. assume that calculating a modulo takes a time in the exact order of that required for multiplication. including the squarings. and p allows us to avoid accumulation of extremely large integers in the loop.1 to A do a E. A . x 24 = (x 12)2. In both cases.

mathematical induction. Show that in fact x 15 can be calculated with only five multiplications. how much time do the algorithms corresponding to dexpo2 and dexpo . This means that Alice and Bob can use numbers p.Sec. The efficiency of these algorithms depends in turn on the efficiency of the algorithm used to multiply large integers. let us just say that h (A) is situated between once and twice the length of the binary representation of A. As was the case for binary searching. On the other hand.5. which is entirely reasonable. the computation of dexpo (g . It is therefore an example of simplification rather than of divide-and-conquer. More generally. for an arbitrary base g and an arbitrary exponent A.a . Nonetheless. A and B of 200 decimal digits each and still finish the protocol after less than 3.8. P) takes a time in 0 (M (p ) x log A ). A .8. (Do not try to use characteristic equations.) Without answering Problem 4. function dexpoiter (g . provided A >_ 1. This recursive call is not at the dynamic end of the algorithm. p ) n F. with seven multiplications. that is. dexpo calculates x 15 as (((lx)2x)2x)2x. which makes it harder to find an iterative version. A .7.8. The algorithms dexpo and dexpoiter do not minimize the number of multiplications (including squarings) required.A.5. dexpoiter calculates x 15 as 1 x x 2x 4x 8. the algorithm dexpo only requires one recursive call on a smaller instance.g.y F.ay mod p y <__ y2modp n(-ndiv2 return a Problem 4.8 Exponentiation : An Introduction to Cryptology 0 h(A)= 131 ifA =0 l + h (A -1) if A is odd 1 + h (A /2) otherwise . By suppressing all the reductions modulo p in the preceding * Problem 4. algorithms.000 multiplications of 200-digit numbers and 3.8. we obtain algorithms for handling large integers capable of calculating efficiently all the digits of g'. 4. As a function of the size of the base and of the value of the exponent. which involves eight multiplications (the last being a useless computation of x 16).6. there exists a similar iterative algorithm. In both cases the number of multiplications can easily be reduced to six by avoiding pointless multiplications by the constant 1 and the last squaring carried out by dexpoiter. For example. Find an explicit formula for h (A) and prove your answer by Problem 4.1 while n > 0 do if n is odd then a . which corresponds intuitively to calculating x25 as x 16x8x 1.000 computations of a 400-digit number modulo a 200-digit number.
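The iterative dexpoiter admits an equally short rendering. The sketch below (mine, not the book's pseudocode) scans the bits of the exponent from least significant to most significant, which corresponds to the decomposition x^25 = x^16 x^8 x^1 mentioned above.

def dexpoiter(g, A, p):
    a, y, n = 1, g % p, A
    while n > 0:
        if n % 2 == 1:        # this bit of the exponent is 1: multiply the current power in
            a = (a * y) % p
        y = (y * y) % p       # y runs through g, g^2, g^4, g^8, ... (mod p)
        n //= 2
    return a

assert dexpoiter(3, 25, 1000003) == pow(3, 25, 1000003)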

take when the classic multiplication algorithm is used? Rework the problem using the divide-and-conquer multiplication algorithm from Section 4.7.

The preceding problem shows that it is sometimes not sufficient to be only half-clever!

4.9 MATRIX MULTIPLICATION

Let A and B be two n x n matrices to be multiplied, and let C be their product. The classic algorithm comes directly from the definition:

    Cij = sum for k = 1 to n of Aik Bkj .

Each entry in C is calculated in a time in Θ(n), assuming that scalar addition and multiplication are elementary operations. Since there are n² entries to compute in order to obtain C, the product of A and B can be calculated in a time in Θ(n³).

Towards the end of the 1960s, Strassen caused a considerable stir by improving this algorithm. The basic idea is similar to that used in the divide-and-conquer algorithm of Section 4.7 for multiplying large integers. First we show that two 2 x 2 matrices can be multiplied using less than the eight scalar multiplications apparently required by the definition. Let

    A = ( a11  a12 )        B = ( b11  b12 )
        ( a21  a22 )  and       ( b21  b22 )

be two matrices to be multiplied. Consider the following operations:

    m1 = (a21 + a22 - a11)(b22 - b12 + b11)
    m2 = a11 b11
    m3 = a12 b21
    m4 = (a11 - a21)(b22 - b12)
    m5 = (a21 + a22)(b12 - b11)
    m6 = (a12 - a21 + a11 - a22) b22
    m7 = a22 (b11 + b22 - b12 - b21) .

We leave the reader to verify that the required product AB is given by the following matrix:

    ( m2 + m3              m1 + m2 + m5 + m6 )
    ( m1 + m2 + m4 - m7    m1 + m2 + m4 + m5 ) .
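The identities can be checked mechanically. The following sketch (my own, not the book's) multiplies two 2 x 2 matrices with the seven products m1, ..., m7 and compares the result with the definition. Nothing in the scheme relies on commutativity of multiplication, which is what allows the same formulas to be applied when the entries are themselves submatrices.

def winograd_2x2(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a21 + a22 - a11) * (b22 - b12 + b11)
    m2 = a11 * b11
    m3 = a12 * b21
    m4 = (a11 - a21) * (b22 - b12)
    m5 = (a21 + a22) * (b12 - b11)
    m6 = (a12 - a21 + a11 - a22) * b22
    m7 = a22 * (b11 + b22 - b12 - b21)
    return [[m2 + m3,           m1 + m2 + m5 + m6],
            [m1 + m2 + m4 - m7, m1 + m2 + m4 + m5]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
classic = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
assert winograd_2x2(A, B) == classic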

Building on the preceding discussion. Following publication of Strassen's algorithm. Bearing in mind what you learnt from Section 4. Assuming that n is a power of 2. The number of additions and subtractions needed to calculate the product of two 2 x 2 matrices using this method seems to be 24. a number of researchers tried to improve the constant w such that it is possible to multiply two n x n matrices in a time in 0 (n °). Almost a decade passed before Pan discovered a more efficient algorithm.9. Strassen's original algorithm takes 18 additions and subtractions as well as 7 multiplications. find the exact number of scalar additions and multiplications executed by your algorithm for Problem 4.2.2. it was discovered by Coppersmith and Winograd in September 1986.Sec. none of the algorithms found after Strassen's is of much practical use.) Numerous algorithms. If we now replace each entry of A and B by an n x n matrix.9 Matrix Multiplication 133 It is therefore possible to multiply two 2 x 2 matrices using only seven scalar multiplications.3. * Problem 4.9. What do you do about matrices whose size is not a power of 2 ? Problem 4. the few additional additions compared to the classic algorithm are more than compensated by saving one multiplication.640 scalar multiplications. The asymptotically fastest matrix multiplication algorithm known at the time of this writing can multiply two n x n matrices in a time in 0 (n 2. once again based on divide-and-conquer: he found a way to multiply two 70 x 70 matrices that involves only 143. The algorithm discussed in this section is a variant discovered subsequently by Shmuel Winograd. 4. Because of the hidden constants.81. we obtain an algorithm that can multiply two 2n x 2n matrices by carrying out seven multiplications of n x n matrices.1. This is possible because the basic algorithm does not rely on the commutativity of scalar multiplication. propose a threshold that will minimize the number of scalar operations.2.000 and to which exceeds 150. show that it is possible to multiply two n x n matrices in a time in 0 (n 2.1. asymptotically more and more efficient. provided n is sufficiently large. taking account of the idea from Problem 4. this algorithm does not look very interesting : it uses a large number of additions and subtractions compared to the four additions that are sufficient for the classic algorithm.376) . At first glance. however. Show that this can be reduced to 15 by using auxiliary variables to avoid recalculating terms such as m1+m2+m4.9. . 703 = 343. have been discovered subsequently. (Compare this to 702. Your answer will depend on the threshold used to stop making recursive calls.000. as well as a number of additions and subtractions of n x n matrices.9.9. Problem 4.81). Given that matrix additions can be executed much faster than matrix multiplications.

l do interchange T [i +p ] and T [ j +p ] procedure transpose (T [I . the required result in T is d e f g i g 0 h k i a j k b c It is easy to invent an algorithm exchange (i . Then i T(i. We have . k+j. j . n ]. Figure 4. The general algorithm is as follows. T [i . i+m -1] procedure exchange (i. i ) The analysis of this algorithm is interesting.10 EXCHANGING TWO SECTIONS OF AN ARRAY For this additional example of an algorithm based on simplification.10. we can solve our problem as illustrated in Figure 4. if T is the array d a e f and k = 3. k) i .j <. With its help.1). Let T be an array of n elements. j ) i 4i -j else j4-j-i exchange (k -i . k .n -m + 1 (see Fig.k 4-k+1 while i # j do if i > j then exchange (k -i . we ignore the recursive formulation and give only an iterative version.10. i) exchange (k -i .10. After each exchange this part is smaller than before : thus we can affirm that each exchange simplifies the solution of the instance. j +m -1] in a time in 0(m).2.. Let T (i .j-i) ifi=j if i > j if i < j For instance.j)= j + T(ij. k .2 shows how a block of three elements and a block of eight elements are transposed.. j -n -k. provided that m < i +m <. m ) forp *.Otom . j. We wish to interchange the first k elements and the last n -k. m) to interchange the elements and T [ j . 4 4.. j) be the number of elementary exchanges that have to be made to transpose a block of i elements and a block of j elements.j) i +T(i. Here the arrows indicate the part of the array where there are still some changes to make. without making use of an auxiliary array. For instance.k. 4.Divide-and-Conquer 134 Chap.

8) = 3 + T(3.Sec. j) denotes the greatest common divisor of i and j (Section 1. The progression of the parameters of the function T recalls an application of Euclid's algorithm and leads to the following result : Problem 4.4). where gcd(i.10. 3) I h J exchange (1.2. 5. 3) k J d e I g exchange (1.5) = 6 + T(3. h i Progress of transpose (T. 6.2)=9+T(1.10 Exchanging Two Sections of an Array 135 T(3.j)=i + j -gcd(i. 4.2) = 8+T(l.10. 2) d e I h I a c a c exchange (3.1. 9. a c d e I g h i j k h a b c k a b c exchange (1. 3). Prove that T(i. 4. 1) h e exchange (3.1. b c . 1) = 10. in elements m elements T J exchange Figure 4. 1) d e g Figure 4. 4.j). Effect of the exchange algorithm.7.10.
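An executable version of the section-exchange idea may help. The sketch below is my own Python (0-based indices, and my own bookkeeping of which end is fixed first, so it is not a literal transcription of transpose); it performs the same Euclid-like sequence of block exchanges, and the total number of elementary exchanges follows the i + j - gcd(i, j) pattern of Problem 4.10.1.

def exchange(T, i, j, m):
    # interchange T[i .. i+m-1] and T[j .. j+m-1] (disjoint blocks, 0-based)
    for p in range(m):
        T[i + p], T[j + p] = T[j + p], T[i + p]

def transpose(T, k):
    # interchange the first k elements of T with the last n-k, in place
    n = len(T)
    lo, hi = 0, n          # T[lo:hi] is the part where changes remain to be made
    i, j = k, n - k        # it holds a block of i elements followed by one of j
    while i > 0 and j > 0:
        if i <= j:
            exchange(T, lo, hi - i, i)   # the left block lands in its final place, at the end
            hi -= i
            j -= i
        else:
            exchange(T, lo, hi - j, j)   # the right block lands in its final place, at the front
            lo += j
            i -= j

T = list("abcdefghijk")
transpose(T, 3)
assert "".join(T) == "defghijkabc"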

We could subsequently find the smallest element of T by making n . The Fibonacci sequence. nd be any integers.2 to n do if max < T [i ] then max 4.Divide-and-Conquer 136 Chap. Give an efficient algorithm based on divide-and-conquer to find the unique monic polynomial p (n) of degree d such that p (n 1) = p (n 2) = = p (nd) = 0. n ] be an array of n elements. It is easy to find the largest element of T by making exactly n -1 comparisons between elements. Problem 4. T[ind]*-T[1] min *. Does this help you to understand the algorithm fib3 of Section 1. (A polynomial is monic if its coefficient of highest degree ad = 1.11 SUPPLEMENTARY PROBLEMS Problem 4.7.T [2] for i F-3tondo if min > T [i] then min T[i] Find an algorithm that can find both the largest and the smallest elements of an array of n elements by making less than 2n -3 comparisons between elements.11.. 4 Can we do better if we are allowed unlimited auxiliary Problem 4.11.10. . n 2.2. Consider the matrix F=(011.) Analyse the efficiency of your algorithm.5 ? Polynomial interpolation.11..3. Suppose you already have an algorithm capable of multiplying a polynomial of degree i by a polynomial of degree 1 in a time in 0 (i ). j) and the matrix F? What happens if i and j are two consecutive numbers from the Fibonacci sequence? Use this idea to invent a divide-and-conquer algorithm to calculate this sequence. max FT[1]. Represent the polynomial p (n) = a o + a 1 n + a 2 n + + ad n d of degree d by an array P [0 . we exclude implicit comparisons in the control of the for loop.see Chapter 9. as well as another algorithm capable of multiplying two polynomials of degree i in a time in 0 (i log i ) Problem 4.. ind F i We only count comparisons between elements . Ill 1 Let i and j be any two integers. . Let T [1 . 2 .2. What is the product of the vector (i . Let n 1.. space ? 4.1.ind t-1 for i +.2 more comparisons. You may .T [i ]. Smallest and largest elements. d ] containing its coefficients.
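For the Fibonacci problem above, the connection the problem hints at is that powers of the matrix F contain Fibonacci numbers, so F^n can be computed with O(log n) matrix multiplications by divide-and-conquer, just like dexpo in Section 4.8. The sketch below is my guess at the intended construction, not the book's solution.

def mat_mult(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_power(F, n):                      # divide-and-conquer, like dexpo
    if n == 1:
        return F
    half = mat_power(F, n // 2)
    result = mat_mult(half, half)
    return mat_mult(result, F) if n % 2 else result

def fib(n):
    if n == 0:
        return 0
    # F^n = [[f(n-1), f(n)], [f(n), f(n+1)]] for F = [[0,1],[1,1]]
    return mat_power([[0, 1], [1, 1]], n)[0][1]

assert [fib(i) for i in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]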

11. including the initial conditions. n ] be a sorted array of integers. Majority element.5 with the supplementary constraint that the only comparisons allowed between elements are tests of equality. a control. Your algorithm should take a time in 0 (log n) in the worst case. Using full adders and (i .. Telephone switching. one j bit input. give the simplest expression you can for the number of 3-tallies needed in the construction of your n-tally.6. It connects input A with output A and input B with output B. some of which may be negative but all of which are different. 4. j )-adders as primitive elements. Use these switches to construct a network with n inputs and n outputs able to implement any of the n! possible permutations of the inputs. if n = 9 and the inputs are 011001011.8. ii. Using the O notation. Let T [1 . show how to build an efficient n-tally. try again. Problem 4. provided such an index exists. j)-adder is a circuit that has one i bit input. for the number of 3-tallies needed to build your n-tally. The number of switches used must be in 0 (n log n ). n ] includes a majority element (it cannot have more than one). and the inputs are 101 and 10111 respectively. * Problem 4. j = 5.11 Supplementary Problems 137 assume that n is a power of 2. It adds its two inputs in binary.Sec..11. Give the recurrence. Do not forget to count the 3-tallies that are part of any (i . . and one [I+ max(i. * Problem 4.11. i. You may therefore not assume that an order relation exists between the elements.. Exactly how many comparisons does your algorithm require? How would you handle the situation when n is not a power of 2 ? Problem 4. Give an algorithm that is able to find an index i such that I <_ i <_ n and T [i] = i. Justify your answer. the output is 011100. Your algorithm must run in linear time. Tally circuit. If you could not manage the previous problem.11.11. iii. An element x is said to be a majority element in T if # [ i T [i ] = x } > n 12. Problem 4.9. An n-tally is a circuit that takes n bits as inputs and produces 1 + LIg n] bits as output.11. and if so find it. Rework Problem 4. For this reason the 3-tally is often called a full adder. and two outputs.4. but allow your algorithm to take a time in 0 (n log n). the output is 0101. Let T [I . j )-adders you might have used. if i = 3. or input A with output B and input B with output A. depending on the position of the control .11. Give an algorithm that can decide whether an array T [I . For example.11. It is always possible to construct an (i . An (i . A switch is a circuit with two inputs. Problem 4. You may not suppose that n has any special form.see Figure 4.7. j)]-bit output. It counts (in binary) the number of bits equal to 1 among the inputs. j)-adder using exactly max(i. n ] be an array of n elements. For example.1.5. j) 3-tallies.
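For the problem of finding an index i with T[i] = i in a sorted array of distinct integers, one way to reach the O(log n) bound is a binary search on the quantity T[i] - i, which is nondecreasing under these hypotheses. The sketch below is my own illustration (0-based indices), not the book's answer.

def find_fixed_point(T):
    lo, hi = 0, len(T) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if T[mid] == mid:
            return mid
        elif T[mid] < mid:
            lo = mid + 1             # a fixed point, if any, lies to the right
        else:
            hi = mid - 1             # a fixed point, if any, lies to the left
    return None                      # no index with T[i] = i

assert find_fixed_point([-3, -1, 2, 5, 7]) == 2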

the F4 merge circuit shown in Figure 4. a sorting circuit S.Chap. Provided each of the two groups of inputs is already sorted. Batcher's sorting circuit. The smaller input appears on the upper output. Figure 4. For a given integer n.2 shows an F4 circuit. The depth is interesting because it determines the reaction time of the circuit.11. inputs are on the left and outputs are on the right. A merge circuit.11. show how to construct an efficient sorting circuit S . A comparator is a circuit with two inputs and two outputs. a merge circuit F has two groups of n inputs and a single group of 2n outputs. Following up the previous problem. There are two ways to measure the complexity of such a circuit : the size of the circuit is the number of comparators it includes. For example. By convention. 7 sorted outputs . has n inputs and n outputs . Figure 4. respectively. Problem 4. then each input appears on one of the outputs.10.11. For n a power of 2. By "efficient" we mean that the depth of your circuit must be significantly less than n 3 first sorted group 1 1 3 3 3 3 6 4 4 4 2 of inputs L37: 6 4 It 2 5 5 2 2 4 5 second sorted group of inputs 2 5 5 5 6 6 7 7 7 7 7 7 6 2 4 Figure 4.11. Merge circuit. For instance. and the larger input appears on the lower output. and the outputs are also sorted.11.11. 4 Divide-and-Conquer 138 4 N A A A A B B B B Figure 4. which is of size 5 and depth 3. show how to construct a merge circuit F whose size and depth are exactly 1 + n lg n and 1 + lg n .2. Each rectangle represents a comparator. illustrating how the inputs are transmitted to the outputs. Telephone switches. it sorts the inputs presented to it.1. and the depth is the largest number of comparators an input may have to pass before it reaches the corresponding output.11. * Problem 4. For n a power of 2.3 gives an example of S4.2 has size 9 and depth 3.11.

A sorting circuit. 4.11. but their depth and size must then be taken into account. . You may use merge circuits to your heart's content. and express T and P in O notation as simply as possible.12. for the size T and the depth P of your circuit S.11. You are to organize a tournament involving n competitors.11.Sec. whenever n is sufficiently large. Solve these 0 equations exactly.3. show that it is pos- ** Problem 4.11. Player n=5 1 2 3 4 1 2 1 - 5 2 3 5 1 - 2 3 4 3 2 1 - 4 5 - 4 3 1 - 4 5 2 3 2 3 4 5 6 Player 1 n=6 5 Day 4 Day 1 2 1 6 5 4 3 2 3 5 1 6 2 4 3 4 3 2 1 6 5 4 5 6 4 3 1 2 5 6 4 5 2 3 1 Figure 4. Timetables for five and six players. Give recurrences. Continuing the two previous problems.11 Supplementary Problems 2 3 139 2 1 3 1 2 3 2 2 1 2 2 4 2 4 1 /\3-3 4 1 3 3 3 4 4 4 1 Figure 4. sible to construct a sorting circuit for n elements whose size and depth are in 0(n log n) and 0(logn). respectively. Tournaments.4. Each competitor must play exactly once against each of his opponents.13. including the initial conditions. . O Problem 4.

6 is examined in Knuth (1969).8 is crucial.2 was solved by Kronrod. The natural generalization of Problem 4. If n is a power of 2.140 Divide-and-Conquer Chap.7 and 4. The answer to Problems 4. see the solution to Exercise 18 of Section 5. You are given the coordinates of n algorithm capable of finding the closest pair 4. points in the plane. Figure 4. 1.12 REFERENCES AND FURTHER READING Quicksort is from Hoare (1962).11. The original solution to Problem 4. Mergesort and quicksort are discussed in detail in Knuth (1973).1 is due to Diffie and Hellman (1976). Monet. For more information about cryptology. Notice too that the integers involved in these applications are not sufficiently large for the algorithms of Section 4.10 comes from Gries (1981). On the other hand. however. 2.8. and Zuffellato (1986) covers computation with very large integers.7. that the cryptosystem based on the knapsack problem.7. the first positive success was obtained by Pan (1978). or in n days if n is odd. Floyd.8. The algorithm for multiplying large integers in a time in 0 (n 1. give an algorithm to construct a timetable allowing the tournament to be finished in n -1 days.8 is discussed in Knuth (1969). For any integer n > 1 give an algorithm to construct a timetable allowing the tournament to be finished in n -1 days if n is even. Closest pair of points. ** Problem 4.4. and Brassard (1988). and Tarjan (1972). each competitor must play exactly one match every day.2. . has since been broken. The algorithm of Section 4. Problem 4. For example. and the algorithm that is asymptotically the most efficient known at present is by Coppersmith and Winograd (1987). Bear in mind. 4 Moreover. Give an of points in a time in O (n log n ). The algorithm linear in the worst case for selection and for finding the median is due to Blum. Kranakis (1986). and Adleman (1978). efficient exponentiation as described in Section 4. Shamir.81) comes from Strassen (1969).11. consult the introductory papers by Gardner (1977) and Hellman (1980) and the books by Kahn (1967). The survey article by Brassard.59) is attributed to Karatsuba and Ofman (1962). Denning (1983).4 gives possible timetables for tournaments involving five and six players.14.7 to be worthwhile. with the possible exception of a single day when he does not play at all. as described in Hellman (1980). Pratt.4 of Knuth (1973). Subsequent efforts to do better than Strassen's algorithm began with the proof by Hopcroft and Kerr (1971) that seven multiplications are necessary to multiply two 2x2 matrices in a non-commutative structure . The importance for cryptology of the arithmetic of large integers and of the theory of numbers was pointed out by Rivest. Rivest. The algorithm that multiplies two n x n matrices in a time in O (n 2.

Sec.11 are solved in Batcher (1968).12 References and Further Reading 141 The solution to Problem 4. at least in principle. 4.11. Problem 4. and Szemeredi (1983). Problem 4.11.11.11. Problem 4.1 can be found in Gries and Levin (1980) and Urbanek (1980).10 and 4. in Ajtai.11. Komlos.3 is discussed in Pohl (1972) and Stinson (1985).14 is solved in Bentley and Shamos (1976). .12 is solved. but consult Section 8.11.7 for more on this problem. Problems 4.

5 Dynamic Programming

5.1 INTRODUCTION

In the last chapter we saw that it is often possible to divide an instance into subinstances, to solve the subinstances (perhaps by further dividing them), and then to combine the solutions of the subinstances so as to solve the original instance. It sometimes happens that the natural way of dividing an instance suggested by the structure of the problem leads us to consider several overlapping subinstances. If we solve each of these independently, they will in turn create a large number of identical subinstances. If we pay no attention to this duplication, it is likely that we will end up with an inefficient algorithm. If, on the other hand, we take advantage of the duplication and solve each subinstance only once, saving the solution for later use, then a more efficient algorithm will result.

The underlying idea of dynamic programming is thus quite simple: avoid calculating the same thing twice, usually by keeping a table of known results, which we fill up as subinstances are solved.

Divide-and-conquer is a top-down method. When a problem is solved by divide-and-conquer, we immediately attack the complete instance, which we then divide into smaller and smaller subinstances as the algorithm progresses. Dynamic programming, on the other hand, is a bottom-up technique. We usually start with the smallest, and hence the simplest, subinstances. By combining their solutions, we obtain the answers to subinstances of increasing size, until finally we arrive at the solution of the original instance.

If we calculate [ k] directly by function C (n .5 uses dynamic programming? Dynamic programming is often used to solve optimization problems that satisfy the principle of optimality: in an optimal sequence of decisions or choices. i < n . Problem 5. k . Since the final result is obtained by adding up a certain number of Is.1. If. Thus the algorithm takes a time in 0 (nk) and space in 0 (k). the execution time of this algorithm is certainly in S2( [ k 1). k . Which of the algorithms presented in Section 1.7). 0<k <n k otherwise .. it does not always apply. we use a table of intermediate results (this is of course Pascal's triangle . We have already met a similar phenomenon in algorithm fib] for calculating the Fibonacci sequence (see Section 1. if we assume that addition is an elementary operation.Introduction Sec. Prove that the total number of recursive calls made during the computation of C (n. which we update from left to right. representing the current line. The table should be filled line by line.. k ) if k = 0 or k = n then return 1 else return C (n .1). . In fact.I k 0 1 C(n-1. see Figure 5. k) is exactly 2 [ n -2- Problem 5. on the other hand. Calculating the Fibonacci sequence affords another example of this kind of technique. 0 1 1 1 1 2 1 2 2 3 .l .1.1) + C (n .7. each subsequence must also be optimal.1 Example 5.1.1. Although this principle may appear obvious.1.1. Pascal's triangle.5 and Example 2. we obtain a more efficient algorithm.1. k) Figure 5.2. 5.k) C(n.1. k) many of the values C (i.7.1. j). it is not even necessary to store a matrix : it is sufficient to keep a vector of length k. 143 Consider calculating the binomial coefficient + n Inkl j. j < k are calculated over and over.2.k-1)C(n-1.
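A sketch of the table-based computation just described (my own Python): Pascal's triangle is filled row by row, but only one vector of length k+1 is kept, and updating it from the high end keeps each value of the previous row available exactly as long as it is needed. Time in O(nk), space in O(k).

def binomial(n, k):
    C = [1] + [0] * k                 # row 0 of Pascal's triangle, truncated at column k
    for i in range(1, n + 1):
        for j in range(min(i, k), 0, -1):
            C[j] = C[j] + C[j - 1]    # C(i, j) = C(i-1, j-1) + C(i-1, j)
    return C[k]

assert binomial(5, 2) == 10 and binomial(10, 4) == 210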

For this reason the problem considered in this section is not one of optimization. the winner being the first team to achieve n victories.2 THE WORLD SERIES As our first example of dynamic programming. Show that the principle of optimality does not apply to the problem of finding the longest simple path between two cities. Let P (i.2. if the fastest way to drive from Montreal to Toronto takes us first to Kingston.144 Dynamic Programming Chap. Instead. and the principle of optimality does not apply. Imagine a competition in which two teams A and B play not more than 2n -1 games. it does not follow that we should drive from Montreal to Kingston as quickly as possible : if we use too much petrol on the first half of the trip.3. However. We assume that there are no tied games. and that for any given match there is a constant probability p that team A will be the winner and hence a constant probability q = 1 -p that team B will win. but rather concentrate on the control structure and the order of resolution of the subinstances. maybe we have to stop to fill up somewhere on the second half. The subtrips Montreal-Kingston and Kingston-Toronto are not independent. and only then are these combined into an optimal solution to the original instance. (A path is simple if it never passes through the same place twice. Problem 5. For example. it is not immediately obvious that the subinstance consisting of finding the shortest route from Montreal to Ottawa is irrelevant to the shortest route from Montreal to Toronto. 5 Example 5.) The principle of optimality can be restated as follows for those problems for which it applies : the optimal solution to any nontrivial instance is a combination of optimal solutions to some of its subinstances. let us not worry about the principle of optimality. losing more time than we gained by driving hard.1. Without this restriction the longest path might be an infinite loop. The difficulty in turning this principle into an algorithm is that it is not usually obvious which subinstances are relevant to the instance under consideration.1.2. before the first game of the series the probability that . Coming back to Example 5. then that part of the journey from Montreal to Kingston must also follow the shortest route between these two cities : the principle of optimality applies. 5. that the results of each match are independent. This difficulty prevents us from using a divide-and-conquer approach that would start from the original instance and recursively find optimal solutions precisely to those relevant subinstances. Argue that this is due to the fact that one cannot in general splice two simple paths together and expect to obtain a simple path. j) be the probability that team A will win the series given that they still need i more victories to achieve this. If the shortest route from Montreal to Toronto goes via Kingston. whereas team B still needs j more victories if they are to win.1. dynamic programming efficiently solves every possible subinstance in order to figure out which are in fact relevant.

1) + d . 0) is undefined.2.2.4" /(2n + 1). Prove that (nn. T (k) is therefore in 0 (2k) = 0 (4") if i = j = n. j -1). Similarly P (i. Problem 5. j) + qP (i. j) + C (i + (j -1). j -1) Let T (k) be the time needed in the worst case to calculate P (i. j) using function P (i. The total number of recursive calls is therefore exactly 2(' ' I . j). j . where k = i +j. 0) = 0. we find the pattern shown in Figure 5.2 matches left etc. (Although sporting competitions with n > 4 are the exception.1.2) k .1. ll the required time is thus in S2( [ 2n n JJ P(i. To calculate the probability P (n . n) is in 0(4 n ) and Q(4 n In). 1 < i <. This method is therefore not practical for large values of n.2.j)=pP(i-1. j) P(i. P (0. 1 <.i < n. we see that the time required to calculate P (n .2 (Problem 5. P(i. then it is of course certain that they will win the series : P (0. j . With this method.j-1) i ? l.2 145 team A will be the overall winner is P (n .2. n) that team A will win given that the series has not yet started. since team A wins any given match with probability p and loses it with probability q. j) if i = 0 then return 1 else if j = 0 then return 0 else return pP (i -1.1). j) P(i . If team A has already won all the matches it needs. k>1 where c and d are constants. j) = C ((i -1) + j .1) P(i.1 matches left that call P(i . Figure 5. if we look at the way the recursive calls are generated.1. j) k matches left calls P(i . j). Combining these results.1. 5. Recursive calls made by a call on function P (i . >. i) = 1. this problem does have other applications ! ) .1) k .The World Series Sec.1.j)+gP(i. which is identical to that followed by the naive calculation of the binomial coefficient C (i + j . we see that T(1)=c T(k):5 2T(k .1. j . In fact. n) : both teams still need n victories.n. Finally.j ? 1 Thus we can compute P (i.
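The dynamic-programming cure is to fill a table of the values P(i, j) starting from the boundary conditions instead of recomputing them over and over. The function series given below does this diagonal by diagonal; the following sketch (mine) does the same thing row by row in Python.

def world_series(n, p):
    q = 1.0 - p
    # P[i][j] = probability that A wins the series when A still needs i wins and B needs j
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        P[0][j] = 1.0                      # team A has already won
    for i in range(1, n + 1):
        P[i][0] = 0.0                      # team B has already won
        for j in range(1, n + 1):
            P[i][j] = p * P[i - 1][j] + q * P[i][j - 1]
    return P[n][n]

# for example, with p = 0.45 and four victories needed to win the series:
print(round(world_series(4, 0.45), 4))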

n) in a time in O(n).pP[k-l.n-k]+qP[s+k. This time.) 5.s]. n) using the preceding algorithm is in 0(4"I[ ). 5 * Problem 5.s -k . Using this algorithm.0]-0 fork .I to n do P[0.146 Dynamic Programming Chap. Show how to compute P (n.1 to s-1 do P[k. Here is the algorithm to calculate P (n . n ] Problem 5.M" Matrix multiplication is associative. Problem 5.. Show that a memory space in O(n) is sufficient to implement this algorithm.s-k] + gP[k.pP[s+k-1. p ) array P[0.P[s. so we can compute the product in a number of ways : .5.2. we work diagonal by diagonal. Problem 5. To speed up the algorithm. function series (n .n. instead of filling the array line by line.3. and since a constant time is required to calculate each entry. calculate the probability that team A will win the series if p = 0.6. its execution time is in 0(n 2).4.2.1] for s 1 to n do fork . n) .3 CHAINED MATRIX MULTIPLICATION We wish to compute the matrix product M=M1M2.1.2.see Section 8... (Hint : use a completely different approach ..45 and if four victories are needed to win.0 to n -s do P[s+k.2.n] qE-1-p for s F.n-k-1] return P [n.2. however.n-k] .0. Prove that in fact the time needed to calculate P (n . Since in essence the algorithm has to fill up an n x n array. we proceed more or less as with Pascal's triangle : we declare an array of the appropriate size and then fill in the entries.s-k] <.

1.785 multiplications 3.i) ways to parenthesize the right-hand term. Let T (n) be the number of essentially different ways to parenthesize a product of n matrices. ))) The choice of a method of computation can have a considerable influence on the time required.. here is the corresponding number of scalar multiplications.1..) In each case.. we could simply parenthesize the expression in every possible fashion and count each time how many scalar multiplications will be required. ) (Mi + 1 Mi +2 .326 multiplications for a total of 10. we do not differentiate between the method that first calculates AB and the one that starts with CD. and D is 3 x 34. 5. There are five essentially different ways of calculating this product. M.. To measure the efficiency of the different methods.. We wish to calculate the product ABCD of four matrices : A is 13 x 5.. (In the second case that follows. (Mn-1Mn) . Show that calculating the product AB of a p x q matrix A and a q x r matrix B by the direct method requires pqr scalar multiplications.. Suppose we decide to make the first cut between the i th and the (i + 1)st matrices of the product : M = (M 1 M 2 .471 multiplications 1.055 26. For example. There are now T (i) ways to parenthesize the left-hand term and T (n .1.3 Chained Matrix Multiplication 147 M=(.582 multiplications. i=1 . Since i can take any value from 1 to n .Mn) = (M1(M2(M3 .. Example 5.856 4.. if we calculate M = ((AB )C )D. Problem 5.Sec. we obtain the following recurrence for T (n) : nI T(n)= E T(i)T(n-i). Mn) . we obtain successively (AB) (AB )C ((AB )C )D 5.418 The most efficient method is almost 19 times faster than the slowest.. 0 To find directly the best way to calculate the product..582 54. we count the number of scalar multiplications that are involved.. ((AB )C )D (AB) (CD) (A (BC ))D A ((BC )D) A (B (CD )) 10. B is 5 x 89.3.((M1M2)M3).3.201 2. C is 89 x 3.
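The recurrence for T(n) is easy to evaluate numerically; a quick sketch (mine) that reproduces the values given in the text follows.

def count_parenthesizations(n):
    T = [0, 1] + [0] * (n - 1)
    for m in range(2, n + 1):
        T[m] = sum(T[i] * T[m - i] for i in range(1, m))
    return T[n]

assert [count_parenthesizations(n) for n in (1, 2, 3, 4, 5, 10, 15)] == \
       [1, 1, 2, 5, 14, 4862, 2674440]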

Since T(n) is in S2(4"/n2) (from Problems 5.n. m34 = 9. 2. Mi +s) and choose the best for i <. Fortunately..89. It is only for clarity that the second case is written out explicitly. 2. . This method is therefore impracticable for large values of n : there are too many ways in which parentheses can be inserted for us to look at them all.078. we find n 1 2 3 4 5 10 15 T (n) 1 1 2 5 14 4.i+s+di-ldkdi+s). This suggests that we should consider using dynamic programming.34). .. Continuation of Example 5. Mn must also be calculated in an optimal way.2.2.3. The solution to the original problem is thus given by m 1. We have d =(I 3.3. for s = 2 we obtain .. The values of T (n) are called Catalan numbers. . n s = 1 : mi. i =1.335.i <.=(13.2. 0 <. i<k<i+s The third case represents the fact that to calculate Mi Mi+l . we find m 12 = 5.440.3. We build the table mid diagonal by diagonal : diagonal s contains the elements mid such that j . Example 5. .j <. the principle of optimality applies to this problem. we can thus calculate all the values of T. where mid gives the optimal solution .. Mk ) (Mk + 1 . Prove that T(n)= 1 (2n-2] n-1 n For each way that parentheses can be inserted it takes a time in U(n) to count the number of scalar multiplications required (at least if we do not try to be subtle). then both the subproducts M1M2 Mi and Mi + A12 . Next..n-s. i + 1 = di -1di di + 1. n -1 (see Problem 5.i <. Among other values..i = s.785. i = 1.2). We thus obtain in succession : s = 0: mii = 0. Mi+s we try all the possibilities (Mi Mi + 1 .that is..1) 1 <S <n:mi. . Suppose the dimensions of the matrices Mi are given by a vector di . finding the best way to calculate M using the direct approach requires a time in Q(4"/n). the required MM of the required pronumber of scalar multiplications . We construct a table mid .2.3. 5 Adding the initial condition T (l) = 1.1 and 5. as it falls under the general case with s = 1. such that the matrix Mi is of dimension di -I by di .Dynamic Programming 148 Chap.3. m23 = 1.674.n. if the best way of multiplying all the matrices requires us to make the first cut between the ith and the (i + l)st matrices of the product.i+s= min (mik+mk+l. I <. .3.k < i + s .5. For s = 1.. i = 1.. *Problem 5...for the part Mi Mi +I duct.. For instance.1.862 2.

1.Sec. {k=3} m +m44+ 13x3x34) = min(4055. How must the algorithm be modified if we want not only to Problem 5. {k=2} m12 m34+ 13x89x34 .3 Chained Matrix Multiplication 149 m13 =min(mll +m23+ 13x5x3. m23+m44+5x3x34) = min(24208.4.s elements to be computed in the diagonal s .3.1)/2 . Example of the chained matrix multiplication algorithm.3. for s = 3 m14=min({k=1} mll+m24+13x5x34.845 Finally. Write the algorithm to calculate m In .1)/6 = (n 3-n )/6 The execution time is thus in 0(n 3). we must choose between s possibilities (the different possible values of k). 5. 2856) = 2. m12+m33+ 13x89x3) = min(1530.y s2 = n 2 (n .3. The array m is thus given in figure 5.5 01. Problem 5. calculate the value of m In . 9256) = 1. 1845) = 1.1)(2n . for each.1.856 . there are n . . 2 856 s=3 2 s=2 3 S=1 4 s=0 Figure 5.3. The execution time of the algorithm is therefore in the exact order of n-1 s=l n-1 n-I s=1 s=l (n-s)s =n2:s . j=1 2 4 3 0 1 \ 5 785 \ 1 530 i=1 .n (n . but also to know how to calculate the product M in the most efficient way ? For s > 0.530 m24=min(m22+m34+5x89x34.3.
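The computation carried out by hand above is a direct transcription of the recurrence. The sketch below is my own Python, filling the table diagonal by diagonal; it computes only the optimal cost, whereas recording the best k at each step, as discussed above, would also let us recover the parenthesization. It reproduces the value 2,856 for the four matrices of Example 5.3.1.

def chained_matrix_order(d):
    # d[0..n] holds the dimensions, matrix M_i being d[i-1] x d[i]
    n = len(d) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]        # m[i][j], 1-based
    for s in range(1, n):                            # diagonal s
        for i in range(1, n - s + 1):
            j = i + s
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return m[1][n]

# A is 13 x 5, B is 5 x 89, C is 89 x 3, D is 3 x 34, as in Example 5.3.1
assert chained_matrix_order([13, 5, 89, 3, 34]) == 2856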

and that a matrix L gives the length of each edge. N is the set of nodes and A is the set of edges. (Compare this to Section 3. then that part of the path from i to k.00 fork <--i toj-1 do ans F. k) + minmat (k + 1. .. and that from k to j. suppose that the nodes of G are numbered from 1 to n. i ] = 0. Let Dk be the matrix D after the k th iteration. where we were looking for the length of the shortest paths from one particular node. with L [i . (Hint: for the "0 " part.. The principle of optimality applies : if k is a node on the shortest path from i to j. It is this duplication of effort that causes the inefficiency of minmat. N = { 1.Dynamic Programming 150 Chap.) As before. where the array d [0.5 is faster than naively trying all possible ways to parenthesize the desired product. n) takes a time in O(3" ).. and L [i. After n iterations we therefore obtain the result we want. k -1 }. Prove that with the following recursive algorithm function minmat (i. minmat recursively solves 12 subinstances. d [i -1 ]d [k]d [ j ] + minmat (i. the algorithm has to check for each pair of nodes (i. n ] is global. j) does not exist.3. . 2. both of which recursively solve BCDEF from scratch. n) of Problem 5.4 SHORTEST PATHS Let G = <N. the source. it is still much slower than the dynamic programming algorithm described previously. 5 * Problem 5. Each edge has an associated nonnegative length. j)) return ans . D gives the length of the shortest paths that only use nodes in 11. use ccnstructive induction to find constants a and b such that the time taken by a call on minmat (1. n) is no greater than a 3" . In order to decide on the best way to parenthesize the product ABCDEFG... It then does n iterations... 2. j] ? 0 if i # j. At itera- tion k. L [i.) Although a call on the recursive minmat (1.2.5. . must also be optimal. 2. a call on minmat (1. A > be a directed graph . to all the others. n }. We construct a matrix D that gives the length of the shortest path between each pair of nodes. j] = oc if the edge (i. including the overlapping ABCDEF and BCDEFG.3. k) as intermediate nodes. The necessary check can be written as .min(ans. After iteration k . . The algorithm initializes D to L. j) whether or not there exists a path passing through node k that is better than the present optimal path passing only through nodes in { 1. We want to calculate the length of the shortest path between each pair of nodes. 5. j) if i = j then return 0 ans .b. This behaviour illustrates a point made in the first paragraph of this chapter. .2. .

j]) return D Figure 5. if the graph is dense (a = n 2).j].k]+Dk-1[k. where a is the number of edges in the graph.Sec.n] D -L fork <-Iton do for i . .. that is. k ] +D [k. If we use the version of Dijkstra's algorithm that works with a matrix of distances.k]+D[k.n. that is.Dk-i[i.2.2) to solve the same problem. 5. The order is the same as for Floyd's algorithm. j] then D[i. k ] is always zero.l.k]+D[k..1 to n do D [i.min(D [i. It is obvious that this algorithm takes a time in O(n3).j] =min(Dk_i[i. The algorithm.j]). n. it is better to use Floyd's algorithm. the total time is in n x O ((a +n) log n). n ] array D[I. We usually want to know where the shortest path goes.4 Shortest Paths 151 Dk[i. procedure Floyd(L [1 . I. It is therefore not necessary to protect these values when updating D. j]. j] . We have also implicitly made use of the fact that an optimal path through k does not visit k twice. We can also use Dijkstra's algorithm (Section 3. n ]) : array[ l . it may be preferable to use Dijkstra's algorithm n times . each time choosing a different node as the source. In this case we have to apply the algorithm n times.j]-k . j] P[i. where we make use of the principle of optimality to compute the length of the shortest path passing through k. follows.. known as Floyd's algorithm. n. This allows us to get away with using only a twodimensional matrix D. in O ((an +n 2) log n). whereas at first sight a matrix n x n x 2 (or even n x n x n) seems necessary. D [i. The innermost loop of the algorithm becomes ifD[i.. if we use the version of Dijkstra's algorithm that works with a heap. since D [k . and hence with lists of the distances to adjacent nodes. j] <--D[i. in O(n 3 ). but the simplicity of Floyd's algorithm means that it will probably be faster in practice. the total computation time is in n x O(n 2).1 to n do for j E.. j] <D[i. If the graph is not very dense (a << n 2). On the other hand. At the k th iteration the values in the k th row and the k th column of D do not change. I. In this case we use a second matrix P initialized to 0..4.1 gives an example of the way the algorithm works. not just its length.

If P [i. Look recursively at P [i. k ] and P [k. the shortest path is directly along the edge (i. j] to find any other intermediate nodes along the shortest path. To recover the shortest path from i to j.1. j]. P [i. look at P [i. otherwise. 5 15 0 Do=L = 0 D D 1 = 5 00 00 50 0 15 5 30 35 0 15 15 20 5 0 0 20 10 5 45 0 3 = 15 30 35 0 15 20 5 Figure 5.Dynamic Programming 152 Chap. j ] = k. if P [i. . When the algorithm stops. 5 50 0 15 5 30 00 0 15 15 00 5 0 0 0 5 50 0 D2 = 5 30 35 0 15 15 20 5 0 5 20 0 D4 = 20 10 15 0 5 15 oo o 15 10 10 5 30 35 0 15 15 20 5 0 Floyd's algorithm at work. the shortest path from i to j passes through k. j].4. j) . j ] contains the number of the last iteration that caused a change in D [i. j] = 0.

We want to find a matrix D such that D [i. 4. Even if a graph has edges with negative length. the shorter our path will be ! Does Floyd's algorithm work i. j) exists. and D [i.4 153 Shortest Paths Example 5.3. .1. In this case.2. Problem 5. but that does not include a negative cycle ? Prove your answer. Suppose we allow edges to have negative lengths. On a graph that includes a negative cycle ? ii.4. j] = false otherwise.4. Warshall's algorithm. 3].2 in the * Problem 5. and L [i . but that from 4 to 3 we proceed directly. If G includes cycles whose total length is negative.4.1. Initially. For the graph of Figure 5.3). On a graph that has some edges whose lengths are negative. the notion of a shortest simple path still makes sense. 5. These two problems are NP-complete (see Section 10.Sec. Problem 5. (We are looking for the reflexive transitive closure of the graph G.3. No efficient algorithm is known for finding shortest simple paths in graphs that may have edges of negative length. L [i.) Adapt Floyd's algorithm for this slightly different case. the shortest path from 1 to 3 passes through 4.1. j] = true if there exists at least one path from i to j.4. i ]). j] = false otherwise. the notion of "shortest path" loses much of its meaning : the more often we go round the negative cycle. case when the matrix L is symmetric (L [i. P becomes 0 0 4 2 4 0 4 0 P = 0 1 0 0 0 1 0 0 Since P [1. The shortest path from I to 3 is thus 1. 2. j] = L [ j.4. 3. the length of the edges is of no interest . Looking now at P [1.1. Finally we see that the trips from 1 to 2 and from 2 to 4 are also direct. we discover that between 1 and 4 we have to go via 2.4. 3] = 4. This is the situation we encountered in Problem 5.2.2. (We shall see an asymptotically more efficient algorithm for this problem in Section 10. j] = true if the edge (i.) Find a significantly better algorithm for Problem 5. 4] and P [4. only their existence is important.
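Both algorithms fit in a few lines of Python. The sketch below (mine, not the book's pseudocode) implements Floyd's algorithm together with the matrix P used to recover the paths, and the boolean variant suggested above for the reflexive transitive closure. The small graph at the end is my own example, not the one of Figure 5.4.1.

INF = float("inf")

def floyd(L):
    n = len(L)
    D = [row[:] for row in L]
    P = [[0] * n for _ in range(n)]        # 0 means "go directly along the edge (i, j)"
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k + 1        # nodes are numbered from 1, as in the text
    return D, P

def path(P, i, j, out):                    # intermediate nodes of the shortest path from i to j
    k = P[i - 1][j - 1]
    if k != 0:
        path(P, i, k, out); out.append(k); path(P, k, j, out)

def warshall(A):                           # reflexive transitive closure, as in Problem 5.4.1
    n = len(A)
    D = [[A[i][j] or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = D[i][j] or (D[i][k] and D[k][j])
    return D

L = [[0, 5, INF, INF], [50, 0, 15, 5], [30, INF, 0, 15], [15, 5, INF, 0]]
D, P = floyd(L)
route = [1]; path(P, 1, 3, route); route.append(3)
assert D[0][2] == 20 and route == [1, 2, 3]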

) To determine whether a key X is present in the tree. we only need look at the right-hand subtree.5. and the search stops . For a given set of keys. If X=R. (For the rest of this section.2 contains the same keys as those in Figure 5. Problem 5.154 Dynamic Programming Chap. and less than or equal to the key of its righthand child. several search trees may be possible : for instance." Figure 5. .5 OPTIMAL SEARCH TREES We begin by recalling the definition of a binary search tree. C . but rather.1.5.5. (It provides an example of simplification : see chapter 4. the tree in Figure 5. and less than or equal to the values contained in its righthand descendants.1 shows an example of a binary search tree containing the keys A. H.2.5. A recursive implementation of this technique is obvious.3. A binary tree each of whose nodes contains a key is a search tree if the value contained in every internal node is greater than or equal to (numerically or lexicographically) the values contained in its left-hand descendants. tinct keys ? How many different search trees can be made with eight dis- . The nodes may also contain further information related to the keys : in this case a search procedure does not simply return true or false. A binary search tree. we have found the key we want.5. Figure 5.5.5.) Problem 5.. 5 5. the information attached to the key we are looking for. if X < R. Write a procedure that looks for a given key in a search tree and returns true if the key is present and false otherwise. search trees will be understood to be binary. . Problem 5. we first examine the key held in the root.1. Show by an example that the following definition will not do : "A binary tree is a search tree if the key contained in each internal node is greater than or equal to the key of its left-hand child. Suppose this key is R. B. we only need look at the left-hand subtree .1. and if X > R.

is held in a node of depth dk in the sub- tree.5. . i < k < j.5. be pi . Recall that the depth of the root of a tree is 0. n . i=1 This is the function we seek to minimize.2. For the time being.i. If T (n) is the number of different search trees we can make with n distinct keys. Problem 5. . i = 1. If the key Ck .4.. we shall solve a more general problem still. a single comparison suffices. cj .5..) In Figure 5..5. For a given tree the average number of comparisons needed is set c 1 < c 2 < * C= pi (di + 1). Suppose that in an optimal tree containing all the n keys this sequence of j-i + 1 keys occupies the nodes of a subtree.1.2 Another binary search tree. on the other hand.Sec. 2.1 two comparisons are needed to find the key E . Consider the sequence of successive keys ci . If some key ci is held in a node at depth di . the average number of comparisons carried out in this subtree when we look Figure 5. Suppose we have an ordered < cn of n distinct keys. suppose that Ein=1 pi = 1. j >.. in Figure 5. and so on. give a tree that minimizes the average number of comparisons needed. (Hint: reread Section 5. 5. then di + 1 comparisons are necessary to find it. For the case when the keys are equiprobable. the depth of its children is 1. . find either an explicit formula for T (n) or else an algorithm to calculate this value. Repeat the problem for the general case of n equiprobable keys.5. and (4+3+2+3+1+3+2+3)/8=21/8 comparisons on the average in Figure 5. all the requests refer to keys that are indeed present in the search tree. ci + i . . Let the probability that a request refers to key c. If all the keys are sought with the same probability. it takes (2+3+1+3+2+4+3+4)/8 = 22/8 comparisons on the average to find a key in Figure 5. .5.5.2. In fact.5 155 Optimal Search Trees *Problem 5.3. that is.5.

. cJ .) One of these keys. To find the optimal search tree if the probabilities associated Example 5.3. with five keys c1 to c5 are . . . . k - I+ Ck + I . Let mij = Yk=i Pk .. In this case one comparison is made with Ck . and a change in the subtree does not affect the contribution made to C by other subtrees of the main tree disjoint from the one under consideration. . k...3. J where the three terms are the contributions of the root. ci + I . Figure 5.. Pk(dk+l).. the probability that it is in the sequence ci .5.I and R is an optimal subtree containing Ck +I . cJ when a key is sought in the main tree. To obtain a dynamic programming scheme. and let CiJ be the average number of comparisons carried out in an optimal subtree containing the keys ci . it remains to remark that the root k is chosen so as to minimize Cij : CiJ =mil + In particular. The average number of comparisons carried out is therefore CiJ = mil + Ci. L is an optimal subtree con- taining the keys ci . and others may then be made in L or R. (It is convenient to define CiJ = 0 if j = i . We thus arrive at the principle of optimality : in an optimal tree all the subtrees must also be optimal with respect to the keys they contain. respectively. When we look for a key in the main tree. 5 for a key in the main tree (the key in question is not necessarily one of those held in the subtree) is CY. L and R.1. A subtree of the optimal search tree is optimal. . In Figure 5. say. c j is mil .5.Dynamic Programming 156 Chap.J) (*) . must occupy the root of the subtree.. ci +I . Ck . k=i We observe that this expression has the same form as that for C. ci + I . . . .k-I +Ck+I.1. Cii = pi mm n ISk<J (Ci.5..

0. 1. but how do we find the form of this tree ? 0 .50.45 5 0.5. We know how to find the minimum number of comparisons Problem 5.08 0.38.5.40 Similarly C23=0. 0.00 + min(1. 0.30.00. C11+C35.20.49 C25 = m25 + min(C21 +C35.76 C35 = m35 + min(C32+C45.61) = 0.43 + min(0.09.00 C15=m15+min(C1o+C25. CII+C32) = 0.88 + min(0.76.35 0.91.05 0. and next. we note that Cii = pi . C45=0.5 157 Optimal Search Trees i 1 2 3 Pi 0.00 0. 1.18) = 0. 0.30 0.05 0. C12+C44. 0. C12+C43) = 0.30 0. 0.65 0.73. C12 = m12 + min(CI0+C22.08 4 0. C23+C54) = 0.45 0.15.6.85 C14=m14+min(C1o+C24. 0.61) = 1. C23+C55.70 0.53 0. Then C13 = m13 + min(C10+C23. C13+C54) = 0.69.40) = 0.05.30) = 0. C12+C45.58 0.85. m = 0.12 Now. C33+C55. C24+C65) = 0.65 + min(0. 0. C11 +C34. C14+C65) = 1.74.70 + min(0.61.57 0.13 0. C13+C55. 1. C22+C44. 0. we use (*) to calculate the other values of Cii.69.88 1.85.73 comparisons on the average to find a key (see Figure 5.43 0. 5.12 we first calculate the matrix m.4).18.76) = 1.58 + min(0. The optimal search tree for these keys requires 1. C11+C33. 0. 0.49) =1.Sec.35 + min(0. 0. 0.73.18. necessary in the optimal tree. C22+C45. C34+C65) = 0. 1 <_ i <_ 5.61 C24 = m24 + min(C21+C34. 0.61. C34=0.

m=1 Problem 5. =Mid + Ci . each involving a choice among m + 1 possibilities.7.I+ Ck + 1.(In-t (n-m)(m+l)) =8(n3) . Problem 5.i <. . be the probability that a request concerns a key ci that is in the tree. Give an algorithm that can determine the optimal search tree in this context.) In this algorithm we calculate the values of Cij first for j -i = 1.. * Problem 5. 2. For 1 <. n .. Specifically.5.5. 2...j and C.5.5. to calculate. . ** Problem 5. k . Figure 5. and let qj . .n.4. let pi . We now have n n . i = 1. n . ] } . n. Generalize the preceding argument to take account of the possibility that a request may involve a key that is not in fact in the tree. When j -i = m . (Provided the keys are sorted. 2. there are n -m values of C. and so on. .5. we do not need their exact values. or to ascertain that it is missing. i = 0. . then for j -i = 2.. Prove this last equality. let rid = max { k I i <. The optimal tree must minimize the average number of comparisons required to either find a key. i = 1 . The required computation time is therefore in . 5 Write an algorithm that accepts the values n and pi . and that produces a description of the optimal search tree for these probabilities.pi +Iqi=1. 1.Dynamic Programming 158 Chap. An optimal binary search tree. . if it is present in the tree. be the probability that it concerns a missing key situated between ci and ci+1 (with the obvious interpretation for q0 and qn )..k <.9.j <.8..10.
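The computation of Example 5.5.1 is mechanical enough to transcribe directly. The sketch below is my own Python; it computes only the costs Cij (keeping the roots as well, as discussed above, would let us build the tree itself), and checks that the five keys of the example need 1.73 comparisons on the average in the optimal tree.

def optimal_search_tree(p):
    n = len(p)                             # p[0..n-1] holds p_1, ..., p_n, keys already sorted
    C = [[0.0] * (n + 1) for _ in range(n + 2)]     # C[i][j], 1-based; C[i][i-1] = 0
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]
    for s in range(1, n):                  # diagonal s: j - i = s
        for i in range(1, n - s + 1):
            j = i + s
            m_ij = sum(p[i - 1:j])
            C[i][j] = m_ij + min(C[i][k - 1] + C[k + 1][j] for k in range(i, j + 1))
    return C[1][n]

p = [0.30, 0.05, 0.08, 0.45, 0.12]        # the probabilities of Example 5.5.1
assert abs(optimal_search_tree(p) - 1.73) < 0.005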

Using this definition. . and the lengths of the edges are denoted by L. . explicit example that this greedy algorithm does not always find the optimal search tree..6 The Travelling Salesperson Problem 159 be the root of an optimal subtree containing c. we take N = { 1. Ck+2 . Consider a set of nodes S c N \ 111 and a node i EN \ S.5. at the root. S # 0. Write also r. n j.j})).11 generalize to the case discussed in Problem 5. Ck.S)=rni (L. j] = oo if the edge (i.5. 5.4.2. after having gone exactly once through each of the other nodes. followed by a path from j to 1 that passes exactly once through each node in N \ 11. Give an optimal search tree for your example. and calculate the average number of comparisons needed to find a key for both the optimal tree and the tree found by the greedy algorithm.10 to show how to calculate an optimal search tree in a time in 0(n2). recursively in the same way. if i # 1. i.5. . we are required to find the shortest possible circuit that begins and ends at the same node. A > be a directed graph. It therefore consists of an edge (1.5. (Problems 5.10 and 5....N\{l. . . say. More generally. j ] >_ 0 if i and L [i. . i ] = 0. Show with the help of a simple. c 2. Let G = < N. As usual.Sec.5. j }. -I = i ...5. and i 0 S. j # 1. N \ { 1 }) is the length of an optimal circuit.. j).j for every 1 <_i j Sn. If the circuit is optimal (as short as possible).. Suppose without loss of generality that the circuit begins and ends at node 1. with L [i .S\ { j})). then so is the path from j to 1 : the principle of optimality holds. g(i.11. Use the result of Problem 5. 2. c. assuming the keys are already sorted ? ii. with i =1 allowed only if S = N \ { 1 } .j _i <_rj <r. .j . By the principle of optimality. How much time does this algorithm take in the worst case. c.j +g(j. cj . +I .9.12.and right-hand subtrees for c i .. we see that g(1. g (1.+1. jES (*) . Given a graph with nonnegative lengths attached to the edges. ck _ 1 and Ck + i . 5. S # N \ { 1 ). Define g (i. and construct the left.N\{1})=2<j<n min(Ljj +g(j. Prove that r. Problem 5. S) as the length of the shortest path from node i to node 1 that passes exactly once through each node in S. L [i .) Problem 5. . There is an obvious greedy approach to the problem of constructing an optimal search tree : place the most probable key. j) does not exist.6 THE TRAVELLING SALESPERSON PROBLEM We have already met this problem in Section 3.

N \ { 1. g (4. g(i.6. A directed graph for the travelling salesperson problem.. 5 Furthermore. j }) is known for all the nodes j except node 1. . We can apply (**) to calculate the function g for all the sets S that contain exactly one node (other than 1). Example 5. N \ { 11) and solve the problem.1.6. Let G be the complete graph on four nodes given in Figure 5. i =2.6. 0) = 8 .0)=Li1. n .Dynamic Programming 160 Chap. S) are therefore known when S is empty. 0) = 5. we can use (*) to calculate g (1. 15 6 Figure 5. 0) = 6. g (3.1: 0 10 15 20 0 9 10 6 13 0 12 8 9 0 5 L = 8 We initialize g (2.. and so on. then we can apply (**) again to calculate g for all the sets S that contain two nodes (other than 1).1..3. Once the value of g ( j. The values of g (i..

using (**) for sets of two nodes.1. we need an extra function : J (i .6.14))=20 g(4.3.4)) = min(L23 + g (3.4 } ).{3.(31)=3 -1 The required computation time can be calculated as follows : In this example we find . { 3) ). (3. Example 5.3. S) is the value of j chosen to minimize g at the moment when we apply (*) or (**) to calculate g(i.0)= 18 and similarly g(3.4)) = min(L 32 + g (2.6.4})=4 -3J(4. L 14 + g (4.Sec.{2})=18.2.1 has length 35. 27) = 23 Finally we apply (*) to obtain g (1. { 2 } )) = min(31. L24 + g (4.{3})= 15. we have g (2. g(3.6. To know where this circuit goes.(2. { 4) ). L 13 + g (3. Next. g(4. { 3.4)) = 2 and the optimal circuit is 1 -+J(1.{41)=L24+g(4. { 2.6 The Travelling Salesperson Problem 161 Using (**).12))) = min(23. {2.25) = 25 g (3.3) )) = min(35. (4)).{3.4)) =4 =4 J(4. ( 2.3. 5.41) = min(L 12 + g (2.12.25) = 25 g (4. (Continuation of Example 5.13 1) =L23+g(3. 40. 1 2. L 34 + g (4.) J(2.4))=2 J(2.S). 43) = 35 The optimal circuit in Figure 5. L43 + g (3.3 )) = min(L42 + g (2. { 3) )) = min(29.31) =2 J(3. (2.4 1). we obtain g(2.{2.0)= 15 g(2. (2.{2})= 13.4)) J(1.
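The whole computation of Examples 5.6.1 and 5.6.2 can be written compactly with Python sets standing for the sets S. The sketch below is mine, not the book's program; like the function J, it records which choice achieves each minimum so that the circuit itself can be reconstructed.

from itertools import combinations

def tsp(L):
    n = len(L)
    nodes = range(2, n + 1)                       # nodes 2..n, numbered from 1 as in the text
    g = {(i, frozenset()): L[i - 1][0] for i in nodes}
    best = {}
    for size in range(1, n - 1):                  # subsets S of increasing size
        for S in map(frozenset, combinations(nodes, size)):
            for i in nodes:
                if i in S:
                    continue
                g[i, S], best[i, S] = min(
                    (L[i - 1][j - 1] + g[j, S - {j}], j) for j in S)
    S = frozenset(nodes)
    length, first = min((L[0][j - 1] + g[j, S - {j}], j) for j in S)
    tour, i = [1, first], first                   # rebuild the circuit from the recorded choices
    S -= {first}
    while S:
        i = best[i, S]
        tour.append(i)
        S -= {i}
    return length, tour + [1]

L = [[0, 10, 15, 20], [5, 0, 9, 10], [6, 13, 0, 12], [8, 8, 9, 0]]
assert tsp(L) == (35, [1, 2, 4, 3, 1])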

Dynamic Programming 162 Chap.1 illustrates the dramatic increase in the time and space necessary as n goes up.400 491. 0) : n -1 consultations of a table.520 Problem 5. as would be the case if we simply tried all the possible circuits..372. 13 in Q(n 2" ). The preceding analysis assumes that we can find in constant time a value of g (j .800 419. N \ (1 }) : n -1 additions. TABLE 5.to calculate g ( j. Since S is a set. For instance.(n -1)k In k 21) = 19(n22") k=1 since k1 k (k1 = r 2r-1 This is considerably better than having a time in SZ(n !).43x1018 7. consider .to calculate all the g (i .6. whereas 20! microseconds exceeds 77 thousand years. it is easy to write a function that calculates g recursively.to calculate g (1. What is more . but it is still far from offering a practical algorithm. which data structure do you suggest to hold the values of g ? With your suggested structure.. S) that has already been calculated. Verify that the space required to hold the values of g and J is Problem 5. 202220 microseconds is less than 7 minutes. Time: Direct method n! Time: Dynamic programming n22" Space: Dynamic programming n 2" 5 120 800 160 10 3.971. These operations can be used as a barometer. 5. The computation time is thus in 0(2(n -1) + .430.628. which is not very practical either.2.400 10.520 20.1. how much time is needed to access one of the values of g ? Table 5.6.31 x 1012 20 2.7 MEMORY FUNCTIONS If we want to implement the method of Section 5.800 102.6 on a computer.1. .2: (n -1) (n k 21 k additions in all.6.6. n SOLVING THE TRAVELLING SALESPERSON PROBLEM. For example.n . S) such that 1 :5 #S = k <.240 15 1. 5 . .

function g (i, S)
  if S = ∅ then return L[i, 1]
  ans ← ∞
  for each j ∈ S do
    distviaj ← L[i, j] + g(j, S \ {j})
    if distviaj < ans then ans ← distviaj
  return ans

Unfortunately, however, if we calculate g in this top-down way, we come up once more against the problem outlined at the beginning of this chapter: most values of g are recalculated many times and the program is very inefficient. (In fact, it ends up back in Ω((n-1)!).) So how can we calculate g in the bottom-up way that characterizes dynamic programming? We need an auxiliary program that generates first the empty set, then all the sets containing just one element from N \ {1}, then all the sets containing two elements from N \ {1}, and so on. Although it is maybe not too hard to write such a generator, it is not immediately obvious how to set about it.

One easy way to take advantage of the simplicity of a recursive formulation without losing the efficiency offered by dynamic programming is to use a memory function. To the recursive function we add a table of the necessary size. Initially, all the entries in this table hold a special value to indicate that they have not yet been calculated. Thereafter, whenever we call the function, we first look in the table to see whether it has already been evaluated with the same set of parameters. If so, we return the value held in the table. If not, we go ahead and calculate the function. Before returning the calculated value, we save it at the appropriate place in the table. In this way it is never necessary to calculate the function twice for the same values of its parameters.

For the algorithm of Section 5.6, let gtab be a table all of whose entries are initialized to -1 (since a distance cannot be negative). Formulated in the following way, the function g combines the clarity obtained from a recursive formulation and the efficiency of dynamic programming.

function g (i, S)
  if S = ∅ then return L[i, 1]
  if gtab[i, S] ≥ 0 then return gtab[i, S]
  ans ← ∞
  for each j ∈ S do
    distviaj ← L[i, j] + g(j, S \ {j})
    if distviaj < ans then ans ← distviaj
  gtab[i, S] ← ans
  return ans
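In Python a dictionary can play the role of gtab, with "absent from the dictionary" standing for the special not-yet-computed value; a sketch of mine, applied to the distances of Example 5.6.1:

def tsp_memo(L):
    gtab = {}
    def g(i, S):                       # S is a frozenset of the nodes still to be visited
        if not S:
            return L[i - 1][0]
        if (i, S) not in gtab:
            gtab[i, S] = min(L[i - 1][j - 1] + g(j, S - {j}) for j in S)
        return gtab[i, S]              # each value of g is computed at most once
    n = len(L)
    return g(1, frozenset(range(2, n + 1)))

L = [[0, 10, 15, 20], [5, 0, 9, 10], [6, 13, 0, 12], [8, 8, 9, 0]]
assert tsp_memo(L) == 35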

If we are willing to use a little more space (the space needed is only multiplied by a constant factor.7. We sometimes have to pay a price for using this technique. however).8.8 SUPPLEMENTARY PROBLEMS Let u and v be two strings of characters.2.Dynamic Programming 164 Chap. it is possible to avoid the initialization time needed to set all the entries of the table to some special value.1. For instance. we can transform abbac into abcbc in three stages. returns a default value (such as -1) otherwise } A call on any of these procedures or functions (including a call on init !) should take constant time in the worst case. * Problem 5. add a character. Show how to calculate (i) a binomial coefficient and (ii) the function series (n . 5. p) of Section 5.. abbac -* abac ababc abcbc (delete b) (add b ) (change a into c ). This is particularly desirable when in fact only a few values of the function are to be calculated.1. We saw in Section 5. for instance. .2 using a memory function.. v ) { sets T [i ] to the value v } function val (i) { returns the last value given to T [i ].2. if any .6. see Section 6. 5 Problem 5. n ] can be virtually initialized with the help of two auxiliary arrays B [1 . transform u into v with the smallest possible number of operations of the following types : delete a character. n ] and a few pointers. n ] and P [1 . You should write three algorithms. Show that this transformation is not optimal.7. We want to * Problem 5. the calculation takes the same amount of time but needs space in Si(nk). n ] } procedure store (i . that we can calculate a binomial coefficient (k] using a time in 0 (nk) and space in 0(k).1. (For an example.. Implemented using a memory function. procedure init { virtually initializes T [1 ..) Show how an array T [1 . change a character. but we do not know in advance which ones.

c } .8. However. it can happen that the cost of renting from i to j is higher than the total cost of a series of shorter rentals. have the multiplication table given in Table 5. This algorithm works correctly in a country where there are coins worth 1.2. In the introduction to Chapter 3 we saw a greedy algorithm for making change. b. Problem 5. and that tells us what these operations are. (b (b (b (ba )))) = a as well.) For each possible departure point i and each possible arrival point j the company's tariff gives the cost of a rental between i and j. and so on.8. The elements of Y.3.8. the length of the string x.4. 10.5.Sec.1. what is the computing time needed by your algorithm ? Problem 5. Find an efficient algorithm to determine the minimum cost of a trip by canoe from each possible departure point i to each possible arrival point j.8.) In terms of n. 5. Modify your algorithm from the previous problem so it returns the number of different ways of parenthesizing x to obtain a.8 Supplementary Problems 165 Write a dynamic programming algorithm that finds the minimum number of operations needed to transform u into v. (This expression is not unique.8. In terms of N. TABLE 5. how much time does your algorithm take ? Problem 5.8. For example. if x = bbbba. in which case you can return the first canoe at some post k between i and j and continue the journey in a second canoe. Thus ab = b. (It is next to impossible to paddle against the current. For instance. but it does not always find an optimal solution if there . AN ABSTRACT MULTIPLICATION TABLE Right-hand symbol Left-hand symbol a b c a b b b c b a a c a c C Find an efficient algorithm that examines a string x = x 1x2 x of characters of E and decides whether or not it is possible to parenthesize x in such a way that the value of the resulting expression is a. and 25 units. your algorithm should return "yes" because (b (bb ))(ba) = a. ba = c. There are N Hudson's Bay Company posts on the River Koksoak. At any of these posts you can rent a canoe to be returned at any other post downstream. 5. There is no extra charge if you change canoes in this way. Note that the multiplication defined by this table is neither commutative nor associative. How much time does your algorithm take as a function of the lengths of u and v ? Problem 5. Consider the alphabet E = { a.1.

also exists a coin worth 12 units (see the corresponding problem in Chapter 3). The general problem can be solved exactly using dynamic programming. Let n be the number of different coins that exist, and let T[1 .. n] be an array giving the value of these coins. We suppose that an unlimited number of coins of each value is available. Let L be a bound on the sum we wish to obtain. For 1 <= i <= n and 1 <= j <= L, let c_ij be the minimum number of coins required to obtain the sum j if we may only use coins of types T[1], T[2], ..., T[i], or c_ij = +infinity if the amount j cannot be obtained using just these coins.

i. Give a recurrence for c_ij, including the initial conditions.
ii. Give a dynamic programming algorithm that calculates all the c_ij, 1 <= i <= n, 1 <= j <= L. Your algorithm must consist simply of two nested loops (recursion is not allowed), and it may use only a single array of length L.
iii. As a function of n and L, how much time does your algorithm take?
iv. Give a greedy algorithm that can make change using the minimum number of coins for any amount M <= L once the c_ij have been calculated. Your algorithm should take a time in O(n + c_nM), provided the c_ij are already available.

* Problem 5.8.6. You have n objects, which you wish to put in order using the relations "<" and "=". For example, 13 different orderings are possible with three objects:

    A = B = C    A = C < B    C < A = B    A = B < C    B < A = C
    C < A < B    A < B = C    B < A < C    C < B < A    A < B < C
    A < C < B    B < C < A    B = C < A

Give a dynamic programming algorithm that can calculate, as a function of n, the number of different possible orderings. Your algorithm should take a time in O(n^2) and space in O(n).

** Problem 5.8.7. Ackermann's function is defined recursively as follows:

    A(0, n) = n + 1
    A(m, 0) = A(m-1, 1)              if m > 0
    A(m, n) = A(m-1, A(m, n-1))      if m, n > 0.

This function grows extremely rapidly.
i. Calculate A(2, 5), A(3, 3), and A(4, 4).
ii. Give a dynamic programming algorithm to calculate A(m, n). Your algorithm must be restricted to using a space in O(m), even though some of these memory words can grow quite large.
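Returning for a moment to the change-making problem above: one natural recurrence is c_ij = min( c_(i-1)j , 1 + c_i(j - T[i]) ). The following Python sketch is an illustration only -- the function name, the sentinel used for +infinity, and the single-array formulation are choices made here, not taken from the text.

def min_coins(T, L):
    # T: list of coin values; L: largest amount of interest.
    # Returns a list c where c[j] is the minimum number of coins summing to j,
    # or None if the amount j cannot be obtained (the role of +infinity).
    INF = float('inf')
    c = [INF] * (L + 1)
    c[0] = 0
    for coin in T:                       # allow one more coin type at each pass
        for j in range(coin, L + 1):     # single array of length L+1, two nested loops
            if c[j - coin] + 1 < c[j]:
                c[j] = c[j - coin] + 1
    return [x if x != INF else None for x in c]

For instance, min_coins([1, 5, 10, 12, 25], 15)[15] is 2 (a 10 and a 5), whereas the greedy algorithm of Chapter 3 would use the 12-unit coin and three 1-unit coins.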

9 REFERENCES AND FURTHER READING Several books are concerned with dynamic programming. able to solve the problem of chained matrix multiplications in a time in 0 (n log n). can be found in Hu and Shing (1982. comes from Gilbert and Moore (1959). m ] such that at every instant val [i ] = A (i. and Lauriere (1979).5.1). A theoretically more efficient algorithm is known : Fredman (1976) shows how to solve the problem in a time in O (n 3(log log n / log n) 1/3 ). a more efficient algorithm. (Hint : use two arrays val [0. 1973).2 is supplied by the algorithm in Warshall (1962). Both Floyd's and Warshall's algorithms are essentially the same as the one in Kleene (1956) to determine the regular expression corresponding to a given finite automaton (Hopcroft and Ullman 1979). The improvements suggested by Problems 5. a hexagon can be cut in 14 different ways.5 for constructing optimal search trees.2 triangles using diagonal lines that do not cross is T (n .. 11 5. m ] and ind [0.1. polygon into n . including Sloane (1973) and Purdom and Brown (1985). 1984).. We mention only Bellman 1957. including the solution to Problem 5. as shown in Figure 5. Floyd's algorithm for calculating all shortest paths is due to Floyd (1962). 5.5.10 and 5. the (n -1)st Catalan number (see Section 5. The algorithm of Section 5.3).5. A solution to Problem 5. Cutting a hexagon into triangles. Catalan's numbers are discussed in many places. this .10 that is both simpler and more general is given by Yao (1980).11 come from Knuth (1971. All these algorithms (with the exception of Fredman's) are unified in Tarjan (1981). The algorithm in Section 5.) 11 Prove that the number of ways to cut an n -sided convex Problem 5.9 167 References and Further Reading 0 Figure 5.8.8. grow quite large.8.8.5.9.3 is described in Godbole (1973) . Nemhauser (1966).4. The solution to Problem 5.Sec. For example. ind [i ]).1. Bellman and Dreyfus (1962).

Problem 5.8. A solution to Problem 5.2. comes from Exercise 2.8.2).1 is given in Wagner and Fischer (1974). Problem 5.7.12. which takes cubic time to carry out the syntactic analysis of any context-free language (Hopcroft and Ullman 1979).6 suggested itself to the authors one day when they set an exam including a question resembling Problem 2. 5 paper gives a sufficient condition for certain dynamic programming algorithms that run in cubic time to be transformable automatically into quadratic algorithms. The algorithm for the travelling salesperson problem given in Section 5. which suggests how to avoid initial- izing a memory function.1. Problem 5.8.5 is discussed in Wright (1975) and Chang and Korsh (1976). Memory functions are introduced in Michie (1968) . Problem 5. An important dynamic programming algorithm that we have not mentioned is the one in Kasimi (1965) and Younger (1967).8.11: we were curious to know what proportion of all the possible answers was represented by the 69 different answers suggested by the students (see also Lemma 10. for further details see Marsh (1970).6 comes from Held and Karp (1962).7 is based on Ackermann (1928). Problem 5.1. The optimal search tree for the 31 most common words in English is compared in Knuth (1973) with the tree obtained using the greedy algorithm suggested in Problem 5.168 Dynamic Programming Chap.5.8.8 is discussed in Sloane (1973). . Hopcroft and Ullman (1974).12 in Aho.

or all the edges. In this chapter we introduce some general techniques that can be used when no particular order of visits is required. and the fact that an edge exists between two nodes means that it is possible to get from the first to the second of these positions by making a single legal move. the shortest edge. Sometimes. The operations to be carried out are quite concrete : to "mark a node" means to change a bit in memory. In this case to "mark a node" means to take any appropriate measures that enable us to recognize a position we have already seen. of the node we are in the process of visiting) and possibly representations of a few other positions. In this case. and so on. To solve such problems. the structure of the problem is such that we need only visit some of the nodes or edges. of a graph. it does not really exist in the memory of the machine. and so on. for instance.1 INTRODUCTION A great many problems can be formulated in terms of graphs. A graph may be a data structure in the memory of a computer.6 Exploring Graphs 6. the graph exists only implicitly. the nodes are represented by a certain number of bytes. or to avoid arriving at 169 . the algorithms we have seen have implicitly imposed an order on these visits : it was a case of visiting the nearest node. At other times. the shortest route problem and the problem of the minimal spanning tree. all we have is a representation of the current position (that is. we often need to look at all the nodes. For instance. We shall use the word "graph" in two different ways. Most of the time. we often use abstract graphs to represent games : each node corresponds to a particular position of the pieces on the board. We have seen. and the edges are represented by pointers. to "find a neighbouring node" means to follow a pointer. When we explore such a graph. Up to now.

the same position twice, and so on. In this chapter we therefore do not distinguish the two cases: whether the graph is a data structure or merely an abstraction, the techniques used to traverse it are essentially the same.

6.2 TRAVERSING TREES

We shall not spend long on detailed descriptions of how to explore a tree. We simply remind the reader that in the case of binary trees three techniques are often used. If at each node of the tree we visit first the node itself, then all the nodes in the left-hand subtree, and finally all the nodes in the right-hand subtree, we are traversing the tree in preorder; if we visit first the left-hand subtree, then the node itself, and lastly the right-hand subtree, we are traversing the tree in inorder; and if we visit first the left-hand subtree, then the right-hand subtree, and finally the node itself, we are visiting the tree in postorder. Preorder and postorder generalize in the obvious way to nonbinary trees. These three techniques explore the tree from left to right; three corresponding techniques explore the tree from right to left. It is obvious how to implement any of these techniques using recursion.

Lemma 6.2.1. Suppose that visiting a node takes a time in Theta(1). Then the time T(n) needed to explore a binary tree containing n nodes is in Theta(n).

Proof. Since visiting a node takes a time in Theta(1), the time required is bounded above by some constant c. Without loss of generality, we may suppose that c >= T(0). Suppose further that we are to explore a tree containing n nodes, n > 0, of which one node is the root, g nodes are situated in the left-hand subtree, and n - g - 1 nodes are in the right-hand subtree. Then

    T(n) <= max { T(g) + T(n-g-1) + c : 0 <= g <= n-1 },   n > 0.

We prove by mathematical induction that T(n) <= dn + c, where d is a constant such that d >= 2c. By the choice of c the hypothesis is true for n = 0. Now suppose that it is true for all n, 0 <= n < m, for some m > 0. Then

    T(m) <= max { T(g) + T(m-g-1) + c : 0 <= g <= m-1 }
         <= max { dg + c + d(m-g-1) + c + c : 0 <= g <= m-1 }
         =  dm + 3c - d
         <= dm + c,

so the hypothesis is also true for n = m. This proves that T(n) <= dn + c for every n >= 0, and hence T(n) is in O(n). On the other hand, it is clear that T(n) is in Omega(n), since each of the n nodes is visited. Therefore T(n) is in Theta(n).
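As a concrete illustration of the three left-to-right traversals, here is a short Python sketch (rather than the book's notation); the Node class with value, left and right fields is an assumption made here. Each procedure does a constant amount of work per node, in line with the lemma.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(t, visit):
    if t is not None:
        visit(t)                   # the node first ...
        preorder(t.left, visit)    # ... then the left-hand subtree ...
        preorder(t.right, visit)   # ... then the right-hand subtree

def inorder(t, visit):
    if t is not None:
        inorder(t.left, visit)
        visit(t)                   # the node between the two subtrees
        inorder(t.right, visit)

def postorder(t, visit):
    if t is not None:
        postorder(t.left, visit)
        postorder(t.right, visit)
        visit(t)                   # the node last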

Problem 6.2.1. Prove that for any of the techniques mentioned, a recursive implementation takes memory space in Omega(n) in the worst case.

* Problem 6.2.2. Show how the preceding exploration techniques can be implemented so as to take only a time in Theta(n) and space in Theta(1), even when the nodes do not contain a pointer to their parents (otherwise the problem becomes trivial).

Problem 6.2.3. Show how to generalize the concepts of preorder and postorder to arbitrary (nonbinary) trees. Assume the trees are represented as in Figure 1.9.5. Prove that both these techniques still run in a time in the order of the number of nodes in the tree to be traversed.

6.3 DEPTH-FIRST SEARCH : UNDIRECTED GRAPHS

Let G = <N, A> be an undirected graph all of whose nodes we wish to visit. Suppose that it is somehow possible to mark a node to indicate that it has already been visited. Initially, no nodes are marked.

To carry out a depth-first traversal of the graph, choose any node v in N as the starting point. Mark this node to show that it has been visited. Next, if there is a node adjacent to v that has not yet been visited, choose this node as a new starting point and call the depth-first search procedure recursively. On return from the recursive call, if there is another node adjacent to v that has not been visited, choose this node as the next starting point, call the procedure recursively once again, and so on. When all the nodes adjacent to v have been marked, the search starting at v is finished. If there remain any nodes of G that have not been visited, choose any one of them as a new starting point, and call the procedure yet again. Continue in this way until all the nodes of G have been marked. Here is the recursive algorithm.

procedure search(G)
    for each v in N do mark[v] <- not-visited
    for each v in N do
        if mark[v] <> visited then dfs(v)

procedure dfs(v : node)
    { node v has not previously been visited }
    mark[v] <- visited
    for each node w adjacent to v do
        if mark[w] <> visited then dfs(w)
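As an illustration only, the pair of procedures above might be transcribed into Python as follows; the representation of G as a dictionary of adjacency lists is an assumption made for the sketch.

def depth_first_search(graph):
    # graph: dict mapping each node to the list of its neighbours
    visited = set()          # plays the role of the array mark
    order = []               # nodes in the order the search first reaches them

    def dfs(v):
        visited.add(v)
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                dfs(w)

    for v in graph:          # one new starting point per connected component
        if v not in visited:
            dfs(v)
    return order

If the graph of Figure 6.3.1 is encoded this way, with the adjacency lists in numerical order and node 1 considered first, the order returned is 1, 2, 3, 6, 5, 4, 7, 8, matching the trace in Example 6.3.1.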

progress is blocked there are no more nodes to visit. . and that node 1 is the first starting point.1.1.3. When we visit a node. n)). The algorithm therefore takes a time in 0 (n) for the procedure calls and a time in 0 (a) to inspect the marks.9. 0 Problem 6. this work is proportional to a in total.Exploring Graphs 172 Chap.1. Show how a depth-first search progresses through the graph in Figure 6. progress is blocked a neighbour of node 1 has not been visited recursive call recursive call.1 if the neighbours of a given node are examined in numerical order but the initial starting point is node 6. 6 The algorithm is called depth-first search since it tries to initiate as many recursive calls as possible before it ever returns from a call. How much time is needed to explore a graph with n nodes and a edges ? Since each node is visited exactly once. we look at the mark on each of its neighbouring nodes. a depth-first search of the graph in Figure 6. An undirected graph.2).1 progresses as follows : dfs (1) dfs (2) dfs (3) dfs (6) dfs (5) dfs (4) dfs (7) dfs (8) initial call recursive call recursive call recursive call recursive call . If the graph is represented in such a way as to make the lists of adjacent nodes directly accessible (type lisgraph of Section 1. At this point the recursion "unwinds" so that alternative possibilities at higher levels can be explored. Example 6.3. there are n calls of the procedure dfs.3. Figure 6. The recursivity is only stopped when exploration of the graph is blocked and can go no further. If we suppose that the neighbours of a given node are examined in numerical order.3. The execution time is thus in 0 (max(a.3.

4). form a spanning tree for the graph in Figure 6.1 numbers the nodes as follows : node prenum 1 1 2 2 3 3 4 6 5 5 6 4 The depth-first search illus- 7 7 8 8 11 Of course. 3).3 Depth-First Search : Undirected Graphs 173 What happens if the graph is represented by an adjacency Problem 6. and so on.3. Problem 6.8). (3. If the graph being explored is not connected. (6.1 are (1.3. the nodes of the associated tree are numbered in preorder. A depth-first search also provides a way to number the nodes of the graph being visited : the first node visited (the root of the tree) is numbered 1. The initial starting point of the exploration becomes the root of the tree.4.3. 31. 14.2) rather than by lists of adjacent nodes ? Problem 6. matrix (type adjgraph of Section 1.pnum where pnum is a global variable initialized to zero. a depth-first search associates to it not merely a single tree. but depend on the chosen starting point and on the order in which neighbours are visited.6). (Continuation of Example 6. The root of the tree is node 1.2).3. one for each connected component of the graph. the tree and the numbering generated by a depth-first search in a graph are not unique. Edges that are not used in the traversal of the graph have no corresponding edge in the tree.9.3.7) and (7. Show how depth-first search can be used to find the connected components of an undirected graph. The edges of the tree correspond to the edges used to traverse the graph .3. (1.3. 6.1) trated by Example 6.1. Exhibit the tree and the numbering generated by the search of . To implement this numbering. the second is numbered 2. Example 6. (Continuation of Example 6. Problem 6. they are directed from the first node visited to the second. 2). 12.3.1. A depth-first traversal of a connected graph associates a spanning tree to the graph. we need only add the following two statements at the beginning of the procedure dfs : pnum F.3.5).Sec.2.2. (2. The corresponding directed edges (1.3. Example 6. but rather a forest of trees. and so on.3.1) The edges used in the depth-first search of Example 6.3. In other words. See Figure 6.2.pnum + 1 prenum [v] E-.3.3.

Problem 6.3.2.

6.3.1 Articulation Points

A node v of a connected graph is an articulation point if the subgraph obtained by deleting v and all the edges incident on v is no longer connected. For example, node 1 is an articulation point of the graph in Figure 6.3.1; if we delete it, there remain two connected components {2, 3, 5, 6} and {4, 7, 8}. A graph G is biconnected (or unarticulated) if it is connected and has no articulation points. It is bicoherent (or isthmus-free, or 2-edge-connected) if each articulation point is joined by at least two edges to each component of the remaining subgraph. These ideas are important in practice: if the graph G represents, say, a telecommunications network, then the fact that it is biconnected assures us that the rest of the network can continue to function even if the equipment in one of the nodes fails; if G is bicoherent, we can be sure that all the nodes will be able to communicate with one another even if one transmission line stops working.

The following algorithm finds the articulation points of a connected graph G.

a. Carry out a depth-first search in G, starting from any node. Let T be the tree generated by the depth-first search, and for each node v of the graph, let prenum[v] be the number assigned by the search.

b. Traverse the tree T in postorder. For each node v visited, calculate lowest[v] as the minimum of
   i. prenum[v];
   ii. prenum[w] for each node w such that there exists an edge {v, w} in G that has no corresponding edge in T; and
   iii. lowest[x] for every child x of v in T.

c. Articulation points are now determined as follows:
   i. The root of T is an articulation point of G if and only if it has more than one child.
   ii. A node v other than the root of T is an articulation point of G if and only if v has a child x such that lowest[x] >= prenum[v].

Example 6.3.4. (Continuation of Examples 6.3.1, 6.3.2, and 6.3.3.) The search described in Example 6.3.1 generates the tree illustrated in Figure 6.3.2. The edges of G that have no corresponding edge in T are represented by broken lines. The value of prenum[v] appears to the left of each node v, and the value of lowest[v] to the right. The values of lowest are calculated in postorder, that is, for nodes 5, 6, 3, 2, 8, 7, 4, and 1 successively. The articulation points of G are nodes 1 (by rule c(i)) and 4 (by rule c(ii)).

Problem 6.3.5. Verify that the same articulation points are found if we start the search at node 6.
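Steps (a) to (c) can also be combined into a single traversal, an idea taken up in one of the problems that follow. The Python sketch below stays close to that combined form; it is an illustration only, not the book's algorithm, and the graph representation and function name are assumptions made here.

def articulation_points(graph, root):
    # graph: dict of adjacency lists of a connected undirected graph
    prenum = {}                  # number assigned when the search first reaches a node
    lowest = {}
    parent = {root: None}
    points = set()
    counter = [0]

    def dfs(v):
        counter[0] += 1
        prenum[v] = lowest[v] = counter[0]
        children = 0
        for w in graph[v]:
            if w not in prenum:                  # {v, w} becomes a tree edge
                parent[w] = v
                children += 1
                dfs(w)
                lowest[v] = min(lowest[v], lowest[w])        # rule b(iii)
                if parent[v] is not None and lowest[w] >= prenum[v]:
                    points.add(v)                             # rule c(ii)
            elif w != parent[v]:                 # edge with no corresponding tree edge
                lowest[v] = min(lowest[v], prenum[w])        # rule b(ii)
        return children

    if dfs(root) > 1:                            # rule c(i): root with several children
        points.add(root)
    return points

With the graph of Figure 6.3.1 encoded as adjacency lists and the search started at node 1, the result would be {1, 4}, in agreement with Example 6.3.4.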

Prove that an edge of G that has no corresponding edge in T Problem 6. given an undirected graph that is connected but not biconnected.2.3. If x is a child of v and if lowest [x] < prenum [v]. there thus exists a chain of edges that joins x to the other nodes of the graph even if v is deleted.3 Depth-First Search : Undirected Graphs Figure 6.6.3. Informally.3.3. 175 A depth-first search tree .7. (a broken line in Figure 6. Show how to carry out the operations of steps (a) and (b) in parallel and write the corresponding algorithm. x to the parent of v if v is not the root and if Problem 6. Analyse the efficiency of your algorithm. Complete the proof that the algorithm is correct.10.8.2) necessarily joins some node v to one of its ancestors in T. we can define lowest [v] by lowest [v] = min(prenum [w] I you can get to w from v by following down as many solid lines as you like and then going up at most one broken line) .9. Your algorithm should find the smallest possible set of edges. Problem 6. prenum on the left and lowest on the right.3. nected graph is bicoherent. .3. On the other hand. Write an algorithm that decides whether or not a given con* Problem 6. * Problem 6. there is no chain joining lowest [x] >_ prenum [v].3. Write an efficient algorithm that. 6.Sec. finds a set of edges that could be added to make the graph biconnected.

Consider a depth-first search of the directed graph in Figure 6. the edges used to visit all the nodes of a directed graph G = <N. the algorithm progresses as follows : 1. then it is bicoherent. If the neighbours of a given node are examined in numerical order.1. then w is adjacent to v but v is not adjacent to w. If a graph is biconnected. With this change of interpretation the procedures dfs and search from Section 6. however. progress is blocked a neighbour of node 1 has not been visited recursive call recursive call .Exploring Graphs 176 Problem 6. 6. dfs(1) 2. w) exists and (w .12. Problem 6.3. the difference being in the interpretation of the word "adjacent". 5. 8.3. progress is blocked new starting point recursive call . 6. 3.1. Prove that a node v in a connected graph is an articulation point if and only if there exist two nodes a and b different from v such that every path joining a and b passes through v. and if the starting point is node 1.4 DEPTH-FIRST SEARCH : DIRECTED GRAPHS The algorithm is essentially the same as the one for undirected graphs. 7. given node are examined in decreasing numerical order. initial call recursive call recursive call . biconnected graph. 9. If a graph is bicoherent.4. In this case. Prove that for every pair of distinct nodes v and w in a Problem 6. ii.3 shows that the time taken by this algorithm is also in 0 (max(a. n)). node w is adjacent to node v if the directed edge (v . progress is blocked there are no more nodes to visit. If (v . dfs (2) dfs (3) dfs (4) dfs (8) dfs (7) dfs (5) dfs (6) 4. however.13.11.3 apply equally well in the case of a directed graph. v) does not. The algorithm behaves quite differently. then it is biconnected.4. 6 Prove or give a counterexample : i.3. 0 An argument identical with the one in Section 6. there exist at least two chains of edges joining v and w that have no nodes in common (except the starting and ending nodes). w) exists. Illustrate the progress of the algorithm if the neighbours of a Problem 6. In a directed graph. A > may form a forest of several trees even . Chap.

8) that lead from a node to one of its descendants . 1) or (7. 6).4.2).3) that join one node to another that is neither its ancestor nor its descendant. 4). 8). then prenum [v] > prenum [w].6). w) is an edge of the graph that has no corresponding edge in the forest.4. (1.) Let F be the set of edges in the forest.4. form the forest shown by the solid lines in Figure 6. Edges of this type are necessarily directed from right to left. Figure 6. and W.2. (The numbers to the left of each node are explained in Section 6.1. ii. (8.4 Depth-First Search : Directed Graphs Figure 6. Prove that if (v . if G is connected. . In the case of an undirected graph the edges of the graph with no corresponding edge in the forest necessarily join some node to one of its ancestors (Problem 6. 2). those like (5. those like (1. 3). where the values of prenum are attributed as in Section 6. 7).4) that lead from a node to one of its ancestors . A depth-first search forest. and if v is neither an ancestor nor a descendant of w in the forest. and (5. (2. i. This happens in our example : the edges used. Those like (3. In the case of a directed graph three kinds of edges can appear in A \ F (these edges are shown by the broken lines in Figure 6.2.Sec. 6.2) or (6. 177 A directed graph. namely (1.3.2. Problem 6. (4.2.4.4.4.3.

3. Figure 6.d) .Exploring Graphs 178 Chap.4.5 gives an example of this type of diagram.4 illustrates part of another partial ordering defined on the integers. Let F be the forest generated by a depth-first search on a Problem 6. This class of structures includes trees. and the edges correspond to activities that have to be completed to pass from one state to another. Such graphs also offer a natural representation for partial orderings (such as the relation "smaller than" defined on the integers and the set-inclusion relation). Prove that G is acyclic if and only if A \ F includes no 11 edge of type (i) (that is.4. from the initial state to final completion.4. a directed acyclic graph can be used to represent the structure of an arithmetic expression that includes repeated subexpressions : thus Figure 6.4.4. Figure 6.3 A directed acyclic graph. directed graph G = <N. A >. Depth-first search can be used to detect whether a given directed graph is acyclic. but is less general than the class of all directed graphs.1 Acyclic Graphs: Topological Sorting Directed acyclic graphs can be used to represent a number of interesting relations.4.4. Figure 6. Figure 6. directed acyclic graphs are often used to represent the different stages of a complex project : the nodes are different states of the project. from a node of G to one of its ancestors in the forest).4. Another directed acyclic graph.3 represents the structure of the expression (a +b) (c +d) + (a +b) (c . For example. 6 6. . (What is the partial ordering in question?) Finally.

nodes 11. Problem 6. in our example. For graphs as in Figure 6. Problem 6. 4.1. but the order 1. In the graph of Figure 6. 12.8}. the order A.4. for the graph of Figure 6. B.5. 6. Another component corresponds to the nodes {4. 3. the numbers of the nodes will be printed in reverse topological order. 6. then i precedes j in the list.4. . For the graph of Figure 6. 24 is also acceptable.8).Sec. 12. the natural order 1.2 Strongly Connected Components A directed graph is strongly connected if there exists a path from u to v and also a path from v to u for every distinct pair of nodes u and v. Prove this. Each of these subgraphs is called a strongly connected component of the original graph. we are interested in the largest sets of nodes such that the corresponding subgraphs are strongly connected.5.4.4. for instance. 8.4. as are several others. E.5. it is not possible to merge these two strongly connected components into a single component because there exists no path from node 4 to node 1.4. 3. 8.7. 3) and the corresponding edges form a strongly connected component.4. 6.4.4) and (1. a topological ordering of the states can be used to obtain a feasible schedule for the activities involved in the project. Despite the fact that there exist edges (1. For example. D. 2. 2. G will serve. j ). A topological sort of the nodes of a directed acyclic graph is the operation of arranging the nodes in order in such a way that if there exists an edge (i. C. If a directed graph is not strongly connected. 24 is adequate.4. The necessary modification to the procedure dfs to make it into a topological sort is immediate : if we add a supplementary line write v at the end of the procedure.4.4 Depth-First Search : Directed Graphs 179 C make coffee drink coffee dress D get out G bring documents Figure 6. what is the topological order obtained if the neighbours of a node are visited in numerical order and if the depth-first search begins at node I ? 6. F. 2.4. 4. Yet another directed acyclic graph.
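Before moving on, the one-line modification to dfs described above -- writing each node only when its exploration is finished, which prints the nodes in reverse topological order -- can be illustrated with the following Python sketch. The dictionary-of-adjacency-lists representation and the function name are assumptions made here, not the book's.

def topological_sort(graph):
    # graph: dict mapping each node of a directed acyclic graph to its successors.
    visited = set()
    finished = []                    # nodes in the order their exploration finishes

    def dfs(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs(w)
        finished.append(v)           # corresponds to "write v" at the end of dfs

    for v in graph:
        if v not in visited:
            dfs(v)
    return list(reversed(finished))  # reverse of the order in which nodes are written

For a divisibility graph like the one in Figure 6.4.4, one possible result is the natural order 1, 2, 3, 4, 6, 8, 12, 24 mentioned above.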

Example 6. The graph G' is illustrated in Figure 6. ii.1) are the subgraphs corresponding to the sets of nodes (5. In Section 6.7.4. 8 1.1.4. it follows that postnum [w] = n.6. . 3. and 2.180 Exploring Graphs Chap. Here. * Problem 6. Prove that if two nodes u and v are in the same strongly connected component of G.4. Figure 6. we number each node at the moment when exploration of the node has been completed. with postnum [4] = 5 . iii. the search reaches nodes 5 and 6. with postnum [1] = 6. (If G contains n nodes. choose as the second starting point the node that has the highest value of postnum among all the unvisited nodes . In other words.2. this time the search reaches nodes 1.3 we number each node at the instant when exploration of the node begins. 7. Carry out a depth-first search of the graph starting from an arbitrary node. To each tree in the resulting forest there corresponds one strongly connected component of G. The following algorithm finds the strongly connected components of a directed graph G . 3. the first depth-first search assigns the values of postnum shown to the left of each node in Figure 6. Construct a new graph G' : G' is the same as G except that the direction of every edge is reversed. we choose node 1. iv.) If the search starting at w does not reach all the nodes.1. The strongly connected components of the original graph (Fig.4. this time the remaining nodes 4. i. We carry out a depth-first search starting from node 5. the nodes of the tree pro- duced are numbered in postorder. For our second starting point. 11. On the graph of Figure 6. 7. with the values of postnum shown to the left of each node. 6). To do this.2 shows to the left of each node the number thus assigned. For the third starting point we take node 4.4. Begin this search at the node w that has the highest value of postnum. and 8 are all reached. we need only add at the end of procedure dfs the following two statements : nump F nump + 1 postnum [v] .4. Carry out a depth-first search in G'. we must first modify the procedure dfs. since postnum [5] = 8. and so on.4. 21 and '{ 4.6. The corresponding forest is illustrated in Figure 6.nump where nump is a global variable initialized to zero. then they are in the same tree when we carry out the depthfirst search of G'. For each node v of the graph let postnum [v] be the number assigned during the search.4. 6. 6 To detect the strongly connected components of a directed graph.

When carrying out the search of G'. and then go down the tree to r. When we carried out the search in G.7. This implies that there exists a path from r to v in G' . The second possibility is ruled out by the fact that postnum [r] > postnum [v]. We should have postnum [x] > postnum [r] since x is an ancestor of r. Since we chose r rather than v to be the root of the tree in question. we always choose as a new starting point (that is. But this is quite impossible. since there exists a path from v to x in G. say) of v and r.4 Depth-First Search : Directed Graphs Figure 6. 6. any such path must go up the tree from v to a common ancestor (x.Sec. In the third case it would be necessary for the same reason that r be to the right of v. thus there exists a path from v to r in G. as the root of a new tree) that node not yet visited with the highest value of postnum. and suppose v #r.4. there exists at least one path from v to r in G. It is harder to prove the result the other way.2).4.4.6. r was a descendant of v . three possibilities seem to be open a priori: r was an ancestor of v . there would exist a path from x to v in G'. Let v be a node that is in the tree whose root is r when we carry out the search of G'. Since in a depth-first search the edges not used by the search never go from left to right (Problem 6.4.1. or r was neither an ancestor nor a descendant of v. Next. Before choosing r as . 0 O O 0 O Figure 6. However. The forest of strongly connected components. 181 Reversing the arrows in the graph of Figure 6. we have postnum [r] > postnum [v].

5 BREADTH-FIRST SEARCH When a depth-first search arrives at some node v. by contrast. we would have already visited x (otherwise x rather than r would be chosen as the root of the new tree) and therefore also v. this completes the proof that the algorithm works correctly. we begin by giving a nonrecursive formulation of the depth-first search algorithm. first served".visited push v on P while P is not empty do while there exists a node w adjacent to top (P) such that mark [w] # visited do mark [w] . When a breadth-first search arrives at some node v.Exploring Graphs 182 Chap. 6 the root of a tree in the search of G'. This contradicts the hypothesis that v is in the tree whose root is r when we carry out the search of G. This implies that there exists a path from r to v in G. With the result of Problem 6. Let stack be a data type allowing two operations. To underline the similarities and the differences between the two methods. and not until this has been done does it go on to look at nodes farther away. Here is the modified depth-first search algorithm. .4. Problem 6.7. Unlike depth-first search. Estimate the time and space needed by this algorithm. and so on. The function first denotes the element at the front of the queue. push and pop.visited push w on P { w is now top (P) } pop top (P) For the breadth-first search algorithm. The function top denotes the element at the top of the stack. on the other hand.4. then there exist in G both a path from v to r and a path from r to v. they are therefore both in the same strongly connected component of G since there exist paths from u to v and from v to u in G via node r. This type represents a list of elements that are to be handled in the order "first come. 6. The type is intended to represent a list of elements that are to be handled in the order "last come. If two nodes u and v are in the same tree when we search G'. Here now is the breadth-first search algorithm. procedure dfs'(v : node) P F empty-stack mark [v] t. then a neighbour of this neighbour. We have thus proved that if node v is in the tree whose root is r when we carry out the search of G'. it next tries to visit some neighbour of v. it first visits all the neighbours of v. first served". breadth-first search is not naturally recursive. Only the first possibility remains : r was an ancestor of v when we searched G.6. we need a type queue that allows two operations enqueue and dequeue.

6 7. visited in numerical order. 7 8 8. but that neither u nor v is an ancestor of the other.visited enqueue v into Q while Q is not empty do u <--first (Q ) dequeue u from Q for each node w adjacent to u do if mark [w] # visited then mark [w] t.5.3.5. Node Visited Q 1.5 Breadth-First Search 183 procedure bfs (v : node ) Q .8 6.1 shows the tree generated by the search in Example 6.6.5. namely 0 (max(a.4. It is easy to show that the time required by a breadth-first search is in the same order as that required by a depth-first search. If the appropriate . After a breadth-first search in an undirected graph G = <N.1. n)). Problem 6. breadth-first search proceeds as follows. and if node I is used as the starting point. 6.5.1. 1 2. Show that the edges { u . 8 - As for depth-first search.6 4.7. 2 3.not-visited for each v E N do if mark [v] # visited then { dfs' or bfs } (v ) On the graph of Figure 6. 5 6. procedure search (G) for each v E N do mark [v] . let F be the set of edges that have a corresponding edge in the forest of trees that is generated.3.Sec.4 2.5.6 3.5. we can associate a tree with the breadth-first search.8 7. The edges of the graph that have no corresponding edge in the tree are represented by broken lines.visited enqueue w into Q In both cases we need a main program to start the search. 3 4. Figure 6. 4 5.1.empty-queue mark [v] F-. v } E A \ F are such that u and v are in the same tree.7. A >. if the neighbours of a node are Example 6.8 5.1.

path or pattern in the associated graph. One powerful application is in playing games of strategy by techniques known as minimax and alpha-beta . Backtracking is a basic search technique on implicit graphs.2) Sketch the corresponding forest and the remaining edges of the graph.Exploring Graphs 184 Chap.1.6. it may be wasteful or infeasible to build it explicitly in computer memory before applying one of the search techniques we have encountered so far. The economy in memory space is even more dramatic when nodes that have already been searched can be discarded.5. interpretation of the word "neighbouring" is used. Problem 6. Show how a breadth-first search progresses through the graph of Figure 6.2. Therefore computing time is saved whenever the search succeeds before the entire graph has been constructed.6 IMPLICIT GRAPHS AND TREES As mentioned at the outset of this chapter.2). making room for subsequent nodes to be explored.5. we can use the nodes of a graph to represent configurations in a game of chess and edges to represent legal moves (see Section 6. 6.5.4. and that necessary starting points are also chosen in numerical order. If the graph contains a large number of nodes. assuming that the neighbours of a node are always visited in numerical order. the breadth-first search algorithm can be applied without modification to either directed or undirected graphs. Often the original problem translates to searching for a specific node.4) Breadth-first search is most often used to carry out a partial exploration of certain infinite graphs or to find the shortest path from one point to another. A breadth-first search tree.5. Problem 6. 6 Figure 6.3 (Continuation of Problem 6. How many kinds of "broken" edges are possible ? (see Section 6.1. For instance. various problems can be thought of in terms of abstract graphs. Relevant portions of the graph can thus be built as the search progresses. An implicit graph is one for which we have available a description of its nodes and edges.

checking each time to see whether a solution has been obtained. For instance. use a different example. program Queens 1 for ii-lto 8do for i2-lto 8do for i8-lto 8do try F-(iI. At least one application of this technique dates back to antiquity : it allows one to find the way through a labyrinth without danger of going round and round in circles. Solve this problem without using a computer. Some optimization problems can be handled by the more sophisticated branch-and-bound technique. and also two pairs of queens lie on the same diagonal. Consider the classic problem of placing eight queens on a chess-board in such a way that none of them threatens any of the others. since the number of positions we have to check would be [64 J = 4. To illustrate the general principle. the number of cases to be considered is reduced to 88 = 16. 1. 4.. we shall. the vector (3.6 Implicit Graphs and Trees 185 pruning. however.6.777.852 cases. in the same column.. These graphs are usually trees.6.. (A queen threatens the squares in the same row. or on the same diagonals. The first obvious way to solve this problem consists of trying systematically all the ways of placing eight queens on a chess-board. looking for solutions to some problem. Using this representation. The first improvement we a than one queen in any given row. or at least they contain no cycles. This approach is of no practical use. 6.426. 6. We now discuss these notions.i2. we can write the algorithm very simply using eight nested loops.165. This might try consists of never putting more reduces the computer representation of the chess-board to a simple vector of eight elements. 2.1 Backtracking Backtracking algorithms use a special technique to explore implicit directed graphs. 6.. 7) does not represent a solution since the queens in the third and the sixth rows are in the same column. i8) if solution (try) then write try stop write "there is no solution" This time.Sec.368. although in fact the algorithm finds a solution and stops after considering only 1. 6. 8. each giving the position of the queen in the corresponding row.) Problem 6.299. 216. A backtracking algorithm carries out a systematic search. .1.

830 cases are in fact considered before the algorithm finds a solution. Although it is more complicated to generate permutations rather than all the possible vectors of eight integers between 1 and 8.320. how much time is Problem 6. Assuming that use (T) takes constant time. 6 Problem 6. . it suffices to verify that they are not in the same diagonal.Exploring Graphs 186 Chap. n ] and the initial call is If use (T) consists simply of printing the array T on a new Problem 6. Since we already know that two queens can neither be in the same row nor in the same column. . it is.6. perm (1). which prevents us from ever trying to put two queens in the same row.6. n ] is a global array initialized to [ 1. only 2. If you have not yet solved the previous problem. Hence we now represent the board by a vector of eight different numbers between 1 and 8. it is natural to be equally systematic in our use of the columns. procedure perm (i) if i = n then use (T) { T is a new permutation } else for j F.i to n do exchange T [i] and T [ j ] perm (i +1) exchange T [i] and T [ j ] Here T [ 1 .6. This approach reduces the number of possible cases to 8! = 40. show the result of calling perm (1) when n = 4. we might put each value in turn in the leading position and generate recursively. For instance. to execute the call perm (1) ? Now rework the problem 0 assuming that use (T) takes a time in 0(n). that is. 2..3. the informa0 tion just given should be of considerable help ! Once we have realized that the chess-board can be represented by a vector. as a function of n . The algorithm becomes program Queens 2 try initial permutation while try # final-permutation and not solution (try) do try F.. on the other hand. easier in this case to verify whether a given position is a solution.next-permutation if solution (try) then write try else write "there is no solution" There are several natural ways to generate systematically all the permutations of the first n integers. . If the preceding algorithm is used to generate the permutations.2. for each of these leading values.. needed. 0 line. all the permutations of the n -1 remaining elements.4. by a permutation of the first eight integers.

. of positive diagonals (at 45 degrees). and U[i]=V[i] for every i E [1 . On the other hand. a vector V is k-promising if. Solutions to the eight queens problem correspond to vectors that are 8-promising. Depth-first search is the obvious method to use. Let G = <N. particularly if we only require one solution. As a first step. even the best of these algorithms makes 720 useless attempts to put the last six queens on the board when it has started by putting the first two on the main diagonal.. V is (k +l )-promising. To print all the solutions to the eight queens problem. For k <.320. 4. 0. First. to decide if some given permutation represents a solution.6 Implicit Graphs and Trees 187 Starting from a crude method that tried to put the queens absolutely anywhere on the chess-board. Its root is the empty vector (k =0).i ). call Queens (0. V [1]). (2. . however : rather. Its leaves are either solutions (k = 8) or else they are dead ends (k < 8) such as [1. 0. the number of nodes in the tree is less than 8! = 40. The solutions to the eight queens problem can be obtained by exploring this tree. We do not generate the tree explicitly so as to explore it thereafter. where of course they threaten one another ! Backtracking allows us to do better than this. V ) E A if and only if there exists an integer k. it suffices to explore 114 nodes to obtain a first solution. . V [k]) threatens any of the others. This check can be speeded up if we associate with each promising node the sets of columns. it is straightforward to count the nodes using a computer : #N = 2057. and then to a better method still where the only positions considered are those where we know that two queens can neither be in the same row nor in the same column. 0 5 k S 8. j .. knowing that it is an extension of a (k -1)-promising vector.1. it seems at first sight that we have to check each of the 28 pairs of queens on the board. k] of integers between I and 8 is k-promising. k]. This technique has two advantages over the previous algorithm that systematically tried each permutation. . such that U is k -promising. However. 6. where try [I . Although it is not easy to calculate this number theoretically. We say that a vector V [1 . any vector V is k -promising. This graph is a tree. For instance. nodes are generated and abandoned during the course of the exploration. 0. we progressed first to a method that never puts two queens in the same row. 5. for 0S k 5 8. 0). we have V[i] -V [j ] it (i . Let N be the set of k-promising vectors. V [2]) . for every i #j between 1 and k. 8] is a global array.. in order to decide whether a vector is k-promising. A > be the directed graph such that (U. (k. 8] where it is impossible to place a queen in the next row without her threatening at least one of the queens already on the board. all these algorithms share an important defect : they never test a position to see if it is a solution until all the queens have been placed on the board.Sec. 0 <_ k < 8. if none of the k queens placed in positions (1.11s (at 135 degrees) controlled by the queens already placed.j . In fact. Secondly. and of negative diago^. Mathematically. let us reformulate the eight queens problem as a tree searching problem. we only need to check the last queen to be added. 2.

diag 135) { try [ 1 . As we might expect.001. col={try [i]I 1!5 i:5k}. ** Problem 6. the tree explored by the backtracking algorithm contains only 856. Here is the general scheme.j } { try (L. as a function of the number n of queens.. procedure backtrack (v [ 1 . the advantage to be gained by using the backtracking algorithm instead of an exhaustive approach becomes more pronounced as n increases.1 to 8 do if j Ocol and j -k Odiag45 and j +k Odiag 135 then try [k +I] F. Analyse mathematically. col. and a solution is obtained when the 262nd node is visited.600 possible permutations to be considered. k +1]) The otherwise should be present if and only if it is impossible to have two different solutions such that one is a prefix of the other.. k ]) { v is a k -promising vector) if v is a solution then write v { otherwise } for each (k +1)-promising vector w such that w [1 . 6 procedure Queens (k. on the other hand. for n = 12 there are 479.. diag45 u { j -k }. and diag135={try [i]+i-1I 15i <-k} } if k = 8 then { an 8-promising vector is a solution } write try else { explore (k+1)-promising extensions of try for j F.5.6.. and the first solution to be found (using the generator given previously) corresponds to the 4. diag 135 u { j +k)) It is clear that the problem generalizes to an arbitrary number of queens : how can we place n queens on an n x n "chess-board" in such a way that none of them threatens any of the others ? Show that the problem for n queens may have no solution. diag45 = {try[i]-i +l I 1 !5i !5k). k ] is k -promising.. .546. diag45.Exploring Graphs 188 Chap. the number of nodes in the tree of k -promising vectors. k ] = v [I . k ] do backtrack (w [I .044th position examined . Find a more interesting case than n = 2.6. k +11 is (k +1)-promising } Queens (k +1.189 nodes. For example. How does this number compare to n! ? Backtracking algorithms can also be used even when the solutions sought do not necessarily all have the same length. Problem 6.6. col u { j }.
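Purely as an illustration of this scheme applied to the n-queens generalization just mentioned, here is a Python sketch; the function names, the numbering of rows and columns from 0, and the decision to stop at the first solution found are choices made here, not the book's.

def queens(n):
    # Returns one placement of n non-attacking queens as a list 'placement' where
    # placement[k] is the column of the queen in row k, or None if no solution exists.
    placement = [0] * n

    def backtrack(k, col, diag45, diag135):
        # placement[0..k-1] is k-promising; col, diag45 and diag135 hold the
        # columns and diagonals controlled by the queens already placed
        if k == n:
            return True                          # an n-promising vector is a solution
        for j in range(n):
            if j not in col and j - k not in diag45 and j + k not in diag135:
                placement[k] = j                 # placement[0..k] is (k+1)-promising
                if backtrack(k + 1, col | {j}, diag45 | {j - k}, diag135 | {j + k}):
                    return True
        return False

    return placement if backtrack(0, set(), set(), set()) else None

For instance, queens(4) returns [1, 3, 0, 2] (columns numbered from 0), while queens(2) and queens(3) return None, since no solution exists for those sizes.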

6. For simplicity.2 Graphs and Games: An Introduction Most games of strategy can be represented in the form of directed graphs. and that chance plays no part in the outcome (the game is deterministic). The graph is infinite if there is no a priori limit on the number of positions possible in the game. it may be necessary to use breadth-first search to avoid the interminable exploration of some fruitless infinite branch. 6. (This last constraint does not apply to the eight queens problem where each solution involves exactly the same number of pieces. your algorithm should say so rather than calculating forever. A node of the graph corresponds to a particular position in the game. For instance. Give other applications of backtracking. Breadth-first search is also appropriate if we have to find a solution starting from some initial position and making as few changes as possible.6. or white. The ideas we present can easily be adapted to more general contexts. Instant Insanity is a puzzle consisting of four coloured cubes. that the game is symmetric (the rules are the same for both players). and an edge corresponds to a legal move between two positions. some positions in the game offer no legal moves.) The two following problems illustrate these ideas. .8.Sec. on one of the front faces. and hence some nodes in the graph have no successors : these are the terminal positions. Problem 6. We further suppose that no instance of the game can last forever and that no position in the game offers an infinite number of legal moves to the player whose turn it is. each of whom moves in turn. red. Each of the 24 faces is coloured blue.7. The n-queens problem was solved using depth-first search in the corresponding tree. In particular.9. 6.10. we assume that the game is played by two players. Give an algorithm that determines the shortest possible series of manipulations needed to change one configuration of Rubik's Cube into another. Give an algorithm capable of transforming some initial integer n into a given final integer m by the application of the smallest possible number of transformations f(i) = 3i and g(i) = Li/21. What does your algorithm do if it is impossible to transform n into m in this way ? * Problem 6. Show how to solve this problem using backtracking. and on one of the rear faces. on one of the four bottom faces. green. Problem 6.6. 15 can be transformed into 4 using four function applications : 4 = gfgg (15).6. Some problems that can be formulated in terms of exploring an implicit graph have the property that they correspond to an infinite graph. The four cubes are to be placed side by side in such a way that each colour appears on one of the four top faces.6. If the required change is impossible. In this case.6 Implicit Graphs and Trees 189 Problem 6.

4. except that he must take at least one and he must leave at least one. If he takes only one match. but this is not necessarily the case (think of stalemate in chess). Can a player who finds himself in a winning position lose if his opponent makes an "error"? Problem 6. assuming that neither player will make an error. The labels assigned to terminal positions depend on the game in question. you may verify that a player who has the first move in a game with eight matches cannot win unless his opponent makes an error.6. The player who removes the last match wins. A nonterminal position is a winning position if at least one of its successors is a losing position. The labels are assigned systematically in the following way. Example 6. ii. On the other hand. my opponent may take one. if at the outset I choose to remove a single match. we need only attach to each node of the graph a label chosen from the set win.1. find a relationship between this method of labelling the nodes and topological sorting (Section 6. iii. at least two matches are placed on the table between two players. Problem 6.6. We illustrate these ideas with the help of a variant of Nim (also known as the Marienbad game). If I take two of them. In the case of an acyclic finite graph (corresponding to a game that cannot continue for an indefinite number of moves). i. If he takes more than one. if you find yourself in a terminal position. On the other hand. each player in turn must remove at least one match and at most twice the number of matches his opponent just took. leaving four matches on the table.6. For most games. I can in turn remove a single match. two. . then there is no legal move you can make. iv. and he cannot prevent me from winning on my next turn. The first player removes as many matches as he likes.Exploring Graphs 190 Chap.1). draw. A nonterminal position is a losing position if all of its successors are winning positions. or to remove more than two. Thereafter. The label corresponds to the situation of a player about to move in the corresponding position. lose. Initially.11. There are no draws. three. The player who has the first move in a game with seven matches is therefore cer- tain to win provided that he does not make an error. I can remove all the matches that are left and win. or four. Any remaining positions lead to a draw. and you have lost .12. Grasp intuitively how these rules arise. There are seven matches on the table initially. 6 To determine a winning strategy for a game of this kind. then you may verify that my opponent has a winning strategy.

Figure 6. < 6.13. whereas he does have such a strategy in the game with four matches. Similarly. n -1 >.6.j. 6 >. In general. There are no heavy edges leaving a losing position.1.1 shows part of the graph corresponding to this game. The square nodes represent losing positions and the round nodes are winning positions.6. 5 > and their descendants to the graph of Figure 6. For n >. j > go to the j nodes < i -k. 0 > for i > 0 are inaccessible. The nodes of the graph corresponding to this game are therefore pairs < i . is < n .i . n 2. indicates that i matches remain on the table and that any number of them between 1 and j may be removed in the next move. three.6. All the nodes whose second component is zero correspond to terminal positions. j > with j odd and j < i -1 cannot be reached starting from any initial position. 1 <_ j <. corresponding to the fact that such positions offer no winning move. Your characterization of n should be as simple as possible. The heavy edges correspond to winning moves : in a winning position. Problem 6. It is also necessary to know the upper limit on the number of matches that it is permissible to remove on the next move. j >. Prove your answer. 7 >. i -k)>. 1 k <.14. Part of a game graph. . give a necessary and sufficient condition on n to ensure that the player who has the first move in a game involving n matches have a winning strategy. nodes < i. The node corresponding to the initial position in a game with n matches. < 7. 6. j >. A position in this game is not specified merely by the number of matches that remain on the table.Sec. min(2k. 0 > is interesting : the nodes < i.6. The edges leaving node < i . Add nodes < 8.6.6 Implicit Graphs and Trees 191 * Problem 6. Figure 6. We observe that a player who has the first move in a game with two. choose one of the heavy edges in order to win.1. but only < 0. < i . or five matches has no winning strategy.2.

k <. 2).false for i f. j ]. do not read the following paragraphs yet !) The first approach consists of using dynamic programming to create a Boolean array G such that G [i. 6 Problem 6. j > is winning . min(2k.6. As usual with dynamic programming. are there positions in which several different winning moves are available? Can this happen in the case of a winning initial position <n. j ) { returns true if and only if the node < i. This algorithm suffers from the same defect as the algorithm fib! in Section 1. j > is a winning position.! while k < j and G [i -k. j ] = true if and only if < i. as well as all the values of G [i. Can a winning position have more than one losing position among its successors ? In other words.5: it calculates the same value over and over.k + 1 G [i . min(2k.1 to i do k E.7.17. i -k )] . but rec (3. G [i j] is set to true if and only if configuration < i. rec (3.k <.not G [i -k .i } for k F. 0] E. 3) also calls rec (2. For instance. i -k)] do k E. k ] for 1 <. we proceed in a bottom-up fashion. procedure dyn (n) {foreach1<-j <<-i n. i -k)) then return true return false Problem 6.n-1>? The obvious algorithm to determine whether a position is winning is the following. min(2k . Modify this algorithm so that it returns an integer k such that k = 0 if the position is a losing position and 1 <. calculating all the values of G [1.1 to n do for j f. 1). rec (2. rec (5.15.I < i . function rec (i. j > is winning } G [0.1 to j do if not rec (i -k. Find two ways to remove this inefficiency.6. 1). Problem 6. (If you want to work on this problem. j ] F.j if it is a winning move to take away k matches. 3).16.k < j. 2) and rec (1. k ] for 1 <. we assume that 0 j <.Exploring Graphs 192 Chap. 2) and rec (1. 4) returns false having called successively rec (4. before calculating G [i .6.
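Here is one possible Python rendering of the recursive function rec and the bottom-up procedure dyn sketched above; the variable names follow the text, but the rendering itself is mine.

```python
def rec(i, j):
    """True iff <i, j> is a winning position (naive recursion: it recomputes
    the same values over and over, like fib1 of Section 1.7.5)."""
    for k in range(1, j + 1):
        if not rec(i - k, min(2 * k, i - k)):
            return True           # taking k matches leaves a losing position
    return False                  # every move leaves the opponent winning

def dyn(n):
    """Bottom-up dynamic programming: G[i][j] = True iff <i, j> is winning."""
    G = [[False] * (n + 1) for _ in range(n + 1)]     # in particular G[0][0] = False
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            k = 1
            while k < j and G[i - k][min(2 * k, i - k)]:
                k += 1
            G[i][j] = not G[i - k][min(2 * k, i - k)]
    return G

G = dyn(8)
print(rec(7, 6), G[7][6], G[8][7])   # True True False
```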

7). min(2k. n ] is global. function nim (i. . The game we have considered up to now is so simple that it can be solved without really using the associated graph.18.true for k . In an initial position with n matches. 0.6. j] E. j] then return G [i.1 to j do if not nim (i -k. In fact. The array T [0. determines in a time in O(1) the move to make in a situation where i matches remain on the table and the next player has the right to take at most j of them.6]. i -k)) then G [i. although the dynamic programming algorithm looks at 121 of them. only 27 nodes are really useful when we calculate G [15. j ). In this context dynamic programming leads us to calculate wastefully some entries of the array G that are never needed. initialized to false. The recursive algorithm given previously is inefficient because it recalculates the same value several times. however.true return true G [i.. 1 <. Thereafter a call on whatnow (i .14 > is a winning position as soon as we discover that its second successor < 13. there is no particular reason to favour this approach over dynamic programming. j ] whenever j is odd and j < i -1. n.l < i .Sec. For instance. 4 > is a losing position. 0. 1 <. 6 > is a winning or a losing position. G [1.. Here. It is no longer of any interest to know whether the next successor < 12. A solution that combines the advantages of both the algorithms consists of using a memory function (Section 5. since these nodes are never of interest.j <..k <. j] init [i. j] false return false At first sight.. Show how to improve its efficiency by also using the values of G [i . where n is an upper bound on the number of matches to be used.6 Implicit Graphs and Trees 193 The preceding algorithm only _ises G [0. j] .i.7. The initial call of precond (n) is an application of the preconditioning technique to be discussed in the next chapter. 6.. n. k].2 allows us. to avoid this initialization and to obtain a worthwhile gain in efficiency. without explanation. 14]. is an algorithm for determining a winning strategy that is more efficient than any of those given previously. first call precond (n). but there is no "bottom-up" reason for not calculating G [12. This involves remembering which nodes have already been visited during the recursive computation using a global Boolean array init [0. to calculate G [i. however. j ) if init [i. 0] and the values of Problem 6. k ] for 1 <. it never calculates an unnecessary value.k < j. because in any case we have to take the time to initialize the whole array init [0. About half this work can be avoided if we do not calculate G [i . n ]. j]. we know that < 15. Using the technique suggested in Problem 5. Because of its top-down nature. n ].
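The memory-function idea is particularly light in Python: a cache attached to the recursive function plays the role of the arrays G and init, and each position is evaluated at most once. The sketch below uses functools.lru_cache rather than the explicit init array of the text.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def nim(i, j):
    """True iff <i, j> is a winning position; each pair is computed once."""
    for k in range(1, j + 1):
        if not nim(i - k, min(2 * k, i - k)):
            return True
    return False

print(nim(15, 14))                 # True: <15, 14> is a winning position
print(nim.cache_info().currsize)   # number of distinct positions actually examined
```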

j ) if j < T [i] then { prolong the agony ! } return 1 return T [i] * Problem 6. this problem disappears on closer examination.k function whatnow (i. Remember first that in the game we just looked at. the graph contains so many nodes that it is quite out of the question to explore it completely. a winning position for Black. this graph allows us to play a perfect game of chess. say. as well as the older rule that makes a game a draw if the pieces return three times to exactly the same positions on the board. there are no cycles in the graph corresponding to chess. a position in chess is not defined simply by the positions of the pieces on the board. the International Chess Federation has rules that prevent games dragging on forever : for example.) Adapting the general rules given at the beginning of this section. and whether some pawn has just been moved two squares forward (to know whether a capture en passant is possible). or a draw. even with the fastest existing computers. since if two positions u and v of the pieces differ only by the legal move of a rook. to win whenever it is possible and to lose only when it is inevitable. Unfortunately (or perhaps fortunately for the game of chess). namely chess. Consider now a more complex game. For simplicity ignore the fact that certain positions are . a game is declared to be a draw after 50 moves in which no irreversible action (movement of a pawn. However. 6 procedure precond (n) T [0] t-for i F 1 to n do k<-1 while T [i -k ] <. which rooks and kings have moved since the beginning of the game (to know if it is legal to castle). Thus we must include in our notion of position the number of moves made since the last irreversible action. then we can move equally well from u to v and from v to u. we can therefore label each node as being a winning position for White.Exploring Graphs 194 Chap. the king not being in check. the graph associated with this game contains cycles. Thanks to such rules. Once constructed. At first sight. (For simplicity we ignore exceptions to the 50-move rule. Estimate the number of ways in which the pieces can be * Problem 6. placed on a chess-board. We also need to know whose turn it is to move.20. or a capture) took place. that is. a position is defined not merely by the number of matches on the table. Prove that this algorithm works correctly and that precond (n) takes a time in 0(n).6.6.19.2k do k F k + 1 T[i] . but also by an invisible item of information giving the number of matches that can be removed on the next move. Similarly. Furthermore.
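Below is a Python rendering of precond and whatnow. Two details are my own adaptations, since they are garbled in the text as reproduced here: the table T is passed around explicitly rather than kept as a global array, and T[0] receives an "infinite" sentinel so that taking all the remaining matches is always recognized as a winning move.

```python
def precond(n):
    """Build T[0..n] in time O(n): T[i] is the winning number of matches to
    take when i remain, whenever that number is small enough to be legal."""
    T = [0] * (n + 1)
    T[0] = float('inf')      # sentinel: an opponent left with 0 matches has already lost
    for i in range(1, n + 1):
        k = 1
        while T[i - k] <= 2 * k:
            k += 1
        T[i] = k
    return T

def whatnow(T, i, j):
    """Move to make, in time O(1), when i matches remain and at most j may be taken."""
    if j < T[i]:
        return 1             # losing position: prolong the agony!
    return T[i]

T = precond(20)
print(whatnow(T, 7, 6), whatnow(T, 8, 7))   # 2 1: a winning move exists with 7 matches, none with 8
```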

This evaluation function must take account of many factors : the number and the type of pieces remaining on both sides. val F-oo for each configuration w that is a successor of u do if eval(w) >. Exploration of the graph is usually stopped before the leaves are reached. it would be easy to determine the best move to make. When applied to a terminal position. that is. control of the centre. If the static evaluation function were perfect. count 1 point for each white pawn.6 Implicit Graphs and Trees 195 impossible. The best move would be to go to the position v that maximizes eval (v) among all the successors w of u. Although it does not allow us to be certain of winning. and so on. For example. Since a complete search of the graph associated with the game of chess is out of the question. since it would not hesitate to sacrifice a queen in order to take a pawn ! . 5 points for each white rook. subtract a similar number of points for each black piece. 31/4 points for each white bishop or knight. 6. -oc if White has been mated. Suppose it is White's turn to move from position u. Here we give only an outline of the technique. and large negative values to positions that favour Black. it is not practical to use a dynamic programming approach. the evaluation function should return +oo if Black has been mated. we want the value of eval (u) to increase as the position u becomes more favourable to White. and the positions thus reached are evaluated heuristically. and 0 if the game is a draw. and that both kings must be on the board).val then val . it underlies an important heuristic called minimax. Then we make the move that seems to cause our opponent the most trouble. Ignore also the possibility of 11 having promoted pawns. The first step is to define a static evaluation function eval that attributes some value to each possible position. It is customary to give values not too far from zero to positions where neither side has a marked advantage. freedom of movement.W It is clear that this simplistic approach would not be very successful using the evaluation function suggested earlier. This technique finds a move that may reasonably be expected to be among the best moves possible while exploring only a part of the graph starting from some given position. they can never be obtained from the initial position by a legal series of moves (but take into account the fact that each bishop moves only on either white or black squares. and 10 points for each white queen . using one of several possible criteria. A compromise must be made between the accuracy of this function and the time needed to calculate it.eval(w) V f. The minimax principle. Ideally. In this situation the recursive approach comes into its own. an evaluation function that takes good account of the static aspects of the position but that is too simplistic to be of real use might be the following : for nonterminal configurations.Sec. This is in a sense merely a systematic version of the method used by some human players that consists of looking ahead a small number of moves.
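Purely as an illustration, here is the simplistic material-count evaluation described above, written for a hypothetical position representation (a dictionary of piece counts plus flags for mate and draw). A real program would of course work from an actual board; the representation and names are mine.

```python
import math

VALUES = {'pawn': 1, 'knight': 3.25, 'bishop': 3.25, 'rook': 5, 'queen': 10}

def eval_static(position):
    """+inf if Black has been mated, -inf if White has been mated, 0 for a
    draw; otherwise the material balance from White's point of view."""
    if position.get('black_is_mated'):
        return math.inf
    if position.get('white_is_mated'):
        return -math.inf
    if position.get('draw'):
        return 0
    score = 0.0
    for piece, value in VALUES.items():
        score += value * position['white'].get(piece, 0)
        score -= value * position['black'].get(piece, 0)
    return score

start = {'white': {'pawn': 8, 'knight': 2, 'bishop': 2, 'rook': 2, 'queen': 1},
         'black': {'pawn': 8, 'knight': 2, 'bishop': 2, 'rook': 2, 'queen': 1}}
print(eval_static(start))   # 0.0: neither side has a material advantage
```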

which of course may be exactly the wrong rule to apply if it prevents White from finding the winning move : maybe if he looked further ahead the gambit would turn out to be profitable.Black (w. White should move to the position v given by val <. (Ideally. 6 If the evaluation function is not perfect. n) V f. . a better strategy for White is to assume that Black will reply with the move that minimizes the function eval.val then val F. on the other hand.min{eval(x) I x is a successor of w } if valw >. the better the position is supposed to be for him.valw v t-W There is now no question of giving away a queen to take a pawn. since the smaller the value taken by this function. and White. On the other hand. we are sure to avoid moves that would allow Black to mate immediately (provided we can avoid this). To add more dynamic aspects to the static evaluation provided by eval.-°° for each configuration w that is a successor of u do if Black (w.) We are now looking half a move ahead. n) if n = 0 or x has no successor then return eval(x) else return max { Black (w. n) if n = 0 or w has no successor then return eval(w) else return min { White (x. he would like a large negative value.val then val f. n -1) 1 x is a successor of w } function White (x.eval(w) else valw . val --°° for each configuration w that is a successor of u do if w has no successor then valw . To look n half-moves ahead from position U. tries to maximize the advantage he obtains from each move. n) >.196 Exploring Graphs Chap.W where the functions Black and White are the following : function Black (w. it is preferable to look several moves ahead. n -1) 1 w is a successor of x } We see why the technique is called minimax : Black tries to minimize the advantage he allows to White.
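In Python the pair of functions Black and White becomes the sketch below. The parameters successors and evaluate stand for the game-specific move generator and static evaluation function; passing them explicitly is my choice, not the book's.

```python
def black(w, n, successors, evaluate):
    """Value of position w with Black to move, looking n half-moves ahead."""
    succ = successors(w)
    if n == 0 or not succ:
        return evaluate(w)
    return min(white(x, n - 1, successors, evaluate) for x in succ)

def white(x, n, successors, evaluate):
    """Value of position x with White to move, looking n half-moves ahead."""
    succ = successors(x)
    if n == 0 or not succ:
        return evaluate(x)
    return max(black(w, n - 1, successors, evaluate) for w in succ)

def whites_move(u, n, successors, evaluate):
    """White's move from u: the successor whose value for Black is largest."""
    return max(successors(u), key=lambda w: black(w, n, successors, evaluate))
```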

j > represent the j th node in the i th row of the tree.2 shows part of the graph corresponding to some game. . What can you say about White (u. he will choose the second of the three possible moves. This second type of improvement is generally known as alpha-beta pruning. Similarly. We give just one simple example of the technique. Example 6. 197 The minimax principle. Example 6.2.2. 1 > starting from the values calculated by the function eval for the leaves < 4.21. we carry out a bounded depth-first search in the tree.6.6.3.6. Figure 6.Sec.6. j >. If the values attached to the leaves are obtained by applying the function eval to the corresponding positions. 12800). Let < i. Look back at Figure 6.6. 6. In the example we suppose that player A is trying to maximize the evaluation function and that player B is trying to minimize it. The basic minimax technique can be improved in a number of ways.2. Problem 6. visiting the successors of a given node from left to right.6 Implicit Graphs and Trees Player Rule A max Figure 6. Alpha-beta pruning. the values for the other nodes can be calculated using the minimax rule. We want to calculate the value of the root < 1. If A plays so as to maximize his advantage. the exploration of certain branches can be abandoned early if the information we have about them is already sufficient to show that they cannot possibly influence the values of nodes farther up the tree.6. 1 <_ j 5 18. For example. Let u correspond to the initial position of the pieces. it may be worthwhile to explore the most promising moves in greater depth. This assures him of a value of at least 10. To do this. besides the fact that it would take far too long to calculate in practice? Justify your answer.

-3 10 ? ? Alpha-beta pruning. we arrive after evaluation of the leaf <4.22. the exact value of node < 3. 1 > has value 5. we are in the situation shown in Figure 6.6.4> at the situation illustrated in Figure 6. Since we already know that the value of < 1. there is no need to evaluate the other children of node < 2. To establish that the value of the root <1. 1 > (a node that minimizes eval) has value at most 5.6.3. Write a program that can beat the world backgammon champion.2> has value 5.1> is 10. we visit only 19 of the 31 nodes in the tree. 1> is evaluated. game of strategy.4. 1> has value -7 and that <3.3. (This has already been done !) ** Problem 6. 3 > cannot have any influence on the value of node < 2. 1 >. we know that <4.6.6. 1 > has value at most -3.23. Node < 2. Similarly.6. I> is at least 10. after evaluation of the leaf <4. 3 >. Write a program capable of playing brilliantly your favourite 11 ** Problem 6. we say that the corresponding branches of the tree have been pruned. ** Problem 6.Exploring Graphs 198 Chap. 1> (a node that maximizes eval) has value at least -7. whereas node < 2. 3 > has value at least 10. After evaluation of the second leaf <4.24. 6 If we want to abandon the exploration of certain branches because it is no longer useful.6. 11>. It is therefore unnecessary to evaluate the other descendants of node <3. Thus as soon as the first leaf <4. Since node <3. < 3. we know that <4. .3>. we have to transmit immediately to the higher levels of the tree any information obtained by evaluating a leaf. Continuing in this way. What modifications should be made to the principles set out in this section to take account of those games of strategy in which chance plays a certain part ? What about games with more than two players ? Player Rule A max B min A max B eva! -7 5 Figure 6.2>. and < 2. 3 > has value at most 1.
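The pruning illustrated in this example can be written once and for all. The sketch below is the usual generic formulation of alpha-beta, not code from the book: alpha is the value the maximizing player is already assured of, beta the value the minimizing player is already assured of, and a branch is abandoned as soon as alpha >= beta.

```python
import math

def alphabeta(pos, depth, alpha, beta, maximizing, successors, evaluate):
    """Minimax value of pos, exploring at most `depth` half-moves, with
    alpha-beta pruning.  successors and evaluate are the game-specific parts."""
    succ = successors(pos)
    if depth == 0 or not succ:
        return evaluate(pos)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # prune: this branch cannot influence the root
        return value
    else:
        value = math.inf
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, successors, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Typical call from the root, with the maximizing player to move:
# alphabeta(u, depth, -math.inf, math.inf, True, successors, evaluate)
```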

In general terms we may say that a depth-first search finishes exploring nodes in inverse order of their creation. We return to the travelling salesperson problem (see Sections .6). then we do not need to go on exploring this part of the graph. calculation of these bounds is combined with a breadthfirst or a depth-first search.6. and serves only. so that it can be explored first. 6. Again. but also to choose which of the open paths looks the most promising. as we have just explained.3 Branch-and-Bound Like backtracking. and a priority list to hold those nodes that have been generated but not yet explored. using a stack to hold those nodes that have been generated but not yet explored fully . Example 6. If the bound shows that any such solution must necessarily be worse than the best solution we have found so far. More often. An example illustrates the technique. 3.2 and 5. this graph is usually acyclic or even a tree.6. More alpha-beta pruning. Branch-and-bound uses auxiliary computations to decide at each instant which node should be explored next. the calculated bound is used not only to close off certain paths.6 Implicit Graphs and Trees Player Rule A max 199 It is not necessary to explore these branches B eval Figure 6. 6. In the simplest version.6. This time. we are looking for the optimal solution to some problem. a breadth-first search finishes exploring nodes in the order of their creation. using this time a queue to hold those that have been generated but not yet explored (see Section 6.5). however.4. to prune certain branches of a tree or to close certain paths in a graph.4.4.Sec. At each node we calculate a bound on the possible value of any solutions that might happen to be farther on in the graph. branch-and-bound is a technique for exploring an implicit directed graph.

for example.3). Returning to node 1 costs at least 2. 6 Let G be the complete graph on five points with the following distance matrix : 014 4 10 20 14 0 7 8 7 4 5 0 7 16 11 7 9 0 2 18 7 17 4 0 We are looking for the shortest tour starting from node 1 that passes exactly once through each other node before finally returning to node 1. the minimum of 14/2. and (1.3) corresponds to two complete tours : (1. Its length is therefore at least 2+6+4+3+3+2= 20.) Our search begins by generating (as though for a breadth-first search) the four possible successors of the root. a visit to each of the nodes 2. node (1. (1.4. 11/2. 10/2. and 18/2. 3. it was computed here for the sake of illustration. 2) must include The trip 1-2 : 14 (formally.5). For instance. is calculated as follows. suppose that half the distance between two points i and j is counted at the moment we leave i. Obviously. We have just calculated the lower bound shown for this node. 4. 4/2. 4. The successors of a given node correspond to paths in which one additional node has been specified. visiting node 2 costs us at least 6 (at least 5/2 when we arrive and at least 7/2 when we leave). 2). Notice that this calculation does not imply the existence of a solution that costs only 20. a complete tour must include a departure from node 1. 1). leaving node I costs us at least 2. To calculate this bound.2). 3. and 5 (not necessarily in this order) and a return to 1. A tour that begins with (1. namely the lowest of the values 14/2. 2.Exploring Graphs 200 Chap. this arbitrary choice of a starting point does not alter the length of the shortest tour.5 the root of the tree specifies that the starting point for our tour is node 1. or 5: minimum 2 .3. 5. To obtain a bound on the length of a path.6.2. nodes (1. The nodes in the implicit graph correspond to partially specified paths. 1) and (1. leaving I for 2 and arriving at 2 from 1: 7+7 ) A departure from 2 toward 3. it suffices to add elements of this kind. namely. and 20/2. For instance. or 5: minimum 7/2 A visit to 3 that neither comes from I nor leaves for 2: minimum 11/2 A similar visit to 4: minimum 3 A similar visit to 5: minimum 3 A return to 1 from 3. At each node we calculate a lower bound on the length of the corresponding complete tours. (This bound on the root of the implicit tree serves no purpose in the algorithm . 4/2.5. The bound for node (1. For instance. (1.4).4. 4. and the other half when we arrive at j. 4. In Figure 6. Similarly.
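The bound just described is easy to check by machine: for every node, add half the cheapest way of leaving it to half the cheapest way of arriving at it. The short sketch below (the function name is mine) reproduces the value 20 obtained for the root of the example.

```python
# Distance matrix of the example; rows = departure city, columns = arrival city.
D = [
    [ 0, 14,  4, 10, 20],
    [14,  0,  7,  8,  7],
    [ 4,  5,  0,  7, 16],
    [11,  7,  9,  0,  2],
    [18,  7, 17,  4,  0],
]

def root_bound(D):
    """Lower bound on every complete tour: half the cheapest departure plus
    half the cheapest arrival, summed over all the nodes."""
    n = len(D)
    total = 0.0
    for v in range(n):
        cheapest_out = min(D[v][w] for w in range(n) if w != v)
        cheapest_in = min(D[w][v] for w in range(n) if w != v)
        total += (cheapest_out + cheapest_in) / 2
    return total

print(root_bound(D))   # 20.0, the bound attached to the root of the search tree
```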

5) of this node are therefore generated. 2) as follows : The trip 1. 2). .4) and (1. The length of such a tour is therefore at least 31. whose bound is 24. 1) is 31. The three children (1. for instance. We find that the length of the tour (1. 2. The most promising node is now (1.3.2. 3).Sec.3. 5.2.6 Implicit Graphs and Trees 201 1 Bound 20 1. If we are only concerned to find one optimal solution.4).3 -2: 9 A departure from 2 toward 4 or 5: minimum 7/2 A visit to 4 that comes from neither 1 nor 3 and that leaves for neither 2 nor 3: minimum 3 A similar visit to 5: minimum 3 A return to 1 from 4 or 5: minimum 11/2. corresponds to exactly one complete tour (1. 6. we do not need to continue exploration of the nodes (1. The other bounds are calculated similarly. which gives a total length of at least 24.5) are generated. Next. we do not need to calculate a lower bound since we may calculate immediately its length 37. 4.2).3.3.3. 2). (1. 3.5.2. 3. as node (1. 5 Bound 41 Figure 6. Branch-and-bound. This time. and (1. 1).6. To give just one example.4. the most promising node seems to be (1. we calculate the bound for node (1.5.4).3. 3. 2 :Z Bound 31 1.2.3. Its two children (1.

4) to explore. Problem 6. 4.25. and all the extra work we have done is wasted. Even exploration of the node (1.Exploring Graphs 202 Chap. Nevertheless. The heap is an ideal data structure for holding this list. Implement this algorithm on a computer and test it on our example. Problem 6.27. 5. it almost always pays to invest the necessary. the bound at each node being obtained by solving a related problem in linear programming with continuous variables.6. 6 (1. the optimal solution does not come from there. After looking at the two complete tours (1.4) is pointless. How much time do your algorithms take? . 3.7 SUPPLEMENTARY PROBLEMS Write algorithms to determine whether a given undirected Problem 6.5. we find that the tour (1. 3. 5). 2.7. 4. we have looked at merely 15 of the 41 nodes that are present in a complete tree of the type illustrated in Figure 6. Solve the same problem using the method of Section 5. 5. 1) of length 30 is optimal.4. time in calculating the best possible bound (within reason).5). the technique is sufficiently powerful that it is often used in practical applications.3. one finds applications such as integer programming handled by branch-and-bound.6.6. but on the other hand. 2. The only child to offer interesting possibilities is (1. 1) and (1. (Why?) There remains only node (1. we shall most likely spend more time at each one calculating the corresponding bound. The need to keep a list of nodes that have been generated but not yet completely explored. 3. 4. 2. To obtain our answer. 1). 3. however. which cannot possibly lead to a better solution. no elegant recursive formulation of branch-and-bound is available to the programmer. situated in all the levels of the tree and preferably sorted in order of the corresponding bounds. In the worst case it may turn out that even an excellent bound does not allow us to cut any branches off the tree.26. It is next to impossible to give any idea of how well the technique will perform on a given problem using a given bound. graph is in fact a tree (i) using a depth-first search . Unlike depth-first search and its related techniques. (ii) using a breadth-first search. For instance. * Problem 6. There is always a compromise to be made concerning the quality of the bound to be calculated : with a better bound we look at less nodes. 3) was the most promising node. This example illustrates the fact that although at one point (1. 5) and (1. for problems of the size encountered in applications. makes branch-and-bound quite hard to program. In practice. 6.6. Show how to solve the same problem using a backtracking algorithm that calculates a bound as shown earlier to decide whether or not a partially defined path is promising. 5.6.1.
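A complete best-first branch-and-bound for this example can be sketched around Python's heapq module, which provides exactly the heap recommended above for the list of generated but not yet explored nodes. The bound used below (cost so far plus the cheapest admissible way of leaving the current city and each unvisited city) is deliberately simpler than the one worked out in the text, but it is still a valid lower bound, so the same optimal tour is found; apart from the distance matrix, everything in this sketch is my own illustration.

```python
import heapq

D = [
    [ 0, 14,  4, 10, 20],
    [14,  0,  7,  8,  7],
    [ 4,  5,  0,  7, 16],
    [11,  7,  9,  0,  2],
    [18,  7, 17,  4,  0],
]

def bb_tsp(D):
    """Best-first branch-and-bound: a node is a partial tour starting at
    city 0; the heap always yields the open node with the smallest bound."""
    n = len(D)

    def bound(path, cost):
        last = path[-1]
        unvisited = [v for v in range(n) if v not in path]
        if not unvisited:
            return cost + D[last][0]
        b = cost + min(D[last][w] for w in unvisited)
        for u in unvisited:
            b += min(D[u][w] for w in unvisited + [0] if w != u)
        return b

    best_cost, best_tour = float('inf'), None
    heap = [(bound([0], 0), 0, [0])]
    while heap:
        b, cost, path = heapq.heappop(heap)
        if b >= best_cost:
            break                      # no open node can beat the best tour found
        if len(path) == n:             # close the tour; b is then its exact length
            best_cost, best_tour = b, path + [0]
            continue
        for w in range(n):
            if w not in path:
                new_path = path + [w]
                new_cost = cost + D[path[-1]][w]
                nb = bound(new_path, new_cost)
                if nb < best_cost:
                    heapq.heappush(heap, (nb, new_cost, new_path))
    return best_cost, best_tour

print(bb_tsp(D))   # (30, [0, 3, 4, 1, 2, 0]): the tour 1-4-5-2-3-1 of length 30
```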

* Problem 6. Show how the problem of carrying out a syntactic analysis of a programming language can be solved in top-down fashion using a backtracking algorithm. Your algorithm should accept the graph represented by its adjacency matrix (type adjgraph of Section 1. from (1. n). Without being completely formal (for instance.5. Euler's problem.7. (This approach is used in a number of compilers. then you may not pass through point (i.7 Supplementary Problems 203 Problem 6. * Problem 6. For instance 10= 1x2x2x2x2/3x2.Sec.4. Give a backtracking algorithm that finds a path. any resulting fraction being dropped).7. 6. Figure 6. ii. j ). Repeat Problem 6.1 gives an example.7. indicate how to solve this problem by branch-and-bound. Operations are executed from left to right. j] is true. general. Notice that a running time in 0 (n) for this problem is remarkable given that the instance takes a space in Q(n 2) merely to v:'rite down..3. whereas the edge (p .2.4 for a directed graph. j] is false.9. If M [i.. and if so. to find the root.7. if one exists. The value 1 is available.7. Write an algorithm that can detect the presence of a sink in G in a time in O (n). Show how the problem can be expressed in terms of exploring a graph and find a minimum-length solution.) A Boolean array M [ 1 . . A node p of a directed graph G = <N.7.6. n. v) does not exist. A > is called a sink if for every node v E N .8. then you may pass through point (i. I . We want to express 13 in this way.7. To construct other values. An Euler path in a finite undirected graph is a path such that every edge appears in it exactly once.2). How much time does your algorithm take? *Problem 6. it is permissible to go to adjacent points in the same row or in the same column. v #p.7. j ). In Problem 6.7. you may use statements such as "for each point v that is a neighbour of x do "). 1) to (n. Without giving all the details of the algorithm. you have available the two operations x2 (multiplication by 2) and /3 (division by 3. the edge (v . starting from a given point. Problem 6. your algorithm must be clear and precise. i. Write an algorithm that determines whether or not a given graph has an Euler path.7. and prints the path if so. if M [i. p) exists. How much time does your algorithm take? * Problem 6. Write an algorithm to determine whether a given directed graph is in fact a rooted tree. n ] represents a square maze.

7.2.1. 6 17 Figure 6. Hopcroft. 1983). Lawler (1976). A solution of problem 6. 1970). The use of this technique to solve the travelling salesperson problem is described in Bellmore and Nemhauser (1968). Nievergelt. The mathematical notion of a graph is treated at length in Berge (1958.6. and alpha-beta pruning. Some algorithms for playing chess appear in Good (1968). Problem 6. A maze. Papadimitriou and Steiglitz (1982).Exploring Graphs 204 -+ 11 . Other algorithms based on depth-first search appear in Aho. Even (1980). Backtracking is described in Golomb and Baumert (1965) and techniques for analysing its efficiency are given in Knuth (1975a). The book by Nilsson (1971) is a gold mine of ideas concerning graphs and games. and Deo (1977).2 is given in Robson (1973). and Ullman (1974.10 is solved in Rosenthal and Goldner (1977).8 REFERENCES AND FURTHER READING There exist a number of books concerning graph algorithms or combinatorial problems that are often posed in terms of graphs. The latter is analysed in Knuth (1975b). 6. A linear time algorithm for testing the planarity of a graph is given in Hopcroft and Tarjan (1974). Several applications of depth-first search are taken from Tarjan (1972) and Hopcroft and Tarjan (1973).23) is given in Deyong (1977). and Tarjan (1983). A lively account of the first time a computer program beat the world backgammon champion (Problem 6. We mention in chronological order Christofides (1975). -+ Chap. For a more technical description of this feat. the minimax technique. consult Berliner (1980). Reingold. The branch-andbound technique is explained in Lawler and Wood (1966). Gondran and Minoux (1979).3. .

1 Introduction Let I be the set of instances of a given problem. Example 7.1. J might be a set of grammars in Backus-Naur form for such languages as Algol. A preconditioning algorithm for this problem is an algorithm A that accepts as input some element j of J and produces as output a new algorithm Bj . In this case I is the set of instances of the type "Is kEK a valid program in the language defined by the grammar j E J ? ".1.1. 205 . Suppose each instance jet can be separated into two components j E J and k E K (that is. Pascal.1 PRECONDITIONING 7. then the application of Bj on k gives the solution to the instance < j . For example. precomputation of auxiliary tables may lead to a more efficient algorithm. I c J x K ). k > of the original problem. Let K be a set of programs.7 Preconditioning and Precomputation If we know that we shall have to solve several similar instances of the same problem. k > E I. Let J be a set of grammars for a family of programming languages. it is sometimes worthwhile to invest some time in calculating auxiliary results that can thereafter be used to speed up the solution of each instance. This is preconditioning. The general problem is to know whether a given program is syntactically correct with respect to some given language. Even when there is only one instance to be solved. This algorithm Bj must be such that if k E K and < j. 7. Simula. and so on.

else. and on the other hand. . Example 7. and t2=a(j)+ n bj (k. k 1 >. Let J be a set of sets of keywords. In this case it is sometimes impractical to calculate and store ahead of time the #1 solutions to all the relevant instances.2. 7 One possible preconditioning algorithm for this example is a compiler generator : applied to the grammar j E J . k > from scratch consists of producing Bj from j and then applying it on k.) if we work without preconditioning.k. we simply apply the O compiler Bj to k. k). . <j. jusqu'a. for example to ensure a sufficiently fast response time for a real-time application. k2>. to stop a runaway reactor. alors. sinon. k) = the time required to solve < j. to.=1 with preconditioning. { for. Such an application of preconditioning may be of practical importance even if only one crucial instance is solved in the whole lifetime of the system : this may be just the instance that enables us. . pas 1 1.) . The time you spend studying before an exam may also be considered as an example of this kind of preconditioning. <j. then.. Obviously. Thereafter. It may. In this case the time taken to solve all the instances is ti = t(j. We have to solve a series of instances <j. by 1. k) 5 a ( j) + bb (k). finsi } . we are wasting our time using preconditioning if bj (k) > t ( j. Preconditioning can be useful in two situations. a. Whenever n is sufficiently large. We need to be able to solve any instance i E I very rapidly. Let a( j) = the time required to produce Bj given j b j (k) = the time required to apply Bj to k t ( j.1. be possible to calculate and store ahead of time #J preconditioned algorithms. it generates a compiler Bj for the language in question. { si. endif 1. k > directly It is usually the case that bj (k) <_ t( j. 1 pour. it often happens that t2 is much smaller than t I. one way of solving < j. on the other hand.. for example J = { { if. kn > with the same j. to know whether k E K is a program in language j. b. for example.Preconditioning and Precomputation 206 Chap.

1. If we expect to have several systems of equations to solve with the same matrix A but different vectors b.Sec. so that we can subsequently solve any particular instance in a time in 0(1). for example K = (Si. Example 7. If we solve each instance directly. a (j) E O(nj log nj) for the sort bi . and let K be the set of pairs < v . On the other hand. the ancestor of all the nodes of which its children are ancestors. . function}.5 suggests how to obtain an efficient greedy algorithm for making change. where nj is the number of elements in the set j. Example 7. w > of nodes. (k) E O( log nj ) for the search If there are many instances to be solved for the same j. where A is a non-singular square matrix and b is a column vector. any direct solution of this instance takes a time in S2(n) in the worst case. if we start by sorting j (this is the preconditioning). every node is its own ancestor and. jusqu'A.1 Preconditioning 207 Let K be a set of keywords. Why? It is.1.8.5. 7. we have t(j. k > by a binary search algorithm. and to multiply this inverse by each b. Example 7. (By definition.1. recursively.2 Ancestry in a rooted tree Let J be the set of all rooted trees. For a given pair k = < v. then we can subsequently solve < j. Problem 7. Creating an optimal search tree (Section 5.) If the tree j contains n nodes. possible to precondition the tree in a time in O(n).k)E8(nj) in the worst case. We are to solve the system of equations Ax = b . then it is probably worthwhile to calculate the inverse of A once and for all. 7. begin.1. Calculating the necessary values cj is an example of preconditioning that allows us subsequently to make change quickly every time this is required. Problem 5. then the second technique is clearly preferable.4. however.1.3.5) is a further example of preconditioning.1. w > and a given rooted tree j we want to know whether node v is an ancestor of node w in tree j. We have to solve a large number of instances of the type "Is the keyword kEK a member of the set j E J ? ".
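For the keyword example, the preconditioning is just a sort, and each subsequent query is a binary search. A small Python illustration (the particular keyword set shown is illustrative, and the helper names are mine):

```python
from bisect import bisect_left

def precondition_keywords(keyword_set):
    """The a(j) step: sort the set once, in O(n log n)."""
    return sorted(keyword_set)

def is_keyword(word, table):
    """The b_j(k) step: binary search, O(log n) comparisons per query."""
    pos = bisect_left(table, word)
    return pos < len(table) and table[pos] == word

table = precondition_keywords({"if", "then", "else", "endif", "for", "to", "by"})
print(is_keyword("then", table), is_keyword("downto", table))   # True False
```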

For a node v.1. we traverse it first in preorder and then in postorder (see Sec- tion 6.1 these two numbers appear to the left and the right of the node.2. In Figure 7.1.1. To precondition the tree. 7 We illustrate this approach using the tree in Figure 7. which visits first a node and then its subtrees from right to left. and then we number the node itself.1. the required condition can be checked in a time in 0(1). that traversal of a rooted tree in inverted preorder requires more work than traversal in postorder if the representation of rooted trees suggested in Figure 1. however. Problem 7. A rooted tree with preorder and postorder numberings.) 11 .2).208 Preconditioning and Precomputation Chap.9.5 is used. and let postnum [v] be the number assigned during the traversal in postorder. respectively. Once all the values of prenum and postnum have been calculated in a time in 0(n). (Notice. for example.1. It contains 13 nodes. Let v and w be two nodes in the tree. In postorder we first number the subtrees of a node from left to right. D 1 2 B 5 4 E 3 I 2 C 6 F 4 8 12 . In preorder we first number a node and then we number its subtrees from left to right. that this can be done using a traversal in preorder followed by a traversal in inverted preorder. Show.1. Thus prenum [v] <_ prenum [w] b v is an ancestor of w or v is to the left of w in the tree. It follows that prenum [v] prenum [w] and postnum [v] >_ postnum [w] t* v is an ancestor of w. let prenum [v] be the number assigned to the node when we traverse the tree in preorder. numbering the nodes sequentially as we visit them.G6 10 J 9 711 K H 8121 L 913 M 10 Figure 7. Thus postnum [v] >_ postnum [w] a v is an ancestor of w or v is to the right of w in the tree. There exist several similar ways of preconditioning a tree so as to be able thereafter to verify rapidly whether one node is an ancestor of another.
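The preconditioning and the constant-time ancestry test look like this in Python. The traversals number the nodes exactly as described; the small tree used at the end is an illustration of my own, not the 13-node tree of Figure 7.1.1.

```python
def number_nodes(tree, root):
    """tree maps each node to the list of its children, left to right.
    Returns the prenum and postnum tables, computed in time O(n)."""
    prenum, postnum = {}, {}
    pre_count = post_count = 0

    def visit(v):
        nonlocal pre_count, post_count
        pre_count += 1
        prenum[v] = pre_count                 # number the node, then its subtrees
        for child in tree.get(v, []):
            visit(child)
        post_count += 1
        postnum[v] = post_count               # number the subtrees, then the node
    visit(root)
    return prenum, postnum

def is_ancestor(v, w, prenum, postnum):
    """True iff v is an ancestor of w (every node is its own ancestor); O(1)."""
    return prenum[v] <= prenum[w] and postnum[v] >= postnum[w]

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
pre, post = number_nodes(tree, 'A')
print(is_ancestor('A', 'E', pre, post), is_ancestor('B', 'F', pre, post))   # True False
```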

but 10 additions. A naive method for evaluating this polynomial is to calculate first the series of values x2. We use the number of integer multiplications that have to be carried out as a barometer to measure the efficiency of an algorithm. we obtain successively q2 = -5. .1 209 7.1. q I = 4. we first express it in the form p(x) = (x(n+t)12+a)q(x) + r(x). For simplicity. and let K be the set of values this variable can take. In the preceding example we first express p (x) in the form (x4+a)(x3+82x2+qlx+qo)+(x3+r2x2+rlx+ro) . Example 7.13)x +3)x . Initially. Thus . (A typical 7th degree monic polynomial would require 5 multiplications as is the case here.. Finally. 1. qo=-13.1..I -1.Preconditioning Sec. if p (x) is a monic polynomial of degree n = 2k -1. we restrict ourselves to polynomials with integer coefficients.. 5x6 and finally p(x).a =2. where i is a power of 2. x7. from which we can obtain 5x. Consider the polynomial p(x)=x7-5x6+4x5. where a is a constant and q (x) and r (x) are monic polynomials of degree 2k .3 Repeated Evaluation of a Polynomial Let J be the set of polynomials in one variable x. . Better still. evaluated at integer values of x.rt=-3.10x2+5x . The problem consists of evaluating a given polynomial at a given point.r2=0.andro=9. .) In general. Equating the coefficients of x6.. 7. we can calculate p(x)=(x4+2)[(x2+3)(x-5)+(x+2)]+[(x2-4)x + (x +9)] using only 5 multiplications (of which two are to calculate x2 and x4) plus 9 additions.. lOx2.10)x +5)x -17 we need only 6 multiplications and 7 additions. If we evaluate p (x) as p (x) = ((((((x -5)x +4)x .I for some integer k ? 1. we restrict ourselves even further and consider only monic polynomials (the leading coefficient is 1) of degree n = 2k . taking no account of the size of the operands involved nor of the number of additions and subtractions. Next. p (x) is expressed entirely in terms of polynomials of the form x` + c.13x4+3x3. . It is easy to do better.17. .6. x5. we apply the same procedure recursively to q (x) and r (x). This method requires 12 multiplications and 7 additions (counting the subtractions as additions). x3. .
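The saving is easy to verify numerically. The sketch below evaluates p(x) both directly from its coefficients and in the preconditioned form given above (5 multiplications, counting the two needed for x² and x⁴), and checks that the two agree over a range of integer arguments.

```python
def p_direct(x):
    return x**7 - 5*x**6 + 4*x**5 - 13*x**4 + 3*x**3 - 10*x**2 + 5*x - 17

def p_preconditioned(x):
    x2 = x * x                           # 1st multiplication
    x4 = x2 * x2                         # 2nd
    q = (x2 + 3) * (x - 5) + (x + 2)     # 3rd
    r = (x2 - 4) * x + (x + 9)           # 4th
    return (x4 + 2) * q + r              # 5th

print(all(p_direct(x) == p_preconditioned(x) for x in range(-20, 21)))   # True
```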

Generalize this method of preconditioning to polynomials that Problem 7. are not monic.8. Problem 7.. Analysis of the method.7. (n -3)/2+Ig(n + 1) multiplications are sufficient to evaluate a preconditioned polynomial of degree n = 2k -1. Further generalize the . * Problem 7.1. x 4. This expression is the preconditioned form of the polynomial. there does not exist an algorithm that can calculate p (x) using less than n -1 multiplications in the worst case.210 Preconditioning and Precomputation Chap. we find x3-5x2+4x . M(k) = 2k ' -1 for k >.13=(x2+3)(x-5)+(x+2) x3 . with no risk of rounding error due to the use of floating-point arithmetic. and hence M(k)=2k -1+k-2.6.1. Let M(k) be the number of multiplications required to evaluate p (x). the time invested in preconditioning the polynomial allows us to evaluate it subsequently using essentially half the number of multiplications otherwise required. tion M(k) = .3.k + 1 be the number of multiplications required if we do not count those used in the calculation of x 2. Let M(k) = M(k) . 7 p(x) = (x4+2)(x3-5x2+4x -13) + (x3-3x +9).1. Problem 7. In other words.5. Problem 7. Problem 7. Your generalization must give an exact answer. (Continuation of Problem 7. Express p (x) ax7 in preconditioned form.4. Prove that if the monic polynomial p (x) is given by its coefficients. x (n + 1)/2 We obtain the recurrence equa- 0 k=1 2M(k -1) + 1 k>2. Similarly. In other words. .. Show that evaluation of a preconditioned polynomial of degree n = 2k .1. a preconditioned monic polynomial of degree n = 2k -1.1)/2 additions in the worst case.1. Express p(x)=x7+2x6-5x4+2x3-6x2+6x -32 in preconditioned form.1 requires (3n . Consequently.1.1.7) method to polynomials of any degree.1.6.3x + 9 = (x2-4)x + (x +9) to arrive finally at the expression for p (x) given in Example 7.1.

2 PRECOMPUTATION FOR STRING-SEARCHING PROBLEMS The following problem occurs frequently in the design of text-processing systems (editors.). macroprocessors. Is the method appropriate for polynomials involving real coefficients and real variables ? Justify your answer.Precomputation for String-Searching Problems Sec. Given a target string consisting of n characters. S. etc. Problem 7. we use the number of comparisons between pairs of characters as a barometer to measure the efficiency of our algorithms.) The total number of comparisons to be made is therefore in S1(m (n -m )).1. for example.9. . (Think of S = "aaa aab". P=p1p2 p. we want to know whether P is a substring of S. and if so.2. which is in Q(mn) if n is much larger than in. information retrieval systems... . Suppose without loss of generality that n >.true j *. P = "aaaab". and that the pattern P.O to n -m do ok F. and a pattern consisting of m characters. . whereabouts in S it occurs. Show using an explicit example that the method described here does not necessarily give an optimal solution (that is.m do ifp[j]#s[i+j] then ok <--false else j F j + 1 if ok then return i + 1 return 0 The algorithm tries to find the pattern P at every position in S. if the Si are the lines in a text file S and we are searching for the lines in the file that contain P. S = s is 2 s ... m). Can we do better ? 7. for i E.1 Signatures Suppose that the target string S can be decomposed in a natural way into substrings.1 while ok and j <. r is the smallest integer such that Sr+i . In the worst case it makes m comparisons at each position to see whether or not P occurs there. In the analyses that follow. 2.10. 7. must occur entirely within one of these substrings (thus we exclude the possibility that P might straddle several consecutive substrings). It returns r if the first occurrence of P in S begins at position r (that is. This situation occurs. it does not necessarily minimize the required number of multiplications) even in the case of monic polynomials of degree n = 2k -1. S = S 1S 2 .m.2 211 Problem 7. 7. . if it occurs in S. i = 1. The following naive algorithm springs immediately to mind. and it returns 0 if P is not a substring of S.1 =Pi .1.
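In Python the naive algorithm reads as follows (1-based result, as in the text; the function name is mine).

```python
def naive_search(P, S):
    """Position of the first occurrence of P in S, or 0 if P does not occur."""
    n, m = len(S), len(P)
    for i in range(n - m + 1):
        j = 0
        while j < m and S[i + j] == P[j]:   # up to m comparisons per position
            j += 1
        if j == m:
            return i + 1
    return 0

print(naive_search("aab", "aaa aab"))   # 5
```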

c. Si) is false. val ("z") = 25. x. Suppose that the character set used for the strings S and P is { a. it is possible that P might be a substring of Si . Si) is true. This is yet another example of preconditioning. This gives us the function T we need : T (P . m = 5). cr as a 32-bit word where M.2. If T (P . B ("o". using the naive algorithm given earlier). Here is one common way of defining a signature. if T (P.2. r") = B ("r". . T can be computed very rapidly once all the signatures have been calculated.. Calculating the signatures for S takes a time in O (n). 7 The basic idea is to use a Boolean function T (P. If signatures are calculated as described. and if the characters a. cr) are set to 1 and the other bits are 0. what is the probability that the signature of a random string of n characters contains a 1 in all the bit positions that contain a 1 in the signature of another random string of m characters ? Calculate the numerical value of this probability for some plausible values of m and n (for instance. where we have lumped all the non-alphabetic characters together. then P cannot be a substring of Si . however. B (c 1.1. s") = 29. but we have to carry out a detailed check to verify this (for instance. val ("b") = 1.. other) ...Preconditioning and Precomputation 212 Chap. n = 40. Define the signature sig (C) of a string C = c 1c2 the bits numbered B (c 1. Example 7. B (c 2. Define val ("a") = 0. * Problem 7. Si) = [(sig (P) and sig (Si )) = sig (P )J. B (cr_1. c 2). If Si contains the pattern P. If c1 and c2 are characters. o") = 27 x 2 + 14 mod 32 = 4. If the bits of a word are numbered from 0 (on the left) to 31 (on the right). . b. z. Suppose too that we are working on a computer with 32-bit words. . then all the bits that are set to 1 in the signature of P are also set to 1 in the signature of Si . s") = 27 x 17 + 18 mod 32 = 29. . y. If C is the string "computers". define .. B ("r". We calculate a signature for each substring Si and for the pattern P. . c 3). but from then on we hope that the preliminary test will allow us to speed up the search for P. m") = 27 x 14 + 12 mod 32 = 6. where the and operator represents the bitwise conjunction of two whole words. .. b. . c 2) = (27 val (c 1) + val (c 2)) mod 32. i... Only seven bits are set to 1 in the signature because B ("e". the signature of this string is the word 0000 1110 0100 0001 0001 0000 0000 0100 . ii.. Signatures offer a simple way of implementing such a function.. we calculate B ("c". For each pattern P we are given we need a further time in O (m) to calculate its signature. . z and other are equiprobable. val (other) = 26. . The improvement actually obtained in practice depends on the judicious choice of a method for calculating signatures. Si) that can be calculated rapidly to make a preliminary test.1. .
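The whole signature scheme fits in a few lines of Python. The bit-numbering convention of the text (bit 0 is the leftmost bit of the 32-bit word) is respected, so the signature computed for "computers" prints exactly as the word shown above; the helper names are mine.

```python
def val(c):
    """0..25 for 'a'..'z'; every other character counts as 26 ('other')."""
    return ord(c) - ord('a') if 'a' <= c <= 'z' else 26

def B(c1, c2):
    return (27 * val(c1) + val(c2)) % 32

def sig(C):
    """32-bit signature: bit B(c_i, c_{i+1}) is set for every pair of
    consecutive characters of C."""
    s = 0
    for c1, c2 in zip(C, C[1:]):
        s |= 1 << (31 - B(c1, c2))       # bit 0 is the leftmost bit
    return s

def maybe_contains(P, Si):
    """The preliminary test T(P, Si): False means P certainly does not occur
    in Si; True means a full check is still required."""
    return (sig(P) & sig(Si)) == sig(P)

print(format(sig("computers"), '032b'))      # 00001110010000010001000000000100
print(maybe_contains("put", "computers"))    # True  (must still be verified)
print(maybe_contains("cat", "computers"))    # False (no need to look further)
```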

4. In the preceding example the function B takes two consecutive characters of the string as parameters.2 The Knuth-Morris-Pratt Algorithm We confine ourselves to giving an informal description of this algorithm (henceforth : the KMP algorithm). Example 7. If the character set contains the 128 characters of the ASCII code. we might define B by B (c 1. and so on.2. which finds the occurrences of P in S in a time in 0 (n).2. Is the method illustrated of interest if the target string is very long and if it cannot be divided into substrings ? Problem 7.2 213 Problem 7.Precomputation for String-Searching Problems Sec. and if the computer in use has 32-bit words. 7.3. what do you suggest instead ? 7.2. is it useful ? Problem 7.2. To find P in S we slide P along S from left to right.2. Si) is true with probability E > 0 even if Si does not contain P. The number of bits in the signature can also be changed. Is this to be recommended ? If not. Initially.2. If T (P. In this case there is only one comparison.2. looking at the characters that are opposite one another. Problem 7. After this failure we try S P babcbabcabcaabcabcabcacabc abcabcacab TTTT . It is easy to invent such functions based on three consecutive characters. what is the order of the number of operations required in the worst case to find P in S or to confirm that it is absent ? Many variations on this theme are possible. The arrows show the comparisons carried out before we find a character that does not match.5. Can we define a function B based on a single character? If this is possible. Let S = "babcbabcabcaabcabcabcacabc" and P = "abcabcacab". c 2) = (128 val (c 1) + val (c 2)) mod 32.2. we try the following configuration : S P babcbabcabcaabcabcabcacabc abcabcacab T We check the characters of P from left to right.

This time. Sliding P one or two places along cannot be right . sliding P four places along might work. where x is not a "b". Up to now. we have proceeded exactly as in the naive algorithm. We must instead line up P with the first character of S what . two.Preconditioning and Precomputation 214 Chap. but the fourth does not match. S P babcbabcabcaabcabcabcacabc abcabcacab T There is no need to recheck the first four characters of P : we chose the movement of P in such a way as to ensure that they necessarily match. we know that the last eight characters examined in S are abcabcax where x # "c". Without making any more comparisons with S. m ]. or three characters along : such an alignment cannot be correct. However we now know that the last four characters examined in S are abcx where x #"a". It suffices to start checking at the current position of the pointer.. To implement this algorithm. This array tells us to do when a mismatch occurs at position j in the pattern. If next [ j ] = 0. (A three-place movement is not enough: we know that the last characters examined in S are ax. we can conclude that it is useless to slide P one.) S P babcbabcabcaabcabcabcacabc abcabcacab TTTTTTTT Yet again we have a mismatch. and this time a three-place movement is necessary. it is useless to compare further characters of the pattern to the target string at the current position. S P babcbabcabcaabcabcabcacabc abcabcacab TTTTTT We complete the verification starting at the current position time the correspondence between the target string and the pattern is complete. In this case we have a second mismatch in the same position. So let us try sliding P four characters along. S P babcbabcabcaabcabcabcacabc abcabcacab TTTTTTTT Following this mismatch. we need an array next[ I. 7 This time the first three characters of P are the same as the characters opposite them in S. however moving it three places might work.

In all cases.2. it is correct to talk of precomputation. or the variable k in the algorithm) or the pattern P.2. The time required by the algorithm is therefore in O (n). including the search for a single pattern in a single target. Find a way to compute the array next [ 1 .8.2 Precomputation for String-Searching Problems 215 that has not yet been examined and start checking again at the beginning of P. or else 0 if P is not a substring of S.Sec.6. If next [ j ] = i > 0..2. In both cases we slide P along j . which does happen in some applications. In the preceding example we have j 1 p[j] 2 3 4 5 8 9 10 b c a b 6 c 7 a a c a b next[ j] 0 1 1 0 1 1 0 5 0 1 Once this array has been calculated. Precomputation of the array next [ 1 . in ] can be carried out in a time in 0(m). Modify the KMP algorithm so that it finds all the occurrences of P in Sin a total time in 0 (n). which can be neglected since m <_ n. here is the algorithm for finding P in S. m ] in a time in Problem 7.next [ j ] characters to the right with respect to S. the execution time is thus in O (n).m else return 0 It returns either the position of the first occurrence of P in S. Problem 7.7.next[j] k-k+1 j-j+1 if j >m then return k . 0 (M). After each comparison of two characters. * Problem 7. we move either the pointer (the arrow in the diagrams.1 while j <_ m and k 5 n do while j > O ands [k] # p [j ] do j'.2. Overall. On the other hand. 7. Follow the execution of this algorithm step by step using the strings from Example 7.k <. It is correct to talk of preconditioning in this case only if the same pattern is sought in several distinct target strings.2. The pointer and P can each be moved a maximum of n times. function KMP j. we should align the i th character of P on the current character of S and start checking again at this position.. . preconditioning does not apply if several distinct patterns are sought in a given target string.
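Here is a Python rendering of the KMP search loop above, together with one way of building the array next in a time in O(m); the text leaves that construction to Problem 7.2.7, so compute_next below is my own sketch, and it reproduces the table given for the pattern of the example. The strings are padded with a dummy character so that indices run from 1, as in the book.

```python
def compute_next(P):
    """next[j], 1 <= j <= m, as defined in the text; next[j] = 0 means
    restart the comparison at the beginning of the pattern."""
    m = len(P)
    p = ' ' + P                              # p[1..m]
    pi = [0] * (m + 1)                       # pi[j]: longest proper border of p[1..j]
    for j in range(2, m + 1):
        k = pi[j - 1]
        while k > 0 and p[k + 1] != p[j]:
            k = pi[k]
        if p[k + 1] == p[j]:
            k += 1
        pi[j] = k
    nxt = [0] * (m + 1)
    for j in range(2, m + 1):
        i = pi[j - 1] + 1
        nxt[j] = nxt[i] if p[i] == p[j] else i   # skip alignments that would fail again
    return nxt

def kmp(P, S):
    """Position (1-based) of the first occurrence of P in S, or 0."""
    m, n = len(P), len(S)
    p, s = ' ' + P, ' ' + S
    nxt = compute_next(P)
    j = k = 1
    while j <= m and k <= n:
        while j > 0 and s[k] != p[j]:
            j = nxt[j]
        k += 1
        j += 1
    return k - m if j > m else 0

P, S = "abcabcacab", "babcbabcabcaabcabcabcacabc"
print(compute_next(P)[1:])   # [0, 1, 1, 0, 1, 1, 0, 5, 0, 1]
print(kmp(P, S))             # 16
```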

we slide P along S from left to right. we slide the pattern just to the right of the arrow. Example 7. increases. S P This is a delicate topic cat T Again we examine P from right to left. We use two rules to decide how far we should move P after a mismatch. Let S = "This is a delicate topic" and P = "cat". let c be the character opposite p [m]. If c does not appear in the pattern. If a certain number of characters at the end of P correspond to the characters in S. the BM algorithm tends to become more efficient as m. the characters of P are checked from right to left after each movement of the pattern. There is an immediate mismatch in the position shown by the arrow. we try .3. As with the KMP algorithm. and again there is an immediate mismatch. then we use this partial knowledge of S (just as in the KMP algorithm) to slide P along to a new position compatible with the information we possess. and the number of comparisons carried out can be less than n. Since "i" does not appear in the pattern. 7 7.2. since the KMP algorithm examines every character of the string S at least once in the case when P is absent. checking corresponding characters. In the best case the BM algorithm finds all the occurrences of P in S in a time in O (m + n /m). however. We know that c #p [m].216 Preconditioning and Precomputation Chap. However. If we have a mismatch immediately after moving P. the algorithm due to Boyer and Moore (henceforth : the BM algorithm) finds the occurrences of P in S in a time in O (n) in the worst case. we slide the latter along in such a way as to align the last occurrence of c in the pattern with the character c in the target string. is often sublinear : it does not necessarily examine every character of S. we align the latter just after the occurrence of c in the target string.2. The target string and the pattern are initially aligned as follows : S This is a delicate topic P cat T We examine P from right to left. The character opposite p [m] is "i". The BM algorithm. Since the pattern does not include this character. the number of characters in the pattern P.3 The Boyer-Moore Algorithm Like the KMP algorithm. ii. If c appears elsewhere in the pattern. This time. Furthermore. i. on the other hand. it makes at least n comparisons.

The left-hand arrow shows the position of the first mismatch. will confirm this. We slide P along to align the "c" found in S with the last "c" in P. Example 7.Precomputation for String-Searching Problems Sec. 7. when we slide P along one position to align the "a" in the target string with the "a" in the pattern. We have made only 9 comparisons between a character of P and a character of S. In our example when we start over checking P from right to left. S This is a delicate topic P cat T After two more immediate mismatches we are in this situation. this information is not contradicted.2. We know that starting at this position S contains the characters xcab where x #"a".4. S This is a delicate topic P cat T Now. always from right to left. S P Consider the same strings as in Example 7. there is an immediate mismatch. and start checking again (at the right-hand end of P). In this example we have found P without ever using rule (ii). A final check.) S P babcbabcabcaabcabcabcacabc abcabcacab T Unlike the KMP algorithm. we check all the positions of P after moving the pattern. If we slide P five places right. We slide P one place along to align the occurrences of the letter "a". (Underscores show which characters were aligned. Some unnecessary checks (corresponding to the underscored characters in P) may be made at times.2. P is correctly aligned. . but this time the character "a" that appears opposite p[m] also appears in P.2 S 217 This is a delicate topic P cat T There is once again an immediate mismatch.2: babcbabcabcaabcabcabcacabc abcabcacab TTTT We examine P from right to left.

Since we always examine the characters of P from right to left. We shall not give the details here.if c does not appear in p [ 1 . we need two arrays d1 [ { character set } ] and d2[1 . but only an example.218 Preconditioning and Precomputation S P Chap.. Example 7. The interpretation of d2 is the following : after a mismatch at position i of the pattern. is easy to compute. we again have a mismatch. the former to implement rule (i) and the latter for rule (ii). The array d 1. To implement the algorithm. at the right-hand end) of the pattern and d2[i] characters further along S. m -1]. m ] then m elsem -max{i I p[i]=c] .. where x # "e" : . Suppose the pattern is P = "assesses". A second application of rule (ii) gives us S P babcbabcabcaabcabcabcacabc abcabcacab T We apply rule (i) once to align the letters "a": S P babcbabcabcaabcabcabcacabc abcabcacab T and one last time to align the letters "c": S P babcbabcabcaabcabcabcacabc abcabcacab TTTTTTTTTT We have made 21 comparisons in all to find P. indexed by the character set we are using. Suppose further that at some moment during our search for the string P in S we have a mismatch in position p [7]. This is the distance to move P according to rule (i) when we have an immediate mismatch.2. carried out as usual from right to left. For every character c dl[c] F. begin checking again at position m (that is. we know that starting at the position of the mismatch the characters of S are xs.5. It is more complicated to compute d2. 7 babcbabcabcaabcabcabcacabc abcabcacab TTTT After four comparisons between P and S (of which one is unnecessary).
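The array d1 of rule (i) is a one-pass computation: for a pattern of length m it maps every character of P to m minus the position of its last occurrence, and every other character to m. A small check on the pattern "assesses" (the function name is mine):

```python
def compute_d1(P):
    """d1[c] = m - (last position of c in P); look up absent characters as m."""
    m = len(P)
    d1 = {}
    for i, c in enumerate(P, start=1):   # later occurrences overwrite earlier ones
        d1[c] = m - i
    return d1

P = "assesses"
d1, m = compute_d1(P), len(P)
print(d1.get("s", m), d1.get("e", m), d1.get("a", m), d1.get("t", m))   # 0 1 7 8
```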

Starting from the position of the mismatch. that is. the characters of S are xsses. It is therefore impossible to align P under these characters. As a third instance. suppose now that we have a mismatch at position p [6]. 3 characters further on in S than the previous comparison: thus d2[7] = 3. 10 characters further on in S than the previous comparison: thus d2[6] = 10. where x # "e": S P ???xsses???????? assesses TTTTT x "e" In this case it may be possible to align P with S by sliding it three places right : S P ???xsses???????? assesses T xe . Starting from the position of the mismatch.2 S P ??????xs???????? assesses 219 x "e" TT The fact that x * "e" does not rule out the possibility of aligning the "s" in p [6] with the "s" we have found in S. Similarly. and we must slide P all the way to the right under the characters of S that we have not yet examined : S P ?????xes???????? assesses X "s T We start checking again at the end of P. It may therefore be possible to align P as follows : S P ??????xs???????? assesses X #"e T We start checking again at the end of P. 7.Precomputation for String-Searching Problems Sec. the characters of S are xes. that is. suppose we have a mismatch at position p [4]. where x # "s" : S P ?????xes???????? assesses TTT X "s" The fact that x # "s" rules out the possibility of aligning the "e" and the "s" in p [4] and p [5] with the "e" and the "s" found in S.

2. Here finally is the BM algorithm. Problem 7. 7 Now we start checking at the end of P. or else 0 if P is not a substring of S. even if j < m . so k is increased by d2[71 = 3 to obtain .k F. Continuing from Example 7.2.m while k <.10. Calculate d 1 and d2 for the pattern in Example 7. d 1 ["a"] = 7 and d 1 [any other character ] = 8.n and j > 0 do while j > 0 and s [k] = p [j ] do k+.9. In this algorithm.2.k-1 j *.k + d 1[s [k]] elsek k+d2[j] jFm if j = 0 then return k + 1 else return 0 It returns either the position of the first occurrence of P in S. d 1 ["e"] =1. Problem 7.2. However. Calculate d 1 and d2 for P = "abracadabraaa". Note that d 1 ["s"] has no significance.j . For this example we find i 1 2 3 4 5 6 7 8 p[i] a s s e s s e s d2[i] 15 14 13 7 11 10 3 We also have d 1 ["s"] = 0. the choice between using rule (i) and rule Problem 7.4. consider the following situation : S P ??????ts???????? assesses TT The failure of the match between "t" and "e" is of the second kind. 7 characters further on in S than the previous comparison. so d2[4] = 7.2.Preconditioning and Precomputation 220 Chap.11. it is possible that rule (i) might allow k to advance more than rule (ii).1 if j # 0 then if j = m then k <-. (ii) depends on the test "j = m ? ". function BM j.5. because an immediate mismatch is impossible at a position where S contains "s".
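Below is a self-contained Python rendering of the BM function above, preceded by a straightforward (not linear-time) computation of d2 taken directly from its definition; the text deliberately omits that computation and Problem 7.2.14 asks for an O(m) method, so compute_d2 here is my own sketch. It reproduces the d2 table given above for "assesses".

```python
def compute_d2(P):
    """d2[i]: how far to advance the pointer into S after a mismatch at p[i],
    restarting the comparison at the right-hand end of the pattern."""
    m = len(P)
    p = ' ' + P                                       # 1-indexed, as in the text
    d2 = [0] * (m + 1)
    for i in range(1, m):
        for slide in range(1, m + 1):                 # smallest consistent slide of P
            suffix_ok = all(q - slide < 1 or p[q - slide] == p[q]
                            for q in range(i + 1, m + 1))
            mismatch_ok = i - slide < 1 or p[i - slide] != p[i]
            if suffix_ok and mismatch_ok:
                d2[i] = (m - i) + slide
                break
    return d2

def boyer_moore(P, S):
    """Position (1-based) of the first occurrence of P in S, or 0."""
    m, n = len(P), len(S)
    p, s = ' ' + P, ' ' + S
    d1 = {}
    for i, c in enumerate(P, start=1):                # rule (i), as sketched earlier
        d1[c] = m - i
    d2 = compute_d2(P)
    j = k = m
    while k <= n and j > 0:
        while j > 0 and s[k] == p[j]:
            k -= 1
            j -= 1
        if j != 0:
            if j == m:
                k += d1.get(s[k], m)                  # immediate mismatch: rule (i)
            else:
                k += d2[j]                            # partial match, then mismatch: rule (ii)
            j = m
    return k + 1 if j == 0 else 0

print(compute_d2("assesses")[1:8])   # [15, 14, 13, 7, 11, 10, 3]
print(boyer_moore("cat", "This is a delicate topic"))                              # 15
print(boyer_moore("assesses", "I guess you possess a dress fit for a princess"))   # 0
```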

S    ??????ts????????
P      assesses
              T

However, the fact that "t" does not appear in P should have allowed us to increase k directly by d1["t"] = 8 positions.

Problem 7.2.11. Show that the algorithm is still correct if we replace

    if j = m then k ← k + d1[s[k]]
             else k ← k + d2[j]

by

    k ← k + max(d1[s[k]], d2[j])

provided we define d2[m] = 1 and d1[p[m]] = 0. This modification corresponds to the algorithm usually known by the name Boyer-Moore (although these authors also suggest other improvements).

Problem 7.2.12. Show the progress of the algorithm if we search (unsuccessfully, of course) for the pattern P = "assesses" in S = "I guess you possess a dress fit for a princess". How many comparisons are made altogether before the failure to match is discovered, and how many of these comparisons are redundant (that is, they repeat comparisons previously made)?

* Problem 7.2.13. Prove that the total execution time of the algorithm (computation of d1 and d2 and search for P) is in O(n).

Problem 7.2.14. Modify the BM algorithm so that it will find all the occurrences of P in S in a time in O(n).

** Problem 7.2.15. Find a way to calculate the array d2 in a time in O(m).

It is easy to see intuitively why the algorithm is often more efficient for longer patterns. For a character set of reasonable size (say, 52 letters if we count upper- and lowercase separately, ten figures and about a dozen other characters) and a pattern that is not too long, d1[c] is equal to m for most characters c. Thus we look at approximately one character out of every m in the target string. As long as m stays small

compared to the size of the character set, the number of characters examined goes down as m goes up. Boyer and Moore give some empirical results: if the target string S is a text in English, about 20% of the characters are examined when m = 6, whereas only 15% of the characters in S are examined when m = 12.

7.3 REFERENCES AND FURTHER READING

Preconditioning polynomials for repeated evaluation is suggested in Belaga (1961). Signatures are discussed in Harrison (1971). The KMP and BM algorithms of Sections 7.2.2 and 7.2.3 come from Knuth, Morris, and Pratt (1977) and Boyer and Moore (1977). Rytter (1980) corrects the algorithm given in Knuth, Morris, and Pratt (1977) for calculating the array d2 to be used in the Boyer-Moore algorithm. Finite automata, as described for instance in Hopcroft and Ullman (1979), can be used to introduce the KMP algorithm in an intuitively appealing way; see, for example, Baase (1978). For an efficient algorithm capable of finding all the occurrences of a finite set of patterns in a target string, consult Aho and Corasick (1975). For a probabilistic string-searching algorithm (see Chapter 8), read Karp and Rabin (1987).

8 Probabilistic Algorithms

8.1 INTRODUCTION

Imagine that you are the hero (or the heroine) of a fairy tale. A treasure is hidden at a place described by a map that you cannot quite decipher. You have managed to reduce the search to two possible hiding-places, which are, however, a considerable distance apart. If you were at one or the other of these two places, you would immediately know whether it was the right one. It takes five days to get to either of the possible hiding-places, or to travel from one of them to the other. The problem is complicated by the fact that a dragon visits the treasure every night and carries part of it away to an inaccessible den in the mountains. You estimate that it will take four more days' computation to solve the mystery of the map and thus to know with certainty where the treasure is hidden, but if you set out on a journey you will no longer have access to your computer. An elf offers to show you how to decipher the map if you pay him the equivalent of the treasure that the dragon can carry away in three nights.

Problem 8.1.1. Leaving out of consideration the possible risks and costs of setting off on a treasure-hunting expedition, should you accept the elf's offer?

Obviously it is preferable to give three nights' worth of treasure to the elf rather than allow the dragon four extra nights of plunder. If you are willing to take a calculated risk, however, you can do better. Suppose that x is the value of the treasure remaining today, and that y is the value of the treasure carried off every night by the dragon. Suppose further that x > 9y. Remembering that it will take you five days to reach the hiding-place, you can expect to come home with x - 9y if you wait four days to finish deciphering the map. If you accept the elf's offer, you can set out

immediately and bring back x - 5y, of which 3y will go to pay the elf; you will thus have x - 8y left. A better strategy is to toss a coin to decide which possible hiding-place to visit first, journeying on to the other if you find you have decided wrong. This gives you one chance out of two of coming home with x - 5y, and one chance out of two of coming home with x - 10y. Your expected profit is therefore x - 7.5y. This is like buying a ticket for a lottery that has a positive expected return.

This fable can be translated into the context of algorithmics as follows: when an algorithm is confronted by a choice, it is sometimes preferable to choose a course of action at random, rather than to spend time working out which alternative is the best. Such a situation arises when the time required to determine the optimal choice is prohibitive, compared to the time that will be saved on the average by making this optimal choice. Clearly, the probabilistic algorithm can only be more efficient with respect to its expected execution time: it is always possible that bad luck will force the algorithm to explore many unfruitful possibilities.

We make an important distinction between the words "average" and "expected". The average execution time of a deterministic algorithm was discussed in Section 1.4. It refers to the average time taken by the algorithm when each possible instance of a given size is considered equally likely. By contrast, the expected execution time of a probabilistic algorithm is defined on each individual instance: it refers to the mean time that it would take to solve the same instance over and over again. This makes it meaningful to talk about the average expected time and the worst-case expected time of a probabilistic algorithm. The latter, for instance, refers to the expected time taken by the worst possible instance of a given size, not the time incurred if the worst possible probabilistic choices are unfortunately taken.

Example 8.1.1. Section 4.6 describes an algorithm that can find the kth smallest of an array of n elements in linear time in the worst case. Recall that this algorithm begins by partitioning the elements of the array on either side of a pivot, and that it then calls itself recursively on the appropriate section of the array if need be. One fundamental principle of the divide-and-conquer technique suggests that the nearer the pivot is to the median of the elements, the more efficient the algorithm will be. On the other hand, there is no question of choosing the exact median as the pivot because this would cause an infinite recursion (see Problem 4.6.3). Thus we choose a suboptimal so-called pseudomedian. This avoids the infinite recursion, but choosing the pseudomedian still takes quite some time. On the other hand, we saw another algorithm that is much faster on the average, but at the price of a quadratic worst case: it simply decides to use the first element of the array as the pivot. We shall see in Section 8.4.1 that choosing the pivot randomly gives a substantial improvement in the expected execution time as compared to the algorithm using the pseudomedian, without making the algorithm catastrophically bad for the worst-case instances.

We once asked the students in an algorithmics course to implement the selection algorithm of their choice. The only algorithms they had seen were those in Sec-

This idea allowed them to beat their colleagues hands down : their programs took an average of 300 milliseconds to solve the trial instance. we can improve the backtracking technique by placing the first few queens at random. a < b . To generate random integers. Similarly. and independently among the elements of X. where i and j are integers. If we are content with finding one solution rather than all of them. and successive calls on the generator yield independent values of x.. Show how the effect of uniform (i . uniform (X). Section 8.5. The analysis of probabilistic algorithms is often complex. b) is available. and number theory beyond the scope of this book. Throughout this chapter we suppose that we have available a random number generator that can be called at unit cost. Section 6.2 describes an efficient probabilistic algorithm to solve this problem provided that one is willing to accept an arbitrarily small probability of error. we extend the notation to include uniform (i . thought of using a probabilistic approach.6. j) can be obtained if only uniform (a. For another example of the same phenomenon consider the problem : "Find a nontrivial factor of a given composite integer.. j ). returns an element chosen randomly. and independently in the interval i 5 k S j. 8. requiring an acquaintance with results in probability. Three students. statistics. whereas the majority of the deterministic algorithms took between 1500 and 2600 milliseconds. where X is a nonempty finite set. Using the same probabilistic algorithm on the same instance. i <_ j.1 goes into this more thoroughly. . but in this case the choice of algorithm determines uniquely which solution will be obtained whenever the algorithm is applied on any given instance.1 Introduction 225 tion 4.1.8). a number of results are cited without proof in the following sections. Let a and b.6.2 raises an important consideration concerning probabilistic algorithms.1. For this reason.6.1 describes a systematic way of exploring an implicit tree to solve the eight queens problem. They are sometimes used to solve problems that allow several correct solutions. such problems can also be handled by deterministic algorithms.2. time whether a given integer with several hundred decimal digits is prime or composite. be real numbers. No known deterministic algorithm can decide in a reasonable Example 8. consult the references suggested in the last section. and the function returns an integer k chosen randomly. none of them took the risk of using a deterministic algorithm with a quadratic worst case.2. Nevertheless. however. For more details. Since the students did not know which instances would be used to test their programs (and suspecting the worst of their professors). Example 8.1.1. Section 8. The distribution of x is Iniform on the interval. b) returns a real number x chosen randomly in the interval a <_ x < b .Sec. uniformly. we may obtain different correct solutions on different occasions." Of course. Problem 8.3. uniformly. A call on uniform (a. Example 8. This problem has important applications in cryptology (Section 4.

Using a good pseudorandom generator.Probabilistic Algorithms 226 Chap. the period can be made very long. independent choice of an element of X. j. and the sequence may be for most practical purposes statistically indistinguishable from a truly random sequence of elements of Y. However. Finally. so to obtain different sequences. 8 Example 8. but a simple example will illustrate the general idea. 8. uniform. Y2. We shall not worry about the fact that such a concept conflicts with the definition of "algorithm" given at the beginning of the first chapter.8 } Problem 8. It is thus the cardinality of X = { a i mod p I j > 1 } . may vary considerably . This sequence is necessarily periodic.. The index of a modulo p is the smallest strictly positive integer i such that a i = 1 (mod p ). To start a sequence. Let p be a prime number. Give other examples of sets in which there is an efficient way to choose an element randomly. an index modulo p always divides p -1 exactly. the impractical hypothesis that a genuinely random generator is available is crucial when we carry out the analysis. and that of 5 is 3. p ) j F.1. Most generators are based on a pair of functions S : X -+ X and R : X -> Y.p-1) return dexpoiter (a . the theoretical results obtained in this chapter concerning the efficiency of different algorithms can generally be expected to hold.uniform(1. and let a be an integer such that 1 5 a < p . function draw (a . where X is a sufficiently large set and Y is the domain of pseudorandom values to be generated. uniformly. This suggests one way of making a random. By Fermat's theorem. The execution time. However.1) for i > 0. the function R allows us to obtain the pseudorandom sequence YO. Suggestions for further reading are given at the end of the chapter. the index of 2 modulo 31 is 5. a seed that depends on the date or time.. The theory of pseudorandom generators is complex. with a period that cannot exceed #X..2 CLASSIFICATION OF PROBABILISTIC ALGORITHMS By definition. that of 3 is 30. defined by yi = R (xi ). and even the result obtained. . Most programming languages include such a generator. this seed defines a sequence : x0 = g and xi = S (xi . The same seed always gives rise to the same sequence. if S and R (and sometimes g) are chosen properly. y 1 . p) { Section 4. we must supply an initial value called a seed. Most of the time pseudorandom generators are used instead : these are deterministic procedures that are able to generate long sequences of values that appear to have the properties of a random sequence.3. i >_ 0. The fundamental characteristic of these algorithms is that they may react differently if they are applied twice to the same instance. we may choose. Let g r= X be a seed. Using the function S. and independently. Truly random generators are not usually available in practice.1.4. For example. although some implementations should be used with caution. a probabilistic algorithm leaves some of its decisions to chance. for example.
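To make the pair of functions S and R concrete, here is a minimal sketch in Python using a linear congruential rule; the constants and the interface are illustrative choices of ours and are not taken from the text.

    class PseudoRandom:
        """Pseudorandom generator built from a pair (S, R) and a seed."""

        def __init__(self, seed):
            self.x = seed                       # current element of X

        def _S(self, x):                        # S : X -> X (state transition)
            return (25214903917 * x + 11) % (2 ** 48)

        def _R(self, x):                        # R : X -> Y, here Y = [0, 1)
            return x / 2 ** 48

        def uniform(self, a, b):
            """Pseudorandom real in [a, b)."""
            self.x = self._S(self.x)
            return a + (b - a) * self._R(self.x)

        def uniform_int(self, i, j):
            """Pseudorandom integer between i and j inclusive."""
            return i + int(self.uniform(0, 1) * (j - i + 1))

    gen = PseudoRandom(seed=20240101)           # the same seed always gives the same sequence
    print([gen.uniform_int(1, 6) for _ in range(5)])

Calling the constructor with a seed derived from the date or time gives different sequences from one run to the next, as the text suggests.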

It is not a case of preventing the occasional occurrence of the algorithm's worst-case behaviour. 8. this difference between good and bad instances. but the answer is not necessarily right. computation of an exact solution is not possible even in principle. Las Vegas algorithms never return an incorrect answer. but sometimes they do not find an answer at all. but rather of breaking the link between the occurrence of such behaviour and the particular . the probability of success (that is. a precise answer exists but it would take too long to figure it out exactly. the probability of success increases as the time available to the algorithm goes up. The principal disadvantage of such algorithms is that it is not in general possible to decide efficiently whether or not the answer given is correct. are used when there is no question of accepting an approximate answer. on the other hand. As with Monte Carlo algorithms. of getting a correct answer) increases as the time available to the algorithm goes up. Randomness was first used in algorithmics for the approximate solution of numerical problems. but catastrophic for a few instances. such as the simplex algorithm for linear programming. Probabilistic algorithms can be divided into four major classes : numerical. They are used when some known deterministic algorithm to solve a particular problem runs much faster on the average than in the worst case. Incorporating an element of randomness allows a Sherwood algorithm to reduce. Las Vegas. since only two answers are possible. In the case of a decision problem. Similarly. or maybe because a digital computer can only handle binary or decimal values while the answer to be computed is irrational. and the answer is always correct.) For certain real-life prob- lems. and in particular for those we call "numerical". and sometimes even to eliminate. (The error is usually inversely proportional to the square root of the amount of work performed. Simulation can be used. A way to put down seven queens on the chess-board is little help in solving the eight queens problem.Sec. Monte Carlo algorithms. perhaps because of uncertainties in the experimental data to be used. These algorithms should not be confused with those. Some authors use the term "Monte Carlo" for any probabilistic algorithm. for example.2 Classification of Probabilistic Algorithms 227 from one use to the next. However. Finally. Monte Carlo. and Sherwood. that are extremely efficient for the great majority of instances to be handled. it is of little interest to know that such-and-such a value is "almost a factor". The answer obtained by such a probabilistic algorithm is always approximate. and only an exact solution will do. if we are trying to factorize an integer. For other problems. A Monte Carlo algorithm always gives an answer. Sherwood algorithms always give an answer. but its expected precision improves as the time available to the algorithm increases. it is hard to see what an "approximation" might be. Thus a certain doubt will always exist. Whatever the instance to be salved. any answer that is obtained is necessarily correct. for example. Sometimes the answer is given in the form of a confidence interval. to estimate the mean length of a queue in a system so complex that it is impossible to get closed-form solutions or to get numerical answers by deterministic methods. the probability of failure can be made arbitrarily small by repeating the same algorithm enough times on this instance.

3.1.4).1. each toothpick has one chance in it of falling across a crack. If you know that there were 355 toothpicks in the box. a Sherwood algorithm is less vulnerable to an unexpected probability distribution of the instances that some particular application might give it to solve (see the end of Section 1. and this uncertainty is typical of probabilistic algorithms. Why 113 ? Prove it. Show that the problem of finding a nontrivial factor of a composite integer (Section 8. the average number of toothpicks expected to fall across a crack can be calculated : it is almost exactly 113. A problem is well-characterized if it is always possible to Problem 8.3. each one independently of all the others.2. Since it reacts more uniformly than the deterministic algorithm. how many toothpicks will fall across a crack between two planks ? Clearly any answer between 0 and 355 is possible. 8. However.5. verify efficiently the correctness of a proposed solution for any given instance. show how to obtain a Monte Carlo algorithm for any problem whatsoever given that you already have a Las Vegas algorithm for the same problem. Needless to say.3.2. this method is not used in practice since better methods of calculating the decimal expansion of it are known.Probabilistic Algorithms 228 Chap. and that each one is exactly half as long as the planks in the floor are wide (we realize that this gets unlikelier every minute ! ). Why Buffon ? In fact. 8 instance to be solved. Problem 8. Show how to obtain a Las Vegas algorithm to solve a wellcharacterized problem given that you already have a Monte Carlo algorithm for the same problem. This suggests a probabilistic "algorithm" for estimating the value of it by spilling a sufficiently large number of toothpicks onto the floor. do you think the problem of finding the smallest nontrivial factor of a composite integer is well-characterized? Problem 8.3) is well-characterized. Problem 8. You should realize this in no way implies that the problem is easy to solve. .2. as Georges Louis Leclerc showed. do you think ? 8.2. The toothpicks spread out on the ground in random positions and at random angles. Contrariwise.3 NUMERICAL PROBABILISTIC ALGORITHMS Remember that it is a question of finding an approximate answer for a numerical problem. Why "Sherwood".1 Buffon's Needle You spill a box of toothpicks onto a wooden floor. Intuitively.

the precision of your estimate of it would be limited by the precision of the ratio of the length of the toothpicks to the width of the planks. The following algorithm simulates this experiment.3. This allows us to estimate it = 4k/n.001 ? Supposing that you have available a random generator of the Problem 8. length of the toothpicks. 8. / Throwing darts to compute it.in radians . Your algorithm should count the number k of toothpicks that fall across a crack. where 28 darts have been thrown.3. In our example. We suppose that every point in the square has exactly the same probability of being hit by a dart. except that it only throws darts into the upper right quadrant of the target.1 illustrates the experiment. But then 0 nobody said this was a practical method !) Consider next the experiment that consists of throwing n darts at a square target and counting the number k that fall inside a circle inscribed in this square.) If the radius of the inscribed circle is r. so the average proportion of the darts that fall inside the circle is nr2/4r2 = it/4. type discussed previously. give an algorithm Buffon (n) that simulates the experiment of dropping n toothpicks.000. and return n / k as its estimate of it.of each toothpick that falls. Supposing that the width of the planks is exactly twice the * Problem 8. whereas that of the square target is 4r2. At AF r M r r At Si r 4 r Figure 8. where we expect to see on average 28n/4 = 22. we are not surprised to find 21 of them inside the circle.2. how many of the latter should you drop in order to obtain with probability at least 90% an estimate of it whose absolute error does not exceed 0.3 Numerical Probabilistic Algorithms 229 Furthermore. using a pseudorandom generator.required.Sec.2. (It is much easier to simulate this experiment on a computer than to find a darts-player with exactly the degree of expertise-or of incompetence .2. Try your algorithm on a computer with n = 1000 and n = 10. Figure 8.1. . What are your estimates of it ? (It is likely that you will need the value of it during the simulation to generate the random angle .3. then its area is nr 2.
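As a concrete illustration of the toothpick experiment, here is a minimal sketch in Python of the kind of simulation requested above; it assumes planks of width 1 and toothpicks of length 1/2, and it represents each toothpick by the position of its centre and a random angle (note that the value of pi is needed to draw that angle, as the text points out).

    import math, random

    def buffon(n):
        """Drop n toothpicks of length 1/2 on planks of width 1 and
        return n/k as an estimate of pi, where k is the number of
        toothpicks that fall across a crack."""
        k = 0
        for _ in range(n):
            y = random.uniform(0.0, 1.0)          # distance of the centre to the lower crack
            theta = random.uniform(0.0, math.pi)  # angle between toothpick and cracks
            half_span = 0.25 * math.sin(theta)    # vertical half-extent of the toothpick
            if y < half_span or y > 1.0 - half_span:
                k += 1
        return n / k

    print(buffon(1_000_000))    # typically prints a value close to 3.14

Each toothpick crosses a crack with probability 1/pi under these assumptions, which is why about 113 of the 355 toothpicks are expected to cross.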

    function darts(n)
        k ← 0
        for i ← 1 to n do
            x ← uniform(0, 1)
            y ← uniform(0, 1)
            if x² + y² ≤ 1 then k ← k + 1
        return 4k/n

Problem 8.3.4. What value is estimated if we replace "y ← uniform(0, 1)" by "y ← x" in this algorithm?

8.3.2 Numerical Integration

This brings us to the best known of the numerical probabilistic algorithms: Monte Carlo integration. (This name is unfortunate, because in our terminology it is not an example of a Monte Carlo algorithm.) Recall that if f : [0, 1] → [0, 1] is a continuous function, then the area of the surface bounded by the curve y = f(x), the x-axis, the y-axis, and the line x = 1 is given by

    ∫₀¹ f(x) dx .

To estimate this integral, we could throw a sufficient number of darts at the unit square and count how many of them fall below the curve.

    function hitormiss(f, n)
        k ← 0
        for i ← 1 to n do
            x ← uniform(0, 1)
            y ← uniform(0, 1)
            if y ≤ f(x) then k ← k + 1
        return k/n

Thus the algorithm using darts to estimate π is equivalent to the evaluation of

    4 ∫₀¹ √(1 - x²) dx

by hit-or-miss Monte Carlo.

Problem 8.3.5. Consider two real constants ε and δ strictly between 0 and 1. Prove that if I is the correct value of the integral and if h is the value returned by the preceding algorithm, then

    Prob[ |h - I| < ε ] ≥ 1 - δ

whenever the number n of iterations is at least I(1-I)/ε²δ. Therefore it is sufficient to use n = ⌈1/(4ε²δ)⌉ (because I(1-I) ≤ 1/4) to reduce below δ the probability of an absolute error exceeding ε.
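For concreteness, here is a minimal sketch in Python of hit-or-miss Monte Carlo applied to f(x) = sqrt(1 - x^2), so that four times the returned value estimates pi; the function name is ours.

    import math, random

    def hit_or_miss(f, n):
        """Estimate the integral of f over [0,1] (with 0 <= f(x) <= 1)
        by the proportion of random points that fall under the curve."""
        k = 0
        for _ in range(n):
            x = random.random()
            y = random.random()
            if y <= f(x):
                k += 1
        return k / n

    estimate = 4 * hit_or_miss(lambda x: math.sqrt(1 - x * x), 100_000)
    print(estimate)      # close to pi; the error shrinks roughly like 1/sqrt(n)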

that crude always outperforms hit-or-miss.Sec. and the distribution of the estimate is approximately normal when n is large.1 to n do x F.(b -a)l(n -1) sum <-. each iteration of crude requires the computation of a square root. 8. and let f : [a. one of the simplest of which is the trapezoidal algorithm. The estimated value of the integral is obtained by multiplying the width of the interval by the arithmetic mean of the values of the function at these points. the variance of the estimate calculated by this algorithm is inversely proportional to the number of points generated randomly. One should not immediately conclude.a. c. for instance. Problem 8. and b.b) sum E-0 for i . As presented thus far.sum + f (x) return (b -a) x (sum/n) fh fab Provided f (x) dx and f 2(x) dx exist. If both are used to compute it as previously described. Moreover. Monte Carlo integration is of little practical use. a Your algorithm should accept as parameters. b ) we assume n >-21 delta .6.3 Numerical Probabilistic Algorithms 231 Notice that this is not very good : one more decimal digit of precision requires one hundred times more computation. b. the number n of iterations to make and the values of c and d.n. and d be four real numbers such that a < b and c S d. The simplest consists of generating a number of points randomly and uniformly inside the interval concerned.(f (a)+f (b))/2 for x . a. its variance is never worse than that of the hit-or-miss algorithm. Let a. besides f . b I . function crude(f . A better estimate of the integral can generally be obtained by various deterministic methods. Generalize the preceding algorithm to estimate h Jf(x)dx . for any fixed number of iterations.[c. because hit-or-miss can sometimes make more iterations than crude in a given amount of time. n . b) sum .3.uniform (a. however.a +delta step delta to b -delta do sum<-sum+f(x) return sum x delta . d ] be a continuous function. function trapezoidal (f . a. Usually more efficient probabilistic methods of estimating the value of a definite integral exist. which hit-or-miss can do without by proceeding as in darts.
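For comparison, here is a minimal sketch in Python of the crude Monte Carlo estimator next to the trapezoidal rule on the same integrand; the function names and the sample integrand are our own choices.

    import math, random

    def crude(f, n, a, b):
        """Crude Monte Carlo: (b - a) times the average of f at n random points."""
        total = sum(f(random.uniform(a, b)) for _ in range(n))
        return (b - a) * total / n

    def trapezoidal(f, n, a, b):
        """Deterministic estimate using n equally spaced points (n >= 2)."""
        delta = (b - a) / (n - 1)
        total = (f(a) + f(b)) / 2
        for i in range(1, n - 1):
            total += f(a + i * delta)
        return total * delta

    f = lambda x: math.sqrt(1 - x * x)
    print(4 * crude(f, 10_000, 0.0, 1.0))        # random error, roughly 1/sqrt(n)
    print(4 * trapezoidal(f, 10_000, 0.0, 1.0))  # much closer to pi for the same n

On a smooth one-dimensional integrand like this one the deterministic rule wins easily, which is exactly the point made in the text.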

uniformly. Why is it called the trapezoidal algorithm ? Problem 8. the trapezoidal algorithm needs many less iterations than does Monte Carlo integration to obtain a comparable degree of precision. one million points will be needed for a triple integral. In practice. to achieve the same precision when a double integral is evaluated.8. Compare experimentally the trapezoidal algorithm and the two probabilistic algorithms we have seen. numeric probabilistic algorithms are used to approximate a real number. Suppose. on the other hand. However. there correspond continuous functions that can be constructed expressly to fool the algorithm.3 Probabilistic Counting In the preceding examples. Consider for example the function f (x) = sin2((100!)itx). If a deterministic integration algorithm using some systematic method to sample the function is generalized to several dimensions. Monte Carlo integration is used to evaluate integrals of dimension four or higher. that is.3. to every deterministic integration algorithm. In practice. although the amount of work for each iteration is likely to increase slightly with the dimension. 8 Try to grasp intuitively why this algorithm works. that we are able to choose an element from X randomly. but the number of elements is too large for it to be practical simply to count them one by one. A classic brain-teaser helps to explain how this ability to choose random elements from X allows us to estimate its cardinality. a technique not discussed here (but Section 8.Probabilistic Algorithms 232 Problem 8. The same technique can also be used to estimate the value of an integer. The precision of the answer can be improved using hybrid techniques that are partly systematic and partly probabilistic. 10. If the dimension is fixed. We would like to know the cardinality of X. Chap. and independently (see Example 8. n .3. and so on.000 points. Any call on trapezoidal (f .7. 8.4). on the other hand. then it will probably be necessary to use all the points of a 100 x 100 grid.101 returns the value zero.3. estimate the value of it by calculating fo 4(1-x2)zdx.1. Monte Carlo integration is of interest when we have to evaluate a multiple integral. it may even be preferable to use quasi Monte Carlo integration. In each case. No function can play this kind of trick on the Monte Carlo integration algorithm (although there is an extremely small probability that the algorithm might manage to make a similar kind of error. 0. the number of sample points needed to achieve a given precision grows exponentially with the dimension of the integral to be evaluated.1) with 2 5 n <. If 100 points are needed to evaluate a simple integral. even though the true value of this integral is z . This is typical of most of the natural functions that we may wish to integrate. In Monte Carlo integration. Let X be a finite set. . even the most sophisticated. the dimension of the integral generally has little effect on the precision obtained. even when f is a thoroughly ordinary function). In general.7 gives a reference for further reading).

When n is large. it is when k z a4n-.12. the probability that k objects chosen randomly and uniformly from n (with repetition allowed) are all distinct is n!/(n -k )!n k . Problem 8.13.3.3 Numerical Probabilistic Algorithms 233 A room contains 25 randomly chosen people. there are n!/(n-k)! different ways of choosing k distinct objects from among n objects. It is harder to determine the average value of k corresponding to the first repetition. Calculate 365!/340!36525 to four significant figures.3. n! = x _X2 / 2 when x is near zero. Would you be Problem 8. Problem 8. willing to bet that at least two of them share the same birthday? (Do not read the following paragraphs if you wish to think about this.k'/n')) provided that 1 << k << n.9. . taking into account the order in which they are chosen. uniformly and independently choose elements with replacement. n! E I (n le )" [1 + l/12n + O(n-2)] ln(l+x) Ex -x2/2+x3/3-8(x4) when -l <x < 1 to conclude that n /(n-k) nk E e-k(k-1)12n-k'/6n'±0(max(k'/n'. 8. ** Problem 8.177.3. where a = 21n 2 z 1. and the approximation ln(l+x) Z Stirling's approximation.3. allow us to estimate this probability.10 does not correspond exactly to the puzzle in problem 8.253. that the probability of having a repetition exceeds 50%.. The calculation in problem 8. the probability that you would win your bet is greater than 56%. Use the more accurate formulas .14.3. Show that n!/(n -k )!n k = e-k'/2n * Problem 8. What about leap years ? (n le )" .9. This suggests the following probabilistic algorithm for estimating the number of elements in a set X . because births are not uniformly distributed through the year.Sec. In particular.3. Let X be a set of n elements from which we randomly.11.3. show that the expected value of k tends to where (3 = = 1. Nevertheless. Since there are n k different ways of choosing k objects if repetitions are allowed.) The intuitive answer to the preceding question is almost invariably "of course not". More generally. Problem 8.10. Let k be the number of choices before the occurrence of the first repetition.3. Does this make it more or less likely that you would win your bet? Justify your answer intuitively.

    function count(X : set)
        k ← 0
        S ← ∅
        a ← uniform(X)
        repeat
            k ← k + 1
            S ← S ∪ {a}
            a ← uniform(X)
        until a ∈ S
        return 2k²/π

** Problem 8.3.15. Carry out a statistical analysis of the random variable k² as a function of the value of n. Does the function count provide an unbiased estimator of n? Give an algorithm count2(X, ε, δ) that returns an estimate N of the number n of elements in X such that

    Prob[ |1 - N/n| < ε ] ≥ 1 - δ .

The algorithm count(X) estimates the number n of elements in X in an expected time and space that are both in O(√#X), provided operations on the set S are counted at unit cost. This quantity of space can be prohibitive if n is large. The space can be reduced to a constant with only a linear increase in the execution time by using a pseudorandom generator. This is one of the rare instances where using a truly random generator would be a hindrance rather than a help: we have not merely to choose an element at random from X, but also to step through X in a pseudorandom, and hence deterministic, way.

Let f : X → X be a pseudorandom function and let x₀ ∈ X be a randomly chosen starting point. This defines a walk x₀, x₁, x₂, ..., through X, where xᵢ = f(xᵢ₋₁) for i > 0. Because X is a finite set, the sequence {xᵢ} must eventually repeat itself. Let q be the smallest integer such that x_q appears more than once in the sequence, and let k be the smallest integer larger than q such that x_q = x_k. Let p stand for k - q, hence k = q + p. Because the walk is pseudorandom, and hence deterministic, we also have x_{q+i} = x_{k+i} for every nonnegative integer i, and more generally xᵢ = xⱼ whenever j ≥ i ≥ q and j - i ≡ 0 (mod p). For this reason the first q elements of the walk are called its tail, q is the length of the tail, and p is its period. We are interested in computing the value of k, since this corresponds precisely to the first repetition. The following exercise shows how to obtain both q and p in constant space and in a time in O(k).

* Problem 8.3.16. Consider the sequence {yₜ}, t ≥ 0, defined by yₜ = x_{2t} for t ≥ 0. Show that the smallest integer t > 0 such that yₜ = xₜ is such that q ≤ t ≤ q + p, with t = q + p only possible if q = 0. Show also that t ≡ 0 (mod p). Show that the smallest integer j such that x_j = x_{j+t} is precisely q, the length of the tail. Incorporate all these ideas into a simple algorithm that is capable of finding the values of q and p, and hence of k, in a time in O(k) and in constant space.
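As an illustration of count, here is a minimal sketch in Python; the set X is represented as a list so that uniform sampling is easy, which deliberately ignores the space issue that the pseudorandom walk is designed to solve.

    import math, random

    def count_estimate(X):
        """Draw elements of X uniformly with replacement until the first
        repetition; if this takes k draws, return 2*k*k/pi as an estimate
        of the number of elements of X."""
        seen = set()
        k = 0
        while True:
            k += 1
            a = random.choice(X)          # uniform, independent choice in X
            if a in seen:
                return 2 * k * k / math.pi
            seen.add(a)

    X = list(range(10_000))
    print(count_estimate(X))   # one noisy estimate of 10000; Problem 8.3.15 examines its bias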

It suffices to choose k I and k2 randomly in K and to calculate Ek (Ek. keys. Such a system is closed (an undesirable property) if (dki. An endomorphic cryptosystem consists of a finite set K of Example 8. Probabilistic counting is used to estimate a lower bound on the cardinality of Xm . Let N be the total number of words on the tape. Let m be chosen randomly from M. where a << b .3. 8.3. see Problem 8.11) The probabilistic counting algorithm no longer works if the generation of elements in X is not uniform.4 More Probabilistic Counting You have the complete works of Shakespeare on a magnetic tape.(Ek. be used unchanged to estimate a lower bound on the number of elements in X.) Implemented using specialized hardware. For every m r= M consider the set X. and so on) as distinct items? Two obvious solutions to this problem are based on the techniques of sorting and searching. On the other hand. if some elements of X are favoured to the detriment of others. (Continuation of Problem 8.. the probabilistic algorithm was able to arrive at this conclusion in less than a day. and let n be the number of different words.17. nevertheless. if the system is not closed. a finite set M of messages and two permutations Ek : M -* M and Dk : M -* M for each key k E K such that Dk (Ek (m)) = m for every m E M and k E K.3. although this may not imply a uniform distribution on Xm .3 Numerical Probabilistic Algorithms 235 Problem 8. = { Ek (Ek.Sec.3. which rules out any possibility of an exhaustive verification of the hypothesis that #X. (Even 256 microseconds is more than two millennia. In this application #K = 256 and #M = 264. The following example shows that it can nonetheless be useful if we simply need to know whether X contains less than a elements or more than b . How can you determine the number of different words he used. The first approach might be to sort the words on the tape so as to . The variance of the estimate obtained from this algorithm is unfortunately too high for most practical applications (unless the solution to Problem 8.(m)] .(m))=Ek.(m)) I k I .15 is used).3. It is clear that #Xm <_ #K if the system is closed. > 256.1. k 2 E K }. All this suggests a probabilistic approach to testing whether or not a cryptosystem is closed. Show that it can. possessives. (We can only estimate a lower bound since there is no reason to believe that elements are chosen uniformly from Xm . A similar approach was used to demonstrate that the American Data Encryption Standard is almost certainly not closed.k2EK)(3k3EK)(b'mEM)[Ek.3. 8. that is.. counting different forms of the same word (plurals. it is reasonable to hope that #Xm >> #K provided that #M >> #K.) It is improbable that the system is closed if this estimate is significantly greater than the cardinality of K.17.(m)) in order to choose a random element from Xm .

bring identical forms together, and then to make a sequential pass over the sorted tape to count the number of different words. This method takes a time in O(N log N) but requires a relatively modest amount of space in central memory if a suitable external sorting technique is used. (Such techniques are not covered in this book.) The second approach consists of making a single pass over the tape and constructing in central memory a hash table (see Section 8.4.4) holding a single occurrence of each form so far encountered. The required time is thus in O(N) on the average, but it is in Ω(Nn) in the worst case. Moreover, this second method requires a quantity of central memory in Ω(n), which will most likely prove prohibitive.

If we are willing to tolerate some imprecision in the estimate of n, and if we already know an upper bound M on the value of n (or failing this, on the value of N), then there exists a probabilistic algorithm for solving this problem that is efficient with respect to both time and space.

We must first define what sequences of characters are to be considered as words. (This may depend not only on the character set we are using, but also whether we want to count such sequences as "jack-rabbit", "jack-o'-lantern", and "jack-in-the-box" as one, two, three, or four words.) Let U be the set of such sequences. Let m be a parameter somewhat larger than lg M (a more detailed analysis shows that m = 5 + ⌈lg M⌉ suffices). Let h : U → {0,1}^m be a hash function that transforms a sequence from U in a pseudorandom way into a string of bits of length m. If y is a string of bits of length k, denote by y[i] the ith bit of y, 1 ≤ i ≤ k, and denote by π(y, b), b ∈ {0,1}, the smallest i such that y[i] = b, or k+1 if none of the bits of y is equal to b. Consider the following algorithm.

    function wordcnt
        { initialization }
        y ← string of (m + 1) bits set to zero
        { sequential passage through the tape }
        for each word x on the tape do
            i ← π(h(x), 1)
            y[i] ← 1
        return π(y, 0)

Let k be the value returned by a call on wordcnt. Suppose, for example, that the value returned by this algorithm is 4. This means that the final y begins with 1110. Since the probability that a random binary string begins with 0001 is 2⁻⁴, it is unlikely that there could be more than 16 distinct words on the tape, assuming h has sufficiently random behaviour. (The probability that π(h(xᵢ), 1) ≠ 4 for 16 different values of xᵢ is (15/16)¹⁶ = 35.6% ≈ 1/e; in fact, Prob[k = 4 | n = 16] = 31¾%.) Conversely, there are words x₁, x₂ and x₃ on the tape such that h(x₁) begins with 1, h(x₂) with 01, and h(x₃) with 001, respectively, but there is no word x₄ such that h(x₄) begins with 0001. Since the probability that a random binary string begins with 001 is 2⁻³, it is unlikely that there could be less than 4 distinct words on the tape. (The probability that π(h(xᵢ), 1) = 3 for at least one value of xᵢ among 4 different values is 1 - (7/8)⁴ = 41.4%; in fact, Prob[k = 4 | n = 4] = 18¾%.) This crude rea-
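For concreteness, here is a minimal sketch in Python in the spirit of wordcnt; the use of SHA-256 as the hash function h, the choice m = 32, and the example text are our own illustrative assumptions, and the correction constant is the one discussed further on in the text.

    import hashlib

    def wordcnt(words, m=32):
        """Remember, for each i, whether some word hashes to a bit string
        whose first 1 is in position i; the position k of the first 0 in y
        then gives the estimate 2**k / 1.54703 of the number of distinct words."""
        y = [0] * (m + 1)
        for w in words:
            h = int(hashlib.sha256(w.encode()).hexdigest(), 16) & ((1 << m) - 1)
            bits = format(h, "0{}b".format(m))
            i = bits.find("1") + 1          # 1-based position of the first 1 bit
            if i == 0:
                i = m + 1                   # the hash was all zeros
            y[i - 1] = 1
        k = y.index(0) + 1 if 0 in y else m + 2
        return 2 ** k / 1.54703

    text = "how much wood would a woodchuck chuck if a woodchuck could chuck wood".split()
    print(len(set(text)), wordcnt(text))    # exact count versus a rough probabilistic estimate

The estimate is insensitive to repetitions and to the order of the words, but a single counter can easily be off by a factor of two, as the analysis below explains.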

These algorithms are only applicable if the matrices concerned are well conditioned. 8. for instance. Prove that the expected value of R is in Ig n + O(1).18.62950 when n is sufficiently large.54703. Among those are matrix multiplication. An intriguing feature of these probabilistic algorithms is their ability to compute independently the various entries in the result. . 1 }' is randomly chosen with uniform distribution among all such functions (this last assumption is not reasonable in practice). an n x n nonsingular matrix A. the solution of a system of simultaneous linear equations. Y 22 . Let R be the random variable returned by this algorithm ** Problem 8. there are probabilistic algorithms that are capable of estimating the value of B.) 0 Notice that this approximate counting algorithm is completely insensitive to the order in which the words appear on the tape and to the number of repetitions of each of them. Consider. Show how to obtain an arbitrarily precise estimate by using a little more space but with no appreciable increase in execution time.64) . ** Problem 8. Unfortunately.3. the standard deviation of R shows that this estimate may be in error by a factor of 2.3.--.n . It is far from obvious how to carry out a more precise analysis of the unbiased estimate of n given by k. which is unacceptable. typically requiring that 1-A have only small eigenvalues. where the hidden constant in 0(1) fluctuates around 0.78/-J provided t is sufficiently large (t >.yt of m bits. Classic deterministic inversion algorithms compute its inverse B as a whole or perhaps column by column. By contrast. matrix inversion. 8.5 Numerical Problems in Linear Algebra Many classic problems in linear algebra can be handled by numerical probabilistic algorithms. you can obtain a relative precision of about 0. .3. your hash function should produce strings of m +lg t bits. for any given 1 <_ i < n and 1 < j <. and the computation of eigenvalues and eigenvectors. The reader is referred to the literature for further discussion. in about 1/n2 of the time they would require to compute the whole inverse. provided n is sufficiently large. when the tape contains n different words and the function h : U ---)10.19. This offers a first approach for estimating the number of different words : calculate k using the algorithm wordcnt and estimate n as 2k/1. Prove further that the standard deviation of R fluctuates around 1. (Hint : by using t strings y 1 . We do not discuss any of them in detail here because it is only for very specialized applications that they perform better than the obvious deterministic algorithms. where I stands for the identity matrix.Sec.3 Numerical Probabilistic Algorithms 237 soning indicates that it is plausible to expect that the number of distinct words on the tape should lie between 2k-2 and 2"`.12127.

4.1 Selection and Sorting We return to the problem of finding the kth smallest element in an array T of n elements (Section 4. but this fortuitous behaviour is only due to the probabilistic choices made by the algorithm. If we define Fe(n) = I te(a)l#X xex' the average expected time taken by algorithm B on a random instance of size n.1). it is clear that to (n) = tA(n)+s (n). A hypothesis that is correct for a given application of the algorithm may prove disastrously wrong for a different application. The Sherwood algorithm thus involves only a small increase in the average execution time if s (n) is negligible compared to to (n).Probabilistic Algorithms 238 Chap. that quicksort (Section 4. Thus there are no longer worst-case instances. tA(x)/#X xEX This in no way rules out the possibility that there exists an instance x of size n such that to (x) >> FA (n). for example. but only worst-case executions. independently of the specific instance x to be solved. We wish to obtain a probabilistic algorithm B such that tB (x) = 1A (n) + s (n) for every instance x of size n. Supposing that every instance of a given size is equiprobable. 8.4 mentions that analysing the average efficiency of an algorithm may sometimes give misleading results. This analysis no longer bears any relation to reality if in fact we tend to give the algorithm only instances that are already almost sorted.4 SHERWOOD ALGORITHMS Section 1. Suppose. Let A be a deterministic algorithm and let to (x) be the time it takes to solve some instance x. 8 8.6 and Example 8. be the set of instances of size n. the average time taken by the algorithm to solve an instance of size n is tA(n)= Y. For every integer n let X.5) is used as a subalgorithm inside a more complex algorithm. where tB (x) is the expected time taken by algorithm B on instance x and s (n) is the cost we have to pay for this uniformity. Sherwood algorithms free us from the necessity of worrying about such situations by evening out the time required on different instances of a given size. The reason is that any analysis of the average case must be based on a hypothesis about the probability distribution of the instances to be handled.1. Analysis of this sorting method shows that it takes an average time in O(n log n) to sort n items provided that the instances to be sorted are chosen randomly. Algorithm B may occasionally take more time than tA(n)+s(n) on an instance x of size n. The heart of this algorithm is the choice of a .

let S. Rather than express this time as a function solely of n. we assume that 1 <_ k <_ n } iE. The simplified algorithm is generally faster : for every n . a). n ]. We have the following equations : 6ESn (3cp)(3n. we can express it as a function of both n and a. we must make sure that the instances to be solved are indeed chosen randomly and uniformly. be the set of n! permutations of the first n integers. (n) = I t . (n .a) is occasionally much greater than tp (n .239 Sherwood Algorithms Sec.. n ] but (3c.(Y)] . a) and t. Despite this prohibitive worst case. Define F. (5) be the times taken by the algorithm that uses the pseudomedian and by the simplified algorithm. More precisely. and that we are looking for the median. respectively. If we decide to aim for speed on the average thanks to the simpler deterministic algorithm. n2 >> cp n >_ tp(n. t. a) < tp (n . the permutation of the first n integers that corresponds to the relative order of the elements of the array. For the execution time to be independent of the permutation a. the simpler algorithm has the advantage of a much smaller hidden constant on account of the time that is saved by not calculating the pseudomedian.j f. the simplified algorithm is sometimes disastrous : t. (n . On the other hand. using the first element of the array as the pivot assures us of a linear execution time on the average. (n . (Y) for most values of a. On the other hand.4 pivot around which the other elements of the array are partitioned. Let tp (n . (n . The decision whether it is more important to have efficient execution in the worst case or on the average must be taken in the light of the particular application. k) { finds the kth smallest element in array T .6. with the risk that the algorithm will take quadratic time in the worst case (Problems 4.6). which forces us to distinguish between the worst case and an average case.5 and 4. but only on their relative order. it suffices to choose the pivot randomly among the n elements of the array T.6.n . even though finding this pivot is a relatively costly operation. Using the pseudo- median as the pivot assures us of a linear execution time in the worst case.a)/n! .6 do not depend on the values of the elements of the array.l. Suppose that the elements of T are distinct.ElN)(`dn?n1)(Vo (acs << cp)(3n2E lNT)(Vn ? n2)[ts(n) <_ c. The execution times of the algorithms in Section 4. function selectionRH (T [ 1 . The resulting algorithm resembles the iterative binary search of Section 4. 8. The fact that we no longer calculate a pseudomedian simplifies the algorithm and avoids recursive calls.3.)(3n3E IN)(Vn >C.

    i ← 1
    j ← n
    while i < j do
        m ← T[uniform(i..j)]
        partition(T, m, i, j, u, v)
        if k < u then j ← u - 1
        else if k > v then i ← v + 1
        else i, j ← k
    return T[i]

Here partition(T, m, i, j, var u, var v) pivots the elements of T[i..j] around the value m: after this operation the elements of T[i..u-1] are less than m, those of T[u..v] are equal to m, and those of T[v+1..j] are greater than m. The values of u and v are calculated and returned by the pivoting algorithm. The resulting algorithm resembles the iterative binary search of Section 4.3. The fact that we no longer calculate a pseudomedian simplifies the algorithm and avoids recursive calls.

A similar analysis to that of Problem 4.6.5 shows that the expected time taken by this probabilistic selection algorithm is linear, independently of the instance to be solved. It is always possible that some particular execution of the algorithm will take quadratic time, but the probability that this will happen becomes increasingly negligible as n gets larger, and, to repeat, this probability is independent of the instance concerned. Let tRH(n, σ) be the average time taken by the Sherwood algorithm to determine the median of an array of n elements arranged in the order specified by σ. The probabilistic nature of the algorithm ensures that tRH(n, σ) is independent of σ. Its simplicity ensures that

    (∃n₀ ∈ ℕ)(∀n ≥ n₀)(∀σ ∈ Sₙ)[ tRH(n, σ) < tp(n, σ) ] .

To sum up, we started with an algorithm that is excellent when we consider its average execution time on all the instances of some particular size but that is very inefficient on certain specific instances. Using the probabilistic approach, we have transformed this algorithm into a Sherwood algorithm that is efficient (with high probability) whatever the instance considered. Thus its efficiency is not affected by the peculiarities of the application in which the algorithm is used.

Problem 8.4.1. Show how to apply the Sherwood style of probabilistic approach to quicksort (Section 4.5). Notice that quicksort must first be modified along the lines of Problem 4.5.4.
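Before moving on, here is a minimal sketch in Python of the same selection strategy; the three-way partition is built with auxiliary lists for clarity, so it illustrates the control structure of selectionRH rather than an in-place implementation.

    import random

    def selection_rh(T, k):
        """Return the k-th smallest element of T (1 <= k <= len(T)),
        choosing each pivot at random so that the expected time is
        linear on every instance."""
        T = list(T)
        i, j = 0, len(T) - 1
        while i < j:
            m = T[random.randint(i, j)]              # random pivot value
            lt = [x for x in T[i:j + 1] if x < m]
            eq = [x for x in T[i:j + 1] if x == m]
            gt = [x for x in T[i:j + 1] if x > m]
            T[i:j + 1] = lt + eq + gt                # partition T[i..j] around m
            u = i + len(lt)                          # first index holding m
            v = u + len(eq) - 1                      # last index holding m
            if k - 1 < u:
                j = u - 1
            elif k - 1 > v:
                i = v + 1
            else:
                return m
        return T[i]

    print(selection_rh([31, 4, 15, 9, 26, 5, 35, 8, 9, 7], 5))   # 5th smallest -> 9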

8.4.2 Stochastic Preconditioning

The modifications we made to the deterministic algorithms for sorting and for selection in order to obtain Sherwood algorithms are simple. There are, however, occasions when we are given a deterministic algorithm efficient on the average but that we cannot reasonably expect to modify. This happens, for instance, if the algorithm is part of a complicated, badly documented software package. Stochastic preconditioning allows us to obtain a Sherwood algorithm without changing the deterministic algorithm. The trick is to transform the instance to be solved into a random instance, to use the given deterministic algorithm to solve this random instance, and then to deduce the solution to the original instance.

Suppose the problem to be solved consists of the computation of some function f : X → Y for which we already have an algorithm that is efficient on the average. For every integer n, let Xₙ be the set of instances of size n, and let Aₙ be a set with the same number of elements. Assume random sampling with uniform distribution is possible efficiently within Aₙ. Let A be the union of all the Aₙ. Stochastic preconditioning consists of a pair of functions u : X × A → X and v : A × Y → Y such that

    i.   the instance u(x, r) is distributed randomly and uniformly over Xₙ
         whenever x ∈ Xₙ and r is chosen randomly and uniformly within Aₙ ;
    ii.  (∀n ∈ ℕ)(∀x ∈ Xₙ)(∀r ∈ Aₙ)[ f(x) = v(r, f(u(x, r))) ] ; and
    iii. the functions u and v can be calculated efficiently in the worst case.

We thus obtain the following Sherwood algorithm.

    function RH(x)
        { computation of f(x) by Sherwood algorithm }
        let n be the size of x
        r ← uniform(Aₙ)
        y ← u(x, r)        { random instance of size n }
        s ← f(y)           { solved by the deterministic algorithm }
        return v(r, s)

Whatever the instance x to be solved, the first property ensures that this instance is transformed into an instance y chosen randomly and uniformly from all those of the same size. Thanks to the second property, the solution to this random instance allows us to recover the solution of the original instance x.

Example 8.4.1. The stochastic preconditioning required for selection or for sorting is the same: it is simply a question of randomly shuffling the elements of the array in question. Simply call the following procedure before the deterministic sorting or selection algorithm.

    procedure shuffle(T[1..n])
        for i ← 1 to n - 1 do
            j ← uniform(i..n)
            interchange T[i] and T[j]

No posttreatment (function v) is needed to recover the solution in these cases.

Example 8.4.2. Suppose for the purposes of illustration that someone were to discover an algorithm for computing discrete logarithms that is efficient on the average but prohibitively slow in the worst case. Recall that no efficient algorithm is known for calculating discrete logarithms (Section 4.8). Denote the discrete logarithm of x modulo p to the base g by

function dlogRH (g . ii.bx mod p s f. The rank of a key is the number of keys in the list that are less than or equal to the given key. Assume.. perhaps for a fee.p(gr modp)=r for 05r Sp-2.3.4. p ) r <--uniform (0.p x. What should you do if you are unwilling to divulge your actual request x? The solution is easy if stochastic preconditioning applies to the computation of f : use the function u to encrypt x into some random y. r) is independent of x as long as r is chosen randomly with uniform probability. logg. The following equations allow us to transform our hypothetical algorithm into a Sherwood algorithm : i. p -2) b Fdexpoiter (g. have f (y) computed for you. except for its size. logs. 8.2.p x + logg.p (xy mod p) = (logg. and then use the function v to deduce f (x). if val [i] is not the largest key. r . p) { Section 4. furthermore. Here is the Sherwood algorithm.Probabilistic Algorithms 242 Chap. The smallest key is in val [head ]. and so on. here is one way to represent the list 1.8) a F. This process yields no information on your actual request.r) mod (p -1) 11 Problem 8. 8 logg. Find other problems that can benefit from stochastic precondi- tioning. 21 . because the probability distribution of u (x. For instance. n ] and an integer head. The end of the list is marked by ptr [i] = 0. that some other party is capable of carrying out this computation and willing to do so for you.logg..p a { using the assumed algorithm } return (s .p y) mod (p .4.. 13. x . then ptr [i] gives the index of the following key. Stochastic preconditioning offers an intriguing possibility : computing with an encrypted instance. Why does the algorithm dlogRH work? Point out the functions corresponding to u and v. the next smal- lest is in val [ptr [head ]].4. Problem 8. . 5. 8. Assume that you would like to compute f (x) for some instance x but that you lack the computing power or the efficient algorithm to do so. 2. n ] and ptr [ 1 .1). In general. 3.3 Searching an Ordered List A list of n keys sorted into ascending order is implemented using two arrays val [ 1 .

We can use binary search to find a key efficiently in a sorted array. which would correspond to the first step in binary search.}. (Y) denotes the expected value of this time. (Y) denotes the time taken by this algorithm to find the key of rank k among the n keys in the list when the order of the latter in the array val is specified by the permutation a. k. there is no obvious way to select the middle of the list.(Y)I 1 <_k <_n and aeS. The following algorithm finds a key x starting from some position i in the list. Thus WA(n)=max{tA(n. Let Sn be the set of all n! permutations. As usual. Suppose for the moment that the required key is always in fact present in the list. and mA(n)= E EtA(n. .4. provided that x ? val [i] and that x is indeed present. k .k.(Y) nxn! vES k=I Problem 8.k.Sec. Here.) Despite this inevitable worst case.4 Sherwood Algorithms i val [i] ptr [i] 243 1 2 3 4 5 6 7 2 2 3 13 1 5 21 8 5 6 1 7 0 3 In this example head = 4 and the rank of 13 is 6. tA (n . any deterministic algorithm takes a time in Q (n) in the worst case to find a key in this kind of list. 8. but it does not have such a thing as a worst-case instance. Prove the preceding assertion. the Sherwood algorithm is no faster on the average than the corresponding deterministic algorithm. Whether the algorithm is deterministic or probabilistic. there exists a deterministic algorithm that is capable of carrying out such a search in an average time in 0 (I ). the problem is thus to find that index i. Given a key x. 1 5 i <_ n .4. In the case of a probabilistic algorithm. In fact. (Hint : show how a worst-case instance can be constructed systematically from the probes made into the list by any given deterministic algorithm. From this we can obtain a Sherwood algorithm whose expected execution time is in 0 (') whatever the instance to be solved. respectively. Any instance can be characterized by a permutation a of the first n integers and by the rank k of the key we are looking for. wA (n) and mA (n) denote its worst-case and its mean time. If A is any deterministic algorithm.4 implies that wA (n) E S2(n) for every deterministic algorithm A. to (n . We want a deterministic algorithm B such that mB (n)E 0 ('/n) and a Sherwood algorithm C such that we (n) = mB (n). such that val [i] = x.4. Problem 8. and that all the elements of the list are distinct. however.

    function search(x, i)
        while x > val[i] do i ← ptr[i]
        return i

Here is the obvious deterministic search.

    function A(x)
        return search(x, head)

Problem 8.4.5. Let ℓA(n, k) be the exact number of references to the array val made by the algorithm A to find the key of rank k in a list of n keys. (The order σ of the keys is irrelevant for this algorithm.) Define wA(n) and mA(n) similarly. Determine ℓA(n, k) for every integer n and for every k between 1 and n, and determine wA(n) and mA(n) for every integer n.

Here is a first probabilistic algorithm.

    function D(x)
        i ← uniform(1..n)
        y ← val[i]
        case x < y : return search(x, head)
             x > y : return search(x, ptr[i])
             otherwise : return i

Problem 8.4.6. Determine ℓD(n, k) for every integer n and for every k between 1 and n, and determine wD(n) and mD(n) for every integer n. (See Problem 8.4.5 for the definition of ℓ, w, and m.) Compare wD(n) and mA(n).

Problem 8.4.7. As a function of n, what values of k maximize ℓD(n, k)?

Problem 8.4.8. Give explicitly a function f(n) such that ℓD(n, k) < ℓA(n, k) if and only if k > f(n).

The quantities ℓ, w, and m introduced in the previous problems facilitate our analysis. The following deterministic algorithm is efficient on the average.

    function B(x)
        i ← head
        max ← val[i]
        for j ← 1 to ⌊√n⌋ do
            y ← val[j]
            if max < y ≤ x then i ← j
                                max ← y
        return search(x, i)

Intuitively, why should we choose to execute the for loop
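Here is a minimal sketch in Python of a Sherwood search along these lines, with the probes chosen at random rather than at the fixed array positions used by algorithm B; the 0-based arrays and the end marker -1 are our own conventions, not the book's.

    import math, random

    def search_sherwood(val, ptr, head, x):
        """Find x in the sorted linked list stored in the arrays val/ptr,
        assuming x is present.  About sqrt(n) random probes pick a good
        starting point, then the list is followed sequentially, so the
        expected number of steps is in O(sqrt(n)) on every instance."""
        n = len(val)
        i, biggest = head, val[head]
        for _ in range(int(math.isqrt(n))):
            j = random.randrange(n)              # random probe into the array
            if biggest < val[j] <= x:
                i, biggest = j, val[j]           # best starting point found so far
        while val[i] < x:                        # finish with a sequential walk
            i = ptr[i]
        return i

    # the list 1, 2, 3, 5, 8, 13, 21 stored in scrambled array order
    val  = [2, 3, 13, 1, 5, 21, 8]
    ptr  = [1, 4, 5, 0, 6, -1, 2]
    head = 3
    print(search_sherwood(val, ptr, head, 13))   # index of the cell holding 13 -> 2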

Use the structure and the algorithms we have just seen to obtain a Sherwood sorting algorithm that is able to sort n elements in a worst-case expected time in 0 (n 312).4.10. Problem 8.4. A hash function is a function h : X --* { 1. 2..4 Sherwood Algorithms 245 * Problem 8.15. Thus we see that increasing the value of N reduces the average search time but increases the space occupied by the table. 8.14. where the meaning of w is given in Problem 8. k.9. where n is the number of distinct identifiers in the table. are legion. . Show that the when expected value of M1. Problem 8. if h (x) # h (y) for most of the pairs x * y that are likely to be found in the same program. .4. 2. besides the use of a table of lists as outlined here. Give an efficient Sherwood algorithm that takes into account the possibility that the key we are seeking may not be in the list and that the keys may not all be distinct.12. To do this..4. however. or simply hashing.) If we suppose that every identifier and every pointer occupies a constant amount of space.. Analyse your algorithm.4. give explicitly a permutation a and a rank k such that tB (n . and let N be a parameter chosen to obtain an efficient system. N }.. (The ratio (x may well be greater than 1. (Continuation of Problem 8. that w8 (n) E S2(n).. [Hint : Let M1. Find a link between this random variable and the average-case analysis of algorithm B. Problem 8.4. is about n /(l + 1) when 1 is a constant and about Problem 8. 8. that is.11) that wC (n) E 2' + O(1). Is this better than 0 (n log n) ? Justify your answer. we say that there is a collision between x and y . Let X be the set of possible identifiers in the language to be compiled. which is unavoidable from Problem 8. and independently with replacement from the set ( 1.4. give a Sher- wood algorithm C such that wC (n) E O Show more precisely Problem 8. N ] of lists in which T [i ] is the list of those identifiers x found in the program such that h (x) = i. Prove that m8 (n) E 0 (' ).Sec.4.4. When x * y but h (x) = h (y). . Problem 8.13..4. . The load factor of the table is the ratio a = n IN. Show. the table takes space in O(N+n) and the average length of the lists is a. Suggest a few.4. 0) E U(n). ... Such a function is a good choice if it efficiently disperses all the probable identifiers. uniformly. The hash table is an array T [I . be the random variable that corresponds to the minimum of I integers chosen randomly..11.5.4. Starting from the deterministic algorithm B. is used in just about every compiler to implement the symbol table. Other ways to handle collisions.4 Universal Hashing Hash coding. n }.

16.4. Let S c X be a set of n identifiers already in the table. Show that n calls on the symbol table can take a total time in f2(n2) in the worst case. What do you think of the "solution" that consists of ignoring the problem? If we are given an a priori upper bound on the number of identifiers a program may contain. does it suffice to choose N rather larger than this bound to ensure that the probability of a collision is negligible ? (Hint : solve Problem 8. Prove that the average number of collisions between x and the elements of S (that is.9 before answering. . In a sense they are paying the price for all other programs to compile quickly. The following problem generalizes this situation. Let H be a universa12 class of functions f r o m X to { 1.4.. .246 Probabilistic Algorithms Chap. N } . How many functions f :A . Let xEX \ S be a new identifier.17. the probability of a collision between x and y is therefore at most 1/N. This technique is very efficient provided that the function h disperses the identifiers properly.2. . . Unfortunately.3.4. now is the time to think some more about it!) The basic idea is to choose the hash function randomly at the beginning of each compilation. however. * Problem 8. Problem 8. there are f a r too many functions from X into { 1. Several efficient universal2 classes of functions are known. the average length of the list T [h (x)]) is less than or equal to the load factor a.) Problem 8.. it is inevitable that certain programs will cause a large number of collisions. N } for it to be reasonable to choose one at random. that #X is very much greater than N. If we choose a hash function h randomly and uniformly in H. 2. 2. y E A such that x #y.4. (If you have not yet solved Problem 8.19. without arbitrarily favouring some programs at the expense of others. Prove further that the probability that the number of colli- sions will be greater than t a is less than 1/t for all t >_ 1. These programs will compile slowly every time they are submitted. and let x and y be any two distinct identifiers. We content ourselves with mentioning just one.3. 8 Problem 8. .18. respectively ? This difficulty is solved by universal hashing. A Sherwood approach allows us to retain the efficiency of hashing on the average. If we suppose. A program that causes a large number of collisions during one compilation will therefore probably be luckier next time it is compiled... By definition a class H of functions from A to B is universal2 if # (h E H (h (x) = h (y)) < #H/#B for every X.B are there if the cardinalities of A and B are a and b.

.) I * Problem 8. . Find applications of universal hashing that have nothing to do with compilation.. 2. success)..4. 8. the Sherwood version of quicksort (Problem 8. However. The distinguishing characteristic of Las Vegas algorithms is that now and again they take the risk of making a random decision that renders it impossible to find a solution. (Remarks : In practice we take N to be a power of 2 so that the second mod operation can be executed efficiently.S for every instance x. .. a -11. where y is a return parameter used to receive the solution thus obtained whenever success is set to true. It may be able to solve in practice certain problems for which no efficient deterministic algorithm is known even on the average. A Las Vegas algorithm. a Sherwood algorithm is no faster on the average than the deterministic algorithm from which it arose. (x) = ((mx + n) mod p) mod N.21. even though the expected time required for each instance may be small and the probability of encountering an excessive execution time is negligible.20. . .. Contrast this to a Sherwood algorithm. In the latter case it suffices to resubmit the same instance to the same algorithm to have a second. l . Las Vegas algorithms usually have a return parameter success. which is set to true if a solution is obtained and false otherwise.n < p E is a universal2 class of functions. Let s (x) and e (x) be the expected times taken by the algorithm on instance x in the case of success and of failure. on the other hand. For example. Let p (x) be the probability of success of the algorithm each time that it is asked to solve the instance x. y .1) never takes a time in excess of 0 (n2) to sort n elements.Sec. Now consider the following algorithm.5 Las Vegas Algorithms 247 Let X be { 0.. independent chance of arriving at a solution.5 LAS VEGAS ALGORITHMS Although its behaviour is more uniform. N-1 ) by h. Define hm n :X -* (0. nor even with management of a symbol table. allows us to obtain an increase in efficiency. not less than N. Thus these algorithms react by either returning a correct solution or admitting that their random decisions have led to an impasse.. .4. The overall probability of success therefore increases with the amount of time we have available. respectively. For an algorithm to be correct.. where we are able to predict the maximum time needed to solve a given instance. 1.4. The typical call to solve instance x is LV (x . there is no upper bound on the time that may be required to obtain a solution. whatever happens. Better still is the existence of some constant 8 > 0 such that p (x) >. Let m and n be two integers.. 1 5 m < p and 0 <. sometimes for every instance. 8. Prove that H = { hm. and let p be a prime number * Problem 8. we require that p (x) > 0 for every instance x. It is also more efficient to carry out all the computations in a Galois field whose cardinality is a power of 2.

e (x) and p (x). taking an expected time e (x).) in the case of success and of failure. The resulting algorithm is not recursive.. This is not bad. 8 function obstinate (x) repeat LV (x . but the algorithm does not take into account one important fact : there is nothing systematic about the positions of the queens in most of the solutions.5. before starting all over again to solve the instance. s (x).057 nodes in the tree. . What is the probability that the algorithm obstinate will find a correct solution in a time not greater than t. This follows because the algorithm succeeds at the first attempt with probability p (x). Using this technique. . which still takes an expected time t (x). With probability 1. and e (x) if we want to minimize t (x). success) until success return y Let t (x) be the expected time taken by the algorithm obstinate to find an exact solution to the instance x. respectively. Problem 8. thus taking an expected time s (x).p (x)) (e (x) + t (x)). Neglecting the time taken by the control of the repeat loop. s (x). The algorithm ends either successfully if it manages to place all the queens on the board or in failure if there is no square in which the next queen can be added. we obtain the following recurrence : t (x) = p (x) s (x) + (1. it may be preferable to accept a smaller probability of success if this also decreases the time required to know that a failure has occurred. On the contrary. but that they are in fact the exact times taken by a call on LV (x .248 Probabilistic Algorithms Chap.5. For example. 8. we obtain the first solution after examining only 114 of the 2.1 The Eight Queens Problem Revisited The eight queens problem (Section 6. for any t >. Recall that the backtracking technique used involves systematically exploring the nodes of the implicit tree formed by the k-promising vectors. however. The recurrence is easily solved to yield t (x) = s (x) + 1 .s (x) ? Give your answer as a function of t.6. y .1) provides a nice example of this kind of algorithm. taking care.p (x) it first makes an unsuccessful attempt to solve the instance. This observation suggests a greedy Las Vegas algorithm that places queens randomly on successive rows.1.( )x) WW e (x) There is a compromise to be made between p (x). that the queens placed on the board do not threaten one another. the queens seem more to have been positioned haphazardly. Suppose that s (x) and e (x) are not just expected times..

When there is more than one position open for the (k + I )st Problem 8.971 time out of eight by proceeding in a completely random fashion ! The expected number of nodes explored if we repeat the algorithm until a success is finally obtained is given by the general formula s + (1-p )e/p = 55.(nb > 0) To analyse the efficiency of this algorithm.2. k ] is k-promising } nb .diag 135 u { j + k } { try [ 1 ..k } diag 135 F. k + 1 ] is (k + 1)-promising } k-k+1 until nb = 0 or k =8 success .5. The backtracking . The Las Vegas algorithm is too defeatist : as soon as it detects a failure it starts all over again from the beginning.0 k -0 repeat { try [1 .I to 8 do if i 0 col and i -k e diag45 and i +k 0 diag 135 then ( column i is available for the (k + I )st queen } nb -nb+1 if uniform (1 . 8] (a global array) contains a solution to the eight queens problem } col. diag45.. We can do better still.5 Las Vegas Algorithms 249 procedure QueensLV (var success) { if success = true at the end. the same probability of being chosen.diag45 u { j . the average number s of nodes that it explores in the case of success. the algorithm QueensLV chooses one at random without first counting the number nb of possibilities.colu{j} diag45 . Clearly s = 9 (counting the 0-promising empty vector). Show that each position has. less than half the number of nodes explored by the systematic backtracking technique. and e = 6. diag 135 f. nevertheless. nb) = 1 then ( maybe try column i } j4-i if nb > 0 then ( amongst all nb possibilities for the (k + 1)st queen. we need to determine its probability p of success. Using a computer we can calculate A solution is therefore obtained more than one p = 0.. and the average number e of nodes that it explores in the case of failure. then try [1 .. 8.927 . queen.0 for i E.Sec. it is column j that has been chosen (with probability 1 / nb) } try [k +I] '.1293 .j col .

93 -- We tried these different algorithms on a CYBER 835.false . diag 135.98 6. and then uses backtracking to try and add the remaining queens.00 39.Probabilistic Algorithms 250 Chap. A judicious combination of these two algorithms first places a number of queens on the board in a random way. The pure backtracking algorithm finds the first solution in 40 milliseconds. whereas an average of 10 milliseconds is all that is needed if the first two or three queens are placed at random.8750 0. the expected number s of nodes explored in the case of success. 8 algorithm. and the expected number t = s +(I -p)e/p of nodes explored if the algorithm is repeated until it eventually finds a solution.48 1 2 3 4 5 6 7 8 10. This happens.53 13.4931 0.63 22. The following table gives for each value of stopVegas the probability p of suc- cess.1 except that it has an extra parameter success and that it returns immediately after finding the first solution if there is one.01 35. stopVegas p s 0 1. makes a systematic search for a solution that we know has nothing systematic about it. on the other hand. success) else success *. respectively.29 6. if the first two queens are placed in positions 1 and 3. for instance.1624 0.33 9. which places all the queens in a random way. except that the last two lines are replaced by until nb = 0 or k = stopVegas if nb > 0 then backtrack (k .1293 0. The resulting algorithm is similar to QueensLV.31 9.2618 0.05 9. takes on the average 23 milliseconds to find a solution.00 39.6. An unfortunate random choice of the positions of the first few queens can make it impossible to add the others.10 8.00 e t 39. reconsidering the positions of the queens that were placed randomly. The latter looks like the algorithm Queens of Section 6.93 55.97 6.97 114.1293 114. but the greater is the probability of a failure.0000 0. The case stopVegas = 0 corresponds to using the deterministic algorithm directly. diag45.63 28. without. the smaller is the average time needed by the subsequent backtracking stage. The more queens we place randomly. This is a fraction more than .10 46.50 55.00 9. the expected number e of nodes explored in the case of failure.1357 0.67 15. however.0000 1.20 29. where 1 < stopVegas S 8 indicates how many queens are to be placed randomly before moving on to the backtracking phase.79 7. The original greedy algorithm QueensLV.92 53. col.

t stopVegas p s e 0 5 12 1.3] and [1. First. This is one reason why it is more efficient to place the first queen at random rather than to begin the systematic search immediately.4] are explored to no effect. 36 different solutions were found in about five and a half minutes. As for the greedy Las Vegas algorithm. For the eight queens problem. ** Problem 8. it wastes so much time making its pseudorandom choices of position that it requires essentially the same amount of time as the pure backtracking algorithm.000 times faster per solution than the deterministic algorithm. this can be understood intuitively in terms of the lack of regularity in the solutions (at least when the number of queens is not 4k + 2 for some integer k).5039 0. The deterministic backtracking algorithm took more than 2 hours to find a first solution. Even when the search starting from node [1. try to solve the problem systematically. this time placing the first five queens randomly. If we want a solution to the general n queens problem. Thus the probabilistic algorithm turned out to be almost 1.21 and [1. An empirical study of the twenty queens problem was also carried out using an Apple II personal computer. is that a solution can be obtained more rapidly on the average if several queens are positioned randomly before embarking on the backtracking phase. Here are the values of p. First the trees below the 2-promising nodes [1. s.5 Las Vegas Algorithms 251 half the time taken by the backtracking algorithm because we must also take into account the time required to make the necessary pseudorandom choices of position.00 33. 8. the same corner is a better than average starting point for the problems with five or twelve queens. Problem 8. If you are still not convinced of the value of this technique.5. Using the probabilistic approach and placing the first ten queens at random. it is obviously silly to analyse exhaustively all the possibilities so as to discover the optimal .00 80.88 13. What is significant.5.39 222. a systematic search that begins with the first queen in the fifth column is astonishingly quick. Once again. a systematic search for a solution beginning with the first queen in the first column takes quite some time.5.5.5] begins.Sec. however.11 On the CYBER 835 the Las Vegas algorithm that places the first five queens randomly before starting backtracking requires only 37 milliseconds on the average to find a solution.3. we suggest you try to solve the twelve queens problem without using a computer.71.23 10. and t for a few values of stopVegas in the case of the twelve queens problem.0000 0. For instance.0465 262.00 - 47. we waste time with [1. and then try again.4.20 262. On the other hand. whereas the pure backtracking algorithm takes 125 milliseconds. (Try it!) This unlucky characteristic of the upper left-hand corner is nothing more than a meaningless accident. e.

that no quadratic residue has more than two distinct square roots.2 Square Roots Modulo p Let p be an odd prime. Combining this with Problem 8. If no solution exists. Prove or disprove : the n queens problem can be solved for every n >.x <.5. Given an odd prime p and a quadratic residue x .4. In fact. An integer x is a quadratic residue modulo p if 1 <. Prove further that x is a quadratic residue modulo p if and only if x(P.p .p . but not necessarily optimal.5.2py + y 2 = y 2 (mod p). 63 is a square root of 55 modulo 103. (Hint: one direction follows immediately from Fermat's theorem : x n = 1 (mod p ).7. This is the case if and only if there exists at least one solution. value of stopVegas to be determined rapidly as a function of n.5.y <. Prove that p . and then to apply the corresponding Las Vegas algorithm.) Problem 8.8 to calculate x(p . Problem 8.y # y and that 1 <_ p . can you find a constant S > 0 such that the probability of success of the Las Vegas algorithm to solve the n queens problem is at least S for every n ? 8.1. (Hint : assuming that a 2 ° h 2 (mod p). Prove that x(P-1)/2 =± 1 (mod p) for every integer 1<. Prove.1.5.8. For instance.) The preceding problem suggests an efficient algorithm for testing whether x is a quadratic residue modulo p : it suffices to use the fast exponentiation algorithm of Section 4.5.y <. Problem 8. Conclude from the preceding results that exactly half the integers between 1 and p -1 are quadratic residues modulo p. Such a y is a square root o x modulo p provided 1 :. the number of queens.6.Probabilistic Algorithms 252 Chap.1)/2 = + 1 (mod p). ** Problem 8. on the other hand.1)/2 mod p. 8 value of stopVegas.b 2. Any quadratic residue has at least two distinct square roots since (p _Y)2 = p 2 . the general algorithm obtained using the previous problem (first determine stopVegas as a function of n.x < p -1 and if there exists an integer y such that x = y2 (mod p). * Problem 8. consider a 2 .9. and then try to place the queens on the board) can only be considered to be a Las Vegas algorithm if its probability of success is strictly positive for every n. the other direction requires some 0 knowledge of group theory.5.p -l and every odd prime p. (We needed more than 50 minutes computation on the CYBER to establish that stopVegas = 5 is the optimal choice for the twelve queens problem !) Find an analytic method that enables a good. An integer z is a quadratic nonresidue modulo p if 1 < z <.I and z is not a quadratic residue modulo p.4.p . determining the optimal value of stopVegas takes longer than a straightforward search for a solution using backtracking. the obstinate proba- bilistic algorithm will loop forever without realizing what is happening.5. Technically.5.

9). . Let p = 53 = 1 (mod 4) and x = 7. culation (2+I7)26 = 0+41J (mod 53) in detail.1 (mod 4).11.and c + d are integers between 0 and p .1)/2. however.5. we find 2d/ . that c +d 1 _. Consequently.) (1 + 7)2 = (1 + -0) (1 + 17) =8+ (1+V)3 =(1+1)(8+2J) =22+10.c < 52.5 Las Vegas Algorithms 253 modulo p. A preliminary computation shows that x is a quadratic residue modulo p since 726 = 1 (mod 53). where a. The symbolic exponentiation (a +b-6.1. What happens if we calculate symbolically (a + / )26 == c +dI (mod 53) in a case when one of (a +') mod 53 and (a -i7) mod 53 is a quadratic residue modulo p and the other is not? for instance.5. does there exist an efficient algorithm for calculating the two square roots of x modulo p ? The problem is easy when p . and the symbolic calculation that we just carried out is valid regardless of which of them we choose to call '. Suppose that p .10.3 (mod 4). Let us calculate symbolically (1 mod 53.8.1 (mod 53) and c =. we obtain 2c = 0 (mod 53) and hence c = 0 since 0 <. But 7 has two square roots modulo p.)" can be calculated efficiently by adapting the algorithms of Section 4.2 (mod 53). Prove that ±x(p+ 1)/4 mod p are the two square roots of x modulo p. Suppose. we conclude that (1+') mod 53 is a quadratic nonresidue modulo 53 (Problem 8.F7_ =(22+101)(22+101)=18+16I (1+J)12=(18+161)(18+l61)=49+461 (1+x)13=(1+x)(49+46-.3 (mod 4) and let x be a quadratic residue Problem 8. an efficient Las Vegas algorithm to solve this problem when p = 1 (mod 4).1 as a guide. modulo p.1 (mod 53). c -d Using Example 8. Let us decide arbitrarily to denote by the smaller of the two square roots of x. There exists. Since 26 = (p .F7-) =0+421 (1+ 41 )26 = -1 (mod 53). Calculate 5526 mod 103 and verify that its square modulo 103 is indeed 55. c. and d product is ((ac +bdx) mod p) + ((ad + bc) mod p Note the similarity to a product of complex numbers. Subtracting them. Adding these two equations.5. but no efficient deterministic algorithm is known to solve this problem when p =.5. 8. (1-1) mod 53 is also a quadratic nonresidue modulo 53. it is possible to carry out the symbolic multiplication of a +b-6.1. (All the following calculations are modulo 53. carry out the symbolic calProblem 8. b. Example 8. Even if the value of is unknown.I (mod 53).5. This modulo p. and hence d / .Sec.

Then we need only take a = b' and b = a' -b' Lu l v j.Probabilistic Algorithms 254 Chap. The other square root is 53 . Your algorithm should not calculate d before starting to work on a and b. Give an efficient iterative algorithm for calculating d.1 (mod 53). [Hint : Suppose without loss of generality that u >.a p -1. var y. This suggests the following Las Vegas algorithm for calculating square roots. and let d be their greatest common divisor. and let x be a quadratic residue * Problem 8.y < p -1 and dy = l (mod p ) It remains to determine the probability of success of this algorithm. and b from u and v.5.5.52 and 41y . and x is a quadratic residue modulo p } a F. p -1) if a 2 = x (modp) { very unlikely } then success f. we find y = 22 because 41 x 22 . iii.12. Prove that there exist integers a and b such that au + by = d.p . 8 To obtain a square root of 7. now let a' and b' be such that a'v + b'w = d. prove that there exists a unique y such that 1 <.7.0 <d p-1 and (a +')(r-1)12-c+dL (modp) if d = 0 then success F false else { c = 0) success f.] ii. the preceding problem shows that we need only find the unique integer y such that 1 <.true compute y such that 1 <. Prove that .22 = 31. Let u and v be two positive integers. p. First show that d is also the greatest common divisor of v and w.. * Problem 8.4).uniform (I. i. var success) { may find some y such that y 2 = x (modp ) assuming p is a prime. This can be done efficiently using a modification of Euclid's algorithm for calculating the greatest common divisor (Section 1.1. modulo p.y <.5. procedure rootLV (x . I (mod p). p . This is indeed a square root of 7 modulo 53 since 222 = 7 (mod 53).) By mathematical induction. 1 <. gives the key to ' if (a2-x) mod p is not a quadratic residue modulo P.11).true y Fa else compute c and d such that 0 <-c p-1. If p is prime and 1 a <.1 (mod 4). the proof is trivial (a = 0 and b = 1). Give an efficient algorithm for calculating y In our example (following Problem 8. (This is the heart of Euclid's algorithm.v. Otherwise.p -1 and ay given p and a.y <. An integer a. Let p = 1 (mod 4) be prime. If v = d. let w = u mod v.1 (mod 53).13. a.

i. The Las Vegas algorithm finds a square root of x if and only if it randomly chooses an a that gives the key to √x ; and
ii. Exactly (p + 3)/2 of the p - 1 possible random choices for a give the key to √x.
[Hint : Consider the function
f : { 1, 2, ..., p-1 } \ { √x, p - √x } → { 2, 3, ..., p-2 }
defined by the equation (a - √x) f (a) ≡ a + √x (mod p). Prove that this function is one-to-one and that f (a) is a quadratic residue modulo p if and only if a does not give the key to √x.]

This shows that the Las Vegas algorithm succeeds with probability somewhat
greater than one half, so that on the average it suffices to call it twice to obtain a
square root of x. In view of the high proportion of integers that give a key to √x, it is
curious that no known efficient deterministic algorithm is capable of finding even one
of them with certainty.
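The following is a small sketch (our own illustration, not the book's procedure rootLV itself) of the Las Vegas idea just analysed : raise a + √x symbolically to the power (p-1)/2, keeping only the coefficients c and d modulo p, and succeed whenever d ≠ 0.

    # Sketch of the Las Vegas square-root idea, assuming p is prime,
    # p ≡ 1 (mod 4), and x is a quadratic residue modulo p.
    import random

    def root_lv(x, p):
        """One attempt; returns (success, y) with y*y ≡ x (mod p) on success."""
        a = random.randint(1, p - 1)
        if (a * a - x) % p == 0:          # very unlikely: a itself is a root
            return True, a
        # Compute (a + sqrt(x))^((p-1)/2) symbolically as c + d*sqrt(x) mod p.
        c, d = 1, 0                       # running result, starts at 1
        bc, bd = a % p, 1                 # running base, starts at a + sqrt(x)
        e = (p - 1) // 2
        while e > 0:
            if e & 1:
                c, d = (c * bc + d * bd * x) % p, (c * bd + d * bc) % p
            bc, bd = (bc * bc + bd * bd * x) % p, (2 * bc * bd) % p
            e >>= 1
        if d == 0:                        # a + sqrt(x) and a - sqrt(x) have the same character
            return False, None
        # Here c = 0, so d*sqrt(x) ≡ ±1 : the inverse of d modulo p is a square root of x.
        return True, pow(d, -1, p)        # modular inverse (Python 3.8+)

    def sqrt_mod(x, p):                   # obstinate wrapper: about two attempts on average
        while True:
            ok, y = root_lv(x, p)
            if ok:
                return y

On the running example, sqrt_mod(7, 53) returns 22 or 31, the two square roots of 7 modulo 53.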

Problem 8.5.14. The previous problem suggests a modification to the algorithm rootLV : only carry out the symbolic calculation of (a + √x)^((p-1)/2) if (a² - x) mod p is a quadratic nonresidue. This allows us to detect a failure more rapidly, but it takes longer in the case of a success. Give the modified algorithm explicitly. Is it to be preferred to the original algorithm ? Justify your answer.
* Problem 8.5.15. The following algorithm increases the probability of success if p ≡ 1 (mod 8).

procedure rootLV2(x, p, var y, var success)
{ assume that p is a prime and p ≡ 1 (mod 4) }
a ← uniform(1 .. p-1)
if a² ≡ -x (mod p) { very unlikely and unfortunate }
  then success ← false
  else let odd t and k ≥ 2 be such that p = 2^k t + 1
       compute c and d such that 0 ≤ c ≤ p-1, 0 ≤ d ≤ p-1
           and (a + √(-x))^t ≡ c + d √(-x) (mod p)
       if c = 0 or d = 0
         then success ← false
         else while c² ≢ d²x (mod p) do
                b ← (c² - d²x) mod p
                d ← 2cd mod p
                c ← b
              compute y such that 1 ≤ y ≤ p-1 and yd ≡ 1 (mod p)
              y ← cy mod p
              success ← true

Prove that the probability of failure of this algorithm is exactly (1/2)^(k-1) and that the while loop is executed at most k - 2 times, where k is specified in the algorithm.

Problem 8.5.16. An even more elementary problem for which no efficient deterministic algorithm is known is to find a quadratic nonresidue modulo p, where p ≡ 1 (mod 4) is a prime.
i. Give an efficient Las Vegas algorithm to solve this problem.
ii. Show that the problem is not more difficult than the problem of finding an efficient deterministic algorithm to calculate a square root. To do this, suppose that there exists a deterministic algorithm root2(x, p) that is able to calculate efficiently √x mod p, where p ≡ 1 (mod 4) is prime and x is a quadratic residue modulo p. Show that it suffices to call this algorithm less than ⌊lg p⌋ times to be certain of obtaining, in a way that is both deterministic and efficient, a quadratic nonresidue modulo p. (Hint : Let k be the largest integer such that 2^k divides p - 1 exactly. Consider the sequence x_1 = p - 1, x_i = √(x_{i-1}) mod p for 2 ≤ i ≤ k. Prove that x_i is a quadratic residue modulo p for 1 ≤ i < k, but that x_k is not.)
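For part (i), one Las Vegas attempt can be as simple as the following sketch (ours, not the requested solution in full), which relies on Euler's criterion from Problem 8.5.8 and succeeds with probability close to one half.

    # One Las Vegas attempt at finding a quadratic nonresidue modulo a prime p.
    import random

    def nonresidue_lv(p):
        z = random.randint(2, p - 2)
        if pow(z, (p - 1) // 2, p) == p - 1:   # z^((p-1)/2) ≡ -1 (mod p)
            return True, z                      # success: z is a nonresidue
        return False, None                      # failure: resubmit the instance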

** Problem 8.5.17.
The converse of Problem 8.5.16. Give an efficient deterministic algorithm rootDET (x , p , z) to calculate a square root of x modulo p, provided

p is an odd prime, x is a quadratic residue modulo p, and z is an arbitrary quadratic
nonresidue modulo p.
The two preceding problems show the computational equivalence between the
efficient deterministic calculation of square roots modulo p and the efficient deterministic discovery of a quadratic nonresidue modulo p. This is an example of the
technique called reduction, which we study in Chapter 10.

8.5.3 Factorizing Integers
Let n be an integer greater than 1. The problem of factorizing n consists of finding the unique decomposition n = p1^m1 p2^m2 ··· pk^mk such that m1, m2, ..., mk are positive integers and p1 < p2 < ··· < pk are prime numbers. If n is composite, a nontrivial factor is an integer x, 1 < x < n, that divides n exactly. Given composite n, the problem of splitting n consists of finding some nontrivial factor of n.

Problem 8.5.18.
Suppose you have available an algorithm prime (n), which
tests whether or not n is prime, and an algorithm split(n), which finds a nontrivial
factor of n provided n is composite. Using these two algorithms as primitives, give an
algorithm to factorize any integer.
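One way such a factorization routine could be assembled (a sketch of ours, assuming the two primitives behave as described) :

    # Sketch for Problem 8.5.18: factorization from the primitives is_prime and
    # split.  split(n) is assumed to return a nontrivial factor of a composite n.
    def factorize(n, is_prime, split):
        """Return the prime factorization of n as a nondecreasing list of primes."""
        if n == 1:
            return []
        if is_prime(n):
            return [n]
        d = split(n)                                  # nontrivial factor: 1 < d < n
        return sorted(factorize(d, is_prime, split) +
                      factorize(n // d, is_prime, split))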

Section 8.6.2 concerns an efficient Monte Carlo algorithm for determining primality. Thus the preceding problem shows that the problem of factorization reduces to
the problem of splitting. Here is the naive algorithm for the latter problem.

function split (n)
{ finds the smallest nontrivial factor of n if n is composite
or returns 1 if n is prime }
for i ← 2 to ⌊√n⌋ do
  if (n mod i) = 0 then return i
return 1
Problem 8.5.19. Why is it sufficient to loop no further than √n ?

The preceding algorithm takes a time in Ω(√n) in the worst case to split n. It is therefore of no practical use even on medium-sized integers : it could take more than 3 million years in the worst case to split a number with forty or so decimal digits, counting just 1 microsecond for each trip round the loop. No known algorithm, whether deterministic or probabilistic, can split n in a time in O(p(m)) in the worst case, where p is a polynomial and m = ⌈log(1+n)⌉ is the size of n. Notice that √n ≈ 10^(m/2), which is not a polynomial in m. Dixon's probabilistic algorithm is nevertheless able to split n in a time in O(2^(O(√(m lg m)))).

Problem 8.5.20. Prove that O(m^k) ⊂ O(2^(b√(m lg m))) ⊂ O(10^(m/2)), whatever the values of the positive constants k and b.

The notion of quadratic residue modulo a prime number (Section 8.5.2) generalizes to composite numbers. Let n be any positive integer. An integer x, 1 ≤ x ≤ n - 1, is a quadratic residue modulo n if it is relatively prime to n (they have no nontrivial common factor) and if there exists an integer y, 1 ≤ y ≤ n - 1, such that x ≡ y² (mod n). Such a y is a square root of x modulo n. We saw that a quadratic residue modulo p has exactly two distinct square roots modulo p when p is prime. This is no longer true modulo n if n has at least two distinct odd prime factors. For instance, 8² ≡ 13² ≡ 22² ≡ 27² ≡ 29 (mod 35).
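A one-line check (ours) of this last assertion :

    # The four square roots of 29 modulo 35 mentioned above.
    print([y for y in range(1, 35) if (y * y) % 35 == 29])   # [8, 13, 22, 27]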

* Problem 8.5.21. Prove that if n = pq, where p and q are distinct odd primes, then each quadratic residue modulo n has exactly four square roots. Prove further that exactly one quarter of the integers x that are relatively prime to n and such that 1 ≤ x ≤ n - 1 are quadratic residues modulo n.

Section 8.5.2 gave efficient algorithms for testing whether x is a quadratic residue modulo p, and if so for finding its square roots. These two problems can also be solved efficiently modulo a composite number n provided the factorization of n is given. If the factorization of n is not given, no efficient algorithm is known for either of these problems. The essential step in Dixon's factorization algorithm is to find two integers a and b relatively prime to n such that a² ≡ b² (mod n) but a ≢ ±b (mod n). This implies that a² - b² = (a - b)(a + b) ≡ 0 (mod n). Given that n is a divisor neither of a + b nor of a - b, it follows that some nontrivial factor x of n must divide a + b while n/x divides a - b. The greatest common divisor of n and a + b is thus a

nontrivial factor of n. In the previous example, a = 8, b = 13, and n = 35, and the
greatest common divisor of a +b = 21 and n = 35 is x = 7, a nontrivial factor of 35.
Here is an outline of Dixon's algorithm.

procedure Dixon (n, var x, var success)
{ tries to find some nontrivial factor x of composite number n }
if n is even then x ← 2, success ← true
else for i ← 2 to ⌊log₃ n⌋ do
       if n^(1/i) is an integer then x ← n^(1/i)
                                     success ← true
                                     return
     { since n is assumed composite, we now know that it has
       at least two distinct odd prime factors }
     a, b ← two integers such that a² ≡ b² (mod n)
     if a ≡ ±b (mod n) then success ← false
                       else x ← gcd(a + b, n) { using Euclid's algorithm }
                            success ← true

So how can we find a and b such that a² ≡ b² (mod n)? Let k be an integer to be specified later. An integer is k-smooth if all its prime factors are among the first k prime numbers. For instance, 120 = 2³ × 3 × 5 is 3-smooth, but 35 = 5 × 7 is not. When k is small, k-smooth integers can be factorized efficiently by an adaptation of the naive algorithm split (n) given earlier. In its first phase, Dixon's algorithm chooses integers x randomly between 1 and n - 1. A nontrivial factor of n is already found if by a lucky fluke x is not relatively prime to n. Otherwise, let y = x² mod n. If y is k-smooth, both x and the factorization of y are kept in a table. The process is repeated until we have k + 1 different integers for which we know the factorization of their squares modulo n.
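To make this first phase concrete, here is a small sketch (ours; first_primes is assumed to hold the first k prime numbers) of one trial : square a random x modulo n, try to factorize the result over the admissible primes, and keep the exponent vector when the attempt succeeds.

    # Sketch of one trial of the first phase of Dixon's algorithm.
    import math, random

    def dixon_trial(n, first_primes):
        x = random.randint(1, n - 1)
        g = math.gcd(x, n)
        if g != 1:
            return ("factor", g)              # lucky fluke: a nontrivial factor
        y = (x * x) % n
        exponents = []
        for p in first_primes:                # attempt to show y is k-smooth
            e = 0
            while y % p == 0:
                y //= p
                e += 1
            exponents.append(e)
        if y == 1:                            # y was k-smooth: keep the relation
            return ("relation", x, exponents)
        return ("failure", None)              # discard this x and try another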

Example 8.5.2. Let n = 2,537 and k = 7. We are thus concerned only with the primes 2, 3, 5, 7, 11, 13, and 17. A first integer x = 1,769 is chosen randomly. We calculate its square modulo n : y = 1,240. An attempt to factorize 1,240 = 2³ × 5 × 31 fails since 31 is not divisible by any of the admissible primes. A second attempt with x = 2,455 is more successful : its square modulo n is y = 1,650 = 2 × 3 × 5² × 11. Continuing thus, we obtain

x1 = 2,455    y1 = 1,650 = 2 × 3 × 5² × 11
x2 =   970    y2 = 2,210 = 2 × 5 × 13 × 17
x3 = 1,105    y3 =   728 = 2³ × 7 × 13
x4 = 1,458    y4 = 2,295 = 3³ × 5 × 17
x5 =   216    y5 =   990 = 2 × 3² × 5 × 11
x6 =    80    y6 = 1,326 = 2 × 3 × 13 × 17
x7 = 1,844    y7 =   756 = 2² × 3³ × 7
x8 =   433    y8 = 2,288 = 2⁴ × 11 × 13

Problem 8.5.22. Given that there are 512 integers x between 1 and 2,536 such that x² mod 2,537 is 7-smooth, what is the average number of trials needed to obtain eight successes like those in Example 8.5.2 ?

The second phase of Dixon's algorithm finds a nonempty subset of the k + 1
equations such that the product of the corresponding factorizations includes each of the
k admissible prime numbers to an even power (including zero).

Example 8.5.3. There are seven possible ways of doing this in Example 8.5.2, including

y1 y2 y4 y8 = 2⁶ × 3⁴ × 5⁴ × 7⁰ × 11² × 13² × 17²
y1 y3 y4 y5 y6 y7 = 2⁸ × 3¹⁰ × 5⁴ × 7² × 11² × 13² × 17² .

Problem 8.5.23. Find the other five possibilities.

Problem 8.5.24. Why is there always at least one solution ? Give an efficient
algorithm for finding one. [Hint : Form a (k + 1) x k binary matrix containing the parities of the exponents. The rows of this matrix cannot be independent (in arithmetic
modulo 2) because there are more rows than columns. In Example 8.5.3, the first
dependence corresponds to
(1,1,0,0,1,0,0) + (1,0,1,0,0,1,1) + (0,1,1,0,0,0,1) + (0,0,0,0,1,1,0)
≡ (0,0,0,0,0,0,0) (mod 2) .

Use Gauss-Jordan elimination to find a linear dependence between the rows.]
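One way the elimination suggested in the hint could be coded (a sketch of ours) : keep each parity vector together with the set of original rows that were combined to produce it, and report that set as soon as a vector reduces to zero.

    # Sketch of the hint for Problem 8.5.24: find a nonempty subset of the rows
    # of a 0/1 matrix whose sum is zero modulo 2.
    def find_dependence(rows):
        width = len(rows[0])
        basis = {}                          # pivot column -> (reduced row, contributing indices)
        for i, r in enumerate(rows):
            vec, subset = list(r), {i}
            for col in range(width):
                if vec[col] == 1 and col in basis:
                    b, s = basis[col]
                    vec = [a ^ c for a, c in zip(vec, b)]
                    subset ^= s             # symmetric difference of index sets
            if any(vec):
                basis[vec.index(1)] = (vec, subset)
            else:
                return subset               # these original rows sum to zero mod 2
        return None                         # cannot happen when len(rows) > width

Applied to the eight parity vectors of Example 8.5.2, it returns one valid dependent subset, although not necessarily the particular one displayed in Example 8.5.3.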

This gives us two integers a and b such that a² ≡ b² (mod n). The integer a is obtained by multiplying the appropriate x_i and the integer b by halving the powers of the primes in the product of the y_i. If a ≢ ±b (mod n), it only remains to calculate the greatest common divisor of a + b and n to obtain a nontrivial factor. This occurs with probability at least one half.

Example 8.5.4. The first solution of Example 8.5.3 gives
a = x1 x2 x4 x8 mod n = 2,455 × 970 × 1,458 × 433 mod 2,537 = 1,127 and
b = 2³ × 3² × 5² × 11 × 13 × 17 mod 2,537 = 2,012 ≢ ±a (mod n).
The greatest common divisor of a + b = 3,139 and n = 2,537 is 43, a nontrivial factor of n. On the other hand, the second solution gives
a = x1 x3 x4 x5 x6 x7 mod n = 564 and
b = 2⁴ × 3⁵ × 5² × 7 × 11 × 13 × 17 mod 2,537 = 1,973 ≡ -a (mod n),
which does not reveal a factor.
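The successful branch of this example is easy to check mechanically (a tiny illustration of ours) :

    # Checking the first solution of Example 8.5.4.
    import math
    n, a, b = 2537, 1127, 2012
    assert (a * a - b * b) % n == 0        # a² ≡ b² (mod n), while a ≢ ±b (mod n)
    print(math.gcd(a + b, n))              # prints 43, a nontrivial factor of 2,537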

Problem 8.5.25. Why is there at least one chance in two that a ≢ ±b (mod n)? In the case when a ≡ ±b (mod n), can we do better than simply starting all over again ?

Probabilistic Algorithms

260

Chap. 8

It remains to be seen how we choose the best value for the parameter k. The larger this parameter, the higher the probability that x² mod n will be k-smooth when x is chosen randomly. On the other hand, the smaller this parameter, the faster we can carry out a test of k-smoothness and factorize the k-smooth values y_i, and the fewer such values we need to be sure of having a linear dependence. Set L = e^√(ln n ln ln n) and let b ∈ ℝ⁺. It can be shown that if k = L^b, there are about L^(1/(2b)) failures for every success when we try to factorize x² mod n. Since each unsuccessful attempt requires k divisions and since it takes k + 1 successes to end the first phase, the latter takes an average time that is approximately in O(L^(2b + 1/(2b))), which is minimized by b = ½. The second phase takes a time in O(k³) = O(L^(3/2)) by Problem 8.5.24 (it is possible to do better than this), which is negligible when compared to the first phase. The third phase can also be neglected. Thus, if we take k = √L, Dixon's algorithm splits n with probability at least one half in an approximate expected time in O(L²) = O(e^(2√(ln n ln ln n))) and in a space in O(L).
Several improvements make the algorithm more practical. For example, the probability that y will be k-smooth is improved if x is chosen near ⌈√n⌉, rather than being chosen randomly between 1 and n - 1. A generalization of this approach, known as the continued fraction algorithm, has been used successfully. Unlike Dixon's algorithm, however, its rigorous analysis is unknown. It is therefore more properly called a heuristic. Another heuristic, the quadratic sieve, operates in a time in
O(L^(9/8)) and space in O(LD). In practice, we would never implement Dixon's algorithm because the heuristics perform so much better. More recently, H. W. Lenstra Jr.
has proposed a factorization algorithm based on the theory of elliptic curves.

Problem 8.5.26. Let n = 10⁴⁰ and L = e^√(ln n ln ln n). Compare L^(9/8), L², and √n microseconds. Repeat the problem with n = 10⁵⁰.
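For a quick numerical feeling (an illustration of ours), these quantities can be evaluated directly :

    # Rough evaluation of the quantities compared in Problem 8.5.26.
    import math

    def compare(n):
        L = math.exp(math.sqrt(math.log(n) * math.log(math.log(n))))
        year = 365.25 * 24 * 3600 * 1e6            # microseconds in a year
        for label, t in (("L^(9/8)", L ** 1.125), ("L^2", L ** 2), ("sqrt(n)", math.sqrt(n))):
            print(label, "is about", t / year, "years")

    compare(1e40)
    compare(1e50)
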
8.5.4 Choosing a Leader

A number of identical synchronous processors are linked into a network in the shape of

a ring, as in Figure 8.5.1. Each processor can communicate directly with its two
immediate neighbours. Each processor starts in the same state with the same program
and the same data in its memory. Such a network is of little use so long as all the processors do exactly the same thing at exactly the same time, for in this case a single
processor would suffice (unless such duplication is intended to catch erratic behaviour
from faulty processors in a sensitive real-time application). We seek a protocol that

allows the network to choose a leader in such a way that all the processors are in
agreement on its identity. The processor that is elected leader can thereafter break the
symmetry of the network in whatever way it pleases by giving different tasks to different processors.
No deterministic algorithm can solve this problem, no matter how much time is
available. Whatever happens, the processors continue to do the same thing at the same
time. If one of them decides that it wants to be the leader, for example, then so do all

Figure 8.5.1. A ring of identical processors.
the others simultaneously. We can compare the situation to the deadlock that arises
when two people whose degree of courtesy is exactly equal try to pass simultaneously
through the same narrow door. However, if each processor knows in advance how
many others are connected in the network, there exists a Las Vegas algorithm that is
able to solve this problem in linear expected time. The symmetry can be broken on the
condition that the random generators used by the processors are independent. If the
generators are in fact pseudorandom and not genuinely random, and if each processor
starts from the same seed, then the technique will not work.

Suppose there are n processors in the ring. During phase zero, each processor initializes its local variable m to the known value n, and its Boolean indicator active to the value true. During phase k, k > 0, each active processor chooses an integer randomly, uniformly, and independently between 1 and m. Those processors that chose 1 inform the others by sending a one-bit message round the ring. (The inactive processors continue, nevertheless, to pass on messages.) After n - 1 clock pulses, each processor knows the number l of processors that chose 1. There are three possibilities. If l = 0, phase k produces no change in the situation. If l > 1, only those processors that chose 1 remain active, and they set their local variable m to the value l. In either case phase k + 1 is now begun. The protocol ends when l = 1 with the election of the single processor that just chose 1.
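A small centralized simulation of this protocol (our own sketch; it models the random choices only, not the ring communication) :

    # Simulation of the leader election protocol described above.
    import random

    def elect_leader(n):
        active = list(range(n))          # processor identities, for simulation only
        m = n
        phases = 0
        while True:
            phases += 1
            chose_one = [p for p in active if random.randint(1, m) == 1]
            l = len(chose_one)
            if l == 1:
                return chose_one[0], phases      # a single leader has been elected
            if l > 1:
                active, m = chose_one, l         # only these processors stay active
            # if l == 0 the phase changes nothing and the loop simply repeats

    leader, phases = elect_leader(25)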
This protocol is classed as a Las Vegas algorithm, despite the fact that it never
ends by admitting a failure, because there is no upper bound on the time required for it

to succeed. However, it never gives an incorrect solution: it can neither end after
electing more than one leader nor after electing none.

Let l(n) be the expected number of phases needed to choose a leader among n processors using this algorithm (not counting phase zero, the initialization). Let p(n, j) = C(n, j) n^(-j) (1 - 1/n)^(n-j) be the probability that j processors out of n randomly choose the value 1 during the first stage. With probability p(n, 1) only a single phase is needed ; with probability p(n, 0) we have to start all over again ; and with

the preceding problems show that the choice of a leader also takes expected linear time. * Problem 8. Since each phase thus takes linear time. 30030) = 1 then return true else return false ( using Euclid's algorithm } .1. n>_2.0)l(n)+J:p(n.5.n)) i=2 * Problem 8. that it is not a Monte Carlo algorithm by exhibiting an instance on which the algorithm systematically gives a wrong answer.28. Nevertheless. given an arbitrary parameter p > 0. 2 <_ j <_ n. ** Problem 8.27. A Monte Carlo algorithm occasionally makes a mistake. l (j) subsequent phases will be necessary on the average. only failing now and again in some special cases. Problem 8. j ).Probabilistic Algorithms 262 Chap.5. Consequently. j=2 A little manipulation gives n-I I(n)=(I+ F.6 MONTE CARLO ALGORITHMS There exist problems for which no efficient algorithm is known that is able to obtain a correct solution every time.j)l(j))/(1-p(n. whether it be deterministic or probabilistic. n l(n)=1+p(n. This is not the same as saying that it works correctly on a majority of instances.) Show. Prove that the following algorithm decides correctly whether or not an integer is prime in more than 80% of the cases. Show that l (n) < e = 2.5. 8. give a Monte Carlo algorithm that is able to determine n exactly with a probability of error less than p whatever the number of processors.718 for every n >_ 2. Prove that no protocol (not even a Las Vegas protocol) can solve the problem of choosing a leader in a ring of n identical processors if they are not given the value n . n-)oo 11 One phase of this protocol consists of single bit messages passed round the ring. 8 probability p (n .6. (It performs much better on small integers. function wrong (n) if gcd(n .442.p(n.0)-p(n.29. on the other hand. but it finds a correct solution with high probability whatever the instance considered. No warning is usually given when the algorithm makes a mistake. Show that lim I (n) < 2.j)l(j).

Let MC (x) be a Monte Carlo algorithm that is consistent and (z + e)-correct. This allows us to amplify the advantage of the algorithm. It suffices to call MC (x) at least [cElg 1/81 times and to return the most frequent answer (ties can be broken arbi- trarily) to obtain an algorithm that is consistent and (1. 8.6 263 What constant should be used instead of 30. . Let MC(x) be a consistent. The repetitive algorithm finds the correct answer if the latter is that the ('+F-)-correct algorithm is called. To prove the preceding claim. we need only call it several times and choose the most frequent answer.4c2 ). obtained at least m times. The time taken by such algorithms is then expressed as a function of both the size of the instance and the reciprocal of the acceptable error probability.cE lg 1/8 be the number of times Let m = Ln / 2j+ 1. u . Let x be some instance to be solved. though 75%-correct. Problem 8. Show on the other hand that it might not even be 71%-correct if MC. The algorithm is consistent if it never gives two different correct solutions to the same instance. Consider the following algorithm.2. function MC 3(x) MC (x) .MC (x) t if t = u or t = v then return t return v Prove that this algorithm is consistent and 27/32-correct. were not consistent. More generally. A Monte Carlo algorithm is p-correct if it returns a correct solution with probability not less than p.030 to bring the proportion of successes above 85% even if very large integers are considered? Let p be a real number such that 1/2 < p < 1. so as to obtain a new algorithm whose error probability is as small as we choose.Monte Carlo Algorithms Sec.6. = -2 / Ig (1. Let c. The advantage of such an algorithm is p -'/2.2. whatever the instance considered. Its error probability is therefore at most m-1 E Prob[ i correct answers in n tries ] i=0 < M-1 1: i=0 n []piqnl i z+e. let a and S be two positive real numbers such that e + S < -1. v . This increases our confidence in the result in a way similar to the world series calculation of Section 5.S)-correct. hence 84%-correct. To increase the probability of success of a consistent. p-correct algorithm. let n >.MC (x) . Some Monte Carlo algorithms take as a parameter not only the instance to be solved but also an upper bound on the probability of error that is acceptable. p = and q = I -p = z -e. however small it may be. 75%-correct Monte Carlo algo- rithm.

and that repeating it 600 times yields an algorithm that is better that 99%-correct. most Monte Carlo algorithms that occur in practice are such that we can increase our confidence in the result obtained much more rapidly. a good upper bound on this number of repetitions is quickly obtained from the second part: find x such that e'x . Assume for simplicity that we are dealing with a decision problem and that the original Monte Carlo algorithm is biased in the sense that it is always correct whenever it returns the answer true. where m-1 S= 2-E v 2i i =0 i (4 -£2)i < 1 (1-4E2)m 4E nm The first part of this formula can be used efficiently to find the exact number of repetitions required to reduce the probability of error below any desired threshold S.0 i=0 (pq)n/2 ` (nl (Pq)ni22n = 2)n i2 _ (4pq )n i2 = (1-4F (1. The probability of success of the repetitive algorithm is therefore at least 1. n i=0 (pq )n12 (qlp) Chap. errors being possible only when it returns the answer false.4E2)( = 2-1g 1/6 12) lg 1/8 since 0 < 1.] Repeating an algorithm several hundred times to obtain a reasonably small probability of error is not attractive.3. A more precise calculation shows that it is enough to repeat the original algorithm 269 times. For example. 8 -i m-1 since q /p < 1 and 2. the resulting algorithm is (1-S)-correct. As we shall see shortly. it suffices to repeat such an . A more complicated argument shows that if a consistent (Z + E)-correct Monte Carlo algorithm is repeated 2m -1 times.1/(2SI) and then set m = Fx/4E21.6. Problem 8. it would be silly to return the most frequent answer : a single true outweighs any number of falses. we wish to go from a 55%-correct algorithm to a 95%-correct algorithm). The preceding theorem tells us that this can be achieved by calling the original algorithm about 600 times on the given instance. Prove that cE < (In 2)/2E2.i i >. suppose we have a consistent Monte Carlo algorithm whose advantage is 5% and we wish to obtain an algorithm whose probability of error is less than 5% (that is.S. [This is because some of the inequalities used in the proof are rather crude. Fortunately.4E2 < 1 =6 since aliisa = 2 for every a > 0. Alternatively.Probabilistic Algorithms 264 =(Pq)ni2 I. If we repeat such an algorithm several times to increase our confidence in the final result.

Let x be an instance. we may conclude that yo is a correct solution. the restriction that the original algorithm be p-correct for some p > i no longer applies : arbitrarily high confidence can be obtained by repeating a biased p-correct algorithm enough times. the solution returned by the algorithm is always correct whenever the instance to be solved is not in X. and if y. or 6 times to obtain a 99%-correct algorithm. What can we conclude if y =yo? If x e X. . and let y be the solution returned by MC (x). the only possible explanation is that x EX (because the algorithm is consistent by assumption). 8.p )k . A Monte Carlo algorithm is yo-biased if there exists a subset X of the instances such that i. :t. and if x EX. What happens if on the other handy # y o ? If x e X. y is indeed correct . Let MC be a Monte Carlo algorithm that is consistent. . yo-biased and p-correct. so yo is indeed correct . the algorithm always returns the correct solution. but the algorithm may not always return the correct solution to these instances. = y # yo for all the i. the preceding argument shows that this is indeed the correct solution . Yk- If there exists an i such that y. Moreover. In both cases. the probability of such an error occurring is not greater than 1-p given that the algorithm is p-correct. it is still possible that the correct solution is yo and that the algorithm has made a mistake k times in succession on x E X. Although the distinguished answer yo is known explicitly. and ii. = yo. The following paragraph shows that this definition is tuned precisely to make sure that the algorithm is always correct when it answers yo. and let yo be some distinguished answer. Now suppose that we call MC (x) k times and that the answers obtained are y. More formally. it is not required that an efficient test be known for membership in X. the correct solution is necessarily yo. let us return to an arbitrary problem (not necessarily a decision problem). if there exist i # j such that y.6 265 algorithm 4 times to improve it from 55%-correct to 95%-correct.Monte Carlo Algorithms Sec. and therefore the correct solution is yo. even if p < z (as long as p > 0). the algorithm has made a mistake since the correct solution is y o .yj . the correct solution to all the instances that belong to X is yo. and if x E X. but the probability of such an occurrence is at most (1.

We are interested in generating a random member of some subset S S 1.true while ans and i < k do if-i+1 ans F. It may be tempting to conclude that "the probability that y is an incorrect answer is at most (1-p )k ". k repetitions of a consistent. or not.MC (x) return ans . whether x E S. consider a nonempty finite set I of instances. The correct interpretation is as follows : "I believe that y is the correct answer and if you quiz me enough times on different instances. not only over those occurrences in which the algorithm actually provides such probabilistic answers.Probabilistic Algorithms 266 Chap. but it causes no problems with a biased algorithm). is averaged over the entire sequence of answers given by the algorithm. Indeed. In general. (As a practical example.) Let MC be a false-biased. or extremely confident that the solution obtained on every one of the trials is correct (since otherwise the probability of obtaining the results observed is less than one chance in a million). k) i-0 ans E. Prob[ MC (x) = true ] = I for each instance x E S and Prob[ MC (x) = true ] <_ q for each instance x 0 S. it will always be wrong whenever it "believes" otherwise. however. The "proportion of errors" in question. It suffices to repeat the algorithm at most 20 times to be either sure that the correct solution is yo (if either of the first two cases previously described is observed).(1-p )k )-correct and still consistent and yo-biased. this is not allowed for general Monte Carlo algorithms. but it is in fact crucial if the probabilistic algorithm is used to generate with high probability some random instance x on which the correct answer is a specific Y #Yo To illustrate this situation. By definition of a false-biased algorithm. Such a conclusion of course makes no sense because either the correct answer is indeed y. my proportion of errors should not significantly exceed (1-p )k ". p-correct. This last remark may appear trivial. 8 Suppose. p-correct Monte Carlo algorithm to decide.2. function repeatMC (x . Consider the following algorithms. p-correct. The probability in question is therefore either 0 or 1. given any x E I. for example. if you systematically quiz the algorithm with instances for which the correct solution is yo. yo-biased algorithm yield an algorithm that is (1 . Let q =1-p . yo-biased Monte Carlo algorithm has yielded k times in a row the same answer y # yo on some instance x. It is important to understand how to interpret such behaviour correctly.see Section 8.6. despite the fact that we cannot tell for sure which it is. that p = s (once again. Assume your consistent. we may be interested in generating a random prime of a given length .

then the probability that a call on genrand (k) returns some x 0 S is about q k /r if q k<< r<< 1. var success) to solve the same problem. but we are in fact interested in Prob[ Y I X ]. whereas algorithm B is q-correct and false-biased. and nearly 1 if r << g k This can be significant when the confidence one gets in the belief that x belongs to S from running MC several times must be weighed against the a priori likelihood that a randomly selected x does not belong to S. the only possible answers that they can return are true and false. 8. What is the best value of r you can deduce so that your Las Vegas algorithm succeeds with probability at least r on each instance ? El . solving the same decision problem. They all involve the solution of a decision problem. about z2 if q k= r << 1. Prove that the probability that a call on genrand (k) erroneously returns some x 0 S is at most 1 1+ 1r q-' r k In particular. This is wrong in general.Sec. that is.6 267 Monte Carlo Algorithms function genrand (k) repeat x . Give an efficient Las Vegas algorithm LV (x . if the error probability of MC on instances not in S is exactly q (rather than at most q). Let A and B be two efficient Monte Carlo algorithms for Problem 8. Algorithm A is p-correct and true-biased. We are not aware of any unbiased Monte Carlo algorithm sufficiently simple to feature in this introduction.6. * Problem 8.4. k) return x It is tempting to think that genrand(k) returns a random member of S with a probability of failure at most q k . Thus the section continues with some examples of biased Monte Carlo algorithms. It is correct that Prob[X I Y ] <_ q k. This situation illustrates dramatically the difference between the error probability of genrand and that of repeatMC. Let r denote the probability that x E S given that x is returned by a call on uniform (I). k) returns true" and Y stands for "x 0 S ". The problem is that we must not confuse the conditional probabilities Prob[ X I Y ] and Prob[ Y I X ].5.uniform (I) until repeatMC (x. var y.6. where X stands for "repeatMC (x . To calculate this pro- bability. we need an a priori probability that a call on uniform (I) returns a member of S.

each call of maj(T) is certain to return false.Probabilistic Algorithms 268 Chap. If the array does indeed contain a majority element. on the other hand. consider function maj 2(T) if maj (T) then return true else return maj (T) If the array does not have a majority element. which happens with probability 1-p . if the answer returned by maj (T) is false. Consider the following algorithm. too. 8 8. The algorithm maj2 is therefore also true-biased. If. If the answer returned is true. On the other hand.11.6. First. the probability that the first call of maj (T) will return true is p > z .5. n ] has a majority element (see Problems 4. the probability of choosing an element that is in a minority is less than one-half. the probability that maj 2(T) will return true if the array T has a majority element is p+(I-p)p=I-(1-p)2>3/4.1 Majority Element in an Array The problem is to determine whether an array T [I . An error probability of 50% is intolerable in practice. Summing up. and hence trivially there is a majority element in T. with probability > z T has no majority element = maj (T) = false. 4. we may reasonably suspect that T does indeed have no majority element. it is nonetheless possible that T contains a majority element. n ]) i F uniform (1 .. and if one of its elements is chosen at random.11. in which case maj 2(T) also returns true. and 4. hence so does maj 2(T ). the element chosen is a majority element. with certainty .. If the array does have a majority element.11. In sum. since majority elements occupy more than half the array.. function maj (T [1. and then checks whether this element forms a majority in T. and in this case maj 2(T) returns true. n) x -T[i] k F.0 forj f. but 3/4-correct. the second call of maj (T) may still with probability p return true.ltondoifT[jI=x thenk '-k +1 return (k > n / 2) We see that maj (T) chooses an element of the array at random. The general technique for biased Monte Carlo algorithms allows us to reduce this probability efficiently to any arbitrary value. z that is T has a majority element = maj (T) = true. Therefore. the answer returned is false. if the first call of maj (T) returns false.7). although in this case the element chosen randomly is in a minority. The probability of error decreases because the successive calls of maj (T) are independent : the fact that .6. this algorithm is true-biased and 4correct.

In 98% of calls the algorithm will inform us incorrectly that n is prime. The problem is to decide whether a given integer is prime or composite. as soon as any call returns true.6. using Euclid's algorithm. Consider for example n = 2. but it is still unsatisfactory. El The following Monte Carlo algorithm solves the problem of detecting the presence of a majority element with a probability of error less than E for every E > 0.2 Probabilistic Primality Testing This classic Monte Carlo algorithm recalls the algorithm used to determine whether or not an array has a majority element. Unfortunately.11. (It is currently possible to establish with certainty the primality of numbers up to 213 decimal digits within approximately 10 minutes of computing time on a CDC CYBER 170/750. . function majMC (T. where n is the number of elements in the array and E is the acceptable probability of error.6.. Show that the probability that k successive calls of maj(T) all Problem 8.6 Monte Carlo Algorithms 269 maj (T) has returned false on an array with a majority element does not change the probability that it will return true on the following call on the same instance. On the other hand. Thus there is only a meagre 2% probability that it will happen on d = 43 and hence return false.uniform (2 . LJ J) return ((n mod d) # 0) If the answer returned is false. 8. 8. The algorithm can be improved slightly by testing whether n and d are relatively prime.6). This is interesting only as an illustration of a Monte Carlo algorithm since a linear time deterministic algorithm is known (Problems 4.11.I to k do if maj (T) then return true return false The algorithm takes a time in 0 (n log(1/E)).5 and 4.623 = 43 x 61. and we can be certain that n is composite.Sec. E) k rlg(1/E)1 for i <-. return false is less than 2-k if T contains a majority element. the algorithm has been lucky enough to find a nontrivial factor of n. we can be certain that T contains a majority element. the answer true is returned with high probability even if n is in fact composite.6. No deterministic or Las Vegas algorithm is known that can solve this problem in a reasonable time when the number to be tested has more than a few hundred decimal digits.) A first approach to finding a probabilistic algorithm might be function prime (n) d . For larger values of n the situation gets worse. The algorithm chooses an integer randomly between 2 and 51.
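
Before moving on to primality, here is a minimal Python transcription of maj, maj2 and majMC as described above. It assumes only that the elements of T can be compared for equality, and it is meant as an illustration rather than as the book's exact pseudocode.

import math
import random

def maj(T):
    # pick one element at random and check whether it occupies a majority
    # of the positions of T; true-biased and p-correct with p > 1/2
    x = random.choice(T)
    k = sum(1 for y in T if y == x)
    return 2 * k > len(T)

def maj2(T):
    # two independent probes: errs with probability less than 1/4 when T
    # really has a majority element, and never errs otherwise
    return maj(T) or maj(T)

def majMC(T, eps):
    # ceil(lg(1/eps)) independent probes push the error probability below eps
    k = math.ceil(math.log2(1.0 / eps))
    for _ in range(k):
        if maj(T):
            return True     # an answer of true is always trustworthy
    return False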

in which case we obtain the correct answer with cer- . Let a be an integer such that 2 5 a <.8.1)/2 mod n with dexpoiter from Section 4.2 different bases. There exist however composite numbers that are strong pseudoprimes to some bases.n . If n is prime. The situation is even better if n is composed of a large number r of distinct prime factors : in this case it cannot be a strong pseu- doprime to more than 4(n)/2r-' . 1584"9 = 1102 = 251 (mod 289).2). We say that n is a strong pseudoprime to the base a if a' _ 1 (mod n) or if there exists an integer i such that 0 <. where t is odd.2. The theorem is generally pessimistic. This Monte Carlo algorithm for deciding primality is falsebiased and 3/4-correct.9)/4 different bases. and let s and t be positive integers such that n -1 = 2s t. Give an efficient algorithm for testing whether n is a strong pseudoprime to the base a.6. 1582"9 = 1312 = 110 (mod 289). Let n be an odd integer greater than 4. an integer a is chosen at random. function Rabin (n) { this algorithm is only called if n > 4 is odd } a <--uniform (2. where 4(n) < n -1 is Euler's totient function. This test of primality has several points in common with the Las Vegas algorithm for finding square roots modulo p (Section 8. Prove this theorem. 1589 = 131 (mod 289). Problem 8. If n is composite.i < s and art _ -1 (mod n). whereas 737 does not even have one.. Such a base is then a false witness of primality for this composite number. and 1588 9 = 2512 =_ I (mod 289). 158 is a false witness of primality for 289 because 288 = 9 x 25.8. it is a strong pseudoprime to any base. we may begin to suspect that n is prime if Rabin (n) returns true. Your algorithm should not take significantly longer (and sometimes even be faster) than simply calculating a (" . For instance.5. we need a theorem whose proof lies beyond the scope of this book. n -2) if n is strongly pseudoprime to the base a then return true else return false For an odd integer n > 4 the theorem ensures that n is composite if Rabin (n) returns false. This certainty is not accompanied however by any indication of what are the nontrivial factors of n. Contrariwise. For example. This suggests the following algorithm. finally.7. ** Problem 8. In both cases.6. 8 To obtain an efficient Monte Carlo algorithm for the primality problem. The theorem assures us that if n is composite.Probabilistic Algorithms 270 Chap. there is at least a 75% chance that n will not be a strong pseudoprime to the base a. it cannot be a strong pseudoprime to more than (n . 289 has only 14 false witnesses of primality.
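
One way to carry out the strong pseudoprimality test, sketched in Python: the built-in pow(a, t, n) plays the role of the modular exponentiation routine mentioned above, and rabin follows the algorithm in the text. This is an illustrative sketch, not the book's exact pseudocode.

import random

def strong_pseudoprime(n, a):
    # is the odd integer n > 4 a strong pseudoprime to the base a?
    # write n - 1 = 2**s * t with t odd, then look for the required power
    s, t = 0, n - 1
    while t % 2 == 0:
        s, t = s + 1, t // 2
    x = pow(a, t, n)                  # fast modular exponentiation
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def rabin(n):
    # false-biased, 3/4-correct primality test for odd n > 4, as in the text
    a = random.randint(2, n - 2)
    return strong_pseudoprime(n, a)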

One might also expect composite numbers to be produced erroneously once in a while. prove that the probability is better than 99% that not even a single composite number larger than 100 will ever be produced.2. . "I believe this number to be prime .9. if n is a strong pseudoprime to the base a. * Problem 8. At any given moment the question "Does Si = Si ?" may be asked.5).6.2. otherwise I have observed a natural phenomenon whose probability of occurrence was not greater than E".) ** Problem 8. Prove that this is unlikely to happen. A philosophical remark is once again in order: the algorithm does not reply "this number is prime with probability 1-E". Similarly there is better than a 50% chance that a will provide a key for finding .6. Notice that this combines with the false-biased algorithm described previously to yield a Las Vegas algorithm (by Problem 8. 8. every prime number will eventually be printed by this program.6.3 A Probabilistic Test for Set Equality We have a universe U of N elements. but rather. program print primes print 2. and a collection of n sets. all of which are empty at the outset. The first reply would be nonsense.3 n 5 repeat if repeatRahin (n. We suppose that N is quite large while n is small. Llg n j) then print n n -n + 2 ad museum Clearly.6.6 271 tainty. not necessarily disjoint. 8.Monte Carlo Algorithms Sec. this can be due either to the fact that n is indeed prime or to the fact that a is a false witness of primality for the composite number n. More precisely. where xE U \ Si and I S i _< n. regardless of how long the program is allowed to run. Find a true-biased Monte Carlo algorithm for primality testing whose running time is polynomial in the logarithm of the integer being tester. Nevertheless the algorithm for testing primality is only a Monte Carlo algorithm whereas the one for finding square roots is Las Vegas. (Note : this figure of 99% is very conservative as it would still hold even if Rabin (n) had a flat 25% chance of failure on each composite integer. This difference is explained by the fact that the Las Vegas algorithm is able to detect when it has been unlucky : the fact that a does not provide a key for is easy to test. As usual. Consider the following nonterminating program.10. the probability of error can be made arbitrarily small by repeating the algorithm. since every integer larger than I is either prime or composite. On the other hand. The difference can also be explained using Problem 8. The basic operation consists of adding x to the set Si .
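
The amplification used by repeatRabin in the program above can be sketched as follows, reusing rabin from the previous fragment. Because the test is false-biased, any single answer of false is conclusive, and k independent bases drive the error probability on a composite number below 4**(-k); the program above takes k to be roughly lg n.

def repeat_rabin(n, k):
    # false-biased amplification of Rabin's test
    for _ in range(k):
        if not rabin(n):
            return False          # n is certainly composite
    return True                   # n is prime with overwhelming probability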

8 The naive way to solve this problem is to keep the sets in arrays. The table is used to implement a random function rand : U -+ (0. For a set S c U. let . The algorithm to test the equality of Si and S1 is: function test (i .7. Each call of rand (x) returns a random string chosen with equal probability among all the strings of length k. Let k = Ilg(max(m.some random k -bit string add x to the table and associate y to it return y Notice that this is a memory function in the sense of Section 5. Two different calls with the same argument return the same value. j ) ifv[i]=v[jI then return true else return false It is obvious that Si # Sj if v [i] # v [ j ]. For any e > 0 fixed in advance. each call of rand (x) takes constant expected time. in the opposite case its probability of error does not exceed E. the set of k -bit strings. To each set Si we associate a variable v [i] initialized to the binary string composed of k zeros.1 }k .Probabilistic Algorithms 272 Chap. 1/E))]. Let e > 0 be the error probability that can be tolerated for each request to test the equality of two sets. Whatever structure is chosen. The Monte Carlo algorithm first chooses a function at random in this class and then initializes a hash table that has U for its domain. each test of equality will take a time in S2(k).1 }k as follows. We suppose that x is not already a member of Si .v[i]®rand(x) The notation t ® u stands for the bit-by-bit exclusive-or of the binary strings t and u. lists. where k is the cardinality of the larger of the two sets concerned. The algorithm never makes an error when Si = S/ . and two calls with different arguments are independent. What is the probability that v [i ] = v [ j ] when Si # Sj ? Suppose without loss of generality that there exists an xo a Si such that xa 9 S/ .4. search trees. Here is the algorithm for adding an element x to the set Si . Thanks to the use of universal hashing. Let Si = Si \ (x0}. Let H be a universal2 class of functions from U into (0. or hash tables. procedure add (i . function rand (x) if x is in the table then return its associated value y E.4). x) v[i] F. This algorithm provides an interesting application of universal hashing (Section 8. there exists a Monte Carlo algorithm that is able to handle a sequence of m questions in an average total time in 0 (m). if indeed it is not in S2(k log k ).

Two calls p (i) and p Q) that are not separated by a call of init should therefore return two different answers if and only if i # j. but no request may consult or modify more than a constant number of memory locations : thus it is not possible to create the whole permutation when init is called. and if an application of the algorithm replies incorrectly that Si = Sj .4. which does not worry us when we want to test set equality. the different tests of equality are not independent.6. . then it will also reply incorrectly that Sk=S1. The possibility that rand (x 1) = rand (X2) even though x I * X2. Si = Sj u {x }. For instance. x) is only permitted when x is already in Si . v) takes constant time for 1 <_ u <_ v <_ N. the probability of this happening is only 2-k since the value of rand (xo) is chosen independently of those values that contribute to yo.6 Monte Carlo Algorithms 273 XOR (S) be the exclusive-or of the rand (x) for every x E S .1). It is only possible to increase our confidence in the set of answers obtained to a sequence of requests by repeating the application of the algorithm to the entire sequence..Sec. j ). Problem 8. Suppose that a call of uniform (u . whatever happens. Show how you could also implement a procedure elim (i. 2. x) is made when x is already in Si ? Problem 8. By definition. Two such calls separated by a call of init should on the other hand be independent. which removes the element x from the set Si .13. x). x) is made when xE Si . ) ®XOR (Si ). A call of elim (i. x). Also implement a request member (i . may be troublesome for other applications. . Modify the algorithm so that it will work correctly (with probability of error c) even if a call of add (i. What happens with this algorithm if by mistake a call of Problem 8. Let yo = XOR (S. This Monte Carlo algorithm differs from those in the two previous sections in that our confidence in an answer "Si = Si " cannot be increased by repeating the call of test (i .6. . x 9 Si u SS . Notice the similarity to the use of signatures in Section 7. which decides without ever making an error whether x E Si .1}". A sequence of m requests must still be handled in an expected time in O (m). A call of p (i) returns the value 7t(i) for the current permutation.2 and Example 8. Show how to implement a random permutation. The fact that v [i ] = v [ j ] implies that rand (xo) = yo. let N be an integer and let U = { 1.14. A call of init initializes a new permutation tt : U -* U. Your implementation should satisfy each request in constant time in the worst case.6. v [i] = XOR (Si) = rand (xo) ®XOR (S. You may use memory space in O (N). 8.7. You must accept two kinds of request : init and p (i) for 1 <_ i <_ N.11. Moreover.6. (Hint: reread Problem 5..1. if Si # Sj .12. add (i.) and v[j] = XOR (SS ). N 1. More precisely.2. rand : U -*{O. Universal hashing allows us to implement a random function ** Problem 8.. Sk = Si u {x }.
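
The signature scheme just analysed can be sketched compactly in Python. A dictionary holding freshly drawn random k-bit values stands in here for the universal hashing used in the text, so this illustrates the idea rather than the exact implementation; k is chosen from the number m of requests and the tolerated error eps as described above.

import math
import random

class SetSignatures:
    def __init__(self, n, m, eps):
        # n sets (initially empty), at most m requests, error at most eps per test
        self.k = max(1, math.ceil(math.log2(max(m, 1.0 / eps))))
        self.v = [0] * n              # v[i] is the k-bit signature of set i
        self.memo = {}                # memory function playing the role of rand

    def rand(self, x):
        if x not in self.memo:
            self.memo[x] = random.getrandbits(self.k)
        return self.memo[x]

    def add(self, i, x):
        # x is assumed not to be already a member of set i
        self.v[i] ^= self.rand(x)

    def test(self, i, j):
        # never errs when the sets are equal; errs with probability
        # at most 2**(-k) <= eps when they differ
        return self.v[i] == self.v[j]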

Given three polynomials p (x). but no such algorithm is known that only takes a time in 0 (n).1 ton do X [i ] <. (In the next chapter we shall see a deterministic algorithm that is capable of computing the symbolic product of two polynomials of degree n in a time in 0 (n log n ).) Given two n x n matrices A and B.4 Matrix Multiplication Revisited You have three n x n matrices A.376). 8 8. function goodproduct (A. Here is an intriguing false-biased. whenever AB = C.9).5. correct Monte Carlo algorithm that is capable of solving this problem in a time in d (n2). adapt this algorithm to Problem 8. which only computes the product approximately. B . Problem 8. give a false-biased. and with the probabilistic algorithm mentioned in Section 8. B .17. C . and r (x) of degrees n.6.15. decide probabilistically whether B is the inverse of A. respectively. B. but it was already in use in the secret world of atomic research during World War 11. correct Monte Carlo algorithm to decide whether r (x) is the symbolic product of p z (x) and q (x).6. providing a dramatic example of the topic discussed in Section 5. n) returns true Problem 8. provided 11 AB #C. Compare this with the fastest i known deterministic algorithm to compute the product AB (Section 4.7 REFERENCES AND FURTHER READING The experiment devised by Leclerc (1777) was carried out several times in the nineteenth century. It is no doubt the earliest recorded probabilistic algorithm.11) if ABX = CX then return true else return false In order to take a time in 0 (n 2). The term "Monte Carlo" was introduced into the literature by Metropolis and Ulam (1949). It is obvious that goodproduct (A . we must compute ABX as A times BX. (1495). The term "Sherwood" is our own. The term "Las Vegas" was introduced by Babai' (1979) to distinguish probabilistic algorithms that occasionally make a mistake from those that reply correctly if they reply at all. in particular in Los Alamos.uniform ({ -1. q (x). n and 2n. New Mexico.) 8.. see for instance Hall (1873).3. which takes a time in 0 (n 2.2. Your algorithm should run in a time in 0 (n ).6. see Anon. . (Hint: consider the columns of the matrix AB -C and show that at least half the ways to add and subtract them yield a nonzero column vector. n) array X [ 1 .Probabilistic Algorithms 274 Chap. Prove that it returns false with probability at least z whenever AB # C. For the solution to Problem 8.16. C.3.3. Recall that it is often used to describe any probabilistic algorithm. and C and you would like to decide whether AB = C. n ] { to be considered as a column vector) for i F.6.
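
A Python sketch of goodproduct: the essential point is to evaluate A(BX) and CX as three matrix-vector products, which keeps the cost in O(n^2). By the hint above, the verdict false is returned with probability at least 1/2 whenever AB differs from C, and true is always returned when AB = C.

import random

def good_product(A, B, C, n):
    # false-biased, 1/2-correct test of whether AB = C, in time O(n^2)
    X = [random.choice((-1, 1)) for _ in range(n)]      # random column vector
    BX = [sum(B[i][j] * X[j] for j in range(n)) for i in range(n)]
    ABX = [sum(A[i][j] * BX[j] for j in range(n)) for i in range(n)]
    CX = [sum(C[i][j] * X[j] for j in range(n)) for i in range(n)]
    return ABX == CX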

For more information on numeric probabilistic algorithms. A probabilistic algorithm that is capable of finding the i th smallest among n elements in an expected number of comparisons in n +i +O(J) is given in Rivest and Floyd (1973). Computation with encrypted instances (end of Section 8.3. including Problem 8. The application of probabilistic counting to the Data Encryption Standard (Example 8. The solution to . The technique for searching in an ordered list and its application to sorting (Problem 8.3. this article and the one by Yao (1982) introduce the notion of an unpredictable generator. Numeric probabilistic algorithms designed to solve problems from linear algebra are discussed in Curtiss (1956). The point is made in Fox (1986) that pure Monte Carlo methods are not specially good for numerical integration with a fixed dimension : it is preferable to choose your points systemat- ically so that they are well spaced. Classic hash coding is described in Knuth (1968). Hammersley and Handscomb (1965). consult Sobol' (1974).12. including the one from Problem 8. 8.15. An early (1970) linear expected time probabilistic median finding algorithm is attributed to Floyd : see Exercise 5.21.4 follows Flajolet and Martin (1985). Rivest.6. and Carasso (1971). is due to Peralta (1986). Under the assumption that it is infeasible to factorize large numbers.3.14) come from Janko (1976). and Ullman (1974). Feigenbaum.5.3. Vickery (1956). Stanat.4. Blum.12) is given in Bentley. The former includes tests for trying to distinguish a pseudorandom sequence from one that is truly random.4. General techniques are given in Vazirani (1986. and Steele (1981).3. Problem 8. see Wegman and Carter (1981). For solutions to Problem 8. a more efficient unpredictable pseudorandom generator is proposed in Blum. The experiments on the twenty queens problem were carried out by Pierre Beauchemin.2 for finding square roots modulo a prime number.1) is described in Kaliski.4.13 in Knuth (1973). Section 8. Rabin (1980a) gives an efficient probabilistic algorithm for com- puting roots of arbitrary polynomials over any finite field.5. several universal2 classes are described there.4.7 References and Further Reading 275 Two encyclopaedic sources of techniques for generating pseudorandom numbers are Knuth (1969) and Devroye (1986).15 appear there. Universal hashing was invented by Carter and Wegman (1979). and Kilian (1987). 1987) to cope with generators that are only "semirandom". As a generalization. many solutions to Problem 8. which can pass any statistical test that can be carried out in polynomial time. It predates the classic worst-case linear time deterministic algorithm described in Section 4. the same reference gives the statement of Problem 8. The algorithm of Section 8.4.Sec.4. Brassard. and Robert (1988).4. and Sherman (1988). Fox. and Bennett. An analysis of this technique (Problem 8.4. Hopcroft. consult Aho.2) is an idea originating in Feigenbaum (1986) and developed further in Abadi. Early algorithms to solve this problem are given in Lehmer (1969) and Berlekamp (1970). and Schrage (1983).14 is solved in Klamkin and Newman (1967). A guide to simulation is provided by Bratley. A more interesting generator from a cryptographic point of view is given by Blum and Micali (1984). The probabilistic approach to the eight queens problem was suggested to the authors by Manuel Blum. For a solution to Problem 8.5. a technique known as quasi Monte Carlo.20. More references on this subject can be found in Brassard (1988). 
and Shub (1986).

17 are given in Freivalds (1979).6. Manders. For efficiency considerations in factorization algorithms. Rabin (1980a) gives an efficient probabilistic algorithm for factorizing polynomials over arbitrary finite fields. More information on number theory can be found in the classic Hardy and Wright (1938). consult Montgomery (1987). Kranakis (1986). refer to Pomerance (1982). and Cohen and Lenstra (1987).29. and Miller (1977).6. and Rumely (1983). Adleman. 1980b). An efficient probabilistic algorithm is given in Karp and Rabin (1987) to solve the stringsearching problem discussed in Section 7.4) and the solution to Problem 8. and Pomerance (1988).2. For an anthology of probabilistic algorithms. Our favourite unbiased Monte Carlo algorithm for a decision problem. including Problem 8. Goutier. The probabilistic test of primality presented here is equivalent to the one in Rabin (1976.5. 8 Problem 8. read Valois (1987). is described in Bach. The algorithm based on elliptic curves is discussed in Lenstra (1986). Cre peau.10 is given in Goldwasser and Kilian (1986) and Adleman and Huang (1987). Miller. for a comparison with other methods. Rabin (1976) gives an algorithm that is capable of finding the closest pair in expected linear time (contrast this with Problem 4. Pomerance. A Monte Carlo algorithm is given in Schwartz (1978) to decide whether a multivariate polynomial over an infinite domain is identically zero and to test whether two such polynomials are identical.4 for the generation of random numbers that are probably prime is explained in Beauchemin. The integer factorization algorithm of Pollard (1975) has a probabilistic flavour. The expected number of false witnesses of primality for a random composite integer is investigated in Erdos and Pomerance (1986) .6. The solution to Problem 8. The probabilistic test for set equality comes from Wegman and Carter (1981). The test of Solovay and Strassen (1977) was discovered independently. The probabilistic integer factorization algorithm discussed in Section 8. which also gives a fast probabilistic splitting algorithm whose probability of success on any given composite integer is at least as large as the probability of failure of Rabin's test on the same integer. consult Williams (1978). For more information on tests of primality and their implementation.14 is in Brassard and Kannan (1988). see also Monier (1980). Given the cartesian coordinates of points in the plane. Consult Zippel (1979) for sparse polynomial interpolation probabilistic algorithms.6.14). . they also give a cryptographic application of universal hashing. Brassard. and for finding irreducible polynomials.17 is given by the algorithm of Shanks (1972) and Adleman. also read Freivalds (1977). and Shallit (1986).5.5. The algorithm for electing a leader in a network. Lenstra (1982).11. Several interesting probabilistic algorithms have not been discussed in this chapter. which allows us to decide efficiently whether a given integer is a perfect number and whether a pair of integers is amicable.276 Probabilistic Algorithms Chap. The Monte Carlo algorithm to verify matrix multiplication (Section 8. The implication of Problem 8. A theoretical solution to Problem 8.6. We close by mentioning a few of them. comes from Itai and Rodeh (1981). Amplification of the advantage of an unbiased Monte Carlo algorithm is used to serve cryptographic ends in Goldwasser and Micali (1984).3 originated with Dixon (1981) .

a(xt ))) xt in the domain D . The most important transformation used before the advent of computers resulted from the invention of logarithms by Napier in 1614. and an exponentiation. . an addition.1.--- Example 9. Kepler found this discovery so useful that he dedicated his Tabulae Rudolphinae to Napier. for all x 1 .1 illustrates this principle. v) = u x v . an invertible transformation function a : D -+ R and a transformed function g : R' -> R such that PX1 .--C02).. f (u . If you were asked. a(u) = In u and g (x . this idea is only of interest when tables of logarithms are . R = R. you would probably begin by translating them into Arabic notation. and if the transformations a and 6-1 can also be computed efficiently. In this case. x 2 . D = IN+ or R+. with this word's original meaning!) More generally. An algebraic transformation consists of a transformed domain R.1. Let f :Dt -4 D be a function to be calculated. xt) = a' ' (g (a(x 1). y) =x +y.1 INTRODUCTION It is sometimes useful to reformulate a problem before trying to solve it. .. This allows a multiplication to be replaced by the calculation of a logarithm. Such a transformation is of interest if g can be calculated in the transformed domain more rapidly than f can be calculated in the original domain. let D be the domain of objects to be manipulated in order to solve a given problem. (You would thus use an algorism.1. x22 . for example. Since the computation of a and a1 would take more time than the original multiplication.9 Transformations of the Domain 9. Figure 9. to multiply two large numbers given in Roman figures.

Most computers that handle numerical data read these data Example 9. Such tables.2.Transformations of the Domain 278 Chap. d. 1. computed beforehand. 463) and q = (. taking p = (1. Example 9. An alternative way to represent the same polynomials is to give their values at d+ 1 distinct points. 238).. . 7). calculating the transformed function (carrying out the pointwise multiplication) is much quicker than straightforwardly calculating the original function . 53. 34) and q = ( . 1.2.2.2. Example 9. . Thus we obtain r = (. . The polynomials in our example would be represented by p = (1. These polynomials are represented by their coefficients. In this example. We would have needed to use seven points from the outset.4.109. It is often useful to transform between Cartesian and polar coordinates. 3. 2. The original domain of the problem is therefore Zd + 1. for example. However.1.2.1.1. p(x) = 3x3-5x2-x +I and q(x) =x3-4x2+6x -2. f o r instance.. The transformed Zd+I. and print the results in decimal but carry out their computations in binary. 246. . 22. 0. it takes a time in O(d 2) if the scalar operations are taken to be elementary. 34. Transformation of the domain. The naive algorithm for multiplying these polynomials resembles the classic algorithm for integer multiplication . Let r(x) = 3x6-17x5+37x4-31x3+8x -2 be the product of p(x) and q(x). 7..1.6. this does not allow us to recover the coefficients of r (x) because a polynomial of degree 6 is not uniquely defined by its value at only four points. calculated once and for all. but its meaning has changed. where d is the degree of the polynomials involved. The new representation suffices to domain is still define the original polynomials because there is exactly one polynomial of degree at most 3 that passes through any four given points. 1.2. 2. thus furnish a historical example of preconditioning (Chapter 7). 3. 106). if the computation of r were to be carried out correctly using this representation. You want to perform a symbolic multiplication of two poly- nomials. Using the transformed representation.3. 2. we can carry out the multiplication in a time in O (d) since r (i) is calculated straightforwardly as p (i) q (i) for 0 5 i <.2.d. 9 I D Dt o` Rt Figure 9.1.
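
The gain promised by the value representation is easy to state in code: if both polynomials are represented by their values at the same points, and there are at least one more point than the degree of the product, then the product is obtained pointwise in linear time. The Python fragment below merely records that observation; choosing the points well and converting back to coefficients is the subject of the rest of this chapter.

def product_values(p_vals, q_vals):
    # p_vals[i] and q_vals[i] are the values of p and q at the same point x_i;
    # the returned list gives the values of the product pq at those points
    return [pv * qv for pv, qv in zip(p_vals, q_vals)]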

If n = 8. and w= (1+i )/J is a possible value in the field of complex numbers. They are assumed to be executed at unit cost unless it is explicitly stated otherwise. As in Example 9..1.. B.(b) and Fa(c ). co) : array [0... so it is legitimate to talk about Fa.7. a 2 .(a) is calculated using the divide-and-conquer technique.2d must take a time in S2(d2). hence a`+' = a` and w`+` d. where a=w2. provided that the points at which we evaluate the polynomials are chosen judiciously..4 that this is not in fact the case. . n -1 ] (n is a power of2andwn'2=-1 ) array A [0.n / 2 arrays b. At first glance. it seems that the computation of the values p (i) and q (i) for 0 <_ i <.2 THE DISCRETE FOURIER TRANSFORM For the rest of this chapter all the arithmetic operations involved are carried out either modulo some integer m to be determined or in the field of complex numbers . which appears to remove any interest this approach may have had. Suppose that n > 2 and set t = n / 2. n -1 ]. Let n > I be a power of 2. a3.. We shall see in Section 9. pa (wn . n -1] [ the answer is computed in this array ) if n = 1 then A [0] <. an _ 1). c. this is only useful if the transformation function (evaluation of the polynomials) and its inverse (interpolation) can be calculated rapidly.a [0] else t . a' = 1 and co' _ -1. it appears at first glance that the number of scalar operations needed to calculate this transform is in S2(n2). .. . pa (w). an-4.6)... The discrete Fourier transform of a with respect to m is the n-tuple Fu. Clearly. However. Consider an n-tuple a = (ao. This algorithm is vitally important for a variety of applications. This defines in the natural way a polynomial pa(x)=an_1Xn-I+an_2Xn-2+ "' +alx+ao of degree less than n.(a) = (pa (1). Pa (w')=pb((X')+w'pC((X'). particularly in the area of signal processing (Section 1. 9. for example. 9.Sec.. C [0.1)). . function FFT (a [0. Furthermore..4. then w= 4 is a possible value if we are using arithmetic modulo 257. pa ((02). . The t-tuples b and c defined by b = (ao. an_j) are such that pa(X)=ph(x2)+xpc(x2). thanks to an algorithm known as the Fast Fourier Transform (FFT). We denote by co some constant such that 012 =-1.. t -1 ] ( intermediate arrays ) ( creation of the sub-instances ) fori -0tot-1 dob[i] 4-a[2i] c[i] E-a[2i+1] . a naive implementation of Lagrange's algorithm to carry out the final interpolation would take a time in S2(d 3). The Fourier transform Fu. However. . this is not in fact the case.2 The Discrete Fourier Transform 279 (performing a symbolic multiplication of two polynomials given by their coefficients). a`12 = (m2)t/2 = m' = wn/2 = -1. . Worse still. an-2) and c = (al . so that pa (w`+') = Pb ((X') .or more generally in any commutative ring. a i . In particular.w' pC (c'). an_3. .

This can be done because 44 =.-1 (mod 257). is in O(n log n). let q (x) = (x-x1 )(x-x2) .2. Despite the conceptual simplicity of this recursive algorithm. yield B = F 16(b) = (38. 226.9-70 = 75 The final result is therefore A = (255. X2.226. 200. A[ 11 E-. our concern here is to use the discrete Fourier transform as a tool for transforming the domain of a problem. where w = 4. 85. 9) and C = F 16(c) = (217. Show that p (xi) = r (x. Let n = 8 and a = (255.] 9.0. Using the fast Fourier transform allows us to evaluate the polynomials p (x) and q (x) at the points 1. 32.3 THE INVERSE TRANSFORM Despite its importance in signal processing. the remainder of the symbolic division of p (x) by the monomial (x-x. which use t = 4 and w2 = 16. Example 9. 170. 9 { recursive Fourier transform computation of the sub-instances } B t-FFT(b.22to2 = 194 A [7] .1. In particular.w2) C -FFT(c. q(x1) = q (x 2) = = q (xr) = 0. .1.I for i F. ** Problem 9. cue. 22.1. (x x1) be a polynomial of degree t such that x1 are arbitrary distinct values. 43. .aC [i] a F-ato return A Show that the execution time of this algorithm Problem 9..) for 15 i <.38 . w.0. Our principal aim is to save the idea proposed in Example 9.37. We combine these results to obtain A. 7).. 255.2. 78.170+43co = 85 A [4] <. it is preferable in practice to use an iterative version.0).127. 240. First a is split into b = (255.0 to t-1 do (a= w' I A[i] F-B[i]+aC[i] A [t+i ] E.B [i] .Transformations of the Domain 280 Chap.3. & -1 in a time in 0 (n log n).240.2.3) and c = (8.(o 2) { Fourier transform computation of the original instance } af.37.2.t. Let us calculate FW(a) in arithmetic modulo m = 257.4. 0). and an iterative algorithm..8. where x 1 . Give such [Hint: Let p (x) be an arbitrary polynomial.9+7w3 = 200 32 . where n is a power of 2 greater than the degree of the product .217 = 78 A [5] F 170-43w = 255 A [2] A [6] A [0] 38 + 217 = 255 32 + 22002 = 127 A [3] F.) is the constant polynomial whose value is p (x1). The recursive calls. 75).-- Let r (x) be the remainder of the symbolic division of p (x) by q (x). 194.
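
A compact Python version of the recursive FFT described above may help fix the idea. It works over the complex numbers with, for instance, w = cmath.exp(2j * cmath.pi / n); the same structure applies modulo m if every arithmetic operation below is reduced mod m. This is an illustrative sketch, not the iterative version recommended for practical use.

import cmath

def fft(a, w):
    # Fourier transform of a (len(a) = n, a power of 2) with respect to w,
    # where w ** (n // 2) == -1
    n = len(a)
    if n == 1:
        return a[:]
    B = fft(a[0::2], w * w)           # even-indexed coefficients
    C = fft(a[1::2], w * w)           # odd-indexed coefficients
    A = [0] * n
    alpha = 1                         # successive powers of w
    for i in range(n // 2):
        A[i] = B[i] + alpha * C[i]
        A[i + n // 2] = B[i] - alpha * C[i]
        alpha *= w
    return A
    # example over the complex numbers: fft(a, cmath.exp(2j * cmath.pi / len(a)))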

Theorem 9. we shall denote it by col. Obvious. With this in view we say that w is a principal n th root of unity if it fulfills three conditions : 1.3. E or" = 0 for every I <. Conclude by splitting y o' wjP into 2n sub sums of 2s elements. iv.1. 1 = w0 . v. Notice that (w 1)n i2 = -1. w' . Assuming the existence of a multiplicative inverse n-1 for n in our ring. let n = 2k and decompose p = 2n v where u and v are integers.1. Prove that e 2i 1/.3. Then wP = 1 and 1 <.(dj+s )P for every integer j. and v is odd. More generally we use W-' to denote (0-1)' for any integer i. and use the existence of n-1. * Problem 9. j=0 When n is a power of 2. iv. w is a principal n th root of unity. it turns out that these conditions are automatically satisfied whenever con/2 = . complex numbers. wn 12 = -1 is a consequence of w being a principal n th root of unity. w2 . Let n > 1 be a power of 2.i. The pointwise multiplication can be carried out in a time in O (n). Assume co' = w j for 0 <. To obtain the final result. iii. Consider any commutative ring. and let w be an element of the ring such that con/2 = .i < j < n and let p = j . we still have to interpolate the unique polynomial r (x) of degree less than n that passes through these points. . wn -1 is the multiplicative inverse of co in the ring in question .1.. As a slight abuse of notation.1. Show that co-" = . is a principal n th root of unity in the field of .3. ii. iii. Thus we have in fact to invert the Fourier transformation. Problem 9. let us use " n " also to denote the element in our ring obtained by adding n ones together (the "1" from our ring). Conditions (1) and (2) are obviously fulfilled.3.1. each summing to zero. w 1 is also a principal n th root of unity. as we have assumed already. wn -' are all distinct . Let s =2k-1.3 The Inverse Transform 281 polynomial. Prove Theorem 9. To show condition (3).1. Assuming this n is not zero.p < n . Use condition (3) and n * 0 to obtain a contradiction. Hints : i.p < n . 9.they are called the n th roots of unity. and n-I 3.Sec. Use p = n/2 in condition (3).2. w# 1 2. Then i. wn = 1. v. ii.
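
For readability, the three defining conditions of a principal n th root of unity can be restated in displayed form (this merely repeats the definition above):

\[
\omega \ne 1, \qquad \omega^{\,n} = 1, \qquad \sum_{j=0}^{n-1} \omega^{\,jp} = 0 \quad \text{for every } 1 \le p < n .
\]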

OP = 0 by property (3) of a principal k=0 n th root of unity.3. let p = i -j. n i. Prove that 0 <.3.3.3. Prove further that n-1 exists modulo m. The inverse Fourier transform can be calculated efficiently by the following algorithm. Now C.j .3. then co('-j)k=co0=1.j < n . n . This matrix A provides us with another equivalent definition of the discrete Fourier transform : F"..andsoC11=n-1 nxn-1=1. provided that .Transformations of the Domain 282 Chap. Now C.j = di for 0 <. Then AB = I.. Proof. the n x n identity matrix. . show how x mod m can be calculated in a time in p 0 (u) when m = 2" + 1. . From now on.3. The main theorem asserts that A has an inverse. and let m = w"12 + 1. as in Problem 9.j = n-1 Y. (The only possibility for j = 2" is when a = b = m -1. namely the matrix B defined by B. and otherwise set d = m + i . Let a = (a o.i < 2' and 0 <.4.j =n(o)1'P = 0 because co 1 is also a k=0 principal n th root of unity by Theorem 9. by showing that n-1 = m . since 1 <. n -1pa (W-').(a)) = Fw(FW' (a)) = a for every a.2.b < m ..(a) = aA.' (a) = aB = (n -'pa (1)..5. Let A and B be the matrices just defined. Cu = Jk _o A.j = n'o'. If i > j. Let n and w be positive powers of 2.p < n . By definition.k Bki . Theorem 9. .a < m and 0 <. multiplications modulo m can be carried out on a binary computer without using division operations.I iii. Theorem 9.3.1) be an n-tuple. Let C = AB. an . The inverse Fourier transform of a with respect to w is the n-tuple F.) If i > j.i.3.2".1(iii).2 justifies the following definition. More generally. set d =i-j. If i < j. Prove that Fw' (Fu. Decompose c into two blocks of u bits : c = 2" j + i. . where 0 <. n -'Pa ((02 ).3. Let c = ab be the ordinary product obtained from a multiplication. let p = j . n -'Pa (w (n -1))) Problem 9. assume that w is a principal n th root of unity and that n-1 exists in the algebraic structure considered. Prove that w is a principal n th root of unity in arithmetic modulo m.(m -1)/n . Let A be the n x n matrix defined by A. k=0 n-1 H. we wish to obtain their product modulo m. a I .(22u + 2u) < x < 22u +2u+'.i < n and 0 <. When m is of the form 2" + 1. provided that n-1 is either known or easily calculable.n-I ik -o w(i-j)k There are three cases to consider.Ifi=j..d < m and ab = d (mod m).. Let a and b be two integers such that 0 <. 9 Problem 9.j <. . Problem 9.

w" -' + w" '2. Combined.3. if the transform is calculated modulo m (Problem 9.Sec.0 for i f-0tot-1 do I R= iy} A [i] <-B[i]+C[i]TR A[t+i]<-.3.0.(B ± C T (3) mod m. w is indeed a principal n th root of unity. where w' = co7 = 193. co) : array [0. where 0 <. For this it is convenient to modify the algorithm slightly. rounding errors may occur on the computer. that is.200.4) that reductions modulo m can be carried out in a time in O (log m).133) and C =FFT(c. which means that it can be reduced modulo m in a time in O (log m) = O (n log (o) by Problem 9.3.12.1. To do this.3). denoted by y.127.w2)_(140.3.221.3). 85..0).65.226. The final result is thus F = (255. Secondly. Furthermore. it may be necessary to handle large integers.3 The Inverse Transform 283 function FFTinv (a [0.3. where w = 4. where x Ty denotes the value of x shifted left y binary places.4. we supply the base 2 logarithm of w.2).0. these results give A = (241. By Problem 9. even when the recursive calls are taken into account.11. 121.8.4. the recursive calls are made with 2y rather than w2 as the second argument.1) Ig w.3.194) 0-2 = 241 recursive and c = (85. 39.3.w2)_(101. Let us calculate Fj' (a) in arithmetic modulo m = 257. 9. For the rest of this section we no longer suppose that arithmetic operations can be performed at unit cost : the addition of two numbers of size 1 takes a time in O (1). 75)..37. The calls with yield B =FFT(b. the complete computation of the Fourier . The heart of the algorithm consists of executing instructions of the form A .3. If the Fourier transform is calculated in the field of complex numbers (Problem 9. instead of giving w as the second argument.1. There remains the mul- tiplication by n-' = m -(m -1)/n = 225 (Problem 9.64. First we calculate FFT (a . thanks to the particular form chosen for m.255.2.3.3.. 0).w" -' <. 255.78. x x 2Y. of ' ). 194.75). n -1 ] F <-FFT(a. n . n -1 ] { n is a power of 2 and w" = 1 } array F [0. 127. Consequently . the fact that w is a power of 2 means that multiplications in the FFT algorithm can be replaced by shifts.B[i]-C[i]TP RF-R+y.240.9. We already know (Problem 9. All the arithmetic is carried out modulo m = w"'2+ I using Problem 9. a is decomposed into b=(255. which is consistent with Example 9.31).B < m and 0 <. On the other hand.w"-') for i <-Oton-1 do F[i]Fn-'F[i] return F Example 9.B ± C T Q <. 78.C < m . The value of the shift never exceeds (z .143. Since the number of operations of this type is in 0 (n log n). The final loop becomes R <-. First. Let n = 8 and a = (255. 24. 200.
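
The modular version above gains efficiency from shifts; over the complex numbers the inverse transform needs nothing more than the same routine. By definition it maps a to the n-tuple with entries n^(-1) p_a(w^(-i)), so it suffices to call fft with w^(-1) and divide by n. A sketch, reusing fft from the earlier fragment:

def fft_inv(a, w):
    # inverse Fourier transform over the complex numbers:
    # use w**(-1) as the principal root, then multiply by n**(-1)
    n = len(a)
    F = fft(a, 1 / w)
    return [x / n for x in F]
    # check: fft_inv(fft(a, w), w) recovers a, up to rounding error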

(Padding c with zeros is unnecessary if d -1 is a power of 2..0. Both of these degree 1 polynomials evaluate to 1 and 7 at the points 0 and 3. (Consider. and let co be a principal n th root of unity. 1 1 Similarly.4. cd . c = Fw 1(C ). . respectively. (The algorithm FFTinv has to be modified.284 Chap.0.(x) = 2x + 1 and p2(x) = 5x + 1 in the ring of integers modulo 9. . causes shifts that can go up to (2 -1) (n .. and this is fortunate because unique interpolation does not always hold when the arithmetic is performed in a ring rather than in a field.1.4. and C = Fu. 9 Transformations of the Domain transform modulo m can be carried out in a time in 0 (n 2 log n log co). we need to be able to calculate efficiently a principal n th root of unity.(c)..3.. 0).3. respectively. and c = (c0 . .0. Otherwise a direct call on the new FFT with y = (n .xs +aS_.. Show that the inverse transform modulo m = co"t2+ I can also be computed in a time in 0 (n 2 log n log co).a1.) Problem 9. for instance. and c be the n-tuples defined by a =(ao. the algorithm takes a time in 0 (n log n). . 0). provided that a principal n th root of unity and the multiplicative inverse of n are both easily obtainable.1) lg co. the coefficients of the product polynomial r (x) are given by the first d + 1 entries in c = F.2.6.. Let p(x) =a.) Let A = Fu. Therefore C is the pointwise product of A and B. since -n-1 = (o"t2/n is a power of 2.4 SYMBOLIC OPERATIONS ON POLYNOMIALS We now have available the tools that are necessary to finish Example 9. = r (co') = p (w`) q (co`) = A. B = F". c 1. as.(a).4 can no longer be applied. Show that it can be used to multiply two polynomials whose product is of degree d with a number of scalar operations in 0 (d log d). bt. It is somewhat surprising that efficient symbolic multiplication of polynomials with integer . where n is the smallest power of 2 greater than d. .1) lg co.bl.. if m is sufficiently small that arithmetic modulo m can be considered to be elementary. Let n be the smallest power of 2 greater than d. corresponding to the use of w = co' as a principal root of unity. 0 . By definition of the Fourier transform. b =(bo.) Problem 9. 0 . 1(Fw(a) x Fjb)).) 9. C. 0)..O. Let a..5. The easiest approach is to use the complex number field and Problem 9. the final multiplication by n-1 can profitably be replaced by a multiplication by -n-1 followed by a change of sign. b. Putting all this together. (From a practical point of view. . In order to implement this algorithm for the symbolic multiplication of polynomials. . B.. respectively..3. We want to calculate symbolically the product polynomial r(x)=cdxd+cd_IXd-1+ +clx+co=p(x)q(x) of degree d = s + t.xs-1+ +alx+ao and q(x) = btxt+bt_lxt-1+ +b1x+bo be two polynomials of degrees s and t. p. By Problem 9.3. Give explicitly the algorithm we have just sketched.. which means that Problem 9. .(b).1. Notice that this reasoning made no use of the classic unique interpolation theorem. .

a = 5. 0). 3.2 for this.(c) = C is c = (255. and to use Problem 9. (Use Problems 9.1.3. and 240 correspond to -2. the integers 255.29.194.3. Generalize this idea : give explicitly an algorithm mul (a [0 .127. 226. The pointwise product of these two transforms. -31.251.85.0).6.3 and 9. Problem 9.. (Continuation of Example 9. the algorithm multiplies p (x) and q (x) symbolically in a time in 0 (d log d).199. 0. and m.4 285 Symbolic Operations on Polynomials coefficients should require operations on complex numbers. By Example 9.255.2. and u = 3. w = 4 is a principal n th root of unity in arithmetic modulo 257. w. Let p (x) and q (x) be two polynomials with integer coefficients.1.133) and B = F.(a) = (255.4. -1. 3. your algorithm should determine suitable values for n.3 to obtain a principal n th root of unity. 9.78. still working modulo 257.4.. 240. the vector c such that F.1. Let u be the maximum of the degrees of the two polynomials.2.0. t ]) : array[0 . Since the product is of degree 6.(b) _ (1. respectively. b = 6. By Problem 9. and n -' = 225. 103. all the coefficients of the product polynomial r (x) = p (x) q (x) lie between -120 and 120. The final answer is therefore r(x) = 3x6-17x5+37x4-31x3+8x -2 . Let a = (1.75).3 depends on the degrees s and t of the polynomials to be multiplied and on the size of their coefficients.3.4.0. Two applications of the algorithm FFT yield A = F. it becomes necessary to take precautions against the possibility of rounding errors on the computer. 0.) Q The analysis of the algorithm of Problem 9. and -17. 0.4. . Among other things. thus it suffices to calculate them modulo in = 257.4) We wish to multiply sym- bolically the polynomials p(x) = 3x3-5x2-x +l and q(x) =x3-4x2+6x -2. Problem 9. respectively. s+t ] that carries out symbolic multiplication of polynomials with integer coefficients.1. it suffices to take n = 8. 244.4. (In Example 9.200. If an exact answer is required. 22.) Example 9. For this reason it may be more attractive to carry out the arithmetic modulo a sufficiently large number (Problem 9.. By Problem 9. Since all the coefficients of r (x) lie between -120 and 120. 226. 179.1. s ].70. Let a and b be the maxima of the absolute values of the coefficients of p (x) and q (x).4.-4.Sec. 8. 0) and b =(-2.3.3. so no coefficient of r(x) can exceed 120 in absolute value.4. . 37.5. b [0 . 82. This may require the use of multipleprecision arithmetic. Prove that no coefficient of the product polynomial p(x)q(x) exceeds ab (u+l ) in absolute value.3.2). 0. 188) . 193.4.109. In this case a more thorough analysis of the possible build-up of rounding errors is needed.O.247. is C =(255. If the latter are sufficiently small that it is reasonable to consider operations modulo m to be elementary.
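
Putting the pieces together gives the symbolic multiplication algorithm of this section. The Python sketch below works over the complex numbers, so integer coefficients are recovered by rounding, and it reuses fft and fft_inv from the earlier fragments; on the polynomials of Example 9.4.1 it returns the coefficients -2, 8, 0, -31, 37, -17, 3 of r(x), in order of increasing degree.

import cmath

def poly_multiply(p, q):
    # p and q are coefficient lists, constant term first
    d = len(p) + len(q) - 2                 # degree of the product
    n = 1
    while n <= d:
        n *= 2                              # smallest power of 2 greater than d
    w = cmath.exp(2j * cmath.pi / n)        # principal n-th root of unity
    A = fft(p + [0] * (n - len(p)), w)
    B = fft(q + [0] * (n - len(q)), w)
    C = [x * y for x, y in zip(A, B)]       # pointwise product of the transforms
    c = fft_inv(C, w)
    return [round(z.real) for z in c[:d + 1]]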

. x 22 . Since M (1) n O(1 log l log log 1) with the best-known algorithm for integer multiplication (Section 9. Y2. x 2. where Co = 2 suffices if none of the coefficients of the polynomials to be multiplied exceeds 2' 14/ 2(1 + max(s . In every case.4.) By comparison. then the symbolic product of these polynomials is obtained using the discrete Fourier transform. Let x 1 .2 is relevant here too.) Your algorithm should take a time in 0 (n log2n) provided all the necessary operations are taken to be elementary.2. It is possible for this time to be in 0 (st) in practice. Give an efficient algorithm to calculate each y.59). = p (x1) for 1 <_ i <_ n.4. Let x 1 . and the intermediate pointwise multiplication of the transforms takes a time in 0 (d M (d log co)). Let p (x) be a polynomial of degree n . if arithmetic can be carried out on integers of size l at unit cost.--.2. Let a and b be two n-bit integers whose product we wish to calculate.4.5. We can do better than this thanks to a double transformation of the domain.11.) ** Problem 9.5 MULTIPLICATION OF LARGE INTEGERS We return once more to the problem of multiplying large integers (Sections 1. The original integer domain is first transformed into the domain of polynomials represented by their coefficients . The naive algorithm is therefore preferable to the "fast" algorithm if d is very large and I is reasonably small. Give an efficient algorithm to calculate the coefficients of the unique polynomial p (x) of degree less than n such that p (xi) = yi for every 1 <_ i <_ n. t )) in absolute value. (Hint : see Problem 4.x a time be n in 0 (n log2n). x be n distinct points. the first term in this analysis can be neglected. the initial computation of the Fourier transforms and the final calculation of the inverse take a time in 0 (d 2 log d log (o).4. . y 1. Your algorithm should take the hint to Problem 9. or even in 0 (n") for any a > 1 (Problem 4. the naive algorithm takes a time in 0 (s t M (1)). whereas the algorithm using divide-and-conquer requires only a time in 0 (n 1. Your algorithm should take a time in 0 (n log2n ). where 1 is the size of the largest coefficient in the polynomials to be multiplied.8). and 4.x be n distinct points. 9 where d = s + t.2. . distinct points.) * Problem 9. On the other hand. Give an efficient algorithm to calculate the coefficients of the unique monic polynomial p (x) of degree n such that p (xi) = 0 for every 1 <_ i <_ n. where M (1) is the time required to multiply two integers of size 1. not necessarily distinct. The naive algorithm would have taken a time in 0 (st ). and let x 1 .7).7.Transformations of the Domain 286 Chap. Problem 9. The total time is therefore in 0 (d M (d log co)). The classic algorithm takes a time in f (n2). 9. (Remember that n is the smallest power of 2 greater than d. .1. 1.7. the algorithm that uses 0) = e 2fi /" can multiply approximately the two polynomials in a time in 0 (d log d). (Hint : and let yn be n values.--.6. . if we are obliged to use multiple-precision arithmetic. (The polynomial is monic if the coefficient of x" is 1..5). x2 2 . Suppose for simplicity that n is a power of 2 (nonsignificant leading zeros are added at the left of the operands if necessary).

4. The original multiplication of two integers of size n therefore requires 2n multiplications of slightly larger integers ! To correct this. we perform the computation Example 9. Let a = 2301 and b = 1095.1.595. the polynomial r (x) = Pa (x)pb (x) is calculated symbolically using the Fourier transform. Thus pa (x) = 2x3 +3 X2 + 1 and Pb (x) = x3+9x +5.1. 1 + d Ig w = n + 1. For instance. these integers are of silt. Unfortunately.5. 0 The recursive nature of this algorithm obliges us to refine the analysis of symbolic multiplication of polynomials given following Problem 9. p53(x) = X5+X4+X2 + I because 53 in binary is 00110101.5 Multiplication of Large Integers 287 We denote by pa (x) the polynomial of degree less than n whose coefficients are given by the successive bits of the integer a. The symbolic product is r (x) = pa (X)pb (x) = 2x6+3x5 + 18x4+38x3 + 15x2+9x +5 and we obtain the desired product ab as r (10) = 2. This time the degree of the product polynomial r (x) is less than n. suppose we redefine pa (x) to be the polynomial whose coefficients are given by the successive figures of the representation of a in base 4. To obtain the product of the integers a and b. . Let M (n) be the time required to multiply two n-bit integers. As before.Sec. so that pa (10) = a. To make the illustration simpler. This can be carried out in a time in n M (2 n Ig w) + 0 (n 2). pa (2) = a for every integer a. For instance. However. the symbolic multiplication takes a time in n M (-'n Ig (o) + 0 (n 2log n logo)). As an extreme case. we must reduce the degree of the polynomials used to represent the integers to be multiplied. its coefficients lie between 0 and 3. and pa (4) = a. the algorithm of Section 4. even if we take co = 2. The central step in the symbolic multiplication of two polynomials of degree less than n consists of d multiplications f integers less than wd12+ 1. and then evaluate r (2). the polynomials considered must have a sufficiently high degree. The algorithm is recursive because one of the stages in the symbolic multiplication of polynomials consists of a pointwise multiplication of Fourier transforms. 9.519. where d = 2n is a power of 2 greater than the degree of the product polynomial.) Taking into account also the time spent in computing the two initial Fourier transforms and the final inverse transform. in order that using the discrete Fourier transform should be attractive. The central step in the symbolic multiplication of Pa (x) and Pb (x) therefore only requires n multiplications of integers less than m = w"i2+ 1. Clearly. where n is a power of 2. and the final answer is obtained by evaluating r (4). For the purpose of this example only. This polynomial is thus of degree less than n / 2. (The second term is added in case certain operands are exactly w" 2.3.4). since this integer of length 1 + i n Ig (o has to be treated specially.7 consists of representing each integer by a polynomial of degree 1 whose two coefficients lie between 0 and 2'12 . let pa (x) therefore denote the polynomial whose coefficients are given by the successive digits of a. here in decimal rather than in binary. even if this means increasing the size of their coefficients. we need only calculate symbolically the polynomial r (x) = p (x) q (x) using the fast Fourier transform (Section 9.
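
In the spirit of Example 9.5.1, and ignoring for the moment the refinements about coefficient size and degree discussed in the rest of this section, the idea can be sketched as follows, reusing poly_multiply from the previous fragment; multiply_via_poly(2301, 1095) evaluates r(10) = 2,519,595.

def multiply_via_poly(a, b):
    # the decimal digits of a and b are the coefficients of p_a and p_b
    digits = lambda x: [int(d) for d in str(x)[::-1]]   # least significant first
    r = poly_multiply(digits(a), digits(b))             # symbolic product via FFT
    return sum(c * 10 ** i for i, c in enumerate(r))    # evaluate r at x = 10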

Note that 1 and k are powers of 2. whatever The preceding problem shows that the modified algorithm is still bad news. Problem 9.2). t(1) = 1 n =2 k . it is thus sufficient that 9n / 2 < (0/2 . Since the coefficients of the polynomials pa (x) and pb (x) lie between 0 and 3. / = 4. we must choose a principal n th root of unity tv. As the computations are carried out modulo m = o)'12 + 1.155. The symbolic product is Example 9.4. Thus we have that pa (21) = a. consists of n shifts and n additions of integers whose size is not more than lg (9n / 2). and k = 4. prove that t(n) = the value of the constant k. we must lower the degree of the polynomials still further.1. depending on whether lg n is even or odd.260. The choice to = 2 is adequate provided n >: 16. so that n = 16.100 = 9885 x 21. 1! Ign1 Let l = 2 2 . This time we need to choose a principal d th root of unity t. Consider the recurrence t (n) = nt (n / 2). denote by pa (x) the polynomial of degree less than k whose coefficients correspond to the k blocks of I successive bits in the binary representation of a. k E lN+ . The symbolic computation of r (x) therefore takes a time in n M (n / 2) + 0 (n 2 log n). a power of 2 greater than the degree of the product polynomial r (x).5. 9 Before analysing the new multiplication algorithm. l = I or l = 2n . This time. namely the evaluation of r (4). the largest coefficient possible in r (x) is 9n/2 (Problem 9. and since these polynomials are of degree less than n / 2. Let a = 9885 and b = 21. using Fourier transforms. for instance. To correct this.2. When n is a power of 2. comes from the decomposition into four blocks of the binary representation of a 0010 0110 1001 1101. which takes a negligible time in 0 (n log n).5. Let d = 2k.3.260.288 Transformations of the Domain Chap. and then evaluate r (21). even if we do not take into account the time required to compute the Fourier transforms ! This is explained by the fact that we used the "fast" algorithm for multiplying two polynomials in exactly the circumstances when it should be avoided : the polynomials are of high degree and their coefficients are small. We thus obtain the asymptotic recurrence M (n) e n M (n / 2) + 0 (n 2 log n).1. To calculate the product of the integers a and b. r(x) =pa(x)pb(x)=10x6+36x5+63x4+116x3+111x2+108x + 156 and the final evaluation yields r (16) = 210. Let k = n /I. The last stage of the multiplication of a and b. that is. we need only calculate symbolically the polynomial r (x) = Pa (X)Pb (x). We form the polynomials pa (x) = 2x3+6x2+9x + 13 and Pb (x) = 5x3 + 3x2 + 12. The first of these polynomials. Since the coefficients of the polynomials pa (x) and Pb (x) lie between 0 and 21 . Show that t(n) 0 (nk ). and the degree of these poly- .

hence.59) c n°C whatever the value of the real constant a > 1. This algorithm therefore outperforms all those we have seen previously. w = 8 suffices to guarantee that the computation of the coefficients of r (x) will be correct when Ig n is even.3.5. * Problem 9. Consequently.5. The multiplication of two n-bit integers is thus carried out using a symbolic multiplication of two polynomials. Prove that 0 (n (log n)2.5 Multiplication of Large Integers 289 nomials is less than k. provided n is sufficiently large. Consequently. where . [Hints: For the second case use the fact that Iglg(1J) (Iglgn) .59 ). For the third case prove by constructive induction that t (n) 5 S [(lg n)1gT . By Problem 9. this can easily be carried out in a time in 0 (d 2 log w). In the case when Ig n is even I = k = dl 2 = I and Ig w >.11 . t (n) E O ((log n)1 g 6) c 0 ((log n)2-59). Problem 9.5. Prove that t(n)E if y < 2 O (log n) O (log n log log n) if y= 2 O ((log n)19 Y) if y > 2 .5. we split the n-bit integers to be multiplied into k blocks of I bits. which arises here as the max- imum of 2 x 3 and ' x i . Also use the fact that (lg1')Ig1< T(lgn)IST+2lgylg1(lg1`r)(IgY)-1 provided n y21g R.lg(5/3) provided that n >_ (310 for every real constant 0 > 1.(21 + Ig k)/(dl 2). More precisely.Sec. which is negligible. 9. Let y > 0 be a real constant. that is Ig co >. This is possible provided we increase slightly the size of the coefficients of the polynomials used in order to decrease their degree.)/ = 2 + (Ig J )/W W. this algorithm can multiply two n-bit integers in a time in M (n) = nt (n) E 0 (n (log n)2. Problem 9. d = 2J and w = 8. the largest coefficient possible in r (x) is k (21 -1)2. It suffices therefore that k 221 < m = wd 12 + 1. when n is odd. The equations obtained earlier for M (n) lead to t(n)E6t(3N/n-)+O(logn) when lgn is even. Is this algorithm optimal ? To go still faster using a similar approach. yt.] Let t(n) = M (n)/n . When n is odd. and t(n)E5t(52 )+O(logn) when n is odd. for some constants S. which takes a time in d M (2 d lg w) + O (d 2 log d log w). d = 2n and w = 32. and p that you must determine and for n sufficiently large.2. As far as the final evaluation of r(21) is concerned.p Ig n . and w = 32 is sufficient when Ig n is odd. When n is even. M (n) E 2n M (2 I) + 0 (n log n). Simi- larly.2. and let t(n) be a function satisfying the asymptotic recurrence t(n) E yt(O (' )) + 0 (log n).yt(lg n)(197). we obtain Ig w ? 4 + (Ig' )/ . for all real constants Q ? 1 and y > 2.(2' + IgFn. which gives M (n) e 24W M (3') + O (n log n).2 suggests that we should reduce the constant y = 6.

that is. thus avoiding a costly conversion when printing out the result. We mention simply that it is possible to obtain y=4 by calculating the coefficients of the polynomial r (x) modulo 221 + 1 (using the Fourier transform and proceeding recursively) and modulo k (using the algorithm for integer multiplication of Section 4.7). These authors mention that the source of their method goes back to Runge and Konig (1924). 9. Even more decimals of it have been calculated since. Because 221 + 1 and k are coprime. It also allows the computation to be carried out directly in decimal. and finally to evaluate r(2 21 ). an algorithm that takes a time in 0 (n log n log log n). and Welch (1967). Lewis. For a more complete account of the history of the fast Fourier transform. In view of the great practical importance of Fourier transforms. The complexity of this algorithm is such that it is of theoretical interest only. An efficient implementation and numerous applications are suggested in Gentleman and Sande (1966). 2-' f(2'++2-i`1)J)=4+21 2r The algorithm obtained takes a time in O(n(log n)a). increasing the parameter i reduces the exponent a at the expense of increasing the hidden constant in the asymptotic notation. To go faster still. and 2-' 2n recursive calls on integers of size (2' + + 2-' if Ig n is odd provided n is sufficiently large.Transformations of the Domain 290 Chap. it is then possible to obtain the coefficients of r (x) by the Chinese remainder theorem. read Cooley. it is astonishing that the existence of a fast algorithm remained almost entirely unknown until its rediscovery nearly a quarter of a century later by Cooley and Tukey (1965).6 REFERENCES AND FURTHER READING The first published algorithm for calculating discrete Fourier transforms in a time in O (n log n) is by Danielson and Lanczos (1942). where a=2+Ig(1+2-1-21) <2+2-1-2'/ln2 can be reduced arbitrarily close to 2. Rather than resorting to modulo computations in a finite ring. The corresponding y is thus 1 y=max(21-'(2'+1+2-'). This is still not optimal. Detailed analysis shows that recursive calls on integers of size (2' + 1 + 2-' if Ig n is this gives rise to 21-' even. 9 I _ 2r+f. ignl and k = n/I for an arbitrary constant i >_ 0. This approach allowed Schonhage and Strassen to obtain y = 2. but the algorithms that are even faster are too complicated to be described in detail here. The outcome is an algorithm that is capable of multiplying two n-bit integers in a time in 0 (n log2n). Needless to say. we have to redefine the notion of the "product" of two polynomials so as to avoid doubling the degree of the result obtained. they used a variant involving operations in the complex number field. Their approach requires care to avoid problems due to rounding errors but gives rise to a simpler algorithm. . An integer multiplication algorithm based on the fast Fourier transform has been used by the Japanese to calculate it to 10 million decimal places. at least asymptotically.

The algorithm that is able to multiply two integers of size n in a time in O (n log2n) is attributed to Karp and described in Borodin and Munro (1975). Monet. and Ullman (1974).4.2. Yoshino and Ushiro (1986). Horowitz and Sahni (1978).2 is described in several references. and Ullman (1974).1.4.5 and 9. Further ideas concerning the symbolic manipulation of polynomials. Pollard (1971) studies the computation of Fourier transforms in a finite field.3. which is the world record at the time of this writing. Also read Turk (1982). but does not explain the algorithm used. Cray Research (1986) mentions an even more precise computation of the decimals of it. read Gleick (1987).6 is given in Aho. 1975). The details of the algorithm by Schonhage and Strassen (1971) are spelled out in Brassard. The algorithm used by the Japanese to com- pute it to 10 million decimal places is described in Kanada. Hopcroft. The solution to Problems 9. The empire struck back shortly thereafter when the Japanese computed 134 million decimals. Hopcroft. Tamura. The second edition of Knuth (1969) includes a survey of algorithms for integer multiplication.Sec. The nonrecursive algorithm suggested in Problem 9. for example Aho.6 References and Further Reading 291 and Rabiner and Gold (1974). evalua- tion. And the saga goes ever on. The book by Brigham (1974) is also worth mentioning.3. 9. and interpolation can be found in Borodin and Munro (1971. and Zuffellato (1986). and Turk (1982). although the solution given there to Problem 9. .3 is unnecessarily complicated in the light of Theorem 9. A practical algorithm for the rapid multiplication of integers with up to 10 thousand decimal digits is given in Pollard (1971).

10 Introduction to Complexity

Up to now, we have been interested in the systematic development and analysis of specific algorithms, each more efficient than its predecessors, to solve some given problem. Computational complexity, a field of study that runs in parallel with algorithmics, considers globally the class of all algorithms that are able to solve a given problem. Using algorithmics, we can prove, by giving an explicit algorithm, that a certain problem can be solved in a time in O(f(n)) for some function f(n) that we aim to reduce as much as possible. Using complexity, we try to find a function g(n) as large as possible and to prove that any algorithm that is capable of solving our problem correctly on all of its instances must necessarily take a time in Ω(g(n)). Our satisfaction is complete when f(n) ∈ O(g(n)), since then we know that we have found the most efficient algorithm possible (except perhaps for changes in the hidden multiplicative constant). In this case we say that the complexity of the problem is known exactly; unfortunately, this does not happen often. In this chapter we introduce only a few of the principal techniques and concepts used in the study of computational complexity.

10.1 DECISION TREES

This technique applies to a variety of problems that use the concept of comparisons between elements. We illustrate it with the sorting problem. Thus we ask the following question: what is the minimum number of comparisons that are necessary to sort n elements? For simplicity we count only comparisons between the elements to be sorted, ignoring those that may be made to control the loops in our program. Consider first the following algorithm.

this leaf contains the verdict associated with the order relation used. 1.1.4. a trip through the tree consists of starting from the root and asking oneself the question that is found there.1.1. The most important characteristic of countsort and its variations is that they work using transformations : arithmetic operations are carried out on the elements to be sorted.1). only in rare applications will these algorithms prove preferable to quicksort or heapsort. 1. Simulate the operation of T [ 1 . if max(T) . The trip ends when it reaches a leaf .1 Decision Trees 293 procedure countsort(T [1. which is the greater. it becomes impractical. Each leaf contains an ordering of the elements.j]F-0 fork F. If the answer is "yes". if not. As a function of n. On the other hand. Given a total order relation between the elements. 6. In this case.2. 3. However. directed binary tree. n ]) i <. However.. 10] containing the values 3. how many comparisons between elements are made? Coming back to the question we asked at the beginning of this section : what is the minimum number of comparisons that are necessary in any algorithm for sorting n elements by comparison ? Although the theorems set out in this section still hold even if we consider probabilistic sorting algorithms (Section 8. j F. 4. the algorithm provides an efficient and practical way of sorting an array in linear time. 10.Sec. the trip continues recursively in the left-hand subtree . 2.. this algorithm on an array This algorithm is very efficient if the difference between the largest and the smallest values in the array to be sorted is not too large.. we shall for simplicity confine our discussion to deterministic algorithms. Show exactly how countsort can be said to carry out arithmetic operations on the elements to be sorted.1 to ndo C[T[k]]*-C[T[k]]+1 k F--1 for p F. 9. 5.min(T). otherwise it continues recursively in the right-hand subtree.1 to C [p ] do T[k]*-p k -k+l Problem 10. For example.max(T) array C[i. all the sorting algorithms considered in the preceding chapters work using comparisons : the only operation allowed on the elements to be sorted consists of comparing them pairwise to determine whether they are equal and. Each internal node contains a comparison between two of the elements to be sorted. This difference resembles that between binary search and hash coding. variants such as radix sort or lexicographic sort (not discussed here) can sometimes be used to advantage.min(T) = #T . Problem 10. 5. on account of both the memory and the time it requires.i to j do for q <. In this book we pay no further attention to algorithms for sorting by transformation. A decision tree for sorting n elements is valid if to each possible order relation between the elements it associates a verdict that is . the number of elements to be sorted. when the difference between the elements to be sorted is large. A decision tree is a labelled.
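The procedure countsort is easy to transcribe into an executable language. The following sketch in Python is our own illustration, not part of the original text (the function name, the offset-indexed counter array and the sample call are ours); it sorts an array of integers in place in a time linear in n plus the spread max(T) − min(T).

    def countsort(T):
        # Counting sort, as in the procedure countsort above: count how many
        # times each value occurs, then rewrite T from the counters.
        if not T:
            return
        lo, hi = min(T), max(T)
        C = [0] * (hi - lo + 1)          # C[v - lo] counts occurrences of the value v
        for v in T:
            C[v - lo] += 1
        k = 0
        for p in range(lo, hi + 1):
            for _ in range(C[p - lo]):
                T[k] = p
                k += 1

    T = [3, 1, 4, 1, 5, 9, 2, 6]
    countsort(T)
    print(T)                             # [1, 1, 2, 3, 4, 5, 6, 9]

Note that no comparison between elements of T is ever made here: all the work is done by arithmetic on the elements themselves, which is exactly what distinguishes sorting by transformation from sorting by comparison.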

B. (The annotations on the trees are intended to help follow the progress of the corresponding algorithms.A. C. Verify that the decision tree given in Figure 10.C. a decision tree that is valid for sorting n elements.1.T [1]. Finally.3 give the trees corresponding to the insertion sorting algorithm (Section 1.1. Figures 10. B. for each value of n.2 and 10.1.C else T . . to every deterministic algorithm for sorting by comparison there corresponds.B else if B < C then if A <C then T . C <. to the decision tree of Figure 10. procedure adhocsort3(T [l .C.A Similarly.A else T <.T [2]. Every valid decision tree for sorting n elements gives rise to an ad hoc sorting algorithm for the same number of elements.4 and Problem 2.C. when three elements are to be sorted.4) and to heapsort (Section 1. and C. 3]) A.B else T .9.A.T [3] if A < B then if B < C then { already sorted } else if A < C then T .B.B.1. The following problem will help you grasp these notions.1.1. Problem 10.1 there corresponds the following algorithm.3).A.<A<B B<A < C B<C<A A valid decision tree for sorting three elements. a decision tree is pruned if all its leaves are accessible from the root by making some consistent sequence of decisions.Introduction to Complexity 294 Chap.1 is valid for sorting three elements A.2.1..3. For example. 10 compatible with this relation.C. respectively.) Notice that heapsort C<B<A A<B<C 1 A<C<B Figure 10.B.
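The correspondence between a valid decision tree and an ad hoc sorting algorithm can be made concrete. The Python sketch below is our own transcription of the procedure adhocsort3 given above (it returns the sorted triple instead of modifying the array T); each nested test corresponds to an internal node of the decision tree for three elements, and each return to a leaf.

    def adhocsort3(A, B, C):
        # Sort three elements with at most three comparisons,
        # following the decision tree for three elements.
        if A < B:
            if B < C:
                return [A, B, C]          # already sorted
            elif A < C:
                return [A, C, B]
            else:
                return [C, A, B]
        else:
            if B < C:
                if A < C:
                    return [B, A, C]
                else:
                    return [B, C, A]
            else:
                return [C, B, A]

    print(adhocsort3(2, 3, 1))            # [1, 2, 3]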

2. Give the pruned decision trees corresponding to the algorithms for sorting by selection (Section 1.5) for the case of three elements. For instance.3 first tests whether B >A (answer : no). the tree is pruned : the leaf that would correspond to a contradictory answer "yes" to the third question has been removed. that is. nonetheless.1.1.1. if B <A A < C.1. In the two latter cases do not stop the recursive calls until there remains only a single element to be "sorted". (Despite this.1 295 ABC BAC ABB BAA x=C CBA I C<B<A x=C BAC BCA B.<C<B ABC A<B.5. and to quicksort (Section 4. (You will need a big 0 piece of paper !) The following observation is crucial: the height of the pruned decision tree corresponding to any algorithm for sorting n elements by comparison. 0 Problem 10.Decision Trees Sec.4). but beware of appearances : it occurs even more frequently with the insertion sorting algorithm than with heapsort when the number of elements to be sorted increases. It would now be possible to establish the correct verdict. Problem 10. the decision tree of Figure 10. Give the pruned decision trees corresponding to the insertion sorting algorithm and to heapsort for the case of four elements.<C<A I ACB A. but it. and then whether C > A (answer: yes). asks again whether B > A before reaching its conclusion.1. The three element insertion sort decision tree.4. 10. sometimes makes unnecessary comparisons. so that every leaf can be reached by some consistent sequence of decisions.4) and by merging (Section 4. This situation does not occur with the decision tree of Figure 10.C Figure 10. gives the number of comparisons carried . the distance from the root to the most distant leaf.2.) Thus heapsort is not optimal insofar as the number of comparisons is concerned.

C < A ?.) . a binary tree with t nodes in all cannot have more than It /21 leaves.3.Introduction to Complexity 296 Chap. Can we find a valid decision tree for sorting three elements whose height is less ? If so. We now prove more generally that such a tree is impossible. we shall have an ad hoc algorithm for sorting three elements that is more efficient in the worst case. For example. n! leaves. Any valid decision tree for sorting n elements contains at least Lemma 10. Now a binary tree of height h can have at most 2h+1-1 nodes in all (by another simple argument using mathematical induction.2. this time on the height of the tree). in this case the three comparisons B < A ?. out by this algorithm in the worst case.1.1.8. It is easy to show (by mathematical induction on the total number of nodes in the tree) that any binary tree with k leaves must have at least k -1 internal nodes. The lemma follows immediately. 10 yes C> B C A rBA BC C A BC A C BC AB BA A C BC A C A B BA BC AB CB AC C B BA CA A<B<C C<A<B A< C<B B<A<C C < B < A B<C<A I Figure 10.1. Proof. Try it : you will soon see that this cannot be done. The three element heapsort decision tree. The upper limit on the number of leaves of any pruned decision tree can be computed with Problem 5. and C < B ? situated on the path from the root to the appropriate verdict in the decision tree all have to be made.6. and hence it has at most 2" leaves.1. (It may have more than n! leaves if it is not pruned or if some of the leaves can only be reached when some keys are equal. To say the same thing differently. Any binary tree with k leaves has a height of at least [lg k 1. The decision trees we have seen for sorting three elements are all of height 3. a possible worst case for sorting three elements by insertion is encountered if the array is already sorted into descending order (C < B < A ). Lemma 10.

a verdict such as A< B <. 10. Suppose we ask our sorting algorithm not merely to determine the order of the elements but also to determine which ones.1. The previous problem therefore shows that heapsort is optimal to within a factor of 2 as far as the number of comparisons needed in the worst case is concerned.1 Decision Trees 297 Proof. n ? 2. (Some modifications of heapsort come very close to being optimal for the worst-case number of comparisons. By the crucial observation that precedes Lemma 10. 0 Prove that the number of comparisons carried out by heap- sort on n elements. Give exact formulas for the number of comparisons carried out in the worst case by the insertion sorting algorithm and by the selection sorting algorithm when sorting n elements.n +I comparisons in the worst case when sorting n elements. Prove further that if n is a power of 2.6. the insertion sorting algorithm makes 66 comparisons when sorting 12 elements. Any deterministic algorithm for sorting by comparison takes Theorem 10. In the worst case. What can you say about sorting by merging in the general case ? More precise analysis shows that [lg(n!)1 E n lgn -O(n).1. Problem 10. and lg(n!)eS2(n logn) (Problem 2.1.1.1. Since each comparison takes a time in 82(1).7. then mergesort makes n Ig n . This tree contains at least n! leaves by Lemma 10.8. This proof shows that any deterministic algorithm for sorting by comparison must make at least Iig(n!)l comparisons in the worst case when sorting n elements. Rework this problem assuming that there are three possible outcomes of a comparison between A and B : A < B .1.1. are equal.1. the algorithm thus needs at least [lg(n!)l comparisons in the worst case to sort n elements.17).1. A valid tree must be able to produce at least one verdict corresponding to each of the n! possible orderings of the n elements to be sorted. How well do these algorithms do when compared to the lower bound Fig (n!)] for n = 50 ? ** Problem 10. whereas heapsort makes 59 (of which the first 18 are made during construction of the heap). and that sorting by merging almost attains the lower bound. Give a lower bound on the number of comparisons required in the worst case to handle n elements.Sec.1. if any. is never greater than 2n ign. it has been proved that 30 comparisons are necessary and sufficient in the worst case for sorting 12 elements. A = B. This certainly does not mean that it is always possible to sort n elements with as few as [lg(n!)1 comparisons in the worst case.1. the algorithm takes a time in S2(n log n) in the worst case.2. . For example. a time in S2(n log n) to sort n elements in the worst case.) Problem 10. To every deterministic algorithm for sorting by comparison there corresponds a pruned valid decision tree for sorting n elements. In fact. Its height is therefore at least [lg(n!)1 by Lemma 10. orA >B. and yet rlg(12!)] = 29.C is not acceptable : the algorithm must answer either A < B < C or A < B = C. Proof.
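The figures quoted here are easy to check by machine. The following short Python script is ours; it computes the lower bound ⌈lg(n!)⌉ exactly, using integer arithmetic to avoid rounding problems. For n = 12 it gives 29, to be compared with the 66 comparisons made by insertion sorting, the 59 made by heapsort, and the 30 comparisons known to be necessary and sufficient.

    import math

    def ceil_lg(m):
        # ceiling of lg m for a positive integer m, computed exactly
        return m.bit_length() - 1 if m & (m - 1) == 0 else m.bit_length()

    for n in (3, 12, 50):
        print(n, ceil_lg(math.factorial(n)))   # the case n = 12 gives 29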

Since the dis- tance from each leaf to the root of A is one less than the distance from the same leaf to the root of T. define h (k) as the smallest value possible for H (X) for all the binary trees X with k leaves.1 has an average height (2+3+3+3+3+2)/6=8/3.k -1. In the first case the root is the only leaf in the tree and H (T) = 0. 1. Define H (T) as the sum of the depths of the leaves. Define the average height of T as the sum of the depths of all the leaves divided by the number of leaves.x) lg(k . 0 if k<-1 h(k) = k +min[h(i)+h(k-i) 11 <-i <-k-1 } ifk > 1 Now. Let T be a binary tree.1. the average height of T is H (T )/k . This difficulty disappears because it is impossible that h (k) = h (k) + k. Calculating the derivative gives g' (x) =1g x .. If we define h (0) = 0. for some 1 <. Let T be a binary tree with k leaves.9. n ] be an array sorted into ascending order.298 Introduction to Complexity Chap. For example. H (T) = 16 for the tree in Figure 10. (By comparison with Lemma 10.i ) + k 10<-i <-k } for every k > 1.1. How does binary search compare to this lower bound ? What lower bound on the number of comparisons do you obtain using the decision tree technique if the problem is simply to determine whether x is in the array. h (1) = 0. If each verdict is equally likely.i < k .1. the decision tree of Figure 10.3. How many comparisons between elements are needed in the worst case to locate x in the array ? As in Section 4. 10 Problem 10. At first sight this recurrence is not well founded since it defines h (k) in terms of itself (when we take i =0 or i =k in the minimum.i S n and T [i ] <.x). Any binary tree with k leaves has an average height of at least lgk. where x e R is such that 1 <. Suppose for simplicity that the n elements are all distinct. By a similar argument we obtain this time H (T) = H (B) +H (C) + k. which is .Ig (k -x). with the logical convention that T [0] = . By definition. Let T [I .o0 and T [n + 1 ] = + oo. or 2 children. rather than to 0 determine its position ? Decision trees can also be used to analyse the complexity of a problem on the average rather than in the worst case. and let x be some element. respectively. we see that there is little difference between the worst case and the average.x <.1. which also has k leaves.i leaves. In the third case the tree T is composed of a root and of two subtrees B and C with i and k . In the second case.1. which corresponds to the root having only one child). We can thus reformulate the recurrence that defines h (k). In particular. the single child is the root of a subtree A.3. we have H (T) = H (A) + k.) Proof.1. consider the function g (x) = x Igx + (k . then 8/3 is the average number of comparisons made by the sorting algorithm associated with this tree. the preceding discussion and the principle of optimality used in dynamic programming lead to h (k) = min I h (i) + h (k . The root of T can have 0. Lemma 10. the problem is to find an index i such that 0 <.1. For k >-1. For example.x < T [i + 11.

How do these values compare to the number of comparisons performed by these algorithms in the worst case? * Problem 10.3.k lg k for every tree T with k leaves. x E 1R } . Using the result obtained previously. it is therefore efficient even if n is large and if the array is supplied on a magnetic tape. (Optionally : give an intuitive interpretation of this formula in the context of the average height of a tree with k leaves. By definition. Proof. Prove that this also implies that h (k) >. it is therefore at least Ig k. if x = k12. Determine the number of comparisons performed on the average by the insertion sorting algorithm and by the selection sorting algorithm when sorting n elements. Let t = Llg k and 1 =k-2.1. h (k) ? k + min { g (i) I 1 < i 5 k -1. * Problem 10. Problem 10.1. The problem consists of returning in descending order the k largest elements of T.2. . Prove that h (k) = k t + 21 . Because min X ? min Y for any two nonempty sets X and Y such that X c Y. On the other hand. both in the worst case and on the average.1.10.2 and 10.k.12. 10.1. Let T [1 . Let k > 1. h(k)=k +min{h(i)+h(k-i) I 1 <i <-k-1 }. By the induction hypothesis.1. Since the second derivative is positive..3. give an algorithm able to solve this problem in a time in 0 (n log k) and a space in O (k) in the worst case.) Theorem 10. i E IN }. g (x) attains its minimum at x = k / 2.1. that is.1 Decision Trees 299 zero if and only if x = k -x. we have h (k) ? k + g (k / 2) = k lg k. Suppose by the induction hypothesis that h (j) >. it follows that h (k) > k + min {g (x) I < x 5 k -1.1. It therefore takes an average time in Q (n log n). Your algorithm should make no more than one sequential pass through the array T . The average height of T being H (T )lk .Sec. This shows that H (T) >. Follows immediately from Lemmas 10. where h (k) is the function used in the proof of Lemma 10. Conclude that it must take a time in Q(k log n). Any deterministic algorithm for sorting by comparison makes at least lg(n!) comparisons on the average to sort n elements. Justify your analysis of the time and space used by your algorithm. This minimum is g (k / 2) = (k lg k) .11. n ] be an array and k <. Prove that any deterministic algorithm that solves this problem using comparisons between the elements must make at least (k / 2) lg (n / 2) comparisons.n an integer. The base k = 1 is immediate. The proof that h (k) >.k lg k.k lg k for every integer k ? 1 now follows by mathematical induction.j lg j for every strictly positive integer j < k -1.
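The last of these problems, returning the k largest elements of T in descending order in a time in O(n log k) and a space in O(k), can be met by keeping a heap of the k largest elements seen so far during a single sequential pass. The Python sketch below, which uses the heapq module, is one such solution of ours; it is not necessarily the solution the exercise has in mind.

    import heapq

    def k_largest_descending(T, k):
        # One sequential pass over T; the heap never holds more than k elements,
        # so each update costs a time in O(log k).
        heap = []                      # min-heap of the k largest elements seen so far
        for x in T:
            if len(heap) < k:
                heapq.heappush(heap, x)
            elif x > heap[0]:
                heapq.heapreplace(heap, x)
        return sorted(heap, reverse=True)      # final sort of k elements: O(k log k)

    print(k_largest_descending([3, 1, 4, 1, 5, 9, 2, 6], 3))   # [9, 6, 5]

Each of the n heap updates takes a time in O(log k) and uses only the O(k) storage of the heap, so the whole pass runs in a time in O(n log k).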

10.2 REDUCTION

We have just shown that any algorithm for sorting by comparison takes a minimum time in Ω(n log n) to sort n elements. On the other hand, we know that heapsort and mergesort both solve the problem in a time in O(n log n), both on the average and in the worst case. Except for the value of the multiplicative constant, the question of the complexity of sorting by comparison is therefore settled: a time in O(n log n) is both necessary and sufficient for sorting n elements. Unfortunately, it does not often happen in the present state of our knowledge that the bounds derived from algorithmics and complexity meet so satisfactorily.

Because it is so difficult to determine the exact complexity of most of the problems we meet in practice, we often have to be content to compare the relative difficulty of different problems. There are two reasons for doing this. Suppose we are able to prove that a certain number of problems are equivalent in the sense that they have about the same complexity. Any algorithmic improvement in the method of solution of one of these problems now automatically yields, at least in theory, a more efficient algorithm for all the others. From a negative point of view, if these problems have all been studied independently in the past, and if all the efforts to find an efficient algorithm for any one of them have failed, then the fact that the problems are equivalent makes it even more unlikely that such an algorithm exists. Section 10.3 goes into this second motivation in more detail. In the remainder of this section we shall see a number of examples of reduction from a variety of application areas.

Definition 10.2.1. Let A and B be two solvable problems. A is linearly reducible to B, denoted A ≤ˡ B, if the existence of an algorithm for B that works in a time in O(t(n)), for any function t(n), implies that there exists an algorithm for A that also works in a time in O(t(n)). When A ≤ˡ B and B ≤ˡ A both hold, A and B are linearly equivalent, denoted A ≡ˡ B. Even if we are not able to determine the complexities of A and B exactly, when A ≡ˡ B we can be sure that they are the same.

Problem 10.2.1. Prove that the relations ≤ˡ and ≡ˡ are transitive.

A less restrictive definition of linear reduction is obtained if we content ourselves with comparing the efficiency of algorithms for A on instances of size n with the efficiency of algorithms for B on instances of size in O(n).

* Problem 10.2.2. For the purposes of this problem only, write A ≤g B if the existence of an algorithm for B that works in a time in O(t(n)), for any function t(n), implies that there exists an algorithm for A that works in a time in O(t(O(n))). Show with the help of an explicit example that the notions A ≤ˡ B and A ≤g B are not equivalent even if there exists an algorithm for B that works in a time in O(p(n)), where p(n) is a polynomial.

This can be interpreted literally as meaning that A <_ 1 B under the assumption that B is smooth. Prove that any strongly quadratic function is at least quadratic. and logn is smooth. this does not imply that the actual time taken by a specific implementation of the algorithm must also be given by an eventually nondecreasing function.20. implies that there exists an algorithm for A that also works in a time in 0 (t(n)). all these theorems are constructive : the algorithm for B follows from the algorithm for A and the proof of the corresponding theorem. By "reasonable". Give an explicit example of an eventually nondecreasing function that is at least quadratic but not strongly quadratic.8. strongly and supra linear functions are defined similarly. .2 Reduction 301 Let us extend the notion of smooth functions (introduced in Section 2. for any smooth function t(n). No problem could be smooth without this restriction to reasonable algorithms because any problem that can be solved at all can be solved by an algorithm that takes a time in Q(2'). In particular.8.5) to algorithms and problems.2.Sec. assuming B is smooth". such as "A <_ ' B.1. Moreover. counting the multiplications at unit cost. 0 iii. The actual time taken by any reasonable implementation of this algorithm is not an eventually nondecreasing function of the exponent. Most theorems in this section are stated conditionally on a "reasonable" assumption. A problem is smooth if any reasonable algorithm that solves it is smooth. Nonetheless. it is supra quadratic if it is eventually nondecreasing and if there exists an CEIR+ such that t(an) >_ a2+et(n) for every positive integer a and every sufficiently large integer n. It is strongly at least quadratic (strongly quadratic for short) if it is eventually nondecreasing and if t(an) >_ a2t(n) for every positive integer a and every sufficiently large integer n.5) that the time it takes to compute a"mod m is a linear function both of lg n and the number of Is in the binary representation of n . From a more practical point of view it also means that the existence of an algorithm for B that works in a time in 0 (t(n)). A function t : IN -* It* is at least quadratic if t(n) E c (n2). Even though a smooth function must be eventually nondecreasing by definition. which cannot be smooth. (Hint: apply Problem 2. At least. These notions extend to algorithms and problems as in the case of smooth functions. Problem 10. We have seen (Problem 4. i. this algorithm is smooth because it takes a time in O(log n ). Finally. we mean an algorithm that does not purposely waste time. An algorithm is smooth if it takes a time in O(t(n)) for some smooth function t.) ii.1. it takes longer to compute a 31st power than a 32nd. 10. Consider for instance the modular exponentiation algorithm dexpo of Section 4. Show that n 2log n is strongly quadratic but not supra quadratic.3.

No confusion should arise from this. multiplication of upper triangular matrices.9).) Once again this means that any new algorithm that allows us to multiply upper triangular matrices more efficiently will also provide us with a new. and the resulting algorithm is numerically unstable. and inversion of nonsingular upper triangular matrices. 10 10. that is. M. Proof. We saw in Section 4. contrary to the intuition that may suggest that this problem will inevitably require a time in S2(n3). Consider the following matrix product: . referring to an algorithm that runs in a time in O(n 2) as quadratic. Notice that the problems considered are at least quadratic in the worst case because any algorithm that solves them must look at each entry of the matrix or matrices concerned. this is incorrect because the running time should be given as a function of the size of the instance.376). assuming MT is smooth. In what follows we measure the complexity of algorithms that manipulate n x n matrices in terms of n. Theorem 10.1. Suppose there exists an algorithm that is able to multiply two n x n upper triangular matrices in a time in 0 (t (n)). more efficient algorithm for inverting arbitrary nonsingular matrices (at least in theory). respectively. (The problem of inverting an arbitrary nonsingular matrix is also linearly equivalent to the three preceding problems (Problem 10. MQ <_ ' MT. experience might well lead us to believe that inverting nonsingular upper triangular matrices should be an operation inherently more difficult than multiplying them.9 that a time in O (n 2. Formally speaking. it implies that we can invert any nonsingular n x n matrix in a time in 0 (n 2.2. Any algorithm that can multiply two arbitrary square matrices can be used directly for multiplying upper triangular matrices. that is.81) (or even 0 (n 2-376)) is sufficient to multiply two arbitrary n x n matrices.2. multiplication of arbitrary square matrices. but the proof of this is much more difficult. In particular. MT <_ I MQ.1 Reductions Among Matrix Problems An upper triangular matrix is a square matrix M whose entries below the diagonal are all zero. where t (n) is a smooth function. Let A and B be two arbitrary n x n matrices to be multiplied.2. Theorem 10. and IT. We denote these three problems.2.2. Is it possible that multiplication of upper triangular matrices could be carried out significantly faster than the multiplication of two arbitrary square matrices ? From another point of view. by MQ. so that a time in O(n2) is really linear.Introduction to Complexity 302 Chap. We shall show under reasonable assumptions that MQ I MT =_1 IT. it requires a slightly stronger assumption. Proof. = 0 when i > j. MT.

10. Consider the following matrix product : I AO 0IB x 0 0/ 1 -A AB 0 1 -B 00 1 100 = 010 001 where I is the n x n identity matrix. where t (n) is strongly quadratic. and D each of size n / 2 x n / 2 such that Cl A= I B0DJ where B and D are upper triangular and C is arbitrary.2. This product shows us how to obtain the desired result AB by inverting the first of the 3n x 3n upper triangular matrices.2. Proof. . (In fact. inversion is trivial. Suppose for simplicity that n is a power of 2.) Proof. this operation takes a time in 0 (t(n)). assuming MQ is strongly quadratic. Because t(n) is at least quadratic. where t (n) is a smooth function. n 2 E O (t (n)). Consequently. Otherwise decompose A into three submatrices B.4 for the general case. The time required for this operation is in 0 (n 2) for the preparation of the two big matrices and the extraction of AB from their product. By Problem 10. weaker but less natural hypothesis suffices : it is enough that MQ be supra linear.4. C. plus 0 (t (3n)) for the multiplication of the two upper triangular matrices.3. assuming IT is smooth. and D-'.' IT. a Theorem 10. This product shows us how to obtain the desired result AB by multiplying two upper triangular 3n x 3n matrices.Sec.2 Reduction 303 0 A 0 000 0 0 AB 0 0 0 x 0 0 0 00B = 0 0 0 00 0 0 0 0 where the "0" are n x n matrices all of whose entries are zero. and D-'.5. and then multiplying the matrices B-'.2. the total time required to obtain the product AB is in 0 (t(n)). Now consider the following product : BC 0D X B-1 -B-'CD-' 0 D-' 10 01 This product shows us how to obtain A-' by first calculating B-1.) If n = 1. the matrices B and D are nonsingular. C. t (3n) E O (t (n)). The upper triangular matrices B and D. Suppose there exists an algorithm that is able to multiply two arbitrary n x n matrices in a time in 0 (t (n)). Theorem 10. MQ <. IT <' MQ.2. Let A be a nonsingular n x n upper triangular matrix to be inverted. (See Problem 10. Let A and B be two arbitrary n x n matrices to be multiplied. By the smoothness of t(n). Suppose there exists an algorithm that is able to invert a nonsingular n x n upper triangular matrix in a time in O (t(n)). As in the proof of the previous theorem.

Complete the proof that IT <_' MQ. singular matrix. diagonal are 1. and Z be three sets of nodes.6. are smaller than the original matrix A. this implies that g(n) e 0 (t (n) I n is a power of 2).8.2. 10 which we now have to invert. What assumptions do you need? Denote by IQ the problem of inverting an arbitrary non** Problem 10.3(iii).2.9.2. 2.see Problem t7 10. using the assumption that t(n) is strongly quadratic (or at least supra linear).4 really shows is that IT2 <_' MQ.2.7. What assumptions do you matrix. with the natural conventions that x +(+oo) =+oo and min(x. by MS the problem of multiplying symmetric matrices.4. Let X.6.) 10. Denote by SU the problem of squaring a unitary upper triangular MQ under suitable assumptions. Prove that if A is a nonsingular upper triangular matrix whose size is even.13(iv). and if t(n) is strongly quadratic.2. note that it is enough to assume that t(n) is supra linear.2. Prove that IQ =' MQ.) An upper triangular matrix is unitary if all the entries on its Problem 10.2. Let f : X x Y -4 IR°° and g : Yx Z -4 IR°° be two functions representing the cost of going directly from one node to another. Using the divide-and-conquer technique suggests a recursive algorithm for inverting A in a time in O(g(n)) where g(n) a 2g(nl2)+2t(nl2)+O(n2). (Hint : apply Problem 2. Problem 10. Prove that MS =' MQ under suitable assumptions. Assume that both IQ and MQ are smooth and supra quadratic.5. By Problem 10. Y.4.Introduction to Complexity 304 Chap.2. then B and D are nonsingular. All that the proof of theorem 10. (Note : this reduction would not go through should an algorithm that is capable of multiplying n x n matrices in a time in 0 (n 2 log n) exist . Denote Problem 10.+oo) = x for all x e R-.2. Prove that if g(n) E 2g(n / 2) +O (t (n)) when n is a power of Problem 10. Denote by . Problem 10. Prove that SU need ? A matrix A is symmetric if Aid = Aji for all i and j. then g(n) E O (t(n) I n is a power of 2). The fact that t(n)ES2(n2) and the assumption that t(n) is eventually nondecreasing (since it is strongly quadratic) yield g(n) e 2g(n / 2) +O (t(n)) when n is a power of 2. An infinite cost represents the absence of a direct route.2 Reductions Among Graph Problems In this section IR°° denotes IR*u I+-}. and if B and D are defined as in the proof of theorem 10.2.3.2. Let IT2 be the problem of inverting nonsingular upper triangular matrices whose size is a power of 2.

we do not discuss them here. The preceding notation becomes particularly interesting when the sets X. and also the functions f and g. we saw in Section 5. This definition is not practical because it apparently implies an infinite computation .) So the definition of f * can be taken to give us a direct algorithm for calculating its value in a time in O(n4). We thus have that f * = min ( f' I 0 <_ i < n . there is no obvious way of adapting to this problem Strassen's algorithm for ordinary matrix multiplication (Section 4. At first sight. Thus it is possible to get away with a time in O(n 3) after all. The existence of algorithms asymptotically more . However. Consequently. min(f .Sec. } There is no equivalent to this operation in the present context since taking the minimum is not a reversible operation. respectively). z) I y E Y }. it suffices to consider only those paths whose length is less than the number of nodes in X. Similarly. Because they are quite complicated and have only theoretical advantages. Let this number be n. which we write f*. By analogy. f ° represents the cost of going from one node to another while staying in the same place.2 305 Reduction fg the function h : X x Z -* IR°° defined for every x E X and z (=Z by h (x . there exist more efficient algorithms for this problem. Could it be that the problems of calculating fg and f * are of the same complexity ? The following two theorems show that this is indeed the case : these two problems are linearly equivalent. The minimum cost of going from one node to another without restrictions on the number of nodes on the path. but do not confuse this operation with the composition of functions. Notice the analogy between this definition and ordinary matrix multiplication (where addition and multiplication are replaced by the minimum operation and addition.9). Unfortunately. too). The straightforward algorithm for calculating fg takes a time in O(n3) if the three sets of nodes concerned are of cardinality n. y) + g (y. namely Floyd's algorithm. gives the minimum cost of going from one node of X to another (possibly the same) while passing through exactly one intermediate node (possibly the same. coincide. f never takes negative values. Any path that passes twice through the same node can therefore be shortened by taking out the loop thus formed. and Z. which we shall write f 2. z) = min { f (x. is therefore f * = min { f' I i >_ 0 1. Y. computing f * for a given function f seems to need more time than calculating a simple product fg. without increasing the cost of the resulting path. Nevertheless. f2) gives the minimum cost of going from one node of X to another either directly or by passing through exactly one intermediate node. In this case ff . (The intuitive reason is that Strassen's algorithm does subtractions. However.4 a dynamic programming algorithm for calculating shortest paths in a graph. 10. This represents the minimum cost of going from x to z passing through exactly one node in Y. so that ° f (x' y) 0 ifx =y +00 otherwise. The meaning of f' is similar for any i > 0. it is not even immediately clear that f * is well defined. This calculation is nothing other than the calculation of f * .
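Written out as code, the product fg and the closure f* look exactly like matrix multiplication with (min, +) in place of (+, ×). The Python sketch below is ours: a cost function on n nodes is represented by an n × n list of lists, with float('inf') standing for the absence of a direct route, and f* is computed directly from its definition as min{ f^i : 0 ≤ i < n }, in a time in O(n⁴).

    INF = float('inf')

    def minplus(f, g):
        # (min, +) product: h[x][z] = min over y of f[x][y] + g[y][z]
        n = len(f)
        return [[min(f[x][y] + g[y][z] for y in range(n)) for z in range(n)]
                for x in range(n)]

    def f_star(f):
        # Minimum cost between every pair of nodes, no restriction on path length.
        n = len(f)
        identity = [[0 if x == y else INF for y in range(n)] for x in range(n)]
        result, power = identity, identity
        for _ in range(n - 1):
            power = minplus(power, f)
            result = [[min(result[x][z], power[x][z]) for z in range(n)]
                      for x in range(n)]
        return result

    f = [[INF, 3, INF],
         [INF, INF, 2],
         [1, INF, INF]]
    print(f_star(f))   # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]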

1R°° as follows : f(u. respectively. In the case when u EX.1R°° and g : Y x Z .2. .Introduction to Complexity 306 Chap. which is precisely the definition of f g (u. let us find the value of h 2(u . v) = +. it therefore follows that h 2(u . Y. v) = g (y. and let f : X x Y .5. An algorithm such as Dijkstra's. we have 2 h (u. h 3(u . v) = +o. z) _ + oo when x e X and z E Z. Theorem 10. y) and h (y.v) if u EX and veZ + otherwise . v) =min { h (u . h 2(U. Therefore h (u. Consequently. The conclusion from all this is that h* = min(h °.0 for every WE W.v)= g(u. y) + g (y. w) + which implies that h 3(U. y E Y. By definition. Y. at least in theory. Now. h 2) is given by the following equation. But h (u . v). Define the function h : W x W. y) :. Let W = X u Y u Z. v) I W E W 1. v) is easier. If u 0 X or if v 0 Z. By the definition of h. As in the previous section. v E W. Again. n 2 and n 3. Summing up. Suppose there exists an algorithm that is able to calculate h* in a time in O(t(n)). h 2(U. would be considered quadratic even though it is linear in the number of edges (for dense graphs). w) + h 2(w . v) = h 2(w . for instance. w) _ + oo when w E X whereas h 2(W. Denote by MUL and TRC the problems consisting of calculating fg and f*. Let X. v) I w E W I. v) # +oo when y E Y is to have v EZ. The calculation of h 3(u . and the only way to have h (y. MUL <_ ' TRC. v) I y c: Y ). v) _ + oo for all u. it is impossible that h (u.v) ifuEY andvEZ +00 otherwise . v) # +oo simultaneously unless w c= Y. v) = hh 2(u . By definition. where n is the cardinality of the set W such that h : W x W . v) = min ( h (u.= v) fg(u. w) + h (w. v). y) + h (y. v) to conclude that h 2(U. the problems considered are at least quadratic in the worst case because any algorithm that solves them must look at each edge concerned. v) for all i>3. v) = min { f (u . and Z are disjoint. 10 efficient than O(n3) for solving the problem of calculating fg therefore implies that Floyd's algorithm for calculating shortest routes is not optimal. But the only way to have h (u. v) I y E Y 1. and Z be three sets of nodes of cardinality n . and v e Z it suffices to note that h (u . y) = f (u . min ( h (u . The same holds for h ` (u . Notice in particular that h (x. for a smooth function t(n). v) = + 00 when w e X.1R°° be two functions for which we wish to calculate fg.& + oo when y E Y is to have u EX.v) ifuEX andvEY h(u. respectively. h . time complexities will be measured as a function of the number of nodes in the graphs concerned.1R°°. Proof. w) # +oo and h (w. assuming TRC is smooth. Suppose without loss of generality that X.

u) = 0 for the unique u EH. b : J x K . a weaker but less natural hypothesis suffices again : it is enough that MUL be supra linear.2.v) h*(u. e*(u. be*d ).) Proof.Sec.6.2 Reduction 307 0 ifu =v f(u. each containing half the nodes. v ). Notice that e (u . assuming MUL is strongly quadratic. n 2. n 3))). Let n = n I +n 2+n 3 be the cardinality of W. (In fact.1R°°.1R. v) ifu E K and v E K To calculate h* using divide-and-conquer. Let e : J x J .11 for the general case).be a cost function for which we wish to calculate h*. and Z such that f : X x Y -+ 1R. Suppose there exists an algorithm that is able to calculate fg in a time in 0 (t (max(n I . for u and v in J.v) h*(u.v) = if uEX andvEY ifuEY andvEZ fg(u. the time g(n) required by this approach is characterized by the equation g(n) E 2g(n / 2)+0 (t(n)). and d : KxJ -* R. . each of size n / 2. The other restrictions are obtained similarly.as the restrictions of h to the corresponding subdomains. n 2.v) = < if ueJ and veJ e*bc* (u. If n = 1.v) +oo otherwise Therefore the restriction of h* to X x Z is precisely the product fg we wished to calculate. TRC <_' MUL. Assume t (n) is strongly quadratic. v) if u E J and v E K c*de* (u. c*de*bc* )) (u . n 2. however. v) if u E K and V E J (min(c*. n 3))) + O (n 2)g 0 (t (max(n I . Define a : J x J -> IR°°. n 2. it is obvious that h* (u . the nodes in K can be used as often as necessary. which implies that g(n) E 0 (t(n) I n is a power of 2). where n I . c : K x K -* 1R°°. In other words. and n 3 are the cardinalities of the sets X. As in theorem 10. e* is the restriction to J x J of the h* that we wish to calculate. The minimum cost for going from u to v with no restrictions on the path is therefore e* (u. n 3))) because t(n) is smooth and at least quadratic.2. v) when u and v are both in J. Otherwise split H into two disjoint subsets J and K. Suppose for simplicity that n is a power of 2 (see Problem 10. we therefore solve recursively two instances of the TRC problem. The desired result is then obtained after a number of instances of the MUL problem.1R°° be the function given by e = min(a . Using the algorithm for calculating h* thus allows us to compute fg in a time in t (n) + O (n 2)C O (t(3 max(n i . Y. Theorem 10.and g : Y x Z -* 1R°°.4.2. 10. or not used at all if this is preferable. Let H be a set of cardinality n and let h : H x H .v) if uEX and vEZ g(u. represents the minimum cost for going from u to v without passing through any node in J (not counting the endpoints) . in order to obtain c* and e*.

u) for every u . regardless of the cost of the path. We saw that Warshall's algorithm (Problem 5.2) solves this problem in a time in 0(n3). Do you believe that MULBS=1 TRCBS? If not. does one of these two problems appear strictly more difficult than the other? Which one? Justify your answer. TRCS. calculating f * comes down to determining for each pair of nodes whether or not there is a path joining them.7. Chap.2. No algorithm is known that can solve MULB faster than MQ. We saw that it is possible to multiply two integers of size n in a time in 0 (n 1.2. the proof that MUL =/ TRC can easily be adapted to show that MULB =' TRCB. and TRCBS. Problem 10. that using Strassen's algorithm requires a number of arithmetic operations in 0 (n 2-81 ) . v E X .6 really shows is that TRC2 5 1 MUL. 10. Let TRC2 be the problem of calculating h* for h : X x X . Furthermore. restricted cost functions. It is clear that MULB <_ 1 MUL and TRCB <_ 1 TRC since the general algorithms can also be used to solve instances of the restricted problems. MULBS.11.1R°° when the cardinality of X is a power of 2.4.+-) and g: Y x Z -* (0. Call these four problems MULS. Assuming we count arithmetic operations at unit cost. When the range of the cost functions is restricted to 10. A cost function f : X x X -4R.2. 10 Prove formally that the preceding formula for h* is correct.10. the time in 0 (n 3) taken by Warshall's algorithm counts only Boolean operations as elementary.+-l be two Problem 10. Show that the arithmetic can be done modulo p.81). * Problem 10.7.12). Note. Let f : X x Y -* 10. 4. Complete the proof that TRC 5 ' MUL.2. Strassen's algorithm can therefore be used to solve the problems MULB and TRCB in a time in 0 (n 2.5). and 9. respectively. Conclude that MULB <_ 1 MQ. v) = f (v . where MQ is the problem of multiplying arbitrary arithmetic square matrices (Problem 10.2.13. All the proof of theorem 10. Let MULB and TRCB be the problems consisting of calculating fg and h*.12.3 Reductions Among Arithmetic and Polynomial Problems We return to the problems posed by the arithmetic of large integers (sections 1.2. however.is symmetric if f (u . +00 }. What can we say about integer division and taking square roots ? Our everyday experience leads us to believe that the second of . Unlike the case of arbitrary cost functions.2. when the cost functions are restricted in this way.2. Prove that MULBS=1 MULB. thus showing that Warshall's algorithm is not optimal. where p is any prime number larger than n. This is interesting because MULB <_ 1 MQ. Each of the four problems discussed earlier has a symmetric version that arises when the cost functions involved are symmetric.Introduction to Complexity 308 * Problem 10.59) and even in 0 (n log n log log n). show how to transform the problem of calculating fg into the computation of an ordinary arithmetic matrix multiplication.

these problems is genuinely more difficult than multiplication. Once again this turns out not to be true.

Let SQR, MLT, and DIV be the problems consisting of squaring an integer of size n, of multiplying two integers of size n, and of determining the quotient when an integer of size 2n is divided by an integer of size n, respectively. (For simplicity we measure the size of integers in bits; this choice is not critical: the time taken by the various algorithms would be in the same order if given as a function of the size of their operands in decimal digits or computer words.) Clearly, these problems are at least linear because any algorithm that solves them must take into account every bit of the operands involved.

Theorem 10.2.7. SQR ≡ˡ MLT ≡ˡ DIV, assuming these three problems are smooth and MLT is strongly linear (weaker but more complicated assumptions would suffice).

Proof outline. The full proof of this theorem is long and technical. Its conceptual beauty is also defaced in places by the necessity of using an inordinate number of ad hoc tricks to circumvent the problems caused by integer truncation (see Problem 10.2.22). For this reason we content ourselves in the rest of this section with showing the equivalence of these operations in the "cleaner" domain of polynomial arithmetic. Nonetheless, we take a moment to prove that SQR ≡ˡ MLT, assuming SQR is smooth (a weaker assumption would do).

Clearly, SQR ≤ˡ MLT, since squaring is only a special case of multiplication. To show that MLT ≤ˡ SQR, suppose there exists an algorithm that is able to square an integer of size n in a time in O(t(n)), where t(n) is smooth (it is enough to assume that t(n+1) ∈ O(t(n))). Let x and y be two integers of size n to be multiplied. Assume without loss of generality that x ≥ y. The following formula enables us to obtain their product by carrying out two squaring operations of integers of size at most n+1, a few additions, and a division by 4:

    xy = ((x+y)² − (x−y)²)/4 .

Since the additions and the division by 4 can be carried out in a time in O(n), we can solve MLT in a time in 2t(n+1) + O(n) ⊆ O(t(n)) because t(n) is smooth and t(n) ∈ Ω(n).

We have seen in Section 9.4 how to multiply two polynomials of degree n in a time in O(n log n), provided that the necessary arithmetic operations on the coefficients can be counted at unit cost. We now show that the problem of polynomial division is linearly equivalent to that of multiplication. Notice that a direct approach using discrete Fourier transforms, which works so well for multiplying two polynomials, is inapplicable in the case of division unless the two polynomials concerned divide one another exactly with no remainder. For example, let p(x) = x³ + 3x² + x + 2 and d(x) = x² + x + 2. The quotient of the division of p(x) by d(x) is q(x) = x + 2.
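The identity xy = ((x+y)² − (x−y)²)/4 behind the reduction MLT ≤ˡ SQR can be tried out mechanically. In the Python sketch below (ours), the function square stands in for the hypothetical fast squaring algorithm.

    def square(n):
        return n * n          # stand-in for the hypothetical fast squaring algorithm

    def multiply_via_squares(x, y):
        # xy = ((x + y)^2 - (x - y)^2) / 4, using only squarings and linear-time work
        return (square(x + y) - square(x - y)) // 4

    assert multiply_via_squares(1234, 5678) == 1234 * 5678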

Lp1(x)/Lp2(x)/d(x)Jj = L(p1(x)d(x))/p2(x)]. if p(x)=x3+3x2+x+2.3x .) Prove that if p (x). The polynomial q (x) is of degree n .2. Despite this difficulty. which we denote p* (x).14. Prove the existence and the uniqueness of the quotient and Problem 10. (There is a very simple proof.15. but q (2) = 4 # 24/8. provided that the degree of p1(x) P I(x) d i(x) is not more than twice the degree of LP 2(X)ld (x)j. d i(x) and d2(x) are three nonzero polynomials. it is possible to determine the quotient and the remainder produced when a polynomial of degree 2n is divided by a polynomial of degree n in a time in 0 (n log n) by reducing these problems to a certain number of polynomial multiplications calculated using the Fourier transform. Then there exists a unique polynomial r (x) of degree strictly less than m and a unique polynomial q (x) such that p (x) = q (x) d (x) + r (x). By convention the polynomial p (x) = 0 is of degree -1. Notice that p (x) and p* (x) are always of the same degree. then p*(x)= x 3 . the remainder. Recall that p (x) = E. We even have that p (1) = 7 is not divisible by d (1) = 4. MLTP of multiplying two polynomials of degree at most n. and d* (x). Let p(x) = x 3 +x 2 + 5 x + 1 Problem 10. Calculate Prove that if p (x) is a nonzero polynomial and if Problem 10.2. The inverse of p (x). respectively. and if d (x).'_ 0 ai x' is a polynomial of degree n provided that an # 0. By analogy with the integers. is defined by p*(x)= Lx 2nIp(x)]. p* (x). the quotient is denoted by q(x) = Lp(x)Id(x)J.2.17. nomials.m if n > m -1 . and iii. LLp(x)/di(x)j/d2(x)J = Lp(x)/(di(x)d2(x))].3 x 2 + 8x . otherwise q (x) = 0.16. and d(x)=x-2. Let p (x) be a nonzero polynomial of degree n.23. p I (x) and p 2(x) are three arbitrary poly* Problem 10. 10 case p (2) = 24 and d (2) = 8. INVP of determining the inverse of a polynomial of degree n and DIVP of calculating the . 11 Consider the four following problems : SQRP consists of squaring a polynomial of degree n. then p2(x) _ p i(x)d 2(x) ± p2(x)d 1(x) d i(x)d2(x) ± d2(x) in particular.2. the quotient and the remainder of the division of p (x) by d (x). We call q (x) and r (x). Let p (x) be a polynomial of degree n. This is all due to the remainder of the division r (x) = . We also need the notion of an inverse. Lp1(x)/d(x)] ± Lp2(x)/d(x)] = L(p1(x)±p2(x))/d(x)]. ii. L p (x)/d (x)].Introduction to Complexity 310 Chap. q (x) = p* (x) then q* (x) = p (x). For example. Show that if both p (x) and d (x) are monic polynomials (the coefficient of highest degree is 1) with integer coefficients then both q (x) and r (x) have integer coefficients and q (x) is monic (unless q (x) = 0).2. and let d (x) be a nonzero polynomial of degree m.


quotient of the division of a polynomial of degree at most 2n by a polynomial of degree n. We now prove under suitable assumptions that these four problems are linearly equivalent, using the following chain of reductions: MLTP ≤ˡ SQRP ≤ˡ INVP ≤ˡ MLTP and INVP ≤ˡ DIVP ≤ˡ INVP. Again, all these problems are at least linear. We assume arithmetic operations can be carried out at unit cost on the coefficients.
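The quotient ⌊p(x)/d(x)⌋ and the inverse p*(x) = ⌊x^{2n}/p(x)⌋ are easy to experiment with. The following Python sketch is ours: polynomials are represented as lists of coefficients from the constant term up, exact rational arithmetic is used, and the example given earlier, ⌊(x³+3x²+x+2)/(x²+x+2)⌋ = x + 2, comes out exactly.

    from fractions import Fraction

    def poly_divmod(p, d):
        # Quotient and remainder of the division of p(x) by d(x); coefficients
        # are listed from the constant term up to the leading term.
        p = [Fraction(c) for c in p]
        d = [Fraction(c) for c in d]
        q = [Fraction(0)] * max(len(p) - len(d) + 1, 1)
        r = p[:]
        while len(r) >= len(d) and any(r):
            shift = len(r) - len(d)
            coef = r[-1] / d[-1]
            q[shift] = coef
            for i, c in enumerate(d):
                r[shift + i] -= coef * c
            r.pop()                      # the leading term is now zero
        return q, r

    def inverse(p):
        # p*(x) = floor(x^(2n) / p(x)) where n is the degree of p
        n = len(p) - 1
        return poly_divmod([0] * (2 * n) + [1], p)[0]

    q, r = poly_divmod([2, 1, 3, 1], [2, 1, 1])   # p = x^3+3x^2+x+2, d = x^2+x+2
    print(q, r)                                   # quotient x + 2, remainder -3x - 2
    print(inverse([2, 1, 3, 1]))                  # p*(x) = x^3 - 3x^2 + 8x - 23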

Theorem 10.2.8. MLTP ≤ˡ SQRP, assuming SQRP is eventually nondecreasing.

Proof. Essentially the same as the proof that MLT ≤ˡ SQR given in Theorem 10.2.7. There is no need for a smoothness assumption this time because the sum or difference of two polynomials of degree n cannot exceed degree n. □

Theorem 10.2.9. SQRP ≤ˡ INVP, assuming INVP is smooth.

Proof. The intuitive idea is given by the following formula, where x is a nonzero real number:

    x² = (x⁻¹ − (x+1)⁻¹)⁻¹ − x .

A direct attempt to calculate the square of a polynomial p (x) using the analogous formula (p* (x) - (p (x) + 1)* )* -p (x) has no chance of working : the degree of this
expression cannot be greater than the degree of p (x). This failure is caused by truncation errors, which we can, nevertheless, eliminate using an appropriate scaling factor.

Suppose there exists an algorithm that is able to calculate the inverse of a polynomial of degree n in a time in O(t(n)), where t(n) is a smooth function. Let p(x) be a polynomial of degree n ≥ 1 whose square we wish to calculate. The polynomial x^{2n} p(x) is of degree 3n, so

    [x^{2n} p(x)]* = ⌊x^{6n} / (x^{2n} p(x))⌋ = ⌊x^{4n} / p(x)⌋ .

Similarly,

    [x^{2n} (p(x)+1)]* = ⌊x^{4n} / (p(x)+1)⌋ .

By Problem 10.2.17,

    [x^{2n} p(x)]* − [x^{2n} (p(x)+1)]* = ⌊x^{4n}/p(x)⌋ − ⌊x^{4n}/(p(x)+1)⌋
                                        = ⌊(x^{4n}(p(x)+1) − x^{4n} p(x)) / (p(x)(p(x)+1))⌋
                                        = ⌊x^{4n} / (p²(x)+p(x))⌋
                                        = [p²(x)+p(x)]* .

The last equality follows from the fact that p²(x) + p(x) is of degree 2n. By Problem 10.2.16, we conclude finally that

    p²(x) = [[x^{2n} p(x)]* − [x^{2n} (p(x)+1)]*]* − p(x) .


This gives us an algorithm for calculating p²(x) by performing two inversions of polynomials of degree 3n, one inversion of a polynomial of degree 2n, and a few operations (additions, subtractions, multiplications by powers of x) that take a time in O(n). This algorithm can therefore solve SQRP in a time in

    2t(3n) + t(2n) + O(n) ⊆ O(t(n))

because t(n) is smooth and at least linear. □
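These identities can be checked mechanically on a small example. The Python sketch below is ours; inverses are computed by brute-force polynomial division, and the last line evaluates [[x^{2n} p(x)]* − [x^{2n} (p(x)+1)]*]* − p(x) for p(x) = x + 1, recovering p²(x) = x² + 2x + 1.

    from fractions import Fraction

    def quotient(a, b):
        # Quotient of the polynomial division a(x)/b(x); coefficients are listed
        # from the constant term up to the leading term.
        a = [Fraction(c) for c in a]
        b = [Fraction(c) for c in b]
        q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
        while len(a) >= len(b):
            k = len(a) - len(b)
            coef = a[-1] / b[-1]
            q[k] = coef
            for i, c in enumerate(b):
                a[k + i] -= coef * c
            a.pop()
        return q

    def inverse(p):
        n = len(p) - 1
        return quotient([0] * (2 * n) + [1], p)      # p*(x) = floor(x^(2n)/p(x))

    def times_xk(p, k):
        return [0] * k + list(p)

    def sub(a, b):
        m = max(len(a), len(b))
        a = list(a) + [0] * (m - len(a))
        b = list(b) + [0] * (m - len(b))
        return [Fraction(x) - Fraction(y) for x, y in zip(a, b)]

    def trim(p):
        p = list(p)
        while len(p) > 1 and p[-1] == 0:
            p.pop()
        return p

    p = [1, 1]                        # p(x) = x + 1, of degree n = 1
    n = len(p) - 1
    p1 = [p[0] + 1] + list(p[1:])     # p(x) + 1
    diff = trim(sub(inverse(times_xk(p, 2 * n)), inverse(times_xk(p1, 2 * n))))
    print(sub(inverse(diff), p))      # x^2 + 2x + 1, i.e. p(x)^2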

Theorem 10.2.10. INVP ≤ˡ DIVP.

Proof. To calculate p*(x), where p(x) is a polynomial of degree n, we evaluate ⌊x^{2n}/p(x)⌋, an instance of size n of the problem of polynomial division. □
Theorem 10.2.11. DIVP ≤ˡ INVP, assuming INVP is smooth.

Proof. The intuitive idea is given by the following formula, where x and y are real numbers and y ≠ 0:

    x/y = x·y⁻¹ .
If we try to calculate the quotient of a polynomial p (x) divided by a nonzero polynomial d (x) using directly the analogous formula p (x) d* (x), the degree of the result is
too high. To solve this problem we divide the result by an appropriate scaling factor.
Suppose there exists an algorithm that is able to calculate the inverse of a polynomial of degree n in a time in O(t(n)), where t(n) is a smooth function. Let p(x) be a polynomial of degree less than or equal to 2n, and let d(x) be a polynomial of degree n. We wish to calculate ⌊p(x)/d(x)⌋. Let r(x) be the remainder of the division of x^{2n} by d(x), which is to say that d*(x) = ⌊x^{2n}/d(x)⌋ = (x^{2n} − r(x))/d(x) and that the degree of r(x) is strictly less than n. Now consider
    d*(x) p(x) / x^{2n} = (x^{2n} p(x) − r(x) p(x)) / (x^{2n} d(x))
                        = x^{2n} p(x) / (x^{2n} d(x)) − r(x) p(x) / (x^{2n} d(x))

by Problem 10.2.17(i). But the degree of r(x) p(x) is strictly less than 3n, whereas the degree of x^{2n} d(x) is equal to 3n, and so ⌊(r(x) p(x)) / (x^{2n} d(x))⌋ = 0. Consequently, ⌊(d*(x) p(x)) / x^{2n}⌋ = ⌊p(x)/d(x)⌋, which allows us to obtain the desired quotient by
performing an inversion of a polynomial of degree n, the multiplication of two polynomials of degree at most 2n, and the calculation of the quotient from a division by a

power of x. This last operation corresponds to a simple shift and can be carried
out in a time in O(n). The multiplication can be performed in a time in O(t(2n))
thanks to theorems 10.2.8 and 10.2.9 (using the assumption that t(n) is smooth).


The calculation of ⌊p(x)/d(x)⌋ can therefore be carried out in a time in t(n) + O(t(2n)) + O(n) ⊆ O(t(n)) because t(n) is smooth and at least linear. □
Theorem 10.2.12. INVP ≤ˡ MLTP, assuming MLTP is strongly linear
(a weaker assumption will do - see the hint of Problem 10.2.19).
Proof. This reduction is more difficult than the previous ones. Once again, we appeal to an analogy with the domain of real numbers, namely Newton's method for finding the zero of f(w) = 1/w − x. Let x be a positive real number for which we want to calculate x⁻¹. Let y be an approximation to x⁻¹ in the sense that xy = 1 − δ, for −1 < δ < 1. We can improve the approximation y by calculating z = 2y − y²x. Indeed, xz = x(2y − y²x) = xy(2 − xy) = (1 − δ)(1 + δ) = 1 − δ². From our assumption on δ, δ² is smaller than δ in absolute value, so that z is closer than y to x⁻¹. To calculate the inverse of a polynomial, we proceed similarly, first finding a good approximation to this inverse and then correcting the error.
Suppose there exists an algorithm able to multiply two polynomials of degrees
less than or equal to n in a time in 0 (t (n)) where t(n) is strongly linear. Let p (x) be a
nonzero polynomial of degree n whose inverse we wish to calculate. Suppose for simplicity that n + 1 is a power of 2 (see Problem 10.2.18 for the general case).
If n =0, the inverse p* (x) is easy to calculate. Otherwise let k = (n + 1)/2.
During the first stage of the polynomial inversion algorithm we find an approximation h(x) to p*(x) such that the degree of x^{2n} − p(x) h(x) is less than 3k − 1. (Note that the degree of x^{2n} − p(x) p*(x) can be as high as n − 1 = 2k − 2.) The idea is to rid ourselves provisionally of the k coefficients of lowest degree in the polynomial p(x) by dividing the latter by x^k. Let h(x) = x^k ⌊p(x)/x^k⌋*. Note first that the degree of ⌊p(x)/x^k⌋ is n − k = k − 1, so ⌊p(x)/x^k⌋* = ⌊x^{2k−2} / ⌊p(x)/x^k⌋⌋ = ⌊x^{3k−2} / p(x)⌋ by Problem 10.2.17(iii). Let r(x) be the polynomial of degree less than n such that ⌊x^{3k−2}/p(x)⌋ = (x^{3k−2} − r(x))/p(x). Then we have

    x^{2n} − p(x) h(x) = x^{4k−2} − p(x) · x^k (x^{3k−2} − r(x))/p(x) = x^k r(x)

which is indeed a polynomial of degree less than 3k - 1.
During the second stage, we improve the approximation h(x) in order to obtain p*(x) exactly. Taking into account the appropriate scaling factor, the analogy introduced at the beginning of this proof suggests that we should calculate q(x) = 2h(x) − ⌊h²(x) p(x)/x^{2n}⌋. Let s(x) be the polynomial of degree less than 2n such that ⌊h²(x) p(x)/x^{2n}⌋ = (h²(x) p(x) − s(x))/x^{2n}. Now calculate

    p(x) q(x) = 2 p(x) h(x) − (p²(x) h²(x) − p(x) s(x)) / x^{2n}
              = [(p(x) h(x)) (2x^{2n} − p(x) h(x)) + p(x) s(x)] / x^{2n}
              = [(x^{2n} − x^k r(x)) (x^{2n} + x^k r(x)) + p(x) s(x)] / x^{2n}
              = [x^{4n} − x^{2k} r²(x) + p(x) s(x)] / x^{2n}
              = x^{2n} + (p(x) s(x) − x^{2k} r²(x)) / x^{2n}

Introduction to Complexity

314

Chap. 10

It remains to remark that the polynomials p (x) s (x) and x 2k r 2(x) are of degree at most
3n -1 to conclude that the degree of x 2n - p (x) q (x) is less than n , hence
q (x) = p* (x), which is what we set out to calculate.
Combining these two stages, we obtain the following recursive formula :

p*(x) = 2xk LP(x)lxkJ* - Lv(x)[Lp(x)lxk J*]2/xn-1 J
Let g(n) be the time taken to calculate the inverse of a polynomial of degree n by the
divide-and-conquer algorithm suggested by this formula. Taking into account the
recursive evaluation of the inverse of Lp(x)/xk J, the two polynomial multiplications
that allow us to improve our approximation, the subtractions, and the multiplications
.

and divisions by powers of x, we see that

g(n)Eg((n -1)/2)+t((n -1)/2)+t(n)+O(n)c g((n -1)/2)+O(t(n))
because t(n) is strongly
g(n) E O (t (n))

linear.

Using Problem 10.2.19, we conclude that

Problem 10.2.18.
Let INVP2 be the problem of calculating p* (x) when p (x)
is a polynomial of degree n such that n + 1 is a power of 2. All that the proof of
theorem 10.2.12 really shows is that INVP2 <-' MLTP. Complete the proof that
INVP S ' MLTP.

Problem 10.2.19.
Prove that if g(n)Eg((n -1)/2)+O(t(n)) when n +1 is a
power of 2, and if t(n) is strongly linear, then g(n) E O (t (n) I n +1 is a power of 2).
(Hint : apply Problem 2.3.13(iv) with the change of variable T (n) = g(n -1) ; note
that it is enough to assume the existence of a real constant a> 1 such that
t (2n) ? a t (n) for all sufficiently large n - strong linearity of t (n) would unnecessarily impose a=2.)
Problem 10.2.20.
Let p (x) = x 3 +x 2 + 5 x + 1. Calculate p* (x) using the
approach described in the proof of theorem 10.2.12. You may carry out directly the
intermediate calculation of the inverse of Lp (x)/x2J rather than doing so recursively.
Compare your answer to the one obtained as a solution to Problem 10.2.15.
* Problem 10.2.21.
We saw in Section 9.4 how Fourier transforms can be used
to perform the multiplication of two polynomials of degrees not greater than n in a
time in O (n log n). Theorems 10.2.11 and 10.2.12 allow us to conclude that this time
is also sufficient to determine the quotient obtained when a polynomial of degree at
most 2n is divided by a polynomial of degree n. However, the proof of theorem
10.2.11 depends crucially on the fact that the degree of the dividend is not more than
double the degree of the divisor. Generalize this result by showing how we can divide

a polynomial of degree m by a polynomial of degree n in a time in 0 (m log n).
** Problem 10.2.22.

Following the general style of theorems 10.2.9 to 10.2.12,

complete the proof of theorem 10.2.7. You will have to define the notion of

Sec. 10.3

Introduction to NP-Completeness

315

inverse for an integer : if i is an n-bit integer (that is, 2n-1 < i <- 2" -1), define
i* = L22n - '/i J. Notice that i* is also an n-bit integer, unless i is a power of 2. The
problem INV is defined on the integers in the same way as INVP on the polynomials.
The difficulties start with the fact that (i* )* is not always equal to i, contrary to
Problem 10.2.16. (For example, 13* = 9 but 9* = 14.) This hinders all the proofs.
For example, consider how we prove that DIV <- I INV. Let i be an integer of size 2n

and let j be an integer of size n ; we want to calculate

Li I j J.

If we define

z = Li j * / 22n - 1 ] by analogy with the calculation of L(p (x) d* (x )) /x 2n j in the proof

of theorem 10.2.11, we no longer obtain automatically the desired result z = Li / j].
Detailed analysis shows, however, that z <- Li / j J < z + 2. The exact value of Li / j J
can therefore be obtained by a correction loop that goes around at most three times.
z E- Li j* /22n

I]

t - (z +1)x j
while t

<- i

t Ft +j

do

z
z+1
return z

The other proofs have to be adapted similarly.
** Problem 10.2.23.
Let SQRT be the problem of computing the largest integer
less than or equal to the square root of a given integer of size n. Prove under suitable

assumptions that SQRT=/ MLT. What assumptions do you need? (Hint: for the
reduction SQRT <-' MLT, follow the general lines of theorem 10.2.12 but use
Newton's method to find the positive zero of f (w) = w 2 -x ; for the inverse reduction,
use the fact that

x++- x l- x- x+l+ + 1=1/x.)
Let MOD be the problem of computing the remainder when
Problem 10.2.24.
an integer of size 2n is divided by an integer of size n . Prove that MOD <_ 1 MLT.
Let GCD be the problem of computing the greatest common
** Problem 10.2.25.
divisor of two integers of size at most n . Prove or disprove GCD = MLT. (Warning:
at the time of writing, this is an open problem.)

10.3 INTRODUCTION TO NP-COMPLETENESS

There exist many real-life, practical problems for which no efficient algorithm is
known, but whose intrinsic difficulty no one has yet managed to prove. Among these
are such different problems as the travelling salesperson (Sections 3.4.2, 5.6, and
6.6.3), optimal graph colouring (Section 3.4.1), the knapsack problem, Hamiltonian cir-

cuits (Example 10.3.2), integer programming, finding the longest simple path in a

316

Introduction to Complexity

Chap. 10

graph (Problem 5.1.3), and the problem of satisfying a Boolean expression. (Some of
these problems are described later.) Should we blame algorithmics or complexity ?
Maybe there do in fact exist efficient algorithms for these problems. After all, computer science is a relative newcomer: it is certain that new algorithmic techniques
remain to be discovered.
This section presents a remarkable result : an efficient algorithm to solve any one
of the problems we have listed in the previous paragraph would automatically provide
us with efficient algorithms for all of them. We do not know whether these problems
are easy or hard to solve, but we do know that they are all of similar complexity. The
practical importance of these problems ensured that each of them separately has been
the object of sustained efforts to find an efficient method of solution. For this reason it
is widely conjectured that such algorithms do not exist. If you have a problem to solve
and you are able to show that it is equivalent (see Definition 10.3.1) to one of those

mentioned previously, you may take this result as convincing evidence that your
problem is hard (but evidence is not a proof). At the very least you will be certain that
nobody else claims to be able to solve your problem efficiently at the moment.

10.3.1 The Classes P and NP

Before going further it will help to define what we mean by an efficient algorithm.
Does this mean it takes a time in O (n log n) ? O (n 2) ? O (n'-") ? It all depends on
the problem to be solved. A sorting algorithm taking a time in O(n2) is inefficient,
whereas an algorithm for matrix multiplication taking a time in 0 (n 2 log n) would be
an astonishing breakthrough. So we might be tempted to say that an algorithm is
efficient if it is better than the obvious straightforward algorithm, or maybe if it is the
best possible algorithm to solve our problem. But then what should we say about the
dynamic programming algorithm for the travelling salesperson problem (Section 5.6)
or the branch-and-bound algorithm (Section 6.6.3) ? Although more efficient than an
exhaustive search, in practice these algorithms are only good enough to solve instances
of moderate size. If there exists no significantly more efficient algorithm to solve this
problem, might it not be reasonable to decide that the problem is inherently intractable ?

For our present purposes we answer this question by stipulating that an algorithm
is efficient (or polynomial-time) if there exists a polynomial p (n) such that the algorithm can solve any instance of size n in a time in 0 (p (n)). This definition is
motivated by the comparison in Section 1.6 between an algorithm that takes a time in
O(2') and one that only requires a time in O (n 3 ), and also by sections 1.7.3, 1.7.4,
and 1.7.5. An exponential-time algorithm becomes rapidly useless in practice, whereas

generally speaking a polynomial-time algorithm allows us to solve much larger
instances. The definition should, nevertheless, be taken with a grain of salt. Given
two algorithms requiring a time in O(ntg'g" ) and in O(n 10), respectively, the first, not
being polynomial, is "inefficient". However, it will beat the polynomial algorithm on
all instances of size less than 10300 assuming that the hidden constants are similar. In
fact, it is not reasonable to assert that an algorithm requiring a time in O(n 10) is

Sec. 10.3

Introduction to NP-Completeness

317

efficient in practice. Nonetheless, to decree that O(n3) is efficient whereas S2(n4) is
not, for example, seems rather too arbitrary.

In this section, it is crucial to avoid pathological algorithms and analyses such as
those suggested in Problem 1.5.1. Hence no algorithm is allowed to perform arithmetic operations at unit cost on operands whose size exceeds some fixed polynomial in
the size of the instance being solved. (The polynomial may depend on the algorithm
but not of course on the instance.) If the algorithm needs larger operands (as would be
the case in the solution of Problem 1.5.1), it must break them into sections, keep them

in an array, and spend the required time to carry out multiprecision arithmetic.
Without loss of generality, we also restrict all arrays to contain a number of elements
at most polynomial in the size of the instance considered.
The notion of linear reduction and of linear equivalence considered in Section
10.2 is interesting for problems that can be solved in quadratic or cubic time. It is,
however, too restrictive when we consider problems for which the best-known algorithms take exponential time. For this reason we introduce a different kind of reduction.

Definition 10.3.1.
Let X and Y be two problems. Problem X is polynomially
reducible to problem Y in the sense of Turing, denoted X <_ T Y, if there exists an algo-

rithm for solving X in a time that would be polynomial if we took no account of the
time needed to solve arbitrary instances of problem Y. In other words, the algorithm
for solving problem X may make whatever use it chooses of an imaginary procedure
that can somehow magically solve problem Y at no cost. When X <_T Y and Y <_T X
simultaneously, then X and Y are polynomially equivalent in the sense of Turing,
denoted X -T Y. (This notion applies in a natural way to unsolvable problems - see
Problem 10.3.32.)

Example 10.3.1.
Let SMALLFACT(n) be the problem of finding the smallest
integer x > 2 such that x divides n (for n >_ 2), let PRIME(n) be the problem of determining whether n >_ 2 is a prime number, and let NBFACT(n) be the problem of counting
the number of distinct primes that divide n. Then both PRIME <_ T SMALLFACT and
NBFACT <_ T SMALLFACT. Indeed, imagine solutions to the problem SMALLFACT can be

obtained at no cost by a call on SolveSF ; then the following procedures solve the
other two problems in a time polynomial in the size of their operand.

function DecidePRiME(n)
we assume n _ 2}
if n = SolveSF (n) then return true
else return false
function SolveNBF (n)
nb - 0

while n > I do nh - nh + I
x - SolveSF (n)
while x divides n don f- n /x
return nh

Let X and Y be two decision problems. In particular. or none of them can. Prove that if X <_ T Y and Y E P. Let X and Y be two problems such that X <T Y. the equivalence mentioned in the introduction to this section implies that either all the problems listed there can be solved in polynomial time. We also assume that the instances can be coded efficiently in the form of strings of bits. Problem Definition 10. 10 Notice that SolveNBF works in polynomial time (counting calls of SolveSF at no cost) because no integer n can have more than Llg n ] prime factors. such as IN. or "the set of all possible graphs".2. Let X and Y be two problems such that X <_ T Y Y. When no confusion can arise. For example.3. P is the class of decision problems that can be solved by a 0 polynomial-time algorithm. then X E P.3. we may sometimes omit to state explicitly the set of instances for the decision problem under consideration. Let X c I and Y c J be two decision problems. if there exists a function f :I . The usefulness of this definition is brought out by the two following exercises. Prove that Problem 10.2. X is many-one polynomially reducible to problem Y. whereas "find the smallest prime factor of n " is not. known as the reduction function between X and Y. even taking repetitions into account. nondecreasing function. whether or not x EX. "Is n a prime number?" is a decision problem. .3.Introduction to Complexity 318 Chap. Problem 10. The restriction to decision problems allows us to introduce a simplified notion of polynomial reduction. Prove that there exist a polynomial p (n) and an algorithm that is able to solve problem X in a time in O (p (n) t(p (n))). When X <_. Then the problem consists of deciding. Problem 10.3. given some x e 1.3. For technical reasons we confine ourselves from now on to the study of decision problems. Suppose there exists an algorithm that is able to solve problem Y in a time in O (t(n)). then X and Y are many-one polynomially equivalent. the existence of an algorithm to solve problem Y in polynomial time implies that there also exists a polynomial-time algorithm to solve problem X. Definition 10.3.° Y and Y <_ m X both hold. We generally assume that the set of all instances is easy to recognize.J computable in polynomial time. denoted by X <m Y. denoted X = m Y. such that (VXEI)[xEX f (x)EY] . where t(n) is a nonzero.1. A decision problem can be thought of as defining a subset X of the set I of all its instances.3.

In fact.v)EA 2 otherwise and the bound L = #N.3. Proof. G E HAM if and only if < H. Lemma 10. . Clearly.3.5. <_ m . together with some bound L used to turn the travelling salesperson optimization problem (as in Sections 5.4. and the question is to decide whether there exists a circuit in the graph passing exactly once through each node n (with no optimality constraint). An instance of TSPD consists of a directed graph with costs on the edges. but this is significantly harder to prove.3. It is also the case that TSPD m HAM. Define f (G) as the instance for TSPD consisting of the complete graph H = <N. and = R. the cost function c(u . let G = < N.3) into a decision problem : the question is to decide whether there exists a tour in the graph that begins and ends at some node. The introduction of TSPD in Example 10. Imagine solutions to problem Y can be obtained at no cost by a call on DecideY and let f be the polynomial-time computable reduction function between X and Y. most optimization problems are polynomially equivalent in the sense of Turing to an analogous decision problem. then xSTY.Sec. A > be a directed graph for which you would like to decide if it has a Hamiltonian circuit. v) _ I if (u. NxN >. the number of nodes in G. An instance of HAM is a directed graph. respectively. Let TSPD and HAM be the travelling salesperson decision problem and the Hamiltonian circuit problem.2. Problem 10. El T . c. To prove that HAM 5 m TSPD.6 and 6.1. If X and Y are two decision problems such that X <_m Y.6. Prove that the converse of Lemma 10.3.2 shows that the restriction to decision problems is not a severe constraint.3 Introduction to NP-Completeness 319 Example 10. as the following exercise illustrates. function DecideX (x) y -f (X) if DecideY (y) then return true else return false * Problem 10. and whose cost does not exceed L.3.1 does not necessarily hold by giving explicitly two decision problems X and Y for which you can prove that X <_ T Y whereas it is not the case that X <_ m Y. Then the following procedure solves X in polynomial time. are transi- tive. after having visited each of the other nodes exactly once. Prove that the relations <_ T .3. 10. L > E TSPD.

Introduction to Complexity

320

Chap. 10

* Problem 10.3.6.
Let G = < N, A > be an undirected graph, let k be an
integer, and c : N -* (1, 2, ... , k } a function. This function is a valid colouring of G
if there do not exist nodes u , v E N such that (u , v ) E A and c (u) = c (v) (Section
3.4.1). The graph G can be coloured with k colours if there exists such a valid

colouring. The smallest integer k such that G can be coloured with k colours is called
the chromatic number of G, and in this case a colouring with k colours is called an
optimal colouring. Consider the three following problems.
COLD: Given a graph G and an integer k, can G be coloured with k colours ?
COLO: Given a graph G, find the chromatic number of G.
COLC: Given a graph G, find an optimal colouring of G.

Conclude that there exists a polynomial-time
algorithm to determine the chromatic number of a graph, and even to find an optimal
Prove that COLD =T COLO =T COLC.

colouring, if and only if COLD E P.

These graph colouring problems have the characteristic that although it is perhaps

difficult to decide whether or not a graph can be coloured with a given number of
colours, it is easy to check whether a suggested colouring is valid.

Let X be a decision problem. Let Q be a set, arbitrary for
Definition 10.3.4.
the time being, which we call the proof space for X. A proof system for X is a subset

FcXxQ such that (`dxEX)(3gEQ)[<x,q>EF]. Any q such that <x,q >EF
is known as a proof or a certificate that x E X. Intuitively, each true statement of the
type x E X has a proof in F, whereas no false statement of this type has one (because if
x E X, there does not exist a q E Q such that < x, q > (=- F ).
Example 10.3.3.
Let I = N and COMP = { n I n is a composite integer }. We
can take Q = 1N as the proof space and F = ( < n, q > I 1 < q < n and q divides n
exactly } as the proof system. Notice that some problems may have more than one
natural proof system. In this example we could also use the ideas of Section 8.6.2 to
define

F'

< n, q > I (n is even and n > 2) or
(1 < q < n and n is not a strong pseudoprime to the base q

which offers a large number of proofs for all odd composite numbers.

Example 10.3.4.
Consider the set of instances I = { < G , k > I G is an
undirected graph and k is an integer } and the problem COLD = { < G, k > E 1 I G
coloured with k colours }. As proof space we may take Q =
(c : N -* (1, 2, ... , k} I N is a set of nodes and k is an integer). Then a proof

can be

system is given by

Sec. 10.3

F

Introduction to NP-Completeness

321

< < G, k >, c > I G = < N, A > is an undirected graph,
k is an integer,
c : N -4 { 1, 2, ... , k } is a function and

(`du,vEN)[{u,v}EA =c(u)#c(v)] }

.

Problem 10.3.7. Let G = < N, A > be an undirected graph. A clique in G is
a set of nodes K E_ N such that { u, v } E A for every pair of nodes u, v K. Given a
graph G and an integer k, the CLIQUE problem consists of determining whether there
exists a clique of k nodes in G. Give a proof space and a proof system for this decision problem.

Definition 10.3.5.
NP is the class of decision problems for which there exists
a proof system such that the proofs are succinct and easy to check. More precisely, a
decision problem X is in NP if and only if there exist a proof space Q, a proof system
F c X x Q, and a polynomial p (n) such that

i. (VxEX)(agEQ)[<x,q>eFand IqI <p(Ix1)],
where I q I and I x I denote the sizes of q and x, respectively ; and
ii. F E P.

We do not require that there should exist an efficient way to find a proof of x when
x c X, only that there should exist an efficient way to check the validity of a proposed
short proof.

The conceptual distinction between P and NP is best grasped
Example 10.3.5.
with an example. Let comp be the problem in Example 10.3.3. In order to have
comp E P, we would need an algorithm
function DecideCOMP (n)
{ decides whether n is a composite number or not }

return true

return false

whose running time is polynomial in the size of n. No such algorithm is currently
known. However, to show that coMP E NP, we need only exhibit the following
(obvious) polynomial-time algorithm.

Introduction to Complexity

322

Chap. 10

function VerifyCOMP (n, q)

if 1 < q < n and q divides n then return true
else return false
By definition of NP, any run of VerifyCOMP (n, q) that returns true is a proof that n is

composite, and every composite number has at least one such proof (but prime
numbers have none). However, the situation is not the same as for a probabilistic
algorithm (Chapter 8) : we are content even if there exist very few q (for some
composite n) such that VerifyCOMP (n , q) is true and if our chance of hitting one at
random would be staggeringly low.

Problem 10.3.8.
Let X be a decision problem for which there exists a
polynomial-time true-biased Monte Carlo algorithm (section 8.6). Prove that X E NP.
(Hint : the proof space is the set of all sequences of random decisions possibly taken
by the Monte Carlo algorithm.)
Problem 10.3.9.
Prove that P E_ NP. (Hint : Let X be a decision problem
in P. It suffices to take Q = { 0 } and F = (< x, 0 > I x e X } to obtain a system of
"proofs" that are succinct and easy to check. This example provides an extreme illustration of the fact that the same proof may serve for more than one instance of the
same problem.)

Example 10.3.6.
The problems COLD and CLIQUE considered in example
10.3.4 and Problem 10.3.7 are in NP.
Although COLD is in NP and COLO =T COLD, it does not appear that NP contains
the problem of deciding, given a graph G and an integer k, whether k is the chromatic
number of G. Indeed, although it suffices to exhibit a valid colouring to prove that a
graph can be coloured with a given number of colours (Example 10.3.4), no one has
yet been able to invent an efficient proof system to demonstrate that a graph cannot be
coloured with less than k colours.

Definition 10.3.6.
Let X S I be a decision problem. Its complementary
problem consists of answering "Yes" for an instance x E I if and only if x 0 X. The
class co-NP is the class of decision problems whose complementary problem is in NP.
For instance, the preceding remark indicates that we do not know whether COLD E
co-NP. Nonetheless, we know that COLD E co-NP if and only if NP = co-NP (Problems 10.3.27 and 10.3.16). The current conjecture is that NP # co-NP, and therefore
that COLD

co-NP.

Problem 10.3.10.

Let A and B be two decision problems. Prove that if

A 5 m B and B E NP, then A E NP.

Problem 10.3.11.
Let A and B be two decision problems. Do you believe that
if A 5 T B and B E NP, then A E NP ? Justify your answer.

Sec. 10.3

Introduction to NP-Completeness

323

Show that HAM, the Hamiltonian circuit problem defined in
Problem 10.3.12.
Example 10.3.2, is in NP.

that

In 1903, two centuries after Mersenne claimed without proof
Example 10.3.7.
is a prime number, Frank Cole showed that
267_ 1

267 -1 = 193,707,721 x 761,838,257, 287

.

It took him "three years of Sundays" to discover this factorization. He was lucky that
the number he chose to attack is indeed composite, since this enabled him to offer a
proof of his result that is both short and easy to check. (This was not all luck : Lucas
had already shown in the nineteenth century that 267-1 is composite, but without
finding the factors.)
The story would have had quite a different ending if this number had been prime.
In this case the only "proof" of his discovery that Cole would have been able to produce would have been a thick bundle of papers covered in calculations. The proof
would be far too long to have any practical value, since it would take just as long to
check as it did to produce in the first place. (A similar argument may be advanced
concerning the "proof' by computer of the famous four colour theorem.) This results
from a phenomenon like the one mentioned in connection with the chromatic number
of a graph : the problem of recognizing composite numbers is in NP (Example 10.3.3),

but it seems certain at first sight not to be in co-NP, that is, the complementary
problem of recognizing prime numbers seems not to be in NP.

However, nothing is certain in this world except death and taxes : this problem
too is in NP, although the notion of a proof (or certificate) of primality is rather more
subtle than that of a proof of nonprimality. A result from the theory of numbers shows
that n, an odd integer greater than 2, is prime if and only if there exists an integer x
such that

0<x <n
x"- I = 1 (mod n ), and
xI" - I)lP 4P 1 (mod n) for each prime factor p of n -1

A proof of primality for n therefore consists of a suitable x, the decomposition of n -1
into prime factors, and a collection of (recursive) proofs that each of these factors is
indeed prime. (More succinct proof systems are known.)

Complete the proof sketched in Example 10.3.7 that the
* Problem 10.3.13.
problem of primality is in NP. It remains to show that the length of a recursive proof
of primality is bounded above by a polynomial in the size (that is, the logarithm) of the
integer n concerned, and that the validity of such a proof can be checked in polynomial
time.

Introduction to Complexity

324

Chap. 10

Problem 10.3.14.
Let F = { < x , y > I x , y E N and x has a prime factor less
than y }. Let FACT be the problem of decomposing an integer into prime factors.
Prove that

i. F E NP n co-NP; and
ii. F = T FACT.

If we accept the conjecture that no polynomial-time factorization algorithm exists, we
11
can therefore conclude that F E (NP n co-NP) 1 P.

10.3.1 NP-Complete Problems

The fundamental question concerning the classes P and NP is whether the inclusion
P9 NP is strict. Does there exist a problem that allows an efficient proof system but
for which it is inherently difficult to discover such proofs in the worst case ? Our intuition and experience lead us to believe that it is generally more difficult to discover a
proof than to check it: progress in mathematics would be much faster were this not so.
In our context this intuition translates into the hypothesis that P # NP. It is a cause of
considerable chagrin to workers in the theory of complexity that they can neither prove
nor disprove this hypothesis. If indeed there exists a simple proof that P # NP, it has
certainly not been easy to find !

On the other hand, one of the great successes of this theory is the demonstration

that there exist a large number of practical problems in NP such that if any one of
them were in P then NP would be equal to P. The evidence that supports the
hypothesis P # NP therefore also lends credence to the view that none of these problems can be solved by a polynomial-time algorithm in the worst case. Such problems
are called NP-complete.
Definition 10.3.7.

A decision problem X is NP-complete if

I. X E NP ; and
ii. for every problem Y E NP, Y S T X.

Some authors replace the second condition by Y

n, X or by other (usually stronger)

kinds of reduction.

Prove that there exists an NP-complete problem X such that
Problem 10.3.15.
X E P if and only if P = NP.

Prove that if there exists an NP-complete problem X such
* Problem 10.3.16.
that X E co-NP, then NP = co-NP.

Sec. 10.3

Introduction to NP-Completeness

325

Prove that if the problem X is NP-complete and the

Problem 10.3.17.
problem Z E NP, then

i. Z is NP-complete if and only if X <T Z ;
ii. if X n, Z, then Z is NP-complete.

Be sure to work this important problem. It provides the fundamental tool for
proving NP-completeness. Suppose we have a pool of problems that have already
been shown to be NP-complete. To prove that Z is NP-complete, we can choose an
appropriate problem X from the pool and show that X is polynomially reducible to Z
(either many-one or in the sense of Turing). We must also show that Z E NP by exhibiting an efficient proof system for Z. Several thousand NP-complete problems have
been enumerated in this way.
This is all well and good once the process is under way, since the more problems

there are in the pool, the more likely it is that we can find one that can be reduced
without too much difficulty to some new problem. The trick, of course, is to get the
ball rolling. What should we do at the outset when the pool is empty to prove for the
very first time that some particular problem is NP-complete? (Problem 10.3.17 is then
powerless.) This is the tour de force that Steve Cook managed to perform in 1971,
opening the way to the whole theory of NP-completeness. (A similar theorem was
discovered independently by Leonid Levin.)

10.3.2 Cook's Theorem
Definition 10.3.8.
A Boolean variable takes its values in the set
B = { true, false }. Boolean variables are combined using logical operators (not, and,
or, t:* , r , and so on) and parentheses to form Boolean expressions. It is customary
to represent disjunction (or) in such expressions by the symbol "+" and conjunction
(and) by simply juxtaposing the operands (as for arithmetic multiplication). Negation
is often denoted by a horizontal bar above the variable or the expression concerned. A
Boolean expression is satisfiable if there exists at least one way of assigning values to
its variables so as to make the expression true. A Boolean expression is a tautology if
it remains true whatever values are assigned to its variables. A Boolean expression is a
contradiction if it is not satisfiable, that is, if its negation is a tautology. We denote by

SAT, TAUT and CONT, respectively, the problems of deciding, given a Boolean
expression, whether it is satisfiable, whether it is a tautology, and whether it is a con-

0

tradiction.

Example 10.3.8.
ables p and q

Here are three Boolean expressions using the Boolean vari-

.

i. (p+q)=pq
ii. (p b q) (p +q)(p +q )
iii. p (p +q)q

Introduction to Complexity

326

Chap. 10

Expression (i) is satisfiable because it is true if p = true and q = true, but it is
not a tautology because it is false if p = true and q = false . Verify that expression
(ii) is a tautology and that expression (iii) is a contradiction.
To prove that a Boolean expression is satisfiable, it suffices to produce an assignment that satisfies it. Moreover, such a proof is easy to check. This shows that
SATE NP. It is not apparent that the same is true of the two other problems : what
short and easy proof can one give in general of the fact that a Boolean expression is a
tautology or a contradiction? These three problems are, nevertheless, polynomially
equivalent in the sense of Turing.
Prove that

Problem 10.3.18.

i. SAT = T TAUT - T CONT ; and even

ii. TAUT -m CONT.
It is possible in principle to decide whether a Boolean expression is satisfiable by
working out its value for every possible assignment to its Boolean variables. However
this approach is impractical when the number n of Boolean variables involved is large,
since there are 2" possible assignments. No efficient algorithm to solve this problem is
known.

Definition 10.3.9.
A literal is either a Boolean variable or its negation.
A clause is a literal or a disjunction of literals. A Boolean expression is in conjunctive
normal form (CNF) if it is a clause or a conjunction of clauses. It is in k-CNF for
some positive integer k if it is composed of clauses, each of which contains at most k
literals (some authors say : exactly k literals).

Example 10.3.9.

Consider the following expressions.

L (p +q +r)(p +q +r)q r
ii. (p +qr)(p +q (p +r))

W. (p =* q)b(p+q)
Expression (i) is composed of four clauses. It is in 3-CNF (and therefore in CNF), but

not in 2-CNF. Expression (ii) is not in CNF since neither p +qr nor p +q (p +r) is a
clause. Expression (iii) is also not in CNF since it contains operators other than conjunction, disjunction and negation.

*Problem 10.3.19.

i. Show that to every Boolean expression there corresponds an equivalent expression in CNF.

ii. Show on the other hand that the shortest equivalent expression in CNF can be
exponentially longer than the original Boolean expression.

3 Introduction to NP-Completeness 327 Definition 10. x2 2 . El The interest of Boolean expressions in the context of NP-completeness arises from their ability to simulate algorithms. SAT-CNF is NP-complete. the size of 'Y (A) is also polynomial in n).. polynomial time.20. -The Boolean expression is constructed so that there exists a way to satisfy it by choosing the values of its other Boolean variables if and only if algorithm A accepts the instance corresponding to the Boolean value of the x variables. and for each unit t of time taken by this computation.Sec. the clauses of 'P (A) force these Boolean variables to simulate the step-by-step execution of the algorithm on the corresponding instance. algorithm A accepts at least one instance of size 5 if and only if T5(A) is satisfiable.x are fixed. Once the variables x I . (The number of additional Boolean variables is polynomial in the size of the instance because the algorithm runs in polynomial time and because we have assumed without loss of generality that none of its variables or arrays can ever occupy more than a polynomial number of bits of memory. For any positive integer k. Theorem 10.1. x i . Let p (n) be the polynomial (given by the definition of NP) such . Polynomial-time algorithms are known for a few of them. For example. 10. Consider an arbitrary decision problem that can be solved by a polynomial-time algorithm A. SAT-CNF is the restriction of the SAT problem to Boolean expressions in CNF. Let Q be a proof space and F an efficient proof system for X. We content ourselves with mentioning that the expression 'P (A) contains among other things a distinct Boolean variable b.) We are finally in a position to state and prove the fundamental theorem of the theory of NP-completeness.3. all these problems are in NP or in co-NP. Suppose that the size of the instances is measured in bits. More interestingly. The proof that this Boolean expression exists and that it can be constructed efficiently poses difficult technical problems. This Boolean expression contains a large number of variables.3. such as the Turing machine. For every integer n there exists a Boolean expression 'P (A) in CNF that can be obtained efficiently (in a time polynomial in n. It usually requires a formal model of computation beyond the scope of this book. Clearly. The problems TAUT-(k-)CNF and 11 CONT-(k-)CNF are defined similarly. Prove that SAT-2-CNF and TAUT-CNF can be solved in * Problem 10. where the polynomial may depend on the algorithm A . Thus it remains to prove that X <_ T SAT-CNF for every problem X E NP. We already know that SAT-CNF is in NP. Proof. algorithm A accepts the instance 10010 if and only if the expression xix2x3xax5`P5(A) is satisfiable.--. among which x correspond in a natural way to the bits of instances of size n for A.10.3. SAT-k-CNF is the restriction of SAT-CNF to Boolean expressions in k-CNF. X2. for each bit i of memory that algorithm A may need to use when solving an instance of size n.

Example 10.O to p (n) do if DecideSATCNF (Ti (Ax)) then return true return false Let X E X be of size n. whether < x.3. Prove that in fact X <_ m SAT-CNF for any decision problem 10. Conversely. there corresponds a two variable polynomial r such that the Boolean formula Ti (Ax) can be constructed in a time in 0 (r (I x I.) Problem 10. i )) and such that its size is bounded similarly.3. (To be precise. Thereafter. function DecideX (x) let n be the size of x (in bits) for i . and therefore Ax accepts no inputs. That such an algorithm exists is ensured by the fact that F E P.Introduction to Complexity 328 Chap. we have the choice of proving SAT-CNF <_ T Y or X <_ T Y. (q ) if A (< x. The fact that algorithm A accepts < x. let Ax be an algorithm whose purpose is to verify whether a given q E Q is a valid proof that x E X. 10 that to every x E X there corresponds a q E Q whose length is bounded above by p (I x I) such that < x. q > E F. given < x. The reduction is immediate because Boolean expressions in CNF . if we imagine that answers concerning the satisfiability of certain Boolean expressions in CNF can be obtained at no cost by a call on DecideSATCNF. as DecideX (x) will discover. hence the Boolean expression Ti (Ax) is satisfiable. We already know that SAT E NP.3.17). to show that Y E NP is NP-complete.3 Some Reductions We have just seen that SAT-CNF is NP-complete.10. then there exists no q e Q such that < x. q > implies that algorithm Ax accepts q. This completes the proof that X <_ T SAT-CNF. q > belongs to F ).3. whether q is a proof that x E X (that is. q >) then return true else return false Here finally is an algorithm to solve problem X in polynomial time. SAT is NP-complete. where 0 <_ i <_ p (n). and let q e Q be a proof of size i that x E X. For each x. This is an instance of size i. q > as input. function Ax. we need only prove that SAT CNF <_ T X (Problem 10. It therefore remains to show that SAT-CNF <_ T SAT. one technical detail is worth mentioning : to each algorithm A. We illustrate this principle with several examples. Let A be a polynomial-time algorithm able to decide. which implies that DecideX (x) will find no integer i such that Ti (Ax) is satisfiable. which is part of the definition of X E NP. q > E F. To show that X too is NP-complete. if x 0 X. XENP.21. Let X E NP be some other decision problem.

. (Continuation of Example 10. here is a polynomial-time algorithm for solving SAT-CNF. which is therefore a disjunction of k literals.11) If `P=(p+q+r+s)(r+s)(p+s+x+v+u) we obtain E=(p+q+u1)(u1+r+s)('+s)(p+s+u2)(u2+x+u3)(u3+v+w) . Take = (l 1 + 12 + u) (i + 13 +14)iii. More precisely. More generally. . Let P be a Boolean expression in CNF. 3. It remains to show that SAT CNF <_ T SAT-3-CNF..) Example 10.Sec. Using Example 10.10.3. uk_3 be new Boolean variables.3. let l . can be exponentially longer than expression T. in 3-CNF that is satisfiable if and only if P is satisfiable. If the expression `P consists of several clauses. However. u2 . if k >_ 4.22. .. Our problem is to construct efficiently a Boolean expression i. .19(ii) shows that expression i. If k = P. 10.19(i) to obtain the algorithm function DecideSAT(Y) let E be a Boolean expression in CNF equivalent to Y if DecideSATCNF (E. Take l 1 +12+ ii. 12.3. Let u 1 . treat each of them independently (using different new variables for each clause) and form the conjunction of all the expressions in 3-CNF thus obtained. conclude that SAT= T SAT-CNF. This time let us show that SAT-CNF <_ m SAT-3-CNF. set If k = 4. resist the temptation to use Problem 10.) then return true else return false because Problem 10.3.. i. ..3. SAT-3-CNF is NP-complete. lk be the literals such that `P is +lk . so it cannot be computed in polynomial time in the worst case. function DecideSATCNF(`Y) if P is not in CNF then return false if DecideSAT (P) then return true else return false Prove that SAT <_ T SAT-CNF. let 1 .3. Let u be a new Boolean variable. which is already in 3-CNF.12.3 Introduction to NP-Completeness 329 are simply a special case of general Boolean expressions. if we imagine that the satisfiability of Boolean expressions can be decided at no cost by a call on DecideSAT. Problem 10.3. 12. (Hint : This problem has a very simple solution.11. l3 . Example 10. Consider first how to proceed if `P contains only one clause. 1 1 =(11+12+u1)(u1+13+u2)(u2+14+u3) (uk_3+lk-1+lk). We have already seen that SAT-3-CNF is in NP. and 14 be the literals such that `P is l 1 + 12 + 13 + l4 .

corresponds to the literal x.3. think of this intuitively as an assignment of the value true to the Boolean variable x. We still have to add 6 nodes and 12 edges for each clause in T. xi is false if it is zi that is the same colour as T.13. we shall prove this time that SAT-3-CNF <_ m 3-COL. Given a Boolean expression `P in 3-CNF. this forces yj to be the same colour as either T or F and zi to be the complementary colour. Suppose for simplicity that every clause of the expression `P contains exactly three literals (see Problem 10. Prove that SAT-CNF <_ m SAT-3-CNF still holds even if in the definition of SAT-3-CNF we insist that each clause should contain exactly three literals. For each Boolean variable x. One copy of this widget is added to the graph for each clause in T.1. corresponds to an assignment of Boolean values to x t . F.3. is the same colour as T.3. . Let k be the number of clauses in T. These are added in such a way that the graph will be colourable with three colours if and only if the choice of colours f o r y l .11. . Figure 10. . The graph G that we are about to build contains 3 + 2t + 6k nodes and x2 2 . imagine that the colours assigned to T and F represent the Boolean values true and false. If y.3.3. respectively. This reduction is considerably more complex than those we have seen so far. Three distinguished nodes of this graph are linked in a triangle : call them T. Figure 10.3. that satisfies every clause. -x. of `P the graph contains two nodes yj and zi that are linked to each other and to the control node C.3. The control triangle. The problem k-COL consists of determining whether G can be coloured with k colours (see Problem 10. x.2 shows the part of the graph that we have constructed up to now. When the time comes to colour G in three colours. In any colouring of G in three colours. Prove that the construction of Example 10. Chap. If t = 3. and C.24).--3 + 3t + 12k edges.6). Contrariwise. and node zi corresponds to the literal x. for example.23. 3-COL is NP-complete.1.3. Example 10.24. In every case node y. we have to construct efficiently a graph G that can be coloured in three colours if and only if T is satisfiable. The colour used for node C will be a control colour.Introduction to Complexity 330 Problem 10. Let G be an undirected graph and k an integer constant. 10 is satisfiable if and only if P is satisfiable in Problem 10. See Figure 10. It is easy to see that 3-COL E NP. To show that 3-COL is NP-complete.-y.3. This can be accomplished thanks to the widget illustrated in Figure 10. Suppose further without loss of generality that the Boolean variables appearing in `P are x t .3. Y2. X2 . .3.

and three nodes chosen from the y. Problem 10.3.3. and F if and only if at least one of the nodes 1. 331 Graph representation of three Boolean variables. T. and z.2. Each widget is linked to five other nodes of the graph : nodes C and T of the control triangle.3 Introduction to NP-Completeness Figure 10. T Figure 10.25 shows that the widget can be coloured with the colours assigned to C.3. so as to correspond to the three literals of the clause concerned.Sec. A widget. and 3 is coloured with the same colour as node T. 2 and 3 cannot be the same colour as C. since the colour assigned to node T represents true. Because these input nodes 1. 2. 10.3. the widget simulates the disjunction of the three literals represented by the nodes to which it is joined. . In other words.

28. * Problem 10. which we only sketch here. A function f : IN -* IN is polynomially bounded if the size of its value on any argument is polynomially bounded by the size of its argument. 10.3. The classic definition involves the notion of non-deterministic algorithms. * Problem 10. ** Problem 10.7) is NP-complete.31. Give a simple proof that 4-COL is NP-complete.3. The name NP arose from this other definition: it represents the class of problems that can be solved by a Non-deterministic algorithm in Polynomial time.25. given any program as instance. Show. Problem 10. The notion of reducibility extends in a natural way to unsolvable problems such as the halting problem (although it is usual to drop the polynomial-time aspect of the reductions . Prove that CLIQUE (Problem 10. that 2-COL is in P.3.4 Non-determinism The class NP is usually defined quite differently. although the definitions are equivalent.332 Introduction to Complexity Chap. Prove that COLD (Problem 10.3.) ** Problem 10.29.3.3. Verify that the colours attributed to nodes C. Prove that the problem of the Hamiltonian circuit (Example 10.3. on the other hand.3.26.32.3. (Hint : prove that SAT-3-CNF <_ m CLIQUE. T. and therefore that 3-COL is NP-complete. Prove that the problem of computing any polynomially bounded computable function is polynomially reducible to the halting problem in the sense of Turing.2 and Problem 10.3. Problem 10. which can be coloured with three colours if and only if `i is satisfiable. We conclude that SAT-3-CNF <_ m 3-COL. Prove however that there exist decision problems that are not polynomially reducible to the halting problem in the sense of Turing. 10 This ends the description of the graph G.3. whether the latter will ever halt when started. Prove that 3-COL is still NP-complete even if we restrict ourselves to planar graphs of degree not greater than 4. It is clear that the graph can be constructed efficiently starting from the Boolean expression `I' in 3-CNF.6) is NP-complete. .which we do not do here).27. Problem 10.3.3.30.3. * Problem 10.12) is NP-complete.3 if and only if at least one of the input nodes is coloured with the same colour as node T (knowing that the input nodes cannot be coloured with the same colour as node C). The halting problem consists of deciding. and F suffice to colour the widget of Figure 10.

bool return . Notice that there is no limit on how long a polynomial-time non-deterministic algorithm can run if the "wrong" non-deterministic .Sec. The difference between non-deterministic and Las Vegas algorithms is that for the former we do not set any bound on the probability of success. y . When a non-deterministic algorithm is consistent.11. otherwise no solution is obtained. success). 10. a non-deterministic algorithm resembles a Las Vegas probabilistic algorithm (Section 8. we write return success . The actual value assigned to n is not specified by the algorithm.5). we do not denote nondeterministic choices by calls on uniform (i . this does not imply that P = NP. A non-deterministic algorithm runs in polynomial time if the time it takes is bounded by some polynomial in the size of its instance provided that the instance is in the domain of the algorithm. The algorithm is total if its domain is the set of all possible instances. then y is a correct solution to instance x . Definition 10. nor is it subject to the laws of probability. we call ND (x . although nondeterministic algorithms can solve NP-complete problems in polynomial time. To solve instance x of some problem X.3 Introduction to NP-Completeness 333 On the surface.bool as an abbreviation for success . the time is undefined on instances that are not in the domain of the algorithm.) To avoid confusion with probabilistic algorithms. The time taken by a non-deterministic algorithm on a given instance of its domain is defined as the shortest possible time that any computation can cause it to run on this instance . For this reason non-deterministic algorithms are only a mathematical abstraction that cannot be used directly in practice : we would not program such an algorithm in the hope of running it successfully and efficiently on a computer. j) as in Chapter 8. If the algorithm sets success to true. We are not concerned with how such sequences could be determined efficiently or how their nonexistence could be established. we use a special instruction choose n between i and j . hence to returning a solution.3. but instead. it computes a well-defined function on the instances of its domain. whose effect is to set n to some value between i and j inclusive.. The algorithm is consistent if two different computations on any given instance always lead to the same result. A computation of the algorithm is a sequence of nondeterministic choices that leads to setting success to true. where y and success are return parameters. It is even allowable that for some instances the algorithm will never set success to true. The effect of the algorithm is determined by the existence or the nonexistence of sequences of non-deterministic choices that lead to the production of a result. (This explains why. For simplicity. The domain of the algorithm is the set of instances on which it has at least one computation.

suc) if suc and pr and p divides m and dexpo (x.true 1 When n is an odd composite number.5. n >.3 or when n is even.n < 3) return success f. Consider the following total consistent non-deterministic primality testing algorithm (recall from Section 4.7. n) then while p divides m do m . n .7. and guessing successively each prime divisor of n -1. It is even possible for a computation to be arbitrarily long. the algorithm also has a computation that consists of choosing guess = 1. Clearly.7 } if dexpo (x.9. n >. the algorithm has a computation that consists of choosing guess =0 and non-deterministically setting m to some nontrivial factor of n. (n -1)/p . 10 choices are made. the algorithm also has a computation when n <.3.3.false success f.let's guess a proof ! } prime . pr.(2<. It is also consistent by the same theorem from Example 10. The algorithm primeND is therefore total. i .true choose guess between 0 and 1 if guess = 0 then { the guess is that n is composite .true else success . provided that the same instance also admits at least one polynomially bounded computation. n) computes x' mod n efficiently).3.m /p else return success f. n) # 1 then return success f.false else { the guess is that n is prime .false choose m between 2 and n -1 if m divides n then success .1. procedure primeND (n .true choose x between 1 and n -1 { the guess is that x is as in Example 10.let's guess a proof ! } prime . var success) { non-deterministically determines whether n is prime } if n < 3 or n is even then prime F. choosing x in accordance with the theorem mentioned in example 10.8 that dexpo (x . var prime. When n is prime.14. .Introduction to Complexity 334 Chap.3. Example 10.false m f-n -1 while m > 1 do choose p between 2 and m { the guess is that p is a new prime divisor of n -1 } primeND (p. Notice again that it would be pointless to attempt implementing this algorithm on the computer by replacing the choose instructions by random choices in the same interval : the probability of success would be infinitesimal.

the actual result returned by the algorithm in case of success is irrelevant. var ans.q > EF ). procedure XND (x. (Hint: use the set of all possible computations as proof space. we are only concerned with the existence or nonexistence of computations (usually called accepting computations in this context) .true.3.false 11 Problem 10. var success) n .) Proof. Let X E NP be a decision problem. (The algorithm can easily be made consistent but it will not in general be total.35 Prove the converse of Theorem 10.14 runs in a time polynomial in the size of its instance. it is sometimes easier to show that a problem belongs to NP with this other definition. * Problem 10.size of x choose l between 0 and p (n) q . let F c X x Q be its efficient proof system. let Q be its proof space. The following polynomialtime non-deterministic algorithm has a computation on instance x if and only if x E X . . and let p be the polynomial that bounds the length of a proof as a function of the length of the corresponding positive instance.33.3. Although the authors prefer the definition based on proof systems.2 Every decision problem in NP is the domain of some polynomial-time non-deterministic algorithm.3.1 to l do choose b between 0 and 1 append bit b to the right of q if < x.Introduction to NP-Completeness Sec. that there are sequences of non-deterministic choices that can cause the algorithm to run for a time exponential in the size of its instance.3.3. For instance.3 335 * Problem 10. Assume for simplicity that Q is the set of binary strings.9 completely obvious. it makes Problems 10. however.empty binary string for i . In this case.3. success .8 and 10.3.true else success . 10. Show.34. and the corresponding parameter may be ignored altogether (there is no point in algorithm XND setting ans to true when it finds a q such that <x. q > E F then ans . Prove that the non-deterministic algorithm primeND of Example 10.3.) The preceding theorem and problem suggest the alternative (and more usual) definition for NP: it is the class of decision problems that are the domain of some polynomial-time non-deterministic algorithm. Theorem 10. this problem belongs to NP. Prove that a decision problem can be solved by a total con- sistent polynomial-time non-deterministic algorithm if and only if it belongs to NP c co-NP.2: whenever a decision problem is the domain of some polynomial-time non-deterministic algorithm.

Part of the solution to Problem 10. however.2. If f and g are two cost functions as in section 10. and 10. To be historically exact. In his tutorial article.2. 10. To find out more about non-determinism. Arlazarov.2. and Problems 10.3.11.7 and 10.2. In particular.2.16 are from Brassard (1979). In the case of cost functions whose range is restricted to (0.6 is due to Furman (1970). comes from Cook and Aanderaa (1969). the lexicographic sorting algorithm can sort n elements in a time in O(n +m).2. Pippenger (1978) describes a method similar to decision trees for determining a lower bound on the size of logic circuits such as those presented in Problems 4. and Horowitz and Sahni (1978). Theorem 10. Hopcroft. see Hopcroft and Ullman (1979). Hopcroft.336 Introduction to Complexity Chap.14.16).3. then an algorithm that is asymptotically more efficient than the naive algorithm for calculating fg is given in Fredman (1976).22).+-). For modifications of heapsort that come close to being optimal for the worstcase number of comparisons.3. Hopcroft. the original statement from Cook (1971) is that X <_ T TAUT-DNF for every X E NP. 1987). consult Gonnet and Munro (1986) and Carlsson (1986.3. and Ullman (1974).9 and 4.3. and Ullman (1974). and Faradzev (1970) present an algorithm to calculate fg using a number of Boolean operations in 0 (n 3/log n). it should be noted that TAUT-DNF is probably not NP-complete since otherwise NP = co-NP (Problem 10. Sahni and Horowitz (1978).2. which is crucial in the proof of Theorem 10. where TAUT-DNF is concerned with tautologies in disjunctive normal form . this technique shows that the required solutions for Problems 4.12 are optimal.12.28 can be found in Stockmeyer (1973). where m is the sum of the sizes of the elements to be sorted. The theory of NP-completeness originated with two fundamental papers : Cook (1971) proves that SAT-CNF is NP-complete.3. The uncontested authority in matters of NP-completeness is Garey and Johnson (1979).4 REFERENCES AND FURTHER READING For an overview of algorithms for sorting by transformation consult Aho.3.8 to 4.7.5 is due to Fischer and Meyer (1971) and Theorem 10. the fact that a problem is NP-complete does not make it go away. For further information concerning the topics of Section 10.11.12 is solved in Fischer and Meyer (1971). In practice.13.11. more succinct primality certificates can be found in Pomerance (1987). and Ullman (1974). Problems 10.2.34) was discovered by Pratt (1975). The reduction INV <_/ MLT (Problem 10. Dinic.14 and 10. A good introduction can also be found in Hopcroft and Ullman (1979).3.33. An algebraic approach to lower bounds is described in Aho. 10 10. Problem 10.2. This chapter has done little more than sketch the theory of computational complexity. However in this case we have to be content with heuristics and approximations as described in Garey and Johnson (1976). In particular.9) comes from Bunch and Hopcroft (1974). Several important techniques have gone unmentioned. A similar theory was developed independently by Levin (1973). and Karp (1972) underlines the importance of this notion by presenting a large number of NP-complete problems. The reduction IQ <_/ MQ (Problem 10. Borodin and . The fact that the set of prime numbers is in NP (Examples 10.3. consult Aho. Kronrod.11.

and Brassard and Monet (1982). Hopcroft.Sec.4). consult Horowitz and Sahni (1978). there exist problems that are intrinsically difficult. and Winograd (1980).32. whatever the resources available. as described in Aho. . even if it is allowed to take a time comparable to the age of the universe and as many bits of memory as there are elementary particles in the known universe (Stockmeyer and Chandra 1979). For an introduction to adversary arguments (Problem 8. As mentioned in Problem 10. and Ullman (1974). Hopcroft and Ullman (1979). there also exist problems that cannot be solved by any algorithm. read Turing (1936). Gardner and Bennett (1979).3.4 References and Further Reading 337 Munro (1975). but it can be proved that no algorithm can solve them in practice when the instances are of moderate size.4. These can be solved in theory. 10. Although we do not know how to prove that there are no efficient algorithms for NP-complete problems.

Table of Notation

#T                number of elements in array T; cardinality of set T
i .. j            interval of integers: { k ∈ ℕ | i ≤ k ≤ j }
div, mod          arithmetic quotient and modulo, extended to polynomials in Section 10.3
×                 arithmetic and matrix multiplication; Cartesian product
↑                 pointer
←                 assignment
var x             return parameter of a procedure or function
return            dynamic end of a procedure
return v          dynamic end of a function, with value v returned
|x|               absolute value of x; also the size of instance x, which is ⌈lg(1 + x)⌉ if x is an integer
log, lg, ln, log_b    logarithm in basis 10, 2, e, and b, respectively
e                 basis of the natural logarithm: 2.7182818...
n!                n factorial (0! = 1 and n! = n × (n - 1)! if n ≥ 1)
( n k )           number of combinations of k elements chosen among n
<a, b>            ordered pair consisting of elements a and b
(a, b)            same as <a, b> (in particular, an edge of a directed graph); also the open interval { x ∈ ℝ | a < x < b }
[a, b]            closed interval { x ∈ ℝ | a ≤ x ≤ b }
[a, +∞)           set of real numbers larger than or equal to a
{ ... }           denotes comments in algorithms
O, Ω, Θ           asymptotic notation (see Section 2.1)
f : A → B         f is a function from A to B
∃x                there exists an x such that P(x)
∃!x               there exists one and only one x such that P(x)
∀x                for all x, P(x)
⇒                 implies
⇔                 if and only if
∑                 summation
∫                 integral
±                 plus or minus
f'(x)             derivative of the function f(x)
∞                 infinity
lim x→∞ f(x)      limit of f(x) when x goes to infinity
⌊x⌋               floor of x: largest integer less than or equal to x, extended to polynomials in Section 10.3
⌈x⌉               ceiling of x: smallest integer larger than or equal to x
lg*               iterated logarithm (see page 63)
uniform(i, j)     randomly and uniformly selected integer between i and j
x ≡ y (mod n)     x is congruent to y modulo n (n divides x - y exactly)
x ⊕ y             exclusive-or of x and y for bits or Booleans; bit-by-bit exclusive-or for bit strings
{ x | P(x) }      set of elements such that P(x)
⊆                 set inclusion (allowing equality)
⊂                 strict set inclusion
∪                 set union: A ∪ B = { x | x ∈ A or x ∈ B }
∩                 set intersection: A ∩ B = { x | x ∈ A and x ∈ B }
∈                 set membership
∉                 set nonmembership
\                 set difference: A \ B = { x | x ∈ A and x ∉ B }
Ø                 empty set
ℕ, ℝ, 𝔹           sets of integers, reals, and Booleans (see Section 2.1)
and, or           Boolean conjunction and disjunction
x̄                 Boolean complement of x
F_ω(a)            Fourier transform of vector a with respect to ω
F_ω⁻¹(a)          inverse Fourier transform
A ≤ℓ B            A is linearly reducible to B
A ≡ℓ B            A is linearly equivalent to B
A ≤T B            A is polynomially reducible to B in the sense of Turing
A ≡T B            A is polynomially equivalent to B in the sense of Turing
A ≤P B            A is many-one polynomially reducible to B
A ≡P B            A is many-one polynomially equivalent to B
P                 class of decision problems that can be solved in polynomial time
NP                class of decision problems that have an efficient proof system
co-NP             class of decision problems whose complementary problem is in NP
choose            instruction for non-deterministic choice

AHO. Wynkyn de Worde. and J. ADLEMAN. Proceedings of 19th Annual ACM Symposium on the Theory of Computing. 175-178. MA. and J. "Recognizing primes in random polynomial time". HOPCROFT. ACKERMANN. HOPCROFT. J. Addison-Wesley. pp. V. and E. E. "On taking roots in finite fields". 1-9. SIAM Journal on Computing. A. Proceedings of 15th Annual ACM Symposium on the Theory of Computing. and G. Addison-Wesley. A. S. SZEMEREDI (1983). HOPCROFr. AJTAI. D. M.. V. pp. Reading. ULLMAN (1983).-D. KILIAN (1987). ADEL'SON-VEL'SKII. Proceedings of 19th Annual ACM Symposium on the Theory of Computing. A. J. "An algorithm for the organization of information" (in Russian). M. L. M. J. pp. J. AHO. ANON. Proceedings of 18th Annual IEEE Symposium on the Foundations of Computer Science.. 117. London. 263-266. J. pp. "Efficient string matching : An aid to bibliographic search". "On finding lowest common ancestors in trees". V. (1928). L. 18(6).E. M. and R. LANDIS (1962).. Doklady Akademii Nauk SSSR. D. ULLMAN (1976). Annals of Mathematics. and E. L. W. MILLER (1977). "Zum Hilbertschen Aufbau der reellen Zahlen". CORASICK (1975). (c. FEIGENBAUM. 5(1). AHO. A. and M. The Design and Analysis of Computer Algorithms. 173-206. ADLEMAN. V. 146. and J. M. and J. KOMLOS. Communications of the ACM. MANDERS. and M. A. 118-133. RUMELY (1983).. 333-340. ULLMAN (1974). "On hiding information from an oracle". M. AHO.. Reading. "An O(n logn) sorting network". "On distinguishing prime numbers from composite numbers". ADLEMAN.Bibliography ABADI.. 195-203. E.. MA. POMERANCE. Mathematische Annalen. D. HUANG (1987). M. G. 99. 1495) Lytell Geste of Robyn Hode. K. 115-132. J. C. 341 . 462-469. Data Structures and Algorithms.

"Programming pearls : Algorithm design techniques". 79-10. Proceedings of 8th Annual ACM Symposium on the Theory of Computing. and C. BACH. (1961). BELLMORE. Dynamic Programming. "On computing polynomials in one variable with initial preconditioning of the coefficients". C. and M. SIGACT News. S. A.. R. POMERANCE (1988). E. BELLMAN. "Privacy amplification through public discussion". E. A. D. "The traveling salesman problem: A survey". and G. 1(1). A. Departement de mathematiques et de statistique. "A simple unpredictable pseudo-random number generator".342 Bibliography ARLAZAROV. BENTLEY. 538-558. C. Teubner. FARADZEV (1970). 36-44. "Backgammon computer program beats world champion". G. (1984). 364-372. 194.. Princeton. Zahlentheorie. E. (1980). B. BAASE. BLUM. Applied Dynamic Programming. "A general method for solving divide-andconquer recurrences". 364-383.G. ACM.M. G. 27(9). BRASSARD. BELLMAN. L. BERGE. BATCHER. BEAUCHEMIN. Princeton. (1957).. and I. Graphes et hypergraphes. BERLINER. 24(111). L. North Holland. 7-15. perfect numbers and factoring". "Sums of divisors. 205-220. vol. 5. "Factoring polynomials over large finite fields". pp. Paris . Universite de Montreal. Theorie des graphes et ses applications. BLUM. K. (1970). STANAT. and J. SIAM Journal on Computing. L. no. D.H. 307-314. C. J. . B. NEMHAUSER (1968). F. "Analysis of a randomized data structure for representing ordered sets". L. J. J. Problemi Kibernetiki.. L. GOUTIER. BACHMANN. Methuen & Co. Reading. SIAM Journal on Computing. Proceedings of 19th Annual Allerton Conference on Communication. Paris. Leipzig. CR£PEAU. Control. L. L. G.G. H. 1987. (1894). "Divide-and-conquer in multidimensional space". J. SHUB (1986). 16(3). (1958). DREYFUS (1962). 14. (1979). 15(4). V. BENNETT. Mathematics of Computation. C.S.. in press. SHAMOS (1976). London. in press. C. 713-735. BENTLEY. and Computing. 1967. BERGE. Journal of Cryptology. C. D. NJ. DINIC. 865-871. BERLEKAMP. and J. and I. M. "Monte Carlo algorithms in graph isomorphism techniques". pp. and J. "On economical construction of the transitive closure of a directed graph" (in Russian). translated as: The Theory of Graphs and Its Applications (1962). Computer Algorithms: Introduction to Design and Analysis. J. M. Princeton University Press. "The generation of random numbers that are probably prime". KRONROD. SHALLIT (1986). (1970).. and S. E.. BENTLEY. BABAI. P. (1968). Dunod. R. BRASSARD. MILLER. 2: Die Analytische Zahlentheorie. HAKEN. Dunod. 220-230. P. SAXE (1980).-M. E. STEELE (1981). M. SIAM Journal on Computing. Communications of the ACM. R. second edition. Proceedings of AFIPS 32nd Spring Joint Computer Conference. Addison-Wesley. translated as : Graphs and Hypergraphs (1973). NJ. 12(3). H. Princeton University Press. BELAGA. BENTLEY. (1978). ROBERT (1988). E. "Sorting networks and their applications". Amsterdam. 1143-1154. MA. 15(2). Research report. E. Operations Research. Doklady Akademii Nauk SSSR. 487-488. Artificial Intelligence. pp. M.. and J. second edition.

5(4). and J. HOPCROFT (1974). KORSH (1976). TSI: Technique et Science Informatiques. (1988). 13(4). 1. 17(1). and L. S. Graph Theory: An Algorithmic Approach. Quebec. (1971). "A note on the complexity of cryptography". TARJAN (1976). and S. "Finding minimum spanning trees". Departement d'informatique et de recherche operationnelle. W. NY. 1987. (1926). B. Journal of Computer and System Sciences. CHRISTOFIDES. BRASSARD. and D. L. (1986). and R. C. New York. BRASSARD. "O jistem problemu minimalnim". second edition. A. Lidec. E. "Canonical coin changing and greedy solutions". CHANG. Mathematics of Computation. E. P. and S. 143-154. Canada. Journal of the ACM. 27. The Fast Fourier Transform. N. G. BORODIN. BRATLEY. J. ZUFFELLATO (1986). O. 89-102. NY. "L'arithmetique des tres grands entiers". L. Fox. MONET. Sweden. BORUVKA. SIGACT News.Bibliography 343 BLUM. and J. and S. 37-58. MOORE (1977). 232-233. BLUM. 1. SIAM Journal on Computing. FLOYD. S. The Computational Complexity of Algebraic and Numeric Problems. V. (1985). Lecture Notes in Computer Science. Englewood Cliffs. Publica- tion no. E. 243-244. Springer-Verlag. MUNRO (1975). BUNCH. Journal of Computer and System Sciences. BORODIN. D. Montreal. "Triangular factorization and inversion by fast matrix multiplication". 23(3). Analyse numerique. NY. 850-864. . BRASSARD. MONET (1982). TARJAN (1972). G. and R. and J. Prentice-Hall. A Guide to Simulation. 2-17. BRASSARD. 762-772. B. and M. CARLSSON. Praca Morarske Prirodovedecke Spolecnosti. SIAM Journal on Computing. 231-236. Information Processing Letters. P.5). CARASSO. BOYER.. G. RIVEST. CHERITON. E. M. "Time bounds for selection". NY. Bit. "The generation of random permutations on the fly". MUNRO (1971). R. MICAU (1984). "Universal classes of hash functions". 724-742. BRASSARD. S. (1979). LEVY (1980). R. N. A. Lund University. "Average case results on heapsort". (1975). S. and L. New York. "Evaluating polynomials at many points". Doctoral dissertation. Springer-Verlag. and J. "A fast string searching algorithm". American Elsevier. 66-68. Communications of theACM. R. G. 28(125). B. Department of Computer Science. J. 3. Information Processing Letters. IT-25(2). CARLSSON. 448-461. Academic Press. Information Processing Letters (in press). CODEN: LUNFD6/(NFCS-I(X)3)/(1-70)/(1986).R. PRATT. Modern Cr_yptology: A Tutorial. "The towers of Hanoi problem". New York. SCHRAGE (1983). M. IEEE Transactions on Information Theory. and J. BUNEMAN. "How to generate cryptographically strong sequences of pseudo-random bits". "Crusade for a better notation". 5(2). "L'indecidabilite sans larme (ni diagonalisation)". 20(10). ACM.. 7(4). L. G. 18(2). 418-422. S. (1974). KANNAN (1988). Universite de Montreal. BRASSARD. L. New York. WEGMAN (1979). CARTER. BRIGHAM. NJ.O. 60-64. 10(4.. (1987). Lund. 445. G. Heaps. 1(2). E.

1. 435-452. (1977). G. Mathematics of Computation. pp. C. 251(5). H. 36(153). 8(2). C. D. PA. COOLEY. How to Solve It by Computer. Graph Algorithms. Playboy's Book of Backgammon. 39. and C. "Computer recreations : Yin and yang : recursion and iteration. "CRAY-2 computer system takes a slice out of pi". 365-380. Cray Channels. CURTISS. Mathematics of Computation. and P. "The complexity of theorem-proving procedures". DEYONG. Mathematics of Computation. LEWIS. "History of the fast Fourier transform". Englewood Cliffs. (1981). W. and C. D. D. (1961). P. DIXON. "An algorithm for the machine calculation of complex Fourier series". Journal of the Franklin Institute. 55. N. WINOGRAD (1987). E. 19(90). New York. A. (1984). "On the minimum complexity of functions". . DANIELSON. COOK. Cryptography and Data Security. Proceedings of the IEEE. (1971). 259-279. COOK. Non-Uniform Random Variate Generation. E. ERDOS. "A theoretical comparison of the efficiencies of two classical methods and a Monte Carlo method for computing one component of the solution of a set of linear algebraic equations". L. A. 191-233. J. and S. K. (1983). 644-654. A. MA. HELLMAN (1976). "Transformee de Fourier rapide". "Worst-case analysis of a new heuristic for the traveling salesman problem". M. (1986). O. NY. "Some improvements in practical Fourier analysis and their application to X-ray scattering from liquids". Computer Science Press. K. Proceedings of 19th Annual ACM Symposium on the Theory of Computing. AANDERAA (1969).R. WELCH (1967). 233. ed. Addison-Wesley. Numerische Mathematik. R. pp. Amsterdam. S. John Wiley & Sons. H. Meyer. 291-314. Micro-Systemes". LANCZOS (1942). Springer-Verlag. DEMARS. LENSTRA. M. L. G. Mathematics of Computation. DEWDNEY. 1-6. (1981). IT-22(6). TUKEY (1965). 155-159. 103-121. H. Scientific American. NY. "A note on two problems in connexion with graphs". IEEE Transactions on Information Theory. J. DROMEY. "Implementation of a new primality test". DIJKSTRA. DIFFIE. and J. and A. (1976). Carnegie-Mellon University..E. Management Sciences Research Report no. "Matrix multiplication via arithmetic progressions". A. New York. 255-260. DENNING. in Symposium on Monte Carlo Methods. (1987). "Asymptotically fast factorization of integers". 388. EVEN. IL. the tower of Hanoi and the Chinese rings". (1959). Playboy Press. pp. and S. (1982). Pittsburgh. J. COOLEY. Chicago. 142. 269-271. MD. 46(173). CRAY RESEARCH (1986).. Asymptotic Methods in Analysis. J. S. D. P. 151-158. S. "New directions in cryptography". Rockville. POMERANCE (1986). NJ. W. COPPERSMITH. G. North Holland. (1980). and M. COHEN. 48(177). 1675-1679. 19-28.344 Bibliography CHRISTOFIDES. DEVROYE. Transactions of the American Mathematical Society. A. (1956). Proceedings of 3rd Annual ACM Symposium on the Theory of Computing. Reading. "On the number of false witnesses for a composite number". W. 297-301. DE BRUIJN. N. Prentice-Hall.

FLOYD. 83-89. N. GARDNER. 41-52. R. R. B. GOLDWASSER. DC. Computers and Intractability: A Guide to the Theory of NP-Completeness. M. GLEICK. 241(5). M. S. (1986). pp. "Probabilistic counting algorithms for data base applications". GARDNER. (1977). Scientific American.L. 57-69. M. "Almost all primes can be quickly certified". and D. E. R.Bibliography FEIGENBAUM.. Proceedings of 8th Symposium on the Mathematical Foundations of Computer Science. and E. Spartan. 477-488. 237(2). 129-131. or . F. pp. H. 864-866. Freeman and Co. Scientific American. C-22(9). GOLDWASSER.. pp. Proceedings of CRYPTO 85. M. JOHNSON (1979). FREDMAN. 38(4). Proceedings of IEEE 12th Annual Symposium on Switching and Automata Theory. FLAJOLET. L.E. 933-968. (1976). W. "Boolean matrix multiplication and transitive closure". M. GODBOLE. "Fast Fourier transforms-for fun and profit". and C. W. pp. M. S. and D. 5(1). "Variable length encodings". and G. and R. 5(6). "On efficient computation of matrix chain products". (1962). Bell System Technical Journal. Berlin. 182-209. (1973). and A. 194.. 316-329. 120-124. 362-376. MOORE (1959). (1970). 524. JOHNSON (1976). CA. J. FREDMAN. H. J. R. SIAM Journal on Computing. 28(2). Fox. 31(2). R. N. . MICALI (1984). P. MEYER (1971). Journal of Computer and System Sciences. FURMAN. Proceedings of Information Processing 77. Berlin. BENNETT (1979). pp. and G. Lecture Notes in Computer Science. March 14. SANDE (1966). M. TARJAN (1984). W. (1977). GILBERT. "Mathematical games : The random number omega bids fair to hold the mysteries of the universe". (1987). L. FREIVALDS. "Algorithm 97: Shortest path". Journal of Computer and System Sciences. "Fibonacci heaps and their uses in improved network optimization algorithms". pp. Proceedings of 25th Annual IEEE Symposium on the Foundations of Computer Science. 345. "Fast probabilistic algorithms". M.. Proceedings of 18th Annual ACM Symposium on the Theory of Computing. Communications of the ACM. 270-299. "Mathematical games : A new kind of cipher that would take millions of years to break". and S. GAREY. S. pp. E. ACM Transactions on Mathematical Software. "Encrypting problem instances. KILIAN (1986). "Application of a method of fast multiplication of matrices in the problem of finding the transitive closure of a graph" (in Russian). 345 (1986). New York Times. S. S. "Probabilistic encryption". GAREY. 563-578. c. R. FREIVALDS. and J. 74. Springer-Verlag. 29. "New bounds on the complexity of the shortest path problem". IEEE Transactions on Computers. "Probabilistic machines can use less running time". 12(4). "Approximation algorithms for combinatorial problems : An annotated bibliography". Doklady Akademii Nauk SSSR. J. 338-346. pp. "Algorithm 647: Implementation and relative efficiency of quasirandom sequence generators". GENTLEMAN. San Francisco. 839-842. in Traub (1976). (1979). MARTIN (1985). FISCHER. Proceedings of AFIPS Fall Joint Computer Conference. Washington. 20-34. M. Springer-Verlag. can you take advantage of someone without having to trust him?". "Calculating pi to 134 million digits hailed as great test for computer".

R. and D. (1873). TARTAN (1973). M. G. 549-568. D. 12(4). E. Addison-Wesley. Scientific American. G. HOARE. NY. HELLMAN. and R. HAMMERSLEY. J. M. HAREL. E. E. J. translated as : Graphs and Algorithms (1984). (1981). "Set merging algorithms". 21(4). Technical report TR-71-114. and R. and J. and G. HOPCRoFT. HOPCROFT. "Efficient planarity testing". 10(1). 7(1). 2. 372-378. (1968). H. HANDSCOMB (1965). J. TARJAN (1974). S. I. 113-114. and R. Information Processing Letters. "Quicksort". Dale and D. GONNET. M. HoPCROFT. 30-36. and J. L. Springer-Verlag. SIAM Journal on Applied Mathematics. 516-524. GONDRAN. SIAM Journal on Com- puting. E. 964-971. D. vol. and D. 1979. (1979). "Implementation of the substring test by hashing". Algorithmics : The Spirit of Computing. "An algorithm for testing the equivalence of finite automata". GREENE. GOOD. 20(1). MA. Michie. 777-779. pp. NY. E. and Computation. and R. M. Computer Journal. "Computing Fibonacci numbers (and similarly defined functions) in log time". H. (1971). MA. J. E. D. (1980). Addison-Wesley. BAUMERT (1965). "On the history of the minimum spanning tree problem". NY. Addison-Wesley. 43-57. H. D. J. Department of Computer Science. 196-210. "The mathematics of public-key cryptography". KERR (1971). HOPCROFT. England . HALL. D. Languages. SIAM Journal on Applied Mathematics. E. Birkhauser. I. Journal of the ACM. E. 294-303. eds. An Introduction to the Theory of Numbers.346 Bibliography GOLOMB. C.R. KARP (1971). Monte Carlo Methods. "Efficient algorithms for graph manipulation". 5(1). KARP (1962).E. Mathematics for the Analysis of Algorithms. J. and M. New York. The Science of Programming. American Elsevier. Communications of the ACM. "Heaps on heaps". Communications of the ACM.C. 241(2). in Machine Intelligence. LEVIN (1980). Boston. "A five-year plan for automatic chess". 16(6). HARDY. GRIES. Graphes et algorithmes. Reading.. "A dynamic programming approach to sequencing problems". 2(4). ULLMAN (1973). "On minimizing the number of multiplications necessary for matrix multiplication". GRIES. D. HOPCROFT. 2. A. 10-15. 146-157. J. Reading. WRIGHT (1938). "Backtrack programming". ULLMAN (1979).A. Messenger of Mathematics. HARRISON. and J. GONNET. Ithaca. (1987). M. Oxford. NY. HELL (1985). London. G. and E. C. MA. New York. M. 15(4). New York. Paris . E. Oxford Science Publications. Handbook of Algorithms and Data Structures. reprinted in 1979 by Chapman and Hall. Reading. John Wiley & Sons. . E. HOPCROFT. and P. fifth edition. Annals of the History of Computing. 68-69. GRAHAM. HELD. Eyrolles. "On an experimental determination of it ". 14(12). 11(2). Introduction to Automata Theory. (1962). Journal of the ACM. (1984). and L. and L. 89-118. J. R. SIAM Journal on Computing. Cornell University. H. KNUTH (1981). MUNRO (1986). MA.

1973. ACM Transactions on Mathematical Software. Y. USHIRO (1986). and S. "Reducibility among combinatorial problems". R. Proceedings of 22nd Annual IEEE Symposium on the Foundations of Computer Science. M. D. ITAI.395 decimal places based on the Gauss-Legendre algorithm and Gauss arctangent relation". Hu. manuscript. IBM Journal of Research and Development. SAHNI (1976). NY.. eds. and D.T. McCarthy. RIVEST. D. S. "Representation of events in nerve nets and finite automata". "Efficient algorithms for shortest paths in sparse networks". (1975). Michel and J.. . and M. and S. eds. 228-251. (1968). 1 : Fundamental Algorithms. Princeton. T. NY. second edition. JENSEN. pp. HoRowrrz. Fundamentals of Computer Algorithms. Miller and J. and A. B. D.013. Computer Science Press. Part II. Pascal User Manual and Report. E. B. E.C.C. NJ. "Extensions of the birthday surprise". 6. "0 jistem problemu minimalnim". OFMAN (1962). Thatcher. YOSHINO. D. R. Addison-Wesley. Princeton University Press. "A list insertion sort for keys with arbitrary key distribution". R. JOHNSON. 279-282. The Art of. 1981. 293-294. T. TAMURA. KANADA.J. Plenum Press. (1967). RODEH (1981). Bedford. 13(2). R. "Multiplication of multidigit numbers on automata" (in Russian). pp. (1965). Miner. (1977). 3(3). R. C.E. MA. S. KARATSUBA. NY. (1930). 150-158. JOHNSON. "Symmetry breaking in distributive networks". W. 2(2). Praca Moravske Prirodovedecke Spolecnosti. and M. (1969). B. "Computations of matrix chain products". Reading. "Is the Data Encryption Standard a group?". (1). KNUTH. Information Processing Letters. 249-260. MD. C. (1956). KALISKI. "Efficient randomized pattern-matching algorithms".. K. in press. JARNIK. SIAM Journal on Computing. KARP. KAHN. pp. Reading. 2: Seminumerical Algorithms. New York. E. The Art of Computer Programming. 11(2).Bibliography 347 HoRowiTz. V. 53-57. Y. and Y. SHING (1982). 362-373. Rockville. Journal of Combinatorial Theory. 85-104. A. D. KARP. Journal of Cryptology. Scientific Report AFCRL-65-758. Computer Programming. The Codebreakers: The Story of Secret Writing. R. SHERMAN (1988). 24(1). Computer Science Press. SIAM Journal on Computing. in Automata Studies. "Computations of matrix chain products". E. (1976). SHING (1984). Hu. (1972). in Complexity of Computer Computations. 3-40. 143-153. third edition revised by A. KLAMKIN. Shannon and J. New York. A. O.S. Part I. and N. 145. "Calculation of it to 10. 31(2). B.S. MA. KNUTH. Fundamentals of Data Structures. "Priority queues with update and finding minimum spanning trees". Springer-Verlag. 57-63. MD. Rockville. Macmillan. Addison-Wesley. KLEENE. and Y. and M. second edition.L. SAHNI (1978). New York. E. W. JANKO. NEWMAN (1967). Air Force Cambridge Research Laboratory. KASIMI. F. T. Doklady Akademii Nauk SSSR. Journal of the ACM. RABIN (1987). MA. and M. 1-13. "An efficient recognition and syntax algorithm for context-free languages". WIRTH (1985). 4(3).

5. LAWLER. LAWLER. 12(4).. 14(4). Mathematical Association of America. (1971). J. A. KNUTH. 238(1). Rinehart and Winston. D.-L. 1 : Sorting and Searching. Primality and Cryptography. Part I. "Building heaps fast". Elements de programmation dynamique. and V. MORRIS. B. Pascal pour programmeurs. 55-97... H. LEHMER. Problemy Peredaci Informacii. LUEKER. the Graph Traverser. Mathematisch Centrum. "Estimating the efficiency of backtrack programs". D. LECLERC.-L. Jr. NEBUT (1985). K. E. 117. D. (1970). "Algorithms". Michie. (1986). and B. Computing Surveys. H. Meltzer and D. D. "Big Omicron and big Omega and big Theta". WOOD (1966). 240-267. Jr. Mathematical Centre Tracts 154. W. ed. H. LENSTRA. Universiteit van Amsterdam. Data Structures and Algorithms. (1976). KNUTH. 29.. SIGACT News. and a simple control situation". p. Combinatorial Optimization : Networks and Matroids. J. "Optimal binary search trees". 2: Graph Algorithms and NPCompleteness. D. J. KNUTH. MCDIARMID. eds. (1986). D. Berlin. E. and R. "Computer technology applied to the theory of numbers". 281-300. Essai d'arithmetique morale. W. report 86-18. and Edinburgh University Press. and D. G. Data Structures and Algorithms. S. MARSH. E. pp. H. C. 419-436. E. Mathematics of Computation. REED (1987). "On the shortest spanning subtree of a graph and the traveling salesman problem". McGraw-Hill. AddisonWesley. 121-136. LeVeque. E. Scientific American. D. 7(1). 18-24.. "Factoring integers with elliptic curves". (1975b). L. 293-326. Acta Informatica. O. (1969). "The efficiency of algorithms". in Studies in Number Theory. (1977). Jr. "Some techniques for solving recurrences". in Lenstra and Tijdeman (1982). L. (1982).W. 6. 3: Sorting and Searching. H. Mathematisch Instituut. Wiley-Teubner Series in Computer Science. 48-50. (1777). 96-109. (1980). NY. The Art of Computer Programming. eds. J. Operations Research. Berlin. (1984a). E. PRATT (1977). 9. "Memo functions. Jr. 1. E. KNUTH. Bordas. 8(2). KNUTH. Proceedings of the American Mathematical Society. to appear in Annals of Mathematics. "Branch-and-bound methods : A survey". Computational Methods in Number Theory. in Machine Intelligence. Paris. Providence. H. "An analysis of alpha-beta cutoffs". "Primality testing". K. Holt. KRUSKAL. LEWIS. KRANAKIS. KNUTH. L. H. J. D. (1984b). Scientific American. LECARME.W. Springer-Verlag. MELHORN. 699-719. NY. 14-25. ACM.. H. MELHORN. (1982). SpringerVerlag. TIJDEMAN. . 63-80. G. "Universal search problems" (in Russian).. Amsterdam. (1956). (1973). SIAM Journal on Computing. New York. L. New York. B. (1975a). (1973). American Elsevier. Reading. LEVIN. Papadimitriou (1978). E. 115-116. (1979). Artificial Intelligence. and C. LENSTRA. R. MA. LENSTRA. D. E.348 Bibliography KNUTH. and J. 6(2). submitted to the Journal of Algorithms. E. pp. RI. Paris. (1976). LAURIERE . W. 236(4). "Fast pattern matching in strings". R.

New York. 12. 12. Bit. IEEE Transactions on Information Theory. and C. 273-280. Bell System Technical Journal. 97-108. R. and K. C. J. SIAM Journal on Computing. (1980). (1971). RABIN. 44(247). DEO (1977). 19-22. 4(3). O. STEIGLITZ (1982). 243-264. 846-847. (1978). 48(177). 462-463. "A simple and fast probabilistic algorithm for computing square roots modulo a prime number". PAPADIMITRIOU. 1389-1401. Mathematics of Computation. 114-124. (1987). 15. IT-32(6). "Complexity theory". 128-138. 36. O. NJ. "The fast Fourier transform in a finite field". POLLARD. PAN. H. 166-176. "Every prime has a succinct certificate". L. "Probabilistic algorithms". M. 335-341. "Strassen's algorithm is not optimal". P. V. J. REINGOLD. Springer-Verlag. New York. Rinehart and Winston. "'Memo' functions and machine learning". G. Holt. C. 214-220. Combinatorial Optimization : Algorithms and Complexity. 315-322. GOLD (1974). MONTGOMERY. (1975). pp. Data Structures and Algorithms. PRIM. N. C. (1980b). I. 3: Multi-Dimensional Searching and Computational Geometry. PIPPENGER. John Wiley & Sons. "A sorting problem and its complexity". (1968). (1966). Inc. Englewood Cliffs. (1957).C. NY. (1980a). R.. PRATT. M. 9(2). Prentice-Hall. Mathematics of Computation.. (1976). J. Berlin. "Very short primality proofs". Scientific American. and N. N. NEMHAUSER. PERALTA. Journal of the American Statistical Association. C. in Lenstra and Tijdeman (1982). Englewood Cliffs. A. (1986). RABIN. 21-39. POMERANCE. RABINER.. METROPOLIS. 89-139. (1987). New York. POLLARD. M. M. 15(6). Combinatorial Algorithms : Theory and Practice. M. ULAM (1949). "Speeding the Pollard and elliptic curve methods of factorization". pp. MICHIE. O. (1971). and S. BROWN (1985). (1984c). RABIN. POMERANCE. (1982). . NILSSON. POHL. The Analysis of Algorithms. (1975). W. Theoretical Computer Science. 331-334. L. Proceedings of 19th Annual IEEE Symposium on the Foundations of Computer Science. in Traub (1976). Problem Solving Methods in Artificial Intelligence. NIEVERGELT. SIAM Journal on Computing. V. 25(114). Englewood Cliffs. R. NJ. 218. Introduction to Dynamic Programming. Digital Signal Processing. MONIER. I. NY. 48(177). E.R. NJ. P. Prentice-Hall. Jr. "Probabilistic algorithms in finite fields". "A Monte Carlo method of factorization". "Analysis and comparison of some integer factoring algorithms". "The Monte Carlo method".Bibliography 349 MELHORN. 238(6). PURDOM. McGraw-Hill. Prentice-Hall. Communications of the ACM. pp. (1972). K. (1978). 365-374. and B. N. Nature. M. "Probabilistic algorithm for primality testing". Journal of Number Theory. NY. Mathematics of Computation. "Evaluation and comparison of two efficient probabilistic primality testing algorithms". D. L. "Shortest connection networks and some generalizations".

SAHNI. R. and V. Information and Control. ed. 7. R. New York. SIAM Journal on Computing.350 Bibliography RIVEST. 6(1). 84-85. STINSON. "An optimal encoding with minimum longest code and total number of digits". "Gaussian elimination is not optimal". Die Grundlehren der Mathematischen Wissenschaften. Data Structure Techniques. (1969). "Intrinsically difficult problems". STOCKMEYER. "Factoring numbers in O (log n) arithmetic steps". D. 7. 140-159. New York University. (1979). Technical Report no. GOLDNER (1977). MA. 2(1). ADLEMAN. (1973). (1973). W. McGraw-Hill. RUNGE. 118. "An improved algorithm for traversing binary trees without auxiliary stack". STANDISH. and A. Information Processing Letters. A Handbook of Integer Sequences. (1980). T. New York. R. (1980). TARJAN. Reading. A. and V.M. The Charles Babbage Research Centre. SHAMIR. 21(2). 120-126. (1964). STRASSEN. Algorithms. RIVEST. SCHWARTZ.. R. "Probabilistic algorithms for verification of polynomial identities". 11.. NY. Pierre. L. 5(3). SIGACT News. SOBOL'.J. 146-160. (1972).M. Algorithmics Press. 55-66. 8(1). "Schnelle Multiplikation grosser Zahlen". "Five number-theoretic algorithms". STONE. "Smallest augmentation to biconnect a graph". A. (1978). SCHONHAGE. Reading. Principles of Mathematical Analysis. R. (1953). New York.J. J. An Introduction to the Design and Analysis of Algorithms. STRASSEN (1971). University of Chicago Press. Scientific American. 19-25. RYTTER. MA. C. K. Proceedings of the Second Manitoba Conference on Numerical Mathematics. second edition. McGraw-Hill. NY. Addison-Wesley. Courant Institute. 604. 281-292. and E. "A fast Monte-Carlo test for primality". A. STOCKMEYER. IL. L. Chicago. 28-31. "Planar 3-colorability is polynomial complete". CHANDRA (1979). Operations Research. SHANKS. M. 37-44. KoNIG (1924). NY. erratum (1978). SIAM Journal on Computing. N. A. 718-759. E. R. Information Processing Letters. Computer Science Department. Computing. Academic Press. SHAMIR. (1972). D. SOLOVAY. 509-512. (1978). 51-70. Manitoba. SIAM Journal on Computing. S. S. ibid. 13. Introduction to Computer Organization and Data Structures. E. (1973). Addison-Wesley. ROSENTHAL. pp. "A correct preprocessing algorithm for Boyer-Moore string searching". I. W. FLOYD (1973). 1(2). L. NY. H. Numerische Mathematik. V. W. SEDGEWICK. Communications of the ACM. pp. HOROWITZ (1978). The Monte Carlo Method. 7(1). S. A. "A method for obtaining digital signatures and public-key cryptosystems". L. 12-14.J. Rustin. 354-356. (1972). "Depth-first search and linear graph algorithms". 9(3). STRASSEN (1977). and L. (1985). (1974). . RUDIN. in Combinatorial Algorithms. ROBSON. 69-76. "Combinatorial problems: reducibility and approximation". SIAM Journal on Computing. A. and H. New York. St. SCHWARTZ. 240(5). R. Berlin. and R. 26(4). and A. Springer. 6(1). J. "Bounds on the expected time for median computations". SLOANE. (1983).

21-23. Berkeley. 11-12. E. (1976). Computer Science. 189-208. A. "An O (log n) algorithm for computing the n th element of the solution of a difference equation". A. Journal of the ACM. Journal of Computer and System Sciences. 5. (1982). F. FISCHER (1974). "New hash functions and their use in authentication and set equality". 168-173. Information Processing Letters. NY. V. 230-265. (1956). C. "The string-to-string correction problem". Journal of'the ACM. H. C. Journal of the ACM. E. "The change-making problem". WAGNER.W. PA. VICKERY. 145-146.J. YOUNGER. Proceedings of 12th Annual ACM Symposium on the Theory of Computing. W. PA. J. A. R. YAO. Universite de Montreal. pp. Philadelphia. in Lenstra and Tijdeman (1982). URBANEK. R. S. New York. M. M.Bibliography 351 TARJAN. (1964). H. WINOGRAD. VAZIRANI. "A unified approach to path problems". (1980). NY. "A theorem on Boolean matrices". F. R. 43-54. Information Processing Letters. "Fast arithmetic operations on numbers and polynomials". D. (1975). VALOIS. (1975). Meyer. Paris. A. Les nombres et leurs mysteres. "Experimental determination of eigenvalues and dynamic influence coefficients for complex structures such as airplanes". Arithmetic Complexity of Computations. 4(1). S.E. WRIGHT. 347-348. 21(1). 9(1). Randomness. M. 10(2). D. "Recognition of context-free languages in time n 3 ". Massachusetts Institute of Technology. R. V. "An O (JE Ilog logI V 1) algorithm for finding minimum spanning trees". WILLIAMS. H. F. 127-185. 160-168. 429-435. 2(42).E. (1967). A. C. Algorithmes prohabilistes: une anthologie. "On the efficiency of a good but not linear set merging algorithm". R. J. (1975). 265-279. Algorithms and Complexity : Recent Results and New Directions. L. (1981)..N. 80-91. 125-128. 22(3). Proceedings of 23rd Annual IEEE Symposium on the Foundations of Computer Science. pp. Departement d'informatique et de recherche operationnelle. . University of California. F. Data Structures and Network Algorithms. A. CA. Ars Combinatoria. TARJAN. New York. (1979). 22(1). 577-593. Cambridge. W. TURK. YAO. SIAM. TURING. "Algorithm 232: Heapsort". Adversaries and Computation. Philadelphia. (1980). J. CARTER (1981). pp. Doctoral dissertation. TRAUB. (1961). YAO. (1962). Communications of the ACM. 215-225. Editions du Seuil. Masters Thesis. 22(2). (1936). Journal of the ACM. Academic Press. J. Proceedings of the London Mathematical Society. J. (1987). Doctoral dissertation. (1980). (1978). ZIPPED. (1986). (1982). 66-67. WARSHALL. "Primality testing on a computer". pp. 11(2). U. WEGMAN. WARUSFEL.. Symposium on Monte Carlo Methods. WILLIAMS. "Efficiency considerations in using semi-random sources". TARJAN. SIAM. 7(6). Journal of the ACM. Proceedings of 19th Annual ACM Symposium on the Theory of Computing. (1983). "Theory and applications of trapdoor functions". Information and Control. Probabilistic Algorithms for Sparse Polynomials. U. pp. ed. J. VAZIRANI. "Efficient dynamic programming using quadrangle inequalities". W. MA. ed. and M. John Wiley & Sons. 28(3). "On computable numbers with an application to the Entscheidungsproblem". (1987). and J.


298 Average... 277 Algol. 5 hybrid. 167.V. 350 Adversary argument. 204.. 190. 20 Articulation point. 313. 7-9. 35 Baase. 204. 204 alter-heap.L. 5 in worst case. 35. V. S. 198. 344 Abadi. L. 342 353 . G. 207-208 APL. 336. A.G. 276 Analysis of algorithms: on average. 5. 78. 25. 2-3 tree Barometer. 7-9 Ancestor. 204. W. 178-179. S. 207 Ancestry in a rooted tree. 341 Ajtai... 336. 168. 276. 274. Nested asymptotic notation. 23. 174-176 Asymptotic notation. 122. L. 137 Adel'son-Vel'skii.. 275. 342 Babaf. 336 Arlazarov. P. 205 Algorithm. 188 Backtracking. 54. 199 Ad hoc sorting. 185-189. 35 Alpha-beta pruning. 63.. 341 adjgraph. 204. 296 Adder. 22. 342 Batcher's sorting circuit. 2.. xiii. 342 Backgammon. L. M. 22. 342 Array. 377. E. 47 At least quadratic.. 140. 35. 248-252 Backus-Naur form. 43 See also Conditional asymptotic notation. M. 222. 276. 36. 263-266. 141. See AVL tree. 197-199... 203 Adleman. 37-51 with several arguments. 185. 173. K.O. M. 52 Basic subalgorithm.M. 138 Baumert. 341 Algebraic transformation. 251 Approximation. 118. 141..C. 35. 275. 222.. 6. P. 205 Balanced tree. See Analysis of algorithms.E. 336. 294. 106 Batcher. 342 Belaga.Index Aanderaa. 166 Acyclic graph. 275.. 337 Aho. 341 Ackermann's function. 78. 291. 341 Ackermann. E. 204 backtrack. R. 27 Amplification of stochastic advantage.. 342 Bellman. Complexity AVL tree. 18 theoretical. Operations on asymptotic notation Asymptotic recurrence. 341. 346 Beauchemin. 301 Atomic research.. 342 Bachmann. 274 Average height. 342 Bach.. 238 empirical. 342 Bellmore. 168. 276.H. 222. 224.. I Algorithmics. 4 Apple II.M. 184. 3. 227.

231 Cryptographic key. 78. 199-204 Brassard. 146-150. A. 167. 315-335 of polynomial arithmetic.. 275. 45-47. 336. 343 Change making. 343 BM. 342 bfs. 343 Buneman. N. 336.. E. 138 of graph problems. 104. 342 Blum. 276. 14-17. 65. 260-262. 164 Binomial queue. 194-198. 337. 332 CNF. 260 Contradiction. 250-252. See Fibonacci heap Birthday surprise. 344 Cook's theorem. 276. 235 Cryptology..Index 354 Bennett. 140... 320 Circuit. 234 countsort. 290 Choice of leader. Hamiltonian circuit. J. C. 133. M. 233. 293. 207 Change of variable. 204 Child. 325. 275. 174-176 binary-node. 0. 167 Chandra. 343 Bratley. 344 Cole. 336. 35. 275. R.L. 291. 228-230. 6 Curtiss. See Multiplication Clause. 344 Coppersmith. 343 Brown. 342 crude. 326 Clique. H. P. 337. 351 Catalan number. 128-132. 298 Binary tree. 148. 325-328 Cooley.. 19.J. 142 Boyer. 128. F. 293 CRAY.. 140. Batcher's sorting circuit. 184. 344 CYBER. 246 Black. 35... 343 Breadth-first search..M.. J. Telephone switching Classic. 106 Conditional asymptotic notation.. 325 Cook. H. 322 Complex number. S. 269 Cycle detection. 343. 291. S.A. P. 234 . 222. 343 Carlsson.. C. See p-correct probabilistic algorithm correction loop.H. 326 Connected component. 275. 183 Biased probabilistic algorithm. 293 Compiler generator. 343 Bottom-up technique. 325 Borodin. 80. 35. C. 276. D. L. See Conjunctive normal form Cohen. 140. 155. 276. 350 Chang. 274 Bunch. D.. 104. 298-299 of circuits. 343 Candidate. 48-51 Context-free language. 78... 341 Correct. 24 Binomial coefficient. 339 Certificate of primality. 204. 292-299 Compound interest. A. 4. 276 Cubic algorithm.. 78.. 124. 265 Bicoherent.J. 173 Co-NP. 321. 167 Ceiling. See Graph colouring Comment. 345 Bentley. 69. 333 Christofides. 343 Boyer-Moore algorithm. 227 Conjunctive normal form. 267 Confidence interval. 235. 275. 290.B. 275. 140. J. 325 Boolean variable. Merge circuit.O. 36. 104. See Adder. 291. 344 Chromatic number. 276 choose. 342 Berliner. G. L. 343 Chess. 245 Colouring. 168 Continued fraction algorithm. 302-304 of NP-complete problems. J. 216. 35. 275. C. 291 Crepeau. 309-314 of sorting. 315 count. 104. 343 Carter. 79 Canonical object.. 323 Collision. 31 Carasso. 342 Berlekamp. 25 Binary search. 342. 314-315 of matrix problems. 109-115. 275.S. 342. 174-176 Biconnected.. E. 292-337 on average. 216-222.. 336. 222.. 275.. 78. 349 Buffon's needle. 279-290 Complexity. 35. J. M. 78 Conditional probability.. 71 Cheriton. 23 Chinese remainder theorem. 350 Branch-and-bound.. 336 Chained matrix multiplication.H. 342 Berge. 243. 141. 10.K. 196 Blum. 35.A. 275. 322 Consistent probabilistic algorithm. 343. 323.. 2 Comparison sort. 143. 344 Coprime. 15 Corasick.. 72-75 Characteristic equation.... 182-184 Brigham. 65.. 52. 220 Boolean expression.L. 343 Bordvka. 342. Tally circuit. 308-309. 336. 225. 337. 275. C.R. 263 Constructive induction. 165. 204. 35. 168. 204. 304-308 of large integer arithmetic. 206 Complementary problem.

275 Endomorphic cryptosystem. 16. 345 Fermat's theorem.A. 57 Euclid's algorithm. 35. 78. 275 Elementary operation. 222 Fischer. 16. 84. 35. 58. 192 fib2. 167. 167. 87-92. See Nontrivial factor Factorization. 36. 306 FORTRAN. 344 De Moivre's formula. 222 Even. 171-182. 210 Floor.. 17-18. R. See Fast Fourier transform Fibonacci..J. P. 6 Exponentiation. 203 Euler's function.. See Large integer arithmetic Factor. S. 10. 58-59 fib3. 286. 294 trip. 345 Fredman. J. 350 Floyd's algorithm. 140 Execution time. 269 extended. 316 Eigenvalue. 309 Discrete logarithm.D. 2 delete-max.. 14-15. 136. 270 Evaluation of polynomial: with fast Fourier transform. 242.E. 344 Eventually nondecreasing function. 19.W. 35.. 104. L. 336. 298 Efficient algorithm. See Large integer arithmetic. 267. 104 Fibonacci sequence.E. 124. 140. 185-188..C. 349 Depth of circuit. 19. 19. 19. 336 div. 209-211. 91. 131. 345. 345 . R. 276 False witness of primality. 134-136. 182 Diffie. I. 167. 351 Flajolet. 344 Denning. 150-153. 235 Data Encryption Standard. 18. 143 See also De Moivre's formula fib 1. 344 Dijkstra's algorithm. 9-11 Exact order of (e). 300 many-one. 344 dexpo. 35. 142-168. 129. 17. B. 306 Deo... 21. J. 344 dfs. 59. See Double ended heap Decision problem. Polynomial arithmetic Dixon. 293 verdict. 279. 103. 30-34.K. 276. 344 Dijkstra. 336. 344 Dixon's algorithm.Index 355 Danielson. See Dicrete Fourier transform Fox. 344 darts. 224 Exponential algorithm. 344 Dense graph.. 16. 46 pruned. 151. 270 Deyong.. 78.. 237 Eigenvector. 230 Encrypted instance. 18.. 30. C. 305. 343. 87. 17. 136 find. 275. E. 127. 344 Dynamic programming. 11. 291 with preconditioning. 35. 345. 16. 275 Data Strutures. 242 Double ended heap.. 20-34 de Bruijn. 6 Expected time. 135. D. 235. 301.. 237 Eight queens problem. 341. 105-141. 279.. L. 30. A. N. 318 Decision tree. 45.L. S. 293 Declaration. 262. 142. 167. 78. 317 Erdo s. 56-57 Devroye. 323 Fourier transform.L. 35. 140. 176-182 Discrete Fourier transform. 241 Disjoint sets. 11. 248-252.. 292-299 Equivalence: linear. 104 Euler path. M. 67 Deadline.A. 24 Depth-first search. 167. 95-100 Deadlock. 204 Determinant of matrix. 302-315 Division. M. 66. G. 78.W. 345 Floating point. 318 polynomial.. 225. 342 Dromey. 256-260 dlogRH. 339 Floyd. 258. 34 See also Disjoint sets find-max. 276. 256-260. 242. 334 dexpoiler. 60-63. 151 Dinic. 57. 275. 130. 36 Dreyfus. 76. 79 Feigenbaum. 42 Exchange of two sections in an array. 226. 279-280 Feasible. 254 Euclidean travelling salesperson. 344 Euclid. 10 Fibonacci heap.G. 204.. 342 Directed graph. 58. 275. 336. 19. 16. 344 Dewdney. 204. 276 Faradzev. P.. 168. 226. 343. 275.. 342 Fast Fourier transform. 3 Divide-and-conquer.R. N. E. 290. 104. 138 Depth of node. 171 dfs'. 270. 317. 192-196. 13 Four colour theorem. I1-12. 1. 17-18. 98-100 Disjunctive normal form. 28 Finite state automaton. 261 Deap. 252 FFT. 36. 204. 275. 336. 293 valid.. 140.G. 143. 318 sense of Turing. 128. 104... W. 28 Demars.

347 Kannan. S. 222. See Bicoherent Itai.L. 345 Garey. 345 Game graph. 293-298.. 274.. 23 Invariance. B. 260. 347.C. 315 Isthmus-free. 140. 347 Kaliski. J. 53-54. 347 Kepler. 35. 346 Harel. R. 195. 115. R. R. 345 Gold. 100 Janko. 248-252 Greene. 347 Jensen. 167. 35.. 26. M. 257. Y. 319. B. 170 insert. 346 Handscomb. 35.. 189 Integration. 64 Godbole.C. 81-92.. 350 Hu. 310 of an integer. 16.. 337. 336. 346 goodproduct. 336. 273 Generator of cyclic group. D. 259 gcd. 104. C.. 119. 320. D. 141. M. 167.J. 291. See Numeric integration Internal node. 185-202 Indentation.N. 169-204. 346 Hanoi. 168.. 101-104. 346 Gonnet. 345 Gleick.. 167... 337.. E. 78. 291.-D. 56 Height.. See Backgammon. 100-104. J. 184. 302-304 of a polynomial. 346 Good. 68-72 Inorder. 316. 202 See also Double ended heap. 275. A. 347 Karp. S. 140. G.H. 28 Instance. 189 Inherently difficult problem. 345 Full adder. 275.. xiii. 336 Hidden constant.A. 204.B.. 276. 332 Hamiltonian circuit. 35. 275. 91-92. 277 . 116. 346 Hellman. D. 276.. xiv.. 346 Haken. 115. M. 204.. 346 Hashing. xiv. T. 291. A. 15.. 104.. 336. 25. 346 Harmonic series. 209 Horowitz. Chess.H. 347 Iterated logarithm. 204. 276. 168. 332 Hamiltonian path.. 166.A. 330-332 Greatest common divisor.S. D. 36. 290. 104 Hammersley.. 184. 341 Hudson's Bay.. 186. 349 Golden ratio. A. 35. 336.. W. M. 20. S. Fibonacci heap Heap property. 280-284 modulo p. 336. 222. 254 of a matrix. 165 Huffman code. See Robyn Hode Homogeneous recurrence. 204. M. V. 36.C.. 346 Hell. W. 21-22. 7. 84. 274 Goutier.E.S. 78. 78. 36.H. 30. 168. 276. 245-247 Heap. 276. 35-36. 337. See Towers of Hanoi Hardy. 347 Johnson. 151. 104. 344.. 79-104.. 135. 101-102..M. C. 315..Index 356 Freivalds. 25-30. 24 See also Average height Held... 159. 13. 345 God. 140. 291. 7. 347 Jarm'k. 35. T. D.. 267 Gentleman. I. 347 Huang. 43. 54 Harrison. 346 Gondran. 85. 345 Gauss-Jordan elimination. M. 346 See also Quicksort Hode. K. 346 Homer's rule. 343.. 337 Inhomogeneous recurrence.. M. E.. D.R. See Principle of invariance Inverse: Fourier transform. 56.. 120. See Adder Furman. 345 Golomb. 104 Implicit graph. 291. 128. G... 128 genrand. 140. 276. 345 Kahn. 13.. 274. 276. 3 Infinite graph.. 304-308 Graph colouring. 117 Insertion sort. D. 55-56. 204. 167. 140. 140. 350 Goldwasser. A. 168. 346 Gries. 341.E. 276. 346. 13. 346 Heuristic. 346 Graph.R. 35. 230 Hoare. 336. 295-299 insert-node.. 65-68 Hopcroft. 315 Greedy algorithm. M. 115. 237. J. 204 Games.. 128. 347 Kasimi.. 347 Kanada. D. 275.. 42. 336. 346 Halting problem. 150-153. 275. 342 Graham. 104. S. 345 Gilbert. 36.E. 6 hitormiss. See Greatest common divisor Generation of permutation. 67 Goldner. 342 Hall. 15. 4 Instant Insanity. 20.M. P. 291. 323.... 189-198. 85 Heapsort. 343 Karatsuba.. J. 63. 104. 347 Johnson. 140. Nim Gardner. 336 heapsort.

342 Kruskal. 0. 140. 342. 341. Inverse of a matrix.W. 248-251 Kranakis. 168. 92. 204.N. 348 Martin. Jr. 345. H. 167. S. 268-269 make-heap.E. M.. 349 Meyer. A. 104 k-smooth integer. 343 Lewis. 35. 349. G. 213-215. 185.. 6. 141. E. 168. 92-94 Minimum spanning tree. D. 348 Median. 347 Kleene. 336. 346 Key for' .. 140. 277 See also Discrete logarithm. 348. 344 Lexicographic sorting. J.H. H.L. 82-85.K.. See Atomic research Lower bound theory. Symmetric matrix. 55 Manders. 128-131. 121-124. Reduction Marienbad. 308-315 square root. 138. 348 Kruskal's algorithm. 117.. 290.. 325. 344 Landis.A. 245 Logarithm. 35. 165 Koml6s.. 297 mergesort. 19. 193. 141. 40. 345 Klamkin. 341 Large integer arithmetic. 247-262. 308-315 addition. 128-132. 167. See Equivalence. 35. 348 Levy. D. 349 Miller.A. 308 reductions. 140. 348. 124-128. Reductions among martix problems. 168. 315 squaring. 137. 286-290. 169 Minoux. 347 KMP. 290. 23 Lecarme. 276. 162-164. 168.. 227. 336 L'Hopital's rule. 78. 343. 81-87.. See Nim Marsh. 275. 308. 290. 333 LauriBre.C. 349 Memory function. 104. 175 Lucas. 308. 350 Korsh. 128. 275.. A. 341. 309. 346. H. 22. 290 . 315 multiplication. 258 k th smallest element. 140. M. M... M. 13. 24 Lehmer. 222. 140.L. 115-116. 35. 91. 128. L. K. 33 See also Disjoint sets Merge sort. D. P.S.. 228.R. 344 Lenstra. 15.. 35. 323 Lueker. 315 Knight's tour problem. 344. 260.. 1-4. 140. 293. 140.. J. 286-290. 35. 336. 268 majMC.. 348 Kronrod. 19. 87.. 326 Load factor. 241 Label.. 227 reduction. 7 Merge circuit. 238-240. 291. L. 252-256. G.. 228. 275 algorithm. 124 division.. Triangular matrix. 300 programming. 301 greatest common divisor. 92. 315 exponentiation. 224. Jr. 24 Levin.. 184. 204 Minimization of waiting time.R. 188. 3 Modular arithmetic. 309 Las Vegas algorithm. L.. 348 Leclerc. 237. 276. J-L... Iterated logarithm Longest simple path. 140. 341 Many-one. 195-197. J. 348 Leader. 167. 202. 347 Minimax principle. 168. 348 Macroprocessor. 13-14. S. 78 357 Linear: algebra. 211 maj. 348 Knuth-Morris-Pratt's algorithm.R. 275. 301 equations..E. 115 Mersenne. 7.A.. 275 Melhom. Multiplication of matrices. J. 323 Metropolis.. 20-21 Literal. 348 Lewis. 187. 272 Memory space. See Chained matrix multiplication. D. 104. 237 equivalence. 300 lisgraph. See Canonical object Labyrinth.. 120-124.B. 276. G.Index Kerr. 351 Micali. 268 maj 2. 341 Konig.. Strassen's algorithm. 213.. 254. 346 Levin. 276... E.H. 35. 139 merge. 351 Level.. 275. Unitary matrix McDiarmid. 224... 343 k-promising. G. K. 276. 275. 276. 29. C. C. 78. 104 Knuth. 348. 348 Left-hand child. 216. F.M. 35. Determinant of matrix. 36. IN. 336 lowest. 203-204 Lanczos.. 238-240. 346 mod. 5. 204. 252-253.. 275. 269 Majority element.. 336. 274. 348 Lawler. 345 Michie. 346. E.. 345 Matrix. 13-14. G. 144. 103. 347. 274. 348 Lenstra. 215 Knapsack. See: Choice of leader Leaf. 315 Los Alamos. 276. 78. 274. 172 List. 204.J. 120. 78 Limit. 119. 271 Kilian.S. 124-132.. 40. 128. 222 Koksoak.

-L. 5. 222. 166. 35. 342. G. 27. 276. 143. Polynomial arithmetic a la russe. 349 Pointer.. 205. 281. 275. 209-211.. 322 See also Numeric integration. 330-332 Optimal search tree. 343. 216. 102. 278. Unbiased probabilistic algorithm Probabilistic counting. J. 116. 6. 10 Newton's method. R. 78 Operations on asymptotic notation. S. 205-211..C.. 315-335. 309-314 reductions. 332-335. 124 classic. 320 . 174. 170. E. 212-222 p-correct probabilistic algorithm.I. See also Large integer arithmetic. J. See Biased probabilistic algorithm. 228. 346 Napier. 208 Preorder. See Theory of numbers Numeric integration: Monte Carlo. 333 Nontrivial factor. 140.H. 117 Planar graph. 239-240 pivot.Index 358 Monet. 144-159.. 228-234. 141. 349 Nim. 345 Moore. 349 Pivot. 103. 24 Nebut. 228-237.. 276. J. 315... 207 Optimality. 349 Nilsson. 247. 51. 343 Morris. See Equivalence. V. 302-308 Munro. 92 Primality testing. 349 Precomputation. 298 Priority list. 277 n-ary-node. 25. 275. J. 286.. 45 Newman. 316 Polynomial arithmetic: division. J. See Eventually nondecreasing function Non-determinism. p-correct probabilistic algorithm. 36. 291. 332-335 NP-complete. 167-168. 248 Ofman. Reduction Pomerance. 222. 349 parent. 62 Pattern. 3. 263.M. 336. 275 Objective function. 232. Las Vegas algorithm. 315 Nievergelt. 227. 43-45 Optimal graph colouring. 347 Omega (a). 227. 318 Pan. 275 Probability of success. 2-4. Y.. 230-232. 344.S.. 104. 237. 35. 215. 309-314 evaluation. 4 Program. C. 276. 35. 337. 232-237. 79-80 obstinate. 286. L.C. 205 Pascal's triangle. 37 P. V. 349 Polynomial algorithm. 23 Pascal (computer language). N. 222. 2. 291. 269-271. Random permutation Pi (it). J. See Generation of Permutation. 136. 209-211. 6. 1. C. 278 of matrices.. D. 275 multiple.. 256. 170. 349 Percolate.. 263 Peralta. 30. 140. 41 One-way equality. 140. 91 percolate.H. 204. 308-314 Polynomial. 204. 20. 153. 9. 348. 85-87. 140. 208 Prim. 79 See also k-promising Proof space. 27.R. 35. 124-132. 6. 2 Promising. 343. 347 Newton's binomial. 332 Pohl.. P. 348 Multiplication.F. 2 Programming language. 240-242. 26. 13-14.. 349 Prim's algorithm. Consistent probabilistic algorithm.. 349 Papadimitriou. 35. 341. 231 Numerical probabilistic algorithm. 120. 222. 140. Monte Carlo algorithm. 1. 349 Nested asymptotic notation. 27 Permutation. 8-17. 320. 224. Quasi Monte Carlo Montgomery. 310 Monier. R. 208 Pratt. 204. 291. 208 Postorder. 349 postnum. 124. 333 Problem... 20 Pollard. Numerical probabilistic algorithm. 324-332 NP-completeness.L. 146-150. 290-291 Pippenger. 336-337 Number theory. 336.J. 334 NP. 281 Principle of invariance. 337. 334 See also Certificate of primality primeND. 349 Moore. J. 336 Non-deterministic algorithm. 336. 35. 276. N. 313. 38 Principle of optimality. 167. 106. 334 Principal root of unity. 35. 225. 262-274. 348.. 143. 13-14. 211-222 Preconditioning. 274. 136.190-194 Node. 204. 278 prenum. 136. 342. 199 Probabilistic algorithm. 284-286. 3. 132-133. 133. 286 multiplication. 154-159. 256. 167. 34. 336. Sherwood algorithm. 33. 28-30. 291 interpolation. 204. 193. 275 trapezoidal. 349 Monte Carlo algorithm. 321. 348 Nemhauser. 173. 180. 276. 154-159. 343 Monic polynomial. 211. 146 Path compression. 21 node.... 21 Nondecreasing. See Principle of optimality Order of (0).

311-313 Scheduling. 6. 228. 242-245 record. 241 Right-hand child. 154-159. 115. 347 . 75 Rank. 276 Shakespeare. L. 8.-M. 140. 350 Robyn Hode. Jr... 270. M. 79. 257 sieve. See coprime repeatMC. 189 Rudin. 234 Purdom P. See Asymptotic recurrence. J. 104. 290 Schrage. 65-78 return. 226.S.B.L. W. E. 315-335 polynomial problems. 290. 350 Schonhage-Strassen algorithm. 7. 290. 271-273.M. 350 Robert. E. D. 52. 342 Scaling factor. 347 Roman figures. J. 345 SAT.. 60-63. 227 See also Binomial queue Quicksort.Index Proof system. 139-140 Schonhage. 146 Set difference. Inhomogeneous recurrence. 291. 276. 19.E. 304-308 matrix problems. 336. 35. 349 Rabin's algorithm. 36.W. 275. 224... 78. 23 Rooted tree. 239 Selection sort. 274 Shing. 211-222 in a graph. R. A. 275 Queens. 336 Saxe. A. 350 Search tree. 110 series. 24 359 Ring of processors.. See Satisfiability Satisfiability. M. 275. 337. 204. 343 Schwartz. 104. 13. 78. 78.. 260 Quasi Monte Carlo. 349 Relatively prime. W. 341 Runge. 232. 350 Rumely... 349 Radix sort. 302-304 NP-complete problems. See Alpha-beta pruning pseudomed. B. 35.. 207 Sedgewick. 275 Pseudorandom walk. 252 residue. 320 Pruned. 35. J. 23-25 Rosenthal. 320 Pseudorandom generation.. 317. 171-184 in a sorted array. 44 Set equality. 106 Reduction: linear. 301 nonresidue. Resolution of recurrences Recursivity. 242-245 in a tree. 266 Resolution of recurrences. 238-247. 140. 350 Sande. C. 309-314 Reed. 325. 276.. 80 selectionRH. 275. 318 polynomial. 141. 255 Root of a tree. 348 Reflexive transitive closure.M. 293-295 quicksort. 270... 304-308 Regular expression. I. 13. G. 316. 167 Reingold. 347. 347 Sherwood algorithm. 314-315 graph problems.R. 350 Seed of a pseudorandom generator.. 275. 350 Sahni. 170-171. 290.. 273 Range transformation. 7 Selection.. 117 Rabin. 350 Sherman.. 122-124. J. 207 Searching: for a string. 235 Shallit. 276 Rabiner. J. 188. 204.S. 182. 350 Rubik's Cube. 276. 342 Robson. 326-332. 226. 277 rootLV. 349 Quadratic: algorithm. 116-119. 153. 350 Rytter. Homogeneous recurrence. 199. 275. 228 Rodeh. AT. 291. 167.. 25. S. 254 rootLV2. 154-159. 350 Schwartz. See kth smallest element selection.R. 239 Pseudoprime.. R. 347.. 276. 227. 25... See Decision tree Pruning. 252. 276. 240. 140. 247. 92-100. 34. 276.. 318 sense of Turing. 308-309. 249 Queue. 300 many-one. 167. See Binary search in a sorted list. 222. 261 select. 122 Pseudomedian. 15. 260-262 Rivest. 35. 342 Shanks.. 343.A. R. L..O. 3 RH. 291. A. 204. 295-299 Sequential search... 272 Random permutation.. 120 Selection function. 317 Reductions among: arithmetic problems. 293 Random function. 54. 342 Shamir. 250 QueensLV. 347. 238. 3 Recurrence. 350 Shamos. M.

.. 78 Transformation: of the domain.M. 150-153. 27. 86. 336 Telephone switching. 21.M. 151 Special path. A. Transformation sort Source. 277-291 function. 35. 123 Tijdeman.A. 43 Threshold for divide-and-conquer. 103. 204. 276. 150 Sparse graph. 275 Sink. 226. 213-222 Tarjan. 211-213. 22. 199-202. 23 Sift-down. 15. 275 Steele. 19.. 280 Signature. 342 Standish. 320 Strongly connected.. See Numeric integration Traub. 108. 273 Simple path.S. xiv. 167. 140. 121. 39. 55-56. 211 Text processing. 144-146 Towers of Hanoi. Reduction Turing machine. 182. 138 Size of instance. 277 sort. Radix sort. See Amplification of stochastic advantage Stochastic preconditioning. 74. H. Traversal of tree treenode. 87-92. 190 Tournament. I. 302 Trip. 319 See also Euclidean travelling salesperson Traversal of tree. 336 Transformed domain. 69. 107-109 ultimate.J. Rooted tree. 342 shuffle. V. 305. 351 2-edge-connected. 270.J. See Ad hoc sorting. 87. 87. J. 204. 301 Sobol'. 140. Comparison sort. 91. 211 Theory of numbers. 345. 337.F. 350 Sort.. 35.Index 360 Shortest path. countsort. 153 Shub. 20 Tally circuit. Searching in a tree. 256. 240-242 Stockmeyer. 350 Stirling's formula. 168. 275. 199 Stanat. 350. 104.A. M. Preorder. 35.W. Batcher's sorting circuit. R. 132. 203. 337.F.. Balanced tree. 315 Simplex.. 137 Tamura. 134-136 Simula. 343. 315. 350 Stone. 25. 19. 304 Syntactic analysis. 257 modulo p. J. 277 Transformed function. 227 Simplification. 349 Stinson. 140. 350 Statistical test.. Quicksort... 350 Strassen. 178-179.M. 64. 27 sift-up. 102.. 28. 308 String. 275. 167. Lexicographic sorting. Binary tree.. 351 See also Equivalence. 326-327. 275.. 139-140. Heapsort. 301 tablist. 279. 345. 304-308 Shortest simple path. 348. 159-162. See k-smooth integer Smooth problem. Searching Tree. 106 determination. 233 Stochastic advantage. See Bicoherent 2-3 tree. 91 Switch. 103-104. 28. 35 Tukey. 168. 301 Smooth integer. 153. 344 Turing. N. 179-182 Strongly quadratic. Minimum spanning tree. 204. 78. R. 349. J. Y. 35. E. 137.. 36. 222. 28-30. 30. 277 Trapezoidal algorithm. Topological sorting.. 341 sift-down.. 132-133. 290. 140. See Decision tree TRS-80.. 35.M. 252-256 Stack. 23 Triangle inequality. 153. 291. 211.. 22 See also Ancestry in a rooted tree. See Searching Strong pseudoprime. T. 30. 37. See also Large integer arithmetic modulo n. 203 Size of circuit.E. 276 Square root. 103 Triangular matrix. 293. 336 Text editor. Decision tree. L. 351 Tautology. See Polynomial arithmetic Symmetric matrix. 350 Solovay. 350 slow-make-heap. 109-115. 168. 104. 87. 346. 276. J. R. 88-90 Splitting. Postorder. 351 Travelling salesperson. 347 Target string. 141. K. 139-140 Top-down technique. See Percolate Signal processing. 351 Timetable. 56 Smooth algorithm. See Inorder. D. 205 Szemeredi. 350 Strassen's algorithm. 325. 46. 106. 144.. 35. 141. D. 276 Threshold for asymptotic notation. 291. 107. Optimal search tree. 142 Topological sorting. 291. 301 Supra quadratic. 301 Smooth function. 275. Insertion sort. See Telephone switching Symbolic manipulation of polynomials.. 336. 128-132. 167.R. 205 Simulation. 227. 5 Sloane. 327 Turk. 349. 290. Selection sort. 241 Sibling. 342 Steiglitz.W. 35 .

D.. 168. R. 347 Wood. 347 Younger. 332. 351 Zuffellato. 351 Yoshino. 141.. 351 VAX. 21. R. J. 275. 304 Universal hashing. 344 Well-characterized problem. 275-276 Unpredictable pseudorandom generator.. 35. J. 272. 291. 276. 225 Unit cost... 273 Wagner. 193. 351 Warshall's algorithm. 308 Yao. 167. D.. 276. D. See Elementary operation Unitary matrix. 351 See also Heap. 275. 337. 78. 35.. 341. N. 276 Undecidable problem.V. 13. 167. 275.. S. S. F. 351 Wirth. 336.. D.W. 276... 196 Widget. See Decision tree Valois. 167. 222. S. 263 Worst case. 168. 342 Urbanek. 351 Williams. 291. 104. 36. 344.W. 168. 10 Winograd. H. F.A.. 346 Ultimate.. 351 Wegman.J. See Decision tree Vickery. 331 Williams. 36...N. S. 337. 343. 351 Ushiro. 317. 274. See Biconnected Unbiased probabilistic algorithm. 343 . 351 Welch. 164. 351 Yao. 351 361 Warusfel.. 290.J. 349 Ullman. 56. 228 White. J. E.. 204. 276. Y. See Analysis of algorithms Wright. A. 35. 275. P.M.. M. 204.C. 168. 236 World Series. 291. 351 Warshall.. 133. C. 144-146. D. 104. Heapsort Wilson's theorem. 153. 351 Zippel. 347 Valid. A. 140. 337 Undirected graph. 78..W..Index Ulam.F. 245-247. 348 wordcnt. 275.. 35. 168.W. 140.E.. See Threshold for divide-and-conquer Unarticulated.. U. 351 Virtual initialization. 351 Verdict. 13 Vazirani. 276. 346 Wright. 8. 9. 171-176 Uniform random generation.D. 291..H. 275.

ALGORITHMICS: Theory and Practice
GILLES BRASSARD and PAUL BRATLEY

The computer explosion has made calculations once thought impossible seem routine. However, another factor has had an even more important effect in extending the frontiers of feasible computation: the use of efficient algorithms. Now, this innovative new book gives readers the basic tools they need to develop their own algorithms, in whatever field of application they may be required!

CONTENT HIGHLIGHTS:
Concentrates on the techniques needed to design and analyze algorithms.
Details each technique in full.
Illustrates each technique with concrete examples of algorithms taken from such different applications as optimization, symbolic computation, numerical analysis, linear algebra, cryptography, artificial intelligence, operations research, and computing in the humanities, among others.
Presents real-life applications for most algorithms.
Contains approximately 500 exercises, many of which call for an algorithm to be implemented on a computer so that its efficiency may be measured experimentally and compared to the efficiency of alternative solutions.

PRENTICE HALL, Englewood Cliffs, NJ 07632
ISBN 0-13-023243-2