        Next: Growth Rates Up: Introduction to Algorithms Previous: Best, Worst, and Average-Case

# The Big Oh Notation

We have agreed that the best, worst, and average-case complexities of a given algorithm are numerical functions of the size of the instances. However, it is difficult to work with these functions exactly because they are often very complicated, with many little up and down bumps, as shown in the figure below. Thus it is usually cleaner and easier to talk about upper and lower bounds of such functions. This is where the big Oh notation comes into the picture.

*Figure: Upper and lower bounds smooth out the behavior of complex functions*

We seek this smoothing in order to ignore levels of detail that do not impact our comparison of algorithms. Since running an algorithm on a machine that is twice as fast cuts its running time by a multiplicative constant of two, such constant factors would be washed away simply by upgrading machines. On the RAM we ignore such constant factors anyway, and therefore might as well ignore them when comparing two algorithms. We use the following notation:
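A quick numeric sketch of this point, using made-up step counts (the functions and constants below are illustrative, not from the text): halving every running time, as a machine twice as fast would, never changes which of two algorithms wins for a given n.

```python
# Hypothetical step-count functions for two algorithms (illustrative only).
def alg_a(n):      # a quadratic-time algorithm
    return 5 * n * n

def alg_b(n):      # a linear-time algorithm with a large constant factor
    return 1000 * n

# A machine twice as fast divides every count by 2; the comparison
# between the two algorithms comes out the same either way.
for n in [10, 1000, 100000]:
    same_winner = (alg_a(n) < alg_b(n)) == (alg_a(n) / 2 < alg_b(n) / 2)
    print(n, same_winner)   # the winner is unchanged at every n
```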

• f(n) = O(g(n)) means c·g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c·g(n), for large enough n.
• f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n). Thus there exists some constant c such that f(n) is always ≥ c·g(n), for large enough n.
• f(n) = Θ(g(n)) means c₁·g(n) is an upper bound on f(n) and c₂·g(n) is a lower bound on f(n), for large enough n. Thus there exist constants c₁ and c₂ such that f(n) ≤ c₁·g(n) and f(n) ≥ c₂·g(n). This means that g(n) is a nice, tight bound on f(n).

*Figure: Illustrating the (a) O, (b) Ω, and (c) Θ notations*
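The definitions above can be written out as explicit predicates and checked numerically over a finite range of n. This is only a sketch: the witness constants c and the cutoff below are my own choices, not from the text, and any constants that preserve the inequalities would do just as well.

```python
# The O and Omega definitions as predicates, checked for n0 <= n < n_max.

def is_upper_bound(f, g, c, n0, n_max=10000):
    """f(n) = O(g(n)) witness check: f(n) <= c*g(n) for all n >= n0."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

def is_lower_bound(f, g, c, n0, n_max=10000):
    """f(n) = Omega(g(n)) witness check: f(n) >= c*g(n) for all n >= n0."""
    return all(f(n) >= c * g(n) for n in range(n0, n_max))

f = lambda n: 3 * n * n - 100 * n + 6
g = lambda n: n * n

# f(n) = Theta(n^2): c1 = 3 works as an upper bound (3n^2 >= 3n^2 - 100n + 6
# once n >= 1), and c2 = 2 works as a lower bound once n > 100.
print(is_upper_bound(f, g, c=3, n0=1))      # True
print(is_lower_bound(f, g, c=2, n0=101))    # True
```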

Got it? These definitions are illustrated in the figure above. All of these definitions imply a constant n₀ beyond which they are always satisfied. We are not concerned about small values of n, i.e. anything to the left of n₀. After all, do you really care whether one sorting algorithm sorts six items faster than another algorithm, or which one is faster when sorting 1,000 or 1,000,000 items? The big Oh notation enables us to ignore details and focus on the big picture.
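The sorting comparison can be made concrete with stylized operation counts. The constant factors here are my own illustrative choices, not measurements: a simple quadratic sort is modeled as n², and an n log n sort with a bigger constant as 20·n·log₂ n. Below the cutoff the "slower" algorithm actually wins; past it, the asymptotics take over.

```python
import math

# Stylized operation counts for two sorting algorithms (made-up constants).
quadratic = lambda n: n * n                   # e.g. a simple quadratic sort
nlogn     = lambda n: 20 * n * math.log2(n)   # n log n sort, bigger constant

for n in [6, 1000, 1000000]:
    # True means the quadratic algorithm does fewer operations at this n.
    print(n, quadratic(n) < nlogn(n))
```

At n = 6 the quadratic count is smaller; at 1,000 and 1,000,000 the n log n algorithm dominates, which is why only behavior beyond n₀ matters.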

Make sure you understand this notation by working through the following examples. We choose certain constants in the explanations because they work and make a point, but you are free to choose any constant that maintains the same inequality:

• 3n² − 100n + 6 = O(n²), because we can choose c = 3, and 3n² ≥ 3n² − 100n + 6 for all n ≥ 1.
• 3n² − 100n + 6 = O(n³), because we can choose c = 1, and n³ > 3n² − 100n + 6 whenever n > 3.
• 3n² − 100n + 6 ≠ O(n), because for any c we choose, c·n < 3n² − 100n + 6 once n is large enough.
• 3n² − 100n + 6 = Ω(n²), because we can choose c = 2, and 2n² ≤ 3n² − 100n + 6 whenever n ≥ 100.
• 3n² − 100n + 6 ≠ Ω(n³), because no constant c can keep c·n³ below 3n² − 100n + 6 for large n.
• 3n² − 100n + 6 = Ω(n), because we can choose c = 1, and n ≤ 3n² − 100n + 6 whenever n ≥ 34.
• Combining the bounds above, 3n² − 100n + 6 = Θ(n²), while 3n² − 100n + 6 ≠ Θ(n³) and 3n² − 100n + 6 ≠ Θ(n).

The big Oh notation provides for a rough notion of equality when comparing functions. It is somewhat jarring to see an expression like n² = O(n³), but its meaning can always be resolved by going back to the definitions in terms of upper and lower bounds. It is perhaps most instructive to read "=" here as meaning "is one of the functions that are". Clearly, n² is one of the functions that are O(n³).
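The "one of the functions that are" reading can be sketched as a membership test: model O(g) as the set of functions for which some constant witness exists. The finite search over n and over small integer constants below is my own simplification for illustration; the real definition quantifies over all sufficiently large n.

```python
# A hedged sketch of "f = O(g)" read as set membership: search for a
# small integer constant c witnessing f(n) <= c*g(n) over a finite range.
# (Finite ranges and the c < 100 cap are simplifications for illustration.)

def in_big_oh(f, g, n0=1, n_max=10000):
    """True if some c in 1..99 gives f(n) <= c*g(n) for all n0 <= n < n_max."""
    return any(all(f(n) <= c * g(n) for n in range(n0, n_max))
               for c in range(1, 100))

print(in_big_oh(lambda n: n * n, lambda n: n ** 3))   # n^2 is one of O(n^3): True
print(in_big_oh(lambda n: n ** 3, lambda n: n * n))   # n^3 is not in O(n^2): False
```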

Algorithms
Mon Jun 2 23:33:50 EDT 1997