        Next: Logarithms Up: Introduction to Algorithms Previous: The Big Oh Notation

# Growth Rates

In working with the big Oh notation, we cavalierly discard the multiplicative constants. The functions f(n) = 0.001 n^2 and g(n) = 1000 n^2 are treated identically, even though g(n) is a million times larger than f(n) for all values of n. The method behind this madness is illustrated by the figure below, which tabulates the growth rate of several functions arising in algorithm analysis, on problem instances of reasonable size. Specifically, the figure shows how long it takes for algorithms that use f(n) operations to complete on a fast computer where each operation takes one nanosecond (10^-9 seconds). Study the table for a few minutes and the following conclusions become apparent:

Figure: Growth rates of common functions measured in nanoseconds

• All of these algorithms take about the same amount of time for n=10.
• The algorithm whose running time is n! becomes useless well before n=20.
• The algorithm whose running time is 2^n has a greater operating range, but it becomes impractical for n > 40.
• The algorithm whose running time is n^2 is perfectly reasonable up to about n=100, but it quickly deteriorates with larger inputs. For n > 1,000,000 it is likely to be hopeless.
• Both the n and n lg n algorithms remain practical on inputs of up to one billion items.
• You can't hope to find a real problem where a lg n algorithm is going to be too slow in practice.
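The conclusions above are easy to check directly. The following is a minimal sketch (assuming one operation per nanosecond, and taking lg to mean log base 2) that computes how long each of the tabulated growth rates takes for a given input size:

```python
import math

NANOSECOND = 1e-9  # each operation takes one nanosecond

def seconds_for(ops):
    """Time in seconds to perform `ops` operations at 1 ns each."""
    return ops * NANOSECOND

# The growth rates from the table (lg = log base 2).
growth = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: n,
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: n ** 2,
    "2^n":    lambda n: 2.0 ** n,
    "n!":     lambda n: float(math.factorial(n)),
}

for n in (10, 20, 30, 40, 50):
    row = ", ".join(f"{name}: {seconds_for(f(n)):.3g} s"
                    for name, f in growth.items())
    print(f"n = {n}: {row}")
```

Running this reproduces the pattern in the table: at n=10 every entry is a fraction of a microsecond, while n! at n=20 already takes decades of computer time.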

The bottom line is that even by ignoring constant factors, we can get an excellent idea of whether a given algorithm will be able to run in a reasonable amount of time on a problem of a given size. An algorithm whose running time is n^3 seconds will beat one whose running time is 1,000,000 n^2 seconds only so long as n < 1,000,000. Such enormous constant-factor differences between algorithms occur in practice far less frequently than such large problems do.
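As a sanity check on that crossover (taking n^3 seconds and 1,000,000 n^2 seconds as the two hypothetical running times), a minimal sketch:

```python
def quadratic_big_constant(n):
    """Hypothetical running time: 1,000,000 * n^2 seconds."""
    return 1_000_000 * n ** 2

def cubic(n):
    """Hypothetical running time: n^3 seconds."""
    return n ** 3

# The cubic algorithm wins only while n < 1,000,000; beyond that,
# its extra factor of n overwhelms the rival's huge constant.
crossover = 1_000_000
print(cubic(crossover - 1) < quadratic_big_constant(crossover - 1))  # True
print(cubic(crossover + 1) > quadratic_big_constant(crossover + 1))  # True
```

The two curves meet exactly at n = 1,000,000, where both evaluate to 10^18 seconds.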

Algorithms
Mon Jun 2 23:33:50 EDT 1997