In working with the big Oh notation, we cavalierly discard the multiplicative constants. The functions f(n) = 0.001n^2 and g(n) = 1000n^2 are treated identically, even though g(n) is a million times larger than f(n) for all values of n. The method behind this madness is illustrated by the figure below, which tabulates the growth rates of several functions arising in algorithm analysis, on problem instances of reasonable size. Specifically, it shows how long algorithms that perform f(n) operations take to complete on a fast computer where each operation takes one nanosecond (10^-9 seconds). Study the table for a few minutes and the following conclusions become apparent:
Figure: Growth rates of common functions measured in nanoseconds
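Such a table is easy to regenerate for any problem size of interest. The sketch below assumes one nanosecond per operation, as in the text; the particular functions chosen (lg n, n, n lg n, n^2, 2^n) are an illustrative selection of common running times, not necessarily the exact rows of the figure.

```python
import math

NS_PER_SECOND = 1e9  # the text's model: each operation takes one nanosecond

# Common running-time functions arising in algorithm analysis
# (illustrative selection; the figure's exact rows are not shown here).
FUNCTIONS = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: n,
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: n ** 2,
    "2^n":    lambda n: 2 ** n,
}

def seconds_for(f, n):
    """Seconds for an algorithm doing f(n) operations at 1 ns each."""
    return f(n) / NS_PER_SECOND

# A quadratic algorithm on n = 1,000,000 items performs 10^12 operations,
# which takes 10^12 ns = 1000 seconds (roughly 17 minutes).
print(seconds_for(FUNCTIONS["n^2"], 10**6))
```

Evaluating each function over a range of n quickly shows why the exponential row dominates the table: 2^n is already astronomical long before n reaches the sizes where n^2 becomes painful.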
The bottom line is that even by ignoring constant factors, we can get an excellent idea of whether a given algorithm will be able to run in a reasonable amount of time on a problem of a given size. An algorithm whose running time is 10^-9 · n^2 seconds will beat one whose running time is 10^-3 · n seconds only so long as n < 10^6. Such enormous constant factor differences between algorithms occur in practice far less frequently than such large problems do.
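The crossover can be checked directly. The constants below (10^-9 for the quadratic algorithm, 10^-3 for the linear one) are illustrative choices realizing the million-fold constant factor gap the text describes; any pair with that ratio gives the same crossover at n = 10^6.

```python
# Illustrative constants: the quadratic algorithm has a million-times
# smaller constant factor than the linear one (10^-9 vs. 10^-3).
def quadratic_time(n):
    return 1e-9 * n * n   # fast constant, worse growth rate

def linear_time(n):
    return 1e-3 * n       # huge constant, better growth rate

# Below the crossover (n < 10^6) the quadratic algorithm wins:
assert quadratic_time(10**5) < linear_time(10**5)
# Beyond it, the linear algorithm wins despite its huge constant:
assert quadratic_time(10**7) > linear_time(10**7)
```

Solving 10^-9 · n^2 < 10^-3 · n gives n < 10^6 directly: growth rate eventually trumps any fixed constant factor.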