Linear time
In computational complexity, an algorithm is said to take linear time, or O(n) time, if the time it requires is proportional to the size of the input, which is usually denoted n. Put another way, the running time increases linearly with the size of the input. For example, a procedure that adds up the elements of a list requires time proportional to the length of the list.
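As an illustration, here is a minimal Python sketch of such a linear-time summation (the function name sum_elements is chosen here for exposition):

```python
def sum_elements(values):
    """Add up the elements of a list in O(n) time.

    The loop body runs once per element, so the total work
    grows linearly with the length of the input list.
    """
    total = 0
    for v in values:  # one constant-time step per element
        total += v
    return total

print(sum_elements([3, 1, 4, 1, 5]))  # prints 14
```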
This description is slightly inaccurate, since the running time can deviate significantly from exact proportionality, especially for small n. Technically, it is only necessary that, for all sufficiently large n, the algorithm takes more than a·n time and less than b·n time for some positive real constants a and b. For more information, see the article on Big O notation.
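Written out, the condition on the running time T(n) can be stated as follows (T and the threshold n0 are notation introduced here for exposition):

```latex
% T(n) is linear if there exist positive constants a, b
% and a threshold n_0 such that the bounds hold beyond it:
\exists\, a, b > 0,\ \exists\, n_0 :\quad
a\,n \;\le\; T(n) \;\le\; b\,n \quad \text{for all } n \ge n_0 .
```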
Linear time is often viewed as a desirable attribute for an algorithm, and much research has been invested in creating algorithms that achieve (nearly) linear time or better. This research includes both software and hardware approaches. In the case of hardware, some algorithms that can never achieve linear time in the standard model of computation can nevertheless run in linear time on hardware technologies that exploit parallelism; an example is associative memory.
For many sorting algorithms, for example, there exists an ordering of the input elements (such as an already sorted list, in the case of insertion sort) on which the algorithm executes in linear time. In the general case, however, no comparison-based sorting algorithm can perform better than O(n log(n)). Such a proof of lower-bound complexity is expressed with Big Omega notation; a general-purpose comparison sort is said to be Ω(n log(n)). Likewise, it can be shown that finding the maximum of a set of n elements is Ω(n), as one must logically perform at least (n-1) comparisons to ascertain the largest element.
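A brief Python sketch makes the Ω(n) bound for finding the maximum concrete; the function below (find_max is an illustrative name) performs exactly n - 1 comparisons:

```python
def find_max(values):
    """Return the largest element using exactly len(values) - 1 comparisons."""
    if not values:
        raise ValueError("empty input has no maximum")
    largest = values[0]
    for v in values[1:]:  # n - 1 iterations, one comparison each
        if v > largest:
            largest = v
    return largest

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9
```

Every element other than the first must take part in at least one comparison, since an element never compared could still be the maximum; this is the intuition behind the (n-1) lower bound.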
See also: Polynomial time