Competitive analysis
Competitive analysis shows how on-line algorithms perform and demonstrates the power of randomization in algorithms.
For many algorithms, performance depends not on the values in the input but only on its size. An example of a data-dependent algorithm is quicksort, which sorts an array of elements. Such data-dependent algorithms are analysed for average-case and worst-case inputs. Competitive analysis is a way of doing worst-case analysis for on-line and randomized algorithms, which are typically data dependent.
In competitive analysis, one imagines an "adversary" (hence the name "competitive") that deliberately chooses difficult data, to maximize the ratio of the cost of the algorithm being studied to the cost of an optimal algorithm. Adversaries range in power from the oblivious adversary, which has no knowledge of the random choices made by the algorithm pitted against it, to adaptive adversaries that have full knowledge of how the algorithm works and of its internal state at any point during its operation. It is against adversaries of the first kind, which know the algorithm's description but cannot examine its random choices or its state, that randomized algorithms do well compared to deterministic algorithms. In the case of a deterministic algorithm, such an adversary can simply compute what state the algorithm must be in at any time in the future, and choose difficult data accordingly.
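The standard way of making this comparison precise (not spelled out in the text above, but implicit in the ratio just described) is to bound the algorithm's cost on every request sequence by a constant multiple of the optimal cost:

```latex
% An on-line algorithm ALG is said to be c-competitive if there is a
% constant \alpha such that, for every request sequence \sigma,
% its cost is at most c times the cost of an optimal (off-line)
% algorithm OPT on the same sequence. The adversary's goal is to choose
% \sigma so as to make the ratio ALG(\sigma)/OPT(\sigma) as large as possible.
\mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + \alpha
\qquad \text{for all request sequences } \sigma .
```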
For example, the quicksort algorithm chooses one element, called the "pivot", that is, on average, not too far from the center value of the data being sorted, and then separates the data into two piles: one containing all elements less than the pivot, and the other containing the rest. If quicksort chooses the pivot in some deterministic fashion (for instance, always choosing the first element in the list), then it is easy for an adversary to arrange the data beforehand so that quicksort performs in worst-case time. If, however, quicksort chooses some random element to be the pivot, then an adversary without knowledge of which random numbers are coming up cannot arrange the data to guarantee worst-case execution time for quicksort.
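A minimal Python sketch of the two pivot rules; the function name and the choice of "first element" as the deterministic rule are illustrative assumptions, not part of the original text:

```python
import random

def quicksort(data, randomized_pivot=True):
    """Sort a list, choosing the pivot at random or deterministically.

    With randomized_pivot=False the pivot is always the first element,
    so an adversary can supply already-sorted input and force quadratic
    behaviour; with randomized_pivot=True the adversary cannot predict
    the partition, and the expected running time stays O(n log n).
    """
    if len(data) <= 1:
        return data
    pivot_index = random.randrange(len(data)) if randomized_pivot else 0
    pivot = data[pivot_index]
    rest = data[:pivot_index] + data[pivot_index + 1:]
    smaller = [x for x in rest if x < pivot]    # pile of elements below the pivot
    larger = [x for x in rest if x >= pivot]    # the remaining elements
    return (quicksort(smaller, randomized_pivot)
            + [pivot]
            + quicksort(larger, randomized_pivot))

# The adversary's "difficult data" for the deterministic rule: sorted input,
# which puts every element into the same pile at every level of recursion.
adversarial_input = list(range(200))
assert quicksort(adversarial_input, randomized_pivot=False) == sorted(adversarial_input)
assert quicksort(adversarial_input, randomized_pivot=True) == sorted(adversarial_input)
```

Both calls return the correct result, but on sorted input the deterministic rule does quadratic work, while no single arrangement of the data is reliably bad for the randomized rule.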
The classic on-line problem first analysed with competitive analysis is the List Update problem: Given a list of items and a sequence of requests for the various items, minimize the cost of accessing the list where the elements closer to the front of the list cost less to access. (Typically, the cost of accessing an item is equal to its position in the list.) After an access, the list may be rearranged. Most rearrangements have a cost. The Move-To-Front algorithm simply moves the requested item to the front after the access, at no cost. The Transpose algorithm swaps the accessed item with the item immediately before it, also at no cost. Classical methods of analysis showed that Transpose is optimal in certain contexts. In practice, Move-To-Front performed much better. Competitive analysis was used to show that an adversary can make Transpose perform arbitrarily badly compared to an optimal algorithm, whereas Move-To-Front can never be made to incur more than twice the cost of an optimal algorithm.
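The gap between the two rules can be seen in a small simulation (illustrative only; the function names and the particular request sequence are assumptions chosen to play the adversary's role, not taken from the cited paper):

```python
def access_cost_and_update(lst, item, policy):
    """Return the cost of accessing `item` (its 1-based position) and
    rearrange `lst` in place according to `policy` ('mtf' or 'transpose')."""
    i = lst.index(item)
    cost = i + 1                                   # cost model: position in the list
    if policy == "mtf":
        lst.insert(0, lst.pop(i))                  # move accessed item to the front (free)
    elif policy == "transpose" and i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]    # swap with the preceding item (free)
    return cost

def total_cost(initial, requests, policy):
    lst = list(initial)
    return sum(access_cost_and_update(lst, r, policy) for r in requests)

n = 50
initial = list(range(n))
# Adversarial sequence for Transpose: alternate between the two items at the
# back of the list. Under Transpose they merely swap places n-1 and n, so every
# request costs about n; under Move-To-Front they quickly reach the front and
# each subsequent request costs at most 2.
requests = [n - 1, n - 2] * 200

print("Transpose cost:    ", total_cost(initial, requests, "transpose"))
print("Move-To-Front cost:", total_cost(initial, requests, "mtf"))
```

On this sequence each Transpose access costs roughly the full length of the list, while Move-To-Front pays at most 2 per access once the two requested items have reached the front, consistent with the competitive bounds quoted above.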
References
- "Amortized Efficiency of List Update and Paging Rules", Sleator and Tarjan, Communications of the ACM, Feb. 1985.