Reduction (complexity)
In computability theory and computational complexity theory, a reduction is a transformation of one problem into another problem. Depending on the kind of transformation allowed, reductions can be used to define complexity classes on a set of problems.
Roughly speaking, if a problem A reduces to a problem B, this means A is no harder than B, and B is no easier than A. We write A ≤ B, usually with a subscript on the ≤ to indicate the type of reduction being used.
Definition
Given two subsets A and B of N and a set of functions F from N to N which is closed under composition, A is called reducible to B under F iff
- <math>\exists f \in F \mbox{ . } \forall x \in \mathbb{N} \mbox{ . } x \in A \Leftrightarrow f(x) \in B</math>
We write
- <math>A \leq_{F} B</math>
Let S be a subset of P(N) and ≤ a reduction; then S is called closed under ≤ if
- <math>\forall s \in S \mbox{ . } \forall A \subseteq \mathbb{N} \mbox{ . } A \leq s \Rightarrow A \in S</math>
A subset A of N is called hard for S if
- <math>\forall s \in S \mbox{ . } s \leq A</math>
A subset A of N is called complete for S if A is hard for S and A is in S.
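As a toy illustration of the definition (taking F to be, say, the set of all computable functions), the set of even numbers reduces to the set of odd numbers via the function f(x) = x + 1. A minimal sketch, with illustrative names:

```python
# Toy many-one reduction: A = even numbers, B = odd numbers.
# The reduction f(x) = x + 1 satisfies: x in A  <=>  f(x) in B.

def in_A(x):
    """Membership in A: x is even."""
    return x % 2 == 0

def in_B(x):
    """Membership in B: x is odd."""
    return x % 2 == 1

def f(x):
    """The reduction function."""
    return x + 1

# Spot-check the defining property on a sample of inputs.
assert all(in_A(x) == in_B(f(x)) for x in range(1000))
```

Given a decision procedure for B, this yields one for A: to decide whether x ∈ A, compute f(x) and ask whether f(x) ∈ B.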
Notes
A reduction is a preordering, that is, a reflexive and transitive relation, on P(N), where P(N) is the power set of the natural numbers (reflexivity holds when F contains the identity function, and transitivity follows from closure under composition).
A problem is complete for a complexity class if every problem in the class reduces to that problem, and it is also in the class itself. In this sense the problem represents the class, since any solution to it can, in combination with the reductions, be used to solve every problem in the class.
Examples
- To show that a decision problem is undecidable a computable function can be used to transform it into another decision problem which is already known to be undecidable.
- The complexity classes P, NP and PSPACE are closed under polynomial-time reductions.
- The complexity classes L, NL, P, NP and PSPACE are closed under log-space reduction.
Gentle Introduction
Often we find ourselves trying to solve a problem that is similar to one we have already solved. In these cases, a quick way of solving the new problem is often to transform each instance of the new problem into an instance of the old problem, solve it using our existing solution, and then translate the answer back. This is perhaps the most obvious use of reductions.
Another, more subtle use is this: suppose we have a problem that we've proven is hard to solve, and we have a similar new problem. We might suspect that it, too, is hard to solve. We argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard.
A very simple example of a reduction is from squaring to multiplication. Suppose all we know how to do is add, subtract, and take squares. We can use this knowledge, combined with the following formula, to obtain the product of any two numbers:
- <math>2 \times a \times b = (a + b)^2 - a^2 - b^2</math>
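This reduction can be sketched in code. The `square` helper below stands in for the assumed squaring primitive, and halving the (always even) quantity 2ab is written as integer division for brevity; all names are illustrative:

```python
# Multiply two integers using only addition, subtraction, halving,
# and a squaring primitive, via  2ab = (a + b)^2 - a^2 - b^2.

def square(x):
    """Stands in for the assumed squaring primitive."""
    return x * x

def multiply(a, b):
    """Product of a and b without using a multiplication primitive."""
    # 2ab is always even, so integer halving is exact.
    return (square(a + b) - square(a) - square(b)) // 2
```

For example, `multiply(6, 7)` computes (13² − 36 − 49) / 2 = 84 / 2 = 42.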
We also have a reduction in the other direction; obviously, if we can multiply two numbers, we can square a number. This seems to imply that these two problems are equally hard. This kind of reduction corresponds to Turing reduction.
However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. In this case, even if we're allowed to use all the basic arithmetic operations, including multiplication, no reduction exists in general, because we may have to compute an irrational number like <math>\sqrt{2}</math> from rational numbers. Going in the other direction, however, we can certainly square a number with just one multiplication, only at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. This corresponds to many-one reduction.
Types and applications of reductions
As described in the example above, there are two main types of reductions used in computational complexity: the many-one reduction and the Turing reduction. Many-one reductions map instances of one problem to instances of another; Turing reductions compute the solution to one problem, assuming the other problem is easy to solve. Many-one reduction is a weaker (more restrictive) kind of reduction than Turing reduction: every many-one reduction is also a Turing reduction, but not conversely. Weaker reductions are more effective at separating problems, but they have less power, making reductions harder to design.
However, in order to be useful, reductions must be easy. For example, it's quite possible to reduce a difficult-to-solve NP-complete problem like the boolean satisfiability problem to a trivial problem, like determining if a number equals zero, by having the reduction machine solve the problem in exponential time and output zero only if there is a solution. But this doesn't achieve much, because even though we can solve the new problem, performing the reduction is just as hard as solving the old problem.
Therefore, the appropriate notion of reduction depends on the complexity class being studied. When studying the complexity class NP and harder classes, polynomial-time many-one reduction is used. When defining classes in the polynomial hierarchy, polynomial-time Turing reduction is used. When studying classes within P (such as NC and NL), log-space reduction is used. Reduction is also used in computability theory to show whether problems are or are not solvable by machines at all; in this case, reductions are restricted to computable functions.