Second-order logic
In mathematical logic, second-order logic is an extension of either propositional logic or first-order logic that contains variables in predicate positions (rather than only in term positions, as in first-order logic), together with quantifiers binding them. For example:
- <math>\forall F\ (F(\mathrm{jones}) \lor \neg F(\mathrm{jones}))</math>
which might express the principle of bivalence with respect to Jones: For every property, either Jones has it or he doesn't.
Second-order logic permits various interpretations; it is frequently understood as quantifying over subsets of the domain, or over functions from the domain into itself, rather than only over individual members of the domain. Thus, for example, if the domain is the set of all real numbers, one can assert in first-order logic the existence of an additive inverse of each real number by writing
- <math>\forall x\ \exists y\ (x+y=0)</math>
but one needs second-order logic to assert the least-upper-bound property of the real numbers:
- <math>\forall A\subseteq R\ [\cdots]</math>
and insert in place of the dots a statement that if A is nonempty and has an upper bound in R, then A has a least upper bound in R.
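One way to fill in the dots is the following sketch (one of several equivalent formulations, with the usual order <math>\le</math> on R taken as given):
- <math>\forall A\subseteq R\ [(\exists x\ x\in A \land \exists z\ \forall x\,(x\in A \rightarrow x\le z)) \rightarrow \exists z\,(\forall x\,(x\in A \rightarrow x\le z) \land \forall w\,(\forall x\,(x\in A \rightarrow x\le w) \rightarrow z\le w))]</math>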
Why second-order logic is not reducible to first-order logic
An optimist might attempt to reduce second-order logic to first-order logic in the following way. Expand the domain from the set of all real numbers to the union of that set with the set of all sets of real numbers. Add a new binary predicate to the language: the membership relation. Then sentences that were second-order become first-order.
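As a sketch of the intended translation (the unary predicate <math>\mathrm{Set}</math> and the binary membership predicate are introduced here only for illustration): a second-order sentence of the form
- <math>\forall A\subseteq R\ \varphi(A)</math>
becomes the first-order sentence
- <math>\forall a\ (\mathrm{Set}(a) \rightarrow \varphi'(a))</math>
in which the variable a ranges over the expanded domain and <math>\varphi'</math> is obtained from <math>\varphi</math> by rewriting each subformula of the form <math>x\in A</math> with the new membership predicate applied to x and a.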
But notice that the domain was asserted to include all sets of real numbers. That requirement has not been reduced to a first-order sentence! But might there be some way to accomplish the reduction? The classic Löwenheim-Skolem theorem entails that there is not. That theorem implies that there is some countably infinite subset of R, whose members we will call "internal numbers", and some countably infinite set of sets of internal numbers, whose members we will call "internal sets", such that the domain consisting of internal numbers and internal sets satisfies all of the first-order sentences satisfied by the domain of real-numbers-and-sets-of-real-numbers. In particular, it satisfies a sort of least-upper-bound axiom that says, in effect:
- Every nonempty internal set that has an internal upper bound has a least internal upper bound.
Countability of the set of all internal numbers (together with the fact that they form a densely ordered set) implies that this set does not satisfy the full least-upper-bound axiom. Countability of the set of all internal sets implies that it is not the set of all subsets of the set of all internal numbers (since Cantor's theorem implies that the set of all subsets of a countably infinite set is uncountably infinite).
Yet another profound difference between first-order and second-order logic is the topic of the next section.
Second-order logic has no sound, complete, and effective proof system
It is a corollary of Gödel's incompleteness theorem that one cannot have any notion of provability of second-order formulas that simultaneously satisfies these three desired attributes:
- (Soundness) Every provable second-order sentence is universally valid, i.e., true in all domains.
- (Completeness) Every universally valid second-order formula is provable.
- ("Effectiveness") There is a proof-checking algorithm. (This third condition is often taken so much for granted that it is not explicitly stated.)
The reason, in outline, is that the second-order Peano axioms characterize the natural numbers up to isomorphism, so a sound, complete, and effective proof procedure would yield a mechanical enumeration of all arithmetical truths, contradicting the incompleteness theorem. This is sometimes expressed by saying that second-order logic does not admit a complete proof theory.
In this respect second-order logic differs from first-order logic, and this is at least one of the reasons logicians have shied away from its use. (Quine occasionally pointed to this as a reason for thinking of second-order logic as not logic, properly speaking.) As George Boolos has pointed out, though, this incompleteness enters only with polyadic second-order logic: logic quantifying over n-place predicates for n > 1. Monadic second-order logic, restricted to one-place predicates, is not only complete and consistent but decidable: there is a mechanical method that determines, for any sentence, whether it is valid. In this respect monadic second-order logic fares better than polyadic first-order logic: monadic first-order logic is complete, consistent and decidable, but polyadic first-order logic, though consistent and complete, is no longer decidable (see the halting problem).
The history and disputed value of second-order logic
When predicate logic was invented by Frege, he did use different variables to distinguish quantification over objects from quantification over properties and sets; but he did not see himself as doing two different kinds of logic. After the discovery of Russell's paradox it was realized that something was wrong with his system. Eventually logicians found that restricting Frege's logic in various ways—to what is now called first-order logic—eliminated this problem: sets and properties cannot be quantified over in first-order logic alone. The now-standard hierarchy of orders of logics dates from this time.
It was found that set theory could be formulated as an axiomatized system within the apparatus of first-order logic (at the cost of several kinds of completeness, but nothing so bad as Russell's paradox), and this was done (see Zermelo-Fraenkel set theory), as sets are vital for mathematics. Arithmetic, mereology, and a variety of other powerful logical theories could be formulated axiomatically without appeal to any more logical apparatus than first-order quantification, and this led to a general decline in work in second-order (or any higher-order) logic.
This rejection was actively advanced by some logicians, most notably W. V. Quine. Quine advanced the view that in predicate-language sentences like Fx the "x" is to be thought of as a variable or name denoting an object and hence can be quantified over, as in "For all things, it is the case that . . ." but the "F" is to be thought of as an abbreviation for an incomplete sentence, not the name of an object (not even of an abstract object like a property). For example, it might mean " . . . is a dog." But it makes no sense to think we can quantify over something like this. (Such a position is quite consistent with Frege's own arguments on the concept-object distinction).
In recent years second-order logic has made something of a recovery, buoyed by George Boolos' interpretation of second-order quantification as plural quantification over the same domain of objects as first-order quantification. Boolos furthermore points to the nonfirstorderizability of sentences such as "Some critics admire only each other" and "Some of Fianchetto's men went into the warehouse unaccompanied by anyone else."
Power of the existential fragment
The existential fragment of monadic second-order logic (EMSO) consists of the MSO formulas in which all second-order quantifiers are existential (the first-order part is unrestricted). Over words <math>w \in \Sigma^*</math>, every MSO formula can be converted into an equivalent deterministic finite automaton, and every such automaton can in turn be described by an EMSO formula; thus EMSO and MSO are equivalent over words. The same result holds for trees as input. Over the finite grid <math>\Sigma^{++}</math>, however, this property no longer holds, since the languages recognized by tiling systems are not closed under complement. Because a universal quantifier is equivalent to a negated existential quantifier, which cannot in general be expressed, alternations of universal and existential quantifiers generate larger and larger classes of languages over <math>\Sigma^{++}</math>.
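As a concrete illustration (a minimal sketch, not the general MSO-to-automaton construction): the word property "the number of occurrences of a is even" over the alphabet {a, b} is MSO-definable, and the equivalent deterministic finite automaton needs only two states, one per parity. The following Python sketch (function name ours) runs that automaton.

```python
# Minimal sketch: a two-state deterministic finite automaton for the
# MSO-definable word property "the number of a's is even" over {a, b}.
# It illustrates, for one formula, the MSO <-> finite automaton
# correspondence over words; it is not the general construction.

def accepts_even_as(word: str) -> bool:
    """Accept the word iff it contains an even number of 'a's."""
    state = 0  # state 0: even number of a's so far (accepting); state 1: odd
    for symbol in word:
        if symbol == "a":
            state = 1 - state  # reading 'a' flips the parity
        # reading 'b' leaves the state unchanged
    return state == 0

if __name__ == "__main__":
    for w in ["", "ab", "aab", "abab", "baab"]:
        print(repr(w), accepts_even_as(w))
```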
Applications to complexity
The expressibility of various forms of second-order logic is intimately tied to computational complexity theory. In particular:
- NP is the set of languages expressible by existential second-order logic.
- co-NP is the set of languages expressible by universal second-order logic.
- PH is the set of languages expressible by second-order logic.
- PSPACE is the set of languages expressible by second-order logic with an added transitive closure operator.
- EXPTIME is the set of languages expressible by second-order logic with an added least fixed point operator.
Relationships among these classes directly impact the relative expressiveness of the logics; for example, if PH = PSPACE, then adding a transitive closure operator to second-order logic does not make it any more expressive.
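As an illustration of the first item (this characterization of NP is Fagin's theorem), graph 3-colorability, a standard NP problem, can be expressed over graphs with edge relation E by an existential second-order sentence along the following lines (the color predicates R, G, B are chosen here for illustration):
- <math>\exists R\ \exists G\ \exists B\ [\forall x\,(R(x)\lor G(x)\lor B(x)) \land \forall x\,\forall y\,(E(x,y) \rightarrow \neg((R(x)\land R(y)) \lor (G(x)\land G(y)) \lor (B(x)\land B(y))))]</math>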