Proof that the sum of the reciprocals of the primes diverges
In the third century BC, Euclid proved the existence of infinitely many prime numbers. In the 18th century, Leonhard Euler proved a stronger statement: the sum of the reciprocals of all prime numbers diverges to infinity. Here, we present a number of proofs of this result.
The harmonic series
First, we describe how Euler originally discovered the result. He was considering the harmonic series
- <math>\sum_{n=1}^\infty \frac{1}{n} =
1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots <math>
He had already used the following "product formula" to show the existence of infinitely many primes.
- <math>\sum_{n=1}^\infty \frac{1}{n} = \prod_{p} \frac{1}{1-p^{-1}}
=\prod_{p} \left( 1+\frac{1}{p}+\frac{1}{p^2}+\cdots \right) <math>
(Here, the product is taken over all primes p; in the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes, unless noted otherwise.)
Such infinite products are today called Euler products. The product above is a reflection of the fundamental theorem of arithmetic: if the right-hand side is formally multiplied out, each term 1/n appears exactly once, because every positive integer has a unique factorization into prime powers. Of course, the above "equation" is nonsense, because the harmonic series is known (by other means) to diverge. This type of formal manipulation was common at the time, when mathematicians were still experimenting with the new tools of calculus.
Euler noted that if there were only a finite number of primes, then the product on the right would clearly converge, contradicting the divergence of the harmonic series. (In modern language, we now say that the existence of infinitely many primes is reflected by the fact that the Riemann zeta function has a simple pole at s = 1.)
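The correspondence can be made concrete for finite truncations: the product of 1/(1 − 1/p) over the primes p ≤ P equals the sum of 1/n over every n whose prime factors are all at most P, and so it dominates the harmonic partial sum up to P. The following is a minimal numerical sketch of this comparison (the helper `primes_up_to` and the chosen cutoffs are our own illustration, not part of the historical argument); both columns are seen to grow without bound.

```python
# Compare the harmonic partial sum H(P) with the Euler product truncated
# at the primes p <= P; the product always dominates and both keep growing.

def primes_up_to(limit):
    """Return the primes <= limit via a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for P in (10, 100, 1000, 10000):
    harmonic = sum(1.0 / n for n in range(1, P + 1))
    product = 1.0
    for p in primes_up_to(P):
        product *= 1.0 / (1.0 - 1.0 / p)
    print(f"P = {P:6d}   H(P) = {harmonic:8.4f}   Euler product = {product:8.4f}")
```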
First proof
Euler took the above product formula and proceeded to make a sequence of audacious leaps of logic. First, he took the logarithm of each side, then he used the Taylor series expansion for ln(1 − x) as well as the sum of a geometric series:
- <math>\ln \left( \sum_{n=1}^\infty \frac{1}{n}\right) = \ln \left( \prod_{p} \frac{1}{1-p^{-1}}\right)
= \sum_{p} \ln \left( \frac{1}{1-p^{-1}}\right) = \sum_{p} - \ln(1-p^{-1}) <math>
- <math>= \sum_{p} \left( \frac{1}{p} + \frac{1}{2p^2} + \frac{1}{3p^3} + \cdots \right) = \left( \sum_{p}\frac{1}{p} \right) + \sum_{p} \frac{1}{p^2} \left( \frac{1}{2} + \frac{1}{3p} + \frac{1}{4p^2} + \cdots \right)<math>
- <math>< \left( \sum_{p}\frac{1}{p} \right) + \sum_{p} \frac{1}{p^2} \left( 1 + \frac{1}{p} + \frac{1}{p^2} + \cdots \right) = \left( \sum_{p} \frac{1}{p} \right) + \left( \sum_{p} \frac{1}{p(p-1)} \right) < \left( \sum_{p} \frac{1}{p} \right) + C<math>
for some fixed constant C (the last sum converges, since it is dominated by the convergent sum of 1/(n(n − 1)) over all integers n ≥ 2). Since the sum of the reciprocals of the first n positive integers is asymptotic to ln(n) (that is, their ratio approaches 1 as n approaches infinity), Euler then concluded
- <math>\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots = \ln \ln (+ \infty)<math>
This "equation" appears bizarre to modern eyes; however, it is almost certain that Euler intended it to be interpreted as saying that the sum of the reciprocals of the primes less than n is asymptotic to ln(ln(n)) as n approaches infinity. It turns out this is indeed the case; Euler had reached a correct result by questionable means.
The above proof can be slightly altered to meet the demands of present-day rigour. [to come...]
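Euler's conclusion, read in the modern asymptotic sense, is easy to probe numerically. Below is a rough sketch (our own illustration, not part of any of the proofs; `primes_up_to` is the same simple sieve as in the earlier sketch) comparing the sum of the reciprocals of the primes up to n with ln ln n.

```python
# The prime reciprocal sum up to n and ln(ln(n)) grow together,
# as Euler's heuristic suggests.
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for n in (10**3, 10**4, 10**5, 10**6):
    prime_sum = sum(1.0 / p for p in primes_up_to(n))
    print(f"n = {n:8d}   sum of 1/p = {prime_sum:7.4f}   ln ln n = {math.log(math.log(n)):7.4f}")
```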
Second proof
A proof by contradiction follows.
Assume that the sum of the reciprocals of the primes converges.
Define <math>p_i<math> as the i-th prime number. Then there is a finite constant c with:
- <math> \sum_{k=1}^\infty{1\over p_{k}} = c<math>
Since the series converges, its tail sums tend to zero, so there exists a positive integer i such that:
- <math> \sum_{k=1}^\infty{1\over p_{i+k}} < {1 \over 2}<math>
Define N(x) as the number of positive integers n not exceeding x that are divisible by no prime other than the first i primes. Any such n can be written as <math>n = km^2<math> with k square-free (as can every positive integer). Since only the first i primes can divide k, there are at most <math>2^i<math> choices for k. Together with the fact that there are at most <math>\sqrt{x}<math> possible values for m, this gives us:
- <math>N(x) \le 2^i\sqrt{x}<math>
The number of positive integers not exceeding x and divisible by a prime other than the first i ones is equal to x - N(x).
Since the number of integers not exceeding x and divisible by p is at most x/p, we get:
- <math> x - N(x) < \sum_{k=1}^\infty{x\over p_{i+k}} < {x \over 2}<math>
or:
- <math> {x \over 2} < N(x) \le 2^i\sqrt{x} <math>
But this is impossible for all x larger than (or equal to) <math> 2^{2i+2} <math>.
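The counting estimate N(x) ≤ 2^i √x at the heart of this proof can be verified by brute force for small parameters. The sketch below is our own illustration (the helpers `smallest_primes` and `N` are not part of the proof): it fixes i = 3, so the first i primes are 2, 3 and 5, and counts N(x) directly.

```python
# Brute-force check of the bound N(x) <= 2**i * sqrt(x) for small i and x.

def smallest_primes(i):
    """Return the first i primes by trial division (adequate for small i)."""
    primes = []
    candidate = 2
    while len(primes) < i:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def N(x, allowed_primes):
    """Count the n <= x divisible by no prime outside allowed_primes."""
    count = 0
    for n in range(1, x + 1):
        m = n
        for p in allowed_primes:
            while m % p == 0:
                m //= p
        if m == 1:            # n had no other prime factors
            count += 1
    return count

i = 3
allowed = smallest_primes(i)          # [2, 3, 5]
for x in (100, 1000, 10000):
    print(f"x = {x:6d}   N(x) = {N(x, allowed):5d}   2^i sqrt(x) = {2**i * x**0.5:8.1f}")
```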
Third proof
Here is another proof that actually gives an estimate for the sum; in particular, it shows that the sum grows at least as fast as ln ln n. The proof is an adaptation of the product expansion idea of Euler. As before, a sum or product taken over p represents a sum or product taken over a specified set of primes.
The proof rests upon the following facts:
- Every positive integer n can be expressed as the product of a square-free integer and a square. This gives the inequality
- <math> \sum_{i=1}^n{\frac{1}{i}} \le \prod_{p \le n}{\left(1+\frac{1}{p}\right)}\sum_{k=1}^n{\frac{1}{k^2}} <math>
The product supplies the possible square-free parts and the sum the possible square parts, so every 1/i with i ≤ n appears when the right-hand side is expanded. (See fundamental theorem of arithmetic; a small numerical sketch of this decomposition appears after this list.)
- The inequality
- <math> \ln n < \sum_{i=1}^n{\frac{1}{i}} <math>
which can be obtained by considering approximating rectangles in the integral definition of ln n. (See natural logarithm.)
- The inequality 1 + x < exp(x), which holds for all x > 0. (See exponential.)
- The identity
- <math> \sum_{k=1}^\infty{\frac{1}{k^2}} = \frac{\pi^2}{6} <math>
Actually, the exact sum is not necessary; we just need to know that the sum converges, and this can be shown using the p-test for series. (See series.)
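Before these facts are combined, here is the small numerical sketch of the first fact promised above. It is our own illustration (the helpers `squarefree_times_square` and `primes_up_to` are not part of the proof): every positive integer factors as a square-free part times a square, and the resulting inequality can be checked directly for moderate n.

```python
# Decompose n = k * m**2 with k square-free, then check the inequality
#   sum_{i<=n} 1/i  <=  prod_{p<=n} (1 + 1/p) * sum_{k<=n} 1/k**2.

def squarefree_times_square(n):
    """Write n = k * m**2 with k square-free; return (k, m)."""
    m, d = 1, 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
            m *= d
        d += 1
    return n, m

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(squarefree_times_square(720))   # 720 = 5 * 12**2, so this prints (5, 12)

for n in (10, 100, 1000):
    harmonic = sum(1.0 / i for i in range(1, n + 1))
    product = 1.0
    for p in primes_up_to(n):
        product *= 1.0 + 1.0 / p
    squares = sum(1.0 / k**2 for k in range(1, n + 1))
    print(f"n = {n:5d}   H(n) = {harmonic:7.4f}   right-hand side = {product * squares:8.4f}")
```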
Combining all these facts, we see that
- <math> \ln n < \sum_{i=1}^n{\frac{1}{i}} \le \prod_{p \le n}{\left(1+\frac{1}{p}\right)}\sum_{k=1}^n{\frac{1}{k^2}} < \frac{\pi^2}{6}\prod_{p \le n}{\exp\left(\frac{1}{p}\right)} = \frac{\pi^2}{6}\exp\left(\sum_{p \le n}{\frac{1}{p}}\right) <math>
Dividing through by π²/6 and taking the natural log of both sides gives
- <math> \ln \ln n - \ln\left(\frac{\pi^2}{6}\right) < \sum_{p \le n}{\frac{1}{p}} <math>
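As a concrete check (the numbers below are rounded and are our own illustration, not part of the proof): there are 25 primes up to 100, and the sum of their reciprocals is about 1.803, comfortably above the lower bound just derived,
- <math> \ln \ln 100 - \ln\left(\frac{\pi^2}{6}\right) \approx 1.5272 - 0.4977 = 1.0295 <math>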
In fact it turns out that
- <math>\lim_{n \rightarrow \infty } \left(
\sum_{p \leq n} \frac{1}{p} - \ln(\ln(n)) \right)= M<math>
where M is the Meissel-Mertens constant, approximately 0.2615 (somewhat analogous to the much more famous Euler-Mascheroni constant).
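The slow drift of this difference toward M ≈ 0.2615 can be observed numerically. The sketch below is our own illustration, reusing the same kind of simple sieve as in the earlier sketches; the convergence is quite slow, so the agreement at computationally easy values of n is only rough.

```python
# Watch sum_{p<=n} 1/p - ln(ln(n)) drift toward the Meissel-Mertens
# constant M ~ 0.2615 as n grows.
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for n in (10**3, 10**4, 10**5, 10**6):
    diff = sum(1.0 / p for p in primes_up_to(n)) - math.log(math.log(n))
    print(f"n = {n:8d}   sum of 1/p - ln ln n = {diff:.5f}")
```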
External link
- Chris K. Caldwell: "There are infinitely many primes, but, how big of an infinity?", http://www.utm.edu/research/primes/infinity.shtml