Fundamental theorem of calculus
The fundamental theorem of calculus is the statement that the two central operations of calculus, differentiation and integration, are inverses of each other. This means that if a continuous function is first integrated and then differentiated, the original function is retrieved. This theorem is of such central importance in calculus that it deserves to be called the fundamental theorem for the entire field of study. An important consequence of this, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated. In his 2003 book (page 394), James Stewart credits the idea that led to the fundamental theorem to the English mathematician Isaac Barrow.
Intuition
Intuitively, the theorem says that the infinitesimal changes in a quantity over time (or over some other variable) add up to the net change in the quantity.
To get a feeling for the statement, we will start with an example. Suppose a particle travels in a straight line with its position given by x(t) where t is time. The derivative of this function is equal to the infinitesimal change in x per infinitesimal change in time (of course, the derivative itself is dependent on time). Let us define this change in distance per time as the speed v of the particle. In Leibniz's notation:
- <math>\frac{dx}{dt} = v(t) <math>
Rearranging that equation, it is clear that:
- <math>dx = v(t)\,dt <math>
By the logic above, a change in x, call it <math>\Delta x<math>, is the sum of the infinitesimal changes dx. It is also equal to the sum of the products of the derivative and the infinitesimal changes in time. This infinite summation is integration; hence, the integration operation allows the recovery of the original function from its derivative. This operation also works in reverse: differentiating the result of our integral recovers the speed function.
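As a rough numerical illustration of this intuition, the short sketch below assumes the example position x(t) = t^3 (so that v(t) = 3t^2); both are chosen only for illustration. Summing the products v(t) dt over many small time steps recovers the net change in position between t = 0 and t = 1:

```python
# Numerical sketch: summing the small changes v(t) * dt recovers the net change in x.
# x(t) = t**3 is an arbitrary example position; v(t) = 3*t**2 is its derivative.

def v(t):
    return 3 * t ** 2            # speed of the particle at time t

n = 100_000                      # number of small time steps covering [0, 1]
dt = 1.0 / n

delta_x = sum(v(i * dt) * dt for i in range(n))   # sum of the products v(t) dt

print(delta_x)                   # ~ 1.0, the net change x(1) - x(0) = 1 - 0
```

Making the steps smaller brings the sum ever closer to the exact net change, which is the idea the integral makes precise.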
Formal statements
Stated formally, the theorem says: [1] (http://mathworld.wolfram.com/FundamentalTheoremsofCalculus.html)
I.
- Let f be a continuous real-valued function defined on a closed interval [a, b]. If F is the function defined for x in [a, b] by
- <math>F(x) = \int_a^x f(t)\, dt<math>
- then
- <math>F'(x) = f(x)\,<math>
- for every x in [a, b].
II.
- Let f be a continuous real-valued function defined on a closed interval [a, b]. If F is a function such that
- <math>f(x) = F'(x)\,<math> for all x in [a, b]
- then
- <math>\int_a^b f(x) dx = F(b) - F(a)<math>.
Corollary
Let f be a real-valued function defined on a closed interval [a, b]. If F is a function such that
- <math>f(x) = F'(x)\,<math> for all x in [a, b]
then
- <math>F(x) = \int_a^x f(t)\, dt + F(a)<math>
and
- <math>f(x) = \frac{d}{dx} \int_a^x f(t)\, dt<math>.
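Both parts (and the corollary) can be checked numerically. The sketch below assumes the example f(x) = cos x on [0, 2], whose antiderivative is sin x; the function and interval are chosen only for illustration, and the integral is approximated by a midpoint Riemann sum:

```python
import math

f = math.cos                     # example integrand; sin is an antiderivative
a, b = 0.0, 2.0

def integral(lo, hi, n=100_000):
    """Midpoint Riemann sum approximating the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# Part I: F(x) = integral of f from a to x has derivative f(x).
def F(x):
    return integral(a, x)

x, dx = 1.3, 1e-4
print((F(x + dx) - F(x - dx)) / (2 * dx), f(x))    # both ~ cos(1.3)

# Part II: the integral of f from a to b equals F(b) - F(a) for an antiderivative F.
print(integral(a, b), math.sin(b) - math.sin(a))   # both ~ sin(2) - sin(0)
```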
Proof
Part I
It is given that
- <math>F(x) = \int_{a}^{x} f(t) dt<math>
Let there be two numbers x1 and x1 + Δx in [a, b]. So we have
- <math>F(x_1) = \int_{a}^{x_1} f(t) dt<math>
and
- <math>F(x_1 + \Delta x) = \int_{a}^{x_1 + \Delta x} f(t) dt<math>.
Subtracting the two equations gives
- <math>F(x_1 + \Delta x) - F(x_1) = \int_{a}^{x_1 + \Delta x} f(t) dt - \int_{a}^{x_1} f(t) dt \qquad (1)<math>.
It can be shown that
- <math>\int_{a}^{x_1} f(t) dt + \int_{x_1}^{x_1 + \Delta x} f(t) dt = \int_{a}^{x_1 + \Delta x} f(t) dt <math>.
- (The sum of the areas of two adjacent regions is equal to the area of both regions combined.)
Manipulating this equation gives
- <math>\int_{a}^{x_1 + \Delta x} f(t) dt - \int_{a}^{x_1} f(t) dt = \int_{x_1}^{x_1 + \Delta x} f(t) dt <math>.
Substituting the above into (1) results in
- <math>F(x_1 + \Delta x) - F(x_1) = \int_{x_1}^{x_1 + \Delta x} f(t) dt \qquad (2)<math>.
According to the mean value theorem for integration, there exists a c in [x1, x1 + Δx] such that
- <math>\int_{x_1}^{x_1 + \Delta x} f(t) dt = f(c) \Delta x <math>.
Substituting the above into (2) we get
- <math>F(x_1 + \Delta x) - F(x_1) = f(c) \Delta x \,<math>.
Dividing both sides by Δx gives
- <math>\frac{F(x_1 + \Delta x) - F(x_1)}{\Delta x} = f(c) <math>.
- Notice that the expression on the left side of the equation is Newton's difference quotient for F at x1.
Take the limit as Δx → 0 on both sides of the equation.
- <math>\lim_{\Delta x \to 0} \frac{F(x_1 + \Delta x) - F(x_1)}{\Delta x} = \lim_{\Delta x \to 0} f(c) <math>
The expression on the left side of the equation is the definition of the derivative of F at x1.
- <math>F'(x_1) = \lim_{\Delta x \to 0} f(c) \qquad (3) <math>.
To find the other limit, we will use the squeeze theorem. The number c is in the interval [x1, x1 + Δx], so x1 ≤ c ≤ x1 + Δx.
Also, <math>\lim_{\Delta x \to 0} x_1 = x_1<math> and <math>\lim_{\Delta x \to 0} (x_1 + \Delta x) = x_1<math>.
Therefore, according to the squeeze theorem,
- <math>\lim_{\Delta x \to 0} c = x_1<math>.
Substituting into (3), we get
- <math>F'(x_1) = \lim_{c \to x_1} f(c)<math>.
The function f is continuous at x1, so the limit can be taken inside the function. Therefore, we get
- <math>F'(x_1) = f(x_1) \,<math>
which completes the proof.
(Leithold et al., 1996)
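The key step of this proof, the difference quotient of F approaching f(x1), can also be observed numerically. The sketch below assumes the example integrand f(t) = 1/(1 + t^2) with a = 0 and x1 = 1, chosen only for illustration, and approximates F by a midpoint Riemann sum:

```python
import math

def f(t):
    return 1.0 / (1.0 + t * t)       # an example continuous integrand

a, x1 = 0.0, 1.0

def F(x, n=200_000):
    """Midpoint Riemann sum for the integral of f from a to x."""
    h = (x - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Newton's difference quotient for F at x1, for shrinking increments dx.
for dx in (0.1, 0.01, 0.001):
    print(dx, (F(x1 + dx) - F(x1)) / dx)

print("f(x1) =", f(x1))              # the quotients approach f(1) = 0.5
```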
Part II
This is a limit proof by Riemann sums.
Let f be continuous on the interval [a, b], and let F be an antiderivative of f. Begin with the quantity
- <math>F(b) - F(a)\,<math>.
Let there be numbers x0, x1, ..., xn such that <math>a = x_0 < x_1 < x_2 < \ldots < x_{n-1} < x_n = b<math>. It follows that
- <math>F(b) - F(a) = F(x_n) - F(x_0) \,<math>.
Now, we add each F(xi) along with its additive inverse, so that the resulting quantity is equal:
- <math>\begin{matrix} F(b) - F(a) & = & F(x_n)\,+\,[-F(x_{n-1})\,+\,F(x_{n-1})]\,+\,\ldots\,+\,[-F(x_1)\,+\,F(x_1)]\,-\,F(x_0) \, \\
& = & [F(x_n)\,-\,F(x_{n-1})]\,+\,[F(x_{n-1})\,-\,F(x_{n-2})]\,+\,\ldots\,+\,[F(x_1)\,-\,F(x_0)] \, \end{matrix}<math>
The above quantity can be written as the following sum:
- <math>F(b) - F(a) = \sum_{i=1}^n [F(x_i) - F(x_{i-1})] \qquad (1)<math>
Here we employ the Mean Value Theorem. In brief, it is as follows:
Let f be continuous on the closed interval [a, b] and differentiable on the open interval (a, b). Then there exists some c in (a, b) such that
- <math>f'(c) = \frac{f(b) - f(a)}{b - a}<math>.
It follows that
- <math>f'(c)(b - a) = f(b) - f(a) \,<math>.
The function F is differentiable on the interval [a, b]; therefore, it is also differentiable and continuous on each subinterval [xi-1, xi]. According to the Mean Value Theorem (above),
- <math>F(x_i) - F(x_{i-1}) = F'(c_i)(x_i - x_{i-1}) \,<math>.
Substituting the above into (1), we get
- <math>F(b) - F(a) = \sum_{i=1}^n [F'(c_i)(x_i - x_{i-1})]<math>.
The assumption implies <math>F'(c_i) = f(c_i)<math>. Also, <math>x_i - x_{i-1}<math> can be written as <math>\Delta x_i<math>, the width of the i-th subinterval of the partition.
- <math>F(b) - F(a) = \sum_{i=1}^n [f(c_i)(\Delta x_i)] \qquad (2)<math>
Notice that each term describes the area of a rectangle (width times height), and that we are adding these areas together. By virtue of the Mean Value Theorem, each rectangle approximates the section of the curve it is drawn over. Also notice that <math>\Delta x_i<math> need not be the same for every value of <math>i<math>; in other words, the widths of the rectangles can differ. In effect, we are approximating the region under the curve with <math>n<math> rectangles. As the subintervals get smaller and n increases, so that more rectangles cover the region, we get closer and closer to the actual area under the curve.
By taking the limit of the expression as the norm of the partition approaches zero, we arrive at the Riemann integral. That is, we take the limit as the largest subinterval of the partition shrinks to zero, so that all the other subintervals are smaller still and the number of subintervals approaches infinity.
So, we take the limit on both sides of (2). This gives us
- <math>\lim_{\| \Delta \| \to 0} [F(b) - F(a)] = \lim_{\| \Delta \| \to 0} \sum_{i=1}^n [f(c_i)(\Delta x_i)]<math>
The expressions F(b) and F(a) are not dependent on ||Δ||, so the limit on the left side remains F(b) - F(a).
- <math>F(b) - F(a) = \lim_{\| \Delta \| \to 0} \sum_{i=1}^n [f(c_i)(\Delta x_i)]<math>
The expression on the right side of the equation is the definition of the Riemann integral of f from a to b. Therefore, we obtain
- <math>F(b) - F(a) = \int_{a}^{b} f(x)\,dx<math>
which completes the proof.
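The telescoping argument can be made concrete with a small sketch. It assumes the example F(x) = e^x on [0, 2] (so f = F' = e^x), chosen because the mean value theorem point in each subinterval can be computed explicitly; with those points, the sum of the terms f(ci) Δxi equals F(b) - F(a) exactly, even for a coarse, uneven partition:

```python
import math

F = math.exp                         # example antiderivative
f = math.exp                         # exp is its own derivative

xs = [0.0, 0.4, 1.1, 1.5, 2.0]       # a coarse, uneven partition of [0, 2]

total = 0.0
for x0, x1 in zip(xs, xs[1:]):
    # Mean value theorem point c_i with f(c_i) = (F(x1) - F(x0)) / (x1 - x0);
    # for F = exp this is the logarithm of that slope, and it lies in (x0, x1).
    c = math.log((F(x1) - F(x0)) / (x1 - x0))
    total += f(c) * (x1 - x0)        # the rectangle f(c_i) * delta_x_i

print(total, F(2.0) - F(0.0))        # both equal e^2 - 1, up to rounding
```

With arbitrary sample points ci the sum would only approximate F(b) - F(a), which is why the proof passes to the limit as the norm of the partition goes to zero.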
Examples
As an example, suppose you need to calculate
- <math>\int_2^5 x^2\, dx <math>
Here, <math>f(x) = x^2<math> and we can use <math>F(x) = (1/3) x^3<math> as an antiderivative. Therefore:
- <math>\int_2^5 x^2\, dx = F(5) - F(2) = {125 \over 3} - {8 \over 3} = {117 \over 3} = 39.<math>
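The same value can be cross-checked numerically; the sketch below approximates the integral by a midpoint Riemann sum (an illustration only, not needed for the computation above):

```python
# Midpoint Riemann sum approximating the integral of x^2 over [2, 5].
n = 100_000
a, b = 2.0, 5.0
h = (b - a) / n

approx = sum((a + (i + 0.5) * h) ** 2 for i in range(n)) * h
print(approx)    # ~ 39.0, matching F(5) - F(2) = 117/3
```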
Generalizations
We don't need to assume continuity of f on the whole interval. Part I of the theorem then says: if f is any Lebesgue integrable function on <math>[a, b]<math> and <math>x_0<math> is a number in <math>[a, b]<math> such that <math>f<math> is continuous at <math>x_0<math>, then
- <math>F(x) = \int_a^x f(t)\, dt<math>
is differentiable for <math>x = x_0<math> with <math>F'(x_0) = f(x_0)<math>. We can relax the conditions on f still further and suppose that it is merely locally integrable. In that case, we can conclude that the function F is differentiable almost everywhere and F'(x)=f(x) almost everywhere. This is sometimes known as Lebesgue's differentiation theorem.
Part II of the theorem is true for any Lebesgue integrable function f which has an antiderivative F (not all integrable functions do, though).
The version of Taylor's theorem which expresses the error term as an integral can be seen as a generalization of the Fundamental Theorem.
There is a version of the theorem for complex functions: suppose U is an open set in C and f: U -> C is a function which has a holomorphic antiderivative F on U. Then for every curve γ : [a, b] -> U, the curve integral can be computed as
- <math>\int_{\gamma} f(z) \,dz = F(\gamma(b)) - F(\gamma(a)).<math>
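As a numerical sketch of this statement, assume the example f(z) = z^2 with holomorphic antiderivative F(z) = z^3/3, and the quarter-circle curve γ(t) = e^{it} for t in [0, π/2]; all of these are chosen only for illustration. Approximating the curve integral by small steps along γ matches F(γ(b)) - F(γ(a)):

```python
import cmath, math

def f(z):
    return z * z                     # example holomorphic function

def F(z):
    return z ** 3 / 3.0              # an antiderivative of f

def gamma(t):
    return cmath.exp(1j * t)         # quarter circle from 1 to i

a, b = 0.0, math.pi / 2

# Approximate the curve integral as a sum of f(gamma(t)) * (small change in gamma).
n = 100_000
h = (b - a) / n
total = 0 + 0j
for i in range(n):
    t = a + i * h
    total += f(gamma(t)) * (gamma(t + h) - gamma(t))

print(total)                         # ~ F(i) - F(1)
print(F(gamma(b)) - F(gamma(a)))     # = -1/3 - i/3
```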
The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds.
The most powerful statement in this direction is Stokes' theorem.
References
- Stewart, J. (2003). "Fundamental Theorem of Calculus." In Calculus: Early Transcendentals. Belmont, California: Thomson/Brooks/Cole.
- Larson, R., Edwards, B. H., & Heyd, D. E. (2002). Calculus of a Single Variable (7th ed.). Boston: Houghton Mifflin Company.
- Leithold, L. (1996). The Calculus 7 of a Single Variable (6th ed.). New York: HarperCollins College Publishers.