Projective transformation
A projective transformation is a transformation used in projective geometry: it is the composition of a pair of perspective projections. It describes what happens to the perceived positions of observed objects when the point of view of the observer changes. Projective transformations do not preserve sizes or angles but do preserve incidence and cross-ratio: two properties which are important in projective geometry. A projective transformation can also be called a projectivity.
Projective transformations can be defined on the (real) one-dimensional projective line RP1, the two-dimensional projective plane RP2, or three-dimensional projective 3-space RP3.
Transformations on the projective line
Let X be a point on the x-axis. A projective transformation can be defined geometrically for this line by picking a pair of points P and Q and a line m, all lying in the same x-y plane that contains the x-axis on which the transformation is performed.
Draw line l through points P and X. Line l crosses line m at point R. Then draw line n through points Q and R: line n will cross the x-axis at point T. Point T is the transform of point X [Paiva].
Points P and Q represent two different observers, or points of view. Point R is the position of some object they are observing. Line m is the objective world which they are observing, and the x-axis is the subjective perception of m.
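The construction can be followed numerically. The sketch below (the function name and sample values are illustrative, not from the article) computes T for given observers P and Q, an objective line y = (slope)x + (intercept), and a point X = (x0, 0):

```python
# Minimal sketch of the synthetic construction: find R on the objective line m,
# then intersect line QR with the x-axis to get T.  Names and data are illustrative.

def transform_on_line(x0, P, Q, slope, intercept):
    Px, Py = P
    Qx, Qy = Q
    # R: intersection of line l (through P and X) with line m: y = slope*x + intercept.
    rx = (intercept * (Px - x0) + Py * x0) / (Py - slope * (Px - x0))
    ry = slope * rx + intercept
    # T: intersection of line n (through Q and R) with the x-axis.
    return Qx - Qy * (rx - Qx) / (ry - Qy)

# Example: P = (0, 1), Q = (2, 1), objective line y = 2, X = (1, 0)  ->  T = (5, 0).
print(transform_on_line(1.0, (0.0, 1.0), (2.0, 1.0), 0.0, 2.0))  # 5.0
```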
Analysis
The above is a synthetic description of a one-dimensional projective transformation. It is now desired to convert it to an analytical (Cartesian) description.
Let point X have coordinates (x0,0). Let point P have coordinates <math> (P_x,P_y) <math>. Let point Q have coordinates <math> (Q_x,Q_y) <math>. Let line m have slope m and y-intercept b, so that it consists of the points (x, m x + b); the symbol m is thus used both for the line and for its slope.
The slope of line l is
- <math> P_y \over P_x - x_0 <math>,
so an arbitrary point (x,y) on line l is given by the equation
- <math> {y \over x - x_0} = {P_y \over P_x - x_0} <math>,
- <math> y = {P_y \over P_x - x_0} (x - x_0). \qquad \qquad (1) <math>
On the other hand, any point (x,y) on line m is described by
- <math> y = m x + b. \qquad \qquad (2) <math>
The intersection of lines l and m is point R, and it is obtained by combining equations (1) and (2):
- <math> m x + b = {P_y x \over P_x - x_0} - {P_y x_0 \over P_x - x_0}. <math>
Joining the x terms yields
- <math> \left( {P_y \over P_x - x_0} - m \right) x
= b + {P_y x_0 \over P_x - x_0} <math>
and solving for x we obtain
- <math> x_1 = {b (P_x - x_0) + P_y x_0 \over P_y - m (P_x - x_0)}. <math>
x1 is the abscissa of R. The ordinate of R is
- <math> y_1 = m \left[ {b (P_x - x_0) + P_y x_0 \over P_y - m (P_x - x_0)} \right] + b. <math>
Now, knowing both Q and R, the slope of line n is
- <math> {y_1 - Q_y \over x_1 - Q_x} .<math>
We want to find the intersection of line n and the x-axis, so let
- <math> (Q_x, Q_y) + \lambda (x_1 - Q_x, y_1 - Q_y) = (x,0) \qquad \qquad (3) <math>
The value of λ must be adjusted so that both sides of vector equation (3) are equal. Equation (3) is actually two equations, one for abscissas and one for ordinates. The one for ordinates is
- <math> Q_y + \lambda (y_1 - Q_y) = 0 <math>
Solve for lambda,
- <math> \lambda = {-Q_y \over y_1 - Q_y} \qquad \qquad (4) <math>
The equation for abscissas is
- <math> x = Q_x + \lambda (x_1 - Q_x) <math>
which together with equation (4) yields
- <math> x = Q_x - Q_y \left( {x_1 - Q_x \over y_1 - Q_y} \right) \qquad \qquad (5) <math>
which is the abscissa of T.
Substitute the values of x1 and y1 into equation (5),
- <math> x = Q_x - Q_y \left[ { {b (P_x - x_0) + P_y x_0 \over P_y - m (P_x - x_0)} - Q_x \over {m b (P_x - x_0) + m P_y x_0 \over P_y - m (P_x - x_0)} + b - Q_y} \right]. <math>
Clear the inner fractions in both the numerator and the denominator:
- <math> x = Q_x - Q_y \left[ {b (P_x - x_0) + P_y x_0 - Q_x P_y + m Q_x (P_x - x_0) \over m b (P_x - x_0) + m P_y x_0 + b P_y - m b (P_x - x_0) - Q_y P_y + m Q_y (P_x - x_0) } \right]. <math>
Simplify, and write the resulting abscissa of T as a function t of the original abscissa x0:
- <math> t(x_0) = Q_x - Q_y \left[ { (P_x - x_0) (b + m Q_x) + P_y (x_0 - Q_x) \over (P_x - x_0) m Q_y + P_y (m x_0 + b - Q_y) } \right]. <math>
This function t is the projective transformation.
Transformation t can be simplified further. First, combine its two terms into a single fraction and rename x0 as x:
- <math> t(x) = { (m Q_x P_y - Q_y P_y + b Q_y) x + (b Q_x P_y - b Q_y P_x) \over m (P_y - Q_y) x + (m P_x Q_y + P_y (b - Q_y)) } \qquad \qquad (6) <math>
Then, define the coefficients α, β, γ and δ to be the following
- <math> \alpha = m Q_x P_y - Q_y P_y + b Q_y, <math>
- <math> \beta = b Q_x P_y - b Q_y P_x, <math>
- <math> \gamma = m (P_y - Q_y), <math>
- <math> \delta = m P_x Q_y + P_y (b - Q_y). <math>
Substitute these coefficients into equation (6), in order to produce
- <math> t(x) = { \alpha x + \beta \over \gamma x + \delta } <math>
This is a Möbius transformation, also called a linear fractional transformation (it has a linear numerator and a linear denominator) or a bilinear transformation (the relation between x and t(x) can be written <math> \gamma x \, t(x) + \delta \, t(x) - \alpha x - \beta = 0 <math>, which is linear in each of the two variables separately).
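As a check, the sketch below (using the same illustrative data as the earlier sketch: P = (0, 1), Q = (2, 1), and the objective line y = 2) evaluates the closed form of equation (6) at x = 1; the geometric construction gave T = (5, 0), and the closed form reproduces that value.

```python
# Coefficients of equation (6) for given observers and objective line,
# followed by an evaluation of t(x) = (alpha*x + beta)/(gamma*x + delta).

def mobius_coefficients(P, Q, slope, intercept):
    Px, Py = P
    Qx, Qy = Q
    m, b = slope, intercept
    alpha = m * Qx * Py - Qy * Py + b * Qy
    beta = b * Qx * Py - b * Qy * Px
    gamma = m * (Py - Qy)
    delta = m * Px * Qy + Py * (b - Qy)
    return alpha, beta, gamma, delta

a, be, g, d = mobius_coefficients((0.0, 1.0), (2.0, 1.0), 0.0, 2.0)
x = 1.0
print((a * x + be) / (g * x + d))   # 5.0, matching the geometric construction
```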
Inverse transformation
It is clear from the synthetic definition that the inverse transformation is obtained by exchanging points P and Q. This can also be shown analytically. If P ↔ Q, then α → α′, β → β′, γ → γ′, and δ → δ′, where
- <math> \alpha' = m P_x Q_y - P_y Q_y + b P_y = \delta, <math>
- <math> \beta' = b P_x Q_y - b P_y Q_x = - \beta, <math>
- <math> \gamma' = m (Q_y - P_y) = - \gamma, <math>
- <math> \delta' = m Q_x P_y + b Q_y - Q_y P_y = \alpha. <math>
Therefore if the forwards transformation is
- <math> t(x) = {\alpha x + \beta \over \gamma x + \delta} <math>
then the transformation t′ obtained by exchanging P and Q (P ↔ Q) is:
- <math> t'(x) = {\delta x - \beta \over - \gamma x + \alpha }. <math>
Then
- <math> t'(t(x)) = {\delta \left( {\alpha x + \beta \over \gamma x + \delta} \right) - \beta \over - \gamma \left( {\alpha x + \beta \over \gamma x + \delta} \right) + \alpha} <math>.
Clear the fractions in both the numerator and the denominator of the right side of this last equation:
- <math> t'(t(x)) = {\alpha \delta x + \beta \delta - \beta \gamma x - \beta \delta \over - \alpha \gamma x - \beta \gamma + \alpha \gamma x + \alpha \delta} <math>
- <math> = {\alpha \delta x - \beta \gamma x \over \alpha \delta - \beta \gamma} = x <math>.
Therefore t′(x) = t−1(x): the inverse projective transformation is obtained by exchanging observers P and Q, or by letting α ↔ δ, β → −β, and γ → −γ. This is analogous to the procedure for obtaining the inverse of a 2 × 2 matrix:
- <math> \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} \begin{bmatrix} \delta & - \beta \\ - \gamma & \alpha \end{bmatrix} = \Delta \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} <math>
where Δ = α δ − β γ is the determinant.
Identity transformation
Also analogous with matrices is the identity transformation, which is obtained by letting α = 1, β = 0, γ = 0, and δ = 1, so that
- <math> t_I(x) = x. <math>
Composition of transformations
It remains to show that there is closure in the composition of transformations. One transformation operating on another transformation produces a third transformation. Let the first transformation be t1 and the second one be t2:
- <math> t_1(x) = {\alpha_1 x + \beta_1 \over \gamma_1 x + \delta_1 }, <math>
- <math> t_2(x) = {\alpha_2 x + \beta_2 \over \gamma_2 x + \delta_2 }. <math>
The composition of these two transformations is
- <math> t_2(t_1(x)) = {\alpha_2 \left( {\alpha_1 x + \beta_1 \over \gamma_1 x + \delta_1} \right) + \beta_2 \over \gamma_2 \left( {\alpha_1 x + \beta_1 \over \gamma_1 x + \delta_1 } \right) + \delta_2 } <math>
- <math> = {\alpha_2 \alpha_1 x + \alpha_2 \beta_1 + \beta_2 \gamma_1 x + \beta_2 \delta_1 \over \gamma_2 \alpha_1 x + \gamma_2 \beta_1 + \delta_2 \gamma_1 x + \delta_2 \delta_1 } <math>
- <math> = {(\alpha_2 \alpha_1 + \beta_2 \gamma_1) x + (\alpha_2 \beta_1 + \beta_2 \delta_1) \over (\gamma_2 \alpha_1 + \delta_2 \gamma_1) x + (\gamma_2 \beta_1 + \delta_2 \delta_1)}. <math>
Define the coefficients α3, β3, γ3 and δ3 to be equal to
- <math> \alpha_3 = \alpha_2 \alpha_1 + \beta_2 \gamma_1, <math>
- <math> \beta_3 = \alpha_2 \beta_1 + \beta_2 \delta_1, <math>
- <math> \gamma_3 = \gamma_2 \alpha_1 + \delta_2 \gamma_1, <math>
- <math> \delta_3 = \gamma_2 \beta_1 + \delta_2 \delta_1. <math>
Substitute these coefficients into <math> t_2(t_1(x)) <math> to obtain
- <math> t_2(t_1(x)) = { \alpha_3 x + \beta_3 \over \gamma_3 x + \delta_3}. <math>
Projections operate in a way analogous to matrices. In fact, the composition of transformations can be obtained by multiplying matrices:
- <math> \begin{bmatrix} \alpha_2 & \beta_2 \\ \gamma_2 & \delta_2 \end{bmatrix} \begin{bmatrix} \alpha_1 & \beta_1 \\ \gamma_1 & \delta_1 \end{bmatrix} = \begin{bmatrix} \alpha_2 \alpha_1 + \beta_2 \gamma_1 & \alpha_2 \beta_1 + \beta_2 \delta_1 \\ \gamma_2 \alpha_1 + \delta_2 \gamma_1 & \gamma_2 \beta_1 + \delta_2 \delta_1 \end{bmatrix} = \begin{bmatrix} \alpha_3 & \beta_3 \\ \gamma_3 & \delta_3 \end{bmatrix}. <math>
Since matrices multiply associatively, it follows that composition of projections is also associative.
Projections thus have closure under composition, associativity of composition, an identity, and inverses, so they form a group.
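A short numerical sketch of these group properties (the coefficient values are arbitrary illustrative choices): composing two transformations agrees with multiplying their 2 × 2 coefficient matrices, and the swap α ↔ δ, β → −β, γ → −γ gives the inverse.

```python
import numpy as np

def mobius(coeffs, x):
    a, b, c, d = coeffs          # alpha, beta, gamma, delta
    return (a * x + b) / (c * x + d)

t1 = (2.0, 1.0, 1.0, 3.0)
t2 = (1.0, -1.0, 2.0, 5.0)
M3 = np.array(t2).reshape(2, 2) @ np.array(t1).reshape(2, 2)   # matrix of t2 o t1

x = 0.7
print(mobius(t2, mobius(t1, x)))       # compose directly
print(mobius(tuple(M3.ravel()), x))    # same value via the matrix product

t1_inv = (t1[3], -t1[1], -t1[2], t1[0])   # delta, -beta, -gamma, alpha
print(mobius(t1_inv, mobius(t1, x)))   # recovers x
```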
The cross-ratio defined by means of a projection
Let there be a transformation ts such that ts(A) = <math>\infty<math>, ts(B) = 0, ts(C) = 1. Then the value of ts(D) is called the cross-ratio of points A, B, C and D, and is denoted as [A, B, C, D]s:
- <math> [A,B,C,D]_s = t_s(D). <math>
Let
- <math> t_s(x) = {\alpha x + \beta \over \gamma x + \delta}, <math>
then the three conditions for ts(x) are met when
- <math> t_s(A) = {\alpha A + \beta \over \gamma A + \delta} = \infty, \qquad \qquad (7) <math>
- <math> t_s(B) = {\alpha B + \beta \over \gamma B + \delta} = 0, \qquad \qquad (8) <math>
- <math> t_s(C) = {\alpha C + \beta \over \gamma C + \delta} = 1. \qquad \qquad (9) <math>
Equation (7) implies that <math> \gamma A + \delta = 0 <math>, therefore <math> \delta = - \gamma A <math>. Equation (8) implies that <math> \alpha B + \beta = 0 <math>, so that <math> \beta = - \alpha B <math>. Equation (9) becomes
- <math> {\alpha C - \alpha B \over \gamma C - \gamma A} = 1, <math>
which implies
- <math> \gamma = \alpha {C - B \over C - A}. <math>
Therefore
- <math> t_s(D) = {\alpha D - \alpha B \over \alpha \left( {C - B \over C - A} \right) D - \gamma A} = {\alpha (D - B) \over \alpha \left( {C - B \over C - A} \right) D - \alpha \left( {C - B \over C - A} \right) A} <math>
- <math> = {D - B \over C - B} {C - A \over D - A} = {A - C \over A - D} {B - D \over B - C}. \qquad \qquad (10) <math>
In equation (10), it is seen that ts(D) does not depend on the coefficients of the projection ts. It only depends on the positions of the points on the "subjective" projective line. This means that the cross-ratio depends only on the relative distances among four collinear points, and not on the projective transformation which was used to obtain (or define) the cross-ratio. The cross ratio is therefore
- <math> [A,B,C,D] = {A - C \over A - D} {B - D \over B - C}. \qquad \qquad (11) <math>
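The construction above can be sketched numerically (the sample points and the normalization α = 1 are illustrative): build t_s from A, B, C using the coefficients just derived and evaluate it at D; the value agrees with formula (11).

```python
def cross_ratio(a, b, c, d):
    # Equation (11).
    return (a - c) / (a - d) * (b - d) / (b - c)

def t_s(a, b, c, x):
    alpha = 1.0                        # free overall scale
    beta = -alpha * b                  # forces t_s(B) = 0
    gamma = alpha * (c - b) / (c - a)  # forces t_s(C) = 1
    delta = -gamma * a                 # forces t_s(A) = infinity
    return (alpha * x + beta) / (gamma * x + delta)

A, B, C, D = 0.0, 1.0, 3.0, 4.0
print(t_s(A, B, C, D))            # value of t_s at D
print(cross_ratio(A, B, C, D))    # same number, from equation (11)
```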
Conservation of cross-ratio
Transformations on the projective line preserve cross ratio. This will now be proven. Let there be four (collinear) points A, B, C, D. Their cross-ratio is given by equation (11). Let S(x) be a projective transformation:
- <math> S(x) = {\alpha x + \beta \over \gamma x + \delta} <math>
where <math> \alpha \delta \ne \beta \gamma <math>. Then
- <math> [S(A) S(B) S(C) S(D)] = {{\alpha A + \beta \over \gamma A + \delta} - {\alpha C + \beta \over \gamma C + \delta} \over {\alpha A + \beta \over \gamma A + \delta} - {\alpha D + \beta \over \gamma D + \delta}} \cdot {{\alpha B + \beta \over \gamma B + \delta} - {\alpha D + \beta \over \gamma D + \delta} \over {\alpha B + \beta \over \gamma B + \delta} - {\alpha C + \beta \over \gamma C + \delta}} <math>
- <math> = { [(\alpha A + \beta) (\gamma C + \delta) - (\alpha C + \beta) (\gamma A + \delta)] [(\alpha B + \beta) (\gamma D + \delta) - (\alpha D + \beta) (\gamma B + \delta)] \over [(\alpha A + \beta) (\gamma D + \delta) - (\alpha D + \beta) (\gamma A + \delta)] [(\alpha B + \beta) (\gamma C + \delta) - (\alpha C + \beta) (\gamma B + \delta)] } <math>
- <math> = { [\alpha A \delta + \beta \gamma C - \alpha C \delta - \beta \gamma A] [\alpha B \delta + \beta \gamma D - \alpha D \delta - \beta \gamma B] \over [\alpha A \delta + \beta \gamma D - \alpha D \delta - \beta \gamma A] [\alpha B \delta + \beta \gamma C - \alpha C \delta - \beta \gamma B]} <math>
- <math> = { [\alpha \delta (A - C) + \beta \gamma (C - A)] [\alpha \delta (B - D) + \beta \gamma (D - B)] \over [\alpha \delta (A - D) + \beta \gamma (D - A)] [\alpha \delta (B - C) + \beta \gamma (C - B)]} <math>
- <math> = {(\alpha \delta - \beta \gamma) (A - C) (\alpha \delta - \beta \gamma) (B - D) \over (\alpha \delta - \beta \gamma) (A - D) (\alpha \delta - \beta \gamma) (B - C)} <math>
- <math> = {A - C \over A - D} \cdot {B - D \over B - C} <math>
Therefore [S(A) S(B) S(C) S(D)] = [A B C D], Q.E.D.
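A quick numerical illustration of this invariance, with arbitrary sample values:

```python
def cross_ratio(a, b, c, d):
    return (a - c) / (a - d) * (b - d) / (b - c)

def S(x, alpha=2.0, beta=1.0, gamma=1.0, delta=3.0):   # alpha*delta != beta*gamma
    return (alpha * x + beta) / (gamma * x + delta)

pts = [0.0, 1.0, 3.0, 4.0]
print(cross_ratio(*pts))                    # cross-ratio of A, B, C, D
print(cross_ratio(*(S(p) for p in pts)))    # same value after applying S
```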
Transformations on the projective plane
Two-dimensional projective transformations are automorphisms of the projective plane; that is, they map the projective plane onto itself.
Planar transformations can be defined synthetically as follows: point X on a "subjective" plane must be transformed to a point T also on the subjective plane. The transformation uses these elements: a pair of "observation points" P and Q, and an "objective" plane. The subjective and objective planes and the two points all lie in three-dimensional space, and the two planes may intersect in some line.
Draw line l1 through points P and X. Line l1 intersects the objective plane at point R. Draw line l2 through points Q and R. Line l2 intersects the subjective plane at point T. Then T is the projective transform of X.
Analysis
Let the xy-plane be the "subjective" plane and let plane m be the "objective" plane. Let plane m be described by
- <math> z = f(x,y) = m x + n y + b <math>
where the constants m and n are partial slopes and b is the z-intercept.
Let there be a pair of "observation" points P and Q,
- <math> P : (P_x, P_y, P_z), <math>
- <math> Q : (Q_x, Q_y, Q_z). <math>
Let point X lie on the "subjective" plane:
- <math> X : (x,y,0). <math>
Point X must be transformed to a point T,
- <math> T : (T_x, T_y, 0) <math>
also on the "subjective" plane.
The analytical results are a pair of equations, one for abscissa Tx and one for ordinate Ty:
- <math> T_x = {x (-m Q_x P_z - n Q_z P_y + Q_z (P_z - b)) + (n y + b) (Q_z P_x - Q_x P_z) \over (m x + n y) (Q_z - P_z) - (m P_x + n P_y) Q_z + (Q_z - b) P_z}, \qquad \qquad (12) <math>
- <math> T_y = {y (-n Q_y P_z - m Q_z P_x + Q_z (P_z - b)) + (m x + b) (Q_z P_y - Q_y P_z) \over (n y + m x) (Q_z - P_z) - (n P_y + m P_x) Q_z + (Q_z - b) P_z }. \qquad \qquad (13) <math>
The construction involves nine parameters: Px, Py, Pz, Qx, Qy, Qz, m, n, b. (The resulting transformation itself has only eight degrees of freedom, since its coefficients matter only up to a common scale factor.) Notice that equations (12) and (13) have the same denominators, and that Ty can be obtained from Tx by exchanging m with n, and x with y (including subscripts of P and Q).
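Equations (12) and (13) can be compared numerically with the synthetic construction. The sketch below uses arbitrary illustrative values for P, Q, m, n, b:

```python
import numpy as np

P = np.array([1.0, 2.0, 3.0])
Q = np.array([-2.0, 1.0, 4.0])
m, n, b = 0.5, -0.25, 2.0

def construct(x, y):
    X = np.array([x, y, 0.0])
    # R: intersection of the line through X and P with the plane z = m*x + n*y + b.
    lam = (m * x + n * y + b) / (P[2] - m * (P[0] - x) - n * (P[1] - y))
    R = X + lam * (P - X)
    # T: intersection of the line through R and Q with the plane z = 0.
    mu = R[2] / (R[2] - Q[2])
    return (R + mu * (Q - R))[:2]

def formula(x, y):
    Px, Py, Pz = P
    Qx, Qy, Qz = Q
    den = (m * x + n * y) * (Qz - Pz) - (m * Px + n * Py) * Qz + (Qz - b) * Pz
    Tx = (x * (-m * Qx * Pz - n * Qz * Py + Qz * (Pz - b))
          + (n * y + b) * (Qz * Px - Qx * Pz)) / den
    Ty = (y * (-n * Qy * Pz - m * Qz * Px + Qz * (Pz - b))
          + (m * x + b) * (Qz * Py - Qy * Pz)) / den
    return np.array([Tx, Ty])

print(construct(0.3, -1.2))
print(formula(0.3, -1.2))   # agrees with the construction
```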
Trilinear transformations
Let
- <math> \alpha = -m Q_x P_z - n Q_z P_y + Q_z (P_z - b), <math>
- <math> \beta = n (Q_z P_x - Q_x P_z), <math>
- <math> \gamma = b (Q_z P_x - Q_x P_z), <math>
- <math> \delta = m (Q_z - P_z), <math>
- <math> \epsilon = n (Q_z - P_z), <math>
- <math> \zeta = - (m P_x + n P_y) Q_z + (Q_z - b) P_z, <math>
so that
- <math> T_x = {\alpha x + \beta y + \gamma \over \delta x + \epsilon y + \zeta}. \qquad \qquad (14) <math>
Also let
- <math> \eta = m (Q_z P_y - Q_y P_z), <math>
- <math> \theta = -m Q_z P_x - n Q_y P_z + Q_z (P_z - b), <math>
- <math> \kappa = b (Q_z P_y - Q_y P_z), <math>
so that
- <math> T_y = {\eta x + \theta y + \kappa \over \delta x + \epsilon y + \zeta}. \qquad \qquad (15) <math>
Equations (14) and (15) together describe the trilinear transformation.
Composition of trilinear transformations
If a transformation is given by equations (14) and (15), then the transformation is characterized by nine coefficients which can be arranged into a coefficient matrix
- <math> M_T = \begin{bmatrix} \alpha & \beta & \gamma \\ \eta & \theta & \kappa \\ \delta & \epsilon & \zeta \end{bmatrix}. <math>
Given a pair T1 and T2 of planar transformations whose coefficient matrices are <math> M_{T_1} <math> and <math> M_{T_2} <math>, the composition of these transformations is another planar transformation T3,
- <math> T_3 = T_2 \circ T_1 , <math>
such that
- <math> T_3(x,y) = T_2 ( T_1 (x,y) ). <math>
The coefficient matrix of T3 can be obtained by multiplying the coefficient matrices of T2 and T1:
- <math> M_{T_3} = M_{T_2} \, M_{T_1}. <math>
Proof
Given T1 defined by
- <math> T_{1x} = {\alpha_1 x + \beta_1 y + \gamma_1 \over \delta_1 x + \epsilon_1 y + \zeta_1}, <math>
- <math> T_{1y} = {\eta_1 x + \theta_1 y + \kappa_1 \over \delta_1 x + \epsilon_1 y + \zeta_1}, <math>
and given T2 defined by
- <math> T_{2x} = {\alpha_2 x + \beta_2 y + \gamma_2 \over \delta_2 x + \epsilon_2 y + \zeta_2}, <math>
- <math> T_{2y} = {\eta_2 x + \theta_2 y + \kappa_2 \over \delta_2 x + \epsilon_2 y + \zeta_2}, <math>
then T3 can be calculated by substituting T1 into T2,
- <math> T_{3x} = T_{2x} ( T_{1x}, T_{1y} ) = { \alpha_2 \left( {\alpha_1 x + \beta_1 y + \gamma_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \beta_2 \left( {\eta_1 x + \theta_1 y + \kappa_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \gamma_2 \over \delta_2 \left( {\alpha_1 x + \beta_1 y + \gamma_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \epsilon_2 \left( {\eta_1 x + \theta_1 y + \kappa_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \zeta_2}. <math>
Multiply numerator and denominator by the same trinomial,
- <math> T_{3x} = {\alpha_2 (\alpha_1 x + \beta_1 y + \gamma_1) + \beta_2 (\eta_1 x + \theta_1 y + \kappa_1) + \gamma_2 (\delta_1 x + \epsilon_1 y + \zeta_1) \over \delta_2 (\alpha_1 x + \beta_1 y + \gamma_1) + \epsilon_2 (\eta_1 x + \theta_1 y + \kappa_1) + \zeta_2 (\delta_1 x + \epsilon_1 y + \zeta_1)}. <math>
Group the coefficients of x, y, and 1:
- <math> T_{3x} = { x (\alpha_2 \alpha_1 + \beta_2 \eta_1 + \gamma_2 \delta_1) + y (\alpha_2 \beta_1 + \beta_2 \theta_1 + \gamma_2 \epsilon_1) + (\alpha_2 \gamma_1 + \beta_2 \kappa_1 + \gamma_2 \zeta_1) \over x (\delta_2 \alpha_1 + \epsilon_2 \eta_1 + \zeta_2 \delta_1) + y (\delta_2 \beta_1 + \epsilon_2 \theta_1 + \zeta_2 \epsilon_1) + (\delta_2 \gamma_1 + \epsilon_2 \kappa_1 + \zeta_2 \zeta_1)} = {\alpha_3 x + \beta_3 y + \gamma_3 \over \delta_3 x + \epsilon_3 y + \zeta_3}. <math>
These six coefficients of T3 are the same as those obtained through the product
- <math> \begin{bmatrix} \alpha_2 & \beta_2 & \gamma_2 \\
\eta_2 & \theta_2 & \kappa_2 \\ \delta_2 & \epsilon_2 & \zeta_2 \end{bmatrix} \begin{bmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ \eta_1 & \theta_1 & \kappa_1 \\ \delta_1 & \epsilon_1 & \zeta_1 \end{bmatrix} = \begin{bmatrix} \alpha_3 & \beta_3 & \gamma_3 \\ \eta_3 & \theta_3 & \kappa_3 \\ \delta_3 & \epsilon_3 & \zeta_3 \end{bmatrix}. \qquad \qquad (16) <math>
The remaining three coefficients can be verified thus
- <math> T_{3y} = T_{2y} ( T_{1x}, T_{1y} ) = { \eta_2 \left( {\alpha_1 x + \beta_1 y + \gamma_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \theta_2 \left( {\eta_1 x + \theta_1 y + \kappa_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \kappa_2 \over \delta_2 \left( {\alpha_1 x + \beta_1 y + \gamma_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \epsilon_2 \left( {\eta_1 x + \theta_1 y + \kappa_1 \over \delta_1 x + \epsilon_1 y + \zeta_1} \right) + \zeta_2}. <math>
Multiply numerator and denominator by the same trinomial,
- <math> T_{3y} = {\eta_2 (\alpha_1 x + \beta_1 y + \gamma_1) + \theta_2 (\eta_1 x + \theta_1 y + \kappa_1) + \kappa_2 (\delta_1 x + \epsilon_1 y + \zeta_1) \over \delta_2 (\alpha_1 x + \beta_1 y + \gamma_1) + \epsilon_2 (\eta_1 x + \theta_1 y + \kappa_1) + \zeta_2 (\delta_1 x + \epsilon_1 y + \zeta_1)}. <math>
Group the coefficients of x, y, and 1:
- <math> T_{3y} = { x (\eta_2 \alpha_1 + \theta_2 \eta_1 + \kappa_2 \delta_1) + y (\eta_2 \beta_1 + \theta_2 \theta_1 + \kappa_2 \epsilon_1) + (\eta_2 \gamma_1 + \theta_2 \kappa_1 + \kappa_2 \zeta_1) \over x (\delta_2 \alpha_1 + \epsilon_2 \eta_1 + \zeta_2 \delta_1) + y (\delta_2 \beta_1 + \epsilon_2 \theta_1 + \zeta_2 \epsilon_1) + (\delta_2 \gamma_1 + \epsilon_2 \kappa_1 + \zeta_2 \zeta_1)} = {\eta_3 x + \theta_3 y + \kappa_3 \over \delta_3 x + \epsilon_3 y + \zeta_3}. <math>
The three remaining coefficients just obtained are the same as those obtained through equation (16). Q.E.D.
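A compact numerical sketch of equation (16), using two arbitrary illustrative coefficient matrices:

```python
import numpy as np

def apply(M, x, y):
    # Rows of M: (alpha beta gamma), (eta theta kappa), (delta epsilon zeta).
    num_x, num_y, den = M @ np.array([x, y, 1.0])
    return num_x / den, num_y / den

M1 = np.array([[1.0, 2.0, 0.5],
               [0.0, 1.0, -1.0],
               [0.3, 0.1, 1.0]])
M2 = np.array([[2.0, -1.0, 0.0],
               [1.0, 1.0, 0.5],
               [0.2, 0.0, 1.0]])

x, y = 0.4, -0.7
print(apply(M2, *apply(M1, x, y)))     # T2(T1(x, y))
print(apply(M2 @ M1, x, y))            # same point, via the product of the matrices
```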
Planar transformations of lines
The trilinear transformation given by equations (14) and (15) transforms a straight line
- <math> y = m x + b <math>
into another straight line
- <math> T_y = n T_x + c <math>
where n and c are constants given by
- <math> n = {m (\epsilon \kappa - \zeta \theta) + b (\delta \theta - \epsilon \eta) + (\delta \kappa - \zeta \eta) \over m (\epsilon \gamma - \zeta \beta) + b (\delta \beta - \epsilon \alpha) + (\delta \gamma - \zeta \alpha)} <math>
and
- <math> c = {m (\beta \kappa - \gamma \theta) + b (\alpha \theta - \beta \eta) + (\alpha \kappa - \gamma \eta) \over m (\beta \zeta - \gamma \epsilon) + b (\alpha \epsilon - \beta \delta) + (\alpha \zeta - \gamma \delta) }. <math>
Proof
Given y = m x + b, then plugging this into equations (14) and (15) yields
- <math> T_x = {\alpha x + \beta (m x + b) + \gamma \over \delta x + \epsilon (m x + b) + \zeta} = {(\alpha + \beta m) x + (\beta b + \gamma) \over (\delta + \epsilon m) x + (\epsilon b + \zeta)}, <math>
and
- <math> T_y = {(\eta + \theta m) x + (\theta b + \kappa) \over (\delta + \epsilon m) x + (\epsilon b + \zeta) }. <math>
If Ty = n Tx + c and n and c are constants, then
- <math> {\partial T_y \over \partial x} = n {\partial T_x \over \partial x} <math>
so that
- <math> n = {\partial T_y / \partial x \over \partial T_x / \partial x}. <math>
Calculation shows that
- <math> {\partial T_x \over \partial x} = { (\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m) \over [(\delta + \epsilon m) x + (\epsilon b + \zeta)]^2 } <math>
and
- <math> {\partial T_y \over \partial x} = { (\epsilon b + \zeta) (\eta + \theta m) - (\theta b + \kappa) (\delta + \epsilon m) \over [(\delta + \epsilon m) x + (\epsilon b + \zeta)]^2 } <math>
therefore
- <math> n = {\partial T_y / \partial x \over \partial T_x / \partial x} = { (\epsilon b + \zeta) (\eta + \theta m) - (\theta b + \kappa) (\delta + \epsilon m) \over (\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m) } . <math>
We now compute c:
- <math> c = T_y - n T_x <math>
- <math> = {(\eta + \theta m) x + (\theta b + \kappa) - \left[ { (\epsilon b + \zeta) (\eta + \theta m) - (\theta b + \kappa) (\delta + \epsilon m) \over (\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m) } \right] \cdot [ (\alpha + \beta m) x + (\beta b + \gamma) ] \over (\delta + \epsilon m) x + (\epsilon b + \zeta) }. <math>
Add the two fractions in the numerator:
- <math> c = { \left\{ [(\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m)] [(\eta + \theta m) x + (\theta b + \kappa)] - [(\epsilon b + \zeta) (\eta + \theta m) - (\theta b + \kappa) (\delta + \epsilon m)] [(\alpha + \beta m) x + (\beta b + \gamma)] \right\}
\over [(\delta + \epsilon m) x + (\epsilon b + \zeta)] [(\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m)] }. <math>
Distribute binomials in parentheses in the numerator, then cancel out equal and opposite terms:
- <math> c = { - (\beta b + \gamma) (\delta + \epsilon m) (\eta + \theta m) x + (\epsilon b + \zeta) (\alpha + \beta m) (\theta b + \kappa) + (\theta b + \kappa) (\delta + \epsilon m) (\alpha + \beta m) x - (\epsilon b + \zeta) (\eta + \theta m) (\beta b + \gamma) \over [(\delta + \epsilon m) x + (\epsilon b + \zeta)] [(\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m)] }. <math>
Group the numerator into a pair of terms, only one of which contains the variable x. The denominator also contains a factor with x; the objective now is to make these cancel out.
- <math> c = { \left\{ [(\theta b + \kappa) (\alpha + \beta m) - (\beta b + \gamma) (\eta + \theta m)] (\delta + \epsilon m) x + [(\alpha + \beta m)(\theta b + \kappa) - (\eta + \theta m) (\beta b + \gamma)] (\epsilon b + \zeta) \right\} \over [(\delta + \epsilon m) x + (\epsilon b + \zeta)] [(\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m)] }. <math>
Factor the numerator,
- <math> c = {[(\theta b + \kappa) (\alpha + \beta m) - (\beta b + \gamma) (\eta + \theta m)] [(\delta + \epsilon m) x + (\epsilon b + \zeta)] \over [(\epsilon b + \zeta) (\alpha + \beta m) - (\beta b + \gamma) (\delta + \epsilon m)] [(\delta + \epsilon m) x + (\epsilon b + \zeta)] }. <math>
The factors containing x cancel out, therefore
- <math> c = { (\alpha + \beta m) (\theta b + \kappa) - (\beta b + \gamma) (\eta + \theta m) \over (\alpha + \beta m) (\epsilon b + \zeta) - (\beta b + \gamma) (\delta + \epsilon m) } <math>
is a constant. Q.E.D.
Comparing c with n, notice that, up to an overall sign, their denominators are the same. Also, n is obtained from c by exchanging the following coefficients:
- <math> \alpha \leftrightarrow \delta, \ \beta \leftrightarrow \epsilon, \ \gamma \leftrightarrow \zeta . <math>
There is also the following exchange symmetry between the numerator and denominator of n:
- <math> \alpha \leftrightarrow \eta, \ \beta \leftrightarrow \theta, \ \gamma \leftrightarrow \kappa . <math>
The numerator and denominator of c also have exchange symmetry: <math> \{ \eta \leftrightarrow \delta, \ \theta \leftrightarrow \epsilon, \ \kappa \leftrightarrow \zeta \}. <math>
The exchange symmetry between n and c can be chunked into binomials:
- <math> n \leftrightarrow c \equiv \{ (\alpha + m \beta ) \leftrightarrow (\delta + m \epsilon ), \ (\gamma + b \beta ) \leftrightarrow (\zeta + b \epsilon ) \}. <math>
All of these exchange symmetries amount to exchanging pairs of rows in the coefficient matrix.
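A numerical sketch of this result (the coefficients, slope, and intercept are illustrative): transform three points of the line y = m x + b, fit a line through two of the images, and compare its slope and intercept with the formulas for n and c; the third image lies on the same line.

```python
al, be, ga = 1.0, 2.0, 0.5     # alpha, beta, gamma
et, th, ka = 0.0, 1.0, -1.0    # eta, theta, kappa
de, ep, ze = 0.3, 0.1, 1.0     # delta, epsilon, zeta
m, b = 0.7, -0.4

def T(x, y):
    den = de * x + ep * y + ze
    return (al * x + be * y + ga) / den, (et * x + th * y + ka) / den

pts = [T(x, m * x + b) for x in (-1.0, 0.5, 2.0)]
slope = (pts[1][1] - pts[0][1]) / (pts[1][0] - pts[0][0])
intercept = pts[0][1] - slope * pts[0][0]

n = (m * (ep * ka - ze * th) + b * (de * th - ep * et) + (de * ka - ze * et)) / \
    (m * (ep * ga - ze * be) + b * (de * be - ep * al) + (de * ga - ze * al))
c = (m * (be * ka - ga * th) + b * (al * th - be * et) + (al * ka - ga * et)) / \
    (m * (be * ze - ga * ep) + b * (al * ep - be * de) + (al * ze - ga * de))

print(slope, n)                        # fitted slope vs formula for n
print(intercept, c)                    # fitted intercept vs formula for c
print(pts[2][1], n * pts[2][0] + c)    # the third image lies on the image line
```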
Planar transformations of conic sections
A trilinear transformation such as T given by equations (14) and (15) will convert a conic section
- <math> A x^2 + B y^2 + C x + D y + E x y + F = 0 \qquad \qquad (17) <math>
into another conic section
- <math> A' T_x^2 + B' T_y^2 + C' T_x + D' T_y + E' T_x T_y + F' = 0. \qquad \qquad (18) <math>
Proof
Let there be given a conic section described by equation (17) and a planar transformation T described by equations (14) and (15) which converts points (x,y) into points (Tx,Ty).
It is possible to find an inverse transformation T′ which converts back points (Tx,Ty) to points (x,y). This inverse transformation has a coefficient matrix
- <math> M_{T'} = \begin{bmatrix} \alpha' & \beta' & \gamma' \\
\eta' & \theta' & \kappa' \\ \delta' & \epsilon' & \zeta' \end{bmatrix}. <math>
Equation (17) can be expressed in terms of the inverse transformation:
- <math> A \left( {\alpha' T_x + \beta' T_y + \gamma' \over \delta' T_x + \epsilon' T_y + \zeta'} \right)^2 + B \left( {\eta' T_x + \theta' T_y + \kappa' \over \delta' T_x + \epsilon' T_y + \zeta'} \right)^2 + C \left( {\alpha' T_x + \beta' T_y + \gamma' \over \delta' T_x + \epsilon' T_y + \zeta'} \right) + D \left( {\eta' T_x + \theta' T_y + \kappa' \over \delta' T_x + \epsilon' T_y + \zeta'} \right) + E \left( {\alpha' T_x + \beta' T_y + \gamma' \over \delta' T_x + \epsilon' T_y + \zeta'} \right) \left( {\eta' T_x + \theta' T_y + \kappa' \over \delta' T_x + \epsilon' T_y + \zeta'} \right) + F = 0. <math>
The denominators can be cleared by multiplying both sides of the equation by the square of the denominator trinomial:
- <math> A (\alpha' T_x + \beta' T_y + \gamma')^2 + B (\eta' T_x + \theta' T_y + \kappa')^2 + C (\alpha' T_x + \beta' T_y + \gamma') (\delta' T_x + \epsilon' T_y + \zeta') + D (\eta' T_x + \theta' T_y + \kappa') (\delta' T_x + \epsilon' T_y + \zeta') + E (\alpha' T_x + \beta' T_y + \gamma') (\eta' T_x + \theta' T_y + \kappa') + F (\delta' T_x + \epsilon' T_y + \zeta')^2 = 0. <math>
Expand the products of trinomials and collect common powers of Tx and Ty:
- <math> \begin{matrix}
(A \alpha'^2 + B \eta'^2 + C \alpha' \delta' + D \eta' \delta' + E \alpha' \eta' + F \delta'^2) T_x^2 \\ + (A \beta'^2 + B \theta'^2 + C \beta' \epsilon' + D \theta' \epsilon' + E \beta' \theta' + F \epsilon'^2) T_y^2 \\ + (2 A \alpha' \gamma' + 2 B \eta' \kappa' + C (\alpha' \zeta' + \gamma' \delta') + D (\eta' \zeta' + \kappa' \delta') + E (\alpha' \kappa' + \gamma' \eta') + 2 F \delta' \zeta') T_x \\ + (2 A \beta' \gamma' + 2 B \theta' \kappa' + C (\beta' \zeta' + \gamma' \epsilon') + D (\theta' \zeta' + \kappa' \epsilon') + E (\beta' \kappa' + \gamma' \theta') + 2 F \epsilon' \zeta') T_y \\ + (2 A \alpha' \beta' + 2 B \eta' \theta' + C (\alpha' \epsilon' + \beta' \delta') + D (\eta' \epsilon' + \theta' \delta') + E (\alpha' \theta' + \beta' \eta') + 2 F \delta' \epsilon') T_x T_y \\ + (A \gamma'^2 + B \kappa'^2 + C \gamma' \zeta' + D \kappa' \zeta' + E \gamma' \kappa' + F \zeta'^2) = 0. \end{matrix} \qquad \qquad (19) <math>
Equation (19) has the same form as equation (18).
What remains to do is to express the primed coefficients in terms of the unprimed coefficients. To do this, apply Cramer's rule to the coefficient matrix MT to obtain the primed matrix of the inverse transformation:
- <math> M_{T'} = {1 \over \Delta} \begin{bmatrix}
\left| \begin{matrix} \theta &\kappa \\ \epsilon & \zeta \end{matrix} \right| & \left| \begin{matrix} \epsilon & \zeta \\ \beta & \gamma \end{matrix} \right| & \left| \begin{matrix} \beta & \gamma \\ \theta & \kappa \end{matrix} \right| \\ \quad & \quad & \quad \\ \left| \begin{matrix} \kappa & \eta \\ \zeta & \delta \end{matrix} \right| & \left| \begin{matrix} \zeta & \delta \\ \gamma & \alpha \end{matrix} \right| & \left| \begin{matrix} \gamma & \alpha \\ \kappa & \eta \end{matrix} \right| \\ \quad & \quad & \quad \\ \left| \begin{matrix} \eta & \theta \\ \delta & \epsilon \end{matrix} \right| & \left| \begin{matrix} \delta &\epsilon \\ \alpha & \beta \end{matrix} \right| & \left| \begin{matrix} \alpha & \beta \\ \eta & \theta \end{matrix} \right| \end{bmatrix} \qquad \qquad (20) <math> where Δ is the determinant of the unprimed coefficient matrix.
Equation (20) allows the primed coefficients to be expressed in terms of the unprimed coefficients. Each primed coefficient carries the common factor 1/Δ, and equation (19) is homogeneous of degree two in the primed coefficients, so the whole left side acquires an overall factor 1/Δ², which can be dropped since the right side is zero. Therefore
- <math> A' = A (\theta \zeta - \kappa \epsilon)^2
+ B (\kappa \delta - \eta \zeta)^2 + C (\theta \zeta - \kappa \epsilon) (\eta \epsilon - \theta \delta) + D (\kappa \delta - \eta \zeta) (\eta \epsilon - \theta \delta) + E (\theta \zeta - \kappa \epsilon) (\kappa \delta - \eta \zeta) + F (\eta \epsilon - \theta \delta)^2 <math>
- <math> B' = A (\epsilon \gamma - \zeta \beta)^2
+ B (\zeta \alpha - \delta \gamma)^2 + C (\epsilon \gamma - \zeta \beta) (\delta \beta - \epsilon \alpha) + D (\zeta \alpha - \delta \gamma) (\delta \beta - \epsilon \alpha) + E (\epsilon \gamma - \zeta \beta) (\zeta \alpha - \delta \gamma) + F (\delta \beta - \epsilon \alpha)^2 <math>
- <math> C' = 2 A (\theta \zeta - \kappa \epsilon) (\beta \kappa - \gamma \theta)
+ 2 B (\kappa \delta - \eta \zeta) (\gamma \eta - \alpha \kappa) + C [ (\theta \zeta - \kappa \epsilon) (\alpha \theta - \beta \eta) + (\beta \kappa - \gamma \theta) (\eta \epsilon - \theta \delta)] + D [ (\kappa \delta - \eta \zeta) (\alpha \theta - \beta \eta) +
(\gamma \eta - \alpha \kappa) (\eta \epsilon - \theta \delta) ]
+ E [ (\theta \zeta - \kappa \epsilon) (\gamma \eta - \alpha \kappa) +
(\beta \kappa - \gamma \theta) (\kappa \delta - \eta \zeta) ]
+ 2 F (\eta \epsilon - \theta \delta) (\alpha \theta - \beta \eta) <math>
- <math> D' = 2 A (\epsilon \gamma - \zeta \beta) (\beta \kappa - \gamma \theta)
+ 2 B (\zeta \alpha - \delta \gamma) (\gamma \eta - \alpha \kappa) + C [ (\epsilon \gamma - \zeta \beta) (\alpha \theta - \beta \eta) + (\beta \kappa - \gamma \theta) (\delta \beta - \epsilon \alpha) ] + D [ (\zeta \alpha - \delta \gamma) (\alpha \theta - \beta \eta) + (\gamma \eta - \alpha \kappa) (\delta \beta - \epsilon \alpha) ] + E [ (\epsilon \gamma - \zeta \beta) (\gamma \eta - \alpha \kappa) + (\beta \kappa - \gamma \theta) (\zeta \alpha - \delta \gamma) ] + 2 F (\delta \beta - \epsilon \alpha) (\alpha \theta - \beta \eta) <math>
- <math> E' = 2 A (\theta \zeta - \kappa \epsilon) (\epsilon \gamma - \zeta \beta)
+ 2 B (\kappa \delta - \eta \zeta) (\zeta \alpha - \delta \gamma) + C [(\theta \zeta - \kappa \epsilon) (\delta \beta - \epsilon \alpha) + (\epsilon \gamma - \zeta \beta) (\eta \epsilon - \theta \delta)] + D [ (\kappa \delta - \eta \zeta) (\delta \beta - \epsilon \alpha) +
(\zeta \alpha - \delta \gamma) (\eta \epsilon - \theta \delta)]
+ E [ (\theta \zeta - \kappa \epsilon) (\zeta \alpha - \delta \gamma) + (\epsilon \gamma - \zeta \beta) (\kappa \delta - \eta \zeta)] + 2 F (\eta \epsilon - \theta \delta) (\delta \beta - \epsilon \alpha) <math>
- <math> F' = A (\beta \kappa - \gamma \theta)^2
+ B (\gamma \eta - \alpha \kappa)^2 + C (\beta \kappa - \gamma \theta) (\alpha \theta - \beta \eta) + D (\gamma \eta - \alpha \kappa) (\alpha \theta - \beta \eta) + E (\beta \kappa - \gamma \theta) (\gamma \eta - \alpha \kappa) + F (\alpha \theta - \beta \eta)^2 <math>
The coefficients of the transformed conic have been expressed in terms of the coefficients of the original conic and the coefficients of the planar transformation T. Q.E.D.
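The result can be checked numerically without the explicit formulas for A′ through F′: invert the coefficient matrix numerically, form the coefficients of equation (19) from the primed entries, and confirm that images of points of the original conic satisfy it. The matrix and the conic (a unit circle) below are illustrative choices.

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.5],     # rows: (alpha beta gamma),
              [0.0, 1.0, -1.0],    #       (eta theta kappa),
              [0.3, 0.1, 1.0]])    #       (delta epsilon zeta)
(ap, bp, gp), (hp, tp, kp), (dp, ep, zp) = np.linalg.inv(M)   # primed coefficients

A, B, C, D, E, F = 1.0, 1.0, 0.0, 0.0, 0.0, -1.0              # x^2 + y^2 - 1 = 0

# Coefficients of the transformed conic, from equation (19).
A2 = A*ap**2 + B*hp**2 + C*ap*dp + D*hp*dp + E*ap*hp + F*dp**2
B2 = A*bp**2 + B*tp**2 + C*bp*ep + D*tp*ep + E*bp*tp + F*ep**2
C2 = 2*A*ap*gp + 2*B*hp*kp + C*(ap*zp + gp*dp) + D*(hp*zp + kp*dp) + E*(ap*kp + gp*hp) + 2*F*dp*zp
D2 = 2*A*bp*gp + 2*B*tp*kp + C*(bp*zp + gp*ep) + D*(tp*zp + kp*ep) + E*(bp*kp + gp*tp) + 2*F*ep*zp
E2 = 2*A*ap*bp + 2*B*hp*tp + C*(ap*ep + bp*dp) + D*(hp*ep + tp*dp) + E*(ap*tp + bp*hp) + 2*F*dp*ep
F2 = A*gp**2 + B*kp**2 + C*gp*zp + D*kp*zp + E*gp*kp + F*zp**2

for phi in (0.3, 1.1, 2.5):                     # sample points on the circle
    x, y = np.cos(phi), np.sin(phi)
    nx, ny, nd = M @ np.array([x, y, 1.0])      # transform the point
    Tx, Ty = nx / nd, ny / nd
    print(A2*Tx**2 + B2*Ty**2 + C2*Tx + D2*Ty + E2*Tx*Ty + F2)   # ~ 0
```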
Planar projectivities and cross-ratio
Let four points A, B, C, D be collinear. Let there be a planar projectivity T which transforms these points into points A′, B′, C′, and D′. It was already shown that lines are transformed into lines, so that the transformed points A′ through D′ will also be collinear. Then it will turn out that the cross-ratio of the original four points is the same as the cross-ratio of their transforms:
- <math> [A \ B \ C \ D] = [A' \ B' \ C' \ D']. <math>
Proof
If the two-dimensional coordinates of four points are known, and if the four points are collinear, then their cross-ratio can be found from their abscissas alone (provided their common line is not vertical). It is possible to project the points onto a horizontal line by means of a pencil of vertical lines issuing from a point on the line at infinity:
- <math> [A \ B \ C \ D] = [A_x \ B_x \ C_x \ D_x]. <math>
The same is true for the ordinates of the points. The reason is that replacing each point by its abscissa merely rescales all of the mutual differences by a common factor, and such a common rescaling cancels out of the cross-ratio.
Let
- <math> A : (x_1, m x_1 + b), <math>
- <math> B : (x_2, m x_2 + b), <math>
- <math> C : (x_3, m x_3 + b), <math>
- <math> D : (x_4, m x_4 + b). <math>
Clearly these four points are collinear. Let
- <math> T_x (x,y) = {\alpha x + \beta y + \gamma \over \delta x + \epsilon y + \zeta} <math>
be the first half of a trilinear transformation. Then
- <math> T_x(A) = {\alpha x_1 + \beta (m x_1 + b) + \gamma \over \delta x_1 + \epsilon (m x_1 + b) + \zeta} = {(\alpha + \beta m) x_1 + (\beta b + \gamma) \over (\delta + \epsilon m) x_1 + (\epsilon b + \zeta)}, <math>
- <math> T_x(B) = {\alpha x_2 + \beta (m x_2 + b) + \gamma \over \delta x_2 + \epsilon (m x_2 + b) + \zeta} = {(\alpha + \beta m) x_2 + (\beta b + \gamma) \over (\delta + \epsilon m) x_2 + (\epsilon b + \zeta)}, <math>
- <math> T_x(C) = {\alpha x_3 + \beta (m x_3 + b) + \gamma \over \delta x_3 + \epsilon (m x_3 + b) + \zeta} = {(\alpha + \beta m) x_3 + (\beta b + \gamma) \over (\delta + \epsilon m) x_3 + (\epsilon b + \zeta)}, <math>
- <math> T_x(D) = {\alpha x_4 + \beta (m x_4 + b) + \gamma \over \delta x_4 + \epsilon (m x_4 + b) + \zeta} = {(\alpha + \beta m) x_4 + (\beta b + \gamma) \over (\delta + \epsilon m) x_4 + (\epsilon b + \zeta)}. <math>
The original cross-ratio is
- <math> [x_1 \ x_2 \ x_3 \ x_4] = {x_1 - x_3 \over x_1 - x_4} \cdot {x_2 - x_4 \over x_2 - x_3}. <math>
It is not necessary to calculate the transformed cross-ratio. Just let
- <math> S(x) = {(\alpha + \beta m) x + (\beta b + \gamma) \over (\delta + \epsilon m) x + (\epsilon b + \zeta)} <math>
be a bilinear transformation. Then S(x) is a one-dimensional projective transformation. But Tx(A) = S(x1), Tx(B) = S(x2), Tx(C) = S(x3), and Tx(D) = S(x4). Therefore
- <math> [T_x(A) \ T_x(B) \ T_x(C) \ T_x(D)] = [S(A) \ S(B) \ S(C) \ S(D)] <math>
but it has already been shown that bilinear transformations preserve cross-ratio. Q.E.D.
Example
The following is a rather simple example of a planar projectivity:
- <math> T_x = {1 \over x}, \qquad T_y = {y \over x}. <math>
The coefficient matrix of this projectivity T is
- <math> M_T = \begin{bmatrix} 0 & 0 & 1 \\
0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} <math> and it is easy to verify that MT is its own inverse.
The locus of points described parametrically as <math> ( \cos \theta, \, \sin \theta ) <math> is a circle, due to the trigonometric identity
- <math> \cos^2 \theta + \sin^2 \theta = 1 <math>
which has the same form as the canonical equation of a circle. Applying the projectivity T yields the locus of points described parametrically by <math> (\sec \theta,\, \tan \theta) <math>, which is a hyperbola, due to the trigonometric identity
- <math> \sec^2 \theta - \tan^2 \theta = 1 <math>
which has the same form as the canonical equation of a hyperbola. Notice that points <math> (~-1,0) <math> and <math>(1,0)<math> are fixed points.
Indeed, this projectivity transforms any circle, of any radius, into a hyperbola centered at the origin with both of its foci lying on the x-axis, and vice versa. This projectivity also transforms the y-axis into the line at infinity, and vice versa:
- <math> T : (0, y) \rightarrow \left( {1 \over 0}, {y \over 0} \right) = (\pm \infty, \pm \infty), <math>
- <math> T: (\pm \infty, \pm \infty) \rightarrow \left( {1 \over \pm \infty}, {\pm \infty \over \pm \infty} \right) = (0, y). <math>
The ratio of the two unbounded coordinates remains y, so the image of (0, y) is the point at infinity shared by all lines of slope y; in this sense the y-axis is carried to the line at infinity, and the line at infinity is carried back to the y-axis.
This example emphasizes that in the real projective plane, RP2, a hyperbola is a closed curve which passes twice through the line at infinity. But what does the transformation do to a parabola?
Let the locus of points <math> (x,x^2) <math> describe a parabola. Its transformation is
- <math> T : (x,x^2) \rightarrow \left( {1 \over x}, {x^2 \over x} \right) = (x', 1/x') <math>
which is a hyperbola whose asymptotes are the x-axis and the y-axis and whose wings lie in the first quadrant and the third quadrant. Likewise, the hyperbola
- <math> y = {1 \over x} <math>
is transformed by T into the parabola
- <math> y = x^2 \quad <math>.
On the other hand, the parabola described by the locus of points <math> (x, \pm \sqrt{x}) <math> is transformed by T into itself. Since T carries the y-axis to the line at infinity, and this parabola touches the y-axis at only one point (the origin), its image, the parabola itself, meets the line at infinity at a single point.
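A brief numerical illustration of this example:

```python
import numpy as np

def T(x, y):
    return 1.0 / x, y / x

for phi in (0.4, 1.0, 2.0):                 # points of the unit circle
    X, Y = T(np.cos(phi), np.sin(phi))
    print(X**2 - Y**2)                      # 1.0: the image satisfies sec^2 - tan^2 = 1

for x in (0.5, 2.0, -3.0):                  # points of the hyperbola y = 1/x
    X, Y = T(x, 1.0 / x)
    print(Y - X**2)                         # 0.0: the image lies on the parabola y = x^2
```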
Transformations in projective 3-space
Three-dimensional transformations can be defined synthetically as follows: point X in a "subjective" 3-space must be transformed to a point T also in the subjective space. The transformation uses these elements: a pair of "observation points" P and Q, and an "objective" 3-space. The subjective and objective spaces and the two points all lie in four-dimensional space, and the two 3-spaces may intersect in some plane.
Draw line l1 through points X and P. This line intersects the objective space at point R. Draw line l2 through points R and Q. Line l2 intersects the subjective space at point T. Then T is the transform of X.
Analysis
Let
- <math> X : (x,y,z,0), <math>
- <math> T : (T_x,T_y,T_z,0), <math>
- <math> P : (P_x,P_y,P_z,P_t), <math>
- <math> Q : (Q_x,Q_y,Q_z,Q_t). <math>
Let there be an "objective" 3-space described by
- <math> t = f(x,y,z) = m x + n y + k z + b <math>
Draw line l1 through points P and X. This line intersects the objective space at R. This intersection can be described parametrically as follows:
- <math> (1 - \lambda_1) X + \lambda_1 P = (R_x,R_y,R_z,m R_x + n R_y + k R_z + b). <math>
This implies the following four equations:
- <math> R_x = x + \lambda_1 (P_x - x) <math>
- <math> R_y = y + \lambda_1 (P_y - y) <math>
- <math> R_z = z + \lambda_1 (P_z - z) <math>
- <math> R_t = \lambda_1 P_t = m R_x + n R_y + k R_z + b <math>
Substitute the first three equations into the last one:
- <math> (m x + n y + k z) + \lambda_1 (m P_x + n P_y + k P_z - m x - n y - k z - P_t) + b = 0 <math>
Solve for λ1,
- <math> \lambda_1 = {-(b + m x + n y + k z) \over m (P_x - x) + n (P_y - y) + k (P_z - z) - P_t} = {\lambda_{1N} \over \lambda_{1D}}. <math>
Draw line l2 through points R and Q. This line intersects the subjective 3-space at T. This intersection can be represented parametrically as follows:
- <math> (1 - \lambda_2) R + \lambda_2 Q = (T_x,T_y,T_z,0) <math>
This implies the following four equations:
- <math> T_x = R_x + \lambda_2 (Q_x - R_x), <math>
- <math> T_y = R_y + \lambda_2 (Q_y - R_y), <math>
- <math> T_z = R_z + \lambda_2 (Q_z - R_z), <math>
- <math> R_t + \lambda_2 (Q_t - R_t) = 0. <math>
The last equation can be solved for λ2,
- <math> \lambda_2 = {R_t \over R_t - Q_t} <math>
which can then be substituted into the other three equations:
- <math> T_x = R_x + R_t {Q_x - R_x \over R_t - Q_t} = {R_t Q_x - R_x Q_t \over R_t - Q_t}, <math>
- <math> T_y = R_y + R_t {Q_y - R_y \over R_t - Q_t} = {R_t Q_y - R_y Q_t \over R_t - Q_t}, <math>
- <math> T_z = R_z + R_t {Q_z - R_z \over R_t - Q_t} = {R_t Q_z - R_z Q_t \over R_t - Q_t}. <math>
Substitute the values for Rx, Ry, Rz, and Rt obtained from the first intersection into the above equations for Tx, Ty, and Tz,
- <math> T_x = {\lambda_1 P_t Q_x - [x + \lambda_1 (P_x - x)] Q_t \over \lambda_1 P_t - Q_t} = {\lambda_1 [P_t Q_x - Q_t (P_x - x)] - x Q_t \over \lambda_1 P_t - Q_t}, <math>
- <math> T_y = {\lambda_1 P_t Q_y - [y + \lambda_1 (P_y - y)] Q_t \over \lambda_1 P_t - Q_t} = {\lambda_1 [P_t Q_y - Q_t (P_y - y)] - y Q_t \over \lambda_1 P_t - Q_t}, <math>
- <math> T_z = {\lambda_1 P_t Q_z - [z + \lambda_1 (P_z - z)] Q_t \over \lambda_1 P_t - Q_t} = {\lambda_1 [P_t Q_z - Q_t (P_z - z)] - z Q_t \over \lambda_1 P_t - Q_t}. <math>
Multiply both numerators and denominators of the above three equations by the denominator of lambda1: λ1D,
- <math> T_x = {\lambda_{1N} [P_t Q_x - Q_t (P_x - x)] - x Q_t \lambda_{1D} \over P_t \lambda_{1N} - Q_t \lambda_{1D} }, <math>
- <math> T_y = {\lambda_{1N} [P_t Q_y - Q_t (P_y - y)] - y Q_t \lambda_{1D} \over P_t \lambda_{1N} - Q_t \lambda_{1D} }, <math>
- <math> T_z = {\lambda_{1N} [P_t Q_z - Q_t (P_z - z)] - z Q_t \lambda_{1D} \over P_t \lambda_{1N} - Q_t \lambda_{1D} }, <math>
Plug in the values of the numerator and denominator of lambda1:
- <math> \lambda_{1N} = b + m x + n y + k z <math>
- <math> \lambda_{1D} = P_t + m (x - P_x) + n (y - P_y) + k (z - P_z) <math>
to obtain
- <math> T_x = {T_{xN} \over T_{xD}} = {(b + m x + n y + k z) [P_t Q_x - Q_t (P_x - x)] - x Q_t [P_t + m (x - P_x) + n (y - P_y) + k (z - P_z)] \over P_t (b + m x + n y + k z) - Q_t [P_t + m (x - P_x) + n (y - P_y) + k (z - P_z)]}. <math>
- <math> T_y = {T_{yN} \over T_{xD}} <math> (all three components share the same denominator <math> T_{xD} <math>),
- <math> T_{yN} = (b + m x + n y + k z) [P_t Q_y - Q_t (P_y - y)] - y Q_t [P_t + m (x - P_x) + n (y - P_y) + k (z - P_z)], <math>
- <math> T_z = {T_{zN} \over T_{xD}} <math>.
The numerator TxN can be expanded. It will be found that the second-degree terms of x, y, and z cancel each other out. Then collecting terms with common x, y, and z yields
- <math> T_{xN} = x (m P_t Q_x + n P_y Q_t + k P_z Q_t + Q_t (b - P_t)) + y n (P_t Q_x - P_x Q_t) + z k (P_t Q_x - P_x Q_t) + b (P_t Q_x - P_x Q_t) <math>
Likewise, the denominator becomes
- <math> T_{xD} = (m x + n y + k z) (P_t - Q_t) + (m P_x + n P_y + k P_z) Q_t + P_t (b - Q_t). <math>
The numerator TyN, when expanded and then simplified, becomes
- <math> T_{yN} = x m (P_t Q_y - P_y Q_t) + y (m P_x Q_t + n P_t Q_y + k P_z Q_t + Q_t (b - P_t)) + z k (P_t Q_y - P_y Q_t) + b (P_t Q_y - P_y Q_t). <math>
Likewise, the numerator TzN becomes
- <math> T_{zN} = x m (P_t Q_z - P_z Q_t) + y n (P_t Q_z - P_z Q_t) + z (m P_x Q_t + n P_y Q_t + k P_t Q_z + Q_t (b - P_t)) + b (P_t Q_z - P_z Q_t). <math>
Quadrilinear transformations
Let
- <math> \alpha = m P_t Q_x + n P_y Q_t + k P_z Q_t + Q_t (b - P_t), <math>
- <math> \beta = n (P_t Q_x - P_x Q_t), <math>
- <math> \gamma = k (P_t Q_x - P_x Q_t), <math>
- <math> \delta = b (P_t Q_x - P_x Q_t), <math>
- <math> \epsilon = m (P_t - Q_t), <math>
- <math> \zeta = n (P_t - Q_t), <math>
- <math> \eta = k (P_t - Q_t), <math>
- <math> \theta = (m P_x + n P_y + k P_z) Q_t + P_t (b - Q_t), <math>
- <math> \iota = m (P_t Q_y - P_y Q_t), <math>
- <math> \kappa = m P_x Q_t + n P_t Q_y + k P_z Q_t + Q_t (b - P_t), <math>
- <math> \lambda = k (P_t Q_y - P_y Q_t), <math>
- <math> \mu = b (P_t Q_y - P_y Q_t), <math>
- <math> \nu = m (P_t Q_z - P_z Q_t), <math>
- <math> \xi = n (P_t Q_z - P_z Q_t), <math>
- <math> o = m P_x Q_t + n P_y Q_t + k P_t Q_z + Q_t (b - P_t), <math>
- <math> \rho = b (P_t Q_z - P_z Q_t). <math>
Then the transformation in 3-space can be expressed as follows,
- <math> T_x = {\alpha x + \beta y + \gamma z + \delta \over \epsilon x + \zeta y + \eta z + \theta}, <math>
- <math> T_y = {\iota x + \kappa y + \lambda z + \mu \over \epsilon x + \zeta y + \eta z + \theta}, <math>
- <math> T_z = {\nu x + \xi y + o z + \rho \over \epsilon x + \zeta y + \eta z + \theta}. <math>
The sixteen coefficients of this transformation can be arranged in a coefficient matrix
- <math> M_T = \begin{bmatrix} \alpha & \beta & \gamma & \delta \\
\iota & \kappa & \lambda & \mu \\ \nu & \xi & o & \rho \\ \epsilon & \zeta & \eta & \theta \end{bmatrix}. <math>
Whenever this matrix is invertible, its coefficients will describe a quadrilinear transformation.
Transformation T in 3-space can also be represented in terms of homogeneous coordinates as
- <math> T : [x : y : z : 1] \rightarrow [\alpha x + \beta y + \gamma z + \delta : \iota x + \kappa y + \lambda z + \mu : \nu x + \xi y + o z + \rho : \epsilon x + \zeta y + \eta z + \theta ]. <math>
This means that the coefficient matrix of T can operate directly on 4-component vectors of homogeneous coordinates. Transformation of a point can be effected simply by multiplying the coefficient matrix with the position vector of the point in homogeneous coordinates. Therefore, if T transforms a point on the plane at infinity, the result will be
- <math> T : [x : y : z : 0] \rightarrow [\alpha x + \beta y + \gamma z : \iota x + \kappa y + \lambda z : \nu x + \xi y + o z : \epsilon x + \zeta y + \eta z ]. <math>
If ε, ζ, and η are not all equal to zero, then T will transform the plane at infinity into a locus of points which lie mostly in affine space. If ε, ζ, and η are all zero, then T will be a special kind of projective transformation called an affine transformation, which transforms affine points into affine points and ideal points (i.e. points at infinity) into ideal points.
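A rough sketch of the homogeneous-coordinate action just described (the 4 × 4 matrix is an arbitrary invertible example, not one derived from particular observation points and an objective space):

```python
import numpy as np

M = np.array([[1.0, 0.0, 2.0, 0.5],
              [0.0, 1.0, -1.0, 0.0],
              [1.0, 1.0, 1.0, 0.0],
              [0.2, 0.0, 0.1, 1.0]])

def transform(point, M):
    h = M @ np.append(point, 1.0)    # homogeneous image [.. : .. : .. : w]
    return h[:3] / h[3]              # affine coordinates (Tx, Ty, Tz)

print(transform(np.array([1.0, -2.0, 0.5]), M))

# With the last row equal to (0, 0, 0, theta) the map is affine:
# points at infinity [x : y : z : 0] stay at infinity.
M_affine = M.copy()
M_affine[3] = [0.0, 0.0, 0.0, 1.0]
print(M_affine @ np.array([1.0, -2.0, 0.5, 0.0]))   # last component remains 0
```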
The group of affine transformations has a subgroup of affine rotations whose matrices have the form
- <math> M_{AR} = \begin{bmatrix} \alpha & \beta & \gamma & 0 \\
\iota & \kappa & \lambda & 0 \\ \nu & \xi & o & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} <math> such that the submatrix
- <math> \begin{bmatrix} \alpha & \beta & \gamma \\
\iota & \kappa & \lambda \\ \nu & \xi & o \end{bmatrix} <math> is orthogonal.
Properties of quadrilinear transformations
Given a pair of quadrilinear transformations T1 and T2, whose coefficient matrices are <math> M_{T_1} <math> and <math> M_{T_2} <math>, the composition of this pair of transformations is another quadrilinear transformation T3 whose coefficient matrix <math> M_{T_3} <math> is equal to the product of the first and second coefficient matrices,
- <math> (T_3 = T_2 \circ T_1) \leftrightarrow (M_{T_3} = M_{T_2} M_{T_1}). <math>
The identity quadrilinear transformation TI is the transformation whose coefficient matrix is the identity matrix.
Given a spatial projectivity T1 whose coefficient matrix is <math> M_{T_1} <math>, the inverse of this projectivity is another projectivity T−1 whose coefficient matrix <math> M_{T_{-1}} <math> is the inverse of T1′s coefficient matrix,
- <math> (T_{-1} \circ T_1 = T_I) \leftrightarrow (M_{T_{-1}} M_{T_1} = I) <math>.
Composition of quadrilinear transformations is associative, therefore the set of all quadrilinear transformations, together with the operation of composition, form a group.
This group of quadrilinear transformations contains subgroups of trilinear transformations. For example, the subgroup of all quadrilinear transformations whose coefficient matrices have the form
- <math> \begin{bmatrix} \alpha & \beta & 0 & \delta \\ \iota & \kappa & 0 & \mu \\ 0 & 0 & 1 & 0 \\ \epsilon & \zeta & 0 & \theta \end{bmatrix} <math>
is isomorphic to the group of all trilinear transformations whose coefficient matrices are
- <math> \begin{bmatrix} \alpha & \beta & \delta \\
\iota & \kappa & \mu \\ \epsilon & \zeta & \theta \end{bmatrix}. <math>
The transformations in this subgroup map the plane z = 0 to itself, and on that plane they act as
- <math> T : (x, y, 0) \rightarrow \left( {\alpha x + \beta y + \delta \over \epsilon x + \zeta y + \theta} , {\iota x + \kappa y + \mu \over \epsilon x + \zeta y + \theta}, 0 \right). <math>
This means that this subgroup of quadrilinear transformations acts on the plane z = 0 just like the group of trilinear transformations.
Spatial transformations of planes
Projective transformations in 3-space transform planes into planes. This can be demonstrated more easily using homogeneous coordinates.
Let
- <math> z = m x + n y + b <math>
be the equation of a plane. This is equivalent to
- <math> m x + n y - z + b = 0. \qquad \qquad (21) <math>
Equation (21) can be expressed as a matrix product:
- <math> [m \ n \ -1 \ b] \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = 0. <math>
A permutation matrix can be interposed between the two vectors, in order to make the plane vector have homogeneous coordinates:
- <math> [ m : n : b : 1 ] \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = 0. \qquad \qquad (22) <math>
A quadrilinear transformation should convert this to
- <math> [ T_m : T_n : T_b : 1 ] \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} T_x \\ T_y \\ T_z \\ 1 \end{bmatrix} = 0 \qquad \qquad (23) <math>
where, up to an overall nonzero scale factor (which is immaterial in these homogeneous equations),
- <math> \begin{bmatrix} T_x \\ T_y \\ T_z \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & \beta & \gamma & \delta \\ \iota & \kappa & \lambda & \mu \\ \nu & \xi & o & \rho \\ \epsilon & \zeta & \eta & \theta \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}. \qquad \qquad (24) <math>
Equation (22) is equivalent to
- <math> [ m : n : b : 1 ] \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} \bar{\alpha} & \bar{\iota} & \bar{\nu} & \bar{\epsilon} \\ \bar{\beta} & \bar{\kappa} & \bar{\xi} & \bar{\zeta} \\ \bar{\gamma} & \bar{\lambda} & \bar{o} & \bar{\eta} \\ \bar{\delta} & \bar{\mu} & \bar{\rho} & \bar{\theta} \end{bmatrix} \begin{bmatrix} \alpha & \beta & \gamma & \delta \\ \iota & \kappa & \lambda & \mu \\ \nu & \xi & o & \rho \\ \epsilon & \zeta & \eta & \theta \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = 0 \qquad \qquad (25) <math>
where
- <math> \bar{\alpha} = \left| \begin{matrix} \kappa & \lambda & \mu \\ \xi & o & \rho \\ \zeta & \eta & \theta \end{matrix} \right| ; \qquad \bar{\beta} = - \left| \begin{matrix} \iota & \lambda & \mu \\ \nu & o & \rho \\ \epsilon & \eta & \theta \end{matrix} \right|, <math> and so on: each barred coefficient is the cofactor of the corresponding unbarred coefficient of <math> M_T <math>, so that the barred matrix appearing in equation (25) is the adjugate (transposed cofactor matrix) of <math> M_T <math>, whose product with <math> M_T <math> is Δ times the identity matrix.
Applying equation (24) to equation (25) yields
- <math> [ m : n : b : 1 ] \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} \bar{\alpha} & \bar{\iota} & \bar{\nu} & \bar{\epsilon} \\ \bar{\beta} & \bar{\kappa} & \bar{\xi} & \bar{\zeta} \\ \bar{\gamma} & \bar{\lambda} & \bar{o} & \bar{\eta} \\ \bar{\delta} & \bar{\mu} & \bar{\rho} & \bar{\theta} \end{bmatrix} \begin{bmatrix} T_x \\ T_y \\ T_z \\ 1 \end{bmatrix} = 0. \qquad \qquad (26) <math>
Combining equations (26) and (23) produces
- <math> \begin{bmatrix} \bar{\alpha} & \bar{\beta} & \bar{\gamma} & \bar{\delta} \\ \bar{\iota} & \bar{\kappa} & \bar{\lambda} & \bar{\mu} \\ \bar{\nu} & \bar{\xi} & \bar{o} & \bar{\rho} \\ \bar{\epsilon} & \bar{\zeta} & \bar{\eta} & \bar{\theta} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} m \\ n \\ b \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} T_m \\ T_n \\ T_b \\ 1 \end{bmatrix}. <math>
Solve for <math> [ T_m : T_n : T_b : 1 ]^T <math> (up to the overall scale factor, which is fixed by normalizing the last component to 1),
- <math> \begin{bmatrix} T_m \\ T_n \\ T_b \\ 1 \end{bmatrix} = \begin{bmatrix} \bar{\alpha} & \bar{\beta} & \bar{\delta} & -\bar{\gamma} \\ \bar{\iota} & \bar{\kappa} & \bar{\mu} & -\bar{\lambda} \\ \bar{\epsilon} & \bar{\zeta} & \bar{\theta} & -\bar{\eta} \\ -\bar{\nu} & -\bar{\xi} & -\bar{\rho} & \bar{o} \end{bmatrix} \begin{bmatrix} m \\ n \\ b \\ 1 \end{bmatrix}. \qquad \qquad (27) <math>
Equation (27) describes how 3-space transformations convert a plane (m, n, b) into another plane (Tm, Tn, Tb) where
- <math> T_m = {\bar{\alpha} m + \bar{\beta} n + \bar{\delta} b - \bar{\gamma} \over - \bar{\nu} m - \bar{\xi} n - \bar{\rho} b + \bar{o}}, <math>
- <math> T_n = {\bar{\iota} m + \bar{\kappa} n + \bar{\mu} b - \bar{\lambda} \over - \bar{\nu} m - \bar{\xi} n - \bar{\rho} b + \bar{o}}, <math>
- <math> T_b = {\bar{\epsilon} m + \bar{\zeta} n + \bar{\theta} b - \bar{\eta} \over - \bar{\nu} m - \bar{\xi} n - \bar{\rho} b + \bar{o}}. <math>
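The fact that planes go to planes can also be checked numerically without computing the barred coefficients: transform a few points of a plane and verify that their images are again coplanar. The sketch below uses an arbitrary invertible coefficient matrix and an illustrative plane.

```python
import numpy as np

M = np.array([[1.0, 0.0, 2.0, 0.5],
              [0.0, 1.0, -1.0, 0.0],
              [1.0, 1.0, 1.0, 0.0],
              [0.2, 0.0, 0.1, 1.0]])
m, n, b = 0.3, -0.5, 1.0                      # the plane z = m*x + n*y + b

def transform(x, y, z):
    h = M @ np.array([x, y, z, 1.0])
    return h[:3] / h[3]

samples = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
images = [transform(x, y, m * x + n * y + b) for x, y in samples]

# Recover the image plane Tz = Tm*Tx + Tn*Ty + Tb from the first three images.
A = np.array([[p[0], p[1], 1.0] for p in images[:3]])
rhs = np.array([p[2] for p in images[:3]])
Tm, Tn, Tb = np.linalg.solve(A, rhs)

p = images[3]
print(p[2], Tm * p[0] + Tn * p[1] + Tb)   # the fourth image lies on the same plane
```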
Reference
- Introduction to Projective Transformations by J. C. Alvarez Paiva (http://www.math.poly/~alvarez/teaching/projective-geometry/Inaugural-Lecture/page_3.html)