Rate distortion theory
Rate distortion theory is the branch of information theory addressing the problem of determining the minimal amount of entropy (or information) R that should be communicated over a channel so that the source (input signal) can be reconstructed at the receiver (output signal) without exceeding a given distortion D. As such, rate distortion theory gives theoretical bounds on how much compression can be achieved using lossy data compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate-distortion functions.
Rate distortion theory was created by Claude Shannon in his foundational work on information theory.
Introduction
In rate distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of ongoing discussion. In the simplest case (which is actually used in most cases), the distortion is defined as the expected value of the squared difference between the input and output signals (i.e., the mean squared error). However, since most lossy compression techniques operate on data that will be perceived by humans (listening to music, watching pictures and video), the distortion measure should preferably include some aspects of human perception. In audio compression, perceptual models (and therefore perceptual distortion measures) are well developed and routinely used in compression techniques such as MP3 or Vorbis, but they are often not easy to include in rate-distortion theory. In image and video compression, the human perception models are less well developed, and their inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrices.
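To make the mean-squared-error distortion measure concrete, here is a minimal Python sketch (the Gaussian test signal and the uniform quantizer step size are illustrative assumptions, not part of the theory) that measures the distortion introduced by scalar quantization:
<pre>
import numpy as np

def mse_distortion(x, y):
    """Mean squared error between original x and reconstruction y."""
    return np.mean((np.asarray(x) - np.asarray(y)) ** 2)

# Illustrative source: 10,000 samples of a unit-variance Gaussian signal.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)

# Uniform scalar quantizer with an arbitrary step size of 0.5;
# a coarser step lowers the rate but raises the distortion.
step = 0.5
y = step * np.round(x / step)

print("distortion D =", mse_distortion(x, y))
</pre>
Here the rate would be the average number of bits needed to encode the quantizer indices, so varying the step size traces out an operational rate-distortion trade-off.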
Rate-distortion functions
The functions that relate the rate and distortion are found as the solution of the following minimization problem:
- <math>\min_{Q_{Y|X}(y|x)} I_Q(Y;X)\ \mbox{subject to}\ D_Q \le D^*.</math>
Here <math>Q_{Y|X}(y|x)</math> is the conditional probability density function (PDF) of the communication channel output (compressed signal) Y for a given input (original signal) X, and <math>I_Q(Y;X)</math> is the mutual information between Y and X, defined as
- <math>I(Y;X) = H(Y) - H(Y|X)</math>
where H(Y) and H(Y|X) are the (differential) entropy of the output signal Y and the conditional (differential) entropy of the output signal given the input signal, respectively:
- <math> H(Y) = -\int_{-\infty}^{\infty} P_Y(y) \log_2 (P_Y(y))\,dy </math>
- <math> H(Y|X) = -\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Q_{Y|X}(y|x) P_X(x) \log_2 (Q_{Y|X}(y|x))\, dx\, dy. </math>
The mutual information can be understood as the prior uncertainty the receiver has about the sender's signal, H(Y), diminished by the uncertainty that remains after receiving information about the sender's signal, H(Y|X). The decrease in uncertainty is due to the communicated amount of information, which is I(Y;X).
As an example, if there is no communication at all, then H(Y|X) = H(Y) and I(Y;X) = 0. Alternatively, if the communication channel is perfect and the received signal Y is identical to the signal X at the sender, then H(Y|X) = 0 and I(Y;X) = H(Y) = H(X).
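These two limiting cases can be checked numerically for a discrete source. The following Python sketch (the binary distributions are chosen purely for illustration) computes I(Y;X) = H(Y) - H(Y|X) from a joint probability mass function:
<pre>
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(Y;X) = H(Y) - H(Y|X) for a joint pmf indexed as joint[x, y]."""
    p_x = joint.sum(axis=1)
    p_y = joint.sum(axis=0)
    # H(Y|X) is the average over x of the entropy of Y given X = x.
    h_y_given_x = sum(
        p_x[i] * entropy(joint[i] / p_x[i]) for i in range(len(p_x)) if p_x[i] > 0
    )
    return entropy(p_y) - h_y_given_x

# No communication: Y independent of X, so I(Y;X) = 0 bits.
independent = np.outer([0.5, 0.5], [0.5, 0.5])
print(mutual_information(independent))

# Perfect channel: Y = X, so I(Y;X) = H(X) = 1 bit.
perfect = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(perfect))
</pre>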
In the definition of the rate-distortion function, <math>D_Q</math> and <math>D^*</math> are the distortion between X and Y for a given <math>Q_{Y|X}(y|x)</math> and the prescribed maximum distortion, respectively. When we use the mean squared error as the distortion measure, we have (for amplitude-continuous signals):
- <math>D_Q = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} P_{X,Y}(x,y) (x-y)^2\, dx\, dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Q_{Y|X}(y|x) P_X(x) (x-y)^2\, dx\, dy </math>
As the above equations show, calculating a rate-distortion function requires a stochastic description of the input X in terms of the PDF <math>P_X(x)</math>, and then aims at finding the conditional PDF <math>Q_{Y|X}(y|x)</math> that minimizes the rate for a given distortion <math>D^*</math>.
Unfortunately, this minimization problem can be solved analytically only in a few cases, of which the following two are the best known. However, although exact solutions are available only in a few cases, rate-distortion functions measured in real life tend to have very similar forms.
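Outside these analytical cases, the minimization can be carried out numerically for discrete sources with the Blahut-Arimoto algorithm. The sketch below is one illustrative Python implementation (the function name, the slope parameter s, and the binary Hamming-distortion example are assumptions for demonstration, not taken from the text); each value of s yields one point on the rate-distortion curve:
<pre>
import numpy as np

def blahut_arimoto(p_x, d, s, n_iter=200):
    """One point on the R(D) curve of a discrete source.

    p_x : source pmf, shape (nx,)
    d   : distortion matrix d[x, y], shape (nx, ny)
    s   : slope parameter (> 0); larger s favors lower distortion
    Returns (rate in bits per sample, distortion).
    """
    ny = d.shape[1]
    q_y = np.full(ny, 1.0 / ny)              # initial output marginal
    for _ in range(n_iter):
        # Optimal test channel for the current output marginal.
        w = q_y * np.exp(-s * d)              # shape (nx, ny)
        q_yx = w / w.sum(axis=1, keepdims=True)
        # Re-estimate the output marginal from the channel.
        q_y = p_x @ q_yx
    joint = p_x[:, None] * q_yx
    distortion = np.sum(joint * d)
    rate = np.sum(joint * np.log2(q_yx / q_y))
    return rate, distortion

# Illustrative use: fair binary source with Hamming distortion,
# for which R(D) = 1 - H_b(D) is known in closed form.
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for s in (1.0, 3.0, 6.0):
    print(blahut_arimoto(p_x, d, s))
</pre>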
Memoryless (independent) Gaussian source
If we assume that <math>P_X(x)</math> is Gaussian with variance <math>\sigma_x^2</math>, and if we assume that successive samples of the signal X are stochastically independent (or, if you like, the source is memoryless, or the signal is uncorrelated), we find the following analytical expression for the rate-distortion function:
- <math> R(D) = \left\{ \begin{matrix} \frac{1}{2}\log_2(\sigma_x^2/D), & \mbox{if } D \le \sigma_x^2 \\ \\ 0, & \mbox{if } D > \sigma_x^2 \end{matrix} \right. </math>
The following figure shows what this function looks like:
[Image: RDCurve.jpg, the rate-distortion function R(D) of a memoryless Gaussian source]
Rate distortion theory tells us that no compression system exists that performs outside the green dotted area. The closer a practical compression system is to the red (lower) bound, the better it performs. It should be emphasized that this rate-distortion function holds only for Gaussian memoryless sources. The performance of a practical compression system working on, say, images may well be far from the R(D) lower bound shown.
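As a minimal sketch, the formula above can be evaluated directly in Python (the function name is an illustrative choice):
<pre>
import math

def gaussian_rd(distortion, variance):
    """R(D) in bits per sample for a memoryless Gaussian source."""
    if distortion >= variance:
        # Sending nothing and reconstructing the mean already
        # achieves a distortion equal to the source variance.
        return 0.0
    return 0.5 * math.log2(variance / distortion)

# Illustrative use: unit-variance source.
for d in (0.01, 0.1, 0.5, 1.0, 2.0):
    print(d, gaussian_rd(d, 1.0))
</pre>
Note that R(D) grows without bound as D approaches zero, reflecting the fact that a continuous-amplitude source cannot be reproduced exactly at any finite rate.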
Gaussian source with memory
Performance of practical compression systems
See also
External links
- VcDemo Image and Video Compression Learning Tool (http://www-ict.its.tudelft.nl/vcdemo)