Lossless data compression
Lossless data compression is a class of data compression algorithms that allows the original data to be reconstructed exactly from the compressed data. Contrast with lossy data compression.
Lossless data compression is used in software compression tools such as the highly popular ZIP file format, used by PKZIP, WinZip and Mac OS X 10.3, and the Unix programs bzip2, gzip and compress. Other popular formats include StuffIt, RAR and 7z.
Lossless compression is used when it is important that the original and the decompressed data be exactly identical, or when it cannot be assumed that a certain amount of deviation would be harmless. Typical examples are executable programs and source code. Some image file formats, notably PNG, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. GIF uses a technically lossless compression method, but most GIF implementations are incapable of representing full color, so they quantize the image (often with dithering) to 256 or fewer colors before encoding it as GIF. Color quantization is a lossy process, but reconstructing the color image and then re-quantizing it produces no additional loss. (Some rare GIF implementations make multiple passes over an image, adding 255 new colors on each pass.)
Lossless data compression must always make some files longer
Lossless data compression algorithms cannot guarantee to compress (that is, make smaller) all input data sets. In other words, for any lossless data compression algorithm there will be some input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics, using a counting argument as follows:
- Assume that each file is represented as a string of bits of some arbitrary length.
- Suppose that there is a compression algorithm that transforms every file into a distinct shorter file. (If the output files are not all distinct, the compression cannot be reversed without losing some data.)
- Consider the set of all files of length at most N bits. This set has 1 + 2 + 4 + ... + 2^N = 2^(N+1) - 1 members, if we include the zero-length file.
- Now consider the set of all files of length at most N-1 bits. There are 1 + 2 + 4 + ... + 2^(N-1) = 2^N - 1 such files, if we include the zero-length file.
- But 2^N - 1 is smaller than 2^(N+1) - 1. We cannot map all the members of the larger set uniquely into the members of the smaller set.
- This contradiction implies that our original hypothesis (that the compression function makes all files smaller) must be untrue.
Notice that the difference in size is so marked that it makes no difference if we simply consider files of length exactly N as the input set: it is still larger (2^N members) than the desired output set.
If we make all the files a multiple of 8 bits long (as in standard computer files) there are even fewer files in the smaller set, and this argument still holds.
Thus any lossless compression algorithm that makes some files shorter must necessarily make some files longer. Good compression algorithms are those that achieve shorter output on input distributions that occur in real-world data.
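The counting argument above can be checked numerically; a small Python sketch (the value of N is arbitrary):

```python
def count_files_up_to(n_bits):
    """Number of distinct files of length at most n_bits, including the empty file."""
    return sum(2 ** k for k in range(n_bits + 1))  # 1 + 2 + 4 + ... + 2^n

N = 10
larger = count_files_up_to(N)       # files of length at most N: 2^(N+1) - 1
smaller = count_files_up_to(N - 1)  # files of length at most N-1: 2^N - 1

assert larger == 2 ** (N + 1) - 1
assert smaller == 2 ** N - 1
# The larger set cannot be mapped one-to-one into the smaller set,
# so some file must fail to shrink.
assert larger > smaller
```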
Lossless compression techniques
Lossless compression methods may be categorized according to the type of data they are designed to compress. The three main types of targets for compression algorithms are text, images, and sound. Whilst, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can handle any binary input) can be used on any type of data, many are unable to achieve significant compression on data that is not of the form they were designed to deal with. Sound data, for instance, cannot be compressed well with conventional text compression algorithms.
Most lossless compression programs use two different kinds of algorithm: one which generates a statistical model for the input data, and another which maps the input data to bit strings using this model in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data. Often, only the former algorithm is named, while the second is implied (through common use, standardization etc.) or unspecified.
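This two-part scheme can be illustrated with Huffman coding, the classic encoding stage: a frequency count serves as the statistical model, and the Huffman tree assigns shorter bit strings to more probable symbols. This is a minimal sketch, not the implementation used by any particular tool:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code in which frequent symbols get shorter bit strings."""
    freq = Counter(data)  # stage 1: the statistical model (symbol frequencies)
    if len(freq) == 1:    # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # stage 2: build the Huffman tree with a min-heap of (weight, tiebreak, node)
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)   # two least probable nodes...
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tiebreak, (left, right)))  # ...are merged
        tiebreak += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse both ways
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record the symbol's code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_code("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
# 'a' occurs most often, so it receives the shortest code.
```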
Statistical modelling algorithms for text (or text-like binary data such as executables) include:
- Burrows-Wheeler transform (block sorting preprocessing that makes compression more efficient)
- DEFLATE (combines LZ77 modelling with Huffman coding)
- LZW
Encoding algorithms to produce bit sequences include:
- Huffman coding (also used by DEFLATE)
- Arithmetic coding
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the USA and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source activists encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing image files in favor of JPEG (for still true color images) or Portable Network Graphics (PNG) (for still indexed images).
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text yet are useful for some images (particularly simple bitmaps), and still others that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that colour images usually have a preponderance of a limited range of colours out of those representable in the colour space).
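One such image-oriented technique is run-length encoding, which is useless for typical text but effective on simple bitmaps whose scanlines contain long runs of identical pixels. A minimal sketch, not tied to any particular image format:

```python
def rle_encode(pixels):
    """Collapse runs of identical pixels into (run length, value) pairs."""
    out = []
    i = 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and pixels[j] == pixels[i]:
            j += 1
        out.append((j - i, pixels[i]))  # (run length, pixel value)
        i = j
    return out

def rle_decode(runs):
    """Expand (run length, value) pairs back into the original pixels."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

row = [0] * 20 + [1] * 5 + [0] * 15     # a scanline from a simple bitmap
runs = rle_encode(row)                  # → [(20, 0), (5, 1), (15, 0)]
assert rle_decode(runs) == row          # lossless round trip
```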
As mentioned previously, lossless sound compression is a somewhat specialised area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data - essentially using models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the "error") tends to be small, then certain difference values (like 0, +1, -1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
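This predict-and-encode-the-residual idea can be sketched with the simplest possible predictor, the previous sample; the sketch is illustrative only and does not correspond to any particular codec:

```python
def to_residuals(samples):
    """Replace each sample with its difference from the predicted (previous) one."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)  # the "error" between prediction and actual data
        prev = s
    return out

def from_residuals(residuals):
    """Invert the prediction: a running sum reconstructs the samples exactly."""
    prev = 0
    out = []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

wave = [0, 3, 6, 8, 9, 9, 8, 6, 3, 0]  # a smooth, wave-like signal
res = to_residuals(wave)               # small values such as 3, 2, 1, 0, -1, ...
assert from_residuals(res) == wave     # lossless round trip
```

Because the residuals cluster near zero, a subsequent entropy coder can assign them very short bit strings.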
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of an image). This is called delta compression (from the Greek letter Δ which is commonly used in mathematics to denote a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta compression from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
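The idea can be illustrated with a toy byte-wise delta for two equal-length versions of a file; real delta tools (such as xdelta or rsync) also handle insertions and deletions:

```python
def delta(old, new):
    """Store only the per-byte differences between two equal-length versions."""
    return bytes((n - o) % 256 for o, n in zip(old, new))

def apply_delta(old, d):
    """Reconstruct the new version from the old version plus the delta."""
    return bytes((o + x) % 256 for o, x in zip(old, d))

v1 = b"hello world"
v2 = b"hello World"
d = delta(v1, v2)             # mostly zero bytes: the versions agree almost everywhere
assert apply_delta(v1, d) == v2
```

Runs of zero bytes in the delta compress extremely well, which is why storing deltas beats storing each version independently.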
Lossless compression methods
Audio compression
- Comparison of Lossless Audio Compressors (http://wiki.hydrogenaudio.org/index.php?title=Lossless_comparison) at Hydrogenaudio Wiki
- Apple Lossless - ALAC
- DST - Direct Stream Transfer
- FLAC - Free Lossless Audio Codec
- Monkey's Audio - APE
- RealPlayer - RealAudio Lossless
- SHN - Shorten
- WavPack - WavPack lossless
- WMA Lossless - Windows Media Lossless
- TTA - The True Audio codec
Graphic compression
See also
- Audio compression
- David A. Huffman
- Information entropy
- Lossless Transform Audio Compression (LTAC)
- Lossy data compression