Arithmetic coding

Arithmetic coding is a method for lossless data compression. It is a form of entropy encoding, but where other entropy encoding techniques separate the input message into its component symbols and replace each symbol with a code word, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.


How arithmetic coding works

Arithmetic coders produce near-optimal output for a given set of symbols and probabilities. Compression algorithms that use arithmetic coding start by determining a model of the data -- basically a prediction of what patterns will be found in the symbols of the message. The more accurate this prediction is, the closer to optimality the output will be.

Example: a simple, static model for describing the output of a particular monitoring instrument over time might be:

  • 60% chance of symbol NEUTRAL
  • 20% chance of symbol POSITIVE
  • 10% chance of symbol NEGATIVE
  • 10% chance of symbol END-OF-DATA. (The presence of this symbol means that the stream will be 'internally terminated', as is fairly common in data compression; the first and only time this symbol appears in the data stream, the decoder will know that the entire stream has been decoded.)

Models can, of course, handle alphabets other than the simple four-symbol set chosen for this example. More sophisticated models are also possible: higher-order modelling changes its estimation of the current probability of a symbol based on the symbols that precede it (the context), so that in a model for English text, for example, the percentage chance of "u" would be much higher when it followed a "Q" or a "q". Models can even be adaptive, so that they continually change their prediction of the data based on what the stream actually contains. Whatever model the encoder uses, however, the decoder must have as well.

Each step of the encoding process, except for the very last, is the same; the encoder has basically just three pieces of data to consider:

  • The next symbol that needs to be encoded
  • The current interval (at the very start of the encoding process, the interval is set to [0,1), but that will change)
  • The probabilities the model assigns to each of the various symbols that are possible at this stage (as mentioned earlier, higher-order or adaptive models mean that these probabilities are not necessarily the same in each step.)

The encoder divides the current interval into sub-intervals, each representing a fraction of the current interval proportional to the probability of that symbol in the current context. Whichever interval corresponds to the actual symbol that is next to be encoded becomes the interval used in the next step.

Example: for the four-symbol model above:

  • the interval for NEUTRAL would be [0, 0.6)
  • the interval for POSITIVE would be [0.6, 0.8)
  • the interval for NEGATIVE would be [0.8, 0.9)
  • the interval for END-OF-DATA would be [0.9, 1).
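
To make the subdivision step concrete, the following is a minimal Python sketch of the encoding loop for the static four-symbol model above. The names (MODEL, encode) and the use of floating point are assumptions for this illustration; a practical coder works with fixed-precision integers, as described later.

    # Minimal illustrative encoder for the static four-symbol model above.
    MODEL = [
        ("NEUTRAL",     0.6),
        ("POSITIVE",    0.2),
        ("NEGATIVE",    0.1),
        ("END-OF-DATA", 0.1),
    ]

    def encode(symbols):
        """Return the final interval [low, high) after encoding all symbols."""
        low, high = 0.0, 1.0
        for symbol in symbols:
            width = high - low
            cumulative = 0.0
            for name, probability in MODEL:
                if name == symbol:
                    # Narrow the current interval to this symbol's sub-interval.
                    high = low + width * (cumulative + probability)
                    low = low + width * cumulative
                    break
                cumulative += probability
        return low, high

    print(encode(["NEUTRAL", "NEGATIVE", "END-OF-DATA"]))
    # -> roughly (0.534, 0.54), the final interval of the worked example below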

When all symbols have been encoded, the resulting interval identifies, unambiguously, the sequence of symbols that produced it. Anyone who has the final interval and the model used can reconstruct the symbol sequence that must have entered the encoder to result in that final interval.

It is not necessary to transmit the final interval, however; it is only necessary to transmit one fraction that lies within that interval. In particular, it is only necessary to transmit enough digits (in whatever base) of the fraction so that all fractions that begin with those digits fall into the final interval.
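
That condition can be checked directly. The hypothetical helper below tests whether a given string of decimal digits suffices, i.e. whether every fraction beginning with those digits lies inside the final interval; the interval [0.534, 0.540) is the one produced by the encoder sketch above, and the function name is invented for this illustration.

    from fractions import Fraction

    def prefix_suffices(digits, low, high):
        """True if every fraction whose decimal expansion begins with `digits`
        lies inside the half-open interval [low, high)."""
        prefix = Fraction(int(digits), 10 ** len(digits))
        step = Fraction(1, 10 ** len(digits))
        # Every continuation of the prefix lies in [prefix, prefix + step).
        return low <= prefix and prefix + step <= high

    low, high = Fraction(534, 1000), Fraction(540, 1000)   # final interval of the example
    print(prefix_suffices("53", low, high))    # False: some continuations of 0.53 fall outside
    print(prefix_suffices("538", low, high))   # True
    print(prefix_suffices("534", low, high))   # True: .534 would also have sufficed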

Example: we are now trying to decode a message encoded with the four-symbol model above. The message is encoded in the fraction 0.538 (for clarity, we are using decimal instead of binary; we are also assuming that whoever gave us the encoded message gave us only as many digits as needed to decode it. We will discuss both issues later.)

We start, as the encoder did, with the interval [0,1), and using the same model, we divide it into the same four sub-intervals that the encoder must have. Our fraction 0.538 falls into the sub-interval for NEUTRAL, [0, 0.6); this indicates to us that the first symbol the encoder read must have been NEUTRAL, so we can write that down as the first symbol of our message.

We then divide the interval [0, 0.6) into sub-intervals:

  • the interval for NEUTRAL would be [0, 0.36) -- 60% of [0, 0.6)
  • the interval for POSITIVE would be [0.36, 0.48) -- 20% of [0, 0.6)
  • the interval for NEGATIVE would be [0.48, 0.54) -- 10% of [0, 0.6)
  • the interval for END-OF-DATA would be [0.54, 0.6). -- 10% of [0, 0.6)

Our fraction of .538 is within the interval [0.48, 0.54); therefore the second symbol of the message must have been NEGATIVE.

Once more we divide our current interval into sub-intervals:

  • the interval for NEUTRAL would be [0.48, 0.516)
  • the interval for POSITIVE would be [0.516, 0.528)
  • the interval for NEGATIVE would be [0.528, 0.534)
  • the interval for END-OF-DATA would be [0.534, 0.540).

Our fraction of .538 falls within the interval of the END-OF-DATA symbol; therefore, this must be our next symbol. Since it is also the internal termination symbol, our decoding is complete. (If the stream were not internally terminated, we would need to know where the stream stops from some other source -- otherwise, we would continue the decoding process forever, mistakenly reading more symbols from the fraction than were in fact encoded into it.)
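
The decoding walk-through above can also be expressed as a short Python sketch, mirroring the encoder sketch given earlier. Again, the names and the floating-point arithmetic are simplifying assumptions for illustration only.

    # Minimal illustrative decoder matching the encoder sketch above.
    MODEL = [("NEUTRAL", 0.6), ("POSITIVE", 0.2), ("NEGATIVE", 0.1), ("END-OF-DATA", 0.1)]

    def decode(fraction):
        """Decode symbols from a fraction in [0, 1) until END-OF-DATA appears."""
        low, high = 0.0, 1.0
        message = []
        while True:
            width = high - low
            cumulative = 0.0
            for name, probability in MODEL:
                sub_low = low + width * cumulative
                sub_high = low + width * (cumulative + probability)
                if sub_low <= fraction < sub_high:
                    # The fraction identifies this symbol's sub-interval.
                    message.append(name)
                    low, high = sub_low, sub_high
                    break
                cumulative += probability
            if message[-1] == "END-OF-DATA":
                return message

    print(decode(0.538))   # -> ['NEUTRAL', 'NEGATIVE', 'END-OF-DATA']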

The fact that the same message could have been encoded by the equally short fractions .534, .535, .536, .537 or .539 suggests that our use of decimal instead of binary introduced some inefficiency. This is correct; the information content of a three-digit decimal is approximately 9.966 bits; we could have encoded the same message in the binary fraction .10001010 (equivalent to .5390625 decimal) at a cost of only 8 bits. This is only slightly larger than the information content, or entropy, of our message: with a probability of 0.6%, the message has an entropy of approximately 7.381 bits. (Note that the final zero must be specified in the binary fraction, or else the message would be ambiguous.)
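
The figures quoted above can be checked directly; this small computation is purely illustrative.

    import math

    p_message = 0.6 * 0.1 * 0.1        # probability of NEUTRAL, NEGATIVE, END-OF-DATA
    print(-math.log2(p_message))        # ~7.381 bits: entropy of the message
    print(3 * math.log2(10))            # ~9.966 bits: information in three decimal digits
    print(int("10001010", 2) / 2**8)    # 0.5390625: value of the binary fraction .10001010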

Precision and renormalization

The above explanations of arithmetic coding contain some simplification. In particular, they are written as if the encoder first calculated the fractions representing the endpoints of the interval in full, using infinite precision, and only converted the fraction to its final form at the end of encoding. Rather than try to simulate infinite precision, most arithmetic coders instead operate at a fixed limit of precision that they know the decoder will be able to match, and round the calculated fractions to their nearest equivalents at that precision. The following example shows how this would work if the model called for the interval [0,1) to be divided into thirds, approximated with eight-bit precision. Since the precision is now fixed, the binary ranges that can be used are also known.

Symbol   Probability        Interval reduced to          Interval reduced to          Range in binary
         (as a fraction)    eight-bit precision          eight-bit precision
                            (as fractions)               (in binary)
A        1/3                [0, 85/256)                  [0.00000000, 0.01010101)     00000000 - 01010100
B        1/3                [85/256, 171/256)            [0.01010101, 0.10101011)     01010101 - 10101010
C        1/3                [171/256, 1)                 [0.10101011, 1.00000000)     10101011 - 11111111
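
One way the eight-bit boundaries in this table can be obtained is by scaling the exact cut points 1/3 and 2/3 to 256ths and rounding; the exact rounding convention used here is an assumption for this illustration, since a real coder simply fixes one rule that the encoder and decoder share.

    # Scale the exact cut points of the thirds to 1/256ths and round.
    print(round(256 * 1 / 3))   # 85  -> boundary 85/256  = 0.01010101 in binary
    print(round(256 * 2 / 3))   # 171 -> boundary 171/256 = 0.10101011 in binary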

A process called renormalization keeps the finite precision from becoming a limit on the total number of symbols that can be encoded. Whenever the range is reduced to the point where all values in it share certain leading digits, those digits are sent to the output. Since the computer is then using fewer digits of precision than it can handle, the remaining digits are shifted left, and new digits are added at the right to expand the range as widely as possible. Note that this happens in two of the three cases from our previous example.

Symbol   Probability   Range                  Digits that can be     Range after
                                              sent to output         renormalization
A        1/3           00000000 - 01010100    0                      00000000 - 10101001
B        1/3           01010101 - 10101010    None                   01010101 - 10101010
C        1/3           10101011 - 11111111    1                      01010110 - 11111111
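
The table above can be reproduced with a short sketch of binary renormalization on eight-bit registers. The structure and names are illustrative assumptions, not a complete coder; in particular, the underflow case where the range straddles the midpoint without the leading bits ever agreeing is not handled here.

    BITS = 8
    TOP = 1 << (BITS - 1)        # mask for the most significant bit
    MASK = (1 << BITS) - 1       # keep registers to eight bits

    def renormalize(low, high, output):
        # While low and high agree in their most significant bit, that bit can
        # never change again, so emit it and shift a new bit in at the right
        # (0 into low, 1 into high) to widen the range as much as possible.
        while (low & TOP) == (high & TOP):
            output.append((low & TOP) >> (BITS - 1))
            low = (low << 1) & MASK
            high = ((high << 1) & MASK) | 1
        return low, high, output

    print(renormalize(0b00000000, 0b01010100, []))   # symbol A: emits 0
    print(renormalize(0b01010101, 0b10101010, []))   # symbol B: emits nothing
    print(renormalize(0b10101011, 0b11111111, []))   # symbol C: emits 1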

Connections between arithmetic coding and other compression methods

Huffman coding

There is great similarity between arithmetic coding and Huffman coding -- in fact, it has been shown that Huffman coding is just a special case of arithmetic coding -- but because arithmetic coding translates the entire message into one number represented in base b, rather than translating each symbol of the message into a series of digits in base b, it will often approach optimal entropy encoding much more closely than Huffman coding can.

Range encoding

There is profound similarity between arithmetic coding and range encoding, so much so that their performances can usually be expected to be almost identical, with range encoding only being a few bits behind if there is indeed any difference. Range encoding, unlike arithmetic coding, is generally believed not to be covered by any company's patents.

The idea behind range encoding is that, instead of starting with the interval [0,1) and dividing it into sub-intervals proportional to the probability of each symbol, the encoder starts with a large range of non-negative integers, such as 000,000,000,000 to 999,999,999,999, and divides it into sub-ranges proportional to the probability of each symbol. When the sub-ranges get narrowed down sufficiently that the leading digits of the final result are known, those digits may be shifted "left" out of the calculation, and replaced by digits shifted in on the "right" -- each time this happens, it is roughly equivalent to a retroactive multiplication of the size of the initial range.
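
A decimal analogue of this mechanism might look roughly as follows; the six-digit register, the names, and the omission of the carry and underflow cases are all simplifying assumptions for this sketch, not a complete range coder.

    SIZE = 1_000_000           # the working range is [low, low + size)
    LEAD = SIZE // 10          # divisor exposing the leading digit

    def narrow(low, size, cum, prob):
        """Take the sub-range for a symbol whose cumulative probability is `cum`
        and whose probability is `prob` under the model."""
        return low + int(size * cum), int(size * prob)

    def shift_out(low, size, out):
        """While every value in the range shares its leading digit, emit that
        digit and rescale the range by a factor of ten."""
        while low // LEAD == (low + size - 1) // LEAD:
            out.append(low // LEAD)
            low, size = (low % LEAD) * 10, size * 10
        return low, size

    out = []
    # Encode NEGATIVE from the four-symbol model used earlier
    # (cumulative probability 0.8, probability 0.1).
    low, size = narrow(0, SIZE, 0.8, 0.1)
    low, size = shift_out(low, size, out)   # its leading digit, 8, is now settled
    print(out, low, size)                   # -> [8] 0 1000000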

US patents on arithmetic coding

A variety of specific techniques for arithmetic coding have been protected by US patents. Some of these patents may be essential for implementing the algorithms for arithmetic coding that are specified in some formal international standards. When this is the case, such patents are generally available for licensing under what are called reasonable and non-discriminatory (RAND) licensing terms (at least as a matter of standards-committee policy). In some well-known instances (including some involving IBM patents) such licenses are available for free, and in other instances, licensing fees are required. The availability of licenses under RAND terms does not necessarily satisfy everyone who might want to use the technology, as what may be "reasonable" fees for a company preparing a proprietary software product may seem much less reasonable for a free software or open source project.

One company well known for innovative work and patents in the area of arithmetic coding is IBM. Some commenters feel that the notion that no kind of practical and effective arithmetic coding can be performed without infringing on valid patents held by IBM or others is just a persistent urban legend in the data compression community (especially considering that effective designs for arithmetic coding have now been in use long enough for many of the original patents to have expired). However, since patent law provides no "bright line" test that proactively allows one to determine whether a court would find a particular use to infringe a patent, and since even investigating a patent more closely to determine what it actually covers could increase the damages awarded in an unfavorable judgement, the patenting of these techniques has nevertheless had a chilling effect on their use. At least one significant compression software program, bzip2, deliberately discontinued the use of arithmetic coding in favor of Huffman coding due to the patent situation.

Some US patents relating to arithmetic coding are listed below.

  • Patent 4,122,440 — (IBM) Filed March 4, 1977, Granted Oct 24, 1978 (Now expired)
  • Patent 4,286,256 — (IBM) Granted Aug 25, 1981 (presumably now expired)
  • Patent 4,467,317 — (IBM) Granted Aug 21, 1984 (presumably now expired)
  • Patent 4,652,856 — (IBM) Granted Feb 4, 1986 (presumably now expired)
  • Patent 4,891,643 — (IBM) Filed 1986/09/15, granted 1990/01/02
  • Patent 4,905,297 — (IBM) Granted Feb 27, 1990
  • Patent 4,933,883 — (IBM) Granted Jun 12, 1990
  • Patent 4,935,882 — (IBM) Granted Jun 19, 1990
  • Patent 4,989,000 — (???) Filed 1989/06/19, granted 1991/01/29
  • Patent 5,099,440
  • Patent 5,272,478 — (Ricoh)

Note: This list is not exhaustive. See the following link for a list of more patents. [1] (http://www.faqs.org/faqs/compression-faq/part1/)

Patents on arithmetic coding may exist in other jurisdictions; see software patents for a discussion of the patentability of software around the world.


See also

An earlier (open content) version of the above article was posted on PlanetMath (http://planetmath.org/encyclopedia/ArithmeticEncoding.html).
