# Lookup table

In computer science, a lookup table is a data structure, usually an array or associative array, used to replace a runtime computation with a simpler lookup operation. The speed gain can be significant, since retrieving a value from memory is often faster than undergoing an expensive computation.

A classic example is a trigonometry table. Calculating the sine of a value every time such a sine is needed can be prohibitively slow in some applications. To avoid this, the application can take a few seconds when it first starts to precalculate the sine of a number of values, for example for each whole number of degrees. Later, when the program wants the sine of a value, it uses the lookup table to retrieve the sine of a nearby value from a memory address instead of calculating it using a mathematical formula.

There are intermediate solutions that use tables in combination with a small amount of computation, often using interpolation. This allows better accuracy for values falling between two precomputed values. This requires slightly more time but can greatly enhance accuracy in applications that require it. Depending on the values being precomputed, this technique can also be used to shrink the lookup table size while retaining about the same accuracy.

In image processing, lookup tables are often called LUTs, and they link index numbers to output values. One common LUT, called the colormap, is used to determine the colors and intensity values with which a particular image will be displayed.
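As an illustration (a minimal sketch with a made-up four-entry palette; real colormaps typically have 256 entries), applying a colormap amounts to replacing each pixel's index with the table entry it selects:

```c
#include <assert.h>

/* A tiny hypothetical colormap: each pixel index maps to a packed
   0xRRGGBB color.  The palette values here are invented for the sketch. */
static const unsigned int colormap[4] = {
    0x000000, /* 0: black */
    0xFF0000, /* 1: red   */
    0x00FF00, /* 2: green */
    0xFFFFFF  /* 3: white */
};

/* Replace each index in an image with the display color it selects. */
void apply_colormap(const unsigned char *indices, unsigned int *pixels, int n) {
    for (int i = 0; i < n; i++)
        pixels[i] = colormap[indices[i]];
}
```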

It's important to note that, while often effective, lookup tables can incur a severe penalty if the computation they replace is relatively simple: not only may retrieving the result from memory take more time than recomputing it, but the table also increases memory requirements and can pollute the cache. This is increasingly becoming an issue as processor speeds outpace memory speeds. A similar issue appears in rematerialization, a compiler optimization.

## Examples

### Computing sine

Most computers can only perform basic arithmetic operations and cannot directly calculate the sine of a given value. Instead, they use a formula such as the following truncated Taylor series to compute the value of sine to a high degree of precision:

$\sin(x) \approx x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040}$ (for $x$ close to 0)

However, this can be expensive to compute, especially on slow processors, and there are many applications, particularly in traditional computer graphics, that need to compute many thousands of sine values every second. A common solution is to precompute the sine of many evenly distributed values and then, to find the sine of x, return the precomputed sine of the stored value closest to x. This will be close to the correct value because sine is a continuous function whose value changes only slowly between nearby samples. For example:

    real array sine_table[-1000..1000]
    for i from -1000 to 1000
        sine_table[i] := sine(i * pi / 1000)

    function lookup_sine(x)
        return sine_table[round(x * 1000 / pi)]

*(Image: linear interpolation on a portion of the sine function)*

Unfortunately, the table requires quite a bit of space: with IEEE double-precision floating-point numbers, its 2,001 entries occupy over 16,000 bytes. We could use fewer samples, but then our precision would significantly worsen. One good solution is linear interpolation, which draws a line between the two points in the table on either side of the value and locates the answer on that line. This is still quick to compute, and much more accurate for smooth functions such as the sine function. Here is our example using linear interpolation:

    function lookup_sine(x)
        t := x * 1000 / pi
        x1 := floor(t)
        y1 := sine_table[x1]
        y2 := sine_table[x1 + 1]
        return y1 + (y2 - y1) * (t - x1)


When using interpolation, it's often beneficial to use nonuniform sampling, which means that where the function is close to straight, we use few sample points, while where it changes value quickly we use more sample points to keep the approximation close to the real curve. For more information, see interpolation.

### Counting 1 bits

Another, more discrete problem that is expensive to solve on many computers is counting the number of bits which are set to 1 in a number, sometimes called the population count (or popcount). For example, the number 37 is 100101 in binary, so it contains three set bits. A simple piece of C code designed to count the 1 bits in a 32-bit integer might look like this:

    int count_ones(unsigned int x) {
        int i, result = 0;
        for (i = 0; i < 32; i++) {
            if (x & 1) result++;
            x = x >> 1;
        }
        return result;
    }


Unfortunately, this simple algorithm can take hundreds of cycles on a modern architecture, because it makes many branches and branching is slow. This can be ameliorated using loop unrolling and other clever tricks, but there is a simple and fast solution using table lookup: construct a table bits_set with 256 entries giving the number of 1 bits in each possible byte value. We then use this table to find the number of ones in each byte of the integer and add up the results. With no branches, four memory accesses, and almost no arithmetic, this can be dramatically faster than the algorithm above:

    int count_ones(unsigned int x) {
        return bits_set[x & 255] + bits_set[(x >> 8) & 255]
             + bits_set[(x >> 16) & 255] + bits_set[(x >> 24) & 255];
    }

