Hash table

In computer science, a hash table is a data structure that associates keys with values. The primary operation it supports efficiently is lookup: given a key (an identifier for the information to be found, such as a person's name), find the corresponding value. It works by transforming the key with a hash function into a hash, a number that the hash table uses to locate the desired value.

Hash tables are often used to implement associative arrays, sets and caches. Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table. In the rare worst case, however, lookup time can be as bad as O(n). Compared with other associative array data structures, hash tables are most useful when a large number of records is to be stored.

Hash tables store data in pseudo-random locations, so accessing the data in sorted order is, at best, a very time-consuming operation. Other data structures, such as self-balancing binary search trees, generally operate slightly more slowly and are rather more complex to implement than hash tables, but maintain the data in sorted order at all times (see the comparison of hash tables and self-balancing binary search trees).

Overview

The basic operations that hash tables generally support are:

  • insert(key, data)
  • lookup(key), which returns data

Most, but not all, hash tables support delete(key). Other operations, such as iterating over the table, growing the table, and emptying the table, may also be provided. Some hash tables allow multiple pieces of data to be stored under the same key, and in some cases the same data can be stored under multiple keys.
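
These operations map directly onto the hash tables built into many languages; here is a minimal illustration using a Python dict (the phone-book data is hypothetical):

 # A Python dict is a hash table; the basic operations map onto it directly.
 phone_book = {}                      # create an empty table
 phone_book["Alice"] = "555-0100"     # insert(key, data)
 phone_book["Alice"] = "555-0199"     # same key again: the value is replaced
 print(phone_book["Alice"])           # lookup(key) -> "555-0199"
 del phone_book["Alice"]              # delete(key)
 print("Alice" in phone_book)         # membership test -> False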

Keys may be almost anything: a number, a string, a record, or, with some hash tables, even a reference to the data being stored.

Hash tables use an array (itself called the hash table) to hold the records, which in turn contain the keys and data.

Because the number of valid keys is typically much larger than the range of valid indexes into the array, a way is needed to convert each key into a valid index. This is achieved using a hash function, a function that takes a key and produces an index into the array. The indexed entry, in turn, should contain the record associated with that key.

However, when there are more potential keys than array indexes, it can be shown (by the pigeonhole principle) that two or more potential keys must have the same hash; this is called a collision. It is the hash function designer's job to minimise the number of collisions on any array slot. But even so, statistically and in practice, even with an excellent hash function and an array allocated with space for a million entries, there is a 95% chance of at least one collision occurring before the table contains 2,500 items (see birthday paradox, birthday attack).
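
The 95% figure is just birthday-paradox arithmetic; a quick sketch that checks it, assuming a uniform hash:

 # Probability of at least one collision when inserting n random keys
 # into a table of m slots (the birthday paradox).
 def collision_probability(n, m):
     p_none = 1.0
     for i in range(n):          # i distinct keys already placed
         p_none *= (m - i) / m   # the next key must miss all i of them
     return 1.0 - p_none

 print(collision_probability(2500, 1_000_000))  # ~0.956, i.e. about 95%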

So, since only one item can be stored at any one place in an array, a collision resolution strategy must be used: for example, the colliding item may be inserted into the next free space, or each hash table slot may store a linked list.

Even though some collisions occur, with a good hash function and a table no more than around 80% full, collisions are relatively rare and performance is very good: only a few comparisons are needed on average. However, if the table becomes too full, performance becomes poor and the hash table's array must be enlarged. Enlarging the table means that effectively every item in the hash table has to be added all over again. This is typically an expensive, albeit infrequent, operation, so the amortized time for each operation remains low. Some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the table and must be allocated initially with enough space to remain efficient.

Common uses of hash tables

Hash tables are found in a wide variety of programs. Most programming languages provide them in standard libraries. Most interpreted or scripting languages have special syntactic support (examples being Python, Perl, PHP, Ruby, Io, Smalltalk, Lua and ICI). In these languages, hash tables tend to be used extensively as data structures, sometimes replacing records and arrays.

Hash tables are commonly used for symbol tables, caches, and sets.

In computer chess, a hash table is generally used to implement the transposition table.

Choosing a good hash function

A good hash function is essential for good hash table performance. Hash collisions are generally resolved by some form of linear search, so if a hash function tends to produce similar values for some common set of keys, slow searches will result.

In an ideal hash function, changing any single bit in the key (including extending or shortening the key) would change half the bits of the hash, and this change would be independent of the changes caused by any other bits of the key. Because a good hash function can be hard to design, or computationally expensive to execute, much research has been devoted to collision resolution strategies that mitigate poor hashing performance. However, none of them is as effective as using a good hash function in the first place.
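
As a rough illustration of this avalanche property, the sketch below uses FNV-1a (a simple, widely known hash function, chosen here only as an example; its avalanche behaviour is decent but not ideal) and counts how many hash bits change when a single key bit is flipped:

 # FNV-1a, a simple 32-bit hash function (illustrative choice).
 def fnv1a(data):
     h = 2166136261
     for byte in data:
         h ^= byte
         h = (h * 16777619) & 0xFFFFFFFF
     return h

 key = bytearray(b"hash table")
 h1 = fnv1a(key)
 key[0] ^= 0x01                  # flip a single bit of the key
 h2 = fnv1a(key)
 # For an ideal hash, about 16 of the 32 bits would differ on average.
 print(bin(h1 ^ h2).count("1"), "of 32 bits changed")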

It is desirable to use the same hash function for arrays of any conceivable size. To do this, the index into the hash table's array is generally calculated in two steps:

  1. A generic hash value is calculated that fills a natural machine integer.
  2. This value is reduced to a valid array index by taking it modulo the array's size.

Hash table array sizes are sometimes chosen to be prime numbers. This is done to avoid any tendency for the large integer hash to have common divisors with the hash table size, which would otherwise induce collisions after the modulus operation. However, a prime table size is no substitute for a good hash function.

A common alternative to prime sizes is to use a size that is a power of two, with simple bit masking to achieve the modulus operation. Such bit masking can be significantly cheaper than a division operation. In either case it is often a good idea to construct the generic hash value using numbers that are coprime with (share no common divisors with) the table length.
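
A short sketch of the two-step index calculation, including the masking trick (the hash value and table sizes are arbitrary examples):

 h = 0x9E3779B9           # stand-in for a generic 32-bit hash value

 prime_size = 1009        # prime-sized table: reduce with the modulus
 index = h % prime_size

 pow2_size = 1024         # power-of-two table: masking equals the modulus
 assert h & (pow2_size - 1) == h % pow2_size
 index = h & (pow2_size - 1)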

One surprisingly common problem that can occur with hash functions is clustering. Clustering occurs when the structure of the hash function causes commonly used keys to tend to fall closely spaced or even consecutively within the hash table. This can cause significant performance degradation as the table fills when using certain collision resolution strategies, such as linear probing.

When debugging the collision handling in a hash table, it is sometimes useful to use a hash function that always returns a constant value, such as 1, which causes collisions on almost every insert.

Collision resolution

If two keys hash to the same value, they cannot be stored in the same location. We must find a place to store a new value if its ideal location is already occupied. There are a number of ways to do this, but the most popular are chaining and open addressing.

Chaining

In the simplest chained hash table technique, each slot in the array references a linked list of inserted values that collide to the same slot. Insertion requires finding the correct slot and appending to either end of the list in that slot; deletion requires searching the list and removing the element.
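
A minimal sketch of a chained hash table in Python (Python lists stand in for the linked lists; the class and method names are illustrative):

 class ChainedHashTable:
     def __init__(self, num_slots=16):
         self.slots = [[] for _ in range(num_slots)]   # one chain per slot

     def _chain(self, key):
         return self.slots[hash(key) % len(self.slots)]

     def set(self, key, value):
         chain = self._chain(key)
         for i, (k, _) in enumerate(chain):
             if k == key:                  # key already present: overwrite
                 chain[i] = (key, value)
                 return
         chain.append((key, value))        # otherwise append to the chain

     def lookup(self, key):
         for k, v in self._chain(key):
             if k == key:
                 return v
         raise KeyError(key)

     def remove(self, key):
         chain = self._chain(key)
         for i, (k, _) in enumerate(chain):
             if k == key:
                 del chain[i]
                 return
         raise KeyError(key)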

Chained hash tables have advantages over open-addressed hash tables in that the removal operation is simple and resizing the table can be postponed for much longer, because performance degrades more gracefully even when every slot is used. Indeed, many chained hash tables may never require resizing at all, since performance degradation is linear as the table fills. For example, a chained hash table containing twice its recommended capacity of data would only be about twice as slow on average as the same table at its recommended capacity.

Disadvantages of chained hash tables are due to the common use of linked lists. Particularly when storing small elements, the percentage overhead of the linked list can be large. An additional disadvantage is that traversing a linked list has poor cache performance.

Alternative data structures can be used for chains instead of linked lists. By using a red-black tree, for example, the theoretical worst-case time of a hash table can be brought down to O(log n) rather than O(n). However, since each list is intended to be short, this approach is usually inefficient unless the hash table is designed to run at full capacity or there are unusually high collision rates, as might occur in input designed to cause collisions. Dynamic arrays can also be used to decrease space overhead and improve cache performance when elements are small.

Chained hash tables have a number of benefits over the open addressing discussed in the next section:

  • They are simple to implement effectively and only require basic data structures.
  • From the point of view of writing suitable hash functions, chained hash tables are insensitive to clustering, only requiring minimization of collisions. On the other hand, open addressing depends upon better hash functions to avoid clustering. This is particularly important if novice programmers can add their own hash functions, but even experienced programmers can be caught out by unexpected clustering effects.
  • They degrade in performance more gracefully. Although chains grow longer as the table fills, a chained hash table cannot "fill up" and does not exhibit the sudden increases in lookup times that occur in a near-full table with open addressing.

Open addressing

Open addressing hash tables store all the records within the array. A hash collision is resolved by searching through alternate locations in the array (the probe sequence) until either the target record is found, or an unused array slot is found, which indicates that there is no such key in the table. Well-known probe sequences include:

  • linear probing, in which the interval between probes is fixed, often at 1;
  • quadratic probing, in which the interval between probes increases linearly (hence, the indices are described by a quadratic function); and
  • double hashing, in which the interval between probes is fixed for each record but is computed by another hash function.
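
The three probe sequences can be written as small index generators; a hedged sketch (the step constant and example values are arbitrary):

 # Probe-sequence generators for a table of num_slots entries.
 def linear_probes(h, num_slots, step=1):
     for i in range(num_slots):
         yield (h + i * step) % num_slots           # fixed interval

 def quadratic_probes(h, num_slots):
     for i in range(num_slots):
         yield (h + i * (i + 1) // 2) % num_slots   # interval grows linearly

 def double_hash_probes(h, h2, num_slots):
     step = 1 + h2 % (num_slots - 1)    # per-key interval, never zero
     for i in range(num_slots):
         yield (h + i * step) % num_slots

 print(list(linear_probes(7, 8)))           # 7, 0, 1, 2, 3, 4, 5, 6
 print(list(quadratic_probes(7, 8)))        # 7, 0, 2, 5, 1, 6, 4, 3
 # With a prime table size the double-hashing step is coprime with the
 # size, so every slot is eventually probed.
 print(list(double_hash_probes(5, 3, 7)))   # 5, 2, 6, 3, 0, 4, 1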

The main tradeoff between these methods is that linear probing has the best cache performance but is most sensitive to clustering, while double hashing has poor cache performance but exhibits virtually no clustering; quadratic probing falls in between on both counts. Double hashing can also require more computation than other forms of probing.

A critical influence on the performance of an open addressing hash table is the load factor, that is, the proportion of the slots in the array that are used. As the load factor increases towards 100%, the number of probes that may be required to find or insert a given key rises dramatically. Once the table becomes full, probing algorithms may even fail to terminate. Even with good hash functions, load factors are normally limited to 80%. A poor hash function can exhibit poor performance even at very low load factors by generating significant clustering. What causes hash functions to cluster is not well understood, and it is easy to unintentionally write a hash function which causes severe clustering.
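
Under the common uniform-hashing approximation, an unsuccessful search is expected to probe about 1/(1 - α) slots at load factor α, which shows why performance collapses near 100%:

 # Expected probes for an unsuccessful search, assuming uniform hashing:
 # roughly 1 / (1 - load_factor).
 for load in (0.25, 0.50, 0.80, 0.90, 0.99):
     print(f"load {load:.0%}: ~{1 / (1 - load):.1f} probes")
 # load 80% needs ~5 probes; load 99% needs ~100.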

The benefits of open addressing compared to chaining lie in its efficiency:

  • They are more space-efficient than chaining because they don't need to store any pointers or allocate any additional space outside the hash table. They can even be implemented in the absence of a memory allocator.
  • Particularly with linear probing, they have better locality of reference than chaining, allowing them to exploit the data cache to execute more quickly.
  • They can be easier to serialize, because they don't use pointers.

Perfect hashing

If all of the keys that will be used are known ahead of time, and there are no more keys than the hash table can hold, perfect hashing can be used to create a perfect hash table, in which there will be no collisions. If minimal perfect hashing is used, every location in the hash table can be used as well.

Probabilistic hashing

Perhaps the simplest reaction to a collision is to replace the value that is already in the slot with the new value or, slightly less commonly, to drop the item that is to be inserted. In later searches, this may result in a search not finding an item that has been inserted. This technique is particularly useful for implementing caches.
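
A minimal sketch of such a lossy, overwrite-on-collision cache (the class name and size are arbitrary):

 # A fixed-size, lossy cache: a collision simply evicts the old entry.
 class OverwriteCache:
     def __init__(self, num_slots=1024):
         self.slots = [None] * num_slots

     def set(self, key, value):
         self.slots[hash(key) % len(self.slots)] = (key, value)  # may evict

     def lookup(self, key):
         entry = self.slots[hash(key) % len(self.slots)]
         if entry is not None and entry[0] == key:
             return entry[1]
         return None        # never inserted, or evicted by a colliding key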

An even more space-efficient solution in the same spirit is to keep only one bit for each bucket, indicating whether a value has been inserted into that bucket. False negatives cannot occur, but false positives can: if a search finds a 1 bit, it reports the value as found even if the bit was set by a different value that happened to hash into the same bucket. In reality, such a hash table is merely a specific type of Bloom filter.
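
A minimal sketch of this one-bit-per-bucket scheme, which is effectively a Bloom filter with a single hash function (names and sizes are illustrative):

 class BitFilter:
     def __init__(self, num_bits=1 << 20):
         self.num_bits = num_bits
         self.bits = bytearray(num_bits // 8)   # one bit per bucket

     def insert(self, key):
         i = hash(key) % self.num_bits
         self.bits[i // 8] |= 1 << (i % 8)

     def might_contain(self, key):
         i = hash(key) % self.num_bits
         return bool(self.bits[i // 8] & (1 << (i % 8)))  # false positives possible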

Example pseudocode

The following pseudocode is an implementation of an open addressing hash table with linear probing and single-slot stepping, a common approach that is effective if the hash function is good. Each of the lookup, set and remove functions uses a common internal function findSlot to locate the array slot that either does or should contain a given key.

 record pair { key, value }
 var pair array slot[0..numSlots-1]
 
 function findSlot(key) {
     i := hash(key) modulus numSlots
     loop {
         if slot[i] is not occupied or slot[i].key = key
             return i
         i := (i + 1) modulus numSlots
     }
 }
 
 function lookup(key)
     i := findSlot(key)
     if slot[i] is occupied   // key is in table
         return slot[i].value
     else                     // key is not in table
         return not found     
 
 function set(key, value) {
     i := findSlot(key)
     if slot[i] is occupied
         slot[i].value := value
     else {
         if the table is almost full
             rebuild the table larger (note 1)
         i := findSlot(key)
         slot[i].key   := key
         slot[i].value := value
     }
 }
note 1 
Rebuilding the table requires allocating a larger array and recursively using the set operation to insert all the elements of the old array into the new larger array. It is common to increase the array size exponentially, for example by doubling the old array size.
 function remove(key)
     i := findSlot(key)
     if slot[i] is unoccupied
         return   // key is not in the table
     j := i
     loop
         j := (j+1) modulus numSlots
         if slot[j] is unoccupied
             exit loop
         k := hash(slot[j].key) modulus numSlots
         if (j > i and (k <= i or k > j)) or
            (j < i and (k <= i and k > j)) (note 2)
             slot[i] := slot[j]
             i := j
     mark slot[i] as unoccupied
note 2 
For all records in a cluster, there must be no vacant slots between their natural hash position and their current position (else lookups will terminate before finding the record). At this point in the pseudocode, i is a vacant slot that might be invalidating this property for subsequent records in the cluster; j is such a subsequent record; k is the slot where the record at j would naturally land in the hash table if there were no collisions. The test asks whether the record at j is now invalidly positioned with respect to the required properties of a cluster, given that i is vacant.

Another technique for removal is simply to mark the slot as deleted. However, this eventually requires rebuilding the table simply to remove deleted records. The methods above provide O(1) updating and removal of existing records, with occasional rebuilding if the high-water mark of the table size grows.

The O(1) remove method above is only possible in linearly probed hash tables with single-slot stepping. In the case where many entries are to be deleted in one operation, marking the slots for deletion and later rebuilding may be more efficient.
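
For readers who prefer runnable code, here is a rough Python transcription of the same linear-probing table. The sentinel object, the doubling growth policy, and the 75% load threshold are illustrative choices, not prescribed by the pseudocode above:

 class OpenAddressingTable:
     _EMPTY = object()                       # sentinel for unoccupied slots

     def __init__(self, num_slots=8):
         self.num_slots = num_slots
         self.keys = [self._EMPTY] * num_slots
         self.values = [None] * num_slots
         self.used = 0

     def _find_slot(self, key):
         # The growth policy below guarantees an empty slot always exists,
         # so this loop terminates.
         i = hash(key) % self.num_slots
         while self.keys[i] is not self._EMPTY and self.keys[i] != key:
             i = (i + 1) % self.num_slots
         return i

     def lookup(self, key):
         i = self._find_slot(key)
         if self.keys[i] is not self._EMPTY:
             return self.values[i]
         raise KeyError(key)                 # "not found" in the pseudocode

     def set(self, key, value):
         i = self._find_slot(key)
         if self.keys[i] is not self._EMPTY:
             self.values[i] = value          # key present: update in place
             return
         if self.used + 1 > self.num_slots * 3 // 4:
             self._grow()                    # keep the load factor below 75%
             i = self._find_slot(key)
         self.keys[i] = key
         self.values[i] = value
         self.used += 1

     def _grow(self):
         old = [(k, v) for k, v in zip(self.keys, self.values)
                if k is not self._EMPTY]
         self.num_slots *= 2                 # double the array, re-insert all
         self.keys = [self._EMPTY] * self.num_slots
         self.values = [None] * self.num_slots
         self.used = 0
         for k, v in old:
             self.set(k, v)

     def remove(self, key):
         i = self._find_slot(key)
         if self.keys[i] is self._EMPTY:
             return                          # key is not in the table
         j = i
         while True:                         # repair the cluster (note 2)
             j = (j + 1) % self.num_slots
             if self.keys[j] is self._EMPTY:
                 break
             k = hash(self.keys[j]) % self.num_slots
             if (j > i and (k <= i or k > j)) or \
                (j < i and k <= i and k > j):
                 self.keys[i] = self.keys[j]
                 self.values[i] = self.values[j]
                 i = j
         self.keys[i] = self._EMPTY
         self.values[i] = None
         self.used -= 1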

Problems with hash tables

Although hash table lookups take constant time on average, the time spent can be significant. Evaluating a good hash function can be a slow operation, as can comparing large keys to check that the item accessed is the correct one. In particular, if simple array indexing can be used instead, it is usually faster.

Hash tables in general exhibit poor locality of reference; that is, the data to be accessed is distributed seemingly at random in memory. Because hash tables cause access patterns that jump around, this can trigger microprocessor cache misses that cause long delays (see CPU cache). Compact data structures such as arrays, searched with linear search, may be faster if the table is relatively small and keys are cheap to compare, as with simple integer keys. Due to Moore's Law, cache sizes are growing exponentially, so the table size below which linear search wins may be growing too. The optimum performance point varies from system to system; for example, a trial on Parrot showed that its hash tables outperform linear search in all but the most trivial cases (one to three entries).

More significantly, hash tables are more difficult and error-prone to write and use. Hash tables require the design of an effective hash function for each key type, which in many situations is more difficult and time-consuming to design and debug than the mere comparison function required for a self-balancing binary search tree. Hash tables are also not persistent data structures.

Additionally, in some applications, a black hat with knowledge of the hash function may be able to supply keys chosen to cause excessive collisions, creating worst-case behavior and hence very poor performance (i.e. a denial of service attack). In critical applications, a data structure with better worst-case guarantees may be preferable. For details, see Crosby and Wallach's Denial of Service via Algorithmic Complexity Attacks (http://www.cs.rice.edu/~scrosby/hash/CrosbyWallach_UsenixSec2003.pdf).

Implementations

While most programming languages already provide hash table functionality, there are several independent implementations worth mentioning.

  • Google Sparse Hash (http://goog-sparsehash.sourceforge.net/) The Google SparseHash project contains several hash-map implementations in use at Google, with different performance characteristics, including an implementation that optimizes for space and one that optimizes for speed. The memory-optimized one is extremely memory-efficient with only 2 bits/entry of overhead.
