Bit
A bit (abbreviated b) is the most basic information unit used in computing and information theory. A single bit is a one or a zero, a true or a false, a "flag" which is "on" or "off", or in general, the quantity of information required to distinguish two mutually exclusive states from each other.
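For example, the number of bits needed to distinguish N equally likely states is log2(N), so a single bit suffices for exactly two states. A minimal Python sketch of this relationship (the function name bits_needed is illustrative, not taken from any library):

```python
import math

def bits_needed(n_states: int) -> float:
    """Bits of information required to distinguish n_states equally likely states."""
    return math.log2(n_states)

print(bits_needed(2))    # 1.0 -> one bit distinguishes two states
print(bits_needed(256))  # 8.0 -> eight bits distinguish 256 states
```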
Claude E. Shannon first used the word bit in a 1948 paper. Shannon's bit is a portmanteau word for binary digit. He attributed its origin to John W. Tukey.
A byte is a collection of bits, originally variable in size but now almost always eight bits. Eight-bit bytes, also known as octets, can represent 256 values (2⁸ values, 0–255). A four-bit quantity is known as a nibble, and can represent 16 values (2⁴ values, 0–15).
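These ranges can be checked directly; a small illustrative Python sketch (the nibble-splitting example uses an arbitrary byte value, 0xAB):

```python
# An eight-bit byte (octet) holds 2**8 = 256 distinct values, 0 through 255.
assert 2 ** 8 == 256

# A four-bit nibble holds 2**4 = 16 distinct values, 0 through 15.
assert 2 ** 4 == 16

# Splitting the byte 0xAB (171) into its two nibbles:
byte = 0xAB
high_nibble = (byte >> 4) & 0xF   # 0xA == 10
low_nibble = byte & 0xF           # 0xB == 11
print(high_nibble, low_nibble)    # 10 11
```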
"Word" is a term for a slightly larger group of bits, but it has no standard size. In the IA-32 architecture, 16 bits are called a "word" (with 32 bits being a "double word" or dword), but other architectures have word sizes of 32, 64 or others.
Terms for large quantities of bits can be formed using the standard range of prefixes, e.g., kilobit (kbit), megabit (Mbit) and gigabit (Gbit). Note that much confusion exists regarding these units and their abbreviations (see binary prefixes). Although it is clearer to write "bit" for the bit and "B" for the byte, "b" is often used for bit and "B" for byte. (In SI, B stands for the bel.)
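The confusion largely stems from decimal (SI) versus binary prefixes. A hedged Python sketch of the two interpretations of a "megabit" (the variable names are illustrative):

```python
# Decimal (SI) prefixes: 1 kilobit = 10**3 bits, 1 megabit = 10**6 bits.
megabit_decimal = 10 ** 6        # 1,000,000 bits
# Binary prefixes: 1 mebibit (Mibit) = 2**20 bits.
mebibit_binary = 2 ** 20         # 1,048,576 bits

print(megabit_decimal, mebibit_binary)
print(mebibit_binary - megabit_decimal)  # 48,576 bits of difference
```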
Certain bitwise processor instructions (such as XOR) manipulate individual bits rather than data interpreted as an aggregate of bits.
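A short Python sketch of such bit-level manipulation, treating an integer as a set of on/off flags (the flag names are invented for illustration):

```python
FLAG_READ  = 0b001
FLAG_WRITE = 0b010
FLAG_EXEC  = 0b100

perms = FLAG_READ | FLAG_WRITE         # OR sets two flags  -> 0b011
perms ^= FLAG_WRITE                    # XOR toggles a bit  -> 0b001
is_readable = bool(perms & FLAG_READ)  # AND tests a bit    -> True
print(bin(perms), is_readable)         # 0b1 True
```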
Telecommunications or computer network transfer rates are usually described in terms of bits per second.
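As a worked example with illustrative numbers, a file size in bytes can be converted to bits and divided by the link's bit rate to estimate transfer time:

```python
file_size_bytes = 5_000_000           # a 5 MB file (decimal megabytes)
file_size_bits = file_size_bytes * 8  # 40,000,000 bits

link_rate_bps = 10_000_000            # a 10 Mbit/s link

transfer_seconds = file_size_bits / link_rate_bps
print(transfer_seconds)               # 4.0 seconds, ignoring protocol overhead
```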
The bit is the smallest unit of storage currently used in computing, although much research is ongoing in quantum computing with qubits.