InfiniBand
InfiniBand is a high-speed serial computer bus intended for both internal and external connections. It is the result of merging two competing designs: Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next Generation I/O (NGIO), developed by Intel, Microsoft, and Sun Microsystems. On the Compaq side, the design traces its roots to Tandem's ServerNet. For a short time before the merged group settled on a new name, InfiniBand was called System I/O.
InfiniBand uses a bidirectional serial bus for low cost and low latency, yet it is still fast, with 2.5 gigabit per second links in each direction. Links use 8b/10b encoding, so the usable data rate is 2 gigabits per second. Links can be aggregated in units of 4 (called 4X) and 12 (called 12X), for 8 gigabits per second and 24 gigabits per second of data respectively. InfiniBand also supports double and quad data rates, which raise the signalling rate of one link to 5 gigabits per second (4 gigabits per second of real data) and 10 gigabits per second (8 gigabits per second of real data) respectively. The maximum is therefore a 12X quad data rate link, which runs at 120 gigabits per second. Most systems today use 4X single data rate, giving 10 gigabits per second on the wire and 8 gigabits per second of real data.

InfiniBand uses a switched fabric topology, so several devices can share the network at the same time (as opposed to a bus topology). Data is transmitted in packets of up to 4 kilobytes that are taken together to form a message. A message can be a direct memory access read from or write to a remote node (remote DMA, or RDMA), a channel send or receive, a transaction-based operation (which can be reversed), or a multicast transmission.
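The rates above follow directly from the per-lane signalling rate and the 8b/10b encoding overhead. The short Python sketch below is an illustration only (the constant and function names are not part of any InfiniBand specification or library); it reproduces the figures quoted in this section.

```python
# Illustrative sketch: per-direction signalling and usable data rates
# for InfiniBand links, derived from the figures quoted in the article.

LANE_SDR_GBPS = 2.5           # single data rate signalling per lane (Gbit/s)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding: 8 data bits per 10 wire bits

def link_rates(lanes: int, data_rate_multiplier: int) -> tuple[float, float]:
    """Return (signalling_gbps, usable_data_gbps) for one link direction.

    lanes: 1, 4, or 12 (1X, 4X, 12X aggregation)
    data_rate_multiplier: 1 for single, 2 for double, 4 for quad data rate
    """
    signalling = LANE_SDR_GBPS * lanes * data_rate_multiplier
    usable = signalling * ENCODING_EFFICIENCY
    return signalling, usable

if __name__ == "__main__":
    for name, lanes, mult in [("1X SDR", 1, 1), ("4X SDR", 4, 1),
                              ("12X SDR", 12, 1), ("4X DDR", 4, 2),
                              ("12X QDR", 12, 4)]:
        wire, data = link_rates(lanes, mult)
        print(f"{name}: {wire:g} Gbit/s on the wire, {data:g} Gbit/s of data")
        # e.g. 4X SDR -> 10 Gbit/s on the wire, 8 Gbit/s of data
        # and 12X QDR -> 120 Gbit/s on the wire, 96 Gbit/s of data
```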
As in the channel model used on most mainframes, all transmissions begin and end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality-of-service purposes.
The primary aim of InfiniBand appears to be to connect CPUs and their high-speed devices into clusters for "back-office" applications. In this role it is intended to replace PCI, Fibre Channel, and various machine-interconnect systems such as Ethernet; instead, all of the CPUs and peripherals would be connected into a single, larger InfiniBand fabric. This has a number of advantages in addition to greater speed, not the least of which is that normally "hidden" devices such as PCI cards become usable by any machine on the fabric. In theory this should make the construction of clusters much easier, and potentially less expensive, because more devices can be shared.
External links
- An Introduction to the InfiniBand Architecture (http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html)
- The InfiniBand Trade Association homepage (http://www.infinibandta.org/)
- Linux InfiniBand™ Project (http://infiniband.sourceforge.net/)
- Open InfiniBand Alliance (http://www.openib.org/)
- MPI over InfiniBand Project (http://nowlab.cis.ohio-state.edu/projects/mpi-iba/)