Parallel computing
Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain faster results.
Parallel computing systems
The term parallel processor is sometimes used for a computer with more than one central processing unit, available for parallel processing. Systems with thousands of such processors are known as massively parallel.
There are many different kinds of parallel computers (or "parallel processors"). They are distinguished by the kind of interconnection between processors (known as "processing elements" or PEs) and between processors and memories. Flynn's taxonomy also classifies parallel (and serial) computers according to whether all processors execute the same instructions at the same time (single instruction/multiple data -- SIMD) or each processor executes different instructions (multiple instruction/multiple data -- MIMD). Parallel processor machines are also divided into symmetric and asymmetric multiprocessors, depending on whether all processors are capable of running all of the operating system code and accessing I/O devices, or whether some processors are more privileged than others.
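As a concrete illustration of the SIMD idea at the instruction level, the sketch below uses x86 SSE intrinsics to add four pairs of numbers with a single instruction. The choice of SSE is an assumption made purely for illustration; the article does not name any particular architecture.

```c
#include <immintrin.h>   /* x86 SSE intrinsics, used only as an example of SIMD */
#include <stdio.h>

int main(void)
{
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float c[4];

    __m128 va = _mm_loadu_ps(a);      /* load four floats at once            */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);   /* one instruction operates on all     */
    _mm_storeu_ps(c, vc);             /* four data elements: SIMD            */

    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);   /* 11 22 33 44 */
    return 0;
}
```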
Performance vs. Cost
While a system of n parallel processors is less efficient than one n-times-faster processor, the parallel system is often cheaper to build. For tasks that require very large amounts of computation, have time constraints on completion, and, especially, can be divided into n execution threads, parallel computation is an excellent solution. In fact, in recent years most high-performance computing systems, also known as supercomputers, have a parallel architecture.
Algorithms
It should not be imagined that successful parallel computing is a matter of obtaining the required hardware and connecting it suitably. The difficulty of cooperative problem solving is aptly demonstrated by the following dubious reasoning:
- If it takes one man one minute to dig a post-hole then sixty men can dig it in one second.
In practice, linear speedup (i.e., speedup proportional to the number of processors) is very difficult to achieve. This is because many algorithms are essentially sequential in nature (Amdahl's law states this more formally).
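Stated as a formula (a standard formulation; the symbols below are not defined elsewhere in this article), let p be the fraction of the work that can be parallelized and n the number of processors:

```latex
% Amdahl's law: speedup on n processors when a fraction p of the work
% parallelizes perfectly and the remaining (1 - p) must run serially.
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}
% As n grows, S(n) approaches 1/(1 - p); for example, with p = 0.9 the
% speedup can never exceed 10, however many processors are added.
```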
Up to a certain point, certain workloads can benefit from pipeline parallelism when extra processors are added. This uses a factory assembly line approach to divide the work. If the work can be divided into n stages where a discrete deliverable is passed from stage to stage, then up to n processors can be used. However, the slowest stage will hold up the other stages so it is rare to be able to fully use n processors.
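The limit imposed by the slowest stage can be made precise (standard reasoning; the symbols are introduced here only for this sketch). With n stages and per-item stage times t_1, ..., t_n:

```latex
% Steady-state behaviour of an n-stage pipeline:
\text{throughput} = \frac{1}{\max_i t_i},
\qquad
\text{speedup over a single processor} = \frac{\sum_{i=1}^{n} t_i}{\max_i t_i} \le n
% Equality holds only when every stage takes exactly the same time,
% which is why all n processors are rarely fully used.
```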
Most algorithms must be redesigned in order to make effective use of parallel hardware. Programs which work correctly in a single CPU system may not do so in a parallel environment. This is because multiple copies of the same program may interfere with each other, for instance by accessing the same memory location at the same time. Therefore, careful programming is required in a parallel system.
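As an illustration of this kind of interference, the sketch below (hypothetical code using POSIX threads, not taken from any particular system) has two threads incrementing a shared counter. Without the mutex, the unsynchronised read-modify-write sequences can interleave and lose updates; with it, the program behaves as it would on a single CPU.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                          /* shared memory location  */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);                /* without this lock, the  */
        counter++;                                /* increments of the two   */
        pthread_mutex_unlock(&lock);              /* threads can be lost     */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);           /* 2000000 with the lock;  */
    return 0;                                     /* usually less without it */
}
```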
Superlinear speedup -- the effect of an N-processor machine completing a task more than N times faster than a single-processor machine with a processor similar to those in the multiprocessor -- has at times been a controversial issue (and has led to much benchmarketing), but it can be brought about by effects such as the multiprocessor having not just N times the processing power but also N times the cache and memory, thus flattening the cache-memory-disk hierarchy, and more efficient use of memory by the individual processors due to partitioning of the problem, among other effects. Similar claims of boosted efficiency are sometimes made for using a cluster of cheap computers as a replacement for a large multiprocessor, but again the actual results depend greatly on the problem at hand and on the ability to partition the problem in a way that is conducive to clustering.
Inter-thread Communication
Parallel computers are theoretically modeled as Parallel Random Access Machines (PRAMs). The PRAM model ignores the cost of interconnection between the constituent computing units, but is nevertheless very useful in providing upper bounds on the parallel solvability of many problems. In reality the interconnection plays a significant role.
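For example, summing n numbers takes n - 1 additions sequentially but only O(log n) rounds of pairwise additions on a PRAM with enough processors. The sketch below illustrates the idea, using OpenMP merely as a stand-in for the idealised PRAM processors; the function name is invented for this example.

```c
#include <stdio.h>

/* Pairwise (tree) reduction: in each round, element i absorbs the value
 * stride positions away, halving the number of active elements.  On an
 * idealised PRAM with enough processors each round is one time step, so
 * the whole sum takes O(log n) steps instead of O(n). */
static double pram_style_sum(double *a, int n)
{
    for (int stride = 1; stride < n; stride *= 2) {
        #pragma omp parallel for
        for (int i = 0; i + stride < n; i += 2 * stride)
            a[i] += a[i + stride];
    }
    return a[0];
}

int main(void)
{
    double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%g\n", pram_style_sum(a, 8));   /* prints 36 */
    return 0;
}
```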
The processors may either communicate in order to be able to cooperate in solving a problem or they may run completely independently, possibly under the control of another processor which distributes work to the others and collects results from them (a "processor farm").
Processors in a parallel computer may communicate with each other in a number of ways, including shared (either multiported or multiplexed) memory, a crossbar, a shared bus, or an interconnection network with any of a myriad of topologies, including star, ring, tree, hypercube, fat hypercube (a hypercube with more than one processor at a node), and n-dimensional mesh. Parallel computers based on interconnection networks need to employ some kind of routing to enable the passing of messages between nodes that are not directly connected. The communication medium between the processors is likely to be hierarchical in large multiprocessor machines. Similarly, memory may be private to a processor, shared between a number of processors, or globally shared. The systolic array is an example of a multiprocessor with fixed-function nodes, local-only memory and no message routing.
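As an example of such routing, dimension-ordered ("e-cube") routing in a d-dimensional hypercube corrects one differing bit of the node address per hop, so any message travels at most d hops. A small sketch of the idea (illustrative only, not tied to any particular machine):

```c
#include <stdio.h>

/* Dimension-ordered routing in a d-dimensional hypercube: flip the
 * differing address bits one at a time, lowest dimension first.  Each
 * flip is one hop to a directly connected neighbour, so the route
 * length equals the number of differing bits (at most d). */
static void ecube_route(unsigned src, unsigned dst, int d)
{
    unsigned node = src;
    printf("%u", node);
    for (int bit = 0; bit < d; bit++) {
        unsigned mask = 1u << bit;
        if ((node ^ dst) & mask) {        /* this dimension still differs */
            node ^= mask;                 /* hop across that dimension    */
            printf(" -> %u", node);
        }
    }
    printf("\n");
}

int main(void)
{
    ecube_route(0 /* 000 */, 6 /* 110 */, 3);   /* prints: 0 -> 2 -> 6 */
    return 0;
}
```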
Approaches to parallel computers include:
- Multiprocessing
- Computer cluster
- Parallel supercomputers
- Distributed computing
- NUMA vs. SMP vs. massively parallel computer systems
- Grid computing
Parallel software
A huge number of software systems have been designed for programming parallel computers, both at the operating system and programming language level. These systems must provide mechanisms for partitioning the overall problem into separate tasks and allocating tasks to processors. Such mechanisms may provide either implicit parallelism -- the system (the compiler or some other program) partitions the problem and allocates tasks to processors automatically (also called automatic parallelizing compilers) -- or explicit parallelism where the programmer must annotate his program to show how it is to be partitioned. Most of the current implementations of parallelizing compilers only support single-level parallelism, as opposed to multi-level parallelism (also called nested parallelism), which allows threads already running in parallel to spawn further parallelism. It is also usual to provide synchronisation primitives such as semaphores and monitors to allow processes to share resources without conflict.
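As an example of explicit parallelism through annotation, OpenMP lets the programmer mark a loop whose iterations are independent, leaving thread creation and work partitioning to the compiler and runtime. The sketch below is minimal and illustrative; the array sizes and data are invented for the example.

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {          /* set up some input data        */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* The annotation below is the explicit parallelism: the programmer
     * states that the iterations are independent, and the OpenMP runtime
     * splits them across the available processors. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %g (threads available: %d)\n",
           c[N - 1], omp_get_max_threads());
    return 0;
}
```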
Load balancing attempts to keep all processors busy by moving tasks from heavily loaded processors to less loaded ones.
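A simple form of load balancing can be requested directly in OpenMP with a dynamic schedule: iterations are handed out in small chunks, so processors that finish cheap iterations come back for more rather than sitting idle. The sketch below is illustrative; the workload function and chunk size are made up.

```c
#include <omp.h>
#include <stdio.h>

/* Hypothetical uneven workload: later items cost more than earlier ones. */
static double work(int i)
{
    double s = 0.0;
    for (int k = 0; k < i * 100; k++)
        s += k * 1e-9;
    return s;
}

int main(void)
{
    double total = 0.0;

    /* schedule(dynamic, 16): idle threads repeatedly grab the next chunk
     * of 16 iterations, so faster threads end up doing more of the work. */
    #pragma omp parallel for schedule(dynamic, 16) reduction(+:total)
    for (int i = 0; i < 10000; i++)
        total += work(i);

    printf("total = %g\n", total);
    return 0;
}
```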
Communication between tasks is usually done with threads communicating via shared memory or with message passing, either of which may be implemented in terms of the other.
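A minimal sketch of the message-passing style, using MPI (the shared-memory style appears in the earlier threaded examples; the ranks and message contents are invented for illustration):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                   /* data lives in the    */
        MPI_Send(&value, 1, MPI_INT, 1, 0,            /* sender's memory and  */
                 MPI_COMM_WORLD);                     /* is copied explicitly */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```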
Parallel programming models:
- PVM
- MPI
- OpenMP
- Nano-Threading (e.g. in the NANOS project (http://research.ac.upc.edu/nanos/))
- UPC
- HPF
Topics in parallel computing
Generic:
- Parallel programming
- Parallel algorithm
- Finding parallelism in problems and algorithms
- Cellular automaton
Computer science topics:
- Lazy evaluation vs strict evaluation
- Complexity class NC
- Communicating sequential processes
- Dataflow architecture
- Parallel graph reduction
Practical problems:
- Parallel computer interconnects
- Parallel computer I/O
- Reliability problems in large systems
Specific:
- Atari Transputer Workstation
- BBN Butterfly computers
- Beowulf cluster
- Blue Gene
- Deep Blue
- Fifth generation computer systems project
- ILLIAC III
- ILLIAC IV
- Meiko Computing Surface
- NCUBE
- Transputer
References
- This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
External links:
- http://wotug.ukc.ac.uk/parallel/
- http://www.nhse.org/grand_challenge.html
- http://www.computer.org/parascope/
- http://www.cs.rit.edu/~ncs/parallel.html
- http://parawiki.plm.eecs.uni-kassel.de/
- http://research.ac.upc.edu/nanos/
- Parallelized Mandelbrot Set (http://sciencesoft.at/index.jsp?link=fractal&lang=en), interactive online demo