Race hazard
A race hazard (or race condition) is a flaw in a system or process where the output exhibits unexpected critical dependence on the relative timing of events. The term originates with the idea of two signals racing each other to influence the output first.
Race hazards occur in poorly designed electronic systems, especially logic circuits, but they may also arise in computer software.
Electronics
A typical example of a race hazard may occur in a system of logic gates where inputs vary. If a particular output depends on the state of the inputs, it may only be defined for steady-state signals. As the inputs change state, a finite delay will occur before the output changes, due to the physical nature of the electronic system. For a brief period, the output may change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches, but if, for example, this output signal functions as a clock for further systems that contain memory, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes permanent).
For example, consider a two-input AND gate fed with a logic signal X on input A and its negation, NOT X, on input B. In theory, the output (X AND NOT X) should never be high. However, if changes in the value of X take longer to propagate to input B than to input A, then when X changes from false to true there will be a brief period during which both inputs are true, and so the gate's output will also be true.
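The effect can be illustrated with a small discrete-time simulation in C (a sketch only: the five time steps and the single unit of inverter delay are illustrative assumptions, not real hardware timing):

#include <stdio.h>

/* Model of X AND (NOT X) in which the inverted signal reaches input B
   one time step later than X reaches input A. */
int main(void) {
    int x_values[] = {0, 0, 1, 1, 1};   /* X over five time steps */
    int not_x = 1;                      /* delayed inverter output (X was 0) */
    for (int t = 0; t < 5; t++) {
        int a = x_values[t];            /* input A sees X immediately */
        int b = not_x;                  /* input B still sees the old NOT X */
        printf("t=%d A=%d B=%d AND=%d\n", t, a, b, a && b);
        not_x = !x_values[t];           /* NOT X catches up on the next step */
    }
    return 0;
}

On the step where X rises (t=2), both A and B read as 1 and the AND output glitches to 1 before settling back to 0 on the next step.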
Proper design techniques (e.g., Karnaugh maps; the Karnaugh map article includes a concrete example of a race hazard and how to eliminate it) encourage designers to recognise and eliminate race hazards before they cause problems.
As well as these problems, logic gates can enter metastable states, which create further problems for circuit designers.
See critical race and non-critical race for more information on specific types of race hazards.
Computing
Race hazards may arise in software, especially when communicating between separate processes or threads of execution. For example, consider the following two tasks, in pseudocode:
global integer A = 0;

task Received() {
    A = A + 1;
    print "RX";
}

task Timeout() {
    // Print only the even numbers
    if (A is divisible by 2) {
        print A;
    }
}
task Received is activated whenever an interrupt is received from the serial controller, and increments the value of A.
task Timeout occurs every second. If A is divisible by 2, it prints A. Output would look something like:
0 0 0 RX RX 2 RX RX 4 4
Now consider this chain of events, which might occur next:
- timeout occurs, activating task Timeout
- task Timeout evaluates A and finds it is divisible by 2, so elects to execute "print A" next
- data is received on the serial port, causing an interrupt and a switch to task Received
- task Received runs to completion, incrementing A and printing "RX"
- control returns to task Timeout
- task Timeout executes print A, using the current value of A, which is 5.
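The same interleaving can be reproduced with a short C program using POSIX threads (a sketch: the thread layout, the sleep intervals, and the deliberately widened window between check and use are illustrative; the exact output depends on the scheduler):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int A = 0;                           /* shared, deliberately unsynchronised */

void *received(void *arg) {          /* stands in for the serial interrupt */
    for (;;) {
        usleep(300000);              /* pretend data arrives now and then */
        A = A + 1;
        printf("RX\n");
    }
    return NULL;
}

void *timeout(void *arg) {           /* runs roughly once per second */
    for (;;) {
        sleep(1);
        if (A % 2 == 0) {            /* check ... */
            usleep(100000);          /* ... widen the check-to-use window ... */
            printf("%d\n", A);       /* ... so an odd value can be printed */
        }
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, received, NULL);
    pthread_create(&t2, NULL, timeout, NULL);
    pthread_join(t1, NULL);          /* run until interrupted */
    return 0;
}

Compiled with a pthread-enabled C compiler (e.g. cc -pthread), the program occasionally prints odd values of A, which is exactly the failure described above.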
In concurrent programming, mutexes are used to address this problem: the shared variable is read, tested, and updated only while holding the lock, so the check of A and the use of its value cannot be interleaved with an increment.
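A sketch of the mutex approach, reusing the POSIX threads API from the example above (the function names are illustrative): every access to A is bracketed by the same lock.

#include <pthread.h>
#include <stdio.h>

int A = 0;
pthread_mutex_t A_lock = PTHREAD_MUTEX_INITIALIZER;

void received_once(void) {           /* body of task Received */
    pthread_mutex_lock(&A_lock);
    A = A + 1;
    pthread_mutex_unlock(&A_lock);
    printf("RX\n");
}

void timeout_once(void) {            /* body of task Timeout */
    pthread_mutex_lock(&A_lock);
    if (A % 2 == 0)
        printf("%d\n", A);           /* A cannot change between check and use */
    pthread_mutex_unlock(&A_lock);
}

Note that the lock must protect every access to A, including the increment; locking only the reader would leave the race in place.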
In filesystems, file locking provides a commonly used solution. A more cumbersome remedy is to reorganize the system so that one unique process (running as a daemon or the like) has exclusive access to the file, and all other processes that need the data in that file access it only via interprocess communication with that one process (which of course requires synchronization at the process level).
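As one illustration of the file-locking approach on a POSIX system (the file name counter.txt is hypothetical), flock() can serialise a read-modify-write so that concurrent processes cannot interleave their updates:

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>

int main(void) {
    int fd = open("counter.txt", O_RDWR | O_CREAT, 0644);
    if (fd < 0) return 1;

    flock(fd, LOCK_EX);               /* exclusive lock: other processes wait here */

    FILE *f = fdopen(fd, "r+");
    long value = 0;
    fscanf(f, "%ld", &value);         /* read the current value (0 if the file is empty) */
    rewind(f);
    fprintf(f, "%ld\n", value + 1);   /* write back the incremented value */
    fflush(f);

    flock(fd, LOCK_UN);               /* release the lock */
    fclose(f);
    return 0;
}

Because each read-modify-write happens under the exclusive lock, running several copies of this program concurrently still yields the correct total.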
In networking, consider a distributed chat network like IRC, where a user acquires channel-operator privileges in any channel he starts. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel.
In this case of a race hazard, the "shared resource" is the state of the network (what channels exist, which users started them, and therefore who holds what privileges), which each server can freely change as long as it signals the other servers about the change so that they can update their view of the network's state. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing control over access to the shared resource (say, appointing one server to decide who holds what privileges) would mean turning the distributed network into a centralized one, at least for that one part of the network operation. Where users find such a solution unacceptable, a pragmatic alternative is to have the system (1) recognize when a race hazard has occurred and (2) repair its ill effects.
A race condition exemplifies an anti-pattern.
A particularly serious example of a race condition was among the problems behind the Therac-25 accidents, which involved a life-critical system.
See also
- Critical race
- Non-critical race
External links
- Article "Secure programmer: Prevent race conditions - Resource contention can be used against you (http://www-128.ibm.com/developerworks/library-combined/l-sprace.html)" by David A. Wheeler
- Chapter "Avoid Race Conditions (http://www.asta.va.fh-ulm.de/LDP/HOWTO/Secure-Programs-HOWTO/avoid-race.html)" (Secure Programming for Linux and Unix HOWTO)
- Citations from CiteSeer (http://citeseer.ist.psu.edu/cis?q=race+condition+hazard)