Safety engineering
Safety engineering is an applied science strongly related to systems engineering. It assures that a life-critical system behaves as needed even when components fail.
Safety engineers distinguish different degrees of defective operation: A "fault" is said to occur when some piece of equipment does not operate as designed. A "failure" occurs only if a human being (other than a repair person) has to cope with the situation. A "critical" failure endangers one or a few people. A "catastrophic" failure endangers, harms, or kills a significant number of people.
Safety engineers also identify different modes of safe operation: A "probabilistically safe" system has no single point of failure and enough redundant sensors, computers, and effectors that it is very unlikely to cause harm (usually "very unlikely" means less than one human life lost in a billion hours of operation). An "inherently safe" system is a clever mechanical arrangement that cannot be made to cause harm: obviously the best arrangement, but it is not always possible. For example, "inherently safe" airplanes are not possible. A "fail-safe" system is one that cannot cause harm when it fails. A "fault-tolerant" system can continue to operate with faults, though its operation may be degraded in some fashion.
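To make the "probabilistically safe" threshold concrete, the sketch below (with hypothetical failure rates) shows how independent redundant channels push the chance of a harmful failure below the commonly quoted target of one loss per billion hours. It assumes the channels fail independently, which real analyses cannot take for granted.

```python
# A minimal sketch (hypothetical numbers) of how "probabilistically safe" is
# quantified: with independent redundant channels, the chance that all of them
# fail in the same hour is the product of their individual failure rates.
# Real analyses must also consider common-cause failures, which this ignores.

def prob_all_channels_fail(per_channel_rate_per_hour: float, channels: int) -> float:
    """Probability that every redundant channel fails in the same hour,
    assuming the channels fail independently (a strong assumption)."""
    return per_channel_rate_per_hour ** channels

single = 1e-4  # assume each channel fails about once per 10,000 hours
for n in (1, 2, 3):
    p = prob_all_channels_fail(single, n)
    print(f"{n} channel(s): {p:.0e} per hour; "
          f"meets the one-per-billion-hours target: {p < 1e-9}")
```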
These terms combine to describe the safety needed by systems. For example, most biomedical equipment is only "critical", and often another identical piece of equipment is nearby, so it can be merely "probabilistically fail-safe". Train signals can cause "catastrophic" accidents (imagine chemical releases from tank cars) and are usually "inherently safe". Aircraft "failures" are "catastrophic" (at least for their passengers and crew), so aircraft are usually "probabilistically fault-tolerant". Without any safety features, nuclear reactors might have "catastrophic failures", so real nuclear reactors are required to be at least "probabilistically fail-safe", and some pebble-bed reactors are "inherently fault-tolerant".
The process
Ideally, safety engineers take an early design of a system, analyze it to find what faults can occur, and then propose changes to make the system safer. At an early design stage, a fail-safe system can often be made acceptably safe with a few sensors and some software to read them. Probabilistically fault-tolerant systems can often be made by using more, but smaller and less expensive, pieces of equipment.
Historically, many organizations have viewed "safety engineering" as a process for producing documentation to gain regulatory approval rather than as a real asset to the engineering process. These same organizations have often made their view a self-fulfilling prophecy by assigning less able personnel to safety engineering.
Far too often, rather than actually helping with the design, safety engineers are assigned to prove that an existing, completed design is safe. If a competent safety engineer then discovers significant safety problems late in the design process, correcting them can be very expensive. This project management error has wasted large sums of money in the development of commercial nuclear reactors.
Additionally, failure mitigation can go beyond design recommendations, particularly in the area of maintenance. There is an entire realm of safety and reliability engineering known as "Reliability Centered Maintenance" (RCM), which is a discipline that is a direct result of analyzing potential failures within a system, and determining maintenance actions that can mitigate the risk of failure. This methodology is used extensively on aircraft, and involves understanding the failure modes of the serviceable replaceable assemblies, in addition to the means to detect or predict an impending failure. Every automobile owner is familiar with this concept when they take in their car to have the oil changed or brakes checked. Even filling up one's car with gas is a simple example of a failure mode (failure due to fuel starvation), a means of detection (gas gauge), and a maintenance action (fill 'er up!).
For large-scale complex systems, hundreds if not thousands of maintenance actions can result from the failure analysis. These maintenance actions are based on conditions (e.g., a gauge reading or a leaky valve), on hard time limits (e.g., a component is known to fail after 100 hours of operation with 95% certainty), or on inspections that determine the appropriate action (e.g., metal fatigue). Reliability Centered Maintenance then analyzes each individual maintenance item for its risk contribution to safety, mission, operational readiness, or cost to repair if a failure does occur. The sum total of all the maintenance actions is then bundled into maintenance intervals, so that maintenance is not occurring around the clock but at regular intervals, as sketched below. This bundling introduces further complexity: it may stretch some maintenance cycles, thereby increasing risk, while shortening others, thereby reducing risk. The end result is a comprehensive maintenance schedule, purpose-built to reduce operational risk and ensure acceptable levels of operational readiness and availability.
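The following sketch illustrates the bundling step with one simple, conservative rule: each action is assigned to the largest regular service interval that does not exceed the action's own ideal interval, so no cycle is stretched. The task names, intervals, and the bundling rule itself are hypothetical.

```python
# A minimal sketch of the RCM "bundling" step described above. Each maintenance
# action has its own ideal interval (in operating hours); a conservative rule
# assigns it to the largest regular service interval no longer than that ideal.
# All task names and numbers are hypothetical.

maintenance_actions = {
    "replace component X (known to fail near 100 h)": 100,
    "change oil": 250,
    "inspect brake pads": 400,
    "check fuel gauge calibration": 1000,
}

service_intervals = [100, 250, 500, 1000]  # the regular intervals on offer

def bundle(ideal_interval: int) -> int:
    """Pick the largest regular interval no longer than the ideal interval."""
    candidates = [s for s in service_intervals if s <= ideal_interval]
    return max(candidates) if candidates else min(service_intervals)

schedule = {}
for action, ideal in maintenance_actions.items():
    schedule.setdefault(bundle(ideal), []).append(action)

for interval in sorted(schedule):
    print(f"every {interval} h: {', '.join(schedule[interval])}")
```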
Analysis techniques
The two most common fault modeling techniques are called "failure modes and effects analysis" and "fault tree analysis". These techniques are just ways of finding problems and of making plans to cope with failures.
Failure modes and effects analysis
In the technique known as "failure modes and effects analysis", an engineer starts with a block diagram of a system. The Safety engineer then considers what happens if each block of the diagram fails. The engineer then draws up a table in which failures are paired with their effects and an evaluation of the effects. The design of the system is then corrected, and the table adjusted until the system is not known to have unacceptable problems. Of course, the engineers may make mistakes. It's very helpful to have several engineers review the failure modes and effects analysis.
Fault tree analysis
In the technique known as "fault tree analysis", an undesired effect is taken as the root of a tree of logic. Then, each situation that could cause that effect is added to the tree as a series of logic expressions. When fault trees are labelled with actual numbers about failure probabilities, which are often in practice unavailable because of the expense of testing, computer programs can calculate failure probabilities from fault trees.
Some industries use both fault trees and event trees. A fault tree works backwards from an undesired end event to the combinations of faults (loss of a critical supply, component failure, etc.) that could cause it. The related event tree works forwards from an initiating event, following the possible further system events through to a series of final consequences. In a fault tree, a set of basic events that together cause the undesired end event is called a cutset; a cutset with no superfluous members is called a minimal cutset.
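Building on the pump-and-operator tree sketched above, the following shows one simple way to enumerate cutsets and reduce them to minimal cutsets; the tree representation and event names are hypothetical:

```python
# A minimal sketch of enumerating cutsets for a small hypothetical fault tree.
# A node is ("event", name), ("and", [children]) or ("or", [children]).

def cutsets(node):
    """Return the cutsets of a node as a list of frozensets of basic events."""
    kind = node[0]
    if kind == "event":
        return [frozenset([node[1]])]
    child_sets = [cutsets(child) for child in node[1]]
    if kind == "or":                       # any child's cutset suffices
        return [cs for sets in child_sets for cs in sets]
    combined = [frozenset()]               # "and": one cutset from each child
    for sets in child_sets:
        combined = [a | b for a in combined for b in sets]
    return combined

def minimal(sets):
    """Drop any cutset that strictly contains another cutset."""
    return [s for s in sets if not any(other < s for other in sets)]

tree = ("or", [
    ("and", [("event", "pump fails"), ("event", "backup pump fails")]),
    ("event", "operator error"),
])

for cs in minimal(cutsets(tree)):
    print(sorted(cs))
```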
The classic program is the Idaho National Engineering and Environmental Laboratory's SAPHIRE, which is used by the U.S. government to evaluate the safety and reliability of nuclear reactors, the Space Shuttle, and the International Space Station.
Unified Modeling Language (UML) activity diagrams have been used as graphical components in a fault tree analysis.
Safety certification
Usually a failure in safety-certified systems is acceptable if less than one life per 30 years of operation (10⁹ seconds) is lost to mechanical failure. Most Western nuclear reactors, medical equipment, and commercial aircraft are certified to this level.
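The parenthetical figure is a straightforward unit conversion, taking one year as roughly 3.156 × 10⁷ seconds:

$$10^{9}\ \mathrm{s} \times \frac{1\ \mathrm{yr}}{3.156 \times 10^{7}\ \mathrm{s}} \approx 31.7\ \mathrm{yr} \approx 30\ \mathrm{yr}$$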
Preventing failure
Probabilistic fault tolerance: adding redundancy to equipment and systems
Once a failure mode is identified, its effects can usually be mitigated by adding extra equipment to the system. For example, nuclear reactors emit dangerous radiation and contain highly toxic materials, and nuclear reactions can produce so much heat that no material can contain them. Therefore reactors have emergency core cooling systems to keep the temperature down, shielding to contain the radiation, and containment structures (usually several, nested) to prevent leakage.
Most biological organisms have extreme amounts of redundancy: multiple organs, multiple limbs, etc.
For any given failure, a fail-over or redundancy can almost always be designed and incorporated into a system.
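In software, the fail-over idea can be sketched as reading a redundant backup whenever the primary unit faults. The sensor functions and the notion of a "fault" (an exception or a missing value) below are hypothetical:

```python
# A minimal sketch of a software fail-over: if the primary sensor faults, the
# reading is taken from a redundant backup instead.

def read_with_failover(primary, backup):
    """Return (value, source), using the backup only if the primary faults."""
    try:
        value = primary()
        if value is not None:
            return value, "primary"
    except IOError:
        pass  # treat an I/O error as a fault and fall through to the backup
    return backup(), "backup"

def primary_sensor():
    raise IOError("primary temperature sensor not responding")

def backup_sensor():
    return 71.3  # degrees Celsius

value, source = read_with_failover(primary_sensor, backup_sensor)
print(f"temperature {value} C (from {source} sensor)")
```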
Inherent fail-safe design
When adding equipment is impractical (usually because of expense), the least expensive form of design is often an "inherently fail-safe" one. The typical approach is to arrange the system so that ordinary single failures cause the mechanism to shut down in a safe way.
One of the most common fail-safe systems is the overflow tube in baths and kitchen sinks. If the supply valve sticks open, the rising water spills harmlessly into the overflow rather than flooding over the rim and causing damage.
Another common example is the elevator: the cable supporting the car holds spring-loaded brakes open, so if the cable breaks, the brakes grab the guide rails and the car does not fall.
Another common inherently fail-safe system is the pilot-light sensor in most gas furnaces. When the pilot light goes out, the sensor cools down and a mechanical arrangement such as a bimetallic switch closes the gas valve, so that the house cannot fill with unburned gas.
Railroad semaphore signals, in which the horizontal arm position means danger or stop, are fail-safe in that if the controlling mechanism fails and the arm is free to fall under gravity, it falls to the "Stop" position, regardless of the condition of the line ahead.
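The common thread in these examples can be expressed as a de-energize-to-safe rule: the hazardous state requires an active, positive confirmation, and losing that confirmation for any reason selects the safe state. A minimal sketch of that logic, with hypothetical readings and thresholds:

```python
# A minimal sketch of the de-energize-to-safe logic shared by the examples
# above: the hazardous state (gas flowing, brake released, signal showing
# "clear") needs an active, plausible confirmation, and losing it for any
# reason selects the safe state. Readings and thresholds are hypothetical.

from typing import Optional

def gas_valve_open(pilot_flame_temp_c: Optional[float]) -> bool:
    """Hold the gas valve open only while the pilot flame is confirmed hot.
    A missing or implausible reading is treated the same as a cold sensor."""
    if pilot_flame_temp_c is None:                     # sensor broken or unplugged
        return False
    if not 200.0 <= pilot_flame_temp_c <= 1500.0:      # implausible reading
        return False
    return True

for reading in (650.0, 25.0, None):
    print(f"pilot reading {reading}: valve open = {gas_valve_open(reading)}")
```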
Inherent fail-safes are common in medical equipment, traffic and railway signals, communications equipment, and safety equipment.
See also
- Safety engineer
- life-critical
- Reliability engineering
- reliability theory
- air brake (rail)
- nuclear reactor
- biomedical engineering
- SAPHIRE (risk analysis software)
- Security engineering (some techniques of safety engineering have been applied to this field)
- double switching
- workplace safety
External links
- Hardware Fault Tolerance (http://www.eventhelix.com/RealtimeMantra/HardwareFaultTolerance.htm) - A discussion about redundancy schemes.