Software testing
Software testing is a process used to assess the correctness, completeness and quality of developed computer software. In practice, testing can never establish the correctness of computer software; that can only be done by formal verification, and only when there is no mistake in the formal verification process itself. Testing can find defects, but it cannot prove that there are none.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following a rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it," where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing also connotes dynamic analysis of the product: putting the product through its paces.
The quality of an application can, and normally does, vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. See the ISO 9126 standard for a more complete list of attributes and criteria.
Introduction
In general, software engineers distinguish between software faults and software failures. In the case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure; it can also be described as an error in the semantics of a computer program. A fault becomes a failure when the exact computation conditions are met, one of them being that the faulty portion of the software executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software is extended.
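To make the distinction concrete, the following minimal Python sketch (a hypothetical example, not drawn from any particular system) shows a fault that lies dormant until a triggering input arrives:

```python
def average(values):
    # Fault: this line divides by len(values); the fault lies
    # dormant until a caller supplies an empty list.
    return sum(values) / len(values)

print(average([2, 4, 6]))       # 4.0 -- the fault does not manifest

try:
    average([])                 # the triggering condition is met...
except ZeroDivisionError:
    print("failure observed")   # ...and the fault becomes a failure
```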
Software testing may be viewed as a sub-field of software quality assurance but typically exists independently (and there may be no SQA areas in some companies). In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the code or to deliver software faster.
Regardless of the methods used or the level of formality involved, the desired result of testing is a level of confidence in the software such that the developers are confident the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.
A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects that aim to write long-lived, reliable software.
A common practice of software testing is that it is performed by an independent group of testers after the software product is finished and before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays. Another practice is to start software testing at the same moment the project starts and to continue it as an ongoing process until the project finishes.
Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in regression testing suites to ensure that future updates to the software do not reintroduce any of the known defects.
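A regression test distilled from a support escalation might look like the following sketch (the defect and the parse_port helper are hypothetical examples):

```python
import unittest

def parse_port(text):
    # The fix for the escalated defect: tolerate surrounding whitespace.
    return int(text.strip())

class RegressionTests(unittest.TestCase):
    def test_leading_whitespace_in_port(self):
        # Captures a previously escalated defect: input with leading
        # whitespace used to crash the parser. Kept in the regression
        # suite so future updates cannot silently reintroduce it.
        self.assertEqual(parse_port("  8080"), 8080)

if __name__ == "__main__":
    unittest.main()
```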
It is commonly believed that the earlier a defect is found the cheaper it is to fix it.
In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the programmers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
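A minimal sketch of this test-first style, using Python's standard unittest module (the slugify function is a hypothetical example):

```python
import unittest

# Test-driven style: the test is written before the code it tests,
# so it fails at first; the implementation is then written to make
# it pass, and both live together in the source tree.

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Software Testing"), "software-testing")

# Written after the test, just enough to make it pass:
def slugify(title):
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()
```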
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
Alpha testing
In software development, testing is usually required before release to the general public. In-house developers often test the software in what is known as alpha testing, which is often performed under a debugger or with hardware-assisted debugging to catch bugs quickly.
The software can then be handed over to testing staff for additional inspection in an environment similar to the one in which it is intended to be used. This technique is known as black box testing, and it is often considered the second stage of alpha testing.
Beta testing
Following that, limited public tests known as beta versions are often released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to gather feedback from the maximal number of future users.
Gamma testing is a little-known informal phrase that refers derisively to the release of "buggy" (defect-ridden) products. It is not a term of art among testers, but rather an example of referential humor. Cynics have referred to all software releases as "gamma testing", since defects are found in almost all commercial, commodity and publicly available software eventually. (Some classes of embedded and highly specialized process control software are tested far more thoroughly and subjected to other forms of rigorous software quality assurance, particularly those that control "life-critical" equipment where a failure can result in injury or death; see Ivars Peterson's Fatal Defect for counterexamples.)
White-box and black-box testing
In the terminology of testing professionals (software and some hardware), the phrases "white box" and "black box" testing refer to whether the test case developer has access to the source code of the software under test, and whether the testing is done through (simulated) user interfaces or through application programming interfaces, whether published or internal to the target.
In white box testing the test developer has access to the source code and can write code which links into the libraries which are linked into the target software. This is typical of unit tests, which only test parts of a software system. They ensure that components used in the construction are functional and robust to some degree.
In black box testing the test engineer only accesses the software through the same interfaces that the customer or user would, or possibly through remotely controllable automation interfaces that connect another computer or another process to the target of the test. For example, a test harness might push virtual keystrokes and mouse or other pointer operations into a program through any inter-process communications mechanism, with the assurance that these events are routed through the same code paths as real keystrokes and mouse clicks.
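The following sketch illustrates the contrast (the Cache class and its tests are hypothetical examples): the white box test inspects internal state directly, while the black box test uses only the public interface a caller would.

```python
import unittest
from collections import OrderedDict

class Cache:
    """A small least-recently-inserted cache used only for illustration."""
    def __init__(self, capacity):
        self._items = OrderedDict()   # internal detail
        self._capacity = capacity

    def put(self, key, value):
        self._items[key] = value
        if len(self._items) > self._capacity:
            self._items.popitem(last=False)   # evict oldest entry

    def get(self, key):
        return self._items.get(key)

class WhiteBoxTest(unittest.TestCase):
    # White box: the test reaches into internal state the user never sees.
    def test_eviction_removes_oldest_internal_entry(self):
        c = Cache(capacity=1)
        c.put("a", 1)
        c.put("b", 2)
        self.assertNotIn("a", c._items)   # inspects internals directly

class BlackBoxTest(unittest.TestCase):
    # Black box: the test uses only the public interface.
    def test_evicted_entry_is_gone(self):
        c = Cache(capacity=1)
        c.put("a", 1)
        c.put("b", 2)
        self.assertIsNone(c.get("a"))

if __name__ == "__main__":
    unittest.main()
```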
Where "alpha" and "beta" refer to stages of before release (and also implicitly on the size of the testing community, and the constraints on the testing methods), white box and black box refer to the ways in which the tester accesses the target.
Beta testing is generally constrained to black box techniques (though a core of test engineers are likely to continue with white box testing in parallel to the beta tests). Thus the term "beta test" can refer to the stage of the software (closer to release than being "in alpha") or it can refer to the particular group and process being done at that stage. So a tester might be continuing to work in white box testing while the software is "in beta" (a stage) but he or she would then not be part of "the beta test" (group/activity).
Code coverage
In contrast, code coverage is inherently a white box testing activity. The target software is built with special options or libraries and/or run under a special environment such that every function that is exercised (executed) in the program is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most important conditions (function points) have been tested.
Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets that will increase the code coverage over vital functions.
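As a rough illustration, Python's standard-library trace module can produce per-line execution counts; dedicated tools such as coverage.py implement the same idea with more polish. The grade function below is a hypothetical example:

```python
import trace

def grade(score):
    if score >= 90:
        return "A"
    if score >= 60:
        return "B"
    return "F"    # never exercised by the inputs below

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(grade, 95)
tracer.runfunc(grade, 70)

# Write annotated per-line execution counts; lines never executed
# reveal untested branches, such as the "F" case above.
tracer.results().write_results(show_missing=True, coverdir=".")
```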
Generally, code coverage tools and libraries exact a performance, memory, or other resource cost that is unacceptable for normal operation of the software, so they are used only in the lab. As one might expect, there are classes of software that cannot feasibly be subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.
There are also some sorts of defects that are affected by such tools. In particular, some race conditions or similarly real-time-sensitive operations are impossible to detect when run under code coverage environments; conversely, some of these defects are triggered only as a result of the additional overhead of the testing code.
Controversy
There is considerable controversy among testing writers and consultants about what constitutes responsible software testing. The self-declared members of the Context-Driven School of testing (http://www.context-driven-testing.com) believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation. This belief directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the FDA that promote them.
Some of the major controversies include:
Agile vs. Traditional
Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner. Instead of assuming that testers have full access to source code and complete specifications, these writers, who included James Bach and Cem Kaner, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) is popular mainly in commercial circles, whereas the CMM was embraced by government and military software providers.
Exploratory vs. Scripted
Exploratory testing means simultaneous learning, test design, and test execution. Scripted testing means that learning and test design happens prior to test execution. Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood. Many writers consider it a dangerous practice. Some writers consider it a primary and essential practice.
Manual vs. Automated
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which a problem in the software can be recognized).
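One common form of automated oracle is a trusted reference implementation against which the system under test is compared. A minimal sketch (both functions are hypothetical examples):

```python
import random

def fast_sort(items):
    # Implementation under test (here simply delegating to sorted()).
    return sorted(items)

def reference_sort(items):
    # Trusted, obviously-correct oracle: a plain bubble sort.
    result = list(items)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# The oracle lets thousands of random cases be checked automatically.
for _ in range(1000):
    data = [random.randint(0, 100) for _ in range(random.randint(0, 20))]
    assert fast_sort(data) == reference_sort(data), data
```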
Certification
Many certification programs exist to support the professional aspirations of software testers. These include the CSQE program offered by the American Society for Quality, the CSTE program offered by QAI, and the ISEB certification offered by the British Computer Society. No certification currently offered actually requires the applicant to demonstrate the ability to test software. No certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.
Quis custodiet ipsos custodes?
One principle in software testing is best summed up by the classical Latin question posed by Juvenal: Quis custodiet ipsos custodes? ("Who watches the watchmen?"), referred to informally as the "Heisenbug" concept. By analogy with Heisenberg's uncertainty principle, any form of observation is also an interaction: the act of testing can itself affect that which is being tested.
In practical terms, the test engineer is testing software (and sometimes hardware or firmware) with other software (and hardware and firmware). The tools can have their own defects, and the process can fail in ways that are not the result of defects in the target but are artifacts of the test harness.
See also
- Automated testing
- Black box testing
- Code coverage
- Defect tracking
- Development stage
- Formal verification
- Fuzz testing
- All-pairs testing
- IEEE 829 Standard for Software Test Documentation
- Quality assurance
- Static code analysis
- Test-driven development
- Software engineering
- White box testing
Software testing activities
- Unit testing
- Integration testing
- System testing
- Regression testing
- Load testing
- Performance testing
- Stress testing
- Security testing
- Model-based testing
- Installation testing
- Usability testing
- Stability testing
- Authorization testing
- User acceptance testing
- Conformance testing
- Playtest
Quotes
- "An effective way to test code is to exercise it at its natural boundaries" -- Brian Kernighan
- "Testing is the process of comparing the invisible to the ambiguous, so as to avoid the unthinkable happening to the anonymous."James Bach
- "Program testing can be used to show the presence of bugs, but never to show their absence!" Dijkstra
References
- Cem Kaner, Jack Falk, Hung Quoc Nguyen: Testing Computer Software. Second Edition, John Wiley and Sons, 1993, ISBN 0-471-35846-0
- Cem Kaner, James Bach, Bret Pettichord: Lessons Learned in Software Testing. A Context-Driven Approach. John Wiley & Sons, 2001, ISBN 0-471-08112-4
- Glenford J. Myers: The Art of Software Testing. John Wiley and Sons, 1979, ISBN 0-471-04328-1
External links
- Open Directory (http://dmoz.org/Computers/Programming/Software_Testing/)