Regression testing is a style of testing that focuses on retesting after changes are made.
In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
Regression testing attempts to mitigate two risks:
- A change that was intended to fix a bug failed to fix it.
- Some change had a side effect, unfixing an old bug or introducing a new bug.
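As a concrete illustration, here is a minimal sketch of a bug-regression test in Python using pytest. The function parse_price and the bug number are hypothetical, invented for this example; the point is that the test pins down the exact input that exposed a fixed bug, so it fails if the fix didn't work and fails again if a later change brings the bug back.

```python
# Hypothetical example: suppose bug #1234 reported that
# parse_price("1,299.99") raised ValueError instead of returning
# 1299.99. After the fix, this test stays in the regression suite.

def parse_price(text: str) -> float:
    """Parse a price string such as '1,299.99' into a float."""
    return float(text.replace(",", ""))

def test_bug_1234_comma_separated_price():
    # Guards the first risk: fails if the fix for bug #1234 didn't work.
    # Guards the second risk: fails if a later change reintroduces the bug.
    assert parse_price("1,299.99") == 1299.99

def test_plain_price_unaffected_by_fix():
    # Guards the second risk from the other direction: the fix itself
    # must not break the case that already worked.
    assert parse_price("99.50") == 99.50
```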
Regression testing approaches differ in their focus. Common examples include:
- Bug regression: We retest a specific bug that has allegedly been fixed.
- Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)
- General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)
- Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than the modified old code.)
- Configuration testing: The program is run with a new device, on a new version of the operating system, or in conjunction with a new application. This is like port testing, except that the underlying code hasn't been changed; only the external components that the software under test must interact with are new.
- Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.
- Smoke testing, also known as build verification testing: A relatively small suite of tests is used to qualify a new build. Normally, the tester is asking whether any components are so obviously or badly broken that the build is not worth testing, whether components are broken in ways that suggest a corrupt build, or whether the critical fixes that are the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports. A rough sketch of such a suite follows this list.
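Here is a hypothetical pytest module sketching a smoke suite for a command-line tool. The executable name "mytool" and its flags are assumptions made for illustration, not details from these notes; the design point is breadth and speed: a few cheap checks that fail fast if the build is fundamentally broken.

```python
# Hypothetical smoke / build verification suite for a command-line
# tool. The executable name "mytool" and its flags are assumptions
# for illustration only.
import subprocess

def run(args):
    # Run the tool with a short timeout; for build verification,
    # a hang is as disqualifying as a crash.
    return subprocess.run(args, capture_output=True, text=True, timeout=30)

def test_tool_launches():
    # If the binary can't even report its version, reject the build.
    result = run(["mytool", "--version"])
    assert result.returncode == 0

def test_help_is_available():
    # A missing or crashing help screen suggests a corrupt build.
    result = run(["mytool", "--help"])
    assert result.returncode == 0
    assert "usage" in result.stdout.lower()
```

Running such a suite with pytest -x stops at the first failure, which matches the intent described above: a failed smoke test rejects the build rather than producing a long list of bug reports.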
The following examples illustrate the use of regression tests:
* Removing Form Data in FireFox and FireBird
* Reappearance/Mutation of Buffer Overflow in ID3v2 tags
* Reappearance of a WinAmp ID3 HTML Bug