First of all, the quantity and type of regression testing we do depend largely on the phase of the project.
Mainly, we do three types of regression testing:
1- Automated regression testing done by the development team: consists of an automated framework that runs overnight and performs a few hundred tests covering basic operations (some areas of functionality cannot be covered by these tests for logistical reasons, or are not efficient to test this way). Runs before releasing a minor version to the test team.
2- Manual testing done by the test team: consists of running test cases in order to certify that the main functionalities are still working. It is a subset of a large test suite in which tests are deliberately loosely defined, and testers are (or should be) encouraged to make variations and to take responsibility and think, instead of just running predefined steps (when steps are defined at all). Some test specifications consist simply of a parameter name, for instance. These tests usually start after a minor release and run continuously; between releases they are almost always being executed. In our case, it is not cost-effective, or even possible, to automate most of these tests, due to the nature of the software we build. I will not elaborate on this, because much has been said about the dangers of automation and I mostly agree with it.
3- Semi-automated tests done by testers: consist of running predefined environments. The emphasis is on repetition and load or stress, rather than on individual functionalities. Also run continuously after minor releases, these tests usually find important crashes not visible in simpler tests, and allow taking some measurements of system performance, which can then be compared with other releases, aiming at continuous improvement.
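As a rough illustration of the kind of overnight framework described in item 1 (the function names and the toy system under test here are my own assumptions, not the actual tooling), a minimal sketch using Python's unittest:

```python
import unittest


# Hypothetical stand-in for one of the product's "basic operations";
# a real nightly framework would exercise the actual system instead.
def create_profile(name):
    if not name:
        raise ValueError("name required")
    return {"name": name, "attributes": {}}


class SmokeTests(unittest.TestCase):
    """A tiny slice of the hundreds of checks an overnight run performs."""

    def test_create_profile(self):
        self.assertEqual(create_profile("alice")["name"], "alice")

    def test_create_profile_rejects_empty_name(self):
        with self.assertRaises(ValueError):
            create_profile("")


def nightly_suite():
    # The real framework would aggregate many such suites and run them
    # unattended overnight, gating the minor release to the test team.
    return unittest.TestLoader().loadTestsFromTestCase(SmokeTests)


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(nightly_suite())
```

In practice a scheduler (cron or a CI server) would launch such a runner every night and a failing result would block the release.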
Although not recognized as regression testing, I have found, over my 12 years in this kind of job, that much regression testing also happens in an activity that involves a lot of exploration and dispersion: defect verification. Even when there was no formal regression testing, I think that verifying defect fixes after a release amounts to very effective regression testing:
- when trying to reproduce a (presumably fixed) defect, side effects are often found;
- in this process, and in the variations on the test, the changed code is heavily exercised. It is important that, when verifying a defect, the tester not only reproduces the exact scenario but also makes some variations, including the “opposite tests”, i.e., the ones that had no problem. For instance, if there was a problem deleting an attribute, I should also make sure that, after the fix, other attributes are not inadvertently deleted;
- I find the mental process of verifying a defect very motivating for “going a little further”, thus inspiring new test cases and the exploration of related functionalities.
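The attribute-deletion example above can be sketched as a verification script. The `AttributeStore` class and the fix scenario are hypothetical stand-ins I am inventing for illustration; the point is the two-step structure: reproduce the fixed defect, then run the opposite test.

```python
# Hypothetical attribute store standing in for the system under test.
# Defect being verified: "deleting an attribute failed" (now fixed).
class AttributeStore:
    def __init__(self, **attrs):
        self._attrs = dict(attrs)

    def delete(self, name):
        # The (fixed) operation under verification.
        del self._attrs[name]

    def names(self):
        return set(self._attrs)


def verify_delete_fix():
    store = AttributeStore(color="red", size="large", shape="round")

    # 1) Reproduce the exact defect scenario: the delete now succeeds.
    store.delete("color")
    assert "color" not in store.names()

    # 2) The "opposite test": attributes that never had a problem must
    #    not be inadvertently deleted by the fix.
    assert store.names() == {"size", "shape"}


verify_delete_fix()
```

A tester would of course do this interactively against the real product, but the shape of the check is the same: confirm the fix, then confirm the fix broke nothing adjacent.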
What could be improved (in my case):
- Synchronization between 1 and 2 could allow for more efficient manual testing coverage; this could mean including some testers on the team that performs 1.
- I am still looking for the right tools/methodologies for managing the manual tests more efficiently.
Wednesday, October 10, 2012