Friday, May 15, 2009

Learning from mistakes and test cases

Pawel questioned the usefulness of test cases in one of his blog posts. I dislike useless documentation as much as any sane developer, but I see a strong reason for having test cases besides outsourcing the tests (a reason one of the readers pointed out in the comments). My reason is regression testing.

Every time you test a new build of the software, you find bugs. Some are easy to reproduce and appear in daily usage scenarios. You want them fixed; otherwise they will be noticed by the users, and you most probably don't want that.

In future builds, you would also like to stay free of the previously discovered bugs, so you make notes about the context and the steps that led to each bug's appearance. Surely, not every problem you find is worth tracking in a test case. A crash when accessing a web page with valid parameters doesn't usually need a test case, as you'll probably notice it anyway if it comes back.

At a certain point the build becomes stable and free of obvious problems, the ones that are easy to reach and reproduce. You find fewer bugs, but subtler ones. The contexts in which they appear are not straightforward. Who would've thought to hit the Back and Forward buttons in a wizard four times in a row? Still, it's a malfunction, and unless it happens only with Konqueror running on SuSE 6, you would probably want to fix it.

You make a note to check this bug in the next build as well. Over time, the list of notes grows. You need to detail the context and the steps as much as needed, not necessarily as much as possible, because recalling all the details a few months later doesn't really work.
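As a sketch, such a note only needs enough context and steps to reproduce the bug later. Everything below (the field names, the wizard details, the build date) is a made-up illustration, not taken from any real bug report:

```
Test case: wizard-back-forward
Context:  account-creation wizard, page 2 of 4
Steps:    1. Open the wizard and advance to page 2.
          2. Press Back, then Forward, four times in a row.
Expected: The wizard stays on a valid page; no crash, no lost input.
Last seen failing: build of 2009-05-12
```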

Without noticing it, or without wanting to acknowledge it from the beginning, you now have a list of test cases.

If, in the future, you think some of the test cases are no longer worth checking because they have become obsolete, just delete them. Or, better yet, mark them as obsolete, so you can revive them later with some adjustments.

There is, of course, automation: having the computer run a regression suite instead of going through a list of test cases yourself. But this shows the same thing once again. These automated tests are, in fact, test cases in a machine-readable form.
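A minimal sketch of such a machine-readable test case, using Python's unittest module. The `Wizard` class, its `back()`/`forward()` methods, and the page names are all assumptions for illustration; the point is that the recorded context and steps translate directly into an automated check:

```python
import unittest

# Hypothetical minimal wizard (illustration only, not from the post):
# a list of pages and an index, with Back/Forward navigation.
class Wizard:
    def __init__(self, pages):
        self.pages = pages
        self.index = 0

    def back(self):
        if self.index > 0:
            self.index -= 1

    def forward(self):
        if self.index < len(self.pages) - 1:
            self.index += 1

    @property
    def current_page(self):
        return self.pages[self.index]

# The regression test encodes the noted context and steps: pressing
# Back and Forward four times in a row must leave the wizard on a
# valid, expected page instead of crashing or losing its position.
class BackForwardRegressionTest(unittest.TestCase):
    def test_back_forward_four_times(self):
        wizard = Wizard(["welcome", "settings", "summary"])
        wizard.forward()  # advance past the first page
        for _ in range(4):
            wizard.back()
            wizard.forward()
        self.assertEqual(wizard.current_page, "settings")

if __name__ == "__main__":
    unittest.main()
```

Once a note like this lives in the test suite, every build runs it automatically, which is exactly the regression check the written test case was meant to provide.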

1 comment:

kZan said...

In my view, a good solution for testing is a test case library combined with an automated testing tool. Together, these can be used for a sanity check each time a new build is released. The library must contain a set of relevant test cases, identified during several testing iterations.

Running the sanity check before any subtler testing is a starting point for the testing activities that assure the quality of the build.

Of course, this method is recommended for more complex software with periodic releases.