Wednesday, June 28, 2006

To guess, or not to guess?

Traditionally, my testing follows a reasonably simple structure. Requirements are captured, and evolve over time. A design is built for these requirements, and tests are built to ensure the design meets the requirements. Then the design is implemented in code, and as the requirements change, the tests change accordingly. So what happens when the requirements are incomplete, or one doesn't actually have requirements in the first place?

In my mind, I can naively think of two approaches to dealing with the problem.
  1. The first method is to design tests only for those requirements that exist: if it is not in the requirements, don't bother testing it. (Of course, this must be applied with some common sense: if "must not kill humans" isn't in the requirements, you'd still like to test for it.)
  2. The second method is to "guess" what future (or ambiguous) requirements might be, and design tests accordingly.
Now, which approach is best? Well, like everything else in Software Engineering, we find that neither is really that good, and as a result we need to either cut our losses and go with one of them, or invent our own approach in the hope that it is better. To expand on this:

Approach One
  • Every time a requirement is added, we need to design test cases that cover it.
  • It's a lazy form of testing: if you only have a small number of requirements, you don't have a great deal of work to do.
  • It does, however, have the advantage of reducing the number of bug reports that will only be turned away with "this isn't part of the requirements, WONTFIX".
Approach Two
  • Guessing requirements is a BAD idea. If you get it wrong, you are left with tests you must change, and developers annoyed at the number of WONTFIX bugs being submitted.
  • Ambiguous requirements mean that someone isn't doing their job properly in requirements capture and analysis. If they can't do their job, how can you do yours with any degree of confidence?
  • On the other hand, developers constantly throw new code into their projects in anticipation that it will be needed in the future. Why shouldn't a tester do the same?
Thus, we end up in somewhat of a conundrum. This is compounded if you're using iteration-based development, where each iteration brings new requirements and features to test. Perhaps more than anything, this reflects the need to work out a plan right at the start of the project, one that predicts how much testing, design, development, and so on is needed at each phase, reducing the chance of people having nothing to do, or no idea of how to go about it.
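Whichever way you lean, it helps to at least make the guessing visible. As a rough sketch (not something from any particular tool; the requirement IDs, the decorator, and the test names here are all hypothetical), you could tag each test with the requirement it claims to trace to, then flag any test whose requirement isn't in the captured set:

```python
# Hypothetical example: tag tests with requirement IDs, then report
# which tests are "speculative" (tracing to uncaptured requirements).

CAPTURED_REQUIREMENTS = {"REQ-001", "REQ-002"}  # what's actually been captured

def traces(*req_ids):
    """Decorator recording which requirements a test covers."""
    def wrap(fn):
        fn.req_ids = set(req_ids)
        return fn
    return wrap

@traces("REQ-001")
def test_login_succeeds():
    assert True  # placeholder for a real check

@traces("REQ-999")  # speculative: a guessed future requirement
def test_bulk_import():
    assert True  # placeholder for a real check

def speculative_tests(tests):
    """Names of tests whose requirements aren't all in the captured set."""
    return [t.__name__ for t in tests
            if not t.req_ids <= CAPTURED_REQUIREMENTS]

print(speculative_tests([test_login_succeeds, test_bulk_import]))
# prints ['test_bulk_import']
```

The point isn't the mechanism (a real test framework's markers would do the same job); it's that the "guessed" tests are labelled as such, so when a guess turns out wrong you know exactly which tests to throw away, and developers can see up front which bug reports are liable to be WONTFIXed.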

