9.5 Testable requirements
Requirements must be testable. Some guidance suggests requirements must be SMART (Specific, Measurable, Attainable, Realistic, Testable and Traceable). I suggest this is a reasonable guideline but, as with most advice, you need to apply it intelligently. Remember, guidance is not law.
We can test manually or automatically; automatic testing is vastly preferable for the simple reason that automated tests are more likely to be run, and to be run more frequently.
9.5.1 Features versus design
Ousterhout [Ous19] suggests that Test Driven Development (TDD)9 is harmful because it results in ‘tactical’ development rather than ‘strategic’ design decisions. This largely depends on how you interpret TDD. If you treat TDD as automated specification verification then its efficacy depends on the specification. If you specify (requirements) at a design level then TDD will work well. It is true that, as presented by most advocates, TDD will result in technical solutions. I’m also sympathetic to Ousterhout’s promotion of design over features.
9.5.2 Test automation
If we can write tests to confirm our requirements have been met, and then automate those tests, we are golden. These automated tests can be run repeatedly as our project moves forward, acting as a ratchet, preventing any backsliding.
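As a minimal sketch of this ratchet, consider a requirement expressed directly as an automated test. The function `apply_discount` and the 50% rule are hypothetical stand-ins for real product code and a real requirement:

```python
# A requirement ("discounts never exceed 50%") captured as an automated
# test. `apply_discount` is a hypothetical stand-in for product code.

def apply_discount(price: float, discount: float) -> float:
    """Hypothetical product code: apply a discount, capped at 50%."""
    capped = min(discount, 0.5)  # the requirement: never more than 50% off
    return round(price * (1 - capped), 2)

def test_discount_never_exceeds_fifty_percent():
    # The requirement as an executable check. Run on every build, it
    # acts as a ratchet: any regression fails immediately.
    assert apply_discount(100.0, 0.7) == 50.0  # capped at 50%
    assert apply_discount(100.0, 0.2) == 80.0  # below the cap, unchanged

test_discount_never_exceeds_fifty_percent()
```

Once such a test is wired into the build, the requirement is re-verified every time the project moves forward.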
9.5.3 Tests we cannot automate
Some requirements cannot be tested automatically, or it may be impractical to automate them. For example, aesthetic design requirements are often subtle and a matter of opinion. They are important, but practically impossible to automate. Even if we somehow establish a baseline (say, record the GUI and save it as a ‘gold standard’, then replay the tests and compare the result to that standard), even a slight change to the GUI will throw the comparison out and we will have to manually assess a new gold standard. Repeat this too often and the automated tests start to lose their usefulness.
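The brittleness of such a gold standard can be illustrated with a minimal sketch, assuming the GUI can be rendered to bytes (faked here as a literal string) and compared by hash. The exactness of the comparison is precisely the problem: any change, however cosmetic, fails it:

```python
# A minimal golden-master sketch. The rendered GUI is an assumption,
# faked here as a byte string; real tooling would capture a screenshot.
import hashlib

# Baseline recorded once from a known-good render.
GOLD_STANDARD = hashlib.sha256(b"<button>OK</button>").hexdigest()

def matches_gold_standard(rendered: bytes) -> bool:
    """Exact comparison against the recorded baseline."""
    return hashlib.sha256(rendered).hexdigest() == GOLD_STANDARD

print(matches_gold_standard(b"<button>OK</button>"))   # unchanged GUI: True
print(matches_gold_standard(b"<button>OK </button>"))  # one extra space: False
```

A single added space fails the comparison, so every intentional tweak forces a human to bless a new baseline, which is exactly the maintenance burden described above.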
Although we may not be able to perform a test automatically, it is worthwhile setting things up so that test results are confirmed automatically, making this confirmation a condition of passing the product. For example, we could maintain a spreadsheet in which each line holds a manual test and, next to it, the latest result; we can then write a utility to ‘test’ that every result field is ‘pass’ before declaring the product ready for the next phase.
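The spreadsheet gate described above can be sketched in a few lines. The CSV layout, column names, and the example tests are all assumptions; the point is only that the confirmation step is automatic even though the tests themselves are manual:

```python
# A minimal sketch of the spreadsheet gate: each row holds a manual
# test and its latest recorded result; the utility 'tests' that every
# result is 'pass'. Columns and contents here are hypothetical.
import csv
import io

SHEET = """test,result
Logo renders crisply on retina displays,pass
Colour scheme matches brand guidelines,pass
Error messages are polite and clear,fail
"""

def outstanding_failures(csv_text: str) -> list[str]:
    """Return the names of any manual tests not marked 'pass'."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["test"] for row in reader if row["result"].strip() != "pass"]

failures = outstanding_failures(SHEET)
if failures:
    print("Not ready for next phase:", failures)
else:
    print("All manual tests pass")
```

Run as part of the build, this script blocks the release until a human has recorded a ‘pass’ against every manual test.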
9 A development methodology in which tests are written before the implementation code.