11.2 Testing Principles
All testing should follow a few simple principles:
- Isolate the thing under test.
- Control variables that affect the thing under test.
- Confirm expectations of the thing under test.
- Follow the Country Code (leave things as you found them).
11.2.1 Isolate the thing under test
Regardless of what we are testing, from entire systems down to single functions, we need to understand the boundary of the thing under test and isolate it. This isolation typically requires us to simulate some input at the boundary of the item under test and to capture its output (or effect). These ‘simulated boundaries’ vary in complexity according to what is being tested. A simple function can be tested with a series of direct calls, but higher-level user acceptance testing may involve simulating complex external service interfaces (for example, when testing an online store we may need to simulate the external payment processor).
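A minimal sketch of this kind of isolation, using hypothetical names (OrderService, FakePaymentProcessor): the service under test depends on a payment processor only through its boundary, so the test substitutes a fake and observes the effect there.

```python
class FakePaymentProcessor:
    """Simulated boundary: records charges instead of calling a real gateway."""
    def __init__(self):
        self.charges = []

    def charge(self, amount, card_token):
        self.charges.append((amount, card_token))
        return {"status": "approved", "amount": amount}


class OrderService:
    def __init__(self, payment_processor):
        # The dependency is injected at the boundary, so it can be simulated.
        self.payment_processor = payment_processor

    def place_order(self, amount, card_token):
        result = self.payment_processor.charge(amount, card_token)
        return result["status"] == "approved"


def test_place_order_charges_the_payment_processor():
    fake = FakePaymentProcessor()
    service = OrderService(payment_processor=fake)   # the thing under test, isolated

    assert service.place_order(42.50, "tok_test") is True
    assert fake.charges == [(42.50, "tok_test")]     # confirm the effect at the boundary
```

The same shape scales up: at the user-acceptance level the ‘fake’ becomes a stubbed payment gateway behind an HTTP interface rather than an in-process object, but the principle of substituting at the boundary is unchanged.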
This is often easier said than done, especially for non-functional, system-level requirements. It is often impractical to isolate a system in a way that reflects the conditions needed to confirm a non-functional requirement. Suppose, for example, an API endpoint is being overwhelmed, so you decide to rate limit it. How best to test this? It is simple enough to confirm that the rate limit is correctly set in your configuration, but does that solve the endpoint being overwhelmed? We might test this in a controlled test environment by hitting the endpoint with high load; the problem is that the issue may only occur when overall load is high, or when other processes on the web server are consuming resources, and so on. The best we can hope for is to find some set of circumstances that reliably reproduces the issue before our change and then confirm that, after our change, those same tests pass. The problem is that such tests tend to be fragile even with the most careful isolation (and controlling of variables, see §11.2.2). It is best to mark them as ‘fragile’ and run them as a special subset of testing where we know to expect some issues (i.e. they are not part of an automated pipeline but are run with operator oversight). Such tests may also be candidates for monitoring rather than testing.
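One way to keep such a subset separate, assuming pytest is the test runner, is a custom ‘fragile’ marker; the pipeline excludes it (pytest -m "not fragile") and an operator runs it deliberately (pytest -m fragile). The endpoint URL and the load pattern below are purely illustrative assumptions.

```python
import pytest
import requests  # assumed HTTP client; the URL below is hypothetical

RATE_LIMITED_URL = "https://staging.example.com/api/search"  # hypothetical staging endpoint


@pytest.mark.fragile
def test_search_endpoint_sheds_load_when_hammered():
    # Reproduce the overload circumstances as closely as practical, then
    # confirm the rate limiter rejects the excess requests (HTTP 429).
    responses = [requests.get(RATE_LIMITED_URL) for _ in range(500)]
    assert any(r.status_code == 429 for r in responses)


# Register the marker (e.g. in pytest.ini) so pytest does not warn about it:
#   [pytest]
#   markers =
#       fragile: load-dependent tests run with operator oversight, not in CI
```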
11.2.2 Control variables
Related to the first principle, we must ensure that we fully control any variable condition in both the environment and the interface of the item under test. This requires a full appreciation of the thing under test so that we can identify all the material variables that may affect our test. For example, when testing a function that sums two numbers we control the inputs, capture the output, and confirm we get the correct result. The function runs in a complex environment with a specific version of a language compiler, CPU characteristics (type, clock speed, etc.), memory, and so on, but these are not variables likely to affect our current test, so we need not control for them. If, however, we are performance testing this same function to ensure it can meet performance criteria (such as summing one million floating point numbers with ten decimal places of accuracy each second) then we may need to control things like compiler version and CPU and memory characteristics, as these may now have an influence on our test outcome.
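A sketch of both cases, in Python for illustration (where the ‘compiler’ becomes the interpreter version). The correctness test controls only the inputs; the performance variant assumes a fixed, documented environment and the one-second budget from the example criterion above.

```python
import random
import sys
import time


def add(a, b):
    return a + b


def test_add_returns_the_sum_of_its_inputs():
    # Only the inputs are material variables here; CPU, memory and
    # interpreter version do not affect the outcome.
    assert add(2, 3) == 5
    assert add(-1.5, 1.5) == 0.0


def test_add_sums_one_million_floats_within_one_second():
    # For the performance criterion the environment becomes material:
    # run on pinned hardware and record the interpreter version with the result.
    values = [random.uniform(0, 1) for _ in range(1_000_000)]
    start = time.perf_counter()
    total = 0.0
    for v in values:
        total = add(total, v)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"took {elapsed:.2f}s on Python {sys.version.split()[0]}"
```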
11.2.3 Confirm expectations
This is the essence of testing: given some controlled condition, does the thing under test meet our expectations? The implication of this principle is that we have clear expectations (these being our requirements, expressed in the language of the problem domain). For lower-level testing, the tests themselves may be the requirements. The higher level our testing, the more likely our requirements will need to be translated from the language of the problem domain into the technical language of the solution.
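A small sketch of this translation, using a hypothetical business rule (‘orders over 100 qualify for free shipping’): the test names state the expectation in problem-domain terms, while the assertions express it in the solution’s terms.

```python
FREE_SHIPPING_THRESHOLD = 100.00  # assumed business rule, for illustration only


def shipping_cost(order_total):
    return 0.00 if order_total > FREE_SHIPPING_THRESHOLD else 4.99


def test_orders_over_the_threshold_ship_free():
    assert shipping_cost(100.01) == 0.00


def test_orders_at_or_below_the_threshold_pay_standard_shipping():
    assert shipping_cost(100.00) == 4.99
```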
11.2.4 Leave things as you find them
This is an extension of controlling variables. As each test concludes, we should ensure we leave things as they were before the test ran. This simplifies the setup for all subsequent tests and means we can run individual tests, or run tests in any order (a powerful capability as our system grows).
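A sketch of this principle using a pytest fixture: the setup establishes the controlled condition, the code after the yield restores it, so every test starts from the same state and can be run alone or in any order.

```python
import os

import pytest


@pytest.fixture
def scratch_dir(tmp_path):
    original = os.getcwd()
    os.chdir(tmp_path)        # set up the controlled condition
    yield tmp_path
    os.chdir(original)        # leave things as we found them


def test_writes_report_into_current_directory(scratch_dir):
    with open("report.txt", "w") as f:
        f.write("ok")
    assert (scratch_dir / "report.txt").read_text() == "ok"
```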