It’s all well and good to talk about how systems need to prove they work, but how shall we make that happen?

The goals of testing, in our view, are to:

  1. speed up development by catching mistakes early and reducing rework

  2. increase everyone's confidence in the quality of systems by providing automatic assessment

  3. prove code is ready for production

The types of test in our toolkit are broadly:

  1. Unit – a hunk of code that exercises some other code in isolation – typically with no environmental requirements

  2. Integration – usually a program of some kind that may require simple or elaborate environmental setup to simulate the real-world conditions the code will run in – typically feeding in “test pattern” data and checking that the output matches a stored “gold image”. A mismatch is a “regression”, hence these are sometimes called regression tests.

  3. System or smoke tests – a partially or fully set-up distributed system that takes test-pattern input and produces known outcomes, proving a pipelined system can operate in harmony. This often provides the proof that networking arrangements and monitors are configured correctly.
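To make the first two categories concrete, here is a minimal sketch in Python's standard `unittest` framework. Everything here is hypothetical and for illustration only: a toy `normalize` function stands in for the code under test, and the gold image is inlined rather than read from a file so the sketch stays self-contained. The unit test exercises one function in a vacuum; the integration-style test feeds test-pattern input through a small pipeline and compares the full output against the gold image, so any mismatch surfaces as a regression.

```python
import unittest


def normalize(record):
    """Toy function under test: trim and lowercase a name field."""
    return {"name": record["name"].strip().lower()}


def pipeline(records):
    """Toy 'system': the pipeline whose full output we snapshot."""
    return [normalize(r) for r in records]


class NormalizeUnitTest(unittest.TestCase):
    """Unit test: one function, in a vacuum, no environment needed."""

    def test_strips_and_lowercases(self):
        self.assertEqual(normalize({"name": "  Ada "}), {"name": "ada"})


class PipelineGoldImageTest(unittest.TestCase):
    """Integration-style test: run test-pattern data through the
    pipeline and compare against the stored gold image.
    In practice the gold image would live in a file checked into
    the repo; it is inlined here to keep the sketch runnable."""

    TEST_PATTERN = [{"name": " Ada "}, {"name": "GRACE"}]
    GOLD_IMAGE = [{"name": "ada"}, {"name": "grace"}]

    def test_output_matches_gold_image(self):
        self.assertEqual(pipeline(self.TEST_PATTERN), self.GOLD_IMAGE)


if __name__ == "__main__":
    unittest.main()
```

Note the design difference rather than the tooling: the unit test pins down one behavior cheaply, while the gold-image test asserts on the whole output at once, which is what lets an unexpected change anywhere in the pipeline show up as a failure.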

At risk of belaboring the point – unit tests mostly buy us developer speed; they don’t prove much about correctness in real-world systems. Integration tests are where we start really gaining confidence that our code works. We may get all the way to “go right to prod, sir or ma’am!” from them, or we may have to run the system in a full smoke environment to be sure. It varies from system to system and client to client.

The most important task of test writers is to communicate clearly with the business stakeholders about what they will be testing and roughly how – and to agree with them that the important questions are being answered by our automated testing. Then we can know it’s safe to go to prod.