Rangle has acquired key insights doing Agile software development projects for a diverse group of clients. In this article, we take a look at when testing ought to happen, which parts of it are actually worth doing, and how to tell.
In our experience, it is common in siloed organizations for testing to happen long after code has been changed. This gap between writing and revising code introduces two hidden costs (which are anything but hidden to the developers actually doing the work):
- The cost of context switching is higher.
- The odds are higher that something else has already been built on top of the proposed change, which might then also need to be rewritten.
To reduce these costs, testing should be done as soon as possible after the code is changed: immediately if possible, but certainly on the same day. To achieve this, our DevOps specialists work with clients to engineer a continuous integration and deployment pipeline so that every change is tested immediately in an isolated environment. By ensuring that every change is tested in isolation, on all platforms of interest, we enable developers to work in short, productive increments, which in turn allows them to check in with the Product Owner and other stakeholders at least weekly.
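As a concrete sketch, such a pipeline might be defined along the following lines. The job names, commands, and platform matrix here are hypothetical, and the exact syntax depends on the CI system (this sketch is loosely modeled on GitHub Actions):

```yaml
# Hypothetical CI definition: run the full test suite on every push,
# in a clean environment, across all platforms of interest.
on: [push, pull_request]

jobs:
  test:
    strategy:
      matrix:
        platform: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.platform }}
    steps:
      - uses: actions/checkout@v4
      - run: npm ci    # reproducible install in an isolated workspace
      - run: npm test  # every change is tested before any merge
```

Because the workspace is rebuilt from scratch for each run, a passing build means the change works in isolation, not just on the author's machine.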
Part of setting up such a pipeline, however, is understanding the practical limits to testing, both technically and from an ROI point of view. For example, we usually don't bother with manual testing until all automated tests have passed, and rely on tools such as Robot and Nightmare.js to streamline the automated portion of that work.
As another example, unit testing is crucial, but by itself cannot check the combinations of components required to satisfy business needs. End-to-end testing does this, but setting up tests for real-world cases can be expensive. It can also make the system brittle: if rewriting those tests is costly, people become reluctant to change course as business needs evolve or become clearer. That said, the reward can also be significant: getting started with BrowserStack or Sauce Labs isn't easy (particularly if developers need to tunnel through a company's VPN restrictions), but once an organization develops that expertise, it can reap the benefits for years.
On balance, we recommend that teams defer work on fully automating end-to-end testing until other issues have been addressed. In particular, test automation's value isn't apparent until all of the project's stakeholders have a solid understanding of how to define acceptance criteria in testable ways.
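To illustrate what "defining acceptance criteria in testable ways" means in practice, consider a hypothetical rule such as "orders over $100 get a 10% discount." Phrased that way, it can be checked mechanically rather than argued about (the function, amounts, and rule below are invented for illustration):

```javascript
// Hypothetical business rule: 10% off orders whose subtotal exceeds $100.
// Amounts are in cents to avoid floating-point rounding on currency.
function applyDiscount(subtotalCents) {
  return subtotalCents > 10000
    ? subtotalCents - Math.round(subtotalCents * 0.1)
    : subtotalCents;
}

// The acceptance criterion becomes two concrete, automatable checks.
console.log(applyDiscount(15000)); // 13500 — discount applied
console.log(applyDiscount(8000));  // 8000 — unchanged below the threshold
```

Once every criterion is written this way, automating the tests is mostly mechanical; until then, automation effort tends to be wasted on ambiguous requirements.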
When test automation is put in place, many groups specify targets for it so that teams can assess their progress. One common metric is code coverage, which measures how much of the application's source code is actually exercised by tests. Recent studies have shown that there is only weak correlation between code coverage and the effectiveness of test suites, but we still believe that having a target serves as a warning light to signal developers when they have "forgotten" to do the testing they should. We recommend that mature teams aim for 80% code coverage, though this depends on context. For example, if a React application contains a lot of stateless components, 60% might be a reasonable target.
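A coverage target like this is easy to enforce automatically as a pipeline gate. The following sketch shows one way to do it, assuming a coverage summary of the shape most JavaScript coverage tools emit; the function name and figures are hypothetical:

```javascript
// Sketch: fail the build when line coverage drops below a target percentage.
// `summary` mimics the { lines: { covered, total } } shape common to
// JavaScript coverage reporters (an assumption for this example).
function meetsCoverageTarget(summary, targetPct) {
  const { covered, total } = summary.lines;
  const pct = total === 0 ? 100 : (covered / total) * 100;
  return pct >= targetPct;
}

// 812 of 1000 lines exercised → 81.2%, which passes an 80% target...
console.log(meetsCoverageTarget({ lines: { covered: 812, total: 1000 } }, 80)); // true
// ...while 54% does not.
console.log(meetsCoverageTarget({ lines: { covered: 540, total: 1000 } }, 80)); // false
```

Wiring a check like this into CI turns the target from a guideline into the "warning light" described above: the build goes red the moment coverage slips.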
The best metric, however, is lag: the time it takes between code being written and tested, and between being tested and merged. Reducing this time will naturally lead developers to work in shorter increments, which in turn will encourage them to create a more modular architecture that can evolve at the speed of the business.
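Lag is also straightforward to measure from timestamps the pipeline already records. As a sketch, the median hours between two events (say, "code pushed" and "tests finished") can be computed like this; the event pairs and dates below are invented for illustration:

```javascript
// Sketch: measure lag as the median hours between paired pipeline events,
// e.g. "code written" → "tests finished". Timestamps are ISO 8601 strings.
function medianLagHours(pairs) {
  const lags = pairs
    .map(([start, end]) => (Date.parse(end) - Date.parse(start)) / 3.6e6)
    .sort((a, b) => a - b);
  const mid = Math.floor(lags.length / 2);
  return lags.length % 2 ? lags[mid] : (lags[mid - 1] + lags[mid]) / 2;
}

// Three hypothetical changes with lags of 1h, 2h, and 26h: the median (2h)
// is less distorted by the one slow outlier than the mean would be.
const lag = medianLagHours([
  ['2024-03-01T09:00:00Z', '2024-03-01T10:00:00Z'],
  ['2024-03-01T11:00:00Z', '2024-03-01T13:00:00Z'],
  ['2024-03-01T15:00:00Z', '2024-03-02T17:00:00Z'],
]);
console.log(lag); // 2
```

Tracking this number over time shows whether the feedback loop is actually tightening, which is the outcome the rest of the tooling exists to serve.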
With contribution from Greg Wilson