On Testing

16 Jun 2023

I've been doing a lot of interviews recently. Not all of them were interesting, but on rare occasions, I had the opportunity to have a meaningful conversation with people outside my bubble. Sometimes I've expressed opinions that surprised others, for example:

I rarely do unit testing and prefer integration tests instead.

It may sound strange indeed. Didn't we agree long ago that testing is good and TDD is a way to improve it? If I find unit tests unhelpful, does that mean I'm against TDD too?

To answer this question, let's figure out what TDD means and what kinds of tests exist. I mentioned unit and integration tests earlier. Unit tests cover a particular unit of code, usually a class or a module. Integration tests aim at testing a combination of such modules. These definitions are the most popular ones; at least, that's how they are defined on Wikipedia and in several books¹. The unit-integration taxonomy separates tests by the size and complexity of the systems they cover. It says nothing about the techniques used in tests, not even whether tests are automated or manual, although the former is usually assumed.

TDD is a software development methodology that encourages developers to write automated tests before the actual code and to run them continuously while programming and refactoring. "Test-Driven Development by Example" by Kent Beck, the book that first defined TDD, does not say how complex the system under test should be. When asked how much ground each test should cover, Kent answers:

You could write the tests so they each encouraged the addition of a single line of logic and a handful of refactorings. You could write the tests so they each encouraged the addition of hundreds of lines of logic and hours of refactoring. Which should you do? Part of the answer is that you should be able to do either. The tendency of Test-Driven Developers over time is clear, though — smaller steps. However, folks are experimenting with driving development from application-level tests, either alone or in conjunction with the programmer-level tests we've been writing.

Clearly, TDD does not restrict developers to unit tests only. I use it for integration tests, and it excels there as well. I'm curious why many sources on the Internet put both of them together, barely mentioning that TDD could be used on other test levels too.

So why am I not keen to write unit tests? Because in the context of my day-to-day job, I find them more limiting than assisting.

I mainly develop backends. Sometimes these are applications large enough to encompass all the required functionality ("monoliths"); more often lately, they are small programs performing only a handful of tasks ("microservices"). They communicate over HTTP, and a model of requests and responses is their API. Their logic changes often, sometimes drastically. As a result, classes and modules get reshuffled and method signatures get reshaped, yet these changes stay under the hood while the API remains backward-compatible. In such a situation, treating the whole service as the unit under test and writing tests for its API has higher utility than testing ever-changing classes or modules.
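To make the idea concrete, here is a minimal sketch of what "the service is the unit" looks like in practice. The toy service and its endpoint are entirely hypothetical; the point is that the test exercises only the HTTP contract, so `compute_greeting` and everything behind it can be reshuffled freely without touching the test.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical internal logic: free to be refactored, renamed, or split
# across modules -- the test below never imports it directly.
def compute_greeting(name):
    return {"greeting": f"Hello, {name}!"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Take the last path segment as the name, e.g. /greet/Alice -> Alice.
        name = self.path.rstrip("/").split("/")[-1] or "world"
        body = json.dumps(compute_greeting(name)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def test_greeting_api():
    # Spin up the whole service on an ephemeral port and test its API.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        port = server.server_address[1]
        with urlopen(f"http://127.0.0.1:{port}/greet/Alice") as resp:
            assert resp.status == 200
            assert json.load(resp) == {"greeting": "Hello, Alice!"}
    finally:
        server.shutdown()

test_greeting_api()
```

In a real codebase the server would be the actual application started in a test fixture, but the shape of the test stays the same: requests in, assertions on responses out.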

Another reason to favor integration tests comes from my mixed experience with ORMs. They can save you time and energy in uncomplicated cases, but they come with their own limitations and quirks that you need to master on top of the database you chose and its SQL dialect. I therefore tend to write SQL directly, which means the SQL needs testing too. This was problematic earlier, when computers were slower, so developers mocked the database and tested SQL separately, if at all. Now a laptop can spawn a database instance in a Docker container in seconds. I see no reason not to exercise this power and test against a real database².
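A sketch of what testing hand-written SQL against a real engine looks like. In practice the connection would come from a database spawned in a Docker container (e.g. via Testcontainers); here the stdlib's in-memory sqlite3 stands in so the example is self-contained, and the `orders` schema and query are hypothetical.

```python
import sqlite3

def total_paid_orders(conn):
    # Hand-written SQL under test -- no ORM in between.
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE status = 'paid'"
    ).fetchone()
    return row[0]

def test_total_paid_orders():
    # A real SQL engine executes the query; nothing is mocked.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO orders (status, amount) VALUES (?, ?)",
        [("paid", 10.0), ("paid", 5.5), ("cancelled", 99.0)],
    )
    assert total_paid_orders(conn) == 15.5

test_total_paid_orders()
```

Swapping the in-memory stand-in for a containerized Postgres changes only the fixture that yields `conn`; the tests themselves, and the SQL they verify, stay the same.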

I emphasize again that I hold this view only in the above context; it is not necessarily true in other cases. Web frontends and desktop applications can have vastly more complex inputs, which limits automated integration testing and leaves developers with unit tests. The same goes for firmware: you can automate its testing, but most of the code consists of component drivers, so you either invest an equal amount of time in carefully mocking the components' behavior or resort to testing particular code sections. I'm sure one can find backend examples where integration testing is not feasible either. Your mileage may vary.

Regardless of specific practices, automated testing is a valuable technique that gives developers the confidence to perform required code changes. It is one of the essential tools in my toolbox.

Test early, test often 🖖

  1. For example, in "A Practitioner's Guide to Software Test Design" by Lee Copeland, which I like and recommend.

  2. Especially since Testcontainers makes the setup so easy!