Embracing 'Less, But Better' in Test Automation: Beyond Speed and Quantity

test automation

Explore the 'less, but better' philosophy in test automation, inspired by Dieter Rams. This article challenges the common focus on faster execution and higher test coverage, arguing that true progress lies in enhancing feedback quality and product value, rather than merely increasing speed or quantity, especially with emerging AI tools.

There’s an expression referenced in Greg McKeown’s excellent book ‘Essentialism: The Disciplined Pursuit of Less’ that I’ve been finding myself thinking about on a regular basis for the last couple of months, and that’s “Weniger, aber besser.” This German expression translates to ‘less, but better’ in English, and it describes the approach behind the designs of German industrial designer Dieter Rams. While I am not an industrial designer, I believe this expression is highly relevant for test automation and the approaches teams often adopt.

Why? For two primary reasons:

  1. A significant focus exists on 'coverage,' with teams striving to write numerous tests to meet specific coverage metrics. The type of coverage varies, from simplistic statement coverage to more valuable forms like mutation coverage.
  2. Even more extensively discussed today is how AI-powered tools can dramatically accelerate test writing.

In essence, much of the conversation revolves around doing more and doing things faster. What often seems missing is the discussion on how to do things better. For me, personally, exploring how we can improve is far more interesting than merely increasing speed or quantity. Genuine progress, after all, only occurs when we truly strive to do things better than before.

One might argue that sometimes faster is better. And that perspective holds merit. When discussing the purpose of test automation, I frequently emphasize 'valuable feedback, fast.' Achieving the right feedback more quickly—for instance, by automating repetitive tasks or optimizing inefficient tests—is progress. It certainly contributes to overall improvement.

However, a crucial aspect that often remains unchanged when we accelerate feedback delivery is its quality. We gain no new insights simply by retrieving the same information more efficiently. Faster feedback doesn't inherently improve our product, even if it refines the process. This, to me, signifies a missed opportunity for substantial value. Our product—whether it's the end-user facing application or the tests designed to assess its state—doesn't necessarily improve just because we write and run tests faster.

The same principle applies to 'more.' 'More' does not automatically equate to 'better'; it simply means 'more.' More feedback, more tests, more code. Like speed, a greater number of tests alone does not guarantee a superior product. In fact, having 'more' can sometimes negatively impact the product. Consider those low-value tests, often written solely to achieve a coverage percentage for code that doesn't implement significant behavior. Do these truly add value, or do they function as dead weight? Are they worth the execution time, result review, and ongoing maintenance?
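To make this concrete, here is a minimal sketch of what such a coverage-driven, low-value test can look like, contrasted with one that actually provides feedback. All names here (`Discount`, the test functions) are hypothetical illustrations, not taken from any specific project:

```python
class Discount:
    """Toy production code: applies a percentage discount to a price."""

    def __init__(self, percentage: int):
        self.percentage = percentage

    def apply(self, price: float) -> float:
        return price * (100 - self.percentage) / 100


def test_apply_discount_executes():
    # This test achieves 100% statement coverage of Discount.apply...
    Discount(10).apply(200.0)
    # ...but asserts nothing. A mutation of the implementation
    # (say, '+' instead of '-') would go unnoticed: the test
    # produces a coverage number, not feedback.


def test_apply_discount_gives_feedback():
    # Same statement coverage, but this test would catch that mutant.
    # It is the kind of test that mutation coverage rewards.
    assert Discount(10).apply(200.0) == 180.0
```

Both tests count identically towards a statement coverage target, which is exactly why 'more' of the first kind adds execution time and maintenance cost without improving the product or the feedback it receives.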

This concern largely encapsulates my current reservations about AI, or more precisely, the prevalent discourse surrounding it in the blogosphere and conference talks. The discussion about AI's impact on test automation, software development, and software testing appears heavily biased towards 'faster' and 'more,' often at the expense of addressing 'better.'

To be clear, this is neither an attack on AI itself nor a rejection of new technology. AI isn't the root of these reflections or this post. My concern stems from how we use and discuss AI, which leads me to question whether, amidst the hype, we're overlooking something fundamental.

Extensive research already highlights the negative side effects of AI overuse, with results that are, quite frankly, worrying. I cannot help but ponder: is being captivated by the allure of 'faster, faster, faster' and 'more, more, more,' while neglecting to consider 'better,' another such side effect? I don't have a definitive answer yet, but this question has occupied my thoughts for some time.

Therefore, in an effort to introduce some balance to a world seemingly fixated on 'faster' and 'more,' I commit for the coming year to consistently reflect upon, discuss, and inquire about how we can truly make things better. This likely means I won't be writing extensively about accelerating task X with AI or generating more artifacts of type Y. I've never found such topics particularly compelling, and their novelty, for me, has long worn off.

My enduring interest lies in how we can leverage tools (including AI-powered ones) to enhance the quality of our work and the products we deliver. This will be the focus of my writing going forward, hopefully more frequently than before. Consider me a happy exception to the title of this post: writing more, but about doing better.