To Ship Trustworthy Software Faster, Consider How You Measure and Communicate Quality

Tags: metaautomation, quality, automation, test

In the software industry, there’s a common antipattern of working with “test automation” as if automation delivered the same values as manual testing, just faster and with fewer people.

Reality is more complicated. In fact, there’s significant opportunity cost – including, but not limited to, business risk – in working as if manual testing and automation had more in common than they do.

To ship software faster and with lower risk, it is important to understand the values of different approaches to measuring quality and to find the best balance between them.

I made a graphic of testing types to show the continuum of values from exploratory to verification, and the different roles that manual testing and automation play on that continuum. The horizontal axis is a continuum because, although any testing approach can find bugs and has something to say about whether the product works, some approaches are much better at finding the important bugs, and others are vastly better at delivering a prompt answer to the question “Does the system do what we need it to do?”

(Some might recall the “Agile Testing Matrix” of Brian Marick. I will address differences in my next post.)

Shipping software faster and at lower risk requires giving proper weight to the lower right of the graphic. By contrast, if the team followed Glenford Myers’ ancient (and therefore highly influential) statements that translate to “The point of testing is to find bugs” * then the quality-minded team members would all be over on the left, focused on the happy hunting grounds for bugs. Parts of the SUT outside those happy hunting grounds could break, and the team might not know it until the flaw had already hit customers and a customer who happens to have the time, the inclination, the knowledge, and the skill had generously reported it.

To summarize three important messages of the graphic:

  1. The team needs both exploratory and verification techniques to ship software.
  2. Manual testing does exploratory better.
  3. Automated checking** does verification better.

“MetaAutomation” is at the lower right. I created it to enable business value on the right side – the verification side – far beyond the capabilities of current practices and tools: to verify functional requirements, including performance, very fast and scalably, and to communicate the results efficiently and flexibly to those who need to know, including check failures, as they happen, in a way that is as actionable as possible. This new approach enables gated check-ins with many bottom-up and end-to-end (E2E) checks in addition to unit tests, which in turn fully guard against quality regressions, so quality in the SUT is always moving forward.
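To make the gated check-in idea concrete, here is a minimal Python sketch (not MetaAutomation itself): a gate script that runs hypothetical unit, bottom-up, and E2E suites and blocks the check-in if any of them fails. The suite names, directories, and the use of pytest are assumptions for illustration only.

```python
import subprocess
import sys

# Hypothetical check suites guarding the check-in; the names, directories,
# and pytest commands are placeholders, not part of any real project.
CHECK_SUITES = [
    ("unit", ["pytest", "tests/unit", "-q"]),
    ("bottom-up", ["pytest", "tests/bottom_up", "-q"]),
    ("end-to-end", ["pytest", "tests/e2e", "-q"]),
]

def gate() -> int:
    """Run every suite; return non-zero (block the check-in) on any failure."""
    failing = []
    for name, command in CHECK_SUITES:
        if subprocess.run(command).returncode != 0:
            failing.append(name)
    if failing:
        print(f"Check-in blocked; failing suites: {', '.join(failing)}")
        return 1
    print("All checks passed; the check-in may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```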

Speaking of unit tests – where are they in the graphic? Are they part of MetaAutomation? No.

Unit tests would be in the graphic just to the right of MetaAutomation, except for a few things (illustrated in the sketch after this list). Unit tests

  • only face developers, and have no visibility team-wide
  • aren’t based on product functional requirements; they’re based on what a unit is supposed to do
  • cannot verify functional requirements generally (exception: if a product functional requirement is expressed in a unit)
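To illustrate that contrast under invented names (the sales_tax unit and the place_order entry point are hypothetical), the first check below verifies what one unit is supposed to do, while the second verifies a product functional requirement through the path a customer actually exercises:

```python
# Hypothetical unit and SUT entry point, invented for illustration.
def sales_tax(subtotal: float, rate: float) -> float:
    """A unit: compute sales tax on a subtotal at a given rate."""
    return round(subtotal * rate, 2)

def place_order(items: list, tax_rate: float) -> dict:
    """Stand-in for an end-to-end entry point into the SUT."""
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    tax = sales_tax(subtotal, tax_rate)
    return {"subtotal": subtotal, "tax": tax, "total": subtotal + tax}

def test_sales_tax_unit():
    # Unit test: faces the developer and checks what this one unit should do.
    assert sales_tax(10.00, 0.10) == 1.00

def test_order_total_functional_requirement():
    # Functional-requirement check: verifies behavior the customer depends on,
    # through the same path the product exposes.
    order = place_order([{"price": 5.00, "quantity": 2}], tax_rate=0.10)
    assert order["total"] == 11.00
```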

Manual testing means that a person is involved; the person makes the observations and notes bugs. The event of reporting a bug (or other action) is triggered by the person, and it doesn’t matter whether or how many tools are used (in addition to the SUT itself); it’s still human time, intelligence, adaptability, and judgment.

Automated checking creates action items (which might become bugs, depending on process) triggered by measurements determined by code. At check run time, no person is directly involved.
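As a minimal sketch of that distinction, the check below fixes its verification criteria in code before it runs, takes its own measurements, and creates an action-item record when a criterion is missed; the endpoint and thresholds are invented for the example.

```python
import time
import urllib.request

# Pre-determined verification criteria, fixed before the check runs.
# The URL and thresholds are placeholders for illustration.
URL = "https://example.com/health"
EXPECTED_STATUS = 200
MAX_LATENCY_SECONDS = 2.0

def run_health_check() -> list:
    """Run the check; return an action item for each criterion the SUT misses."""
    action_items = []
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=10) as response:
        status = response.status
    elapsed = time.monotonic() - start

    if status != EXPECTED_STATUS:
        action_items.append({"check": "health status",
                             "expected": EXPECTED_STATUS, "actual": status})
    if elapsed > MAX_LATENCY_SECONDS:
        action_items.append({"check": "health latency",
                             "expected": f"<= {MAX_LATENCY_SECONDS}s",
                             "actual": f"{elapsed:.2f}s"})
    return action_items
```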

I draw the distinction between manual and automated in this way because it speaks to the powers of each approach. This is a prerequisite for seeing how, when automated verifications are focused on and optimized for what they do well, and combined with other things that automation can do for quality, they can be vastly more valuable to the business. I call this problem space “quality automation,” and MetaAutomation is an optimal solution to the problem (at least, until a better one, or the right extension patterns, come along).

However, MetaAutomation does require leaving behind three of what I call “historical accidents”:

First, Myers’ well-intentioned mistake that “test is all about bugs.” The horizontal axis of the above graphic expresses the importance of a balance; one must verify correct behavior, too.

Second, the phrases “test automation” and “automated test,” which imply that, when automated, a manual test is the same as before but faster and with less need of people. People doing manual testing are smart, creative, perceptive, and good detectives; automated verifications are not, but they excel in different ways.

Third, the Linear Logging pattern, which comes from logging events in services and instrumentation but is a poor match for the complexity of an automated check. I address that in this post.
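To show the mismatch (this is a generic sketch, not any specific MetaAutomation pattern), compare a flat, linear log of a check with a hierarchical record of the same steps; the step names are invented. The structured form keeps the parent/child relationships between steps, so a failure points at exactly which nested step broke and what surrounded it.

```python
import json

# Linear logging: a flat stream of lines; a reader has to reverse-engineer
# which step failed and how the steps relate to each other.
linear_log = [
    "INFO  open browser",
    "INFO  navigate to sign-in page",
    "INFO  submit credentials",
    "ERROR timeout waiting for account page",
]

# Hierarchical record: each step can contain child steps, timing, and the
# failure attached to the exact step that failed. Step names are invented.
hierarchical_record = {
    "check": "sign in and land on account page",
    "result": "fail",
    "steps": [
        {"step": "open browser", "result": "pass", "ms": 850},
        {"step": "sign in", "result": "fail", "steps": [
            {"step": "navigate to sign-in page", "result": "pass", "ms": 420},
            {"step": "submit credentials", "result": "pass", "ms": 310},
            {"step": "wait for account page", "result": "fail",
             "error": "timeout after 10 seconds"},
        ]},
    ],
}

print(json.dumps(hierarchical_record, indent=2))
```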

 

*Myers’ 1979 book, The Art of Software Testing, page 5: “Testing is the process of executing a program with the intent of finding errors.” Myers emphasizes this from a different perspective on page 6: “… since a test case that does not find an error is largely a waste of time and money…” Myers goes on for pages about this.

 

** “Checking” is a kind of “Testing” with pre-determined verification criteria and that lacks human flexibility, creativity, and judgment at check run time. Checking is what happens with “automated testing” and can be extremely fast, cheap, repetitive and effective.
