4 First Principles of MetaAutomation

Tags: automation, test, metaautomation

I start with a definition: quality automation is the problem space that spans driving and measuring the SUT (system under test) on the technology-facing side, and informing the people and processes of the business on the business-facing side. It’s related to “test automation,” but broader in scope and value, and less semantically deceptive.*

Here are some basic first principles that lead inevitably to a radically different and much more effective approach to applying automation to manage software quality as part of software development:

First, the value of quality automation is different from the value of manual testing, and doing quality automation well is very different from doing manual testing well.

Second, just as the sequence of interactions with the SUT is critically important during manual testing, it is at least as important when automation is driving the SUT, whether the check passes or fails. Think of it as the automation “paying attention” to what it’s doing; the resulting record is much more reliable than reading source code, more accessible across the team, and it carries performance data, too.
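
As a minimal sketch of what that “paying attention” could look like (the names RecordedCheck, step, and StepRecord are my own, hypothetical, and not the MetaAutomation implementation from metaautomation.net): every interaction with the SUT goes through a small recording wrapper, so the check produces a step-by-step record, with timings, whether it ultimately passes or fails.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: every SUT interaction is recorded with its outcome and duration. */
public class RecordedCheck {

    record StepRecord(String name, boolean passed, long millis) {}

    private final List<StepRecord> steps = new ArrayList<>();

    /** Runs one named interaction with the SUT; records name, result, and elapsed time. */
    void step(String name, Runnable action) {
        long start = System.nanoTime();
        boolean passed = false;
        try {
            action.run();
            passed = true;
        } finally {
            long millis = (System.nanoTime() - start) / 1_000_000;
            steps.add(new StepRecord(name, passed, millis));
        }
    }

    public static void main(String[] args) {
        RecordedCheck check = new RecordedCheck();
        try {
            check.step("Open sign-in page", () -> { /* drive the SUT here */ });
            check.step("Submit credentials", () -> { /* drive the SUT here */ });
            check.step("Verify landing page", () -> { /* measure the SUT here */ });
        } finally {
            // The record survives pass or fail, so the team can see exactly what the
            // automation did, in order, with performance data attached to each step.
            check.steps.forEach(s -> System.out.printf(
                "%-22s %-6s %4d ms%n", s.name(), s.passed() ? "pass" : "FAIL", s.millis()));
        }
    }
}
```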

Third, logs are a perfect low-overhead tool for instrumenting events, but they’re not ideal for recording the procedure by which a system is driven externally, e.g., with quality automation. Logs are poor at procedures because, by design, they drop all context other than the timestamp, even if some context can be added back with identifiers. Fortunately, there’s a much better solution, and it’s so common that we all follow the pattern every day.
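
I read that everyday pattern as nested, hierarchical structure (the same shape as a file system or an XML document), which also fits the drillable reports described below. Under that assumption, here is a sketch of the contrast (my own illustration, not code from metaautomation.net): a flat log emits isolated lines and loses which action belongs to which step, while a hierarchical record of the same procedure keeps the parent/child context.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the difference between flat logging and a hierarchical record.
 * A log line stands alone; a step node keeps its place in the procedure.
 */
public class HierarchicalRecord {

    static class Step {
        final String name;
        final List<Step> children = new ArrayList<>();
        Step(String name) { this.name = name; }
        Step child(String childName) {
            Step c = new Step(childName);
            children.add(c);
            return c;
        }
        void print(String indent) {
            System.out.println(indent + name);
            for (Step c : children) c.print(indent + "  ");
        }
    }

    public static void main(String[] args) {
        // Flat log: each line is an isolated event; the structure of the procedure is gone.
        System.out.println("2024-05-01T10:00:01 INFO clicked 'Add to cart'");
        System.out.println("2024-05-01T10:00:02 INFO POST /cart/items 200");
        System.out.println("2024-05-01T10:00:02 INFO cart total updated");
        System.out.println();

        // Hierarchical record: the same events, but each child keeps its parent context.
        Step root = new Step("Add item to shopping cart");
        Step ui = root.child("Click 'Add to cart'");
        ui.child("POST /cart/items -> 200");
        root.child("Verify cart total updated");
        root.print("");
    }
}
```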

Fourth, everybody on the software team needs detailed, trustworthy, and timely information on SUT quality. The QA role is the gatekeeper for this information, but with a more deliberate and designed approach, that role can deliver much more value to more roles across the team than conventional techniques allow. This includes behavior reports that can be drilled down from the business-facing steps to the technology-facing steps, which is the part of quality automation that goes beyond driving and measuring the SUT.
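
For example (a hypothetical report format of my own, just to show the idea): the same hierarchical record can be presented at the business-facing level for a product owner, and expanded to the technology-facing level for a developer debugging a failure.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: one hierarchical check record, rendered at two levels of detail. */
public class DrillDownReport {

    static class Step {
        final String name;
        final boolean businessFacing;
        final List<Step> children = new ArrayList<>();
        Step(String name, boolean businessFacing) {
            this.name = name;
            this.businessFacing = businessFacing;
        }
        Step add(Step child) { children.add(child); return this; }

        /** Prints the step; includes technology-facing detail only when drilling down. */
        void render(String indent, boolean drillDown) {
            if (businessFacing || drillDown) {
                System.out.println(indent + name);
                for (Step c : children) c.render(indent + "  ", drillDown);
            }
        }
    }

    public static void main(String[] args) {
        Step checkout = new Step("Customer checks out with one item", true)
            .add(new Step("Add item to cart", true)
                .add(new Step("POST /cart/items -> HTTP 200 in 140 ms", false)))
            .add(new Step("Pay with stored card", true)
                .add(new Step("POST /payments -> HTTP 201 in 310 ms", false))
                .add(new Step("Payment row present in orders DB", false)));

        System.out.println("Business-facing summary:");
        checkout.render("  ", false);
        System.out.println();
        System.out.println("Drilled down to technology-facing steps:");
        checkout.render("  ", true);
    }
}
```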

MetaAutomation is a platform-independent, language-independent** guide to solving all these problems and more in the quality automation problem space. In my opinion it’s inevitable, because the value proposition is so great and software quality only becomes more important. It’s difficult because so much about conventional practice is broken and would benefit from change.

MetaAutomation is more than just a tweak; it re-thinks the problem of how to ship software faster and at higher quality, and finds an optimal solution.

*By linguistic relativity, “test automation” implies that what a manual tester can do can be automated, but that’s not true. See the first of the First Principles above.

**Parts of the implementation require strong typing, object orientation, threading, exception handling, and synchronization. As far as I know, no scripting language will do all of this, not even Python. Java works, though, as does any .NET language, or perhaps C++. See the free samples on metaautomation.net for an open-source implementation available for running and tinkering.
