Collect and present ALL the functional quality data

The basis for the MetaAutomation pattern language, and the reason for it, is at the core quite simple.

Everybody on the team (or almost everybody) does manual or exploratory testing with the SUT (system under test). When we do manual or exploratory testing, we pay attention to our actions and how the product responds to them. We’re detectives: we gather information from all available sources to decide whether SUT behavior is acceptable or not. If it might not be acceptable, we use our powers of observation and detective smarts to enter a bug. The team decides whether the bug is actionable.

Then there’s “test automation,” which on the surface appears to automate that process, so that what the manual testing role would do is now handled by automation that runs at all hours, in the lab or in the cloud, and so on… except that it obviously does not replicate the skills or the value of a good tester.

Quality automation done well can be very powerful, but unfortunately, the way it’s generally done, the observational powers of a manual tester are replicated poorly or not at all. When automation drives the product, most of the data about how the product is driven and how it responds is simply dropped on the floor. It’s nobody’s fault; it’s just the way it’s done, because the available tools are poorly suited to recording this data.

For example, automation code and tools often use logging. Logging is a great tool for instrumenting your software, because logs are simple and lightweight, but for quality automation, logs are poorly suited: by design, they drop all context other than the timestamp. Some context can be re-created with unique identifiers that show up across multiple log statements, but that can’t fully compensate; logs still lose context and are poorly suited for performance measurement.
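To make that concrete, here is a minimal Python sketch of what flat logging from a check typically produces. The step names and the correlation id are hypothetical, not from any particular product or framework: each log statement stands alone, and the step hierarchy and per-step durations have to be reconstructed afterward from timestamps and message text, if they can be reconstructed at all.

```python
import logging
import uuid

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("check")

# A correlation id is the usual workaround for lost context: it ties related
# statements together, but the relationships between steps, and how long each
# step took, still have to be re-derived by parsing timestamps and messages.
run_id = uuid.uuid4()

log.info("run %s: opening sign-in page", run_id)
# ... drive the SUT here ...
log.info("run %s: submitting credentials", run_id)
# ... drive the SUT here ...
log.info("run %s: verifying landing page", run_id)
log.info("run %s: Pass!", run_id)
```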

BDD and keyword-driven automation are a good effort, but what happens behind the keywords? Maybe nothing; you have to look at the source code to see what they are supposed to do, and that’s not good enough either. Stepping through the code shows what runs at that moment, but it’s risky because of timing, and because what the code does might vary from one check run to the next as the context changes. Personally, I’ve modified keyword implementations to make a keyword reusable, but such code changes aren’t reflected in the output from automation at all.
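As a hedged illustration of the problem, here is a small Python sketch with a hypothetical keyword and a stubbed-out driver (nothing here comes from a real keyword framework): the keyword’s implementation performs several distinct operations against the SUT, but only the keyword name and the pass/fail verdict surface in the results.

```python
class FakeDriver:
    """Stand-in for a UI driver, so this sketch runs without a real SUT."""
    def navigate(self, url):
        print(f"navigate to {url}")
    def type(self, field, text):
        print(f"type into {field}")
    def click(self, control):
        print(f"click {control}")
    def current_title(self):
        return "Home"

# Hypothetical keyword implementation: a runner's report would show only
# "Login ... PASS", but several distinct SUT operations happen behind the
# keyword, and none of them appear in the output from automation.
def login(driver, user, password):
    driver.navigate("https://example.test/signin")
    driver.type("username", user)
    driver.type("password", password)
    driver.click("submit")
    return driver.current_title() == "Home"

print("Login ........", "PASS" if login(FakeDriver(), "user", "secret") else "FAIL")
```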

Typical automation that drives the SUT drops most of the data on the floor, whether the check passes or fails. As a result, if a check passes, nobody cares about anything more than the “Pass!”

Automation driving the SUT drops most data on the floor.

The lost information is a huge missed opportunity, with a huge opportunity cost.

If we record that data in a format that works well with automation, we can do powerful things with it, including shipping software faster and at higher quality with happier teams…

But, how? Log statements are the wrong tool. Keywords hide at least some details of what is really going on with the SUT, potentially important ones.

I will address the solution in a future post. If you are curious enough to do some reading on this, the answer is to apply the ubiquitous Hierarchical Steps pattern by implementing it as part of the Atomic Check pattern. Both patterns are part of MetaAutomation, and both are implemented in the GitHub software samples that the MetaAutomation.net web site links to here.
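For readers who want a preview of the shape of the idea, here is a rough Python sketch of recording hierarchical, self-documenting steps with per-step timing. It is my own illustration under simple assumptions (JSON output, hypothetical step names), not the sample implementation linked above; the point is that the check produces structured data rather than flat log lines.

```python
import json
import time
from contextlib import contextmanager

# Rough sketch of hierarchical check steps: each step records its name, its
# duration in milliseconds, and its child steps, so the resulting artifact
# preserves the context that flat logs drop.
class StepRecorder:
    def __init__(self):
        self.root = {"name": "check", "steps": []}
        self._stack = [self.root]

    @contextmanager
    def step(self, name):
        node = {"name": name, "steps": []}
        self._stack[-1]["steps"].append(node)
        self._stack.append(node)
        start = time.perf_counter()
        try:
            yield node
        finally:
            node["milliseconds"] = round((time.perf_counter() - start) * 1000, 3)
            self._stack.pop()

    def to_json(self):
        return json.dumps(self.root, indent=2)

recorder = StepRecorder()
with recorder.step("Sign in"):
    with recorder.step("Navigate to sign-in page"):
        pass  # drive the SUT here
    with recorder.step("Submit credentials"):
        pass  # drive the SUT here
with recorder.step("Verify landing page"):
    pass  # drive the SUT here

print(recorder.to_json())  # structured data, whether the check passes or fails
```

Because every step carries its own name, duration, and children, the same artifact serves passing and failing runs alike, and it can be queried, compared across runs, and used for performance measurement.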

