One would think that people who write automation to drive the system under test (SUT) would be eager to preserve the quality knowledge of how the SUT is driven and how it responds, how fast, and so on. This quality knowledge would save QA people hours of debugging time, improve knowledge of and communication about the product, put the QA role at the center of the software development business (neatly solving the no-respect problem that QA suffers from), and bring many other benefits I write about extensively in my book, on this blog, and elsewhere.
This is what started my journey to MetaAutomation, years ago, as an SDET at Microsoft. We wrote code in C# to drive our product and used NUnit as a harness. I realized that the process was badly broken, because the valuable information of exactly how we drove the product and how it responded was mostly just dropped on the floor. I spent hours trying to recover this information for the many checks I was responsible for, and I wasn’t the only one wasting that time, either.
So, bit by bit, I started to preserve what I could of that quality knowledge. Log statements were the tool at hand, but I soon discovered that log statements are not, and never will be, enough (see here, for example). I did eventually solve all of these problems, though, especially for the (at least mostly) repeatable checks that the dev team needs to keep quality moving forward.
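To make the limitation concrete, here is a minimal sketch of the contrast (in Ruby purely for brevity; the actual samples are C#, and the step names here are my own invention): flat log lines record that steps happened, but they drop the parent/child structure and timing that make a failure actionable, while even a simple structured record preserves both.

```ruby
require 'logger'

# Flat logging: each line stands alone, so the relationship
# between "Log in" and its child steps is lost.
log = Logger.new($stdout)
log.info 'Log in'
log.info 'Enter user name'
log.info 'Click Submit'

# A structured record keeps the hierarchy and the timing,
# so a reader can start at the business-facing step and drill down.
check_run = {
  step: 'Log in', millis: 120, children: [
    { step: 'Enter user name', millis: 35, children: [] },
    { step: 'Click Submit',    millis: 85, children: [] }
  ]
}

# The root node alone answers the business-facing question...
puts check_run[:step]
# ...and the detail is still there for anyone who needs it.
check_run[:children].each { |c| puts "  #{c[:step]} (#{c[:millis]} ms)" }
```

The point is not the container (a hash here, XML in practice works well) but that structure and timing survive the run instead of being flattened into prose.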
Isn’t this what everybody wants? Trustworthy, detailed, actionable quality knowledge about the SUT? It seems not, because current trends in “test automation” seem to be running away from the SUT and hiding it behind obscuring layers.
For example, scripting languages. I think scripting languages are fine for projects of 100 lines of code or fewer, or when the code is edited more often than it is deployed, but they have an unfortunate tendency to defer errors to runtime, which can be much later than the moment a compiler or IDE would have flagged them. I learned Ruby because of an existing project and quickly learned to hate it, because the syntax was so inconsistent (among other reasons). Once it showed me an error due to a typo about 15 minutes after I would have learned of the issue had I been working with a compiled language like C#: 15 minutes completely wasted in just that one example.
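A minimal Ruby sketch of that failure mode (my own illustration, not code from that project): the typo below parses and loads without complaint, and only surfaces when the method actually executes, which in a long run can be well after you hit save.

```ruby
# The typo ("pasword" for "password") is perfectly legal Ruby
# until this method is actually executed.
def login(username, password)
  "#{username}:#{pasword}"  # NameError, but only at call time
end

# The file loads and the method defines without any warning;
# the error is deferred until this code path runs.
begin
  login('tester', 'secret')
rescue NameError => e
  puts "Deferred to runtime: #{e.message}"
end
```

A compiler would have rejected the equivalent C# at build time, before any check ran at all.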
Weak typing does the same thing. It might save some up-front planning, at the cost of architectural integrity, but software quality code done well is just as important as product code, so planning and architecture matter here too.
I wrote my MetaAutomation samples using the free version of Visual Studio. It’s more resource-intensive than a text editor, but that matters less every year as computing resources get cheaper and more powerful, and it helps so much with coding that it’s a wise investment to make. I’m not necessarily tied to Visual Studio, either; the bulk of the sample components are platform-independent and can be written with your favorite text editor.
Keywords that correspond to code-behind, as with Robot Framework, just hide the code behind several layers: a custom language of keyword phrases, a parser for that language, and the mapping from those keywords to the code behind them.
Gherkin (with BDD; see here for more on this topic) has the same problems, but the phrases that correspond to the code-behind are designed to be business-facing. I find that a good idea, but highly ironic: given how many layers are now injected between the SUT and what the team can see driving it, why not just leave the SUT-hiding layers out and record the SUT-driving steps in a natural hierarchy of steps with business-facing names?
This is what I do with the samples on this web site. The business-facing-named steps are as close to driving the SUT as they can possibly be, and they’re recorded in a hierarchy according to the natural Hierarchical Steps pattern that we all follow anyway. A QA team only needs one person with dev-level coding ability; the rest can write code according to the conventions the team develops. Steps all have business-facing names and avoid QA- or dev-specific jargon. SUT behavior is available for any role to start at the “top” or business-facing root node, and drill down from there as needed.
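A minimal sketch of that pattern, again in Ruby for brevity (the class and step names here are hypothetical, not taken from the samples, which are C#): every step carries a business-facing name, its duration, and its child steps, and the whole run reports from the business-facing root down.

```ruby
# Hypothetical sketch of the Hierarchical Steps pattern:
# each step records a business-facing name, its duration,
# and its child steps.
class Step
  attr_reader :name, :children
  attr_accessor :millis

  def initialize(name)
    @name = name
    @children = []
    @millis = 0
  end

  # Run a block as a named child step, recording its duration.
  def step(name)
    child = Step.new(name)
    started = Time.now
    yield child if block_given?
    child.millis = ((Time.now - started) * 1000).round
    @children << child
    child
  end

  # Any role can start at the root and drill down as needed.
  def report(indent = 0)
    lines = ["#{'  ' * indent}#{name} (#{millis} ms)"]
    children.each { |c| lines.concat(c.report(indent + 1)) }
    lines
  end
end

root = Step.new('Place an order')
root.step('Log in') do |login|
  login.step('Enter user name')
  login.step('Click Submit')
end
root.step('Add item to cart')
puts root.report.join("\n")
```

The business-facing reader stops at the root node; whoever needs to debug a failure keeps drilling into the children, with the timings already recorded.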
That’s how to preserve quality knowledge. Most of my book and this blog are about how to use this knowledge to improve communication and efficiency, and ship software faster and at higher quality.