There are some interesting memes bouncing around the software quality space.
For example, there’s the idea that analytics obviate verification: some people think that with good analytics, there’s no need to verify software before putting it in front of paying customers, because the customers will tell you when it’s broken or otherwise doesn’t work for them.
Then there’s the idea that artificial intelligence (AI, often lumped together with machine learning, or ML) will remove the need for any other quality practice in software: AI will solve everything.
Abraham Maslow was a psychologist whose remark became a very popular and widely applied idea: “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” Hence, Maslow’s Hammer.
It’s understandable that people who are very excited about a particular tool want to apply it to everything, but it’s not productive. Not every existing practice or tool deserves to be thrown away. Most tools and techniques exist in an ecosystem where the value they bring comes from collaboration or coexistence with many other tools and techniques. Denying (or burying) the value of other tools risks lowering productivity or, in the case of software quality, increasing quality risk to the team.
For example, analytics is a powerful tool for gaining insight into customers’ experience with a software product. It could be applied to verification (does the system do what it’s supposed to do?) by waiting for customers to hit the inevitable breakages first. This can work, e.g., for software that supports making movies: the movies are about as far from real-time human impact as you can get, and there is plenty of time for the people using the software to compensate before a movie is released. But it does not suffice for software that touches financial (or other high-impact) data, human safety, home automation, business automation, etc.
Promoting the idea that software has to go in front of the customer before the team even thinks about quality is irresponsible at best. The truth is that verification and validation are both important, and to ship software faster and at higher quality, we have to manage quality starting as early as possible.
Then there’s AI, hailed as the solution to all software quality management problems. I think it has great potential as one of at least several techniques, but for an app with any originality to it, how would one train the AI to know what correct behavior is? Other than the trivial oracle that says “for the app to crash is incorrect behavior,” I wonder how this problem can be solved without people, at least for the foreseeable future.
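To make the oracle problem concrete, here is a minimal sketch (all names, such as `crash_oracle` and the result dictionary shape, are hypothetical illustrations, not part of MetaAutomation or any real tool) contrasting the trivial crash-only oracle with a behavioral oracle that, for an original app, requires a human-supplied expectation:

```python
# The "trivial oracle": any unhandled exception is a failure, everything
# else passes. It needs no knowledge of what the app should actually do.
def crash_oracle(step_result):
    return step_result.get("exception") is None

# A stronger behavioral oracle: it compares observed output against an
# expectation that a person (not an AI) had to define for this app.
def behavioral_oracle(step_result, expected):
    return (step_result.get("exception") is None
            and step_result.get("output") == expected)

result = {"output": "order confirmed", "exception": None}
print(crash_oracle(result))                          # True: it didn't crash
print(behavioral_oracle(result, "order confirmed"))  # True: matches the human-defined expectation
print(behavioral_oracle(result, "order shipped"))    # False: passes the crash oracle, fails the spec
```

The last line is the gap the paragraph above describes: the crash oracle is satisfied, but only the human-defined expectation catches the wrong behavior.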
In any case, for AI to do a decent job, it needs to track app behavior in a way that makes sense and is accessible to automation (as well as to people). That’s where MetaAutomation comes in, with Hierarchical Steps that record what the SUT is doing and make that information complete and available.
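The idea of recording behavior hierarchically, in a form both people and automation can consume, can be sketched as follows. This is not MetaAutomation’s actual Hierarchical Steps schema (the `Step` class and field names are my own illustration); it only shows the concept of nested, self-describing step records:

```python
# Hypothetical sketch: each step of a check records its name, outcome,
# duration, and child steps, then serializes to JSON so that both people
# and automation (including, someday, AI) can consume the record.
import json
import time

class Step:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.status = "pass"
        self.millis = 0

    def run(self, action, *substeps):
        start = time.perf_counter()
        try:
            action()
            for sub in substeps:
                self.children.append(sub)
                if sub.status == "fail":
                    self.status = "fail"  # a child failure bubbles up the hierarchy
        except Exception:
            self.status = "fail"
        self.millis = int((time.perf_counter() - start) * 1000)
        return self

    def to_dict(self):
        return {"step": self.name,
                "status": self.status,
                "millis": self.millis,
                "children": [c.to_dict() for c in self.children]}

# A parent step composed of one child step, then serialized for tooling.
child = Step("enter credentials").run(lambda: None)
parent = Step("log in").run(lambda: None, child)
print(json.dumps(parent.to_dict(), indent=2))
```

The point of the structure is the one made above: because every step’s outcome and context are recorded completely and hierarchically, an automated consumer can reason about what the SUT did without a person re-running anything.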
In promoting MetaAutomation for the quality automation problem space, I’ve been careful that nobody mistakes me for waving Maslow’s Hammer around: I’m very clear in my book https://www.amazon.com/dp/0986270423 that other quality practices have their place, including instrumentation, analytics, and (some day) AI. Many of these techniques are complementary. Only some of them are we better off not using at all, and I’m clear about that too, as best I can be. (I also posted about one of them here: http://www.metaautomation.net/metaautomation-blog/bdd-is-limited-in-what-it-can-do-for-the-team)
What do you think? Comment to get a conversation started.