It’s been three months since my post here on AI/ML (artificial intelligence and machine learning).
So, I wanted to post an update on my understanding of this set of technologies:
At the time of this writing, AI seems useful for finding certain low-hanging bugs and for increasing code coverage, but without a detailed model of correct behavior (that is, some kind of oracle), it’s not useful for verifying correct behavior with respect to functional requirements. I hear it is also useful for developing GUI test cases.
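To make the oracle point concrete, here is a minimal sketch: the function names and the shipping-cost rule are hypothetical, invented only to illustrate the idea that an oracle is an independent model of correct behavior that a verification compares the SUT against.

```python
# Hypothetical example: a test oracle as an independent model of correct behavior.
# The rule (flat rate up to 1 kg, then a per-kg surcharge) is invented for illustration.

def shipping_cost_oracle(weight_kg: float) -> float:
    """The oracle: states what the correct shipping cost *should* be."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.00 if weight_kg <= 1.0 else 5.00 + (weight_kg - 1.0) * 2.50

def system_under_test(weight_kg: float) -> float:
    """Stand-in for the real SUT; in this sketch it happens to be correct."""
    return 5.00 if weight_kg <= 1.0 else 5.00 + (weight_kg - 1.0) * 2.50

# Without the oracle, a tool can only observe that the SUT ran without crashing.
# With the oracle, the functional requirement itself is verified:
for weight in (0.5, 1.0, 3.0):
    assert system_under_test(weight) == shipping_cost_oracle(weight)
```

The oracle is where AI currently falls short: generating inputs is easy, but knowing the *correct* output for each input requires a model of the requirements.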
To correct my earlier post, though: AI might be traceable if the AI engine implements some form of the Hierarchical Steps pattern, for clarity on what it’s doing to the system under test (SUT). AI could also be traceable with a long, linear series of log statements, but readability and support for analysis would be poor at best.
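To show the contrast with flat logging, here is a minimal sketch of hierarchical step recording in the spirit of the Hierarchical Steps pattern. The class and step names are my own illustrative assumptions, not the MetaAutomation implementation: each step is a named node that can contain child steps, so the run artifact keeps its structure instead of flattening into a stream of log lines.

```python
# Illustrative sketch (not the MetaAutomation implementation): record check
# steps as a tree of named nodes rather than a flat sequence of log lines.
from contextlib import contextmanager

class StepLog:
    def __init__(self):
        self.root = {"name": "run", "children": []}
        self._stack = [self.root]

    @contextmanager
    def step(self, name: str):
        # Open a child step under the current step, close it on exit.
        node = {"name": name, "children": []}
        self._stack[-1]["children"].append(node)
        self._stack.append(node)
        try:
            yield node
        finally:
            self._stack.pop()

    def render(self, node=None, depth=0):
        # Indented text view of the step tree.
        node = node if node is not None else self.root
        lines = ["  " * depth + node["name"]]
        for child in node["children"]:
            lines.extend(self.render(child, depth + 1))
        return lines

log = StepLog()
with log.step("Place an order"):
    with log.step("Log in as test user"):
        pass
    with log.step("Add item to cart"):
        pass
print("\n".join(log.render()))
```

The rendered tree makes it clear which child actions served which parent goal; the same actions written as a linear log would force the reader to reconstruct that structure by hand.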
Quality is such a vast, open-ended topic that there are many approaches to it, and any project that matters to people needs at least several of those approaches. So, I look forward to what AI can contribute to the many different approaches to quality.
At the same time, for a team to keep moving forward in quality, fast and repeatable verifications of functional requirements (however those are recorded) are still needed. Without those, any dev could check in changes that derail part of the team, or even the whole team, for the time it takes to back out of those changes and start over.
That’s the focus of my work with MetaAutomation.