Experienced practitioners of software quality might notice similarities between the testing-types diagram in this post and Brian Marick’s Agile Testing Matrix (described here http://www.exampler.com/old-blog/2003/08/21.1.html#agile-testing-project-1 and here http://www.exampler.com/old-blog/2003/08/22/#agile-testing-project-2), as well as Lisa Crispin’s discussion here http://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/.
The Agile Testing Matrix was intended to consider (as far as I can tell) any and all aspects of the software quality process for agile methodologies.
My focus is on functional software quality, including performance, because these are (or should be) linked to functional requirements (i.e., measurable qualities in the software we are building), which in turn link to business requirements. Unit tests are out of scope for me because, unless a unit implements a specific bit of business-facing logic, they don’t relate to business or functional requirements. I’m not saying that unit tests aren’t useful for developers, or that other quality aspects aren’t important, of course; just that my focus is on functional quality, performance, and automated communication of quality.*
To put it another way, my focus is on quality automation: automated verification of requirements, recording of all performance information, and automated communication of the resulting quality information to the business, both to people and to automated processes, e.g., operations for CI/CD.
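To make that concrete, here is a minimal sketch of what such a check runner might look like. Everything in it is hypothetical and mine, not from MetaAutomation: the `run_check` function, the step names, and the JSON shape are illustrative stand-ins. The point is that every step’s pass/fail status and timing are recorded as structured data, so one run serves both people reading the results and automated CI/CD gates consuming them.

```python
import json
import time

def run_check(name, steps):
    """Run a sequence of (description, action) steps as one check.

    Records pass/fail and elapsed time for every step, so the resulting
    record is readable by people and consumable by CI/CD processes.
    """
    record = {"check": name, "passed": True, "steps": []}
    for description, action in steps:
        start = time.perf_counter()
        try:
            action()
            ok = True
        except AssertionError:
            ok = False
        elapsed_ms = (time.perf_counter() - start) * 1000
        record["steps"].append(
            {"step": description, "passed": ok, "elapsed_ms": round(elapsed_ms, 3)}
        )
        if not ok:
            record["passed"] = False
            break  # later steps depend on earlier ones, so stop here
    return record

# Usage with stand-in steps for a hypothetical "login succeeds" requirement
result = run_check("Login succeeds", [
    ("navigate to login page", lambda: None),
    ("submit valid credentials", lambda: None),
    ("verify landing page shown", lambda: None),
])
print(json.dumps(result, indent=2))
```

The structured output is the key design choice: a plain pass/fail exit code would serve a CI/CD gate but would discard the per-step quality and performance information that the business side needs.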
In my testing-types diagram I don’t distinguish between business-facing and technology-facing testing because there’s no need; with MetaAutomation (an implementation of the quality automation space), the two are integrated and all part of the data created by the check runs.
Also, to make quality automation more effective, the team needs a clean distinction between “manual” testing and “automated” testing.
Manual testing is when a person does the testing, with or without tools; the person is the observer who notices whether there is some issue worth noting in a bug, whether or not that issue is a target of the activity. Automated verifications ideally each have a specific target verification or verification cluster, and aside from the preliminary verifications associated with the preliminary steps needed to reach the target, nothing is measured.
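The shape of such a targeted check can be sketched as follows. This is my own illustration, not code from the post: the `Cart` class, the `SAVE10` coupon, and the discount requirement are all hypothetical stand-ins for a system under test. Each preliminary verification confirms only what is needed to trust the target verification, and the check measures nothing beyond that.

```python
class CheckFailed(Exception):
    """Raised when any verification in a check fails."""

def verify(condition, description):
    # Fail fast: if a preliminary verification fails, the target
    # verification can't be trusted, so the check stops here.
    if not condition:
        raise CheckFailed(description)

class Cart:
    """Minimal stand-in for the system under test (assumed API)."""
    def __init__(self):
        self._items = []
        self._discount = 0.0
    def add_item(self, name, price, quantity):
        self._items.append((name, price, quantity))
    def apply_coupon(self, code):
        if code == "SAVE10":
            self._discount = 0.10
    def total(self):
        subtotal = sum(price * qty for _, price, qty in self._items)
        return subtotal * (1 - self._discount)

def check_discount_applied(cart):
    # Preliminary step and verification: just enough to reach the target.
    cart.add_item("widget", price=100.0, quantity=2)
    verify(cart.total() == 200.0, "items added before discount")

    cart.apply_coupon("SAVE10")

    # Target verification: the one business-facing measurement
    # this check exists to make (with a float tolerance).
    verify(abs(cart.total() - 180.0) < 1e-9, "10% discount applied to total")

check_discount_applied(Cart())  # completes silently when all verifications pass
```

Contrast this with a manual tester walking through the same scenario: the person might also notice a layout glitch or a slow page along the way, whereas this check deliberately measures only its target.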
I’m good at manual testing but I’m not an expert, and my focus is not on manual testing. The only reason I discuss it is to illuminate what is special about automated verifications and what is wrong with conventional practices around “test automation.”
That is why, unlike in the Agile Testing Matrix, the manual-vs.-automated axis is binary; it is only the axis of finding bugs vs. verifying quality that is continuous.
One last difference: the “Agile Testing Matrix” addresses agile techniques specifically, but the test types diagram I use is more general and applies whether the team is agile or more like waterfall, whether or not they include test-driven development (TDD), etc.
So, there are similarities between my testing types diagram and Marick’s “Agile Testing Matrix,” but the differences in function and purpose are significant.
*James Coplien makes a related point in “Why Most Unit Testing is Waste” (http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf): unit tests are valuable if there is a special bit of business logic contained in a unit, e.g., a compression or encryption algorithm.