The differences: manual testing, quality automation and quasi-automated testing

Manual testing is when a person performs the actions and observations and decides whether some aspect of SUT behavior is reportable as a bug (or some other action item). This is true whether or not the tester uses a tool (a network sniffer, for example) in addition to the SUT itself. I discuss this in more detail in my two previous posts, here and here.

Quality automation, or at least the part of it that drives the SUT and records actions, results, and measurements, decides pass or fail by itself at runtime.
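
As a sketch only (the sut_login callable here is a hypothetical stand-in for whatever drives the real SUT), this is roughly what that looks like in Python: the check performs the action, records the observed result and a timing measurement, and decides pass/fail by itself at runtime.

```python
import json
import time

def run_check(sut_login, expected_status="OK"):
    """Drive the SUT, record the step with a measurement, and decide pass/fail at runtime."""
    record = {"steps": [], "result": None}

    start = time.monotonic()
    status = sut_login("testuser", "secret")          # drive the SUT (hypothetical call)
    elapsed_ms = round((time.monotonic() - start) * 1000)

    record["steps"].append(
        {"action": "login", "measured_ms": elapsed_ms, "observed": status}
    )

    # The check itself decides pass/fail -- no person in the loop at runtime.
    record["result"] = "pass" if status == expected_status else "fail"
    return record

if __name__ == "__main__":
    fake_sut_login = lambda user, pw: "OK"            # stand-in for the real SUT
    print(json.dumps(run_check(fake_sut_login), indent=2))
```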

Quasi-automated checking or testing might look like either of these two examples:

One is where an automated process generates some data, e.g., a long series of screenshots of an app GUI for a certain locale. A person must then go through the pictures to check them for correctness, or to make sure the GUI never insults or misleads anybody. This is an automated setup (more in the industrial-automation sense, because the value is the product, i.e., the screenshots to review) that expedites testing which can only be manual (until a more machine-learning approach can take over, 10-20 years from now).
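
A minimal sketch of that kind of setup, assuming Selenium with Chrome and a made-up app URL and locale list; the script only produces the screenshots, and a person still does the testing by reviewing them:

```python
from pathlib import Path
from selenium import webdriver

LOCALES = ["en-US", "de-DE", "ja-JP"]          # illustrative locale list
APP_URL = "https://example.test/app"           # hypothetical GUI under test
OUT_DIR = Path("screenshots")

def capture_locale_screenshots():
    """Generate one screenshot per locale; a person reviews the images afterward."""
    OUT_DIR.mkdir(exist_ok=True)
    for locale in LOCALES:
        options = webdriver.ChromeOptions()
        options.add_argument(f"--lang={locale}")   # ask the browser to render in this locale
        options.add_argument("--headless=new")
        driver = webdriver.Chrome(options=options)
        try:
            driver.get(APP_URL)
            driver.save_screenshot(str(OUT_DIR / f"gui_{locale}.png"))
        finally:
            driver.quit()

if __name__ == "__main__":
    capture_locale_screenshots()
```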

Another is where data is gathered from user actions, through analytics for example; the analytics data is then processed automatically, and action items are surfaced for humans to follow up on (as potential bugs). This is more like automated data mining, which again is more like industrial automation because the value is in the product: the mined data.
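
Here is a toy sketch of that kind of data mining, with made-up analytics events and an arbitrary error-rate threshold; the automation only surfaces candidate action items, and a person decides whether each one is really a bug:

```python
from collections import Counter

# Hypothetical analytics events gathered from user actions.
events = [
    {"screen": "checkout", "outcome": "error"},
    {"screen": "checkout", "outcome": "error"},
    {"screen": "search",   "outcome": "ok"},
    {"screen": "checkout", "outcome": "ok"},
]

ERROR_RATE_THRESHOLD = 0.25    # illustrative threshold for raising an action item

def mine_action_items(events):
    """Process the analytics data and surface potential bugs for a person to triage."""
    totals, errors = Counter(), Counter()
    for e in events:
        totals[e["screen"]] += 1
        if e["outcome"] == "error":
            errors[e["screen"]] += 1

    action_items = []
    for screen, total in totals.items():
        rate = errors[screen] / total
        if rate >= ERROR_RATE_THRESHOLD:
            action_items.append(f"Review {screen}: {rate:.0%} of user actions hit an error")
    return action_items

for item in mine_action_items(events):
    print(item)
```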

Understanding the distinct value that automation brings is essential to applying more of it.
