In the Age of Information, testing without the ecosystem misses the main value

Unit tests are popular these days because they are very fast and easy to integrate into a build. The problem with unit tests is that, unless the unit under test implements an important bit of business logic, such as a compression or encryption algorithm, they do not relate to business or functional requirements; what they measure is based only on implementation decisions. Many unit tests are implementation-dependent: refactor the implementation to improve performance, for example, and you might break the unit test (and be compelled to rewrite it), which defeats the point of writing the unit test in the first place.
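As a minimal, hypothetical Python illustration (the function and query are invented for this sketch): a check pinned to observable behavior survives a refactor, while a check pinned to an implementation decision does not.

```python
from unittest.mock import Mock

def get_report(db):
    # Business requirement: return the sorted ids of active users.
    rows = db.query("SELECT id FROM users WHERE active = 1")
    return sorted(row["id"] for row in rows)

# The behavior-based assertion survives any correct refactor:
db = Mock()
db.query.return_value = [{"id": 2}, {"id": 1}]
assert get_report(db) == [1, 2]

# The implementation-pinned assertion does not: swap the raw SQL for an
# ORM call (same report, different internals) and this line fails.
db.query.assert_called_once_with("SELECT id FROM users WHERE active = 1")
```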

It’s also popular to fake or stub the system’s dependencies during checks, to make the checks faster and more reliable than they would be otherwise.
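A typical stubbed check looks something like this sketch (all names are invented): it is fast and deterministic, but it only exercises our code against our own frozen assumptions about the external service, not the service itself.

```python
def price_in_eur(amount_usd, fetch_rate):
    # Converts a USD amount using whatever rate source is injected.
    return round(amount_usd * fetch_rate("USD", "EUR"), 2)

def fake_rate(base, quote):
    # Stubbed rate: a frozen assumption. The real rate service may
    # behave differently (auth, latency, data shape, or a new rate).
    return 0.9

assert price_in_eur(10.0, fake_rate) == 9.0
```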

The problem here is that in the age of information (that is, the internet), those external relationships are often key to product value. You could stub them out, but that’s like test-driving a car that’s up on blocks: you’ve missed the main point, and you’ve introduced significant quality risk, because the quality of the most important parts of the application goes unmeasured until some point in the future, when fixes are more expensive and risky and breakages are noticed long after the breaking changes were introduced.

Testing an application without the information ecosystem is like test-driving a car that is up on blocks.

To manage risk, test with those dependencies in place, just as your app would live in the wild.

What about speed? There are patterns in MetaAutomation that maximize speed and scalability; the patterns show how to attain those things with a new clarity.

What about reliability? There are patterns for that too, especially the Smart Retry pattern.
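The Smart Retry pattern itself is specified in MetaAutomation; as a rough, hypothetical sketch of the general idea only (retry a failed check while recording every attempt, so intermittent failures are measured rather than silently swallowed):

```python
import time

def run_with_retry(check, max_attempts=3, delay_seconds=1.0):
    """Run `check` up to max_attempts times, recording every attempt
    so transient failures show up in the results rather than hiding."""
    attempts = []
    for n in range(1, max_attempts + 1):
        try:
            check()
            attempts.append((n, "pass", None))
            return True, attempts
        except Exception as exc:
            attempts.append((n, "fail", repr(exc)))
            if n < max_attempts:
                time.sleep(delay_seconds)
    return False, attempts

# Usage: a check that hits one transient failure, then passes.
state = {"calls": 0}
def flaky_check():
    state["calls"] += 1
    if state["calls"] == 1:
        raise TimeoutError("transient network hiccup")

ok, log = run_with_retry(flaky_check, delay_seconds=0.0)
assert ok and [result for _, result, _ in log] == ["fail", "pass"]
```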

To ship software faster at higher quality, the automation that measures product quality must include the real dependencies (except those that don’t exist yet): the services and the other tiers of IoT (internet of things) applications. Otherwise the team faces quality risk: late changes and adjustments might be needed in the less-dependent, more fundamental parts of the application, with downstream effects, or in the relationships between tiers.

Automation for quality must include all the less-dependent layers to manage quality risk.

Bottom-up testing is a good strategy because it can speed up the checks while omitting the most-dependent parts of the app, i.e., the ones whose changes carry fewer downstream risks (or none at all). For example, if the client GUI is not part of the system under test, the automation runs much faster and yields better information on what the system is doing. Given good system architecture, late changes to the client GUI create no risk at all for the parts of the app that it depends on.
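A hypothetical sketch of the bottom-up approach (the domain and function names are invented): the check drives the service layer directly, the same layer the GUI would call, so late GUI changes cannot break it.

```python
def transfer(accounts, src, dst, amount):
    """Service-layer operation; a client GUI (not shown) would invoke
    this same function, so checking it here bypasses the GUI entirely."""
    if amount <= 0 or accounts[src] < amount:
        raise ValueError("invalid transfer")
    accounts[src] -= amount
    accounts[dst] += amount
    return accounts

# Bottom-up check: fast, and reports directly on system behavior.
accounts = {"alice": 100, "bob": 0}
transfer(accounts, "alice", "bob", 40)
assert accounts == {"alice": 60, "bob": 40}
```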

Just be sure to include the real dependencies where you can, to avoid quality risk. There are patterns in MetaAutomation to make them fast and reliable.
