Thursday, October 8, 2015

Automated Verifications are Special, and Why This Is Important


MetaAutomation enables automated verifications to be much more powerful than they are with existing practices. However, realizing that value requires, first, an important paradigm shift.

Conventional wisdom, according to current practices, is that no sharp distinction separates automated verifications from other aspects of software quality, e.g. manual testing or human-led testing that uses tools (in addition to the system under test, or SUT, which is itself a kind of tool).

This view can simplify software quality planning because it treats “test automation” as if it were natural, rather than the contradiction in terms that it really is, and therefore treats automation as simply an extension of the manual testing effort.

The simplification is an attractive model for understanding the very complex and difficult problem space around software quality. One could even argue that it follows the philosophical principle of Occam’s razor: the simpler model that explains the observations is the one more likely to be correct or useful.

However, fortunately or unfortunately (depending on your perspective), understanding automation as an extension of the manual testing effort does not explain experiences or capabilities in this space. People with experience in the various software quality roles know well that:

People are very good at

·               Working around issues and exploring

·               Perceiving and judging quality

·               Finding and characterizing bugs

But they’re poor at

·               Quickly and reliably repeating steps many times

·               Making precise or accurate measurements

·               Keeping track of or recording details

Computers driving the product (the SUT) are very good at

·               Quickly and reliably repeating steps many times

·               Keeping track of and recording details

·               Making precise and accurate measurements

But computers are poor at

·               Perceiving and judging quality

·               Working around issues or exploring

·               Finding or characterizing bugs

There’s a big divergence between what people are good at and what computers are good at. This is just one set of reasons why “test automation” doesn’t work as a concept: most of what people do can’t be automated.

Besides, the term “automation” as used elsewhere does not apply to software quality. Industrial automation builds things faster, more accurately, and with fewer people; automation in an airplane handles and presents information and assists the pilots in flying the plane; automation in an operations department allocates network resources through scripts. In every case, automation is about output, and while a software product is under development, we generally don’t care about the product output, aside from quality measurements. The “automation” focus of making or doing things does not apply to software quality generally.

There is one thing that can be automated, however: verifications around the behavior of the software product. Given the above lists of what computers do well, if we program a computer to make a quality measurement, we’re limited to what eventually amounts to a Boolean quantity: pass or fail.
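For illustration, here is a minimal sketch of such a verification, in Python. The SUT interface (a place_order method and its result) is hypothetical; the point is the shape of the thing: the computer drives the product, measures precisely, records details, and reduces the quality decision to a Boolean.

import time

def verify_order_placement(sut, item_id, max_ms=500.0):
    # Drive one step of the product and measure it precisely.
    start = time.perf_counter()
    result = sut.place_order(item_id)  # hypothetical SUT call
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    # Record the details a person could never track reliably by hand.
    print("step: place_order item=%s elapsed=%.1f ms" % (item_id, elapsed_ms))

    # The quality measurement a computer can make reduces to pass or fail.
    return result.succeeded and elapsed_ms <= max_ms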

Note that this is different from using a tool or tools to measure the product so that a person makes the quality decision. People are smart (see above), and they have observational and emotional powers that computers do not. That’s not “automated testing,” either, because all you’re doing is applying a tool (in addition to the software product itself) to help make a quality decision. Using a tool could be “automation” in the true meaning of the word (i.e., producing things, or modifying and presenting information), but by itself, the application of that tool has nothing to do with quality. What the person does with the tool’s output might be related to quality, though, depending on the person’s role and purpose.
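To make the contrast concrete, here is a tool-assisted sketch along the same lines (again, the names are hypothetical): the tool measures and reports, but computes no pass or fail, because the quality decision belongs to the person reading the numbers.

import time

def measure_render_time(sut, page):
    # Hypothetical helper: drive the SUT to render a page, and time it.
    start = time.perf_counter()
    sut.render(page)
    return (time.perf_counter() - start) * 1000.0

def report_render_times(sut, page, runs=10):
    # Tool-assisted testing: measure and present the data, nothing more.
    times_ms = [measure_render_time(sut, page) for _ in range(runs)]
    print("render times (ms): " + ", ".join("%.1f" % t for t in times_ms))
    # No verdict here; the person reading the report judges the quality.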

I describe the new paradigm like this:

1.       Characterizing software product quality is a vast and open-ended pursuit.

2.       The observational, emotional and adaptive powers of people are indispensable to software quality. (I call this “manual test,” for lack of a better term, to emphasize that it’s a person making quality decisions from product behavior.)

3.       The only part of “automation” that honestly applies to software quality is automated verifications.

4.       Manual test and automated verifications are powerful and important in very different ways.

5.       Recognizing the truth of #4 above opens the team up to vast increases in quality measurement efficiency, communication, reporting and analysis.

My last point (#5) is the reason I invented MetaAutomation.

Suppose the quality assurance (QA) team has a Rodney Dangerfield problem (no respect!). MetaAutomation can get them the respect they deserve by improving speed, quality, and transparency: what the software product is doing, exactly what is being measured, and what is not. Their achievements will be visible across the whole software team, and the whole team will be grateful.

Suppose the accounting department is preparing for a Sarbanes-Oxley (SOX) audit of the company. They want to know the value of software developed (or under development) in the company: What works? How reliably does it work? How fast is it? MetaAutomation answers that need, too, with unprecedented precision and accuracy.
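As a sketch of the kind of record that can back up those answers, each verification step can emit structured, self-describing data. The schema below is my illustration only, not MetaAutomation’s actual format.

import json
import time

def run_verified_step(name, action):
    # Run one named step, recording duration and outcome as structured
    # data that reporting, analysis, and audits can consume directly.
    start = time.perf_counter()
    try:
        action()
        status = "pass"
    except Exception as ex:
        status = "fail: %s" % ex
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return {"step": name, "elapsedMs": round(elapsed_ms, 1), "status": status}

# Usage sketch, with a hypothetical SUT:
#   print(json.dumps(run_verified_step("log in", lambda: sut.log_in(user, pw))))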

But MetaAutomation requires point #4 above: it requires people to recognize that automated verifications for quality are very different from manual (or human-decided) testing.

Once you accept that automated verifications are special and have a distinct potential, you open your team to new horizons in productivity and value.
