HCLSoftware: Fueling the Digital+ Economy


We all want to test a product very quickly. How do we do that? It’s tempting to say, “Let’s make tools and get it done!” But skilled cognitive work is not factory work. That’s why it’s all the more important to understand what testing is and how tools can support the task.

Here at the Test Centre of Excellence (TCoE), we distinguish between aspects of the testing process that machines can do and those that only skilled humans can do. We have done this linguistically by adapting the ordinary English word “checking” to refer to what tools can do. This parallels the long-established distinction between “programming” and “compiling.” Programming is what human programmers do. Compiling is what a particular tool does for the programmer, even though what a compiler does might appear to be exactly what programmers do. Think of it this way: no one speaks of automated programming or manual programming. That comparison should help you remember the difference between testing and checking.

Testing, then, is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes, to some degree, questioning, study, modelling, observation, and inference.

Testing is an open-ended investigation, whereas checking is short for “fact checking” and focuses on specific facts and the rules related to those facts. One common problem in our industry is that “checking” is often confused with “testing.” To reduce confusion, try this distinction: a check is describable, whereas a test might not be, because, unlike a check, a test involves tacit knowledge. We’re not saying that checking is an inherently bad thing to do. On the contrary, checking may be very important to do. We are asserting that for checking to be considered good, it must happen in the context of a competent testing process. Checking is a tactic of testing.
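To make the distinction concrete, here is a minimal sketch of a check: an algorithmic, machine-decidable evaluation of one specific fact about a product. The `apply_discount` function is a hypothetical stand-in for product code, invented purely for illustration.

```python
# A "check" in the sense above: a tool can run it unattended, and it
# decides one specific fact about the product, no human judgment needed.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product code: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def check_discount_fact() -> bool:
    """Fact to check: a 10% discount on 200.00 yields 180.00."""
    return apply_discount(200.00, 10) == 180.00

print(check_discount_fact())  # True
```

A tester, by contrast, might wonder whether percentages over 100 should be rejected, or how the rounding interacts with currency rules. Those questions draw on tacit knowledge and are not reducible to a single describable check.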

A tester is more than a tourist
Lots of things we do with a product that aren’t tests can help us learn about it. We can tour the product, see what it’s made of and how it works. This is invaluable, but it’s not quite the same as “testing.” The difference between a tester and a tourist is that a tester’s efforts are devoted to evaluating the product, not merely witnessing it.

To test, we must explore
To test something well, we must work with it. We must get into the nuances of it. This is an exploratory process, even if we have a perfect description of the product. Until we explore that specification, either in our mind’s eye or by working with the product itself, the tests we conceive will be superficial.

Even after we explore enough of the product to understand it deeply, there is still the matter of exploring for problems. Because all testing is sampling, and our sample can never be complete, exploratory thinking has a role throughout the test project as we seek to maximize the value of testing.

By exploration, we mean purposeful wandering, navigating through a space with a general mission, but without a prescribed route. Exploration involves continuous learning and experimenting. There’s a lot of backtracking, repetition, and other processes that look like waste to the untrained eye. Exploring involves a lot of thinking. Exploring is detective work. It’s an open-ended search. Think of exploration as moving through a space. It involves forward, backward, and lateral thinking.

Black-box testing is not ignorance-based testing
Black-box testing means that knowledge of the internals of the product doesn’t play a significant part in our testing. Most testers are black-box testers. To do black-box testing well, we first need to learn about the user’s expectations and needs, the technology, the configurations the software will run on, the other software this software will interact with, the data the software must manage, the development process, and so on.

The advantage of black-box testing is that we probably think differently than the programmer and thus are likely to anticipate risks that the programmer missed. The black-box approach emphasizes knowledge of the software’s user and environment. We’ve even heard it described as ignorance-based testing, because the tester is and stays ignorant of the underlying code.

The more we learn about a product, and the more ways in which we know it, the better we will be able to test it. But if our primary focus is on the source code and the tests we can derive from it, we will be covering ground the programmer has probably covered already, and with less knowledge of that code than he or she had.
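As a sketch of what black-box input selection can look like, the fragment below exercises a hypothetical `apply_discount` function only through its public interface. The inputs come from thinking about users and risk (boundaries, rounding, free items), not from the code’s structure; all names here are invented for illustration.

```python
def apply_discount(price: float, percent: float) -> float:
    # Stand-in for the product under test; to the black-box tester,
    # its internals are a closed box.
    return round(price * (1 - percent / 100), 2)

# Inputs chosen from knowledge of users and risk, not from code paths:
cases = [
    (100.00, 0),    # no discount at all
    (100.00, 100),  # everything free -- is that intended?
    (0.00, 50),     # discounting an item that is already free
    (19.99, 33),    # awkward rounding
]

for price, percent in cases:
    result = apply_discount(price, percent)
    # One explicit expectation: a discount never raises the price
    # and never produces a negative price.
    assert 0 <= result <= price, (price, percent, result)

print("all black-box boundary cases passed")
```

Note that nothing in the case list depends on how the discount is computed internally; a programmer’s structural tests would likely pick quite different inputs.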

When testing a complex product: plunge in and quit
Sometimes complexity can be overwhelming. We might even feel intellectually paralyzed. So, when testing a complex and daunting feature set, do it in bursts. Our mind has an amazing ability to cope with complexity, but don’t expect to comprehend a complex product all at once.

Try throwing yourself into the project for 30 minutes or an hour. Then stop and do something else. This is the method known as “plunge in and quit.” Don’t worry about being unproductive for this short time, and if you feel too confused, quit early.

The great thing about this method is that it requires absolutely no plan other than to select a part of the product and work with it. After a few cycles of “plunge in and quit,” you should begin to see the patterns and outlines of the product. Soon, more organized and specific testing and studying strategies will come to mind.

Confusion is a test tool
Is the specification confusing? Ambiguities in specifications are often there to cover up important disagreements among influential stakeholders.

Is the product confusing? It may be broken.

Is the user documentation confusing? This part of the product may be too complex, with too many special cases and inconsistencies to describe.

Is the underlying problem just difficult to understand? Some systems that we try to automate are inherently complex or involve difficult technical issues.

The more we learn about the product, the technology, and testing in general, the better our confusion serves as a compass, pointing us toward where the important problems lie.

Fresh eyes find failure
Making sense of something is a rich intellectual process of assimilating new information into what we already know, while modifying what we know to accommodate the new information. After we’ve made sense of a product or feature, we have a mental map of it, and our mind doesn’t work so hard. This can be a problem for us as testers. When we know a product very well, we can make assumptions about it, and we check those assumptions less often.

  • When coming to a product or feature for the first time, pay special attention to what confuses and annoys you. That may tell you something about how a user would react, too.
  • When new testers join the team, test alongside them. Watch how they react to the product as they’re learning it.
  • Beware of getting into a testing rut. Even if you’re not following rigid test scripts, you may get so familiar with a particular feature that you test it in progressively narrower ways.
  • Introduce variation wherever you can, or switch testing duties with another tester.
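One low-cost way to introduce variation is to let a seeded random generator choose inputs we would otherwise pick by habit. In this sketch, `apply_discount` is a hypothetical product function, invented for illustration; the test asserts a broad invariant rather than narrow expected values, so varied inputs stay cheap to generate.

```python
import random

def apply_discount(price: float, percent: float) -> float:
    # Hypothetical product function under test.
    return round(price * (1 - percent / 100), 2)

rng = random.Random(7)  # fixed seed, so any surprise is reproducible

for _ in range(1000):
    price = round(rng.uniform(0, 10_000), 2)
    percent = rng.choice([0, 1, 50, 99, 100])
    result = apply_discount(price, percent)
    # A broad invariant instead of a narrow expected value:
    # a discount never raises the price or makes it negative.
    assert 0 <= result <= price, (price, percent, result)

print("1000 varied cases, invariant held")
```

The fixed seed matters: variation is only useful if a surprising failure can be reproduced and investigated afterwards.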

One important outcome of a test process is a better, smarter tester
Good testers are always learning. As the project progresses, testers gain insight into the product and gradually improve their reflexes and sensibilities in every way that matters in that project. An experienced tester who knows the product and has been through a release cycle or two is able to test with vastly improved effectiveness—even without any instructions—compared to an inexperienced tester who is handed a set of written instructions about how to test the product.

Contact us to learn more about the Test Centre of Excellence (TCoE) at HCLSoftware.
