Abstract: A brief post – the tip of the iceberg in exploring the question ‘What is testing?’. If this intrigues you, then comment or contact me and let’s have a deeper discussion.
Many people mistake “checking” for “testing”. Checking is a part of testing, but not fully representative of testing as a whole. So what do I mean when I say checking, and how is that different from testing?
- Checking is the act of confirmatory testing, verifying specific facts and outcomes typically by following a script or test case.
- Testing is much larger and more holistic than that – I define testing as evaluating a product through exploration for the purpose of informing our product stakeholders about risk.
Our guiding light, the purpose of testing, is…
“to cast light on the status of the product and its context in the service of our stakeholders” – James Bach
If you are simply taking acceptance criteria/requirements, and then writing test cases based on that, you are selling yourself short and doing the product a huge disservice! Much of what we find as testers comes off-script, and high-value unknowns are found by letting humans do what humans do best – be true explorers!
My job as a tester is to inform my client as early as possible about any potential risks that may threaten the value or the on-time, successful completion of the project – so I must do testing, not simply checking. As testers, we must move the craft in a positive direction and get away from doing only claims verification. Verifying claims is important, but checking is just one piece of what testing actually is. Stop worrying about ‘green or red’ and instead ask ‘does a problem exist?’.

Ever been driving down the road when smoke starts coming from the hood of your car? You are going to pull over, even if the engine light has not come on. Are you going to keep driving until that light tells you there is a problem? I hope not! I hope you would use your fantastic human brain to make a smart first-order measurement and decide to pull over. You don’t need a red light to tell you there is a problem. Similarly, in testing, a red light may mean nothing at all, while a green light may deceive you into thinking there are no problems when in fact there may be – just open the hood and look! (or the “bonnet”, for my fellow testers across the pond)
So, if I asked you how you tested something, what would your answer be? That you simply used your knowledge, years of experience and some tools? Not compelling enough! I want to hear about Capability, Scalability, Compatibility, Charisma. I want to hear how your Flow testing varied from your Scenario testing and why those two are different. Tell me about the methods of testing you used. Tell me why the testing done was “good” enough. Tell me what roadblocks inhibited testing and how you worked around them – or which still stand in your way. Tell me what you did not test – many testers forget to talk about that, leaving stakeholders wondering whether certain items were even considered, and lowering their confidence in our ability to explore for the risks that matter.
Good testing generally doesn’t come from following heavy checklists, test cases or scripts – anyone can do that. So let’s do testing, which not everyone can, in fact, do well.
Raise the bar!
Crossposted: uTest – What Is Testing?