Monthly Archives: August 2015

The Improvement Continuum


Abstract: The Improvement Continuum is a dual-pronged concept, containing both product and personal components, like two heads on the same animal. One head pertains to improvement within a solution, product or service, while the other concerns itself with the human mind, particularly our capacity for learning. This idea states that a viable candidate can never reach a point at which it legitimately plateaus in quality. Therefore, by extension, any perceived quality plateau, intentional cases aside, must be a product of human misunderstanding or mis-measurement of the current state, rather than a shortcoming of the candidate itself. For the purposes of this article, the term “candidate” refers either to a solution, product or service that is currently being used in the market, or to an individual human’s capacity to increase mental operational quality through learning, so long as that human is not otherwise inhibited by a medical condition. In other words, both products and humans have the capacity for improvement.

 

Introduction:

It is widely accepted that a product or service can never reach a point at which it should intentionally arrive at a plateau in quality, unless that particular solution has been sunset. There’s an unseen and undiscussed black hole where sunset products go to die; to avoid it, improvements must continually be invented, prioritized and implemented. This may seem like an obvious statement, but what’s not so obvious is how to move the entity forward.

In software development, an increase in quality does not simply mean finding and fixing bugs, as that is only one facet of how the product can be improved.

When evaluating the product as a whole, we should decide as a team which areas need the most focus. What areas of improvement is your division good at? Which areas may have been neglected? Do your efforts to improve as a tester align with the current product risk areas? First, we must establish that candidacy for improvement actually exists. In other words, is improvement warranted for a given feature, perhaps on at least a quarterly cadence, or more rarely, for the product as a whole, perhaps on at least an annual evaluation schedule?

 

Determining Improvement Candidacy:

This section only applies to solutions (product and service offerings), not humans, since humans are inherently and indefinitely candidates.

As long as a product or service is a viable offering, its candidacy for improvement remains active. Even software solutions in maintenance-only mode, with a sunset timeline, are still candidates for being improved upon until EOL (the End-of-Life date). Keep in mind, just because something is a candidate for improvement, that does not mean improvement on that solution is mandatory. To understand this, we must look at the four types of improvement candidates that are abandoned.

 

Abandoned Candidates (Four Types):

Improvement candidates can be abandoned intentionally or unintentionally. How we handle these situations says a lot about our maturity as testers. This can be positive or negative, depending on how the abandonment was implemented. If you find yourself in this situation as a tester, take a moment of pause before getting upset. Try to realize that there’s a bigger picture outside of you, and that the idea that we are the sole ‘gatekeepers’ of the product is archaic. Michael Bolton frames this point well in his blog post, Testers: Get Out of the Quality Assurance Business. Understanding why a given product is or is not getting attention can greatly help you do your job as a tester. It is easy to become disillusioned with your product offering if you do not understand the business reasons behind the work being given to your team. Use the criteria below to be more informed about which camp your product lives in.

  1. Warranted Intentional: The term ‘intentionally abandoned improvement candidate’ sounds like it’d always be a negative, but this can in fact be a sound business decision, thus warranted. In this case, product and upper management within the business have compared the risks of abandoning improvement from both a financial and a reputation perspective. There are many sub-level considerations that play into each of these. Perhaps the revenue generated by the given product is negligible and efforts would be better focused on another solution or direction. Perhaps the known backlog of issues has been evaluated from a product risk perspective and deemed low-impact, and shoring up this backlog to maintain industry reputation would reside within the realm of diminishing returns.
  2. Unwarranted Intentional: Like the first type, management did at least make an intentional effort to get together and discuss the business needs, but due to a variety of reasons, of which only a small oligarchy may be aware, the product offering has been abandoned without justified cause. Unfortunately, an unwarranted abandonment of a given product offering usually does not come with much transparency down to the teams. It may not even be a top-down decision; tech leads and architects may have been involved, yet an unwarranted abandonment still took place in favor of a different alternative. Many times, unwarranted abandonment cannot be proven by those opposing the decision until it is too late. For example, the new offering craters after a few years in the market, while the previous solution would still be thriving. In this case, it may not be a matter of an oligarchy making the decisions, or a lack of transparency, but rather a major miss on industry expertise and demand forecasting. This can sometimes plague startups that hire amazing talent with incredible knowledge and good intentions, but lacking industry wisdom gained through experience.
  3. Warranted Unintentional: This is similar to the first type, except that external factors forced the abandonment. In the case of an innovative idea that does have a market, this usually only happens when a major mistake is made that threatens our humanity. For example, public exposure of a security hole in a financial product that shows it is storing PII (e.g. home address, SSN, DOB, etc.) in clear text in an unencrypted database. This can cause irreparable reputation damage that can take a product out of the industry almost overnight, no matter what damage control may happen after the fact. You could argue that this is unwarranted, but that position is based on the human notion that everyone gets a ‘second chance’, which often does not exist in many industries. Take a defibrillator, a medical device, for example. What if the initial model from a new startup killed patients in some edge cases? As a startup, recovering from the resulting legal situation would be close to impossible. Now, this also happens when the product never had a market to begin with, thus the initial investment was made based on good intentions rather than research of data points within the target industry. However, this usually manifests in one way or another before a mass go-live, since a scarcity of target clients would be one of the obvious indicators of a product headed in this direction.
  4. Unwarranted Unintentional: This is the rarest type of abandonment, since smart, intuitive ideas usually thrive in one vein or another. If in fact the product is sound and innovative with a need in the market, then this somewhat requires the planets to align, in that three factors must be present: external forces, sometimes even from those who wish to see a competing product fail; bad salesmanship, marketing, and demographic targeting based on the product’s features, such that no traction is gained in the market; and finally, an internal framework of individuals who lack ownership in the product.

 

Quality Plateaus:

Due to the nature of how improvement works, both within a product and a human, quality plateaus can be both intentional and unintentional. We’ve discussed various types of improvement abandonment, and now similarly need to discuss the different forms of quality plateaus that can take place. A product quality plateau can only be justified in the Warranted Intentional abandonment case described earlier. A quality plateau within a human can only be justified in the case of rare medical conditions, but ethically there’d never be an intentional case of such a condition; thus it is an outlier, external to this discussion.

 

Plateaus Within Products And Services:

Other than these edge cases, it has been established that an unaffected entity (see Abstract) cannot legitimately arrive at a quality plateau. By extension, any perceived quality plateau within a product must be a symptom of misunderstanding or mis-measurement of the current state, rather than a failing of the product itself. This also means that such plateaus always need to be remedied.

 

Plateaus Within The Human:

In the case of the human mind, a tester’s skill is a qualitative property, and cannot be mathematically or objectively measured; therefore, a quality plateau would be subjective at best. If such a plateau does exist within a tester, then it must be dynamically linked to an ethereal ‘learning to date’ + ‘ongoing learning’ measurement to equate to some qualitative scale or understanding. Most of the time it can be traced to many easy-to-identify (and easy-to-remedy) factors, such as: work ethic, laziness, lack of resources, poor management, misdirection and misinformation, etc. These problems are age old, though, and can be fixed. However, for those who have reached a state of Unconscious Competence, this can be a very legitimate concern.

That said, these people are few and far between within the testing community, and none of them, of course, have mastered all areas of being a good tester either.

The Perimeter Assumption:

Something else to be aware of when it comes to learning in various areas of testing is The Perimeter Assumption. This is something that many struggle with when it comes to testing and learning. It is the idea that as long as I know the most important items (the extreme edges/test max capacity), and I understand the general framework, then I don’t need to worry about the little things (other considerations within those boundaries). This assumption is troubling but still influences us as testers. It can make us comfortable and complacent in our learning if we are only worried about the most extreme scenarios when testing. For example, when testing a credit application, we focus on making sure the form submits but might miss that some negative testing exposes a major flaw in the website, exposing PII stored in the database. Remember, not all showstopper product risks are found by testing in high-risk areas.

 

Time Management:

Learning is hard. If it weren’t hard, it wouldn’t be beneficial, nor would it have the allure it does for so many. I used to say that time management was my biggest roadblock, but then realized that I created that problem for myself, so I needed to stop complaining about it. Learning is liquid. By this, I mean that learning is a huge phase space with no boundaries, so it is easy to create insurmountable learning obstacles that we never address. Brian Kurtz, one of my mentors, often says, ‘If you give me one hour to test something, I’ll use that entire hour. If you give me one week to test that same item, then I will use the entire week.’ The same is true with learning; we will fill whatever time we are given. However, there’s so much to learn, and not all of it is valuable to us as testers. We have to prioritize what should be learned, since business priorities force us to time-box our learning to an extent.

So, I like to use this ‘gate’ visual to describe how we self-sabotage our own learning, in hopes that this might help others become more aware of their own potentially self-imposed learning roadblocks:

It may sound ironic, but unlike other roadblocks in my life, learning roadblocks have little to do with learning itself being the problem. Product feature roadblocks for example are usually based on knowledge about that feature being needed to continue development and testing. With learning, the roadblocks tend to come from all other angles, and this is simply because I have traditionally prioritized learning after all my other duties.

As humans, we will naturally seek to fill our time, and typically we will fill it with items that are familiar to us. We might not even enjoy some of these items. For example, excessive meetings can sometimes creep up in the scrum process; however, we get into a cycle of what we believe is expected of us and rarely challenge it. Sure, we challenge acceptance criteria, developers, bug reports and product management, but at some point through the sprint, testers have satisfied some of these priorities, yet continue to use the remaining team priorities as reasons why individual learning cannot be achieved.

Sometimes, we must work with our team to evaluate schedules and product risks, in order to open these gates consciously so that we can target the learning we want done. We must reprioritize, and minimize the time it takes to address some of these expectations. If we time-box our activities, then we’ll have time to address learning on a continual basis, and set aside time within each sprint for this. So, stop self-imposing these barriers on yourself. We invent this structure, then complain and beat ourselves up when we don’t get to do the learning we want. When asked why we haven’t focused on using a certain test model or pursued reading that book about testing we said we would months back, we pass it off as not being good at time management. Make learning as a tester one of your main priorities, and if you have to, work with your Scrum Master, Product Owner and Developers on your team to build time into your sprint to prioritize learning just like you would a user story.

Constantly improving as a tester also involves a great amount of humility to realize we do not have all the answers. Imagine a contractor called by a customer who walks into the house with only a nail and a hammer, under the assumption they can handle anything the homeowner might throw at them. When stated like that, it’s a completely ludicrous assumption on the part of the contractor. So, why do we do this as testers? We try to take on all testing scenarios with our current knowledge set, but that set is just like an inadequate toolbox for the jobs put before us. We need to ‘fill our mental toolbox’, as my colleague Brian Kurtz says, to better address this common pitfall.

 

Action Items For Testers:

Now that you have this knowledge, how do you use it? How does being aware of ideas like improvement abandonment and quality plateaus actually help you, day to day, to better understand how to continually improve? Ideally this article serves as a jumping-off point. Awareness of a problem is the first beneficial step to moving your mental state out of Unconscious Incompetence, which is the “I don’t know what I don’t know” state. What’s the barometer to know if you are in this state? Simply ask yourself if this article came across a little heavy. Does this information seem confusing or foreign to you? If so, then perhaps it’s worth investigating whether you genuinely desire to become a better tester, and combat any potential learning plateau of which you may or may not be aware.

So, where do you go from here? Read Quality Concepts and Testers Tell A Compelling Story. It is our job as testers to “cast light on the status of the product and its context, in the service of our stakeholders.” – James Bach. We can only do this effectively if we continually educate ourselves on how the product works, and continually improve our own mentality when it comes to how to test. If you are testing for the sake of testing, and simply present in your job to collect a paycheck, then I encourage you to take an introspective look at the reason why. Our responsibility as testers is to the product and its stakeholders, so ultimately if you are occupying a test position within a company, but know you are lacking the passion to be a constant learner for the betterment of our stakeholders, then it may be time to evaluate whether testing is your true calling.

Conclusion:

The improvement continuum exists because there is no true zero. Conversely this also means there is no true 100%. In short, quit trying to quantify things that are only qualifiable; rather, concern yourself with identifying the actions needed to reach the next step on the infinite staircase of learning as a tester.

Testers Tell A Compelling Story


Abstract: If you’ve spent any time in the context-driven testing community, then you have probably heard the following directive: As testers, we must tell a compelling story to our stakeholders. But, what does this really mean? Are we just talking about a checklist here? Are we just trying to sound elite? Is this just some form of covering ourselves to prove we’re doing our job? Well, none of the above actually. The purpose of doing this is to continually inform our stakeholders in order to increase their awareness of potential product risks so that ultimately they can make better business decisions. We can do this by telling them about what was and what was not tested, using various methods. First we must level-set on the chosen language here and agree on the meaning of the words “compelling” and “story,” then we’ll dive into the logistics of how to deliver that message.

NOTE: I am also going to use the term “Product Management” quite often in this post. When I say that, I am referring to the people who end up doing the final risk assessment and are making the ultimate business decisions as it relates to the product (more about that here from Michael Bolton). This may involve the Product Owner on your team, or it may involve a set of external stakeholders.


Being Compelling:

The word “compelling” can seem a bit ambiguous and its meaning can be rather subjective, since what is compelling to one person is not so to another. What convinces one person to buy a specific brand does not convince the person right next to them. However, regardless of your context, we need to set some guardrails around this word. The reason for doing this is to remain in line with the community’s endeavor to establish a common language so that we can properly judge what qualifies as ‘good work’ within the realm of testing, and in this case specifically, how good one is at telling a compelling story as a tester. Yes, you as a tester should be constructively judging other testers’ work if you care about the testing community as a whole. We cannot do that unless we’re armed with the right information. So first, let’s take a very literal view, and then move forward from there:

“Compelling” as defined by Merriam Webster:

(1) very interesting : able to capture and hold your attention.

(2) capable of causing someone to believe or agree.

(3) strong and forceful : causing you to feel that you must do something.

The information you present should carry with it hallmarks of these three definitions, regardless of the target stakeholder’s role within the company. Let’s elaborate, specific to the context within a software development environment.

  • (1) Interesting: As a tester, by being a salesperson of the product, and a primary cheerleader for the work the team has done, I am satisfying the first criteria. I know all the ins and outs of the product, thus being a subject matter expert gives me the ability to speak to its captivating attributes in order to draw my stakeholders into the discussion. (This also involves knowing your stakeholder, which I could write an entirely separate article about, explaining how you tailor your story for specific stakeholder roles within the company – more on that later).
  • (2) Cultivate Agreement: As the tester for a given feature, you are aware of an area’s strengths and weaknesses. It is your job to take multiple stances, and defend them, be they Pros (typically in the form of new enhancements or bug fixes) or Cons (typically in the form of product risks). Just like the defense given by an attorney in their closing arguments of a trial, so too should you defend your positions, regarding the various areas of the product that have changed or are at risk. Since you are informing on both what you did and did not test, you can aid much better in joint risk assessment with Product Management. This is how testers influence the product development process the most; not in their actual testing, but in their conversations with those who make the business decisions when telling the story of their testing. Give your opinions a solid foundation on which to stand.
  • (3) Take Action: All information that you give to stakeholders should support any action items they may need to take based on that data, thus you should be a competent professional skeptic in your testing process so that the data best leads Product Management toward fruitful actions. Your feedback as a tester is instrumental in well-intentioned coercion, or since that’s typically a negative term, let’s call it positive peer-pressure. Ideally your Product Owner is embedded or at least in constant communication with the scrum team, thus any actions that arise from this information will be of little or no surprise to the team.

On that note, surprises generally only occur when the above types of communication are absent, which of course is not just limited to testers. Either user requirements are ambiguous or not prioritized (by Product Management), or perhaps there are some development roadblocks and test depth is not made tangible (by the Scrum Team). I use these three elements as a heuristic to prime the thinking of our stakeholders, so that they can make smarter and wiser product management decisions for the business.


Becoming A Storyteller:

It might seem obvious to say, but the best storytellers always include three main parts: beginning, middle and end. More specifically, characters, conflict and resolution. In a well-structured novel, the author typically introduces the reader to a new character for a period of time, for the purpose of initial development for the audience. Soon, a conflict arises, followed by some form of conclusion, perhaps including resolution of some interpersonal struggles as well. In testing, we want to develop the characters (feature area prioritization), overcome a conflict of sorts (verify closure of dev tasks and proper bug fixes based on those priorities) and come to a conclusion (present compelling information to Product Management about your testing).

Just like an author describes a character’s positive traits as well as their lacking characteristics, we too should be sure that our testing story includes both what we did test and also what we did not test. Many testers forget to talk about what they did not test, and it goes unsaid, which increases the risk gap. This would be akin to an author leaving pages of the book unwritten, and thus open to interpretation. However, unlike a novel where a cliffhanger ending might be intentionally crafted in order to spur sales of a second book, omission of information to our stakeholders should never be intentional, and is not part of the art and science of testing. If this is done, the human brain will naturally fill in gaps with its own knowledge, which may be faulty, or worse, make assumptions which can become fact if left unchecked for a long enough amount of time. The problem with assumptions is that they are grown within a hidden part of the brain, only knowable to that individual, and typically do not expose themselves until it is too late. Leave as few gaps as possible by becoming a good storyteller.
It can be dangerous when a tester becomes “used to” the mental state of not telling a story; believing that their job is simply defined by their test case writing and bug reporting skills as they currently exist. As testers, let us not be so limited or obtuse in our thinking when it comes to exploring ideas that help us become a better tester, otherwise our testing skill-craft risks being destined to remain forever in its infancy.

 

The Content Of The Testing Story:

Now, no matter how good a salesperson you might be, or how convincing and compelling you may sound to your various stakeholders, your pot still needs to hold water. That is to say, the content of your story must be based on solid ground. There are three parts to the content of the testing story that we must tell as testers: product status, testing method and testing quality.

  • Product Status: Testers should tell a story about the status of the product, not only in what it does, but also how it fails or how it could fail. This is when we report on bugs found and other possible risks to the customer. Don’t forget, this report would also include how the product performs well, and the extent to which it meets or exceeds our understanding of customer desires.
  • Testing Methods: Testers also tell a story about how we tested the product. What test strategies and heuristics are you using to do your testing, and why are those methods valuable? How does your model for testing expose valuable risks? Did you also talk about what you did not test and why (intentional vs blocked)? Tip: Artifacts that visualize how you prioritize risk testing can greatly minimize your storytelling effort.
  • Testing Quality: Testers also talk about the quality of the testing. How well were you able to conduct the testing? What were the risks and costs of your testing process? What made your work easier or harder, and what roadblocks should be addressed to aid in future testing? What is the product’s testability (your freedom to observe and control)? What issues did you present as testers, and how were those addressed by the team?

All three of these elements help us to make sure the content of our testing story is not only sound but also durable in order to hold up under scrutiny.

 

The Logistics of Telling the Story:

So, what is our artifact, that we, as testers, have to show for our testing at the end of the sprint? No, it is not bug counts, dev tasks or even the tests we write. Developers have the actual code as their artifact, which is compelling to the technical team, given it can be code reviewed, checked against SLAs, business rules, etc. As testers, traditionally our artifact has been test cases, but as a profession, I feel we’ve missed the mark if we think that a test case document is a good way to tell a compelling story. Properly documented tests may be necessary in most contexts, but Product Management honestly does not have the time to read through every test case, nor should it be necessary. Test cases are for you, the testing team, to use for the purposes of cross-team awareness, regression suite building, future refactors, dev visibility, etc., while it is actually the high-level testing strategy that provides the most bang-for-buck value for stakeholders in Product Management.

As far as the actual ‘how-to’ logistics of the situation, there are multiple options that testers should explore within their context. Since humans are visually-driven beings, a picture can say a thousand words, and the use of models provides immense and immediate payoff for very little actual labor. Now that we’ve established criteria for how to make a story compelling, and what the content of that story should be, let’s take a look at the myriad of tools at your disposal that can help with the logistics of putting that story together.

Test Models:

Models that help inform our thinking as testers will inherently help Product Management make better business decisions. This influence is an unavoidable positive outcome of using models. The HTSM, the Heuristic Test Strategy Model by James Bach (XMind Link), is a model that can greatly broaden our horizons as testers. If you are new to this model, you can focus initially just on the Quality Criteria and Test Techniques nodes. These give you a ready-made template that will not only help us become subject matter experts in telling that compelling story to our stakeholders, but will eventually just become part of our definition of what it means to be a tester, rather than feeling like extra work.

[Image: htsm-basic]

By using models in grooming, planning and sprint execution, a tester is able to expand on each node for the specific feature, as well as prioritize testing of each one using High, Medium and Low, as a way to inform Product Management of their tiered test strategy. This kind of test modeling can also be made visible to the entire team before testing even begins, allowing testers to be more efficient and communicative, better closing the risk gap between them and the rest of their team, namely developers. More often than not, developers somewhat solidify how they are going to code something in their head after their team planning sessions, so making the test strategy available to the entire team allows them to compare both sets of intentions, their own and the tester’s, with the outcome of squashing assumptions and exposing even more product risks.
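To make that idea concrete, here is a minimal sketch of how such a tiered strategy might be captured as plain data and shared with the team; the node names are real HTSM quality criteria, but the feature and the High/Medium/Low assignments are hypothetical, chosen only to illustrate the grouping:

```python
from collections import defaultdict

# Hypothetical HTSM-style quality-criteria nodes for an imagined "Login"
# feature, each tagged with a priority tier for this sprint.
test_strategy = {
    "Capability": "High",
    "Reliability": "High",
    "Usability": "Medium",
    "Charisma": "Low",
    "Security": "High",
    "Performance": "Medium",
}

def by_priority(strategy):
    """Group nodes by priority tier so the story can be told top-down."""
    tiers = defaultdict(list)
    for node, priority in strategy.items():
        tiers[priority].append(node)
    return {tier: sorted(nodes) for tier, nodes in tiers.items()}

tiers = by_priority(test_strategy)
print(tiers["High"])  # the nodes we commit to covering first
```

Kept as data like this, the tiered strategy can be reviewed in planning and attached to the story just like acceptance criteria, which is exactly the visibility described above.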


Testing Mnemonics:

Mnemonics is a fancy term for acronyms that spell words or phrases that are easy for humans to remember. For example, FEW HICCUPPS is one: “H” stands for History, “I” for Image, “C” for Comparable Products, etc. SFDIPOT (San Francisco Depot) is yet another that is meant to prime our thinking about how to test. These mnemonics are structured this way to allow our test coverage to be more robust, helping us fill gaps we would have otherwise missed; not because we are inept, but because we are simply human. Here are some other popular testing mnemonics used by the community that should help you with your test strategy and ease storytelling: Testing Mnemonics

At CAST 2015, a testing conference that I attended in August, in Grand Rapids, Michigan, I listened to Ajay Balamurugadas talk about fulfilling his dream to finally speak at an international conference on testing. His passion for testing was infectious, and one of his suggestions was to pick a single mnemonic each day from that link above, and try it out. It takes only five minutes to understand each one, and then a time-box of 30-60 minutes to implement on a given story. Any tester who claims they do not have time to try these is doing one of the following, none of which are constructive: deluding themselves about the reality of time, intentionally shirking responsibility, confining themselves to their own cultural expectations or actively refusing to learn and grow as a tester. Try these out, see what happens. Use the ones that work for your context, and discard the others, but be sure to tell your team and other testers within your division what did and did not work for you, since sharing that information prevents others from having to reinvent the wheel.

Decision Tables:

I was reminded at CAST 2015, by Robert Sabourin, about how testers can use decision tables, something I had not done since my days testing access control panels in the security industry, yet the concept can easily be applied when exploring software pathways and user scenarios. In my opinion, this is a more mathematical way to approach storytelling, using boolean logic, but it can be just as effective. The end artifact is still a somewhat thorough story of how you are going to conduct your testing, but it should be noted that decision tables do not account for questions raised during testing or areas that the tester will not test; so, these aspects should be documented along with the presentation of a decision table. While more mathematical than straightforward testing models like the HTSM, and arguably less user friendly, this visual can still easily be explained to non-technical folks within Product Management. I suggest it here since this method may appeal to minds that are more geared toward this type of thinking. [Image: decision-table]

So, how does it work? In short, testers construct test flows and scenarios in a format that contains three parts: Actions, Conditions and Rules. These components, along with the expected outcome, True or False, determine the testing pathways. There is no subjectivity, as there is with non-boolean expected results, since each outcome is on or off, a 1 or a 0. This paints a very clear picture of how a feature has been tested, and makes plain which other scenarios remain untested. It gives product owners both insight into your test strategy and awareness of potential risks that perhaps they had not yet foreseen as you explore the product for value from a customer perspective; albeit a simulated customer perspective. Remember, we can never be the customer, but we can simulate click paths using established user personas in our flow testing process.
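To make the Conditions/Rules/Actions idea concrete, here is a minimal sketch in Python. The feature, condition names and expected actions are all hypothetical examples, not anything from a real product; the point is only to show how boolean conditions map to unambiguous expected outcomes, and how enumerating every combination exposes untested scenarios.

```python
from itertools import product

# Hypothetical login feature. Each condition is strictly boolean,
# so every rule's expected action is a 1 or a 0 -- no subjectivity.
CONDITIONS = ("valid_credentials", "account_locked")

# Rules: one entry per combination of conditions -> expected action.
RULES = {
    (True,  False): "grant_access",
    (True,  True):  "show_locked_message",
    (False, False): "show_login_error",
    (False, True):  "show_locked_message",
}

def expected_action(valid_credentials: bool, account_locked: bool) -> str:
    """Look up the expected outcome for one combination of conditions."""
    return RULES[(valid_credentials, account_locked)]

# Enumerating every combination makes coverage (and gaps) explicit:
# any combination missing from RULES is, by definition, untested.
untested = [combo
            for combo in product([True, False], repeat=len(CONDITIONS))
            if combo not in RULES]
```

A table like this doubles as the artifact you show Product Management: the `RULES` mapping is the decision table itself, and the `untested` list is your honest statement of what the table does not cover.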

Side Note: If you are not doing flow testing based on the established User Personas, then ask your Product Management team to provide those to you so that you can do better testing work in that area. Anyone conducting flow testing using their own self-created click-through paths, apart from your established industry Personas, may not be adding as much product value as they believe.
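As a rough illustration of persona-driven flow testing, personas can be captured as data and their click paths turned into page transitions to exercise. The persona names and paths below are invented placeholders; in practice they would come from the User Personas your Product Management team has established.

```python
# Hypothetical personas and their click paths -- replace these with the
# User Personas established by your Product Management team.
PERSONAS = {
    "new_customer": ["landing", "signup", "onboarding", "dashboard"],
    "power_user":   ["login", "dashboard", "reports", "export"],
}

def flow_steps(persona: str) -> list:
    """Turn a persona's click path into (from_page, to_page) transitions,
    each of which becomes one step in a flow test."""
    path = PERSONAS[persona]
    return list(zip(path, path[1:]))
```

Driving your flow tests from a shared structure like this keeps the click paths traceable back to the agreed personas, rather than to any one tester's habits.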

Why do this extra work?:

We should be able to qualify the testing we have done on a given feature in a way that is digestible by our stakeholders. Again, this is for the sake of increasing awareness, not simply proving that you ‘did your job’. If your Product Owner asks you, “What is your test strategy for Feature X?” then what would your answer be? Will you fall back on the typical response and just tell them you used years of knowledge and experience with the product to do the job? Or, will you be able to actually show them something that substantiates your testing from a high-level view, something they can understand and from which they can garner real value? The latter, I hope. Believe it or not, your stakeholders need this information. Some may claim that they have been ‘getting by’ just fine all this time without providing this extra level of storytelling, so they do not need to do this. I liken this argument to a swimmer saying he beat everyone in his heat, and therefore he’s ‘good enough and doesn’t need improvement.’ First in your heat might be impressive, but in the greater competition, outside of that vacuum, those stats might fall flat when compared to the larger pool of competitors. Try to look through a paper towel roll and tell me you can see the full picture without fibbing a little.


On that note, it’s our directive as testers to be constantly learning from each other within the community, a community most testers have yet to explore. We’ve all heard that teaching is ‘to learn something for a second time.’ By forcing ourselves to use new cognitive tools to tell a story, we also help ourselves become Product SMEs, allowing us to be more thoughtful and valuable as testers. This benefits not only the company but your own personal career path as well. If interested, you can read more on that in my blog post titled The Improvement Continuum.

Tailor Your Story:

Finally, there are multiple ways to tell the same story, and your methods should change depending on your audience. For example, we should not use the same talking points with C-level management as we would with our Product Owner. Since the relationship to the product is different for each role within the company, the story should also be different. You would use the same themes, obviously, but your language should be tailored to best fit their specific perspective as it relates to the product. In Talking with C-Level Management About Testing – Keith Klain – YouTube, Keith Klain discusses how different that messaging should be, based on your target audience. My favorite quote from that video is when the interviewer asks Keith, ‘How do you talk to them about testing?’ to which he replies, ‘I don’t talk about testing,’ at which point he explains how we can discuss testing without using the traditional vernacular. Being aware of your audience should influence not only how you test but how you talk about your testing. I might be compelled to write another blog post specific to this topic; that is, if there’s enough interest in how to mold the testing story based on the various roles within your stakeholder demographic.


Conclusion:

It is common for Product Management and development teams to be on two completely different pages when it comes to managing customers’ expectations. Developers and testers can lose sight of the business risks, while product owners and VPs can lose sight of the technology constraints. Ultimately, it is the job of Product Management to make the final call for deployment of new code, while our job as testers is to inform those management folks about any potential product risks related to the release. This is mentioned in the Abstract, but it is worth highlighting again; here’s a good blog post by Michael Bolton exploring that tangent further, Testers: Get Out of the Quality Assurance Business « Developsense Blog. Again, the purpose of testing is “to cast light on the status of the product and its context in the service of our stakeholders.” – James Bach. Testers tell a compelling story, but at the end of the day, your story should roll up to that. If it does not, then reevaluate whether the information you’re telling is for your benefit or your stakeholder’s. Be professionally skeptical and ask yourself questions: Is this worth sharing? Have I made it compelling enough to drive home my progress? Many Product Owners have not had any interest in what their tester documents because it has traditionally been of little value to them. Don’t use a Product Owner’s lack of desire as an excuse to stunt your own growth as a tester. Get good at this, and become a more responsible tester. While the failing of apathy is the responsibility of the entire team, we are at the helm of testing and have the power to change it for the better.

I’d like to hear your feedback, based on your own experiences of how you tell your testing story to your stakeholders. I’ve made the case for us, as testers, needing to tell that story. I’ve also gone one step further and provided you with models and other techniques you can use to get started putting this into action. I am eager to hear how you currently do this, as well as which parts interest you most, from the material I have presented here.

Quality Concepts For Testers

Abstract: I’ve put some resources together on techniques, tools and most importantly; new ways of thinking, that I believe would be beneficial to those within our skill-craft of testing. I know we come from a variety of backgrounds, so I wanted to share some of the quality concepts that I see as important from a testing perspective so that we have a common baseline from which to operate.

First, let’s be honest: as software testers, we are not pushed by the actual work of testing to continually improve. While software developers are forced to continually adapt to new technologies to stay relevant, testers can easily settle into a comfortable routine. In short, testers are more prone to become products of their environment and continue doing the same level of work, so we must consciously pursue information that makes us better at what we do, to ensure we do not plateau in our learning (see my other blog post entitled The Improvement Continuum). Does everyone get into this rut, this plateau mindset? No way, but can we sometimes plateau and reach a point where we feel like, ‘My process hasn’t changed in six months, am I still adding value? Am I still increasing the quality of the product as well as my own mindset?’… You bet. These are valid questions, so I am hoping this information will help you feel more empowered.

Some of you might know this already, but I am a big believer in the ideals put forth in Context-Driven Testing (CDT), which states that there are no “Best” practices, but rather “Good” ones that fit the situation, team and industry you are in. Quality is also subjective, depending on the value given to it in the specific circumstance. Your stakeholders define what that quality is worth; it does not simply inherently exist. Therefore, “Quality is value to some person,” as Gerald Weinberg put it; or, more precisely, “Quality is value to some person who matters” – Michael Bolton/James Bach.

Many times as testers, we push forth in testing with our own view of what quality is, but we do not re-evaluate that term from our stakeholders’ point of view for each project or feature/epic we work on. How do you know if you are a context-driven tester? Use the following heuristic (rule of thumb):

If you genuinely believe that quality is subjective and also believe that each story or set of stories has a different target group of stakeholders, then you must also believe that revisiting your definition of quality is a ‘must do’ when switching between projects.

But before we get too deep, here is a quick guideline that I like to give both new and veteran testers to make sure we’re in the same ballpark and moving in a common direction:

Chances are, there’s information here that is not familiar and piques your interest. If this information does not interest you, then ask yourself if you are a detractor or a promoter. This is going to sound harsh, but it is the truth: you are either moving the testing community forward, or consciously remaining in the dark. The more I dive into the context-driven test community, the more I realize there is to be learned. My preconceptions are constantly shifted and modified as I explore this kind of information. My challenge to you: find content in here, or related content and tools, and take a deep dive into it over the course of a few weeks or even months. Become an SME (Subject Matter Expert) in a given topic, then actively share that with your team (Community of Practice meetings, Weekly/Monthly Roundtables, etc.) and in time become a known mentor/thought-leader on that subject, which should organically draw others to you. What I’ve listed above is just the tip of the iceberg – there are so many avenues to explore here, so finding something you could get passionate about should be the easy first step. The hard part is seeing it through, but having others on a team pursuing endeavors along similar lines should give you strength.

My hope is that this will spur brainstorming across your teams, or help you find some new learning pathways if you are testing on an island. A lot of these ideas and tools are great, but putting them in context through ongoing discussions is even more useful. Please feel free to leave comments on what has and has not worked for you; I’d love to engage. Also, the discussion will benefit others, which is the whole point of sharing these ideas to begin with.

Welcome

Welcome To My Blog

First, I’d like to thank you for spending your time here, given the countless resources for learning that are available to us through other mediums. If you have not yet read my main About page, then please do that as well to garner more context on where I am coming from.

While this blog and its contents are my own original work unless otherwise noted, I do not claim to be the resident subject-matter expert on any given topic. My mentality is in a constant state of evolution in regard to my paradigms and heuristics for handling testing, shepherded by some solidified base pairs that act as guard-rails to assist my movement through the four stages of learning. I make an intentional effort to gut-check myself and verify that I am at least at the third stage, conscious competence, before I publish a post. Unconscious competence within testing is my ultimate goal, but the very nature and vastness of testing precludes any possibility of setting a date-stamped milestone in that realm.

Everything I post, must roll-up to supporting this directive:

“The purpose of testing is to cast light on the status of the product and its context, in the service of my [stakeholders].” – James Bach

James uses the term “clients,” but I prefer “stakeholders” hence the bracketed quote modification. As a proponent of context-driven testing principles, it is my intent to spend the majority of the time writing up my own thoughts and ideas on this blog; however, there may be times that I feel it necessary to share a given topic or am otherwise motivated to share the work of others within the community, at which point they will be duly credited to the best of my abilities.

I encourage my readers to leave comments, questions or suggestions on any of my blog posts as they relate to the material. I ask that my readers be more reflective and less reflexive. My own personal heuristic for doing this is to read a blog at least twice, at least twenty-four hours apart, to help formulate my comment. Some postings are more basic, and thus the heuristic is not appropriate in those cases; however, I find that my comments are more cohesive and coherent when I use that method for deeper discussion. All external comments will go into a queue which I will moderate; then, within a short time period, I will post the original comment along with my reply. I have seen this format work well on other blog- and article-driven websites. Since I have the same expectation of the community that I have of myself, I expect to be challenged by you and other readers. Hopefully this is done in a way that is mature, respectful and facilitates discussion.

Finally, if you had not already figured it out by now, I tend to write conversationally, so you may see technical flaws in the grammar, become frustrated with sentence constructs, or experience superfluous comma usage where I intend there to be conversational pauses. Thank you for tolerating some of my idiosyncrasies during your time spent on my site. It is my highest hope that you find this information valuable, and more importantly, applicable within your own context.

– Connor Roberts


A little something extra…

At the time of writing this I have over twenty partially completed blog posts in my unpublished queue. In the interest of transparency, and to give you an idea of what kind of topics I might be discussing, below is a list of the current working titles. Since this is a blog, and not a book, I currently have no specific preference on the order of topics. If a title catches your eye, make me aware of your interest, and I’ll do my best to bump it up in the cadence.

  • Scheduled for publication by September 1, 2015:
    • Testers Tell A Compelling Story
    • The Improvement Continuum
    • Quality Concepts
  • Scheduled for publication by September 7, 2015:
    • A Tester’s Sprint Framework
    • Heuristic Test Strategy Model (HTSM)
    • A Radio Graph For Testers
  • In-Progress/Unscheduled:
    • CAST 2015: Distilled (bumped up)
    • Ethics in Testing (bumped up)
    • Professional Reputation (bumped up)
    • Balance within Testing (bumped up)
    • Fighting Occam’s Razor
    • Design Acclimation’s Influence On Testing
    • Over Stimulation and Test Degradation
    • Perspective, Bias and Free Will
    • The Tester
    • Product Advocacy in Testing
    • Autonomy, Mastery & Purpose
    • A Case For Cases
    • Conjunctive Test Strategy Design
    • The Ladder of Testing Paradigms
    • The Iterative Learning Requirement
    • Biology And Testing