Tools

Blog posts in the “Tools” category cover the software packages, scripts and other processes or tools available to us, within a given context, that help us become more efficient testers.

A Tester’s Guide To The Galaxy

Abstract: I’ve created a reference card pack that you can use to do better testing, by fostering a team-driven, holistic approach to exposing high-value product risks.

Overview:

There are three main ways that we learn: Ingestion (books, blogs, models), Collaboration (conferences, discussions, webinars, meet-ups) and Experimentation (exercises, modeling, day-to-day exploration, etc.). Recognizing that a myriad of options exists to fit your own learning style in advancing the testing craft, I’d like to introduce another tool that may help: Tester Reference Cards.

Purpose:

Previously, I presented a new model/framework for testers, A Sprint Framework For Testers. My intention was not that testers use it as a script, but as a model to inform their thinking; however, it does need some rewording, with less emphasis placed on test cases, to more properly represent the context-driven mindset that I actually possess. While deciding how to reword some of those ideas, a new artifact sprang forth: these reference cards. Like the framework, they are not scripts to follow, but rather a guideline for performing better testing within each stage of your development process. While I believe the framework can provide value, I feel that converting it into an immediately tangible form that can be applied in the moment has even more intrinsic value. In other words, the Sprint Framework had a baby, and this is it!

Reference Cards:

[Image: tester reference card pack]

Conclusion

Keep these as reference sheets in digital form or print them out double-sided (duplex) for a physical manifestation that can be shared by various team members. These reference cards can be used to prompt healthier and more holistic discussions in grooming sessions, sprint planning meetings, team retros, etc. They can also be used in groups or individually by programmers, testers, product owners, scrum masters and other internal stakeholders. I’ve provided you with the tool, but it is only as useful as you apply it within your context. Use whichever method you feel adds the most value for your given context and the various learning styles within your team.

A Documentation Story

Abstract: This is a story about an experience that Brian Kurtz and I had in shifting test documentation strategies from an unintentional, all-manual approach to the use of an exploratory logging tool. We also talk about our observations on how that worked to our advantage within our context, recapturing both time and operational expense. The testing community is always going back and forth on how to balance manual testing with automation. In that vein, we’re convinced that exploratory logging tools can help by allowing manual testers to spend more time on actual testing: investigating, learning about the product, exploring for value, etc., and less time on the granular minutiae of documentation. These tools can help minimize the administrative overhead traditionally mandated to or self-imposed by testers. This experience can tell us something about the larger picture, but does not give us the whole story, and by no means does this include the only measurements we took to make our business case.

Note: When we say “test documentation” in this article, we’re not referring to the usual product outlines, coverage maps, etc. that the feature teams will create, but rather one specific team’s documentation focused on bug/session reporting. We realize that bug reporting is only one small part of what a tester does, but this proof-of-concept was executed on a team that does a lot of claims verification and manual check-testing, since they operate separately, outside of our normal scrum teams.

Special thanks to Brian Kurtz for auditing and contributing to this work.

Overview

When, how and to what extent to document has always been something of a quandary to many. When I was a young tester, I felt it was a must. We must have documentation, simply for the sake of having it. We must write down everything about everything. There was no consideration of whether that documentation actually provided value. As I grew in my testing career, and through the mentorship of others, I realized that I needed to undertake a shift in my thinking. My new barometer came in two forms. The first was the question, “What problem am I trying to solve?” The second was the word satisfice. In other words, how much documentation is enough to be acceptable based on what is actually necessary, rather than what we traditionally have done as testers? This involved learning to align my testing compass to point toward the concerns of product management, rather than simply all the concerns that I had.

Recently, we were working with a team that wanted to solve a few problems: decreasing the cost of testing as well as the amount of time it takes to do that testing, all while raising product quality. Management always wants it both ways, right? More quality with less time. Well, we found a tool that we believed could help reconcile this tension, while at the same time allowing our team to do better testing.

Of course, the first step after choosing a tool (which required much research) was to make a business case, as this would be external to the already approved yearly budget. Since this was a new cost to the company, our business case had to be compelling on both fronts: cost savings and time savings. The tool we landed on was QASymphony qTest eXplorer, an exploratory logging tool that takes much of the administrative overhead out of bug reporting. It costs $39 per user per month, which comes to $468 per tester per year; since the team we’re targeting has three testers, that’s $1,404 per year. With the cost of the proposed tool in hand (we’ll subtract it at the end), let’s take a look at some other expenses and time estimates (a short script reproducing this arithmetic follows the comparison below):

Manual Testing without an Exploratory Logger Tool

  • Description: Each tester was doing their own form of bug reporting/session-documentation, either through Notepad++, Word, Evernote or directly in Jira.
  • Time spent (single tester, 1 day): 30 Min/Site @ 4 Sites per day = 2 hours per day.
  • Time spent (single tester, 1 week): 2 Hours per day x 5 days = 10 hours per week.
  • Contractor bill rate (unrealistic sample): $20* per hour x 10 hours = $200 per week.
  • Multiplied for a team of three testers = 30 hours or $600 per week on test documentation.

Manual Testing with an Exploratory Logger Tool (qTest Web eXplorer)

  • Description: Each tester was now using this new tool that partially automates the process of note-taking, error reporting, taking screenshots, and other administrative overhead normally required by testers. 
  • Time cost (single tester, 1 day): 5 Min/Site @ 4 Sites per day = 20 minutes per day.
  • Time cost (single tester, 1 week): 20 Minutes per day x 5 days = 1.66 hours per week.
  • Expense (unrealistic sample): $20* per hour x 1.66 hours = $33.33 per week.
  • Multiplied for a team of three testers = 5 hours or $99.99 per week on test documentation.

[Image: documentation comparison estimates]

Click the image above to view the documentation comparison estimates.

*This is an unrealistic contractor bill rate, used simply for the sake of writing this blog post.
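
For those who want to sanity-check the math, here is a minimal sketch that reproduces the arithmetic above. The $20 bill rate is the same unrealistic sample; the 48 working weeks per year is our own illustrative assumption, not part of the original business case.

```python
# Reproduces the documentation-cost estimates above (illustrative only).
BILL_RATE = 20.0        # unrealistic sample contractor rate, $/hour
TESTERS = 3
SITES_PER_DAY = 4
DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 48     # assumption for annualizing; adjust for your context

def team_weekly_cost(minutes_per_site: float) -> float:
    """Weekly documentation cost for the whole team, in dollars."""
    hours = minutes_per_site * SITES_PER_DAY * DAYS_PER_WEEK / 60
    return hours * BILL_RATE * TESTERS

manual = team_weekly_cost(30)      # -> $600.00 per week
with_tool = team_weekly_cost(5)    # -> $100.00 per week
tool_cost = 39 * 12 * TESTERS      # $39/user/month -> $1,404 per year

net_savings = (manual - with_tool) * WEEKS_PER_YEAR - tool_cost
print(f"${manual:.2f}/wk -> ${with_tool:.2f}/wk; net yearly savings ~ ${net_savings:,.2f}")
```

(The $99.99 figure above comes from rounding to 1.66 hours; the unrounded weekly team cost is $100.)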

It was our experience that introducing this new tool into the workflow saved a considerable amount of time, as well as recapturing expense that could be woven back into the budget and put to better use elsewhere.

Qualifications

Keep in mind, these are surrogate, second-order measures that do not take much time to gather. They tell us something, but they do not give us the whole story. If you do want to move toward a leaner documentation process through the use of exploratory logging tools, by no means should you make a business case on this data alone; it should be one facet among others. Also, don’t get locked into thinking you need a paid tool to do this, as 80% of your business case is most likely shifting mindsets to a common paradigm about what documentation is actually necessary.

We spoke not only with the testers, but also with the people absorbing their work, as well as their management, to gain insight into the pros and cons of this shift, both as it was taking place and after implementation. After our trial run with the tool, before we paid for it, we followed up on the pros/cons: How does the tool benefit your workflow? How does it hinder it? What limitations does the tool have? What does the tool not provide that manual documentation does? How has the tool affected your testing velocity, either positively or negatively? Is the report output adequate for the team’s needs? What gripes do the developers/managers have about the tool? Is it giving other team members, outside of testing, the information they need to succeed? And so on. So, while the expense metrics are appealing to the business, the positive aspects of how this affects testing are what got us excited: showing how the tool frees up the testers to work on other priorities, increasing the amount of collaboration, not being focused on the documentation process, etc. We also spent many hours in consultation with QASymphony, working with their support team on a few bugs, but that was part of the discovery process when learning about the workflow quirks.

Our Challenge To You…Try it!

Try an exploratory logging tool: anything that can track/record actions in a more automated way to eliminate much of the documentation overhead and minutiae that testers normally deal with on a daily basis. We happened to use qTest Web eXplorer (paid, free trial), which is available for Firefox and Chrome, or you can try a standalone one called Rapid Reporter (freeware), which we have found performs many of the same functions. We have no allegiance to any one product, so we would encourage you to explore a few of them to find what works (or doesn’t) for your context. The worst thing that can happen is that you try these out, they don’t work for your context, and you go back to doing what you want. In the process though, positive or negative, you will hopefully have learned a little.

Conclusion

We feel it is very important for testers to evaluate how tools like this, combined with processes such as Session-Based Test Management, can help testing become a much more efficient activity. It is easy as testers to get settled into a routine where we believe that we have ‘figured it out’; we stop exploring new tools, consciously or subconsciously, and new ways of approaching situations become lost to us. We hope you take our challenge above seriously and at least try it. You are in charge of your own learning and of improving your skill-craft as a tester, so we would encourage you to try something new, even if you fail forward. This could bring immense value to your testing and team. Go for it. If you do work in a more restrictive environment, then feel free to use the data we have gathered here as justification to your manager or team to try this out.

One of the biggest pitfalls in the testing community right now is over-documentation. Many testers will claim that test cases and other artifacts are required, but often they are not; it simply feels like they are. If you believe that heavily documented test cases and suites are required to do good testing, then you are locking yourself out of the reality that there are many other options, tools and methods. Do you think that asking a product owner to read through one hundred test cases would really inform them about how the product works, how you conducted your testing, and what the risks and roadblocks are? I would lean toward ‘No’, as a script is simply that, a script, not a testing story.

In this blog post, we told a story about how a tool alleviated documentation overhead within our context. This is a story based on our experience, with that tool and the benefits that it brought. While we feel this is very different than traditional test documentation means, we feel it is a step in the right direction for us. But don’t just take our word for it, read this or this from Michael Bolton or this from James Bach or this from James Christie or…you get the point.

A Personal Metric for Self-Improvement

Article revisions: Learning is continuous, thus my understanding of testing and related knowledge is continually augmented. Below is the revision history of this article, along with the latest version.

  • December 31, 2015, Version 1.0: Initial release.
  • March 31, 2016, Version 1.1: Most definitions reworded, multiple paragraph edits and additions, updated Excel sheet to calculate actual team average.
  • July 28, 2016, Version 1.2: Added sub-category calc. averaging (credit: Aaron Seitz) plus minor layout modifications.
  • September 20, 2016, Version 1.3: Replaced/reworded Test Strategy & Planning > Thoroughness with Modeling (verb) & Tools > Modeling with Models (noun).

Abstract: A Personal Metric For Self-Improvement is a learning model meant to be used by testers, and more specifically, those within software testing. Many times, self-improvement is intangible and immeasurable in the quantifiable way that we as humans seek to understand. We sometimes use this as an excuse, consciously or subconsciously, to remain stagnant and not improve. Let’s talk about how we can abuse metrics in a positive way by using this private measure. We will seek to quantify that which is only qualifiable, for the purpose of challenging us in the sometimes overlooked areas of self-improvement.

Video Version for the non-readers 😉



Overview

I will kick this post off with a bold statement, and I stand by it: You cannot claim to do good testing if you believe that learning has a glass ceiling. In other words, learning is an endless journey. We cannot put a measurable cap on the amount of learning needed to be a good tester, thus we must continually learn new techniques, embrace new tools and study foreign ideas in order to grow in our craft. The very fact that software can never be bug-free supports this premise. I plan to blog about that later, in a post I am working on regarding mental catalysts. For now though, let’s turn our attention back to self-improvement. In short: since learning is unending, and better testing requires continual variation, the job of self-improvement can never be at an end.

This job can feel a bit intangible and almost like trying to hit a moving target with a reliable repeatable process; therefore, we must be intentional about how we approach self-improvement so we can be successful. Sometimes I hear people talk about setting goals, writing things down or trying to schedule their own improvement through a cadence of book reads, coding classes or tutorial videos perhaps. This is noble, because self-improvement does not simply happen, but many times we jump into the activity of self-improvement before we determine if we’ve first focused on the right space. For example, a tester believes that they must learn how to code to become more valuable to their company, so they immediately dive into Codecademy classes. Did the tester stop to think…

Maybe the company I work for has an incomplete understanding of what constitutes ‘good testing’? After all, the term ‘good’ implies a value statement, but who is the judge? Do they know that testing is both an art and a science? I am required to consider these variables if I want to improve my testing craft. Does my environment encourage a varied toolset for testers, or simply the idea that anyone under the “Engineering” umbrella must ‘learn coding’ in order to add value?

Now, Agile (big “A”) encourages cross-functional teams, while I encourage “cross-functional teams to the extent that it makes sense”. At the end of the day, I still want a team of specialists working on my code, not a group of individuals that are slightly good at many things. Now, is there value to some testers learning to code? Yes, and here is a viewpoint with which I wholeheartedly agree. However, the point here, as it relates to self-improvement, is that a certain level of critical thinking is required in order to engage System 2, before this level of introspection can even take place. If this does not happen, then the tester may now be focused on an unwarranted self-improvement endeavor that may be beneficial, but is not for the intentional purpose of ‘better testing’.

So, why create a metric?

This might be a wake-up call to some, but your manager is not in charge of your learning; you are. Others in the community have created guides and categories for self-improvement, such as James Bach’s Tester’s Syllabus, which is an excellent way to steer your own self-improvement. For example, I use his syllabus as a guide and rate myself 0 through 4 on each branch, where a zero is a topic in which I am unconsciously incompetent, and a four is a space in which I am consciously or perhaps unconsciously competent (see this Wikipedia article if you need clarification of those terms). I then compare my weak areas to the type of testing I do on a regular basis to determine where the major risk gaps are in my knowledge. If I am ever hesitant about rating myself higher or lower in a given space, I opt for the lower number. This keeps me from over-estimating my abilities in a certain area, as well as helps me stay intellectually humble on that topic. This self-underestimation tactic is something I learned from Brian Kurtz, one of my mentors.

The Metric

The personal self-improvement metric I have devised is meant to be used in a private setting; ideally, these numbers would not roll up to management as a way of evaluating whether you are a good or bad tester. These categories and ratings simply give you a mental prompt in the areas you may need to work on, especially if you are in a team environment, as that requires honing soft skills too. However, you may have noticed that I have completely abused metrics here by measuring qualitative elements using quantitative means. This is usually how metrics are abused for more nefarious purposes, such as influencing groups of decision makers to take unwarranted actions. However, I am OK with abusing metrics in this case, since it is for my own personal and private self-improvement. Even though the number ratings are subjective, they mean something to me, and I can use these surrogate measures to continually tweak my approach to learning.

My main categories are as follows: Testing Mindset, Leadership, Test Strategy & Planning, Self-Improvement, Tools & Automation and Intangibles. To an extent, all of these have a level of intangibility, as we’re trying to create a metric by applying a number (quantitative) to an item that can only accurately be described in qualitative (non-numeric) terms. However, since this is intended for personal and private purposes, the social ramifications of assigning a number to these categories are negligible. The audience is one, myself, rather than hundreds or thousands across an entire division. Below is the resulting artifact; you can also download the Excel file as a template to use yourself, as it contains the data, glossary of terms, sample tester ratings, sample team aggregate, etc.

[Image: personal metric radar chart]

Click here to download the current Microsoft Excel version

Application & Terms

Typically, you can use this for yourself or, if you manage a team of testers, privately with them. I would never share one tester’s radar graph with another, as that would defeat the purpose of having a private metric for self-improvement. The social aspects of this can be minimized in an environment where a shared sense of maturity and respect exists. You can also find the following terms and definitions in the “Glossary” tab of the referenced Excel sheet:

Testing Mindset:

  • Logic Process: ability to reason through problems in a way that uses critical thinking skills to avoid getting fooled.
  • User Advocacy: ability to put on the user hat, albeit biased, and test using various established consumer personas and scenarios (typically provided by Product Management), apart from the acceptance/expected pathways.
  • Curiosity: ability to become engaged with the product in a way that can and does intentionally supersede the intended purpose as guided by perceived customer desires (i.e. Like a kitten would with a new toy, yet also able to focus that interest toward high-value areas and likely risks within the product).
  • Technical Acumen: ability to explain to others, with the appropriate vocabulary, what kind of testing has been, is or is going to be completed or not completed.
  • Tenacity: ability to remain persistently engaged in testing the product, seeking risks related to the item under test.

Leadership:

  • Mentorship: ability to recognize areas of weakness within the larger team and train others accordingly to address these gaps.
  • Subject Matter Expertise: ability to become knowledgeable in both the product and practice of testing for the purposes of supporting both the stakeholder’s desires as well as capability of supplementing the information needs of other team members.
  • Team Awareness: ability to get and stay in touch with the two main wavelengths of the team, personal and technical, in order to adjust actions to alleviate testing roadblocks.
  • Interpersonal Skills: ability to work well with others on the immediate or larger teams in such a way that facilitates positive communication and allows for more effective testing, including the ability to convey product risks in a way that is appropriate.
  • Reliability: ability to cope with challenges, lead by example based on previous experience, champion punctuality, and support a consistent, ongoing telling of the testing story to Product Management.

Test Strategy & Planning:

  • Attention to Detail: ability to create adequately detailed test strategies that satisfy the requirements of the stakeholders and the team.
  • Modeling: ability to convert your process into translatable artifacts, using continually evolving mental models to address risk and increase team confidence in the testing endeavor.
  • Three-Part Testing Story: ability to speak competently on the product status, the testing method and the quality of the testing that was completed for the given item under test.
  • Value-Add Testing Artifacts: ability to create testing artifacts (outlines, mind-maps, etc) that can be used throughout the overlapping development and testing phases, as well as support your testing story in your absence.
  • Risk Assessment: ability to use wisdom, which is the combination of knowledge, experience and discernment, to determine where important product risks are within the given item under test.

Self-Improvement:

  • Desire: ability to maintain an internal motivator that brings passion into the art of testing, for the purpose of supporting all other abilities.
  • Quality Theory: ability to support a test strategy with an adequate sum of explicit and tacit knowledge through the use of a varied tool belt: models, apps, techniques, etc., as well as maintaining a strong understanding of a tester’s role within the development lifecycle.
  • Testing Community: ability to engage with both the internal and external testing communities in a way that displays intellectual humility to the extent that it is required to share new ideas, challenge existing ones, and move testing forward.
  • Product Knowledge: ability to become a subject matter expert in your team’s area of focus such that you can better expose risk and provide value to product management.
  • Cross-Functionality: ability to learn and absorb skills from outside a traditional subset of standards-based/factory-style testing, such that you can use these new skills to enhance the team’s collective testing effort.

Tools & Automation:

  • Data: ability to interact with multiple types and subsets of data related to the product domain, such that testing can become a more effective way of exposing important risks, be it via traditional or non-traditional structures.
  • Scripting: ability to use some form of scripting as a part of the test strategy, when appropriate, to assist with learning about risks and informing beyond a traditional tool-less/primarily human-only approach to the testing effort, so that the testing completed is more robust in nature.
  • Programming: ability to write code in order to establish a deeper understanding of a product’s inner workings, to gain insight into why and how data is represented in a product, as well as to close the gap between tester and developer perspectives.
  • Exploratory-Supplement: ability to embrace tools that can enhance the effectiveness of testing, allowing for a decrease in traditional administrative overhead.
  • Models: ability to embrace new ways of thinking, including explicit testing models that are made available in the course of work, or via the larger community. Appropriate contextual models help to challenge existing biases, decrease the risk gap, and reshape our own mental paradigms for the purpose of adding value to the testing effort.

Intangibles:

  • Communication & Diplomacy: ability to discuss engineering and testing problems in such a way that guides the team toward action items that are in the best interests of the stakeholders, without overpowering or harming team relationships.
  • Ability to Negotiate: ability to prioritize risks that pose a threat to perceived client desires, such that the interaction with product management allows for informing over gatekeeping and risk exposure over risk mitigation in the service of our clients.
  • Self-Starter: ability to push into avenues of learning for the sake of improving the testing craft without the need for external coaxing or management intervention. Ideally, this would be fueled by an ongoing discontent at the presence of unknown risks and gaps in learning.
  • Confidence: ability to display conviction in the execution of various test strategies, strategies that hold up to scrutiny when presented to the larger stakeholder audience for the purpose of informing product management.
  • Maturity & Selflessness: ability to distance one’s self from the product in a way that allows for informing stakeholders and the team with proper respect. This is done in a way that distances us from the act of gatekeeping by ensuring that our endeavor of serving the client supersedes our own agendas for the product.

The practical application of this is triggered when testers become introspective and self-critical on the areas mentioned within the spreadsheet. This can only be done when each area is studied in depth. I recommend that testers do an initial evaluation by rating themselves loosely on each category and subcategory, using the Glossary as a reference. These are my own guideline definitions for each term, on which you can rate yourself using a 0-4 scale. Your definitions of these words may be different, so treat these as my own. This calculation is of course a surrogate measure, meant only to be used as a rough estimate to determine areas for improvement. Once the areas of improvement that need the most attention have been identified (i.e. the lowest numbers that matter most to your team or project), the tester would then seek out resources to assist with those areas: tutorial videos, books, online exercises, peer-mentorship, and others. Don’t forget to reach out to both your company’s internal testing community as well as those who live in the online and testing conference space.
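
To make the mechanics concrete, here is a minimal sketch of the kind of averaging the Excel sheet performs, assuming the 0-4 scale and the glossary terms above; the sample ratings are invented for illustration.

```python
# Sketch of the spreadsheet's category averaging (sample ratings are invented).
ratings = {
    "Testing Mindset": {"Logic Process": 3, "User Advocacy": 2, "Curiosity": 4,
                        "Technical Acumen": 2, "Tenacity": 3},
    "Tools & Automation": {"Data": 2, "Scripting": 1, "Programming": 1,
                           "Exploratory-Supplement": 3, "Models": 2},
}

category_scores = {cat: sum(subs.values()) / len(subs)
                   for cat, subs in ratings.items()}

# Lowest scores first: candidates for focused self-improvement.
for cat, score in sorted(category_scores.items(), key=lambda kv: kv[1]):
    print(f"{cat}: {score:.1f} / 4")
```

When hesitant between two ratings, remember the tactic above: opt for the lower number.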

Conclusion

Please remember, this metric is by no means a silver bullet, and these areas of focus are not meant to be used as a checklist, but rather as a guideline to help testers determine areas of weakness of which they may not currently be aware. Many times, we do not realize an area of weakness, or our own biases, until someone else points it out to us. I have found that a documented approach such as this can help me recognize my own gaps. As stated previously, this is most useful when applied privately, or between a tester and their manager in a one-on-one setting. This is only a surrogate measure that attempts to quantify that which is only qualifiable. Putting numbers on these traits is extremely subjective and is for the purpose of catalyzing your own introspection. It is my hope that this gives testers a guide for self-improvement in collectively advancing the testing craft.

CAST 2015: Distilled

Brian Kurtz and I recently traveled to Grand Rapids, Michigan to attend CAST 2015, a testing conference put on by AST and other members of the Context-Driven Testing (CDT) community. I was rewarded in a myriad of ways such as new ideas, enhanced learning sessions, fresh models, etc, but the most rewarding experience from the conference lies in the people and connections made. The entire CDT community currently lives on Twitter, so if you are new to testing or not involved in social media, I would recommend that you begin there. If you are looking for a starting point, check out my Twitter page here, Connor Roberts – Twitter, and look at the people I am following to get a good idea of who some of the active thought leaders are in testing. This community does a good job on Twitter of actually keeping the information flow clean and in general only shares value-add information. In keeping with that endeavor, it is my intention with this post to share the shining bits and pieces that came out of each session I attended. I hope this is a welcome respite from the normal process of learning that involves hours of panning for gold in the riverbanks, only to reveal small shining flakes from time to time.

Keep in mind, this is only a summary of my biased experience, since the notes I take mainly focus on what I felt was valuable and important to me, based on what I currently know or do not know, in the sessions I attended at the conference. My own notes and ideas are also mixed in with the content from the sessions, as the speakers may have been triggering thoughts in my head as they progressed. I did not keep track of or delineate which are their thoughts and which are my own as I took notes.

It is also very likely that I did not document some points that others might feel are valuable, as the way I garner information is different than how they would. Overall, the heuristic that Brian and I used was to treat any of the non-live sessions as a priority since we knew the live sessions would be recorded and posted to the AST YouTube page after the conference. There are many other conferences that are worthwhile to attend, like STPCon, STAR East/West, etc. and I encourage testers to check them out as well.

 

Pre-Conference Workshop:

“Testing Fundamentals for Experienced Testers” by Robert Sabourin

Web: AmiBug.com, Email: [email protected]

Slide Deck: http://lets-test.com/wp-content/uploads/2014/06/2014_05_25_Test_Fundementals.pdf

Session Notes:

  • Conspicuous Bugs – Sometimes we want users to know about a problem.
    • E.G. A blood pressure cuff is malfunctioning so we want the doctor to know there is an error and they should use another method.
  • Bug Sampling: Find a way to sample a population of bugs, in order to tell a better story about the whole.
    • E.G. Take a look at the last 200 defects we fixed, and categorize them, in order to get an idea where product management believes our business priorities are.
  • Dijkstra’s Principle: “Program testing can be used to show the presence of bugs but not their absence.”
    • E.G. We should never say to a stakeholder, “This feature is bug-free”, but we can say “This feature has been tested in conjunction with product management to address the highest product risks.”
  • “The goal is to reach an acceptable level of risk. At that point, quality is automatically good enough.” – James Bach
  • Three Quality Principles: Durable, Utilitarian, Beautiful
    • Based on the writings of Vitruvius (a treatise on architecture and design still used today)
  • Move away from centralized system testing, toward decentralized testing
    • E.G. Facebook – Pushed new timeline to New Zealand for a month before releasing it to the world
  • Talked about SBTM (Session-Based Test Management): Timebox yourself to 60 minutes, determine what you have learned, then perform subsequent sessions by iterating on the previous data collected. In other words, use what you learn in each timeboxed session to make the next one more successful.
  • Use visual models to help explain what you mean. Humans can interpret images much quicker than they can read paragraphs of text. Used a mind map as an example.
    • E.G. HTSM with subcategories and priorities
  • Try to come up with constructive, rather than destructive, conversational models when speaking with your team/stakeholders.
    • E.G. Destructive: “The acceptance criteria is not complete so we can’t estimate it”
    • E.G. Constructive: “Here’s a model I use [show HTSM] when I test features. Is there anything from this model that might help us make this acceptance criteria more complete?”
  • Problem solving: We all like to think we’re excellent problem solvers, but we’re really only ever good problem solvers in a couple of areas. Remember, your problem-solving skill is linked to your experience. If your experience is shallow, your problem-solving skill will lack variety.
  • Heuristics (first known use 1887): Book “How To Solve It” by George Pólya.
  • Be visual (models, mind maps, decisions charts)
  • If you don’t know the answer then take a guess. Use your knowledge to determine how wrong the first guess was, and make a better one. Keep iterating until you reach a state of “good enough” quality.
  • Large problems: Solve a smaller, similar problem first, then try to use that as a sample to generalize, so you can make hypotheses about the larger problem’s solution.
  • Decision Tables (a mathematical approach using boolean logic to express testing pathways to stakeholders – see slide deck and the toy example after this list)
  • AIM Heuristic: Application, Input, Memory
  • Use storyboarding (like comics) to visualize what you are going to test before you write test cases
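
As a toy illustration of the decision-table idea from the notes above, here is a sketch; the login feature and its two boolean conditions are invented, not taken from the slide deck.

```python
# Toy decision table: (valid_credentials, account_locked) -> expected outcome.
# Each row is a testing pathway you can walk through with stakeholders.
decision_table = {
    (True,  False): "grant access",
    (True,  True):  "show locked-account message",
    (False, False): "show invalid-credentials error",
    (False, True):  "show locked-account message",
}

for (valid, locked), expected in decision_table.items():
    print(f"valid={valid!s:5} locked={locked!s:5} -> {expected}")
```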

Conference Sessions:

“Moving Testing Forward” by Karen Johnson (Orbitz)

Session Notes:

  • Know your shortcomings: Don’t force it. If you don’t like what you do, then switch.
    • E.G. Karen moved from performance testing into something else, because she realized that even though she liked the testing, she was not very mathematical, which is needed to become an even better performance tester.
  • Avoid working for someone you don’t respect. This affects your own growth and learning. You’ll be limited. Career development is not something your boss gives you, it is something you have to find for yourself.
  • Office politics: Don’t avoid, learn to get good at how to shape and steer this. “The minute you have two people in a room, there’s politics.”
  • Networking: Don’t just do it when you need a job. People will not connect with you at those times, if you have not been doing it all the other times.
  • Don’t put people in a box, based on your external perceptions of them. They probably know something you don’t.
  • Don’t be busy, in a corner, just focused on being a tester. Learn about the business, or else you’ll be shocked when something happens, or priorities were different than you “assumed”. Don’t lose sight of the “other side of the house”.
  • Balancing work and personal life never ends, so just get used to it, and get good at not complaining about it. Everyone has to do it, and it will level out in the long term. Don’t try to make every day or week perfectly balanced – it’s impossible.
  • Community Legacy: When you ultimately leave the testing community, which will happen to everyone at some point, what five things can you say you did for the community? Will the community have been better because you were in it? This involves interacting with people more than focusing on your process.
  • Be careful of idolizing thought leaders. Challenge their notions as much as the person’s next to you.
  • Goals: Don’t feel bad if you can’t figure out your long term goals. Tech is constantly changing, thus constant opportunities arise. In five years, you may be working on something that doesn’t even exist yet.
  • If your career stays in technology, then the cycle of learning is indefinite. Get used to learning, or you’ll just experience more pain resisting it.
  • Watch the “Test is Dead” talk (Google, 2011).
  • Five years from now, anything you know now will be “old”. Are you constantly learning so that you can stay relevant?
  • Be reliable and dependable in your current job, that’s how you advance.
    • Act as if you have the title you want already and do that job. Don’t wait for someone to tell you that you are a ‘Senior’ or a ‘Lead’ before you start leading. Management tasks require approval, leadership does not.
  • Care about your professional reputation; be aware of your online and social media presence. If you don’t have any, create them and start fostering them (personal website, Twitter for testing, etc.).

“Building A Culture Of Quality” by Josh Meier

Session Notes:

  • Two types of culture: Employee (ping pong tables) vs. Engineering (the way we ‘do’ things), let’s talk about the latter (more important)
  • Visible (Environment, Behaviors) vs. Invisible (Values, Attributes)
  • “A ship in port is safe, but that’s not what ships are built for.” – Grace Hopper
  • Pair Tester with Dev for a full day (like an extended Shake And Bake session)
  • When filing bug reports, start making suggestions on possible fixes. At first this will be greeted with “don’t tell me how to do my job”, but eventually it will be welcomed as a time saver; for Josh, this morphed into the developers asking him, as a tester, to sign off on code reviews as part of their DoD (Definition of Done).
  • Begin participating in code-reviews, even if non-technical
  • *Ask for partial code pre-commit, before it is ready, so you can supplement the dev discussions and get an idea of where the developer is headed.
  • *Taxi Automation – Scripts that can be paused, allowing the user to explore mid-way through the checks; the checks then continue based on the exploration work done.

“Should Testers Code” (Debate format) by Henrik Anderson and Jeffrey Morgan

My Conclusion: Yes and No. No, because value can be added without becoming technical; however, if your environment would benefit from a more technical tester and it’s something you have the aptitude for, then you should pursue it as part of your learning. If you find yourself desiring to do development, but in a tester role, then evaluate the possibility that you may wish to apply for a developer position, but don’t be a wolf in sheep’s clothing; that does the product and the team a disservice.

Session Notes:

  • It takes the responsibility of creating quality code off the developer if testers start coding (Automation Engineers excluded)
  • Training a blackbox tester for even 1 full hour per day for 10 months cannot replace years of coding education, training and experience. This is a huge time-sink, with the creation of a Jr. Dev as the best-case scenario.
  • The mentality that all testers should code comes from a lack of understanding about how to increase your knowledge in the skill-craft of testing. Automation is a single tool, and coding is a practice. If you are non-technical, work on training your mindset, not trying to become a developer.

My Other Observations:

  • Do you want a foot doctor doing your heart surgery? (Developers spending majority time testing, Testers spending majority time developing?)
  • People who say that all testers should code do not truly understand that quality is a team responsibility; they treat it as only a developer’s responsibility. Those who hold this stance, consciously or subconsciously, have a desire to make testers into coders, and only “then” will quality be the testers’ responsibility, because they will then be in the right role/title. Making testers code is just a sly way of saying that a manual exploratory blackbox tester does not add value, or at least enough value, to belong on my team.
  • By having this viewpoint, you are also saying that you possess the sum of knowledge of what it means to be a good tester, and have reached a state of conscious competence in testing sufficient to claim that your determination of what a “tester” is, is not flawed.
  • The language we have traditionally used in the industry is what throws people off. People see the title “Quality Assurance” and think that only the person with that title should be in charge of quality, but this is a misnomer. We cannot claim that the team owns quality then say that it is the tester’s responsibility to be sure that the product in production is free from major product risks. They are opposing viewpoints, neither of which address testing.
  • Developers should move toward a better understanding of what it takes to test, while Testers should move toward a better understanding of what it takes to be a developer. This can be accomplished through collaborative/peer processes like Shake And Bake.
  • I believe that these two roles should never fully come together and be the same. We should stay complex and varied. We need specialists, just like complex machines have specialized parts. The gears inside a Rolex watch cannot do the job of the protective glass layer on top. Likewise, the watch band cannot do the job of keeping time, nor would you want it to. Variety is a good thing, and attempting to become great at everything makes you only partially good at any one thing. Also, brands like Rolex and Bvlgari have an amazingly complex ecosystem of parts. The more complex a creation, the more elegant its operation and output will be.
  • Just like the ‘wisdom of the crowd’ can help you find the right answer (see session notes below from the talk by Mike Lyles) the myth of group reasoning can equally bite you. For example, a bad idea left unchecked in a given environment can propagate foolishness. This is why the role of the corporate consultant exists in the first place. In regards to testing organizations, keep in mind that just because an industry heads in a certain direction, it does not mean that is the correct direction.

 

“Visualize Testability” by Maria Kedemo

Webcast: https://www.youtube.com/watch?v=_VR8naRfzK8

Slide Deck: http://schd.ws/hosted_files/cast2015/f3/Visualizing%20Testability%202015%20CAST.pdf

Session Notes:

  • Maria talked about the symptoms of low testability
    • E.G. When Developers say, “You’ll get it in a few days, so just wait until then,” this prevents the Tester from making sure something is testable, since they could be sitting with the Devs as they get halfway through it to give them ideas and help steer the coding (i.e. bake the quality into the cake, instead of waiting until after the fact to dive into it)
  • Get visibility into the ‘code in progress’, not just when it is committed at code review time (similar to what Josh Meier recommended; see other session notes above).
  • Maria presented a new model: Dimensions of Testability (contained within her slide deck)

 

“Bad Metric, Bad” by Joseph Ours

Email: [email protected], Twitter @justjoehere

Slide-deck: http://www.slideshare.net/qaoth/bad-metric-bad-45224921

Session Notes:

  • Make sure your samples are proper estimates of the population
    • I tweeted: “If you bite into a BLT, and miss the slice of bacon, you will estimate the BLT has 0% bacon”
  • Division within Testing Community (I see a visual/diagram that could easily be created from this)
    • 70% uneducated
    • 25% educated
    • 5% CDT (context-driven testing) educated/aware

 

“The Future Of Testing” by Ajay Balamurugadas

Webcast: https://www.youtube.com/watch?v=vOfjkkblFoA

Session Notes:

  • My main takeaway was about the resources available to us as testers.
    • Ministry of Testing
    • Weekend Testing meetups
    • Skype Face-to-face test training with others in the community
    • Skype Testing 24/7 chat room
    • Udemy, Coursera
    • BBST Classes
    • Test Insane (holds a global test competition called ‘War With Bugs’, with cash prizes)
    • Testing Mnemonics list (pick one and try it out each day)
    • SpeakEasy Program (for those interested in doing conventions/circuits on testing)
  • Also talked about the TQM Model (Total Quality Management)
    • Customer Focus, Total Participation, Process Improvement, Process Management, Planning Process, etc.
  • Ajay encouraged learning from other industries
    • E.G. Medical, auto, aerospace, etc., by reading about testing or product risks found there on news sites. They may have applicable information that applies here.
  • “You work for your employer, but learning is in your hands.” (i.e. Don’t wait for your manager to train you, do it yourself)
  • Talked about the AST Grant Program – helps with PR, pay for meetups, etc.
  • Reading is nice, but if you want to become good at something, you must practice it.
  • Professional Reputation – do you have an online testing portfolio?
    • On a personal note: He got me on this one. I was in the process then of getting my personal blog back up (which is live now), but also plan to even put up some screen recordings of how I test in various situations, what models I use, how I use them, why I test the way I do, how to reach a state of ‘good enough’ testing where product risks are mitigated or only minimal ones remain, how to tell a story to our stakeholders about what was and was not tested, understanding metrics use and misuse, etc.
  • “Your name is your biggest certificate” – Ajay (on the topic of certifications)

 

“Reason and Argument for Testers” by Thomas Vaniotis and Scott Allman

Session Notes:

  • Discussed Argument vs Rhetoric
    • Argument – justification of beliefs, strength of evidence, rational analysis
    • Rhetoric – literary merit, attractiveness, social usefulness, political favorability
  • They talked about making conclusions based on premises. You need to make sure your premises are sound before you draw a conclusion based solely on conjecture that only ‘sounds’ good on the surface.
  • Talked about language – all sound arguments are valid, but not all valid arguments are sound. There are many true conclusions that do not have sound arguments. No sound argument will lead to a false conclusion.
  • Fallacies (I liked this definition) – a collection of statements that resemble arguments, but are invalid.
  • Abduction – forming a conclusion in a dangerous way (avoid this by ensuring your premises are sound)
  • Use Safety Language (Epistemic Modality) to qualify statements and make them more palatable for your audience. You can reach the same outcome and still maintain friendships/relationships.

My conclusions:

  • This was really a session on psychology in the workplace, not limited to testers, but it was a good reminder on how to make points to our stakeholders if we want to convince them of something.
  • If you work with people you respect, then you should realize that they are most likely speaking with the product’s best interests at heart, at least from their perspective, and are not out to maliciously attack you personally. You can avoid personal attacks by speaking from your own experience. Instead of saying, “That’s not correct, here’s why…”, you can say, “In my experience, I have found X, Y, Z to be true, because of these factors…” In this way you will make the same point, without the confrontational bias.
  • If you want to convince others, be Type-A when dealing with the product, but not when dealing with people. Try to separate the two in your mind before going into any conversation.

“Visual Testing” by Mike Lyles

Twitter @mikelyles

Slide-deck: http://www.slideshare.net/mikelyles/visual-testing-its-not-what-you-look-at-its-what-you-see

Session Notes:

  • This was all about how we can be visually fooled as testers. Lots of good examples in the slide-deck, and he stumped about half of the crowd there, even though we were primed about being fooled.
  • Leverage the Wisdom of the Crowd: Mike also did an exercise where he held up a jar of gum balls and asked us how many were inside. One person guessed 500, another guessed 1,000. At that point our average was 750. Another person guessed 200, another 350, another 650, another 150, etc., and this went on until we had about 12 to 15 guesses written down. The average of the guesses came out to around 550, and the total number of gum balls was actually within 50-100 of that average. The point Mike was making was that leveraging the wisdom of the crowd to make decisions is smarter than trying to go it alone or relying on smaller subsets/sources of comparison. Use the people in your division, around you on your team, and even in the testing community at large to make sure you are on the right track and moving toward the most likely outcome that will better serve your stakeholders. (A miniature version of this exercise appears after these notes.)
    • This involves an intentional effort to be humble, and realize that you (we) do not have all the answers to any given situation. We should be seeking counsel for situations that have potentially sizable product impacts and risks, especially in areas that are not in our wheelhouse.
  • Choice Blindness: People will come up with convincing reasons why to take a certain set of actions based on things that are inaccurate or never happened.
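
The gumball exercise is easy to replay in miniature. The guesses below are invented, but they show how wildly varying individual estimates average out toward something useful:

```python
# Miniature wisdom-of-the-crowd: individual guesses vary wildly,
# but their mean tends to land near the true count (guesses are invented).
guesses = [500, 1000, 200, 350, 650, 150, 700, 450, 600, 800, 400, 550]
estimate = sum(guesses) / len(guesses)
print(f"crowd estimate from {len(guesses)} guesses: {estimate:.0f}")  # ~529
```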

 

“Using Tools To Improve Testing: Beyond The UI” by Jeremy Traylor

Slide-deck: http://schd.ws/hosted_files/cast2015/a5/Alt%20Beyond%20the%20UI%20-%20Tools.pptx

Session Notes:

  • Testers should become familiar with more development-like tools (e.g. Browser Dev Tools, Scripting, Fiddler commands, etc.)
  • JSONLint – a JSON validator
  • Use Fiddler (Windows) or Charles (Mac)
    • Learn how to send commands through them (POST, GET, etc.), not just use them to monitor output.
  • API Testing: Why do this?
    • Sometimes the UI is not complete, and we could be testing sooner and more often to verify backend functionality
    • You can test more scenarios than simply testing from the UI, and you can test those scenarios quickly if you are using scripts to hit the API rather than doing manual UI testing.
      • Some would argue that this invalidates testing since you are not doing it how the user does it, but as long as you are sending the exact input data that the UI would send, I would argue this is not a waste of time and can expose product risks sooner rather than later.
    • Gives testers better understanding of how the application works, instead of everything beyond the UI just being a ‘black box’ that they do not understand.
    • Some test scenarios may not be possible in the UI. There may be some background caching or performance tests you want to do that cannot be accomplished from the front end.
    • You can have the API handle simple tasks rather than rely on creating front-end logic conversions after the fact. This increases testability and reliability.
  • Postman (Chrome extension) – a backend HTTP testing tool with a nice GUI/front-end. This helps decrease the barrier to entry for testers who may be firmly planted in the blackbox/manual-only world and want to increase their technical knowledge to better help their team.
  • Tamper Data (add-on for Firefox) – can change data while it is en route, so you can better simulate domain testing (positive/negative test scenarios).
  • SQL Fiddle – This is a DB tool for testing queries, scripts, etc.
  • Other tools: SOAPUI, Advanced Rest Client, Parasoft SOAtest, JSONLint, etc.
  • Did you know that the “GET” command can be used to harvest data (PII, user information, etc.)? Testers, are you checking this? (HTSM > Quality Criteria > Security). However, “GET” can ‘lie’, so you want to check the DB to make sure the data it returns is actually true. (A minimal sketch of this kind of beyond-the-UI check follows this list.)
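
As a minimal sketch of the beyond-the-UI checks described above: the talk’s tools were Fiddler, Charles and Postman, but a few lines of Python make the same point. The endpoint and payload here are hypothetical, purely for illustration.

```python
# Hitting a (hypothetical) JSON API directly, bypassing the UI.
import requests

BASE = "https://api.example.com"  # hypothetical endpoint

# Send the same input the UI would send, straight to the backend.
resp = requests.post(f"{BASE}/users",
                     json={"name": "test-user", "role": "viewer"},
                     timeout=10)
assert resp.status_code == 201, f"unexpected status: {resp.status_code}"
user_id = resp.json()["id"]

# "GET can lie": re-read the resource and cross-check the data
# (in practice, verify against the database as well).
fetched = requests.get(f"{BASE}/users/{user_id}", timeout=10).json()
assert fetched["name"] == "test-user"
```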

My conclusions:

  • Explore what works for you and your team/product, but don’t stick your head in the sand and claim that you are a manual-only tester. You have to at least try these tools, and make a genuine effort to use them for a while, before you can discount their effectiveness. Claiming they would not work for your situation, or never making time to explore them, is the same as saying that you wish to stay in the dark on how to become a better tester.
  • Since security testing is not one of my fortes, I personally would like to become a better whitebox hacker to aid my skill-craft as a tester. This involves trying to game the system and expose security risks, but for noble purposes. Any found risks then go to better inform the development team and are used to make decisions on how the product can be made more secure. Since testers are supposed to be informers, this is something I need to work on to better round out my skill-set.

 

“When Cultures Collide” by Raj Subramanian and Carlene Wesemeyer

Session Notes:

  • Raj and Carlene spent the majority of the time talking about communication barriers, such as differences in body language, the limitations of text-only media (chat or email), as well as assumptions that are made by certain cultures about others, regardless of whether they are within the same culture or not.
  • Main takeaway: Don’t take a yes for a yes and a no for a no. Be over-communicative if necessary to ensure that the expectations you have in your head match what they have in their head.

 

Conclusion:

I hope that my notes have helped you in some way, or at the very least exposed you to some new ideas and knowledgeable folks in the industry from whom you can learn. Please leave comments on the areas where you received the most value or need clarification. Again, these are my distilled notes from the four days I was there, so I may be able to recall more or update this blog if you feel one area might be lacking. If you also went to CAST 2015, and attended any of the same sessions, then I’d love to hear your thoughts on any important points I may have overlooked that would be beneficial to the community.

Testers Tell A Compelling Story

Abstract: If you’ve spent any time in the context-driven testing community, then you have probably heard the following directive: As testers, we must tell a compelling story to our stakeholders. But, what does this really mean? Are we just talking about a checklist here? Are we just trying to sound elite? Is this just some form of covering ourselves to prove we’re doing our job? Well, none of the above actually. The purpose of doing this is to continually inform our stakeholders in order to increase their awareness of potential product risks so that ultimately they can make better business decisions. We can do this by telling them about what was and what was not tested, using various methods. First we must level-set on the chosen language here and agree on the meaning of the words “compelling” and “story,” then we’ll dive into the logistics of how to deliver that message.

NOTE: I am also going to use the term “Product Management” quite often in this post. When I say that, I am referring to the people who end up doing the final risk assessment and are making the ultimate business decisions as it relates to the product (more about that here from Michael Bolton). This may involve the Product Owner on your team, or it may involve a set of external stakeholders.


Being Compelling:

The word “compelling” can seem a bit ambiguous and its meaning can be rather subjective, since what is compelling to one is not so to another. What convinces one person to buy a specific brand does not convince the person right next to them. However, regardless of your context, we need to set some guardrails around this word. The reason for doing this is to remain in line with the community’s endeavor to establish a common language, so that we can properly judge what qualifies as ‘good work’ within the realm of testing, and in this case specifically, how good one is at telling a compelling story as a tester. Yes, you as a tester should be constructively judging other testers’ work if you care about the testing community as a whole. We cannot do that unless we’re armed with the right information. So first, let’s take a very literal view, and then move forward from there:

“Compelling” as defined by Merriam Webster:

(1) very interesting : able to capture and hold your attention.

(2) capable of causing someone to believe or agree.

(3) strong and forceful : causing you to feel that you must do something.

The information you present should carry with it hallmarks of these three definitions, regardless of the target stakeholder’s role within the company. Let’s elaborate, specific to the context within a software development environment.

  • (1) Interesting: As a tester, by being a salesperson of the product and a primary cheerleader for the work the team has done, I satisfy the first criterion. I know all the ins and outs of the product, and being a subject matter expert gives me the ability to speak to its captivating attributes in order to draw my stakeholders into the discussion. (This also involves knowing your stakeholder, which I could write an entirely separate article about, explaining how you tailor your story for specific stakeholder roles within the company – more on that later.)
  • (2) Cultivate Agreement: As the tester for a given feature, you are aware of an area’s strengths and weaknesses. It is your job to take multiple stances and defend them, be they Pros (typically in the form of new enhancements or bug fixes) or Cons (typically in the form of product risks). Just like an attorney delivering closing arguments at trial, you should defend your positions regarding the various areas of the product that have changed or are at risk. Since you are informing on both what you did and did not test, you can much better aid in joint risk assessment with Product Management. This is how testers influence the product development process the most; not in their actual testing, but in their conversations with those who make the business decisions when telling the story of their testing. Give your opinions a solid foundation on which to stand.
  • (3) Take Action: All information that you give to stakeholders should support any action items they may need to take based on that data, so you should be a competent professional skeptic in your testing process so that the data best leads Product Management toward fruitful actions. Your feedback as a tester is instrumental in well-intentioned coercion, or, since that is typically a negative term, let’s call it positive peer pressure. Ideally your Product Owner is embedded in, or at least in constant communication with, the scrum team, so any actions that arise from this information will come as little or no surprise to the team.

On that note, surprises generally only occur when the above types of communication are absent, which of course is not limited to testers. Either user requirements are ambiguous or not prioritized (by Product Management), or perhaps there are development roadblocks and test depth is not made tangible (by the Scrum Team). I use these three elements as a heuristic to prime the thinking of our stakeholders, so that they can make smarter and wiser product management decisions for the business.


Becoming A Storyteller:

It might seem obvious to say, but the best stories always include three main parts: a beginning, a middle and an end. More specifically: characters, conflict and resolution. In a well-structured novel, the author typically introduces the reader to a new character for a period of time for the purpose of initial development. Soon a conflict arises, followed by some form of conclusion, perhaps including the resolution of some interpersonal struggles as well. In testing, we want to develop the characters (feature area prioritization), overcome a conflict of sorts (verify closure of dev tasks and proper bug fixes based on those priorities) and come to a conclusion (present compelling information to Product Management about our testing).

Just as an author describes a character’s positive traits as well as their flaws, we too should be sure that our testing story includes both what we did test and what we did not test. Many testers forget to talk about what they did not test, and it goes unsaid, which widens the risk gap. This would be akin to an author leaving pages of the book unwritten, and thus open to interpretation. However, unlike a novel, where a cliffhanger ending might be intentionally crafted to spur sales of a sequel, omission of information to our stakeholders should never be intentional; it is not part of the art and science of testing. When information is omitted, people naturally fill the gaps with their own knowledge, which may be faulty, or worse, with assumptions that can harden into ‘fact’ if left unchecked long enough. The problem with assumptions is that they grow within a hidden part of the brain, knowable only to that individual, and typically do not expose themselves until it is too late. Leave as few gaps as possible by becoming a good storyteller.

It can be dangerous when a tester becomes ‘used to’ not telling a story, believing that their job is defined simply by their test case writing and bug reporting skills as they currently exist. As testers, let us not be so limited or obtuse in our thinking when it comes to exploring ideas that help us become better testers; otherwise our testing skill-craft risks remaining forever in its infancy.

 

The Content Of The Testing Story:

Now, no matter how good a salesperson you might be, or how convincing and compelling you may sound to your various stakeholders, your pot still needs to hold water. That is to say, the content of your story must be built on solid ground. There are three parts to the content of the testing story that we must tell as testers: product status, testing methods and testing quality.

  • Product Status: Testers should tell a story about the status of the product, not only in what it does, but also how it fails or how it could fail. This is when we report on bugs found and other possible risks to the customer. Don’t forget, this report would also include how the product performs well, and the extent to which it meets or exceeds our understanding of customer desires.
  • Testing Methods: Testers also tell a story about how we tested the product. What test strategies and heuristics are you using to do your testing, and why are those methods valuable? How does your model for testing expose valuable risks? Did you also talk about what you did not test and why (intentional vs. blocked)? Tip: Artifacts that visualize how you prioritize risk testing can greatly minimize your storytelling effort.
  • Testing Quality: Testers also talk about the quality of the testing. How well were you able to conduct the testing? What were the risks and costs of your testing process? What made your work easier or harder, and what roadblocks should be addressed to aid future testing? What is the product’s testability (your freedom to observe and control it)? What issues did you present as testers, and how were those addressed by the team?

All three of these elements help us make sure the content of our testing story is not only sound but durable enough to hold up under scrutiny.
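To make those three elements concrete, here is a minimal sketch in Python of what capturing them for a single feature might look like. The field names and example entries are entirely my own invention, not a prescribed format; the point is the shape of the information, not the tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestingStory:
    """One feature's testing story: product status, testing methods, testing quality."""
    feature: str
    # Product status: what fails, what could fail, and what works well.
    bugs_found: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    works_well: List[str] = field(default_factory=list)
    # Testing methods: how we tested and what we deliberately did not test.
    strategies_used: List[str] = field(default_factory=list)
    not_tested: List[str] = field(default_factory=list)  # intentional vs. blocked
    # Testing quality: how well we were able to do the testing.
    testability_notes: List[str] = field(default_factory=list)
    roadblocks: List[str] = field(default_factory=list)

# A hypothetical example, invented for illustration:
story = TestingStory(
    feature="Password reset",
    bugs_found=["Reset link still works after first use"],
    risks=["Rate limiting untested under load"],
    works_well=["Email delivery verified across three providers"],
    strategies_used=["SFDIPOT coverage pass", "Claims testing against the spec"],
    not_tested=["Localization (blocked: translations not yet delivered)"],
    testability_notes=["No way to fast-forward token expiry; asked dev for a hook"],
    roadblocks=["Staging mail server flakiness cost roughly two hours"],
)
```

Even a lightweight structure like this keeps all three parts of the story in front of you, so the ‘not tested’ column never goes unsaid.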

 

The Logistics of Telling the Story:

So, what is the artifact that we, as testers, have to show for our testing at the end of the sprint? No, it is not bug counts, dev tasks or even the tests we write. Developers have the actual code as their artifact, which is compelling to the technical team, given that it can be code reviewed, checked against SLAs, business rules, etc. As testers, traditionally our artifact has been test cases, but as a profession, I feel we’ve missed the mark if we think that a test case document is a good way to tell a compelling story. Properly documented tests may be necessary in most contexts, but Product Management honestly does not have the time to read through every test case, nor should it be necessary. Test cases are for you, the testing team, to use for cross-team awareness, regression suite building, future refactors, dev visibility, etc. It is actually the high-level testing strategy that provides the most bang-for-buck value to stakeholders in Product Management.

As far as the actual ‘how-to’ logistics of the situation, there are multiple options that testers should explore within their context. Since humans are visually driven beings, a picture can say a thousand words, and the use of models provides immense and immediate payoff for very little actual labor. Now that we’ve established criteria for how to make a story compelling, and what the content of that story should be, let’s take a look at the myriad of tools at your disposal that can help with the logistics of putting that story together.

Test Models:

Models that help inform our thinking as testers will inherently help Product Management make better business decisions. This influence is an unavoidable positive outcome of using models. The HTSM, the Heuristic Test Strategy Model by James Bach (XMind Link), is a model that can greatly broaden our horizons as testers. If you are new to this model, you can focus initially on just the Quality Criteria and Test Techniques nodes. These give you a ready-made template that will not only help you become a subject matter expert at telling that compelling story to your stakeholders, but eventually just become part of your definition of what it means to be a tester, rather than feeling like extra work.

htsm-basic

By using models in grooming, planning and sprint execution, a tester is able to expand on each node for the specific feature, as well as prioritize testing of each one using High, Medium and Low, as a way to inform Product Management of their tiered test strategy. This kind of test modeling can also be made visible to the entire team before testing even begins, allowing testers to be more efficient and communicative, better closing the risk gap between them and the rest of their team, namely developers. More often than not, developers more or less solidify in their heads how they are going to code something after team planning sessions, so making the test strategy available to the entire team allows both sets of intentions, theirs and the tester’s, to be compared, squashing assumptions and exposing even more product risks.
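As a rough illustration, a tiered strategy keyed to a few HTSM nodes can be captured very simply. In this sketch the node names come from the model itself, but the feature, priorities and notes are invented:

```python
# A sketch of a tiered test strategy keyed to HTSM Quality Criteria nodes.
# The node names follow the Heuristic Test Strategy Model; the priorities
# and notes are invented for an imaginary password-reset feature.
strategy = {
    "Capability":  ("High",   "Core reset flow must work end to end"),
    "Security":    ("High",   "Token reuse, expiry, brute-force attempts"),
    "Usability":   ("Medium", "Error messaging on bad email addresses"),
    "Performance": ("Low",    "Low-traffic feature; spot-check only"),
}

# Print the highest priorities first so the team sees the tiers at a glance.
rank = {"High": 0, "Medium": 1, "Low": 2}
for node, (priority, note) in sorted(strategy.items(), key=lambda kv: rank[kv[1][0]]):
    print(f"{priority:<8}{node:<13}{note}")
```

Shared in grooming or planning, even a table this small gives developers something concrete to compare against their own intentions.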


Testing Mnemonics:

“Mnemonic” is a fancy term for an acronym that spells a word or phrase that is easy for humans to remember. For example, FEW HICCUPPS is one: “H” stands for History, “I” for Image, “C” for Comparable Products, etc. SFDIPOT (San Francisco Depot) is another, meant to prime our thinking about how to test. These mnemonics are structured this way to make our test coverage more robust, helping us fill gaps we would have otherwise missed; not because we are inept, but because we are simply human. Here are some other popular testing mnemonics used by the community that should help you with your test strategy and ease storytelling: Testing Mnemonics
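To show how little ceremony a mnemonic needs, here is a minimal sketch that expands FEW HICCUPPS into prompts for a test session. The one-line summaries are my own paraphrases of each consistency oracle, not canonical definitions:

```python
# A sketch: FEW HICCUPPS as a session checklist. Each entry names one of
# the mnemonic's consistency oracles; the summaries are my paraphrases.
FEW_HICCUPPS = {
    "Familiarity":         "free of patterns of familiar problems",
    "Explainability":      "explainable in its behavior",
    "World":               "consistent with facts we know about the world",
    "History":             "consistent with past versions of the product",
    "Image":               "consistent with the image the company wants to project",
    "Comparable Products": "consistent with comparable products",
    "Claims":              "consistent with what documentation and marketing claim",
    "Users' Desires":      "consistent with what users reasonably want",
    "Product":             "internally consistent with itself",
    "Purpose":             "consistent with its explicit and implicit purposes",
    "Statutes":            "consistent with applicable laws and regulations",
}

def session_prompts(feature: str) -> list:
    """Turn each oracle into a question to ask while testing a feature."""
    return [f"Is {feature} {hint}?" for hint in FEW_HICCUPPS.values()]

for prompt in session_prompts("the checkout flow"):
    print(prompt)
```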

At CAST 2015, a testing conference that I attended in August in Grand Rapids, Michigan, I listened to Ajay Balamurugadas talk about fulfilling his dream to finally speak at an international conference on testing. His passion for testing was infectious, and one of his suggestions was to pick a single mnemonic each day from the link above and try it out. It takes only five minutes to understand each one, and then a time-box of 30-60 minutes to implement it on a given story. Any tester who claims they do not have time to try these is doing one of the following, none of which are constructive: deluding themselves about the reality of time, intentionally shirking responsibility, confining themselves to their own cultural expectations or actively refusing to learn and grow as a tester. Try these out and see what happens. Use the ones that work for your context, and discard the others, but be sure to tell your team and other testers within your division what did and did not work for you; sharing that information prevents others from having to reinvent the wheel.

Decision Tables:

Robert Sabourin reminded me at CAST 2015 about how testers can use decision tables, something I had not done since my days testing access control panels in the security industry, yet the concept is easily applied when exploring software pathways and user scenarios. In my opinion, this is a more mathematical way to approach storytelling, using boolean logic, but it can be just as effective. The end artifact is still a fairly thorough story of how you are going to conduct your testing, but note that decision tables do not account for questions raised during testing or for areas that the tester will not test; those aspects should be documented alongside the decision table. While more mathematical than straightforward testing models like HTSM, and arguably less user friendly, this visual can still easily be explained to non-technical folks within Product Management. I suggest it here since this method may appeal to minds that are more geared toward this type of thinking.

decision-table

So, how does it work? In short, testers construct test flows and scenarios in a format that contains three parts: Actions, Conditions and Rules. These components, along with the expected outcome, True or False, determine the testing pathways. There is no subjectivity, as there is with non-boolean expected results, since everything is on or off, a 1 or a 0. This paints a very clear picture of how a feature has been tested, and implies which other scenarios remain untested. It gives product owners both insight into your test strategy and awareness of potential risks they may not yet have foreseen, as you explore the product for value from a customer perspective; albeit a simulated one. Remember, we can never be the customer, but we can simulate click paths using established user personas in our flow testing process.
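Here is a minimal sketch of that idea in code, using an invented login feature. Real decision tables usually live in a spreadsheet, but enumerating the rules programmatically makes the point that every combination of conditions is a distinct pathway:

```python
from itertools import product

# A sketch of a decision table for an invented login feature. Conditions
# are boolean inputs; each rule is one combination of those inputs; the
# action is the expected outcome for that combination.
conditions = ["valid username", "valid password", "account locked"]

def expected_action(valid_user: bool, valid_pw: bool, locked: bool) -> str:
    """The oracle for this invented feature: what should happen per rule."""
    if locked:
        return "show lockout message"
    if valid_user and valid_pw:
        return "grant access"
    return "show login error"

# Enumerate every rule (2^3 = 8 pathways). Each row is one testable
# scenario; anything outside the table is, by implication, untested.
for rule, values in enumerate(product([True, False], repeat=len(conditions)), start=1):
    inputs = ", ".join(f"{name}={value}" for name, value in zip(conditions, values))
    print(f"Rule {rule}: {inputs} -> {expected_action(*values)}")
```

The printed table is itself the artifact: eight explicit rules you tested, and an implicit statement about everything you did not.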

Side Note: If you are not doing flow testing based on the established User Personas, then ask your Product Management team to provide those to you so that you can be doing better testing work in that area. Anyone conducting flow testing using their own self-created click-through paths apart from your established industry Personas may not be adding as much product value as they believe.

Why do this extra work?:

We should be able to qualify the testing we have done on a given feature in a way that is digestible by our stakeholders. Again, this is for the sake of increasing awareness, not simply proving that you ‘did your job’. If your Product Owner asks you, “What is your test strategy for Feature X?” what would your answer be? Will you fall back on the typical response and just tell them you used years of knowledge and experience with the product to do the job? Or will you be able to actually show them something that substantiates your testing from a high-level view they can understand and from which they can garner real value? The latter, I hope. Believe it or not, your stakeholders need this information. Some may claim that they have been ‘getting by’ just fine all this time without this extra level of storytelling, so they do not need to do it. I liken this argument to a swimmer saying he beat everyone in his heat, and therefore he’s ‘good enough and doesn’t need improvement.’ First in your heat might be impressive, but in the greater competition, outside of that vacuum, those stats might fall flat against the larger pool of competitors. Try to look through a paper towel roll and tell me you can see the full picture without fibbing a little.

tunnel-vision

On that note, it is our directive as testers to be constantly learning from each other within the community, something most testers have yet to explore. We’ve all heard that teaching is ‘to learn something again for a second time.’ By forcing ourselves to use new cognitive tools to tell a story, we are also helping ourselves become Product SMEs, allowing us to be more thoughtful and valuable as testers. This benefits not only the company but your own personal career path as well. If interested, you can read more on that in my blog post titled The Improvement Continuum.

Tailor Your Story:

Finally, there are multiple ways to tell the same story, and your methods should change depending on your audience. For example, we should not use the same talking points with C-level management as we would with our Product Owner. Since each role within the company has a different relationship to the product, the story should also be different. You would keep the same themes, obviously, but your language should be tailored to best fit each audience’s specific perspective as it relates to the product. In Talking with C-Level Management About Testing – Keith Klain – YouTube, Keith Klain discusses how different that messaging should be based on your target audience. My favorite quote from that video is when the interviewer asks Keith, ‘How do you talk to them about testing?’ to which he replies, ‘I don’t talk about testing’, at which point he explains how we can discuss testing without using the traditional vernacular. Being aware of your audience should influence not only how you test but how you talk about your testing. I might be compelled to write another blog post specific to this topic; that is, if there is enough interest in how to mold the testing story for the various roles within your stakeholder demographic.


Conclusion:

It is common for Product Management and development teams to be on two completely different pages when it comes to managing customer expectations. Developers and testers can lose sight of the business risks, while product owners and VPs can lose sight of the technology constraints. Ultimately, it is the job of Product Management to make the final call on deploying new code, while our job as testers is to inform those folks of any potential product risks related to the release. This is mentioned in the Abstract, but it is worth highlighting again; here is a good blog post by Michael Bolton exploring that tangent: Testers: Get Out of the Quality Assurance Business « Developsense Blog. Again, the purpose of testing is “to cast light on the status of the product and its context in the service of our stakeholders.” – James Bach. Testers tell a compelling story, but at the end of the day, your story should roll up to that. If it does not, then reevaluate whether the information you are telling is for your benefit or your stakeholders’. Be professionally skeptical and ask yourself questions: Is this worth sharing? Have I made it compelling enough to drive home my progress? Many Product Owners have had no interest in what their tester documents because it has traditionally been of little value to them. Don’t use a Product Owner’s lack of desire as an excuse to stunt your own growth as a tester. Get good at this, and become a more responsible tester. While the failing of apathy is the responsibility of the entire team, we are at the helm of testing and have the power to change it for the better.

I’d like to hear your feedback, based on your own experiences of how you tell your testing story to your stakeholders. I’ve made the case for us, as testers, needing to tell that story. I’ve also gone one step further and provided you with models and other techniques you can use to get started putting this into action. I am eager to hear how you currently do this, as well as which parts interest you most, from the material I have presented here.

Quality Concepts For Testers

Abstract: I’ve put together some resources on techniques, tools and, most importantly, new ways of thinking that I believe would be beneficial to those within our skill-craft of testing. I know we come from a variety of backgrounds, so I wanted to share some of the quality concepts that I see as important from a testing perspective, so that we have a common baseline from which to operate.

First, let’s be honest. As software testers, we are not pushed by the actual work of testing to continually improve. While software developers are forced to continually adapt to new technologies to stay relevant, testers can easily settle into a comfortable routine. In short, testers are more prone to become products of their environment and continue doing the same level of work, so we must consciously pursue information that makes us better at what we do, to ensure we do not plateau in our learning (see my other blog post entitled The Improvement Continuum). Does everyone get into this rut, this plateau mindset? No way. But can we sometimes plateau and reach a point where we feel like, ‘My process hasn’t changed in 6 months; am I still adding value? Am I still increasing the quality of the product as well as my own mindset?’ You bet. These are valid questions, so I am hoping this information will help you feel more empowered.

Some of you might know this already, but I am a big believer in the ideals put forth in Context-Driven Testing (CDT), which holds that there are no “Best” practices, but rather “Good” ones that fit the situation, team and industry you are in. Quality is also subjective, depending on the value given to it in the specific circumstance. Your stakeholders define what that quality is worth; it does not simply exist inherently. Therefore, “Quality is value to some person,” as Gerald Weinberg put it; or, more accurately, “Quality is value to some person who matters” – Michael Bolton/James Bach.

Many times as testers, we push forth in testing with our view of what quality is, but we do not re-evaluate that term from our stakeholder’s point of view for each project or feature/epic we work on. How do you know if you are a context-driven tester? Use the following heuristic (rule of thumb):

If you genuinely believe that quality is subjective and also believe that each story or set of stories has a different target group of stakeholders, then you must also believe that revisiting your definition of quality is a ‘must do’ when switching between projects.

But before we get too deep, here is a quick guideline that I like to give both new and veteran testers to make sure we’re in the same ballpark and moving in a common direction:

Chances are, there is information here that is unfamiliar and piques your interest. If this information does not interest you, then ask yourself if you are a detractor or a promoter. This is going to sound harsh, but it is the truth: you are either moving the testing community forward, or consciously remaining in the dark. The more I dive into the context-driven test community, the more I realize there is to be learned. My preconceptions are constantly shifted and modified as I explore this kind of information. My challenge to you: find content here, or related content and tools, and take a deep dive into it over the course of a few weeks or even months. Become an SME (Subject Matter Expert) in a given topic, then actively share that with your team (Community of Practice meetings, weekly or monthly roundtables, etc.) and in time become a known mentor and thought leader on that subject, which should organically draw others to you. What I’ve listed above is just the tip of the iceberg. There are so many avenues to explore here that finding something you could get passionate about should be the easy first step. The hard part is seeing it through, but having others on a team pursuing endeavors along similar lines should give you strength.

My hope is that this will spur cross-team brainstorming within your teams, or help you find some new learning pathways if you are testing on an island. A lot of these ideas and tools are great, but putting them in context through ongoing discussions is even more useful. Please feel free to leave comments on what has and has not worked for you; I’d love to engage. The discussion will also benefit others, which is the whole point of sharing these ideas to begin with.