Mentality

Blog posts in the “Mentality” category are related to the psychology of the tester: the paradigms, perceptions, perspectives, biases, and tacit knowledge that we bring, consciously or unconsciously, to the table as we go about the art of testing.

Shedding The False Burden of QA, Part One: Testers Always Have Enough Time

Good testing is hard. It’s a deep intellectual endeavor that requires critical thinking and, among other things, time. The good news for testers, however, is that you always have enough time. Wait… does this sound contrary to reality? Haven’t you been in the situation where you think you found the issue, but there’s something deeper going on that you simply don’t have time to research?

Remember, a tester is a lighthouse.  What does a lighthouse do?  It casts light on the rocks, uncovering hidden risks for captains, who then redirect the ship. It doesn’t prioritize those risks (that’s product management), nor remove or fix them (that’s development); it simply reports them.

The job of a tester isn’t reporting all risks; it’s reporting all known risks that you discovered in the time allotted.  Not having enough time is itself a risk that may need to be on your report. Notice that I said “risks,” not “bugs,” on purpose, since bugs are only one type of risk that may end up on that list.

Risk: anything that could threaten the on-time and successful completion of the project.

A warning, lest anyone think I’m saying you can always claim you didn’t have enough time: the risk of “not enough time” is best conveyed in tandem with supporting information. Typically this is done as part of the testing report that discusses the quality of the testing, which, among other things, includes what could not be tested.  Oops, is that a typo? Nope.

We do great at talking about what we did do, but not about what we didn’t do. The time isn’t yours; it’s the company’s. That paycheck you just got purchased that time. It also purchased the company’s right to a truthful report on the quality of the testing. Don’t feel the need to take ownership of that time; simply report on the risk gap and let the person(s) who matter make the call on how to proceed with the work.

The Binge-Purge Cycle of Frameworks

Abstract: Process frameworks are all the rage; some are sold as a magic pill, a silver bullet to solve your organization’s problems and achieve rapid delivery. They all have pros and cons, so I won’t spend time belaboring what’s already widely discussed. Instead, I wanted to share my thoughts about the vicious cycle they can perpetuate within companies, as demonstrated by a history of many organizations continually adopting and then scrapping these frameworks (most recently Capital One). The binge/purge cycle is staggering to me.

Overview
Recently, Dean Leffingwell posted on LinkedIn about SAFe 6.0, a new version of his framework. My position on these process frameworks has evolved over the years, and at this point I’m convinced they are used more for corralling and controlling systems that have other systemic issues at play (whether they intend to do this or not). Below was going to be my reply to Dean’s post, but it was too long for LinkedIn’s character limit, so I decided to post it here. I would love to hear your insights and feedback.


The more I interact with different software companies, the more I feel these frameworks are being used (many times unintentionally) to mask unrelated problems in a given system.

For example, when a system (software division) suffers from one of many pain points (e.g. let’s use ‘poor talent and hiring practices’ for the sake of this exercise), these process frameworks can mask the symptoms of that source issue. They do this by producing more busyness, sometimes only as the ‘appearance’ of work.

Now, it’s likely that none of us would claim that any of these frameworks can increase the software engineering acumen of employees; however, the new busyness being observed may cause leadership to feel like the framework is succeeding, when actually it is only masking the source.

No, this framework doesn’t claim to raise engineering talent; yet, when management sees busyness from formerly lower-performing employees, those leaders may not feel the pain of that source problem anymore. If there’s no pain, there will be no intrinsic motivation to solve the real core issue of poor talent and hiring practices. Over time, however, that problem becomes harder to distinguish and increasingly dissociated from the effort of applying the framework. Years later, new leadership witnesses this along with other unrelated systemic issues (corporate culture, product quality, innovation, etc.) and deems that the framework isn’t working. They either change to another out-of-the-box solution, complete with more promises, or they discontinue its implementation entirely. Ever heard a version of this statement? – “We tried Agile. It didn’t work.” Maybe it failed for legitimate reasons, but that’s the exception, not the rule.

There are at least two issues here that bother me, compounding one another:

1) The original source problem of poor talent was hidden by the framework’s implementation.

2) The assertion that the framework was a failure, or the cause of other systemic inefficiency, is false.

Both of these are misleading, but due to short human memory and the rotation of new leadership, the cycle continues. I’ve seen this repeated in many places I’ve worked or consulted, and I am curious how long it will take leadership in this community to recognize this cycle and preventively squash it for future generations. Or (I fear) is the rent-seeking nature too alluring, destined to keep overtaking true craftsmanship?

Anecdotally, the most successful environments I’ve worked in, where engineering talent and soft skills were high, operated fabulously without any kind of heavy process framework.

The Agile community needs an upheaval, a revolution, similar to what the testing community started to go through in the early 2000s. Will that happen soon? The 8-Ball reads, “Future uncertain.”

Practical Approach to Delivery Enablement

Abstract: The job of the influencer or coach/mentor in a software team, regardless of title (Manager, Director, Agile Coach, Scrum Master, Tech Lead, etc) isn’t to evangelize best practices and beat people over the head with manifestos. Rather, the job of any professional in an Agile environment is delivery enablement, which I define as enhancing/optimizing delivery of value to the customer and our stakeholders. This translates into working code in production. Everything we do should roll up to that, and the 9 Business Outcomes* described below.

When coaching a new team or set of teams, and especially when joining a new company, it is very important to bring a great amount of intellectual humility. I have seen many folks come in guns blazing with their ideas, trying to implement what they did at previous workplaces too quickly. This is far too common, and it usually has multiple consequences:

  • Ignores the unique context of the new team(s) or company (context-oblivious vs context-aware)
  • Proofs-of-concept executed without first understanding context can fail, even if they might have been a good idea, due to lack of buy-in and ownership (command and control approach vs coaching/mentorship)
  • Can be off-putting to partners and team members who know the system better and may already have ideas they were hoping you’d pair with them on instead (know-it-all vs collaborative partner)
  • Multiple other detriments, including the burning of bridges and very quickly becoming seen as someone to be worked around instead of worked with (seen as an impediment rather than a delivery enablement advocate)

It is important to show that you care about the people more than anything else up front. The technology problems will come, believe me, and when they do, you need to be respected and trusted in order to be an effective value delivery enabler.  So, what is my mental model and the approach I take when faced with new teams or a new environment?

Practical Approach to Delivery Enablement

First 30 Days (Initial Absorption)

It is important to learn the various team contexts and the people at play, gaining trust and building chemistry rather than making suggestions out of the gate…

  • Heavy Learning of various team frameworks, contexts, scrum events, communication patterns, practices, tooling, dependencies, etc.
  • Build Partnerships with key technology and business stakeholders (via 1-on-1s, team outings, troubleshooting, cultural events, etc).
  • Identify top 2 or 3 Impediments (‘pain points’) experienced by the development and product team.
  • Gain Consensus on prioritization of the 9 Business Outcomes (below) with Tech/Product delivery leadership buy-in.

Days 30–90 (Planning and Experimentation)

At this point, I’ve reached the second stage of learning and I seek to leverage new relationships to start planning how to tackle delivery impediments…

  • Prioritize and Generate Impediment resolution plan: The “How” and any possible solutions must come from the team, or indirectly through coaching (e.g. Socratic Method)
  • Upstream Coaching: Management and other influential business stakeholders external to the development team may need to be educated on inhibitive anti-patterns observed – Start small and build conversational safety (more offering up questions than definitive changes at this point).
  • Fill Agility Practice Gaps: Delivery Transparency via Dashboarding/Metrics, Various trainings, Balance team protection with stakeholder needs, and more.
  • Canary-in-a-Coalmine: Execute small proofs-of-concept (POCs), targeting the more experimental/open teams, to gain traction on any new or pivoted agility or engineering practice. (The goal here is to avoid command-and-control practice setting and instead let peer-to-peer influence abound post-POC.)
  • Conduct initial Team Health & Agility assessments at both the ART/org and team levels (gain consensus on which categories matter to move the needle)
  • Consult development team Retrospectives on any new POC or change and bubble up feedback as appropriate to management and influencer level.
  • Widen circle of go-to partners, allies and proactive stakeholders (leverage strengths, interests to get POCs off the ground).

Day 90+ (Coaching and Optimizing)

A success measurement at this stage is having gained the trust and respect of the development team, other peers, and leadership such that the momentum and desire for continuous improvement stay strong. We can close the loop by both directly and indirectly affecting the business outcomes that the business prioritized during the first 30 days…

  • Continue to Fill Agility Practice Gaps: Optimize Value-flow through ART (Agile Release Train), PI Planning, Risk & Dependency Tracking, Hardening/Innovation allocation, and more.
  • Engineering Practices: Provide coaching on Shift-left DevOps embracement, CI/CD gap identification, automation opportunities, unit testing, build and deployment gates, quality and development risk modeling and test strategy (contextual depending on Monolith or Microservice), SDLC optimization, Vertical Slicing of teams, monitoring and alerting and more.
  • Tool Optimization & Information Radiators: Offer guidance on ALM configuration and visibility, provide stakeholder-appropriate value-driven dashboards (Product v Business v Devops, etc).
  • Impediment Removal: Continually facilitate team and org level technical and non-technical roadblocks.
  • Conflict Resolution: Manage team conflict via iterative stages only escalating when appropriate through 1-on-1, 2-on-1, then manager level if proven necessary or for recurring trends – (e.g. keep ‘team business’ at the team level ideally).
  • Ongoing Agility Health Assessments: Continue to assess/re-assess quarterly, or every six months, depending on environment and contextual maturity.

The Nine Business Outcomes

Everything we do should roll up to one or more of these 9 Business Outcomes. Whether you are a developer, tester, product owner, or otherwise, it is important to gain consensus with your stakeholders on which outcomes are and are not a priority in their minds. This then allows delivery teams to move in the direction that our stakeholders across both IT and the rest of the business have in mind…

[Image: Path To Agility – 9 Business Outcomes]

Let us rise above the average statistic that says 64% of the features we build are rarely or never used. Imagine how much time and OpEx waste that is ($$). This is what we need to think about as development teams (Product Owners, Engineers, etc.). The goal of Agile isn’t just shortening the feedback loop, but also the learning cycle, so we CAN deliver the right thing. More on that in the video link here, David Hawks – User Stories Suck

*Note: The Nine Business Outcomes content is part of the Path To Agility (PTA) program – more information can be found on PTA at the link provided.

Hiring Good Testers

Abstract: I frequently get asked how I interview testers, be it anyone from exploratory to automation and anywhere within that spectrum (i.e. including “Toolsmiths”; see Richard Bradshaw’s work here for context on that term). What the person is really asking me, though, is, “How do you know someone who interviews well will actually perform well once hired?” The real answer is, ‘You don’t.’ You can use interview models to help reduce the unknowns, but ultimately, if you’ve been a hiring manager long enough, you’ve hired some duds and had to manage them out. I ultimately try to talk to people about the number one thing that drives good testing, and that is the desire and capacity to learn; desire alone isn’t enough. Testing is, after all, learning at its core. We’re scientists, not showstoppers. We’re explorers, not Product Managers. Our passion lies within the journey, not so much the end or counting the number of things we found along the way (unless you’re doing Domain Testing – I jest). So, as a boilerplate for a year or so, I used Dan Ashby’s interview model as my go-to when doing phone screens and in-person interviews. After a few more years, I realized that my interview process, like my testing process, must continually adapt and break so that it can reform to the contexts of whatever company or product in which I work. The major shifts in my interview process have coincided with the times I changed companies. Below are my current ways of ‘weeding out the weak,’ per se, and saving myself time when it comes to finding passionate talent in testing and automation (notice that, other than this sentence, you won’t find any questions around specific tools like Selenium, SoapUI, etc. Good testing is tool agnostic). The sections are divided below: I typically use Phase I during the initial phone screens, and Phase II when candidates come in person. Sometimes I dive into Phase II on the phone if I get a feeling they are ahead of the curve. <Note: The term “agile” is intentionally typed as ‘little-a agile’, not ‘big-A Agile’. We’re talking about the ability to flex and adapt, not the marketing monolith that is peddled heavily right now.>

 

Phase I: Initial Weed-Out Questions for Testers (in a Modern Software Development Environment)

  • What is good testing?
    • Poor answers: Clicking through a product to make sure the quality is good and all of the requirements are met.
      • This person likely has a shallow definition of what it means to test. This is Claims Testing, sometimes called human checking, but it does not indicate an understanding of deep testing. This candidate is also a Product Owner at heart if they think they “assure” quality, rather than cast light on risks so that others (Product Owners/Business) can assure what does or doesn’t meet the level of quality desired.
    • Acceptable Answers: Exploration of a product, experimentation so that we can learn about what’s happening in a product, casting light on any risks that might threaten the value of the product or timing of the project and making those risks known to our stakeholders so they can make decisions on how to mitigate that risk (i.e. fix, ignore, backlog, etc)
      • This candidate has at least a basic understanding of their role as a tester within a larger organization. Their statement around bringing risks to light, but not making decisions on them is healthy and speaks to their maturity of not being in the gatekeeper mindset.
  • What is the role of a tester in an agile organization?
    • Poor answers: Find bugs, write test cases, break things, stop releases, get certifications
      • This shows that the gatekeeper mindset still exists, along with a heavy administrative focus that links a tester’s value to test-case writing or bugs found, instead of to providing value to the end customer with holistic testing approaches.
    • Acceptable answers: Explore for value to the customer even if my PO didn’t mention it in the acceptance criteria, challenge the veracity of the acceptance criteria, operate under the assumption that Product probably always missed something when creating User Stories, use testing models to fill those gaps in my thinking so I am not just relying on my mental model/experience to do good testing.
      • This displays intellectual humility in understanding that their thinking is inherently flawed in some respect – as it is for everyone. It also shows a healthy understanding of testing and the flexibility to pivot for the purpose of providing customer value, not just checking off acceptance criteria.
  • When does the testing process start and end in an agile scrum team?
    • Poor answers: After code complete, when the Dev hands off the code to QA, after a deploy we start testing, and then we stop when we cover everything.
      • This shows that they believe testing is something you “start” after development, and that they are still in a Waterfall mindset when it comes to what testing actually is (i.e. not just clicking around a product). Also, this answer implies that we test until we as testers are satisfied (unhealthy), not until Product is satisfied (healthy).
    • Acceptable answers: Throughout the entire SDLC process – this starts in the portfolio planning stages as we should have a QA/Test lead pairing with Dev, Product and Architecture to discuss risks up front as we initially design the product. If we’re waiting until the sprint to start testing, then we’ve missed a lot of opportunities to help our stakeholders cast light on risk, much of which can be uncovered earlier in the process before any code is actually written.
      • This shows the candidate has a firm understanding of the fact that risk exposure and mitigation never starts and ends, but is rather ongoing. I would also ask follow-up questions around how they did this at previous companies, because it shows a high sense of maturity and leadership if they injected themselves into the design phase and not just down the line in the scrum-team portion of testing. In fact, a good tester in an agile org will be frustrated and may even have a story about leaving a company that did not allow them to participate earlier in the process.
  • With the world of agile testing constantly changing, what meetups or conferences do you attend, and what books do you read on the latest practices that would make us (your company) want to hire you over any other tester?
    • Poor answers: I haven’t read any books or attended meetups, but I have 20 years of experience and I Google when needed to solve problems, as well as read Guru99 which has articles on testing and development.
      • Years of experience does not make someone a good tester, nor does ad-hoc Googling show a learning mindset, as everyone has to do that as part of their job anyway. Also, when you Google “software testing”, the first non-ad hit that comes up is Guru99, so for obvious reasons this is a questionable answer when given alone.
    • Acceptable answers: Every month or two I go to a local meetup, here are a few blogs I read regularly <names 3 or 4 sources>, one of my favorite books on testing is <names title and author and tells you about something they learned from it>, I follow people on Twitter <like it or not, this is where the testing community lives and thrives! E.g. Link>
      • This shows that they are constantly learning (the #1 skill needed for good testing is learning – getting tired of hearing this yet?). No deep technical questions need to be asked to determine if someone is in the right mindset for a career in the test industry, as many managers think – now, depth of knowledge for a specific role is another story. This also shows they are immersing themselves in the testing community and finding out what other testers and companies are doing to stay up to date on the latest tools, practices, and mindsets around testing and agility, rather than waiting for their manager or the company to bring that to them.

Phase II: Advanced Quality & Testing Theory topics

If the candidate breezes through these with flying colors, I then go into the deeper topics below, which typically can only be answered confidently by true practitioners of the testing skill-craft.

  • Familiarity with the Four Schools of Software Testing (and why Context-Driven is healthier than the other three)
  • Understanding of good/bad testing measurement and metrics (e.g. DLR, Defect Density, First/Second/Third order measurements and when to use each appropriately)
  • Testing heuristics (e.g. HTSM model for testing)
  • Explain good Test Reporting (i.e. the 3-Part Testing Story/Braid)
  • Testers are not Gatekeepers (i.e. Product vs Tester responsibility understanding)
  • Regression practices (RCRCRC model, as well as Combinatorial testing practices to decrease process waste)
  • Testing Oracles (FEWHICCUPPS model for Product consistency)
  • Quality Criteria: Capability, Reliability, Usability, Charisma, etc (ability to give example of test types in at least a few of these)
  • Testing Techniques: e.g. Can they explain the difference between Scenario Testing and Flow Testing. What is Domain Testing? Etc.
  • Agility within testing (shift left, pairing early, mindset of not having to wait for code to start testing, Shake ‘N’ Bake pair-testing process)
  • How Exploratory Testing differs from Ad-Hoc or Random Testing (and why that matters – i.e. Exploratory testing should have a structure and they should be able to speak to that)
  • Test Chartering and SBTM (Session Based Test Management)
  • The Dead Bee Heuristic for problem solving and ensuring issues are actually fixed
  • Artifact generation: Lean Test Strategy documentation vs Heavy Test Cases (i.e. hopefully the former, so they spend more valuable time testing rather than documenting)
  • Understanding the difference between Checking and Testing
  • What is Galumphing and why is it important in testing?
  • What are the two pillars of Testability (Observability and Controllability) and can they explain why both Devs and Testers should care about them
  • Good understanding of the difference between ‘Best’ and ‘Contextual’ Practices
  • Good understanding of the detriment of IEEE testing standard ISO29119 (+other standards from the consortium or dogmatic static models)
  • Bonus: Familiarity with the RST namespace (how and why this group of the testing industry has broken off from traditional norms, shedding legacy habits and mindsets, etc)

Conclusion

People who react well to the more advanced topics above, displaying that learning mindset (even if they do not come across as experts for the specific question asked), are typically the ones you want in your shop. Of course, you must be sure that your in-person interview process has a good element of letting candidates experiment in the interview to see how they think. Many times I open our web product on a laptop and put it in front of them to see what they do. Do they sit there without touching it and just speak theory, or do they grab the laptop, pull it toward them, and start playing with the product? The latter usually tells me they have an experimentation mindset and willingness to learn, and it leads to better questions from them about our business needs and desires.

At the end of the day, for most projects, I value a growth mindset and passion for learning over someone with 20 years of experience who thinks they have everything already figured out and little left to learn. Intellectual humility, the belief that one’s thinking is inherently flawed and has gaps, is key to being a good scientist, and thus a good tester. Some testers have even come to call themselves ‘Professional Skeptics’ to sum up that scientific, humble, and critically thinking mindset in a single phrase – and I like it. If you’ve been hiring for any length of time, you’ve probably had people who interviewed well but eventually fell short of your expectations; I know I have, and I had to manage them out. That is to say, I do not present this information as a silver bullet of sorts. We are still humans, so this blog post is yet another flawed model from which you must adapt your hiring process, discarding or keeping what you feel is best suited to your environment. I am eager to hear your thoughts on what common interview behaviors and attributes you’ve noticed across your good hires that did live up to or grow beyond your initial expectations.

Testing Terminology

Abstract: Another brief post, reacting to an accumulation of bad posts I have seen lately in the testing community surrounding the terminology we use to define our craft. With language comes power, and this terminology tells others what we think about our craft. Much of what I have seen lately casts testing in a negative light – one of button clickers and script writers, rather than intellectual explorers and light casters. I am the latter, so I must delineate and separate myself from the former. My post below is mainly a reaction to this uTest forum post, the straw that broke the camel’s back.

Recently, I joined the uTest community, simply because I wanted to get some more raw testing time in, as well as see what caliber of testing is available in one of the most popular crowd-sourced testing services in the world. After browsing around for a few days, I found that while the intentions in many blog posts, articles, and links were probably good, much of the information related to definitions is outdated and contributes to misleading new testers, which keeps the testing craft in the dark – preventing it from advancing. Here are a few corrected definitions of the poorly defined words that I saw while going through some of the forums on uTest…

Testing: evaluating a product through exploration and experimentation for the purpose of informing our stakeholders on risk.

Note: “The purpose of testing is to cast light on the status of the product and its context, in the service of our stakeholders.” – James Bach. We are not Product Management, and we do not make the go/no-go calls on releases; we are simply the lighthouse that points out risk and potential problems so that our stakeholders can make much more informed decisions about how to run the business.

More on testing here: https://www.utest.com/articles/what-is-testing
More on testing vs checking: http://www.satisfice.com/blog/archives/856
More on the difference between Testing and QA: http://www.developsense.com/blog/2010/05/testers-get-out-of-the-quality-assurance-business/

Bug: anything that threatens the value of the product.

Note: Anyone can log a bug report. This does not necessarily mean that it is a “defect”, which is in fact a term that the context-driven software testing community is moving away from. More on CDT here: http://context-driven-testing.com/

Quality: Quality is value to some person, who matters.

Note: This is incredibly important, because it highlights the extremely subjective nature of what product quality really is. We must realize that the objective is not to test “everything”; rather, it is to test what matters in the time allotted. “The goal is to reach an acceptable level of risk. At that point, quality is automatically good enough.” – More on that here: http://www.satisfice.com/articles/gooden2.pdf

Test Case: formally structured, specific, proceduralized, explicit, documented, and largely confirmatory test ideas – and often, excessively so.

Note: A test case is not a test, any more than a recipe is a meal, or an itinerary is a trip. Open your mind to the fact that heavily scripted test cases do not add the value you think they do. If you are reading acceptance criteria, and writing test cases based on that, you are short-circuiting the real testing process and are going to miss an incredible amount of product risks that may matter to your client. More on the value (or lack thereof) of test cases here: http://www.developsense.com/blog/2017/01/drop-the-crutches/

Quality Assurance (QA): Ah, yes. The most abused title/phrase in the testing world… No one person does this, and anyone who has the title “QA” is fooling themselves. “The quality assurance role in the company resides with the management and the CEO (the principal quality officer in the company), since it was they—and certainly not the testers—who had the authority to make decisions about quality.”

Notes: Again, more on the difference between Testing and QA here: http://www.developsense.com/blog/2010/05/testers-get-out-of-the-quality-assurance-business/

Test Plan: The test plan is the set of ideas that guide or represent the intended test process. Often those ideas are only partially documented, spread across multiple documents, and subject to change as the project evolves.

Notes: More here… http://www.satisfice.com/tools/tpe-model.pdf

Perhaps you meant one of these instead?…
Test Plan Document: A test plan document is any document intended to convey test plan information. However, test plan documents are not the only source of information about the test plan. Test plan information is also contained in the oral tradition of the project and the culture of the company.

Test Strategy: The test strategy is the way tests will be designed and executed to support an effective quality assessment. Test strategy is the plan for what parts of the product will be covered by tests and what test techniques will be used. Test strategy is distinct from the logistics of implementing the strategy. Test strategy is essentially the “brains” of the test process.

I will stop there for now. I could go on about how we should replace the phrase “verify that” with “challenge that” in our vocabulary, or how standards from IEEE and ISO (e.g. ISO29119) are archaic and detrimental to software testing/development, but that would run on for pages, so I’ll save it for another time.

Crossposted: uTest – Testing Terminology

What is Testing?

Abstract: A brief post, the tip of the iceberg on exploring the question ‘What is testing?’. If this intrigues you, then comment or contact me and let’s have a deeper discussion.

Updated: April 19th, 2018 (added my mental model to give a visual/be more explicit about my more general statements)

Many people confuse “checking” for “testing”. Checking is a part of testing, but not fully representative of testing as a whole. So what do I mean when I say checking, and how is that different from testing?

  • Checking is the act of confirmatory testing, verifying specific facts and outcomes typically by following a script or test case.
  • Testing is much larger and more holistic than that – I define testing as evaluating a product through exploration for the purpose of informing our product stakeholders on risk. (A minimal code illustration of this distinction follows this list.)
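To make the distinction concrete, here is a minimal, hypothetical sketch of what a “check” looks like in code: an observation tied to a decision rule fixed in advance, which a machine (or a human following a script) can evaluate without judgment. The product, function, and expected value below are invented purely for illustration; testing is everything around and beyond this kind of check.

```python
# A "check": confirmatory, scripted, machine-decidable.
# Everything here is a hypothetical illustration, not a real product.

def invoice_total(items):
    """Toy function standing in for the product under test."""
    return round(sum(qty * price for _, qty, price in items), 2)

def check_invoice_total():
    # Decision rule fixed in advance; it passes or fails with no human judgment.
    total = invoice_total([("widget", 2, 4.99)])
    assert total == 9.98, f"expected 9.98, got {total}"

check_invoice_total()

# Testing asks the questions this check cannot:
#   - Is 9.98 even the right expectation (tax? rounding? currency)?
#   - What happens with zero items, negative quantities, or 10,000 line items?
#   - Does anything else about the invoice look wrong while we explore?
```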

Our guiding light, the purpose of testing, is…
“to cast light on the status of the product and its context in the service of our stakeholders” – James Bach

If you are simply taking acceptance criteria/requirements and then writing test cases based on that, you are selling yourself short and doing the product a huge disservice! Much of what we find as testers comes off-script, and high-value unknowns are found by letting humans do what humans do best – be true explorers! In fact, when Michael Bolton asked Brian Kurtz and me, in a Rapid Software Testing class, to define “What is testing to you?”, this is what we came up with as a combination of our shared mental models…

[Image: What Is Testing – shared mental model]

My job as a tester is to inform my client as early as possible about any potential risks that I feel may threaten the value or the on-time, successful completion of the project, so I must be a tester, not simply a checker. This kind of answer is a much more compelling and holistic response than simply saying something like “Finding bugs” or “Breaking things” (which we actually do not do; more on that here: Testers Don’t Break The Software). As testers, we must move the testing craft in a positive direction and get away from doing only claims verification. Claims testing is important, but checking is only one piece of what testing actually is. Stop worrying about ‘green or red’ and instead focus on ‘does a problem exist?’. Ever been driving down the road when smoke starts coming from the hood of your car? You are going to pull over, even if the engine light has not come on. Are you going to keep driving until that light tells you there is a problem? I hope not! I hope you would use your fantastic human brain to make a smart first-order measurement and decide to pull over. You don’t need a red light to tell you there is a problem. Similarly, in testing, a red light may mean nothing at all, while a green light may be deceiving you into thinking there are no problems when in fact there may be – just open the hood and look! (or “bonnet” for my fellow testers across the pond)

So, if I asked you how you tested something, what would your answer be? That you simply used your knowledge, years of experience, and some tools? Not compelling enough! I want to hear about Capability, Scalability, Compatibility, Charisma. I want to hear about how your Flow testing varied from your Scenario testing and why those two are different. Tell me about the methods of testing you used. Tell me why the testing done was “good” enough. Tell me what roadblocks inhibited testing, and how you worked around those, or which ones still stand in your way. Tell me what you did not test – many testers forget to talk about that, leaving stakeholders wondering if they even considered certain items, which lowers their confidence in our ability to explore for risk that matters.

Good testing generally doesn’t come from heavy checklists, test cases, or scripts that are followed – anyone can do that. So let’s do testing, which not just anyone can, in fact, do well.

Raise the bar!

Crossposted (original version): uTest – What Is Testing?

Testing Manifesto

Abstract: The Testing Manifesto is an encapsulation of what some of us context-driven testers believe the role of “Tester” to be. The skill-craft of testing can become so blurred in many environments that we thought it was necessary to put this out there. While we’ve used this internally for a while now, we were prompted to share it after seeing a recent tweet gain multiple likes and retweets, which on the surface seems noble, but is actually part of the misinformation out there on what the role of testing actually entails. Testing is not Product Management; Testing is not Programming. At my company, this is used as a guideline, in conjunction with the Agile Manifesto that drives the higher-level team processes.

Overview

A while back, we all got a good look at the Agile Manifesto, along with the twelve principles (or combined into one PDF here), which put the focus on coming together to collaborate and solve problems. Now, while collaboration is an excellent way to generate solutions, since leveraging the wisdom of the crowd is a valuable practice, it can sometimes become a driving force that puts too much emphasis on cross-functionality and pushes craftsmanship to the back burner. The Agile shops we’ve witnessed fall into two main categories: collaboration-centered and craftsmanship-centered. Don’t be confused; these are not completely opposed camps, and the latter is not devoid of the former. A good Agile shop that centers around craftsmanship will of course also leverage collaboration as a part of that, but the implementations that make us wary are those that attempt to push everyone to do everything, when a team of specialists with some overlap may actually be a much healthier approach to solving engineering problems.

The Testing Manifesto

Long story short, in order to put some emphasis back on the craft of testing, and be sure that the role testers fill is clear and does not get pushed to the edges, we have put together this Testing Manifesto (PDF).

[Image: The Testing Manifesto – click to open the PDF]

Purpose

This is meant to complement the Agile Manifesto; the two are to be used in parallel, and one is not meant to replace the other. We hope that you find value in this and can use it to help guide discussions with people who could be considered more Agilista in nature and need help finding the balance between collaboration and craftsmanship. You can have both, but there’s a healthy balance that many do not attain. We hope that putting this out there publicly will help teams move the needle toward a more balanced perspective. If we’re truly context-driven, then we must admit that craftsmanship can’t always be a low priority; it matters, and thus a mix of generalists and specialists may be needed to effectively solve many of the modern engineering problems we face in the industry.

My Testing Journey

Abstract: This is a personal experience story about where I started as a tester and how I have grown through my experiences and interaction with various mentors along the way. I’ve moved from a gatekeeper to an informer, mainly due to the influence of some smart minds who took the time to invest in me. I’ve shifted paradigms, pretty drastically, since my formal entry into testing in 2005, and I believe I am in a much more mentally sound and intentionally aware state today.

The following timeline is an account of my own personal journey as I moved through my testing career. It is my hope that by sharing this, I give you some context for not only where I am coming from, a barometer of sorts by which to judge all my other content, but also some perspective on how I might engage with you on various topics should we have discussions from time to time. I believe that learning about each other’s personal history and experiences can allow us to have more meaningful conversations and hopefully minimize the chance that we will talk past each other. I believe putting this information out there publicly is congruent with my guiding light of service to others which is my defining heuristic for determining my actions.

Pre-2005:

Early on in my testing career I simply tested what I wanted to test, and thought that was enough. I had no intention of reporting out on my testing other than writing bug reports, unless someone specifically asked, and even then it was only a shallow, non-compelling verbal response. It’s safe to say that I had my head in the sand when it came to learning new things, and while I had good intentions, I did not take the responsible steps needed to be intentional about my learning. I used to champion my somewhat OCD nature as a powerful sword. Finding everything was my goal, and I had strict tunnel vision that required everything to be pixel perfect; this was obviously at a time when I thought perfection was actually attainable. To make it worse, this endeavor did not come with any logical reason or continual evaluation of whether the work I was doing was actually worthwhile. “Was I doing good testing?” was not a question I asked myself on a daily basis, as I thought good intentions were enough. I had little, if any, awareness of the term “product risks,” as I only wanted the user interface/backend to work in the way “I” thought it should. After all, “I am the tester, the gatekeeper, the last line of defense; therefore, I know best, right?” Well, I now believe I was living under a detrimental paradigm; detrimental to the product and my team, and frankly one that stifled my own advancement in the skill-craft of testing. Today, if I were to evaluate my 2005-self as a tester, I would tell him, “Under your current mindset, it will not be possible for you to do good testing,” and since the word “good” is a value judgment coming from my future self, I believe it may have had a meaningful impact on the 2005-me. Anyone got a time machine?

2006:

At this point I moved from a test-everything mode into a justification mindset, which happened to correspond with moving to a new company and working on a very different product platform. Was this coincidence? I think not. I felt I had to forever prove I was doing my job, but not for the sake of making the product better or informing my stakeholders. Instead, I was doing this for completely selfish reasons that probably ended up misleading my stakeholders and fooling myself more than it was helping. Again, I may have had good intentions all along, as I stated earlier, but you know what they say about those. This selfishness and tendency to mislead was not intentional in nature, but was akin to the blind leading the blind. I was still not at the level at which I would claim I was doing good testing, because I navigated uncharted waters with a very limited tool-belt. I made no effort to find the tools needed to alleviate those gaps in my thinking, unless they were already available. During this time I only tested to confirm what I already believed or suspected (see: Confirmation Bias) rather than practice intellectual humility to increase my knowledge and effectiveness as a tester. My reasons for testing were still flawed: I tested for the sake of statistics, higher bug-count numbers, etc. So, I would generate reports on what I tested and the outcomes, but these were extremely biased and made me out to be a hero tester, as if I were solely responsible for the product and code quality when I was never even involved in the creation. At this point, I was still a long way from becoming a good tester, and had no concept of the business and developers creating the initial quality, with testers simply being the informers of that build process. I would say things like “It’s my job to break things.” No it isn’t. “I create a quality product.” No I don’t.

2008 (early):

I then moved into the paradigm of ‘It is my job to find defects and report on the status of those defects but I still know better on what should be fixed since I am the tester’ and while I explored, testing outside of the acceptance criteria, I was doing ad-hoc/random testing rather than truly structured exploratory testing. I was still missing the boat on the true nature of what it meant to do exploratory testing and I wasn’t even thinking yet on reporting out on what I did not test, only what I did. I was getting to a point where I realized there were more important product risks, but still somewhat ignoring them because I had not fully broken out of the paradigm of the tester being the final “gatekeeper” of the product/release, which we are not. (Read more on that in Michael Bolton’s blog “Testers: Get Out of the Quality Assurance Business“)

2008 (mid/later):

By the end of 2008, I still lingered in the shallow end of the unconsciously incompetent exploratory tester pool (yes, incompetent), and to a fault, in that I reported anything and everything I found, even when 80% might be edge cases or of low value to the customer. I was not weighing product risks and priorities as part of my testing, and many times would insist we fix something before I would close a given story, yet it was outside of the scope of that story. I would try to convince developers they were being short sighted, which may have been true in many cases, since there was a love/hate relationship between Dev and Test at the company I worked for at the time. However, many times I was contributing to feature creep (AKA modifying a feature’s acceptance criteria while it is in progress, thus invalidating estimates and increasing TTL without warranted justification from Product Management). Fortunately (and through some prodding from my main mentor at the time), I quickly learned to log separate bugs for these issues I was finding, and had to come down from my high-and-mighty testing tower to do so. It took some humbling to realize that perhaps I did not know the full picture about the product and business priorities at play, so these bugs I kept reporting were in fact not as critical as I initially advocated.

2009/2010:

Events in my personal life, outside of work, made me reevaluate how I interacted with people. I realized I was rather selfish, and not putting the “other” first. Through some strong mentorship from some wiser people in my life, I turned a corner in how I operated as a human. This personal shift directly affected my work. I no longer saw myself as the ‘one and only’ expert on a given feature. I no longer saw being a ‘tester’ as more important when it comes to product releases. I began to shy away from telling others that testers were gatekeepers, and instead pushed the idea that we have to work with others to determine what is or is not important to the stakeholders.

2011:

I continued to champion product management’s goals over my own, but I still did not do it with any structure. Every new feature I approached was done with good intent, but no consistency. I could not look back on what I had tested from project to project and come up with any kind of internal metrics for myself to rate my efficiency as a tester. Eventually, I created A Personal Metric For Self-Improvement years later, but at this point, I was still ‘in the dark’ on how to be a good tester, and what really constitutes a good tester, but I desired to learn that even though I was not conscious of how to go about it, at least not on the surface.

2012:

It was not really until 2012 that I embraced the idea that testers truly are the informers of risks, not the final decision makers. Once we inform the stakeholder of a certain bug or risk, and product management says ‘OK, we can push a release even with these bugs,’ then we need to step back, as testers, and let the business run the business. This was also when I was first introduced to the context-driven testing community. This is a community that embraces the main principles listed on the Context-Driven Testing site, and I came to realize that I embraced these as well. Of all the testing environments in which I have been, I would say that Dealertrack has been my most intellectually beneficial experience. This is due to a number of influences and interactions, but mainly to mentors like Brian Kurtz. Brian also exposed me to the great minds in the community, like Michael Bolton, James Bach, Elisabeth Hendrickson, Gerald Weinberg, Cem Kaner, and others. I refer to this as my “awakening” phase as a tester, where I moved into thinking much more intentionally and critically about the skill-craft of testing. I used to test things ‘my way’ and that was enough justification for me, but I then realized that it was not enough justification for my stakeholders. Before, I had my own best interests at heart; I was now realizing, through learning more about the depth of testing, how to have the customer’s best interests at heart, and chasing those superseded mine. I realized that I used to be part of the problem, not the solution, in that I was not actively trying to learn and improve my skill-craft in testing. I was still not using explicit models, heuristics, oracles, mnemonics, and other tools that were available in the testing community. Sure, I inherently had my own, as we all do, but those models and heuristics were limited. Once I learned about the availability of the external community, I quickly realized how small and obtuse I had been as a tester. I was humbled by the amount of new information and realized that I had a long way to go if I wanted to really consider my work good testing.

2013-Present:

Once the floodgates of possibilities in learning had opened for me mentally, the pathway was clear. I had, and still have, a lot of learning to do before I can consider myself consciously competent in certain areas. I started to get intentional about mapping out my strengths and weaknesses using personal metrics like this or this. I became more self-critical about what makes a good tester, and challenged myself in ways that I had not done before. I dove headlong into the context-driven testing community, which cast even more light on my inadequate areas. Through discussions with other, much wiser testers, I learned how to increase my skill-craft. I learned how to tell a compelling story to my stakeholders. I learned that learning takes more hard work than I had previously thought, and like anything worth accomplishing, it does not just happen; you must create it for yourself. I became ever more aware of the needs of others around me, and how I could use my knowledge to aid in their endeavors rather than just stay in my own little bubble of my team, my project, my stories. I got used to saying, “Yeah, maybe you’re right and I need to re-evaluate why I believe what I believe,” instead of forging ahead with my own ideas based only on my limited experience; limited in the sense that I had previously made decisions based only on what I had been through, rather than always seeking counsel and establishing relationships where others could break through my shield, expose me to my own biases, and do it in a way that genuinely cared for my advancement. I realized that I had to rely on gathering the perspectives of others to help shape my decision-making process and harden any actions I took before I carried them out. It also dawned on me that I needed to be much more intentional about my interaction with the online community so that I could reach those outside of my immediate walls. I rejoined Twitter and created an account to engage with testers (@connorroberts). I began attending conferences such as CAST, Reinventing Testers, etc. to engage with other minds in the community. I created this blog where I could share new ideas and tools with others. I started a local meet-up, DFW Testers, where those in and around Dallas/Ft. Worth, Texas could come together to explore the depths of testing, and I continue to look for even more ways to engage with others about our skill-craft.

So, where am I now?

I am a work in progress, but I can safely say that I have honed the art of constantly becoming a more competent tester every week. I am involved in the larger community, I crave learning, I engage and collaborate with others, and I know that practice won’t make me perfect, but it will make me more competent. As long as I am rapidly experimenting with new ideas, practices, tools, and models, I will avoid my greatest fear as a tester – becoming stagnant and ultimately irrelevant. In short, it is the skill of critical thinking and forever-learning that allows me to be at peace as a tester. I don’t need to fret over a user story or worry about a feature deadline, because I am at peace with the knowledge that I have filled my tool-belt with things that allow me to do sufficient testing within any time frame. I also remind people that you are the average of the five people with whom you spend the most time. Surround yourself with critical thinkers, and with people who know more than you. As long as I continue to do that, I know I will have peers in my life who will hold me accountable and challenge my biases. I have no concerns for my own future.

A Documentation Story

Abstract: This is a story about an experience that Brian Kurtz and I had in shifting test documentation strategies from an all-manual, unintentional approach to the utilization of an exploratory logging tool. We also talk about our observations on how that worked to our advantage within our context, recapturing both time and operational expense. The testing community is always going back and forth on how to balance manual testing with automation. In that vein, we’re convinced that exploratory logging tools can help by allowing manual testers to spend more time on actual testing: investigating, learning about the product, exploring for value, etc., and less time on the granular minutiae of documentation. These tools can help minimize the administrative overhead traditionally mandated to, or self-imposed by, testers. This experience can tell us something about the larger picture, but it does not give us the whole story, and by no means does it include the only measurements we took to make our business case.

Note: When we say “test documentation” in this article, we’re not referring to the usual product outlines, coverage maps, etc that the feature teams will create, but rather one specific team’s documentation focused around bug/session reporting. We realize that bug reporting is only one small part of what a tester does, but this proof-of-concept was executed on a team that does a lot of claims verification and manual check-testing since they operate separately, outside of our normal scrum teams.

Special thanks to Brian Kurtz for auditing and contributing to this work.

Overview

When, how, and to what extent to document has always been something of a quandary for many. When I was a young tester I felt it was a must. We must have documentation, simply for the sake of having it. We must write down everything about everything. There was no consideration of whether that documentation actually provided value. As I grew in my testing career, and through the mentorship of others, I realized that I needed to undertake a shift in my thinking. My new barometer came in two forms. The first came in the form of the question, “What problem am I trying to solve?” The second comes in the word satisfice. In other words, how much documentation is enough to be satisfactorily acceptable based on what is actually necessary, rather than on what we traditionally have done as testers? This involved learning to align my testing compass to point toward the concerns of product management, rather than simply all the concerns that I had.

Recently, we were working with a team that wanted to solve a few problems: decreasing the cost of testing as well as the amount of time it takes to do that testing, all while raising product quality. Management always wants it both ways, right? More quality with less time. Well, we found a tool that we believed could help mitigate this discrepancy, while at the same time allowing our team to do better testing.

Of course, the first step after choosing a tool (which required much research) was to make a business case, as this would be external to the already approved yearly budget. Since this was a new cost to the company, our business case had to be compelling on both fronts: cost saving and time saving. The tool we landed on was QASymphony qTest eXplorer, an exploratory logging tool that takes much of the administrative overhead out of bug reporting. The tool costs $39 per user per month, which comes to $468 per tester per year. Since the team we’re targeting has three testers, that’s a yearly cost of $1,404. With the cost of the proposed tool in hand (we’ll subtract it at the end), let’s take a look at the time and expense estimates (a rough arithmetic sketch follows the comparison below):

Manual Testing without an Exploratory Logger Tool

  • Description: Each tester was doing their own form of bug reporting/session-documentation, either through Notepad++, Word, Evernote or directly in Jira.
  • Time spent (single tester, 1 day): 30 Min/Site @ 4 Sites per day = 2 hours per day.
  • Time spent (single tester, 1 week): 2 Hours per day x 5 days = 10 hours per week.
  • Contractor bill rate (unrealistic sample): $20* per hour x 10 hours = $200 per week.
  • Multiplied for a team of three testers = 30 hours or $600 per week on test documentation.

Manual Testing with an Exploratory Logger Tool (qTest Web eXplorer)

  • Description: Each tester was now using this new tool that partially automates the process of note-taking, error reporting, taking screenshots, and other administrative overhead normally required by testers. 
  • Time spent (single tester, 1 day): 5 Min/Site @ 4 Sites per day = 20 minutes per day.
  • Time spent (single tester, 1 week): 20 Minutes per day x 5 days = 1.67 hours per week.
  • Contractor bill rate (unrealistic sample): $20* per hour x 1.67 hours = $33.33 per week.
  • Multiplied for a team of three testers = 5 hours or $100 per week on test documentation.
[Image: documentation comparison estimates]

*An intentionally unrealistic contractor bill rate, used simply for the sake of this blog post.
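For anyone who wants to poke at these numbers themselves, below is a minimal Python sketch of the arithmetic above. The per-site times, team size, bill rate and tool price are the figures from this post; the 52-week extrapolation used to annualize the savings is an assumption added here for illustration, not part of our actual business case.

# A minimal sketch of the documentation-cost arithmetic above.
# The $20/hour figure is the same intentionally unrealistic sample rate
# noted in the footnote; the 52-week year is an assumption added here.
HOURLY_RATE = 20                    # unrealistic sample contractor bill rate
TESTERS = 3                         # testers on the target team
SITES_PER_DAY = 4
DAYS_PER_WEEK = 5
TOOL_COST_PER_TESTER_MONTHLY = 39   # qTest eXplorer price quoted above

def weekly_team_doc_cost(minutes_per_site):
    """Return (hours, dollars) the whole team spends on documentation per week."""
    hours = minutes_per_site * SITES_PER_DAY * DAYS_PER_WEEK / 60 * TESTERS
    return hours, hours * HOURLY_RATE

without_tool = weekly_team_doc_cost(30)  # 30 min/site by hand
with_tool = weekly_team_doc_cost(5)      # 5 min/site with the logging tool

yearly_tool_cost = TOOL_COST_PER_TESTER_MONTHLY * 12 * TESTERS   # $1,404
yearly_net_savings = (without_tool[1] - with_tool[1]) * 52 - yearly_tool_cost

print("Without tool: %.1f h / $%.2f per week" % without_tool)
print("With tool:    %.1f h / $%.2f per week" % with_tool)
print("Net yearly savings after the $%d tool cost: $%.2f"
      % (yearly_tool_cost, yearly_net_savings))

Running it reproduces the 30 hours/$600 and 5 hours/$100 weekly figures from the lists above.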

It was our experience that introducing this new tool into the workflow saved a considerable amount of time, as well as recapturing expense that could be woven back into the budget and put to better use elsewhere.

Qualifications

Keep in mind, these are surrogate measures: second-order measurements that take little time to gather. They tell us something, but they do not give us the whole story. If you do want to move toward a leaner documentation process through the use of exploratory logging tools, by no means should you make a business case on this data alone. It should be one facet, among others, that you use to make your business case. Also, don’t get locked into thinking you need a paid tool to do this, as 80% of your business case is most likely getting mindsets shifted toward a shared paradigm about what documentation is actually necessary.

We spoke not only with the testers, but also with the people absorbing their work, as well as their management, to gain insight into the pros and cons of this shift both as it was taking place and after implementation. After our trial run with the tool, and before we paid for it, we followed up on the pros and cons: How does the tool benefit your workflow? How does it hinder it? What limitations does the tool have? What does the tool not provide that manual documentation does? How has the tool affected your testing velocity, positively or negatively? Is the report output adequate for the team’s needs? What gripes do the developers/managers have about the tool? Is it giving other team members, outside of testing, the information they need to succeed? And so on. So, while the expense metrics are appealing to the business, the positive effects on the testing itself are what got us excited: the tool frees up the testers to work on other priorities, increases collaboration, and keeps them from being consumed by the documentation process. We also spent many hours in consultation with QASymphony’s support team on a few bugs, but that was part of the discovery process of learning the workflow quirks.

Our Challenge To You…Try it!

Try an exploratory logging tool: anything that can help track or record actions in a more automated way, eliminating much of the documentation overhead and minutiae that testers normally deal with on a daily basis. We happened to use qTest Web eXplorer (paid, free trial), which is available for Firefox and Chrome, or you can try a standalone option called Rapid Reporter (freeware), which we have found to do many of the same things. We have no allegiance to any one product, so we would encourage you to explore a few of them to find what works (or doesn’t) for your context. The worst thing that can happen is that you try these out, they don’t work for your context, and you go back to what you were doing. In the process though, positive or negative, you will hopefully have learned a little.

Conclusion

We feel it is very important for testers to evaluate how tools like this, combined with processes such as Session-Based Test Management, can help testing become a much more efficient activity. It is easy as testers to get settled into a routine where we believe that we have ‘figured it out’, so we stop exploring new tools, consciously or subconsciously, and new ways of approaching situations become lost to us. We hope you take our challenge above seriously and at least try it. You are in charge of your own learning and of sharpening your craft as a tester, so we would encourage you to try something new even if you fail forward. This could bring immense value to your testing and your team. Go for it. If you work in a more restrictive environment, then feel free to use the data we have gathered here as justification to your manager or team to try this out.

One of the biggest pitfalls in the testing community right now is over-documentation. Many testers will claim that test cases and other artifacts are required, but often they are not; it just feels like they are. If you believe that heavily documented test cases and suites are required to do good testing, then you are locking yourself out of the reality that there are many other options, tools and methods. Do you think that asking a product owner to read through one hundred test cases would really inform them about how the product works, how you conducted your testing, and what the risks and roadblocks are? I would lean toward ‘No’, as a script is simply that, a script, not a testing story.

In this blog post, we told a story about how a tool alleviated documentation overhead within our context. This is a story based on our experience with that tool and the benefits it brought. While this is very different from traditional means of test documentation, we feel it is a step in the right direction for us. But don’t just take our word for it, read this or this from Michael Bolton or this from James Bach or this from James Christie or…you get the point.

A Personal Metric for Self-Improvement

Article revisions: Learning is continuous, thus my understanding of testing and related knowledge is continually augmented. Below is the revision history of this article, along with the latest version.

  • December 31, 2015, Version 1.0: Initial release.
  • March 31, 2016, Version 1.1: Most definitions reworded, multiple paragraph edits and additions, updated Excel sheet to calculate actual team average.
  • July 28, 2016, Version 1.2: Added sub-category calc. averaging (credit: Aaron Seitz) plus minor layout modifications.
  • September 20, 2016, Version 1.3: Replaced/reworded Test Strategy & Planning > Thoroughness with Modeling (verb) & Tools > Modeling with Models (noun).

Abstract: A Personal Metric For Self-Improvement is a learning model meant to be used by testers, and more specifically, those within software testing. Many times, self-improvement is intangible and immeasurable in the quantifiable way that we as humans seek to understand. We sometimes use this as an excuse, consciously or subconsciously, to remain stagnant and not improve. Let’s talk about how we can abuse metrics in a positive way by using this private measure. We will seek to quantify that which is only qualifiable, for the purpose of challenging ourselves in the sometimes overlooked areas of self-improvement.

Video Version for the non-readers 😉



Overview

I will kick this post off with a bold statement, and I stand by it: you cannot claim to do good testing if you believe that learning has a glass ceiling. In other words, learning is an endless journey. We cannot put a measurable cap on the amount of learning needed to be a good tester, thus we must continually learn new techniques, embrace new tools and study foreign ideas in order to grow in our craft. The very fact that software can never be bug-free supports this premise. I plan to blog about that later, in a post I am working on regarding mental catalysts. For now though, let’s turn our attention back to self-improvement. In short: since learning is unending, and better testing requires continual variation, the job of self-improvement can never be finished.

This job can feel a bit intangible and almost like trying to hit a moving target with a reliable repeatable process; therefore, we must be intentional about how we approach self-improvement so we can be successful. Sometimes I hear people talk about setting goals, writing things down or trying to schedule their own improvement through a cadence of book reads, coding classes or tutorial videos perhaps. This is noble, because self-improvement does not simply happen, but many times we jump into the activity of self-improvement before we determine if we’ve first focused on the right space. For example, a tester believes that they must learn how to code to become more valuable to their company, so they immediately dive into Codecademy classes. Did the tester stop to think…

Maybe the company I work for has an incomplete understanding of what constitutes ‘good testing’? After all, the term ‘good’ implies a value statement, but who is the judge? Do they know that testing is both an art and a science? I am required to consider these variables if I want to improve my testing craft. Does my environment encourage a varied toolset for testers, or simply the idea that anyone under the “Engineering” umbrella must ‘learn coding’ in order to add value?

Now, Agile (big “A”) encourages cross-functional teams, while I encourage “cross-functional teams to the extent that it makes sense”. At the end of the day, I still want a team of specialists working on my code, not a group of individuals who are slightly good at many things. Now, is there value in some testers learning to code? Yes, and here is a viewpoint with which I wholeheartedly agree. However, the point here, as it relates to self-improvement, is that a certain level of critical thinking is required to engage System 2 before this level of introspection can even take place. If this does not happen, then the tester may end up focused on a self-improvement endeavor that, while perhaps beneficial, was not undertaken for the intentional purpose of ‘better testing’.

So, why create a metric?

This might be a wake-up call to some, but your manager is not in charge of your learning; you are. Others in the community have created guides and categories for self-improvement, such as James Bach’s Tester’s Syllabus, which is an excellent way to steer your own self-improvement. For example, I use his syllabus as a guide and rate myself 0 through 4 on each branch, where zero is a topic in which I am unconsciously incompetent, and four is a space in which I am consciously or perhaps unconsciously competent (see this Wikipedia article if you need clarification of those terms). I then compare my weak areas to the type of testing I do on a regular basis to determine where the major risk gaps are in my knowledge. If I am ever hesitant about rating myself higher or lower on a given space, I opt for the lower number. This keeps me from over-estimating my abilities in a certain area, as well as helping me stay intellectually humble on that topic. This self-underestimation tactic is something I learned from Brian Kurtz, one of my mentors.

The Metric

The personal self-improvement metric I have devised is meant to be used in a private setting. For example, these numbers would ideally not roll up to management as a way of evaluating whether you are a good or bad tester. These categories and ratings are simply created to give you a mental prompt about the areas you may need to work on, especially if you are in a team environment, as that requires honing soft skills too. However, you may have noticed that I have completely abused metrics here by measuring qualitative elements using quantitative means. This is usually how metrics are abused for more nefarious purposes, such as influencing groups of decision makers to take unwarranted actions. However, I am OK with abusing metrics in this case, since it is for my own personal and private self-improvement. Even though the number ratings are subjective, they mean something to me, and I can use these surrogate measures to continually tweak my approach to learning.

My main categories are as follows: Testing Mindset, Leadership, Test Strategy & Planning, Self-Improvement, Tools & Automation and Intangibles. To an extent, all of these have a level of intangibility, as we’re trying to create a metric by applying a number (quantitative) to an item that can only accurately be described in qualitative (non-numeric) terms. However, since this is intended for personal and private purposes, the social ramifications of assigning a number to these categories are negligible. The audience is one, myself, rather than hundreds or thousands across an entire division. Below is the resulting artifact, and you can download the Excel file as a template to use for yourself, as it contains the data, glossary of terms, sample tester ratings, sample team aggregate, etc.

[Image: the resulting personal-metric radar graph]

Click here to download the current Microsoft Excel version

Application & Terms

Typically, you can use this for yourself or, if you manage a team of testers, privately with them. I would never share one tester’s radar graph with another, as that would defeat the purpose of having a private metric that can be used for self-improvement. The social aspects of this can be minimized in an environment where a shared sense of maturity and respect exists. You can also find the following terms and definitions in the “Glossary” tab of the referenced Excel sheet:

Testing Mindset:

  • Logic Process: ability to reason through problems in a way that uses critical thinking skills to avoid getting fooled.
  • User Advocacy: ability to put on the user hat, albeit biased, and test using various established consumer personas and scenarios (typically provided by Product Management), apart from the acceptance/expected pathways.
  • Curiosity: ability to become engaged with the product in a way that can and does intentionally supersede the intended purpose as guided by perceived customer desires (i.e., like a kitten with a new toy, yet also able to focus that interest toward high-value areas and likely risks within the product).
  • Technical Acumen: ability to explain to others, with the appropriate vocabulary, what kind of testing has been, is or is going to be completed or not completed.
  • Tenacity: ability to remain persistently engaged in testing the product, seeking out risks related to the item under test.

Leadership:

  • Mentorship: ability to recognize areas of weakness within the larger team and train others accordingly to address these gaps.
  • Subject Matter Expertise: ability to become knowledgeable in both the product and the practice of testing, for the purposes of supporting the stakeholders’ desires as well as supplementing the information needs of other team members.
  • Team Awareness: ability to get and stay in touch with the two main wavelengths of the team, personal and technical, in order to adjust actions to alleviate testing roadblocks.
  • Interpersonal Skills: ability to work well with others on the immediate or larger teams in such a way that facilitates positive communication and allows for more effective testing, including the ability to convey product risks in a way that is appropriate.
  • Reliability: ability to cope through challenges, lead by example based on previous experiences and champion punctuality as well as support a consistent ongoing telling of the testing story to Product Management.

Test Strategy & Planning:

  • Attention to Detail: ability to create adequately detailed test strategies that satisfy the requirements of the stakeholders and the team.
  • Modeling: ability to convert your process into translatable artifacts, using continually evolving mental models to address risk and increase team confidence in the testing endeavor.
  • Three-Part Testing Story: ability to speak competently on the product status, the testing method and the quality of the testing that was completed for the given item under test.
  • Value-Add Testing Artifacts: ability to create testing artifacts (outlines, mind-maps, etc) that can be used throughout the overlapping development and testing phases, as well as support your testing story in your absence.
  • Risk Assessment: ability to use wisdom, which is the combination of knowledge, experience and discernment, to determine where important product risks are within the given item under test.

Self-Improvement:

  • Desire: ability to maintain an internal motivator that brings passion into the art of testing, for the purpose of supporting all other abilities.
  • Quality Theory: ability to support a test strategy with an adequate sum of explicit and tacit knowledge through the use of a varied tool belt: models, apps, techniques, etc., as well as maintaining a strong understanding of a tester’s role within the development lifecycle.
  • Testing Community: ability to engage with both the internal and external testing communities in a way that displays intellectual humility to the extent that it is required to share new ideas, challenge existing ones, and move testing forward.
  • Product Knowledge: ability to become a subject matter expert in your team’s area of focus such that you can better expose risk and provide value to product management.
  • Cross-Functionality: ability to learn and absorb skills from outside a traditional subset of standards-based/factory-style testing, such that you can use these new skills to enhance the team’s collective testing effort.

Tools & Automation:

  • Data: ability to interact with multiple types and subsets of data related to the product domain, such that testing can become a more effective way of exposing important risks, be it via traditional or non-traditional structures.
  • Scripting: ability to use some form of scripting as a part of the test strategy, when appropriate, to assist with learning about risks and informing beyond a traditional tool-less/primarily human-only approach to the testing effort, so that the testing completed is more robust in nature.
  • Programming: ability to write code in order to establish a deeper understanding of a product’s inner working, to gain insight into why and how data is represented in a product, as well as close the gap between tester and developer perspectives.
  • Exploratory-Supplement: ability to embrace tools that can enhance the effectiveness of testing, allowing for a decrease in traditional administrative overhead.
  • Models: ability to embrace new ways of thinking, including explicit testing models that are made available in the course of work, or via the larger community. Appropriate contextual models help to challenge existing biases, decrease the risk gap, and reshape our own mental paradigms for the purpose of adding value to the testing effort.

Intangibles:

  • Communication & Diplomacy: ability to discuss engineering and testing problems in a way that guides the team toward action items that are in the best interests of the stakeholders, without overpowering or harming team relationships.
  • Ability to Negotiate: ability to prioritize risks that pose a threat to perceived client desires, such that the interaction with product management allows for informing over gatekeeping and risk exposure over risk mitigation in the service of our clients.
  • Self-Starter: ability to push into avenues of learning for the sake of improving the testing craft, without the need for external coaxing or management’s intervention. Ideally, this would be fueled by an ongoing discontent at the presence of unknown risks and gaps in learning.
  • Confidence: ability to display conviction in the execution of various test strategies, strategies that hold up to scrutiny when presented to the larger stakeholder audience for the purpose of informing product management.
  • Maturity & Selflessness: ability to distance one’s self from the product in a way that allows for informing stakeholders and the team with proper respect. This is done in a way that distances us from the act of gatekeeping by ensuring that our endeavor of serving the client supersedes our own agendas for the product.

The practical application of this is triggered when testers become introspective and self-critical about the areas mentioned within the spreadsheet. This can only be done when each area is studied in depth. I recommend that testers do an initial evaluation by rating themselves loosely on each category and subcategory, using the Glossary as a reference. These are my own guideline definitions for each term, against which you can rate yourself on a 0-4 scale. Your definitions of these words may differ, so treat these as mine. The calculation is of course a surrogate measure, meant only to be used as a rough estimate to determine areas for improvement. Once the areas of improvement that need the most attention have been identified (i.e., the lowest numbers that matter most to your team or project), the tester would then seek out resources to assist with those areas: tutorial videos, books, online exercises, peer-mentorship, and others. Don’t forget to reach out both to your company’s internal testing community and to those who live in the online and testing-conference space.
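If a spreadsheet is not your thing, the same bookkeeping can be reproduced in a few lines of code. The sketch below is only a hypothetical illustration, not the downloadable Excel template: the category and sub-category names come from the glossary above, while the ratings are made-up sample values on the 0-4 scale and the helper names are my own.

# A minimal sketch of the spreadsheet's averaging, using hypothetical ratings.
ratings = {
    "Testing Mindset": {
        "Logic Process": 3, "User Advocacy": 2, "Curiosity": 4,
        "Technical Acumen": 2, "Tenacity": 3,
    },
    "Tools & Automation": {
        "Data": 1, "Scripting": 2, "Programming": 1,
        "Exploratory-Supplement": 3, "Models": 2,
    },
    # ...the remaining categories from the glossary follow the same shape.
}

def category_averages(person):
    """Average the 0-4 sub-category ratings within each category."""
    return {cat: sum(subs.values()) / len(subs) for cat, subs in person.items()}

averages = category_averages(ratings)
overall = sum(averages.values()) / len(averages)

# Surface the lowest-rated sub-categories first: candidate areas to focus on.
weakest = sorted(
    (score, cat, sub)
    for cat, subs in ratings.items()
    for sub, score in subs.items()
)[:3]

print({cat: round(avg, 2) for cat, avg in averages.items()})
print("Overall average: %.2f" % overall)
print("Lowest-rated areas:", weakest)

The output simply points you at your lowest numbers; the thinking about which of those gaps actually matter to your team or project is still yours to do.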

Conclusion

Please remember, this metric is by no means a silver bullet, and these areas of focus are not meant to be used as a checklist, but rather as a guideline to help testers uncover areas of weakness of which they may not currently be aware. Many times, we do not realize an area of weakness, or our own biases, until someone else points it out to us. I have found that a documented approach such as this can help me recognize my own gaps. As stated previously, this is most useful when applied privately, or between a tester and their manager in a one-on-one setting. This is only a surrogate measure that attempts to quantify that which is only qualifiable. Putting numbers on these traits is extremely subjective and is for the purpose of catalyzing your own introspection. It is my hope that this gives testers a guide for self-improvement, collectively advancing the testing craft.