Mentality

Blog posts in the “Mentality” category are related to the psychology of the tester; paradigms, perceptions, perspectives, biases and tacit knowledge that we bring, consciously or unconsciously, to the table as testers as we go about the art of testing.

Testing Manifesto

Abstract: The Testing Manifesto is an encapsulation of what some of us, context-driven testers, believe the role of “Tester” to be. The skill-craft of testing has become blurred in many environments, so we thought it was necessary to put this out there. While we’ve used this internally for a while now, we were prompted to share it after seeing a recent tweet gain multiple likes and retweets; on the surface it seems noble, but it is actually part of the misinformation out there about what the role of testing actually entails. Testing is not Product Management; Testing is not Programming. At my company, this is used as a guideline, in conjunction with the Agile Manifesto that drives the higher-level team processes.

Overview

A while back, we all got a good look at the Agile Manifesto, along with the twelve principles (or combined into one PDF here), which puts the focus on coming together to collaborate and solve problems. Now, while collaboration is an excellent way to generate solutions, as leveraging the wisdom of the crowd is a valuable practice, it can sometimes become a driving force that puts too much emphasis on cross-functionality and pushes craftsmanship to the back burner. The Agile shops we’ve witnessed fall into two main camps: collaboration-centered and craftsmanship-centered. Don’t be confused; these are not completely opposed camps, and the latter is not devoid of the former. A good Agile shop that centers on craftsmanship will of course also leverage collaboration as a part of that, but the implementations that make us wary are those that attempt to push everyone to do everything, when a team of specialists with some overlap may actually be a much healthier approach to solving engineering problems.

The Testing Manifesto

Long story short, in order to put some emphasis back on the craft of testing, and be sure that the role testers fill is clear and does not get pushed to the edges, we have put together this Testing Manifesto (PDF).

testing-01

Click on image above to open PDF

Purpose

This is meant to complement the Agile Manifesto, as the two are to be used in parallel; one is not meant to replace the other. We hope that you find value in this and can use it to guide discussions with people who could be considered more Agilista, and who need help finding the balance between collaboration and craftsmanship. You can have both, but there’s a healthy balance that many do not attain. We hope that putting this out there publicly will help teams move the needle toward a more balanced perspective. If we’re truly context-driven, then we must admit that craftsmanship can’t always be a low priority; it matters, so a mix of generalists and specialists may be needed to effectively solve many of the modern engineering problems we face in the industry.

My Testing Journey

Abstract: This is a personal experience story about where I started as a tester and how I have grown through my experiences and interactions with various mentors along the way. I’ve moved from a gatekeeper to an informer, mainly due to the influence of some smart minds who took the time to invest in me. I’ve shifted paradigms, pretty drastically, since my formal entry into testing in 2005, and believe I am in a much more mentally sound and intentionally aware state today.

The following timeline is an account of my own personal journey as I moved through my testing career. It is my hope that by sharing this, I give you some context for not only where I am coming from, a barometer of sorts by which to judge all my other content, but also some perspective on how I might engage with you on various topics should we have discussions from time to time. I believe that learning about each other’s personal history and experiences can allow us to have more meaningful conversations and hopefully minimize the chance that we will talk past each other. I believe putting this information out there publicly is congruent with my guiding light of service to others which is my defining heuristic for determining my actions.

Pre/2005:

Early on in my testing career I simply tested what I wanted to test, and thought that was enough. I had no intention of reporting out on my testing other than writing bug reports, unless someone specifically asked, and even then it was only a shallow, non-compelling verbal response. It’s safe to say that I had my head in the sand when it came to learning new things, and while I had good intentions, I did not take the responsible steps needed to be intentional about my learning. I used to champion my somewhat OCD nature as a powerful sword. Finding everything was my goal, and I had strict tunnel vision that required everything to be pixel perfect; this was obviously at a time when I thought perfection was actually attainable. To make it worse, this endeavor did not come with any logical reason or continual evaluation of whether the work I was doing was actually worthwhile. “Was I doing good testing?” was not a question I asked myself on a daily basis, as I thought good intentions were enough. I had little, if any, awareness of the term “product risks,” as I only wanted the user interface/backend to work in the way “I” thought it should. After all, “I am the tester, the gatekeeper, the last line of defense; therefore, I know best, right?” Well, I now believe I was living under a detrimental paradigm; detrimental to the product and my team, and frankly one that stifled my own advancement in the skill-craft of testing. Today, if I were to evaluate my 2005 self as a tester, I would tell him, “Under your current mindset, it will not be possible for you to do good testing,” and since the word “good” is a value judgement coming from my future self, I believe it may have made a meaningful impact on the 2005 me. Anyone got a time machine?

2006:

At this point I moved from a test-everything mode into a justification mindset, which happened to correspond with moving to a new company and working on a very different product platform. Was this coincidence? I think not. I felt I had to forever prove I was doing my job, but not for the sake of making the product better or informing my stakeholders. Instead, I was doing this for completely selfish reasons that probably ended up misleading my stakeholders and fooling myself more than it was helping. Again, I may have had good intentions all along, as I stated earlier, but you know what they say about those. This selfishness and tendency to mislead was not intentional in nature, but was akin to the blind leading the blind. I was still not up to the level at which I would claim I was doing good testing, because I navigated uncharted waters with a very limited tool-belt. I made no effort to find the tools needed to alleviate those gaps in my thinking, unless they were already available. During this time I only tested to confirm what I already believed or suspected (see: Confirmation Bias) rather than practicing intellectual humility to increase my knowledge and effectiveness as a tester. My reasons for testing were still flawed: I tested for the sake of statistics, higher bug counts, etc. So, I would generate reports on what I tested and the outcomes, but these were extremely biased, and made me out to be a hero tester, as if I were solely responsible for the product and code quality when I was never even involved in the creation. At this point, I was still a long way from becoming a good tester, and had no concept of the business and developers creating the initial quality, with testers simply being the informers of that build process. I would say things like “It’s my job to break things.” No it isn’t. “I create a quality product.” No I don’t.

2008 (early):

I then moved into the paradigm of “It is my job to find defects and report on the status of those defects, but I still know best what should be fixed since I am the tester.” While I explored and tested outside of the acceptance criteria, I was doing ad-hoc/random testing rather than truly structured exploratory testing. I was still missing the boat on the true nature of what it meant to do exploratory testing, and I wasn’t yet thinking of reporting on what I did not test, only on what I did. I was getting to a point where I realized there were more important product risks, but I was still somewhat ignoring them because I had not fully broken out of the paradigm of the tester being the final “gatekeeper” of the product/release, which we are not. (Read more on that in Michael Bolton’s blog “Testers: Get Out of the Quality Assurance Business“)

2008 (mid/later):

By the end of 2008, I still lingered in the shallow end of the unconsciously incompetent exploratory tester pool (yes, incompetent), and to a fault, in that I reported anything and everything I found, even when 80% might be edge cases or of low value to the customer. I was not weighing product risks and priorities as part of my testing, and many times would insist we fix something before I would close a given story, even though it was outside the scope of that story. I would try to convince developers they were being short-sighted, which may have been true in many cases, since there was a love/hate relationship between Dev and Test at the company I worked for at the time. However, many times I was contributing to feature creep (i.e., modifying a feature’s acceptance criteria while it is in progress, thus invalidating estimates and increasing TTL without warranted justification from Product Management). Fortunately (and through some prodding from my main mentor at the time), I quickly learned to log separate bugs for these issues I was finding, and had to come down from my high-and-mighty testing tower to do so. It took some humbling to realize that perhaps I did not know the full picture about the product and business priorities at play, so these bugs I kept reporting were in fact not as critical as I initially advocated.

2009/2010:

Events in my personal life, outside of work, made me reevaluate how I interacted with people. I realized I was rather selfish, and not putting the “other” first. Through some strong mentorship from some wiser people in my life, I turned a corner in how I operated as a human. This personal shift directly affected my work. I no longer saw myself as the ‘one and only’ expert on a given feature. I no longer saw being a ‘tester’ as more important when it comes to product releases. I began to shy away from telling others that testers were gatekeepers, and instead pushed the idea that we have to work with others to determine what is or is not important to the stakeholders.

2011:

I continued to champion product management’s goals over my own, but I still did not do it with any structure. Every new feature I approached was done with good intent, but no consistency. I could not look back on what I had tested from project to project and come up with any kind of internal metrics for myself to rate my efficiency as a tester. Years later, I eventually created A Personal Metric For Self-Improvement, but at this point, I was still in the dark on how to be a good tester and what really constitutes one; I desired to learn that, even though I was not yet conscious of how to go about it.

2012:

It was not really until 2012 that I embraced the idea that testers truly are the informers of risks, not the final decision makers. Once we inform the stakeholder of a certain bug or risk, and product management says “OK, we can push a release even with these bugs,” then we need to step back, as testers, and let the business run the business. This was also when I was first introduced to the context-driven testing community, a community that embraces the main principles listed on the Context-Driven Testing site, which I came to realize I embraced as well. Of all the testing environments in which I have worked, I would say that Dealertrack has been my most intellectually beneficial experience. This is due to a number of influences and interactions, but mainly through mentors like Brian Kurtz. Brian also exposed me to the great minds in the community, like Michael Bolton, James Bach, Elisabeth Hendrickson, Gerald Weinberg, Cem Kaner and others. I refer to this as my “awakening” phase as a tester, where I moved into thinking much more intentionally and critically about the skill-craft of testing. I used to test things “my way,” and that was enough justification for me, but I then realized that it was not enough justification for my stakeholders. Before, I had my own best interests at heart; now, through learning more about the depth of testing, I was learning how to have the customer’s best interests at heart, and chasing those superseded mine. I realized that I used to be part of the problem, not the solution, in that I was not actively trying to learn and improve my skill-craft in testing. I was still not using explicit models, heuristics, oracles, mnemonics and other tools that were available in the testing community. Sure, I inherently had my own, as we all do, but those models and heuristics were limited. Once I learned about the availability of the external community, I quickly realized how small and obtuse I had been as a tester. I was humbled by the amount of new information and realized that I had a long way to go if I wanted to really consider my work good testing.

2013-Present:

Once the floodgates of possibilities in learning had opened for me mentally, the pathway was clear. I had, and still have, a lot of learning to do before I can consider myself consciously competent in certain areas. I started to get intentional about mapping out my strengths and weaknesses using personal metrics like this or this. I became more self-critical about what makes a good tester, and challenged myself in ways that I had not done before. I dove headlong into the context-driven testing community, which cast even more light on my inadequate areas. Through discussions with other, much wiser testers, I learned how to increase my skill-craft. I learned how to tell a compelling story to my stakeholders. I learned that learning takes more hard work than I had previously thought, and like anything worth accomplishing, it does not just happen; you must create it for yourself. I became ever more aware of the needs of others around me, and how I could use my knowledge to aid in their endeavors rather than just stay in my own little bubble of my team, my project, my stories. I got used to saying, “Yeah, maybe you’re right, and I need to re-evaluate why I believe what I believe,” instead of forging ahead with my own ideas based only on my limited experience; limited in the sense that I had previously made decisions based only on what I had been through, rather than always seeking counsel and establishing relationships where others could break through my shield, expose me to my own biases, and do it in a way that genuinely cared for my advancement. I realized that I had to rely on gathering the perspectives of others to help shape my decision-making process and harden any actions I took before I carried them out. It also dawned on me that I needed to be much more intentional about my interaction with the online community so that I could reach those outside of my immediate walls. I rejoined Twitter and created an account to engage with testers (@connorroberts). I began attending conferences such as CAST and Reinventing Testers to engage with other minds in the community. I created this blog where I could share new ideas and tools with others. I started a local meet-up, DFW Testers, where those in and around Dallas/Ft. Worth, Texas could come together to explore the depths of testing, and I continue to look for even more ways to engage with others about our skill-craft.

So, where am I now?

I am a work in progress, but I can safely say that I have honed the art of constantly becoming a more competent tester every week. I am involved in the larger community, I crave learning, and I engage and collaborate with others. I know that practice won’t make me perfect, but it will make me more competent, and as long as I am rapidly experimenting with new ideas, practices, tools and models, I will avoid my greatest fear as a tester – becoming stagnant and ultimately irrelevant. In short, it is the skill of critical thinking and forever-learning that allows me to be at peace as a tester. I don’t need to fret over a user story, or worry about a feature deadline, because I am at peace with the knowledge that I have filled my tool-belt with things that allow me to do sufficient testing within any time frame. I also remind people that you are the average of the five people with whom you spend the most time. Surround yourself with critical thinkers, and people who know more than you. As long as I continue to do that, I know I will have peers in my life who will hold me accountable and challenge my biases. I have no concerns for my own future.

A Documentation Story

Abstract: This is a story about an experience that Brian Kurtz and I had in shifting test documentation strategies from an all-manual, unintentional approach to the utilization of an exploratory logging tool. We also talk about our observations on how that worked to our advantage within our context, in recapturing both time and operational expenses. The testing community is always going back and forth on how to balance manual testing with automation. In that vein, we’re convinced that exploratory logging tools can help by allowing manual testers to spend more time on actual testing: investigating, learning about the product, exploring for value, etc., and less time on the granular minutiae of documentation. These tools can help minimize the administrative overhead traditionally mandated to, or self-imposed by, testers. This experience can tell us something about the larger picture, but does not give us the whole story, and by no means does this include the only measurements we took to make our business case.

Note: When we say “test documentation” in this article, we’re not referring to the usual product outlines, coverage maps, etc that the feature teams will create, but rather one specific team’s documentation focused around bug/session reporting. We realize that bug reporting is only one small part of what a tester does, but this proof-of-concept was executed on a team that does a lot of claims verification and manual check-testing since they operate separately, outside of our normal scrum teams.

Special thanks to Brian Kurtz for auditing and contributing to this work.

Overview

When, how and to what extent to document has always been something of a quandary to many. When I was a young tester, I felt it was a must. We must have documentation, simply for the sake of having it. We must write down everything about everything. There was no consideration of whether that documentation actually provided value. As I grew in my testing career, and through the mentorship of others, I realized that I needed to undertake a shift in my thinking. My new barometer came in two forms. The first was the question, “What problem am I trying to solve?” The second came in the word satisfice. In other words, how much documentation is enough to be satisfactorily acceptable based on what is actually necessary, rather than on what we have traditionally done as testers? This involved learning to align my testing compass to point toward the concerns of product management, rather than simply all the concerns that I had.

Recently, we were working with a team that wanted to solve a few problems: decreasing the cost of testing as well as the amount of time it takes to do that testing, all while raising product quality. Management always wants it both ways, right? More quality with less time. Well, we found a tool that we believed could help bridge this gap, while at the same time allowing our team to do better testing.

Of course, the first step after choosing a tool (which required much research) was to make a business case, as this would be external to the already-approved yearly budget. Since this was a new cost to the company, our business case had to be compelling on both fronts: cost savings and time savings. The tool we landed on was QASymphony qTest eXplorer, an exploratory logging tool that takes much of the administrative overhead out of bug reporting. This tool costs $39 per user per month, which comes to $468 per tester per year. Since the team we’re targeting has three testers, that’s a yearly cost of $1,404. Now that we have the cost of the proposed tool, which we’ll subtract at the end, let’s take a look at some other expenses and time estimates:

Manual Testing without an Exploratory Logger Tool

  • Description: Each tester was doing their own form of bug reporting/session-documentation, either through Notepad++, Word, Evernote or directly in Jira.
  • Time spent (single tester, 1 day): 30 Min/Site @ 4 Sites per day = 2 hours per day.
  • Time spent (single tester, 1 week): 2 Hours per day x 5 days = 10 hours per week.
  • Contractor bill rate (unrealistic sample): $20* per hour x 10 hours = $200 per week.
  • Multiplied for a team of three testers = 30 hours or $600 per week on test documentation.

Manual Testing with an Exploratory Logger Tool (qTest Web eXplorer)

  • Description: Each tester was now using this new tool that partially automates the process of note-taking, error reporting, taking screenshots, and other administrative overhead normally required by testers. 
  • Time cost (single tester, 1 day): 5 Min/Site @ 4 Sites per day = 20 minutes per day.
  • Time cost (single tester, 1 week): 20 Minutes per day x 5 days = 1.66 hours per week.
  • Expense (unrealistic sample): $20* per hour x 1.66 hours = $33.33 per week
  • Multiplied for a team of three testers = 5 hours or $99.99 per week on test documentation.

documentation-image2

Click the image above to view the documentation comparison estimates.

*This is used as an unrealistic contractor bill rate, used simply for the sake of writing this blog post.
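
If you want to play with these numbers yourself, below is a minimal sketch of the arithmetic above in Python. All inputs are the illustrative estimates from this post, including the unrealistic $20 per hour placeholder rate; nothing here is real billing data.

```python
# Rough cost model for the documentation comparison above.
# All figures are the illustrative estimates from this post; the
# $20/hour contractor rate is the same unrealistic placeholder.

HOURLY_RATE = 20                         # placeholder contractor bill rate ($/hour)
SITES_PER_DAY = 4
DAYS_PER_WEEK = 5
TESTERS = 3
TOOL_COST_PER_YEAR = 39 * 12 * TESTERS   # $39/user/month for three testers

def weekly_cost(minutes_per_site: float) -> float:
    """Weekly documentation cost in dollars for the whole team."""
    hours_per_week = minutes_per_site * SITES_PER_DAY * DAYS_PER_WEEK / 60
    return hours_per_week * HOURLY_RATE * TESTERS

manual = weekly_cost(30)   # Notepad++/Word/Evernote/Jira approach
tooled = weekly_cost(5)    # with the exploratory logging tool

weekly_savings = manual - tooled
yearly_savings = weekly_savings * 52 - TOOL_COST_PER_YEAR

print(f"Manual: ${manual:.2f}/week, with tool: ${tooled:.2f}/week")
print(f"Net yearly savings after tool cost: ${yearly_savings:.2f}")
```

Run as-is, this reproduces the $600 versus roughly $100 weekly figures above, and shows that even under these toy assumptions the yearly tool cost is an order of magnitude smaller than the documentation time it recaptures.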

It was our experience that introducing this new tool into the workflow saved a considerable amount of time and recaptured expense that could be woven back into the budget and better used elsewhere.

Qualifications

Keep in mind, these measurements are surrogate, second-order measures that did not take much time to gather. They tell us something, but do not give us the whole story. If you do want to move toward a leaner documentation process through the use of exploratory logging tools, by no means should you make a business case on this data alone. This should be one facet among others that you use to make your business case. Also, don’t get locked into thinking you need a paid tool to do this, as 80% of your business case is most likely getting mindsets shifted to a common paradigm about what documentation is necessary.

We spoke not only with the testers, but also with the people absorbing their work, as well as their management, to gain insight into the pros and cons of this shift, both as it was taking place and after implementation. After our trial run with the tool, before we paid for it, we followed up on the pros/cons: How does the tool benefit your workflow? How does it hinder it? What limitations does the tool have? What does the tool not provide that manual documentation does? How has the tool affected your testing velocity, either positively or negatively? Is the report output adequate for the team’s needs? What gripes do the developers/managers have about the tool? Is it giving other team members, outside of testing, the information they need to succeed? And so on. So, while the expense metrics are appealing to the business, the positive effects on testing are what got us excited: the tool frees up the testers to work on other priorities, increases the amount of collaboration, and keeps them from being consumed by the documentation process. We also spent many hours in consultation with QASymphony, working with their support team on a few bugs, but that was part of the discovery process when learning about the workflow quirks.

Our Challenge To You…Try it!

Try an exploratory logging tool: anything that can help track/record actions in a more automated way to eliminate much of the documentation overhead and minutiae that testers normally deal with on a daily basis. We happened to use qTest Web eXplorer (paid, free trial), which is available for Firefox and Chrome, or you can try a standalone tool called Rapid Reporter (freeware), which we have found to perform many of the same functions. We have no allegiance to any one product, so we would encourage you to explore a few of them to find what works (or doesn’t) for your context. The worst thing that can happen is that you try these out, they don’t work for your context, and you go back to doing what you want. In the process, though, positive or negative, you will hopefully have learned a little.

Conclusion

We feel it is very important for testers to evaluate how tools like this, combined with processes such as Session-Based Test Management, can help testing become a much more efficient activity. It is easy as testers to settle into a routine where we believe we have “figured it out,” so we stop exploring new tools, consciously or subconsciously, and new ways of approaching situations become lost to us. We hope you take our challenge above seriously, and at least try it. You are in charge of your own learning and of improving your skill-craft as a tester, so we would encourage you to try something new, even if you fail forward. This could bring immense value to your testing and team. Go for it. If you work in a more restrictive environment, then feel free to use the data we have gathered here as justification to your manager or team to try this out.

One of the biggest pitfalls in the testing community right now is over-documentation. Many testers will claim that test cases and other artifacts are required, but often they are not; it simply feels like they are. If you believe that heavily documented test cases and suites are required to do good testing, then you are locking yourself out of the reality that there are many other options, tools and methods. Do you think that asking a product owner to read through one hundred test cases would really inform them about how the product works, how you conducted your testing, and what the risks and roadblocks are? I would lean toward “No,” as a script is simply that, a script, not a testing story.

In this blog post, we told a story about how a tool alleviated documentation overhead within our context. This is a story based on our experience with that tool and the benefits it brought. While we feel this is very different from traditional test documentation, we feel it is a step in the right direction for us. But don’t just take our word for it; read this or this from Michael Bolton, or this from James Bach, or this from James Christie, or…you get the point.

A Personal Metric for Self-Improvement

Article revisions: Learning is continuous, thus my understanding of testing and related knowledge is continually augmented. Below is the revision history of this article, along with the latest version.

  • December 31, 2015, Version 1.0: Initial release.
  • March 31, 2016, Version 1.1: Most definitions reworded, multiple paragraph edits and additions, updated Excel sheet to calculate actual team average.
  • July 28, 2016, Version 1.2: Added sub-category calc. averaging (credit: Aaron Seitz) plus minor layout modifications.
  • September 20, 2016, Version 1.3: Replaced/reworded Test Strategy & Planning > Thoroughness with Modeling (verb) & Tools > Modeling with Models (noun).

Abstract: A Personal Metric For Self-Improvement is a learning model meant to be used by testers, and more specifically, those within software testing. Many times, self-improvement is intangible and immeasurable in the quantifiable way that we as humans seek to understand. We sometimes use this as an excuse, consciously or subconsciously, to remain stagnant and not improve. Let’s talk about how we can abuse metrics in a positive way by using this private measure. We will seek to quantify that which is only qualifiable, for the purpose of challenging ourselves in the sometimes overlooked areas of self-improvement.

Overview

I will kick this post off with a bold statement, and I stand by it: you cannot claim to do good testing if you believe that learning has a glass ceiling. In other words, learning is an endless journey. We cannot put a measurable cap on the amount of learning needed to be a good tester; thus we must continually learn new techniques, embrace new tools and study foreign ideas in order to grow in our craft. The very fact that software can never be bug-free supports this premise. I plan to blog about that later, in a post I am working on regarding mental catalysts. For now, though, let’s turn our attention back to self-improvement. In short, I am saying that since learning is unending, and better testing requires continual variation, the job of self-improvement can never be at an end.

This job can feel a bit intangible, almost like trying to hit a moving target with a reliable, repeatable process; therefore, we must be intentional about how we approach self-improvement so we can be successful. Sometimes I hear people talk about setting goals, writing things down or trying to schedule their own improvement through a cadence of book reads, coding classes or tutorial videos, perhaps. This is noble, because self-improvement does not simply happen, but many times we jump into the activity of self-improvement before we determine whether we’ve first focused on the right space. For example, a tester believes that they must learn how to code to become more valuable to their company, so they immediately dive into Codecademy classes. Did the tester stop to think…

Maybe the company I work for has an incomplete understanding of what constitutes ‘good testing’? After all, the term ‘good’ implies a value statement, but who is the judge? Do they know that testing is both an art and a science? I am required to consider these variables if I want to improve my testing craft. Does my environment encourage a varied toolset for testers, or simply the idea that anyone under the “Engineering” umbrella must ‘learn coding’ in order to add value?

Now, Agile (big “A”) encourages cross-functional teams, while I encourage cross-functional teams to the extent that it makes sense. At the end of the day, I still want a team of specialists working on my code, not a group of individuals who are slightly good at many things. Now, is there value in some testers learning to code? Yes, and here is a viewpoint with which I wholeheartedly agree. However, the point here, as it relates to self-improvement, is that a certain level of critical thinking is required to engage System 2 before this level of introspection can even take place. If this does not happen, then the tester may be focused on an unwarranted self-improvement endeavor that may be beneficial, but is not in service of the intentional purpose of ‘better testing’.

So, why create a metric?

This might be a wake-up call to some, but your manager is not in charge of your learning; you are. Others in the community have created guides and categories for self-improvement, such as James Bach’s Tester’s Syllabus, which is an excellent way to steer your own self-improvement. For example, I use his syllabus as a guide and rate myself 0 through 4 on each branch, where a zero is a topic in which I am unconsciously incompetent, and a four is a space in which I am consciously or perhaps unconsciously competent (see this Wikipedia article if you need clarification of those terms). I then compare my weak areas to the type of testing I do on a regular basis to determine where the major risk gaps are in my knowledge. If I am ever hesitant about rating myself higher or lower on a given space, I opt for the lower number. This keeps me from over-estimating my abilities in a certain area, and helps me stay intellectually humble on that topic. This self-underestimation tactic is something I learned from Brian Kurtz, one of my mentors.

The Metric

The personal self-improvement metric I have devised is meant to be used in a private setting. For example, these numbers would ideally not roll up to management as a way of evaluating whether you are a good or bad tester. These categories and ratings are simply created to give you a mental prompt in the areas you may need to work on, especially if you are in a team environment, as that requires honing soft skills too. However, you may have noticed that I have completely abused metrics here by measuring qualitative elements using quantitative means. This is usually how metrics are abused for more nefarious purposes, such as influencing groups of decision makers to take unwarranted actions. However, I am OK with abusing metrics in this case, since it is for my own personal and private self-improvement. Even though the number ratings are subjective, they mean something to me, and I can use these surrogate measures to continually tweak my approach to learning.

My main categories are as follows: Testing Mindset, Leadership, Test Strategy & Planning, Self-Improvement, Tools & Automation and Intangibles. To an extent, all of these have a level of intangibility, as we’re trying to create a metric by applying a number (quantitative) to an item that can only accurately be described in qualitative (non-numeric) terms. However, since this is intended for personal and private purposes, the social ramifications of assigning a number to these categories are negligible. The audience is one, myself, rather than hundreds or thousands across an entire division. Below is the resulting artifact, but you can download the Excel file as a template to use for yourself, as it contains the data, glossary of terms, sample tester ratings, sample team aggregate, etc.

personal-metric

Click here to download the current Microsoft Excel version

Application & Terms

Typically, you can use this for yourself or, if you manage a team of testers, privately with them. I would never share one tester’s radar graph with another, as that would defeat the purpose of having a private metric that can be used for self-improvement. The social aspects of this can be minimized in an environment where a shared sense of maturity and respect exists. You can also find the following terms and definitions in the “Glossary” tab of the referenced Excel sheet:

Testing Mindset:

  • Logic Process: ability to reason through problems in a way that uses critical thinking skills to avoid getting fooled.
  • User Advocacy: ability to put on the user hat, albeit biased, and test using various established consumer personas and scenarios (typically provided by Product Management), apart from the acceptance/expected pathways.
  • Curiosity: ability to become engaged with the product in a way that can and does intentionally supersede the intended purpose as guided by perceived customer desires (e.g., like a kitten with a new toy, yet also able to focus that interest toward high-value areas and likely risks within the product).
  • Technical Acumen: ability to explain to others, with the appropriate vocabulary, what kind of testing has been, is or is going to be completed or not completed.
  • Tenacity: ability to remain persistently engaged in testing the product, continually seeking risks related to the item under test.

Leadership:

  • Mentorship: ability to recognize areas of weakness within the larger team and train others accordingly to address these gaps.
  • Subject Matter Expertise: ability to become knowledgeable in both the product and the practice of testing, for the purposes of supporting the stakeholders’ desires as well as supplementing the information needs of other team members.
  • Team Awareness: ability to get and stay in touch with the two main wavelengths of the team, personal and technical, in order to adjust actions to alleviate testing roadblocks.
  • Interpersonal Skills: ability to work well with others on the immediate or larger teams in such a way that facilitates positive communication and allows for more effective testing, including the ability to convey product risks in a way that is appropriate.
  • Reliability: ability to cope through challenges, lead by example based on previous experiences and champion punctuality as well as support a consistent ongoing telling of the testing story to Product Management.

Test Strategy & Planning:

  • Attention to Detail: ability to create adequately detailed test strategies that satisfy the requirements of the stakeholders and the team.
  • Modeling: ability to convert your process into translatable artifacts, using continually evolving mental models to address risk and increase team confidence in the testing endeavor.
  • Three-Part Testing Story: ability to speak competently on the product status, the testing method and the quality of the testing that was completed for the given item under test.
  • Value-Add Testing Artifacts: ability to create testing artifacts (outlines, mind-maps, etc) that can be used throughout the overlapping development and testing phases, as well as support your testing story in your absence.
  • Risk Assessment: ability to use wisdom, which is the combination of knowledge, experience and discernment, to determine where important product risks are within the given item under test.

Self-Improvement:

  • Desire: ability to maintain an internal motivator that brings passion into the art of testing, for the purpose of supporting all other abilities.
  • Quality Theory: ability to support a test strategy with an adequate sum of explicit and tacit knowledge through the use of a varied tool belt: models, apps, techniques, etc., as well as maintaining a strong understanding of a tester’s role within the development lifecycle.
  • Testing Community: ability to engage with both the internal and external testing communities in a way that displays intellectual humility to the extent that it is required to share new ideas, challenge existing ones, and move testing forward.
  • Product Knowledge: ability to become a subject matter expert in your team’s area of focus such that you can better expose risk and provide value to product management.
  • Cross-Functionality: ability to learn and absorb skills from outside a traditional subset of standards-based/factory-style testing, such that you can use these new skills to enhance the team’s collective testing effort.

Tools & Automation:

  • Data: ability to interact with multiple types and subsets of data related to the product domain, such that testing can become a more effective way of exposing important risks, be it via traditional or non-traditional structures.
  • Scripting: ability to use some form of scripting as a part of the test strategy, when appropriate, to assist with learning about risks and informing beyond a traditional tool-less/primarily human-only approach to the testing effort, so that the testing completed is more robust in nature.
  • Programming: ability to write code in order to establish a deeper understanding of a product’s inner workings, to gain insight into why and how data is represented in a product, as well as to close the gap between tester and developer perspectives.
  • Exploratory-Supplement: ability to embrace tools that can enhance the effectiveness of testing, allowing for a decrease in traditional administrative overhead.
  • Models: ability to embrace new ways of thinking, including explicit testing models that are made available in the course of work, or via the larger community. Appropriate contextual models help to challenge existing biases, decrease the risk gap, and reshape our own mental paradigms for the purpose of adding value to the testing effort.

Intangibles:

  • Communication & Diplomacy: ability to discuss engineering and testing problems in a way that guides the team toward action items that are in the best interests of the stakeholders, without overpowering or harming team relationships.
  • Ability to Negotiate: ability to prioritize risks that pose a threat to perceived client desires, such that the interaction with product management allows for informing over gatekeeping and risk exposure over risk mitigation in the service of our clients.
  • Self-Starter: ability to push into avenues of learning for the sake of improving the testing craft without the need for external coaxing or management intervention. Ideally, this would be fueled by an ongoing discontent at the presence of unknown risks and gaps in learning.
  • Confidence: ability to display conviction in the execution of various test strategies, strategies that hold up to scrutiny when presented to the larger stakeholder audience for the purpose of informing product management.
  • Maturity & Selflessness: ability to distance oneself from the product in a way that allows for informing stakeholders and the team with proper respect. This is done in a way that distances us from the act of gatekeeping, by ensuring that our endeavor of serving the client supersedes our own agendas for the product.

The practical application of this is triggered when testers become introspective and self-critical about the areas mentioned within the spreadsheet. This can only be done when each area is studied in depth. I recommend that testers do an initial evaluation by rating themselves loosely on each category and subcategory, using the Glossary as a reference. These are my own guideline definitions for each term, against which you can rate yourself on a 0-4 scale. Your definitions of these words may be different, so treat these as my own. This calculation is of course a surrogate measure, meant only to be used as a rough estimate to determine areas for improvement. Once the areas that need the most attention have been identified (i.e., the lowest numbers that matter the most to your team or project), the tester would then seek out resources to assist with those areas: tutorial videos, books, online exercises, peer-mentorship, and others. Don’t forget to reach out both to your company’s internal testing community and to those who live in the online and testing conference space.
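
If you prefer a quick script over the Excel sheet, here is a minimal sketch of the same calculation in Python. The category and subcategory names come from the glossary above, and the averaging of subcategory scores into a category score mirrors what the spreadsheet does; the sample ratings themselves are invented purely for illustration.

```python
# A minimal sketch of the self-rating calculation, assuming the 0-4
# scale and glossary categories described above. The sample ratings
# below are invented for illustration only.

ratings = {
    "Testing Mindset": {
        "Logic Process": 3, "User Advocacy": 2, "Curiosity": 4,
        "Technical Acumen": 2, "Tenacity": 3,
    },
    "Tools & Automation": {
        "Data": 2, "Scripting": 1, "Programming": 1,
        "Exploratory-Supplement": 3, "Models": 2,
    },
    # ...the remaining categories (Leadership, Test Strategy & Planning,
    # Self-Improvement, Intangibles) would follow the same shape.
}

def category_averages(ratings):
    """Average each category's subcategory scores, as the sheet does."""
    return {cat: sum(subs.values()) / len(subs) for cat, subs in ratings.items()}

averages = category_averages(ratings)
for cat, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{cat}: {avg:.1f} / 4")

# The lowest-scoring categories are candidates for focused
# self-improvement -- provided they matter to your team or project.
weakest = min(averages, key=averages.get)
print(f"Start with: {weakest}")
```

Remember the tactic mentioned earlier: when hesitant between two ratings, record the lower one, so the output under-estimates rather than flatters you.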

Conclusion

Please remember, this metric is by no means a silver bullet, and these areas of focus are not meant to be used as a checklist, but rather as a guideline to help testers determine areas of weakness of which they may not be currently aware. Many times, we do not realize an area of weakness, or our own biases, until someone else points it out to us. I have found that a documented approach such as this can help me recognize my own gaps. As stated previously, this is most useful when applied privately, or between a tester and their manager in a one-on-one setting. This is only a surrogate measure that attempts to quantify that which is only qualifiable. Putting numbers on these traits is extremely subjective and for the purpose of catalyzing your own introspection. It is my hope that this helps give testers a guide for self-improvement in collectively advancing the testing craft.

Career Paths For Testers

Abstract: At the company where I work, testers have two main pathways they can pursue: non-technical and technical. The information presented in this post is written with our context in mind, but other organizations may also find this framework useful in structuring their testing community.

Overview

The career path options available to testers, in every organization of which I have been a part, have been somewhat shrouded and mystified to an extent. I have yet to discover that this is done intentionally or with any malicious intent; rather, the needs of the business change, and people learn to grow and adapt along with that. Sometimes it is a lack of forethought about the skillset of testing, but there are external influences from the larger community that also play a part. As things change over time, management should establish a deeper understanding by observing what is happening at the forefront of the industry, and adapt appropriately. The development, product management and other pathways can also be somewhat unclear in these organizations, so by no means am I claiming testers are victims of sorts; in fact, the onus is on testers, test leadership and management to push for change in this area for the purpose of making it more accessible. At my company, we have done that by making clear not only what the various titles are, but what roles and responsibilities come with each title.

In my view, a title is simply that: a guideline word that gives someone vague context around the roles you might fill.

We have an atmosphere where folks can experiment, learn from each other, and easily move between roles, and eventually titles, if they wish to change career trajectory, but we felt that making a clear distinction between the non-technical and technical pathways was beneficial in helping people decide where they wanted to land. We’ve also found that testers can use skewed titles when introducing themselves in group settings, which can sometimes be misleading for folks who are not involved in our day-to-day context. For example, a tester might claim “I am a Quality Assurance Engineer” when in fact that person means “Test Analyst,” which carries with it different implied responsibilities. Remember, the job of “Quality Assurance” belongs to the team, the division and the company as a whole, since no one person can “assure” quality. If interested, you can read more on that in “Testers, Get Out of the Quality Assurance Business” by Michael Bolton.

First, how are you defining the word “Technical”?

You can make the argument that any tester who makes use of tools is technical, and since a tool could be anything from a piece of software like Selenium to a mental model like Karen Johnson’s RCRCRC, any tester who knows how to use and apply that model is, by extension, technical. However, for the sake of this conversation, we’re going to use what the larger software industry means by technical when speaking of all roles, even outside of testing. This definition usually carries with it implications regarding coding skill, the ability to write automation, prowess with specific databases, and software tools that have traditionally been developer-related, such as IDEs, APIs, Git, etc. While I agree that being technical comprises more than just that limited subset, that is the operating definition I am using for the sake of this post.

Our Two Career Paths for Testers

Hopefully this information will give you some insight into the career development pathways available to testers within my organization. Testers have the option of pursuing either a non-technical or a technical career path, either of which may lead into management or lateral roles within the company. We want to make it clear that folks can grow their career in both ways, and that we’re not a shop that believes all testers must learn to code in order to be valuable to the company. Can coding provide benefit to a tester? Sure; read more on that in “At Least Three Good Reasons for Testers to Learn to Program” by Michael Bolton, but by no means is it required to do what we would consider good testing work. I even caution testers who claim “I want to code, so I can be a better tester” to make sure that belief is rooted in the mindset that good testing does not come from any specific practice, but rather from a paradigm that evaluates the use of multiple tools for the purpose of exposing risk and informing our stakeholders. Some testers believe that coding inherently means they will get a better title and thus more money, in which case I’d want to speak with them about developing a supporting mentality of critical thinking to pair with that. If you are at a company that says all testers must be able to code to be valuable, then you may not be at a company that fully understands that coding can be a part of testing, but does not define it.

Our technical path has two divisions within itself: an automation-focused one and a non-automation one. Some in the testing community would rather I say “tooling-focused” than “automation-focused,” but again, I am using the term that best speaks to the demographic I wish to reach at the time of writing this article. Titles and very brief descriptions of the groups are listed below. Beyond what is listed here are other roles that we wish to implement, such as Test Architect, Test Manager, etc., but I’ve decided to focus the scope of this post on our testers who are an immediate part of the co-located scrum teams, so architecture and management are typically outside of that. Below, I’ve listed the desired title first (the one that we’re applying to new hires) followed by our current title in parentheses (the one that we hope to sunset).

Non-Technical Path (exploratory/model focused)

These testers might have some SQL or scripting knowledge, but may not have the aptitude or desire to use code on a regular basis to enhance their testing work. This is fine, as you can be an amazing tester without generating code. Testers in this path focus more on generating test strategies and using mental models to aid in their exploratory testing endeavors. They will use what most think of as tools in the course of their work from time to time, but usually leverage exploration, rather than software, as their main means of testing. These testers can also use explicit test modeling to inform their thinking, as well as information from both the internal and external testing communities, in an attempt to apply it all to their team’s unique context.

  • Test Analyst (QA Analyst)
  • Senior Test Analyst (Sr. QA Analyst)
  • Lead Test Analyst (Lead QA Analyst)

Technical Path (exploratory/tool focused)

Testers in this group do much of what the non-technical group does, but with a twist. When it comes to exploring various tools and models of thinking, they seek to embrace tools in a more deliberate and ongoing way as part of their recurring testing strategy. These testers focus daily on how to leverage the technologies the business is already using, or other tools that may have a low barrier to entry, in order to assist in expanding product coverage in testing, including coding from time to time. They also assist in areas that a black-box tester might not: code reviews, architecture meetings, automation framework discussions, etc. A good technical tester is not a developer, but rather a tester who is simply more in tune with the needs of development, in both word and practice, for the purpose of exposing product risks.

  • Test Engineer (QA Engineer)
  • Senior Test Engineer (Senior QA Engineer)
  • Lead Test Engineer (Lead QA Engineer)

Technical Path (exploratory/development focused)

This group is primarily composed of testers who operate in a developer role to enhance product testing. These individuals create and maintain the automation within their release train and/or individual team. If an organization does not have automation-complete as part of its definition of done, then these testers typically reside external to the teams or float across the release train. The primary role of these folks is to work with the teams and the product management organization to establish where automation is most valuable to the business, then target those gaps to build a high value-add suite. These roles not only include sustaining the automation test suite(s) for the product, but could also include providing worthwhile metrics, coaching and mentoring teams on automation ownership, good coding practices, tagging taxonomy, etc.

  • Software Developer in Test (QA Automation Engineer)
  • Senior Software Developer in Test (Senior QA Automation Engineer)
  • Lead Software Developer in Test (Lead QA Automation Engineer)

Conclusion

Testers in any group can jump laterally to another group, provided they show the desire and aptitude; however, we typically want to find out early on what folks are interested in so that we are cultivating the right roles within our testers from day one. On the other hand, some people need to remain in a certain role for at least a few months to realize what they really do or don’t want, so we have to be willing to let them go through that self-discovery as well. We work in an environment where a Test Analyst on the non-technical career path can in fact do technical work if they feel it will enhance their team’s process and their own skill-craft; we do not stifle that. We simply encourage folks to find out which career path best suits the overlap between their own well-being and that of the company. Maybe Test Engineer is the universal title of the future, but this is where we, as a company, reside right now.

Finally, Seniors and Leads in both the technical and non-technical areas are responsible for cultivating the larger group of testers through community-of-practice meetings, roundtables, team mentoring, testing guilds and other activities. The more technical leaders would ideally be heavily involved in tech leadership meetings with lead developers, architects and designers, assisting in endeavors that take place across all teams and release trains. They also do this to ensure an ongoing community of practice exists for examining practices, tools, models, etc. Ultimately, at the end of the day, a title is less important than the actual work being done, and at our company, we don’t limit folks to expressing their potential only within the bounds of that title. We do ask, however, that they critically examine their strengths to determine whether they are operating in the most congruent role. Michael Bolton further clarifies roles in one of his blog posts, if you are interested in going deeper on that topic.

Don’t Lie On Your Resume, Ever.

Abstract: Don’t lie on your resume. Ever. If we don’t immediately shine light on damaging advice, then we cede our integrity to the misinformed.

A question was asked recently in the “SOFTWARE-TESTING” Yahoo group by someone seeking advice on how to better construct their CV/resume. A lot of tips started pouring in from the group, and all seemed relatively innocuous until one specific recommendation was posted. The recommendation pointed to an article suggesting there were appropriate times to lie on your resume, or as the post put it, “exaggerate” certain elements.

Here’s a quote from the post that was referenced:

“What should you do in case you have only 2+ years of AngularJS experience and want to apply for this job [requires 3yrs]? Realistically speaking you are the most experienced developer who is willing to work in Montreal with AngularJS. You should exaggerate your Angular experience and put 3 years on a resume. Otherwise there is a risk that your resume may be discarded as not meeting minimal requirement. Could you imagine all developers being honest and this agent getting no response to such job post? Nobody would benefit from such honesty. To make this agent and his client happy you need to lie on your resume. – [Author’s name removed]

Now, usually I credit my sources, but in this case, I will not link to the original article or post the author’s last name, as I believe people can change. If this person were ever to have a change of heart and take down the detrimental post in hopes for redemption among the community, then I would not want this post to stifle that. I did send a strongly worded reply, stated below:

“[Author’s name removed], I do not know your context, but I would steer anyone away from this logic. It is unethical, in my opinion and simply not necessary given the risks. I would strongly suggest NOT doing this. When should you lie on your resume? Never. If a company uses years of experience as a sole deciding factor for eliminating my resume, then I wouldn’t want to work there, so thank you for not wasting my time. I believe a good heuristic is simply to tell the truth 100% of the time. Putting “3 yrs” on a resume for someone that has “2 yrs” of experience is just asking for trouble. It can easily be challenged and then your professional reputation is at stake. Your name becomes tarnished in the industry as a liar. We are not in the business of misleading others, quite the opposite. This is not for me, but that’s just my two cents. – Connor”

But you don’t need to take my word for it, check this out.

Final thoughts

So, did my reply make an impact or simply bounce off? I'm not sure. Would I love to see that person take their article down for the sake of the community? Yes. On the other hand, do I also realize that people have to come to realizations on their own, sometimes learning through experience before they gain a visceral understanding of some topics? Yes. As soon as I saw this recommendation, I knew a reply from me was going to happen; it was simply a matter of time. This comes from a feeling of responsibility, a sense that the onus is on us to call these things out when we see them appear in the testing community or in the larger software community. If we don't immediately shine light on damaging advice, then we cede our integrity to the misinformed.

Episode VII: The Tester Awakens

pixelgrill-cdt-starwars

Click image to enlarge

Abstract: A brief blog post on being an intentionally awake tester. There’s a lot of misinformation out there, so be a critical thinker and avid learner to properly combat it.

It doesn't take much to live in the dark as a tester. In fact, you simply need to exist. Abide by the rules and listen to the establishment when they ask you not to question convention. Standards, certifications and rules are in place for a reason: to sell a structure. That structure might actually cost you money, sometimes in obvious ways, sometimes not so obvious. It ranges from convincing naive testers to fork over hard-earned money to complete a multiple-choice test that somehow guarantees them a place among the stars, to third-order measurements and metrics that are an expensive waste of resources and many times don't have a snowball's chance in hell of solving the original problem that was proclaimed. In fact, heavy metrics can be misleading and might even foster the creation of a few new problems, most likely in the social arena. Bad metrics and standards can establish a rift that winds through the company like a bad weed, choking the social atmosphere and corporate culture to the point where the fallout of losing good talent is not even shocking when it happens, as it has become an expected symptom of a broken galaxy. However, the part of the universe that makes up the context-driven testing community as a whole, dispersed across both public and private enterprise, is less broken and contains much goodness that can be absorbed.

Now, some testers, and even management, that push bad metrics and practices currently live in the dark simply because they have not been informed. These are testers who may in fact be critical thinkers but have been the subject of a bad environment and need to be "awakened". Testers who desire to improve their skill-craft; testers who join the community and contribute valuable insight; testers who care about their professional reputation; testers who take offense when someone tries to pass off shoddy work as complete; testers who believe we are actually servants to the stakeholder and not some middle-man gateway with imaginary powers to control the product, as if we knew better than product management; and the list goes on. The bottom line is this: when I hire a contractor to come over to my house to do some plumbing work and all he has in his tool belt is a hammer and a few nails, my confidence level may plummet. Are you testing with an empty tool belt? What explicit test models can you tell me you have studied and applied? What test strategy can you visually show me that will explain how you are going to move through the product as you test, exploring for value to the customer? Is it all in your head or do you have something tangible that can tell me a compelling story? If you don't have a good answer, then I probably won't have a lot of confidence in you filling the testing role on my software project.

Are you testing in the light or in the dark? Where do you reside? Do you seek to stay relevant in your understanding of how to be a competent tester? Are you intentionally learning through discussions with wiser folks in the community or are you on cruise control? Tap into the force, it's all around you…the testing community! I can guarantee that the universe will have no problem continuing to expand without your approval. Newsflash: your manager is not in charge of directing your learning; you are.

'If you want to solve complex problems, you need to make yourself more complex.' – Unknown

Aside: Yes, the Star Wars frame I used is from The Empire Strikes Back, before Luke really knew Vader was his father, which I considered, and posted anyway (testing points to Timothy Western, though, for reiterating it). I'll assume the reader can allow for some creative license, since they are dueling to the death while talking about testing. But seriously, if anyone in that galaxy were to push ISO standards, it'd definitely be Vader.

A Sprint Framework For Testers

A Sprint Framework For Testers

Click image to enlarge. Click here to download the X-Mind.

Abstract: A Sprint Framework For Testers is a brief outline of my suggested processes and practices employed by a Tester that resides within a software development scrum team, in an Agile environment. I have created this document with web-based product software teams in mind, but these practices and recommendations are not necessarily tied to a specific type of software, tester, platform or development environment. I say this simply to give you context into the formative process of this framework, but I believe these ideas have been generalized in a way that should be beneficial across many types of software testing. Having the ability to execute much of this relies on working within a healthy engineering culture, but Testers should also be intentionally employing practices like this to improve their own culture; and hopefully this sprint framework for testers can help with that.

Note: After a recent discussion on Twitter I decided to add this note. This model is in no way meant to be a prescriptive mandate on how to run your sprint, but rather a guide to help prime your thinking as you move through the various stages. Also, test cases may or may not fit into your current paradigm. If they do not, then be sure you have good reasons for that. Some are under the impression that being ‘context-driven’ means being anti-test cases, which is a fallacy. Writing scripted test cases requires a great amount of skill and may be necessary in your context, as I have found it in mine.

  1. Sprint Grooming
    1. Smaller Group: In the interest of efficient time usage, this should be composed of a small group, as this part of the process does not require the input of the entire team. A single Developer, Tester and Product Owner would be sufficient, or whichever small group is composed of the team members with the most product knowledge and the people who will be doing the hands-on work. Two Developers may be required if there is a large reach in the work being done between both backend and UI. It should be the exception, not the rule, that the whole team would need to be involved in the continual backlog grooming process.
    2. Use models (HTSM, RCRCRC or other Testing Mnemonics) to inform your thinking and team’s awareness of the potential vastness of acceptance criteria considerations.
    3. Models as Litmus Tests (for Story Acceptance):
      1. Using just a smaller part from an existing model (HTSM > Quality Criteria) can many times serve as a litmus test for which stories to bring into the sprint. Of course, business priorities and product management usually serve this role, ideally before it hits the team, but if they were more informed about the various considerations that need to be covered in the development process (Capability, Scalability, Charisma, etc.) then they may have prioritized stories differently. Use models from a high-level in this session to educate your Product Owners, Developers and other Testers on what it really means to accept a story.
  2. Sprint Planning
    1. Larger Group: At this point, it makes sense to have the whole team involved in planning. Now, it is debatable whether setting quantifiable estimates on user stories is a good or a bad thing, but in a general sense we can at least agree that having the full team in this session is beneficial from a knowledge standpoint when evaluating work load.
    2. Continue to use models to inform your team so that more solid estimates can be made. Remember, test models can be used to increase awareness for everyone, not just testers, providing more insight into potential product risks to the client.
      1. E.G. Bring up the HTSM > Quality Criteria page and have the developers actually discuss Usability, Scalability, Compatibility, etc. for a given story. I guarantee that it is impossible just to go through this one node of HTSM without it informing your team members’ thinking on development considerations and product risks.
    3. Decide (pre-development) which story/stories will be candidates for Shake ‘N’ Bake (Dev/QA pair-testing process) and then execute them when the time comes.
  3. Day 1 (of Sprint)
    1. Test Strategy creation via collaboration (with other team member(s) and time-boxed per story):
      1. Create the test strategy (not test cases yet) using a model as a template with the other team members (testers, devs, POs, etc.) in a time-boxed session. You'll have to decide what amount of time is reasonable for small, medium and large stories, but typically this is between 30 minutes and 2 hours.
      2. During this collaboration, I am seeking approval for the test direction in which I am headed, by evaluating cues from the other team member(s). I do not go into this thinking I know all the risks or proper priorities; otherwise the session is useless. The resident SME (Subject-Matter Expert) for a given story should see test strategies before they are turned into test cases.
      3. Good test strategies explain not only what we are testing, but also what we are not testing or cannot test.
        1. E.G. Load Testing on a given story might require someone who could write automation checks, but perhaps we do not have that resource available on the team or for the given timeline, so we intentionally make a note of that as a potential risk/gap in our test strategy.
        2. Coverage Reminder: Part of your test strategy involves telling stakeholders what you did and did not test, so be sure that is noted somewhere in your model/test suite creation.
      4. Time-Box:
        1. We time-box our test strategy creation session so that we can get the most bang for our buck and mitigate time constraints. Many times testers complain about not having enough time to test, but that is because they are simply trying to execute all of their test cases without having first created a prioritized test strategy.
        2. Now, in the interest of time management for the sake of the team, we probably cannot spend a whole day filling out the HTSM for one story, so if I have 5 stories, I might dedicate 1 to 1.5 hours to each story. You will need to decide what amount of time can be allotted per story based on your own team/testing capacity.
    2. Test Cases:
      1. Begin writing test plans/cases based on collaborative strategy (if you write your strategy correctly, then you should not have to recreate a lot of the foundation work during the test writing process – copy/paste is your friend)
      2. Automation Reminder: Be sure, early on in the sprint, ideally before the end of Day 1, to decide what can and cannot be automated. This will greatly prevent you from duplicating effort, or doing manual work in places that only make sense to do automation.
        1. NOTE: Automation may not be in your skill-set if you are a manual tester, but it should still be something of which you are aware and can help prioritize. This requires an automation strategy though (ask for our “HASM” model that deals exclusively with creation of automation strategies)
  4. Day 2
    1. By this point you should have already finalized or be finalizing your test strategies for any remaining stories.
      1. Continue to seek strategy approval from other team members, or SMEs outside of your team if others may have worked on the feature or something similar recently.
    2. Continue writing your test cases, making sure both they and your strategies are visible to all stakeholders, both in and out of the team (via tool, e.g. Jira, Rally, etc.)
  5. Day 3+
    1. Continue test case creation, mitigating time management concerns as dev complete approaches (be aware that this, or the Shake 'N' Bake stories, may be ready)
    2. Poll The Team (In-Sprint)
      1. Overview: Ask the team members what they are currently struggling with and find out what new information they have gathered since your sprint planning meeting. Typically this is the time when assumptions begin, and simply asking around can nip them in the bud.
      2. Developers: What roadblocks are you experiencing? What new information have you found since our planning session?
      3. Product Owners: How is the customer feeling? What new priorities have come in? Have there been any shifts in the customer’s thinking that might affect current sprint items?
      4. Scrum Masters: Is there anything I am doing that might be causing friction? Do you notice any personality conflicts or roadblocks that I can help keep an eye on/mitigate?
  6. At Dev-Complete
    1. Execute Shake ‘N’ Bake on-demand when dev says the previously-decided story is complete
      1. Perform pair-testing process on developer’s box with them, before they make their code commit.
      2. Note: Shake ‘N’ Bake does not take the place of the normal testing process within your sprint. It is done in addition to the testing process.
    2. Execute normal testing process for stories per Testing Process (see next section)
  7. Testing Process:
    1. Assign story to yourself (via Sprint Tracking software and/or Scrum Board)
    2. Notify team which story you are starting to test (sometimes this prompts other team members to speak up about something they have been keeping in their head, perhaps something they had not yet noted in the story/case)
    3. Verify Dev-Task Complete (Pre-Testing): Are unit tests complete and passing? If not, have discussion with Developer who worked on the story as this should be complete before the testing process begins.
    4. Execute test cases for a given story in your Dev/Team branch environment
      1. Do not test on the Developer’s machine via IP unless you are doing pre-code commit testing earlier on in the sprint. You should have an initial environment where all code commits live for testing.
    5. Log Dev tasks for any issues found, as you go.
      1. Do not wait until the end of your test run to log the sum of tasks. Many times, Devs can fix items as you test, without invalidating your current testing.
      2. Assign tasks back to the specific Dev who worked the item or make a comment in the team room about it (at team’s discretion, depending on existing workflow)
  8. Story Ready-For-Release or Production-Ready
    1. Verify DoD (Definition of Done) Completion: At this point, the Tester needs to close the loop on any other areas that the team has specified in their DoD
      1. This can include: Test Peer Review, Code Review, Unit Tests (code coverage %?), Documentation, Automation (in sprint, or delayed cadence?), Remaining task hours zeroed out, n-Sprints supported in production, Manual Testing, Owner Review, Product Review, Demo, etc.
    2. SME Review: After testing is complete (Devs have completed all tasks and they have been retested) I would ask the subject-matter expert for the story to take a look at it, within a self-imposed time window.
      1. E.G: Setting Expectations – If I finish testing on a Wednesday, I would say to the PO, “Testing is complete on this story. Please review the functionality by end of day Thursday and let me know if you have any concerns, otherwise I will mark this story as “Ready for Release”.
        1. This may necessitate an “Owner Review” column in your sprint tracking tool (post-Testing but pre-Ready For Release) that would be managed by SMEs (the PO in this case, but this could and probably should have rotating ownership as the SME chosen for a given story should be the one most qualified, not necessarily the PO).
  9. Release Prep & Planning
    1. Attend pre-release meeting (formal or otherwise) to verify that all items that are in the “Ready to Release” state have been through the proper channels (outlined above, and per Team’s DoD).
    2. Clearly communicate post-release coverage (i.e. List of those who will be present directly after the release for any nighttime or daytime releases)
    3. Verify that release items to be tested have been marked (via your tracking tool: Jira, Rally, Release Checklist, etc.)
      1. Targeting: Ideally you reach a point in your continuous delivery process where you trust your deployments enough that not all release items require production-time checking/testing. You should be targeting the high-risk/major-shift elements for production testing during your releases.
      2. Prioritization: This requires prioritization during the sprint of which items are high risk/high impact rather than trying to do this all at once at release time.
      3. Time Window: Items to be tested should be based on business priority of course, but evaluate release window time vs. amount of time needed to test items cumulatively.
        1. Time-to-Stories Ratio – In other words, if I have 12 stories and each takes 10 minutes to test, the full set would take 2 hours. However, our release window is 1 hour, so we should evaluate which stories need to bubble up to the top as our highest-risk items to merit production-time testing. (A small sketch of this arithmetic appears after this outline.)
      4. Establish Reversion Hypotheticals for each story (these should be in place before the release starts, not created on the fly during the release when they occur)
        1. Structure: If 'x' happens, then 'y' are the risks to the customer, so we recommend reverting code commits related to story 'z'.
          1. E.G. If the credit application will not submit in production, then lower conversion rates and lost financing revenue are the risks to the customer, so we recommend reverting code commits related to story #4567.
        2. Stories can have one or multiple reversion hypotheticals, depending on their complexity.
  10. Release & PVT Testing
    1. PVT (Production Validation Testing): This type of testing is done on the product in the production environment, to verify that it meets all functional and cosmetic requirements.
    2. Test new development: High risk/priority items only (per release checklist created earlier)
    3. Perform a basic smoke test (acceptance spot-checking) of related product areas, previous high-risk items, etc.
    4. Execute roll-back (if any hypothetical scenarios are satisfied), after discussions with the team/Product Owner:
      1. It is the tester's job to inform product management about risks caused by a given release, but at the end of the day we are NOT the gatekeepers. Other SMEs and management will have a higher view of what is best for the business from a risk mitigation perspective, so we can give our recommendation not to release something, but ultimately the go/no-go decision must come from product management.
  11. Post-Deployment & Monitoring
    1. This takes place within hours of the release/deploy, or during Day 1 of the following sprint.
    2. Performance systems (Splunk, NewRelic, etc.)
      1. Are there any new or unusual trends?
    3. Support Queue
      1. Are we noticing duplicate requests coming in from support teams?
    4. Team-level transparency on this can be hard, so this may require team ownership, not just the Tester.
  12. Release Retro
    1. Are you prepared to tell a compelling story about any caveats/prod defects that were found in the release?
      1. Where a "compelling story" means presenting your test strategy, including what was and was not tested. You should already have this created from earlier in the sprint process for each story, so minimal or no additional prep is needed.
    2. Is your attitude constructive rather than combative?
      1. Are you a listener and fixer or just a blamer?
      2. This includes being mindful of your speech: Your intention should be to make developers look good, by supporting their work with your testing. Be sure to compliment the solid work, before pointing out the faulty work.
  13. Team Retro
    1. Actionable Ideas: Arrive to the meeting with ideas on what can be modified (stop doing, start doing, do more, do less, etc.)
    2. Be very vocal in the team retros, but at the same time do it with tact and diplomacy.
    3. Poll The Team (Post Sprint):
      1. Overview: Ask the team members what they need from you, keeping in mind their context within the larger company. A Developer may ask you to be clearer about what you plan to test, while a Product Owner may want you to become more of an SME (Subject Matter Expert) in a given area.
      2. Developers: What more are you wanting out of me, your Tester? 
      3. Product Owners: What can I, your Tester, do to help make your job easier?
      4. Scrum Masters: Is there anything you are not getting from me, your Tester, that you need in order to increase team cohesion and efficiency?
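
To make the Time-to-Stories Ratio from the Release Prep step concrete, here is a minimal sketch in Python. The story IDs, risk scores and per-story test times are hypothetical stand-ins, and the greedy pick-highest-risk-first approach is just one reasonable way to fill a release window, not a mandated part of the framework:

    # A minimal sketch of the Time-to-Stories Ratio (step 9).
    # Story IDs, risk scores and per-story test times are hypothetical.

    def plan_release_testing(stories, window_minutes):
        """Pick the highest-risk stories that fit in the release window."""
        # Highest risk first; ties broken by shorter test time.
        ranked = sorted(stories, key=lambda s: (-s["risk"], s["minutes"]))
        selected, used = [], 0
        for story in ranked:
            if used + story["minutes"] <= window_minutes:
                selected.append(story["id"])
                used += story["minutes"]
        return selected, used

    # 12 stories at ~10 minutes each would take 2 hours to test,
    # but the release window is only 1 hour.
    stories = [{"id": f"#{4560 + i}", "risk": risk, "minutes": 10}
               for i, risk in enumerate([9, 7, 3, 8, 2, 5, 6, 1, 4, 9, 2, 5])]

    selected, used = plan_release_testing(stories, window_minutes=60)
    print(f"Production-time testing: {selected} ({used} of 60 minutes)")

Stories that do not make the cut still get their reversion hypotheticals; they simply are not re-tested during the release window itself.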

As a professional skeptic and keen observer of human nature, it is incumbent upon me to request and consider feedback from the community on this work. My goal is to give Testers something they can immediately apply; however, given the various contexts in which each of us works, it would be foolhardy to think that this framework could apply exactly to every situation. Instead, I encourage you to treat this as a guideline to help improve your day-to-day process, and focus on the parts that help fill your current gaps. Please leave a comment, and let's start a dialog, as I would appreciate your insight into which parts are most meaningful and provide the greatest value-add in your own situation.

CAST 2015: Distilled

Brian Kurtz and I recently traveled to Grand Rapids, Michigan to attend CAST 2015, a testing conference put on by AST and other members of the Context-Driven Testing (CDT) community. I was rewarded in a myriad of ways, such as new ideas, enhanced learning sessions, fresh models, etc., but the most rewarding experience from the conference lies in the people and connections made. The entire CDT community currently lives on Twitter, so if you are new to testing or not involved in social media, I would recommend that you begin there. If you are looking for a starting point, check out my Twitter page here, Connor Roberts – Twitter, and look at the people I am following to get a good idea of who some of the active thought leaders are in testing. This community does a good job on Twitter of keeping the information flow clean and in general only shares value-add information. In keeping with that endeavor, it is my intention with this post to share the shining bits and pieces that came out of each session I attended. I hope this is a welcome respite from the normal process of learning, which involves hours of panning for gold in the riverbanks only to reveal small shining flakes from time to time.

Keep in mind, this is only a summary of my biased experience, since the notes I take mainly focus on what I feel was valuable and important to me based on what I currently know or do not know about the sessions I attended at the conference. My own notes and ideas are also mixed in with the content from the sessions, as the speaker may have been triggering thoughts in my head as they progressed. I did not keep track or delineate which are their thoughts and which are my own as I took notes.

It is also very likely that I did not document some points that others might feel are valuable, as the way I garner information is different than how they would. Overall, the heuristic that Brian and I used was to treat any of the non-live sessions as a priority since we knew the live sessions would be recorded and posted to the AST YouTube page after the conference. There are many other conferences that are worthwhile to attend, like STPCon, STAR East/West, etc. and I encourage testers to check them out as well.

 

Pre-Conference Workshop:

“Testing Fundamentals for Experienced Testers” by Robert Sabourin

Web: AmiBug.com, Email: rsabourin@amibug.com, robsab@gmail.com

Slide Deck: http://lets-test.com/wp-content/uploads/2014/06/2014_05_25_Test_Fundementals.pdf

Session Notes:

  • Conspicuous Bugs – Sometimes we want users to know about a problem.
    • E.G. A blood pressure cuff is malfunctioning so we want the doctor to know there is an error and they should use another method.
  • Bug Sampling: Find a way to sample a population of bugs, in order to tell a better story about the whole.
    • E.G. Take a look at the last 200 defects we fixed and categorize them, in order to get an idea of where product management believes our business priorities are.
  • Dijkstra’s Principle: “Program testing can be used to show the presence of bugs but not their absence.”
    • E.G. We should never say to a stakeholder, “This feature is bug-free”, but we can say “This feature has been tested in conjunction with product management to address the highest product risks.”
  • “The goal is to reach an acceptable level of risk. At that point, quality is automatically good enough.” – James Bach
  • Three Quality Principles: Durable, Utilitarian, Beautiful
    • Based on the writings of Vitruvius (De architectura, a book on architecture and design still referenced today)
  • Move away from centralized system testing, toward decentralized testing
    • E.G. Facebook – Pushed new timeline to New Zealand for a month before releasing it to the world
  • Talked about SBTM (Session-Based Test Management): Timebox yourself to 60 minutes, determine what you have learned, then perform subsequent sessions by iterating on the previous data collected. In other words, use what you learn in each timeboxed session to make the next timeboxed session more successful.
  • Use visual models to help explain what you mean. Humans can interpret images much quicker than they can read paragraphs of text. Used a mind map as an example.
    • E.G. HTSM with subcategories and priorities
  • Try to come up with constructive, rather than destructive, conversational models when speaking with your team/stakeholders.
    • E.G. Destructive: “The acceptance criteria is not complete so we can’t estimate it”
    • E.G. Constructive: "Here's a model I use [show HTSM] when I test features. Is there anything from this model that might help us make this acceptance criteria more complete?"
  • Problem solving: We all like to think we're excellent problem solvers, but we're really only ever good problem solvers in a couple of areas. Remember, your problem solving skill is linked to your experience. If your experience is shallow, your problem solving skill will lack variety.
  • Heuristics (first known use 1887): Book “How To Solve It” by George Pólya.
  • Be visual (models, mind maps, decisions charts)
  • If you don’t know the answer then take a guess. Use your knowledge to determine how wrong the first guess was, and make a better one. Keep iterating until you reach a state of “good enough” quality.
  • Large problems: Solve a smaller, similar problem first, then try to use that as a sample to generalize, so you can make a hypothesis about the larger problem's solution.
  • Decision Tables (a mathematical approach using boolean logic to express testing pathways to stakeholders – see slide deck, and the small sketch after these notes)
  • AIM Heuristic: Application, Input, Memory
  • Use storyboarding (like comics) to visualize what you are going to test before you write test cases
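
Since decision tables came up, here is a minimal sketch of the idea in Python; the conditions and the business rule below are hypothetical, invented purely to show how boolean combinations enumerate testing pathways (see Rob's slide deck for his actual treatment):

    from itertools import product

    # Hypothetical conditions for a login feature. Each combination of
    # True/False values is one row of the decision table, i.e. one pathway.
    conditions = ["valid_user", "valid_password", "account_locked"]

    def expected_action(valid_user, valid_password, account_locked):
        # Hypothetical business rule filling in the table's action column.
        if valid_user and valid_password and not account_locked:
            return "grant access"
        return "deny access"

    print(" | ".join(conditions) + " | action")
    for row in product([True, False], repeat=len(conditions)):
        print(" | ".join(str(v) for v in row) + " | " + expected_action(*row))

Eight rows fall out of three conditions; the table makes it easy to show stakeholders exactly which pathways are covered.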

Conference Sessions:

“Moving Testing Forward” by Karen Johnson (Orbitz)

Session Notes:

  • Know your shortcomings: Don’t force it. If you don’t like what you do, then switch.
    • E.G. Karen moved from Performance testing into something else because she realized that, even while she liked the testing, she was not very mathematical, which is needed to become an even better performance tester.
  • Avoid working for someone you don’t respect. This affects your own growth and learning. You’ll be limited. Career development is not something your boss gives you, it is something you have to find for yourself.
  • Office politics: Don’t avoid, learn to get good at how to shape and steer this. “The minute you have two people in a room, there’s politics.”
  • Networking: Don’t just do it when you need a job. People will not connect with you at those times, if you have not been doing it all the other times.
  • Don’t put people in a box, based on your external perceptions of them. They probably know something you don’t.
  • Don’t be busy, in a corner, just focused on being a tester. Learn about the business, or else you’ll be shocked when something happens, or priorities were different than you “assumed”. Don’t lose sight of the “other side of the house”.
  • Balancing work and personal life never ends, so just get used to it, and get good at not complaining about it. Everyone has to do it, and it will level out in the long term. Don’t try to make every day or week perfectly balanced – it’s impossible.
  • Community Legacy: When you ultimately leave the testing community, which will happen to everyone at some point, what five things can you say you did for the community? Will the community have been better because you were in it? This involves interacting with people more than focusing on your process.
  • Be careful of idolizing thought leaders. Challenge their notions as much as the person’s next to you.
  • Goals: Don’t feel bad if you can’t figure out your long term goals. Tech is constantly changing, thus constant opportunities arise. In five years, you may be working on something that doesn’t even exist yet.
  • If your career stays in technology, then the cycle of learning is indefinite. Get used to learning, or you'll just experience more pain resisting it.
  • Watch the "Test Is Dead" talk from Google, 2011.
  • Five years from now, anything you know now will be “old”. Are you constantly learning so that you can stay relevant?
  • Be reliable and dependable in your current job, that’s how you advance.
    • Act as if you have the title you want already and do that job. Don’t wait for someone to tell you that you are a ‘Senior’ or a ‘Lead’ before you start leading. Management tasks require approval, leadership does not.
  • Care about your professional reputation, be aware of your online and social media presences. If you don’t have any, create them and start fostering them (Personal Website, Twitter for testing, etc.)

“Building A Culture Of Quality” by Josh Meier

Session Notes:

  • Two types of culture: Employee (ping pong tables) vs. Engineering (the way we ‘do’ things), let’s talk about the latter (more important)
  • Visible (Environment, Behaviors) vs. Invisible (Values, Attributes)
  • "A ship in port is safe, but that's not what ships are built for." – Grace Hopper
  • Pair Tester with Dev for a full day (like an extended Shake And Bake session)
  • When filing bug reports, start making suggestions on possible fixes. At first this will be greeted with "don't tell me how to do my job", but eventually it will be welcomed as a time saver; for Josh, this morphed into the developers asking him, as a tester, to sign off on code reviews as part of their DoD (Definition of Done).
  • Begin participating in code-reviews, even if non-technical
  • *Ask for partial code, pre-commit before it is ready so you can supplement the Dev discussions to get an idea of where the developer is headed.
  • *Taxi Automation – Scripts that can be paused, allowing the user to explore mid-way through the checks; the checks then continue based on the exploration work done.

“Should Testers Code” (Debate format) by Henrik Anderson and Jeffrey Morgan

My Conclusion: Yes and No. No, because value can be added without becoming technical; however, if your environment would benefit from a more technical tester and it’s something you have the aptitude for, then you should pursue it as part of your learning. If you find yourself desiring to do development, but in a tester role, then evaluate the possibility that you may wish to apply for a developer position, but don’t be a wolf in sheep’s clothing; that does the product and the team a disservice.

Session Notes:

  • It takes the responsibility of creating quality code off the developer if testers start coding (Automation Engineers excluded)
  • Training a blackbox tester for even 1 full hour per day for 10 months cannot replace years of coding education, training and experience. This is a huge time-sink, with the creation of a Jr. Dev as the best-case scenario.
  • The mentality that all testers should code comes from a lack of understanding about how to increase your knowledge in the skill-craft of testing. Automation is a single tool, and coding is a practice. If you are non-technical, work on training your mindset, not trying to become a developer.

My Other Observations:

  • Do you want a foot doctor doing your heart surgery? (Developers spending majority time testing, Testers spending majority time developing?)
  • People who say that all testers should code do not truly understand that quality is a team responsibility; they treat it as only a developer's responsibility. Those who hold this stance, consciously or subconsciously, have a desire to make testers into coders, and only "then" will quality be the testers' responsibility, because they will then be in the right role/title. Making testers code is just a sly way of saying that a manual exploratory blackbox tester does not add value, or at least enough value, to belong on my team.
  • By having this viewpoint, you are also saying that you possess the sum of knowledge of what it means to be a good tester, and have reached a state of conscious competence in testing sufficient to claim that your determination of what a "tester" is, is not flawed.
  • The language we have traditionally used in the industry is what throws people off. People see the title "Quality Assurance" and think that only the person with that title should be in charge of quality, but this is a misnomer. We cannot claim that the team owns quality and then say that it is the tester's responsibility to be sure the product in production is free from major product risks. Those are opposing viewpoints, neither of which addresses testing.
  • Developers should move toward a better understanding of what it takes to test, while Testers should move toward a better understanding of what it takes to be a developer. This can be accomplished through collaborative/peer processes like Shake And Bake.
  • I believe that these two roles should never fully come together and be the same. We should stay complex and varied. We need specialists, just like complex machines have specialized parts. The gears inside a Rolex watch cannot do the job of the protective glass layer on top. Likewise, the watch band cannot do the job of keeping time, nor would you want it to. Variety is a good thing, and attempting to become great at everything makes you only partially good at any one thing. Brands like Rolex and Bvlgari have an amazingly complex ecosystem of parts. The more complex a creation, the more elegant its operation and output will be.
  • Just as the 'wisdom of the crowd' can help you find the right answer (see session notes below from the talk by Mike Lyles), the myth of group reasoning can equally bite you. For example, a bad idea left unchecked in a given environment can propagate foolishness. This is why the role of the corporate consultant exists in the first place. In regards to testing organizations, keep in mind that just because the industry heads in a certain direction, it does not mean that is the correct direction.

 

“Visualize Testability” by Maria Kedemo

Webcast: https://www.youtube.com/watch?v=_VR8naRfzK8

Slide Deck: http://schd.ws/hosted_files/cast2015/f3/Visualizing%20Testability%202015%20CAST.pdf

Session Notes:

  • Maria talked about the symptoms of low testability
    • E.G. When Developers say, "You'll get it in a few days, so just wait until then," this prevents the Tester from making sure something is testable, since they could be sitting with the Devs as they get halfway through it, giving them ideas and helping steer the coding (i.e. baking the quality into the cake, instead of waiting until after the fact to dive in)
  • Get visibility into the 'code in progress', not just when it is committed at code review time (similar to what Josh Meier recommended; see other session notes above)
  • Maria presented a new model: Dimensions of Testability (contained within her slide deck)

 

“Bad Metric, Bad” by Joseph Ours

Email: joseph.ours@centricconsulting.com, Twitter @justjoehere

Slide-deck: http://www.slideshare.net/qaoth/bad-metric-bad-45224921

Session Notes:

  • Make sure your samples are proper estimates of the population
    • I tweeted: “If you bite into a BLT, and miss the slice of bacon, you will estimate the BLT has 0% bacon”
  • Division within Testing Community (I see a visual/diagram that could easily be created from this)
    • 70% uneducated
    • 25% educated
    • 5% CDT (context-driven testing) educated/aware

 

“The Future Of Testing” by Ajay Balamurugadas

Webcast: https://www.youtube.com/watch?v=vOfjkkblFoA

Session Notes:

  • My main takeaway was about the resources available to us as testers.
    • Ministry of Testing
    • Weekend Testing meetups
    • Skype Face-to-face test training with others in the community
    • Skype Testing 24/7 chat room
    • Udemy, Coursera
    • BBST Classes
    • Test Insane (holds a global test competition called 'War With Bugs', with cash prizes)
    • Testing Mnemonics list (pick one and try it out each day)
    • SpeakEasy Program (for those interested in doing conventions/circuits on testing)
  • Also talked about the TQM Model (Total Quality Management)
    • Customer Focus, Total Participation, Process Improvement, Process Management, Planning Process, etc.
  • Ajay encouraged learning from other industries
    • E.G. Medical, Auto, Aerospace, etc., by reading about testing or product risks found in those industries on news sites. They may have applicable information that applies here.
  • “You work for your employer, but learning is in your hands.” (i.e. Don’t wait for your manager to train you, do it yourself)
  • Talked about the AST Grant Program – helps with PR, pay for meetups, etc.
  • Reading is nice, but if you want to become good at something, you must practice it.
  • Professional Reputation – do you have an online testing portfolio?
    • On a personal note: he got me on this one. I was in the process then of getting my personal blog back up (which is live now), but I also plan to put up some screen recordings of how I test in various situations: what models I use, how I use them, why I test the way I do, how to reach a state of 'good enough' testing where product risks are mitigated or only minimal ones remain, how to tell a story to our stakeholders about what was and was not tested, understanding metrics use and misuse, etc.
  • “Your name is your biggest certificate” – Ajay (on the topic of certifications)

 

“Reason and Argument for Testers” by Thomas Vaniotis and Scott Allman

Session Notes:

  • Discussed Argument vs Rhetoric
    • Argument – justification of beliefs, strength of evidence, rational analysis
    • Rhetoric – literary merit, attractiveness, social usefulness, political favorability
  • They talked about making conclusions based on premises. You need to make sure your premises are sound before you try to make a conclusion based solely on conjecture that only 'sounds' good on the surface.
  • Talked about language – all sound arguments are valid, but not all valid arguments are sound. There are many true conclusions that do not have sound arguments. No sound argument will lead to a false conclusion. (See the small validity-check sketch after these notes.)
  • Fallacies (I liked this definition) – a collection of statements that resemble arguments, but are invalid.
  • Abduction – forming a conclusion in a dangerous way (avoid this by ensuring your premises are sound)
  • Use Safety Language (Epistemic Modality) to qualify statements and make them more palatable for your audience. You can reach the same outcome and still maintain friendships/relationships.
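
To make the valid-versus-sound distinction tangible, here is a small sketch of my own (not from the session) that brute-forces truth assignments to check an argument form for validity; soundness would additionally require the premises to actually be true:

    from itertools import product

    def is_valid(premises, conclusion, n_vars):
        """Valid: no truth assignment makes all premises true
        while the conclusion is false."""
        for values in product([True, False], repeat=n_vars):
            if all(p(*values) for p in premises) and not conclusion(*values):
                return False
        return True

    # Modus ponens: from (p implies q) and p, conclude q. Valid.
    print(is_valid([lambda p, q: (not p) or q, lambda p, q: p],
                   lambda p, q: q, 2))   # True

    # Affirming the consequent: from (p implies q) and q, conclude p. Invalid.
    print(is_valid([lambda p, q: (not p) or q, lambda p, q: q],
                   lambda p, q: p, 2))   # False

The second form is a good example of the fallacy definition above: it resembles an argument, but it is invalid.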

My conclusions:

  • This was really a session on psychology in the workplace, not limited to testers, but it was a good reminder on how to make points to our stakeholders if we want to convince them of something.
  • If you work with people you respect, then you should realize that they are most likely speaking with the product's best interests at heart, at least from their perspective, and are not out to maliciously attack you personally. You can avoid personal attacks by speaking from your own experience. Instead of saying "That's not correct, here's why…", you can say "In my experience, I have found X, Y and Z to be true, because of these factors…" In this way you will make the same point, without the confrontational bias.
  • If you want to convince others, be Type-A when dealing with the product, but not when dealing with people. Try to separate the two in your mind before going into any conversation.

“Visual Testing” by Mike Lyles

Twitter @mikelyles

Slide-deck: http://www.slideshare.net/mikelyles/visual-testing-its-not-what-you-look-at-its-what-you-see

Session Notes:

  • This was all about how we can be visually fooled as testers. Lots of good examples in the slide-deck, and he stumped about half of the crowd there, even though we were primed about being fooled.
  • Leverage the Wisdom of the Crowd: Mike also did an exercise where he held up a jar of gum balls and asked us how many were inside. One person guessed 500, one person guessed 1,000. At that point our average was 750. Another person guessed 200, another 350, another 650, another 150, etc., and this went on for a while until we had about 12 to 15 guesses written down. The average of the guesses came out to around 550, and the total number of gum balls was actually within 50-100 of this average. The point Mike was making was that leveraging the wisdom of the crowd to make decisions is smarter than trying to go it alone or relying on smaller subsets/sources of comparison. Use the people in your division, around you on your team and even in the testing community at large to make sure you are on the right track and moving toward the most likely outcome that will better serve your stakeholders. (A tiny sketch of this averaging appears after these notes.)
    • This involves an intentional effort to be humble, and realize that you (we) do not have all the answers to any given situation. We should be seeking counsel for situations that have potentially sizable product impacts and risks, especially in areas that are not in our wheelhouse.
  • Choice Blindness: People will come up with convincing reasons for taking a certain set of actions based on things that are inaccurate or that never happened.
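
As a tiny sketch of the gum ball exercise (the guesses below are hypothetical, reconstructed to resemble the ones called out in the room), the arithmetic behind the wisdom of the crowd is just the mean of many independent guesses:

    from statistics import mean, median

    # Hypothetical guesses, similar to those shouted out during the session.
    guesses = [500, 1000, 200, 350, 650, 150, 700, 450, 800, 300, 600, 550]
    crowd_estimate = mean(guesses)          # ~520 for this sample
    truth = 520                             # hypothetical actual count

    print(f"Crowd estimate: {crowd_estimate:.0f} (error {abs(crowd_estimate - truth):.0f})")
    print(f"Median individual error: {median(abs(g - truth) for g in guesses):.0f}")

For this sample the crowd's error is far smaller than the typical individual's, which is the whole point of polling the room before deciding alone.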

 

“Using Tools To Improve Testing: Beyond The UI” by Jeremy Traylor

Slide-deck: http://schd.ws/hosted_files/cast2015/a5/Alt%20Beyond%20the%20UI%20-%20Tools.pptx

Session Notes:

  • Testers should become familiar with more development-like tools (e.g. Browser Dev Tools, Scripting, Fiddler commands, etc.)
  • JSONLint – a JSON validator
  • Use Fiddler (Windows) or Charles (Mac)
    • Learn how to send commands through this (POST, GET, etc.) and not just use it to only monitor output.
  • API Testing: Why do this?
    • Sometimes the UI is not complete, and we could be testing sooner and more often to verify backend functionality
    • You can test more scenarios than simply testing from the UI, and you can test those scenarios quickly if you are using scripts to hit the API rather than manual UI testing (see the short sketch after these notes).
      • Some would argue that this invalidates testing since you are not doing it how the user is doing it, but as long as you are sending the exact input data that the UI would send, I would argue this is not a waste of time and can expose product risks sooner rather than later.
    • Gives testers better understanding of how the application works, instead of everything beyond the UI just being a ‘black box’ that they do not understand.
    • Some test scenarios may not be possible in the UI. There may be some background caching or performance tests you want to do that cannot be accomplished from the front end.
    • You can have the API handle simple tasks rather than rely on creating front-end logic conversions after the fact. This increases testability and reliability.
  • Postman (Chrome extension) – this is a backend HTTP testing tool with a nice GUI/front-end. It helps decrease the barrier to entry for testers who may be firmly planted in the blackbox/manual-only world and want to increase their technical knowledge to better help their team.
  • Tamper Data (addon for Firefox) – can change data while it is en route, so you can better simulate Domain testing (positive/negative test scenarios).
  • SQL Fiddle – This is a DB tool for testing queries, scripts, etc.
  • Other tools: SOAPUI, Advanced Rest Client, Parasoft SOAtest, JSONLint, etc.
  • Did you know that the "GET" command can be used to harvest data (PII, user information, etc.)? Testers, are you checking this? (HTSM > Quality Criteria > Security). However, "GET" can 'lie', so you want to check the DB to make sure the data it returns is actually true.
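
To lower that barrier to entry a bit further, here is a minimal sketch of testing beyond the UI with Python's requests library. The endpoint, payload and status codes are hypothetical stand-ins; the same checks apply whether you drive them from Postman, Fiddler or a script:

    import requests

    BASE = "https://example.test/api"  # hypothetical endpoint

    # POST the exact input data the UI would send, then read it back.
    payload = {"applicant": "Jane Doe", "amount": 15000}
    resp = requests.post(f"{BASE}/credit-applications", json=payload, timeout=10)
    assert resp.status_code == 201, f"unexpected status: {resp.status_code}"
    app_id = resp.json()["id"]

    # GET can 'lie' relative to the system of record, so a real test would
    # also verify persistence at the DB level; here we at least check the echo.
    resp = requests.get(f"{BASE}/credit-applications/{app_id}", timeout=10)
    assert resp.json()["amount"] == payload["amount"]

    # A negative/domain test the UI would never send: an impossible amount.
    resp = requests.post(f"{BASE}/credit-applications",
                         json={"applicant": "Jane Doe", "amount": -1}, timeout=10)
    assert resp.status_code in (400, 422), "API accepted invalid input"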

My conclusions:

  • Explore what works for you and your team/product, but don’t stick your head in the sand and just claim that you are a manual-only tester. You have to at least try these tools and make a genuine effort to use them for a while before you can discount their effectiveness. Claiming they would not work for your situation or never making time to explore them is the same as saying that you wish to stay in the dark on how to become a better tester.
  • Since Security testing is not one of my fortes, I personally would like to become a better whitebox hacker to aid my skill-craft as a tester. This involves trying to game the system and expose security risks, but for noble purposes. Any found risks then go to help better inform the development team and are used to make decisions on how the product can be made more secure. Since testers are supposed to be informers, this is something I need to work on to better round out my skill-set.

 

“When Cultures Collide” by Raj Subramanian and Carlene Wesemeyer

Session Notes:

  • Raj and Carlene spent the majority of the time talking about communication barriers such as differences in body language, the limitations of text-only communication (chat or email), as well as assumptions made by certain cultures about others, regardless of whether they share the same culture.
  • Main takeaway: Don’t take a yes for a yes and a no for a no. Be over-communicative if necessary to ensure that the expectations you have in your head match what they have in their head.

 

Conclusion:

I hope that my notes have helped you in some way, or at the very least exposed you to some new ideas and knowledgeable folks in the industry from whom you can learn. Please leave comments here on what area you received the most value from or what needs clarification. Again, these are my distilled notes from the four days I was there, so I may be able to recall more or update this blog if you feel one area might be lacking. If you also went to CAST 2015, and any of the same sessions, then I'd love to hear your thoughts on any important points I may have overlooked that would be beneficial to the community.

The Improvement Continuum


Abstract: The Improvement Continuum is a dual-pronged concept, containing both product and personal components, like two heads on the same animal. One head pertains to improvement within a solution, product or service, while the other concerns itself with the human mind, particularly our capacity for learning. This idea states that a viable candidate can never reach a point at which it legitimately plateaus in quality. Therefore, by extension, any perceived quality plateau, intentional ones aside, must be a product of human misunderstanding or mis-measurement of the current state, rather than a shortcoming of the candidate itself. For the purpose of this article, the term "candidate" refers either to a solution, product or service that is currently being used in the market, or to a human's individual capacity to increase mental operational quality through learning, provided that human is not otherwise inhibited by a medical condition. In other words, both products and humans have the capacity for improvement.

 

Introduction:

It is widely accepted that a product or service can never reach a point at which it should intentionally arrive at a plateau in quality, unless that particular solution has been sunset. There's an unseen and undiscussed black hole where sunset products go to die, but to avoid it, improvements must continually be invented, prioritized and implemented. This may seem like an obvious statement, but what's not so obvious is how to move the entity forward.

In software development, an increase in quality does not simply mean finding and fixing bugs, as that is only one facet of how the product can be improved.

When evaluating the product as a whole, we should decide as a team which areas need the most focus. What areas of improvement is your division good at? Which areas may have been neglected? Do your efforts to improve as a tester align with the current product risk areas? First, we must establish that candidacy for improvement actually exists. In other words, is improvement warranted for a given feature, perhaps on at least a quarterly cadence, or, more rarely, for the product as a whole, perhaps on at least an annual evaluation schedule?

 

Determining Improvement Candidacy:

This section only applies to solutions (product and service offerings), not humans, since the latter is a candidate, inherently and indefinitely.

As long as a product or service is a viable offering, its candidacy for improvement remains active. Even software solutions in maintenance-only mode, with a sunset timeline, are still candidates for being improved upon until EOL (the End-of-Life date). Keep in mind, just because something is a candidate for improvement, that does not mean improvement on that solution is mandatory. To understand this, we must look at the four types of improvement candidates that are abandoned.

 

Abandoned Candidates (Four Types):

Improvement candidates can be abandoned intentionally or unintentionally. How we handle these situations says a lot about our maturity as testers. This can be positive or negative, depending on how the abandonment was implemented. If you find yourself in this situation as a tester, take a moment of pause before getting upset. Try to realize that there's a bigger picture outside of you, and that the idea that we are the sole 'gatekeepers' of the product is archaic. Michael Bolton better frames this point in his blog post here: Testers: Get Out of the Quality Assurance Business. Understanding why a given product is or is not getting attention can greatly help you do your job as a tester. It is easy to become disenfranchised with your product offering if you do not understand the business reasons behind the work being given to your team. Use the criteria below to be more informed about which camp your product lives in.

  1. Warranted Intentional: The term 'intentionally abandoned improvement candidate' sounds like it'd always be a negative, but this can in fact be a sound business decision, thus warranted. In this case, product and upper management within the business have compared the risks of abandoning improvement from both a financial and a reputation perspective. There are many sub-level considerations that play into each of these. Perhaps the revenue generated by the given product is negligible and efforts would be better focused on another solution or direction. Perhaps the known backlog of issues has been evaluated from a product risk perspective and deemed low-impact, and shoring up this backlog to maintain industry reputation would reside within the realm of diminishing returns.
  2. Unwarranted Intentional: Like the first type, management did at least make an intentional effort to get together and discuss the business needs, but due to a variety of reasons, of which only a small oligarchy may be aware, the product offering has been abandoned without justified cause. Unfortunately, an unwarranted abandonment of a given product offering usually does not come with much transparency down to the teams. It may not even be a top-down decision, so tech leads and architects may have been involved, but it is possible that an unwarranted abandonment still took place in favor of a different alternative. Many times, unwarranted abandonment cannot be proven by those opposing the decision until it is too late. For example, the new offering craters after a few years in the market, while the previous solution would still be thriving. In this case, it may not be a matter of an oligarchy making the decisions, or a lack of transparency, but rather a major miss on industry expertise and demand forecasting. This can sometimes plague startups that hire amazing talent with incredible knowledge and good intentions, but lacking the industry wisdom gained through experience.
  3. Warranted Unintentional: This is similar to the first type, except that external factors forced the abandonment. In the case of an innovative idea that does have a market, this usually only happens when a major mistake is made that threatens our humanity. For example, public exposure of a security hole in a financial product that shows it is storing PII (e.g. home address, SSN, DOB, etc.) in clear text in an unencrypted database. This can cause irreparable reputation damage that can take a product out of the industry almost overnight, no matter what damage control may happen after the fact. You could argue that this is unwarranted, but that position is based on the human notion that everyone gets a 'second chance', which often does not exist in many industries. Take a defibrillator, a medical device, for example. What if the initial model from a new startup killed patients in some edge cases? As a startup, recovering from the resulting legal situation would be close to impossible. Now, this also happens when the product never had a market to begin with, and thus the initial investment was made based on good intentions rather than research of data points within the target industry. However, this usually manifests in one way or another before a mass go-live, since scarce target clients would be one of the obvious indicators of a product headed in this direction.
  4. Unwarranted Unintentional: This is the rarest type of abandonment, since smart, intuitive ideas usually thrive in one vein or another. If in fact the product is sound and innovative with a need in the market, then this somewhat requires the planets to align, in that three factors must be present: external forces, sometimes even from those who wish to see a competing product fail; bad salesmanship, marketing and demographic targeting based on the product's features, such that no traction is gained in the market; and finally, an internal framework of individuals who lack ownership in the product.

 

Quality Plateaus:

Due to the nature of how improvement works, both within a product and within a human, quality plateaus can be both intentional and unintentional. We've discussed various types of improvement abandonment, and now similarly need to discuss the different forms of quality plateaus that can take place. A product quality plateau can only be justified in the Warranted Intentional abandonment case described earlier. A quality plateau within a human can only be justified in the case of rare medical conditions, but ethically there would never be an intentional case of such a condition, thus it is an outlier, external to this discussion.

 

Plateaus Within Products And Services:

Other than these edge cases, it has been established that an unaffected entity (see Abstract) cannot legitimately arrive at a quality plateau. By extension, any perceived quality plateau within a product must be a symptom of misunderstanding or mis-measurement of the current state, rather than a failing of the product itself. This also means that such plateaus always need to be remedied.

 

Plateaus Within The Human:

In the case of the human mind, a tester’s skill is a qualitative property, and cannot be mathematically or objectively measured; therefore, a quality plateau would be subjective at best. If such a plateau does exist within a tester, then it must be dynamically linked to an ethereal ‘learning to date’ + ‘ongoing learning’ measurement in order to equate to some qualitative scale or understanding. Most of the time this is due to many easy-to-identify (and easy-to-remedy) factors, such as: work ethic, laziness, lack of resources, poor management, misdirection and misinformation, etc. These problems are age-old, and can be fixed. However, for those who have reached a state of Unconscious Competence, this can be a very legitimate concern.

With that said, such people are few and far between within the testing community, and none of them, of course, has mastered all areas of being a good tester either.

The Perimeter Assumption:

Something else to be aware of when it comes to learning in various areas of testing is The Perimeter Assumption. This is something that many struggle with when it comes to testing and learning. It is the idea that as long as I know the most important items (the extreme edges/max capacity) and I understand the general framework, then I don’t need to worry about the little things (other considerations within those boundaries). This assumption is troubling, yet it still influences us as testers. It can make us comfortable and complacent in our learning if we only worry about the most extreme scenarios when testing. For example, when testing a credit application, we focus on making sure the form submits but might miss that some negative testing exposes a major flaw in the website, exposing PII stored in the database. Remember, not all showstopper product risks are found by testing in high-risk areas; the sketch below illustrates this.
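
To make this concrete, here is a minimal Python sketch of the trap. The validate_credit_amount function and its accepted range are hypothetical, invented purely for illustration and not taken from any real product. The perimeter-only tests all pass, yet a simple negative test inside the boundaries reveals a flaw they would never touch:

```python
# Hypothetical validator for illustration: accepts credit requests
# from 1 up to 50,000 (an assumed range, not from any real product).
def validate_credit_amount(amount):
    return 1 <= amount <= 50_000

# Perimeter-only tests: the extreme edges and max capacity.
assert validate_credit_amount(1) is True         # lower edge
assert validate_credit_amount(50_000) is True    # upper edge (max capacity)
assert validate_credit_amount(50_001) is False   # just past the maximum

# The 'little things' within the boundaries that the Perimeter
# Assumption tempts us to skip:
assert validate_credit_amount(0) is False        # zero request
assert validate_credit_amount(-100) is False     # negative amount

# A malformed payload: instead of being cleanly rejected, the
# validator crashes with a TypeError when given a string, the kind
# of interior flaw that perimeter-only testing never surfaces.
try:
    validate_credit_amount("100")
    print("malformed input was handled")
except TypeError:
    print("interior flaw: malformed input crashes the validator")
```

The point is not the toy function, but the habit: the edges can all behave while a showstopper hides inside the perimeter.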


Time Management:

Learning is hard. If it weren’t hard, it wouldn’t be as beneficial, nor have the allure it does for so many. I used to say that time management was my biggest roadblock, but then realized that I created that problem for myself, so I needed to stop complaining about it. Learning is liquid. By this, I mean that learning is a huge phase space with no boundaries, so it is easy to create insurmountable learning obstacles that we never address. Brian Kurtz, one of my mentors, often says, ‘If you give me one hour to test something, I’ll use that entire hour. If you give me one week to test that same item, then I will use the entire week.’ The same is true with learning; we will fill whatever time we are given. However, there’s so much to learn, and not all of it is valuable to us as testers. We have to prioritize what should be learned, since business priorities force us to time-box our learning to an extent.

So, I like to use this ‘gate’ visual to describe how we self-sabotage our own learning, in hopes that this might help others become more aware of their own potentially self-imposed learning roadblocks:

It may sound ironic, but unlike other roadblocks in my life, learning roadblocks have little to do with learning itself. Product feature roadblocks, for example, are usually caused by missing knowledge about the feature that is needed to continue development and testing. With learning, the roadblocks tend to come from all other angles, simply because I have traditionally prioritized learning after all my other duties.

As humans, we naturally seek to fill our time, and typically we fill it with items that are familiar to us. We might not even enjoy some of these items. For example, excessive meetings can sometimes creep into the scrum process; however, we get into a cycle of what we believe is expected of us and rarely challenge it. Sure, we challenge acceptance criteria, developers, bug reports and product management, but at some point during the sprint, testers have satisfied some of these priorities, yet continue to use the remaining team priorities as reasons why individual learning cannot be achieved.

Sometimes, we must work with our team to evaluate schedules and product risks, in order to open these gates consciously so that we can target the learning we want done. We must reprioritize, and minimize the time it takes to address some of these expectations. If we time-box our activities, then we’ll have time to address learning on a continual basis, and can set aside time within each sprint for this. So, stop self-imposing these barriers on yourself. We invent this structure, then complain and beat ourselves up when we don’t get to do the learning we want. When asked why we haven’t focused on using a certain test model, or pursued reading that book about testing we said we would months back, we pass it off as not being good at time management. Make learning as a tester one of your main priorities, and if you have to, work with your Scrum Master, Product Owner and Developers to build time into your sprint to prioritize learning just like you would a user story.

Constantly improving as a tester also involves a great amount of humility, to realize we do not have all the answers. Imagine a contractor called out by a customer who walks into the house with only a hammer and a nail, under the assumption they can handle anything the homeowner might throw at them. When stated like that, it’s a completely ludicrous assumption on the part of the contractor. So, why do we do this as testers? We try to take on all testing scenarios with our current knowledge set, but that set is just like an inadequate toolbox for the jobs put before us. We need to ‘fill our mental toolbox’, as my colleague Brian Kurtz says, to better address this common pitfall.

Action Items For Testers:

Now that you have this knowledge, how do you use it? How does being aware of ideas like improvement abandonment and quality plateaus actually help you, day to day, to better understand how to continually improve? Ideally, this article serves as a jumping-off point. Awareness of a problem is the first beneficial step to moving your mental state out of Unconscious Incompetence, which is the “I don’t know what I don’t know” state. What’s the barometer to know if you are in this state? Simply ask yourself if this article came across a little heavy. Does this information seem confusing or foreign to you? If so, and you genuinely desire to become a better tester, then it’s worth investigating further and combating any potential learning plateau of which you may or may not be aware.

So, where do you go from here? Read Quality Concepts and Testers Tell A Compelling Story. It is our job as testers to “cast light on the status of the product and its context, in the service of our stakeholders.” – James Bach. We can only do this effectively if we continually educate ourselves on how the product works, and continually improve our own mentality when it comes to how we test. If you are testing for the sake of testing, and are simply present in your job to collect a paycheck, then I encourage you to take an introspective look at the reason why. Our responsibility as testers is to the product and its stakeholders, so ultimately, if you are occupying a test position within a company but know you are lacking the passion to be a constant learner for the betterment of our stakeholders, then it may be time to evaluate whether testing is your true calling.

Conclusion:

The improvement continuum exists because there is no true zero. Conversely, this also means there is no true 100%. In short, quit trying to quantify things that are only qualifiable; rather, concern yourself with identifying the actions needed to reach the next step on the infinite staircase of learning as a tester.