Time Trial Testing Episode 2: Risk Heuristics

In this episode of Time Trial Testing, Brian Kurtz and I time-boxed ourselves to a 45-minute session to perform risk assessment of the X-Mind product. We used a heuristic-based risk analysis model to take a look at the UX/UI of this mind-mapping product. See Time Trial Testing – Episode 1: SFDIPOT Model for more details on how ‘Time Trial Testing’ sessions are meant to work.

  • Model: Risk Analysis Heuristics (for Digital Products) – by James Bach and Michael Bolton
    • Note: We limited our scope to only two of the sub-nodes.
      • Project Factors: I approached this from the perspective of a tester on an internal development team.
      • Technology Factors: Brian approached this from the perspective of an external tester, outside of the company.
  • Session Charter: UX/UI Product Risk Analysis
  • Product: X-Mind
  • Time: 45 minutes
  • Artifact (See image below or X-Mind file)

Brian’s Observations (Technology Factors):

  • Conscious competence is alive and well. Using something that you have not used in a while or in a specific context takes effort. Sometimes it can be a downright struggle.
  • In this time trial we started with a mission: find risk to the UX and the UI. Still, I think next time it needs to be more focused, given the 45-minute window we are giving ourselves. Maybe risk to the UX and UI of the menu bar or the icon/toolbar.
  • Every time I use a model I am reminded again how beneficial the results are to me after it is over. They always help me think about aspects of “something” that I would not have thought of on my own. I can always see the value afterwards.
  • I have only had to evaluate a third-party application for purchase a few times. These time trials remind me what a daunting task it is to evaluate something as an outsider.
  • Although each of these time trials has produced a mind map that illustrates the value of just 45 minutes, it would be nice to take one to a more complete state to really illustrate what a more finished strategy would look like.
  • I would remind people, when creating these kinds of artifacts, that it’s okay not to know all the answers, because asking questions and having dialogue with the stakeholders who do is what this is all about. Asking questions and picking others’ brains is a huge part of the learning process.

Connor’s Observations (Project Factors):

  • Not Yet Tested: This was actually my highest-priority item, so I am moving it to the top of this list, in the event that you get distracted and stop reading. Areas that have not yet been tested are likely to contain new bugs we’ve never seen before, and thus they have the potential to take longer to fix than familiar buggy areas. Also, these areas of the code typically have only one or two subject matter experts: the developer(s) who created them. The Product Owner and the Tester have no knowledge of how this area of the product was actually developed, post-requirements, post-planning, etc., so during these times, brain-dumps from the creator, the original developer, are key. In our case, a UI Developer would know how and why the product is built the way it is, and what caveats there may be. Having this discussion up front with the developers, before diving into testing, will greatly increase your effectiveness at creating a more thorough test strategy and uncovering potential product risks. In these cases especially, we need to make sure we do not silo ourselves as testers under the guise of simply ‘needing to get the work done’. I have had many pre-test discussions that drastically changed the type and amount of time I planned to spend testing a given area, making me more efficient in the endeavor.
  • Learning Curve: This node forced me to consider the biases of the team, and how their existing knowledge of UX/UI from previous projects or workplaces might positively or negatively influence the creation of a mind-mapping product. For example, if one of the UI Developers used to work in a vastly different industry with different customer needs (e.g. Medical Device Software), then this person may consciously or subconsciously project those former needs onto their new user group, even when the demographics are worlds apart.
  • Poor Control: This was a good reminder to control what we can, and not to spend a lot of time trying to influence external factors. Do we have a solid DoD (Definition of Done)? Are we doing code reviews? Are the right people doing code reviews? Are we working from customer-approved mock-ups, or are we just hoping that the UX/UI work is desirable? Are UX/UI Architects outside of the immediate team involved, or are we just winging it with our limited knowledge?
  • Rushed Work: Every development team in the history of software development has struggled with time management. Either development completes late in the sprint, so testers have to rush, or product management sets hard-date deadlines in the mind of the customer, and then the team has to release whatever it has rather than move toward a healthier ‘release when ready’ model. Perhaps estimates are created without UX/UI mock-ups, and then the mock-ups arrive mid-sprint, completely turning the original estimate on its head. Sometimes teams have good intentions but simply do not think deliberately about how best to manage and section off their time. We need this to be one of the first things we think about, not the last.
  • Fatigue & Distributed Team: Before using this heuristic, I had (for some reason) always separated the fluid attributes of the workplace from the actual work that gets done and pushed out in releases. I had never considered the team being tired or distributed a “product risk” per se. Since I was always comfortable with the deliverable being molded a hundred times along the way (Agile, not Waterfall), whatever we got done, we got done, no matter how we felt along the way, and that would be accepted as our deliverable. I saw it as a performance risk to team operations rather than to the content of the product. While remote communication can sometimes spawn assumptions and miscommunication, I always felt like resolution in the 11th hour could handle any of these concerns. However, using this model made me realize that the paradigm I had operated under was in fact the symptom of working in a blessed environment. I only thought this way because I’ve mostly worked with teams that were able to resolve major risks pre-release, or at least know about them and push intentionally. I feel that if I had more experience working in an environment with only remote teams (e.g. offshore), or less knowledgeable folks, then I might have had this realization sooner.
  • Overfamiliarity: I think this is most easily noticed when we hire new people or bring others into an already well-oiled machine. These new perspectives can help expose areas to which the current development team(s) have become jaded. We should think about this with long-running project teams especially. Perhaps shifting work from team to team is beneficial from time to time. Sure, Team A will not know what Team B is doing, and velocity might slow down for a little while, but swapping teams’ work has many other upsides that I think are worth the time investment. If you cannot do that, then bring in external team members for a week and let them act as product, code, and quality consultants. As it relates to our charter, perhaps they will see obvious avenues of UX improvement that you have simply become used to. Remember, the barometer for good UX is how much user frustration is caused. How many times do new hires join the team and say, “Why does it work this way? That’s unintuitive.” to which we reply, “Oh, it is just like that, here’s the workaround…”? In these situations we are part of the problem, not the solution. We are increasing product risk by ignoring the advice that comes from a fresh set of eyes simply because we have ‘gotten used to it’. Shame on us (us = team + product management, not simply testers).
  • Third-Party Contributions: You can decrease UX/UI product risks by limiting your dependency on third-party technology. It typically requires a spike (a development/technology research sprint or two) to make such a determination, but if you can ‘roll your own’ tech that gives you exactly what the customer wants and removes dependencies (and thus risks), then I would encourage product management to consider doing it, even if it takes twice as long (given the customer has been trained to accept a ‘release when ready’ development model).
  • Bad Tools: The Scrum Master should be in constant communication with the developers and testers on the team (and vice versa) in order to alleviate these kinds of concerns. A good Scrum Master does not need technical knowledge to help facilitate technology changes.
  • Expense of Fixes: First, let’s dispense with the following statement: “The later bugs are found, the more expensive it is to fix them.” Not necessarily. This statement does not contain any safety language (epistemic modality) or take context into account. It has been used historically to point fingers or to use fear to motivate naive development teams, both despicable tactics. A better statement would be, “Depending on customer priorities and product priorities, bugs found later in the development process might be more expensive to fix, depending on their context.” E.g., what if we find a typo an hour before release? That’s a five-minute fix that is not expensive. Now, if you have a broken development process that requires you to spend hours rebuilding a release candidate package, then sure, it might be expensive, but let’s be careful not to correlate unrelated problems and symptoms from two disparate systems.


Many testers do not even consider using some form of risk heuristics, mainly for two reasons: it is outside of their explicit knowledge, or they do not see value in it, usually due to never having tried risk assessment in a serious manner. Acceptance criteria are the tip of the iceberg, so don’t be the tester who stops there. What are your thoughts on this? Have you tried using these Risk Analysis Heuristics (for Digital Products) before, or used something similar? Do you even see value in risk analysis? Why or why not? What are your other takeaways? I encourage all testers to do this same exercise for themselves. Reading through the model versus actually using it provided greatly different experiences for me. In reading it I found some nice ideas that sounded correct and good, but it was in using it that I found applicable value to what I do as a tester, and I am now compelled to use it again; a feeling I never would have experienced had I only read through it.

This blog post was coauthored with Brian Kurtz.
