Poor performers: Hiding the sins of bad test planning

The QA analyst/engineer role in game testing is one of taking features and creating test plans that define what should be tested and what doesn't need to be. In many ways it's being the 'ideas guy' of testing, but with the caveat that you need to come up with all your ideas on demand and can't say no to a feature.

Experienced QA members also know that good test plans include ideas far beyond the information provided in documents, user stories and conversations with feature authors. The meat of a test plan is the set of ideas stemming from the experience of the test planner. It's the added value they bring to the business.

Unfortunately, many test planning sins can be easily hidden by over-allocating or misusing tester resources.

In this article I'll highlight how easy it is to hide ineffective test planning, which allows some QA analysts to underperform in their role, whether or not they're aware of it.

The larger problem is that we commonly measure the performance of a test plan the wrong way: we assume that if no major bugs were missed, then the test plan must have been effective and the person who wrote it must be doing good work. This incorrect assumption hides the finer details of the work and allows ineffective test plans to go undetected. When we explore this further, we also find that many poorly written test plans still catch important bugs.

How do they do it? They misuse the tester resources they have at their disposal. Let’s explore why this is a problem and the various ways that teams of testers can be misused. 

Relying too heavily on exploratory testing

Focused exploratory test sessions with a well-defined charter are much quicker to create than concrete scripted tests, and loosely defined exploratory tests are quicker still. Exploratory testing is also well known as an effective way to catch bugs. So, what's the negative? The problem is that exploratory testing is very wasteful of tester time, and it's difficult to measure just how wasteful it is. If the QA analyst isn't timeboxing their sessions and the testers aren't recording the combined hours they actually spend testing, the inefficiency stays hidden. Unfortunately, the reality is that QA analysts who choose to rely on exploratory tests are also more likely to set up their test sessions without estimates.
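To make that inefficiency visible, session-based bookkeeping can pair each charter with a timebox and the hours actually spent. The sketch below is a minimal illustration of that idea in Python, not a description of any particular tool; the charter text, field names and numbers are all invented.

    from dataclasses import dataclass

    @dataclass
    class ExploratorySession:
        """One charter-driven exploratory session with an agreed timebox."""
        charter: str             # what the session should explore
        timebox_minutes: int     # limit agreed up front by the QA analyst
        actual_minutes: int = 0  # logged by the tester after the session
        bugs_found: int = 0

        @property
        def overrun_minutes(self) -> int:
            # Time spent beyond the timebox: the waste that stays invisible
            # when nobody records actual hours against an estimate.
            return max(0, self.actual_minutes - self.timebox_minutes)

    # Invented numbers: three sessions against one feature.
    sessions = [
        ExploratorySession("Inventory UI: drag and drop", 90, 150, 2),
        ExploratorySession("Inventory UI: controller navigation", 60, 60, 1),
        ExploratorySession("Inventory UI: 'one more session to be sure'", 60, 180, 0),
    ]

    print(sum(s.overrun_minutes for s in sessions), "minutes over the agreed timeboxes")

Nothing about this is sophisticated; the point is simply that without both the timebox and the actual figure, the overrun isn't recorded anywhere and can never be challenged.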

Furthermore, exploratory testing provides no measure of test progress or test confidence because bugs are the only output. This uncertainty leaves testing without clear exit criteria and causes QA analysts to run 'just one more test session to be sure'. Other team members are also more likely to request additional testing when the absence of concrete test results undermines their confidence.

These factors exacerbate the inefficient use of tester time when relying too heavily on exploratory testing. The end result of this approach is that the number of testers assigned to this test plan will be higher and they’ll be engaged for longer. The inefficient use of test time may even lead to overtime requests, which increases the cost of testing the feature even further.

What I’m describing here is essentially the ‘spray and pray’ approach to test planning. The QA analyst doesn’t have the time or experience to direct the test effort efficiently and instead relies on the combined experience of the testers in the hope they’ll identify an acceptable percentage of the most important bugs.

Not reducing combination tests down to a manageable set

Features within games frequently create huge combinations of states and configurations which are very time-consuming to test, whether it's a set of playable characters and the various weapons they can equip, or the different hardware platforms the game runs on. QA analysts need to take a data-driven approach to combinatorial tests and carefully consider if and when tests should be duplicated across multiple configurations.

Take the basic example of supporting three platforms: Xbox, PlayStation and PC. Should a feature test plan be duplicated in full on each platform, or is it enough to divide the plan equally across the platforms? The former option could triple the test effort. Real project examples are rarely this simple and provide even more opportunity for inefficient tester use. The problem here is that both options will catch the important bugs if test planning is conducted well, but one of them might be grossly inefficient.
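To make the trade-off concrete, here is a hedged sketch of one common way to reduce a combination matrix: greedy all-pairs (pairwise) selection, which keeps only enough full combinations that every pair of values from different factors still appears together at least once. The platform list comes from the example above; the character and weapon names are invented, and this illustrates the technique rather than recommending a specific tool.

    from itertools import combinations, product

    def pairwise_suite(factors):
        """Greedy all-pairs reduction: repeatedly pick the full combination
        that covers the most not-yet-covered value pairs."""
        names = list(factors)
        uncovered = set()
        for (i, a), (j, b) in combinations(enumerate(names), 2):
            for va, vb in product(factors[a], factors[b]):
                uncovered.add(((i, va), (j, vb)))

        candidates = list(product(*(factors[n] for n in names)))
        suite = []
        while uncovered:
            def newly_covered(row):
                return {((i, row[i]), (j, row[j]))
                        for i, j in combinations(range(len(row)), 2)} & uncovered
            best = max(candidates, key=lambda row: len(newly_covered(row)))
            uncovered -= newly_covered(best)
            suite.append(dict(zip(names, best)))
        return suite

    factors = {
        "platform": ["Xbox", "PlayStation", "PC"],
        "character": ["Knight", "Mage", "Rogue", "Archer"],  # invented examples
        "weapon": ["Sword", "Bow", "Staff"],
    }

    exhaustive = 3 * 4 * 3             # 36 runs to test every combination
    reduced = pairwise_suite(factors)  # far fewer runs; every value pair still covered
    print(exhaustive, "vs", len(reduced))

Pairwise coverage will miss any bug that only appears in a specific three-way combination, which is exactly why this decision needs the analysis described here rather than a default answer.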

Making decisions on how to efficiently manage test duplication and combinatorial test scenarios requires effort, expertise and time for the QA analyst. What’s more, a QA analyst who’s done this analysis could come to the same conclusion as a QA analyst who has just chosen to test everything through laziness. The outcomes are the same, but for very different reasons, which makes it very difficult to separate the two.

This is another area where over-allocating testers to a plan hides the lack of effort spent during test planning. Sometimes the cause is a lack of experience; other times it's simply too easy to click the buttons that duplicate a test run without thinking about the impact of that decision.

Asking testers to write tests or set up their own tasks

Many QA analysts see testers as generalists at their disposal for any small task or request they might have. Sometimes this includes setting up test tasks in the database or even writing the scripted tests for the QA analyst. Some development teams are also unsure of the QA analyst's responsibilities and aren't well placed to call out this misuse. Indeed, some QA analysts see their role as more like a middle manager's, relaying instructions between the development team and the testers.

Whenever this happens, test plans are very likely to be ineffective because most testers don't have the experience to guide the test direction for an entire feature. In most test teams the tester's role is to take instruction and execute on it. Sure, they'll still find bugs. They may even find some good ones. They'll also have some of their own ideas that they exercise during exploratory testing. But asking testers to do the work of the QA analyst is a gross misuse of the team and hides the poor performance of the QA analyst. What's more, because they're being asked to both write and run the tests, the total time spent increases substantially.

Failing to align test resources with test plan and feature delivery

Throughout the project schedule, QA analysts must align the dates when a feature will be ready for testing, when the test plan will be ready to run and when the testers will be available. Failing to plan adequately results in testers readying themselves to begin testing and then waiting around because either the feature or the tests aren't ready. In the best scenarios, the testers temporarily move to another task where they can add value. However, if this isn't an isolated incident within the test schedule, the cumulative effect creates a lot of wasted test hours. Because of the complexity of test execution schedules on big game projects, I want to stress that a single day of missed test time isn't a serious issue. But I've seen testers waiting on test plans, features or bug fixes for many days or weeks, which silently burns through the test budget.

The creation of the test plan and the organisation of the testers are entirely the responsibility of the QA analyst and so they should be held accountable when these are badly aligned with feature delivery. While QA analysts can’t control when features will be ready for testing, they are responsible for communicating with feature authors and correctly identifying when feature delivery is off-track. Experienced QA analysts will know if feature delivery into testing is likely to be delayed and organise contingency measures for the testers who are assigned to the work. 

When poor organisation of tester resources throughout the schedule is consistent for a single QA analyst, the impact isn’t obvious unless the tester allocation is reviewed day-by-day. Otherwise this is just another small contribution to the huge cost of testing.

The problems with tester over-allocation

At this point you might ask: what's so wrong with using tester resources to remedy ineffective test planning? If the important bugs are found, then the desired goal of testing is achieved, right? Let's answer that question and wrap up this article with the reasons this is such a big problem.

  1. Over-allocating testers for a feature contributes to the already huge spend on testing for the project, increasing the likelihood that company leadership restructures the QA team or investigates outsourcing some roles.

  2. Over-allocating testers means other QA analysts have less resource contingency for their own test plans.

  3. Relying too heavily on exploratory testing and the experience of the testers is not a repeatable approach and will fail to catch the important bugs sooner or later.

  4. Relying too heavily on exploratory testing and the experience of the testers means the test plan will not benefit from development team input and the embedded team experience of the QA analyst.

  5. Asking testers to perform any tasks apart from test execution is asking them to do work outside their job description, which is unfair. In most cases, testers are not paid or trained to create test plans.

  6. Poor test planning is more likely to trigger overtime for the testers, which negatively affects both the test team morale and the test budget.

  7. It's difficult to separate QA analysts who are diligent with the use of test resources from those who aren't, creating unfairness across the team.

Enjoying my content?

Consider buying my book to keep reading or support further articles with a small donation.
