The ‘Scope of Work’ Problem in QA

"How do you know when you are done testing?" goes the popular question in testing theory. It relates to the core testing principle that exhaustive testing is impossible. You can't test everything, so it's difficult to know when to stop.

The scope of work problem is that the extent of testing, and therefore of the work itself, is self-defined by the QA analyst creating each test plan on the project.

As a result, it's difficult to determine whether the scope of testing has been logically deduced from project factors or simply limited by the experience, attitude and drive of the individual planning each feature.

How did the QA analyst decide what to test, what not to test and when to stop testing? The answers to these questions are dictated by their skills and experience (or lack thereof).

In this article I'll compare how rigorously we define the 'definition of done' for test plans with how we define it for work from other game disciplines, show how, more often than not, QA analysts are left to define the scope of their own work, and consider what that means for the discipline as a whole.

Defining the scope of work for other disciplines

A great deal of time and effort goes into scoping, recording and verifying the work completed by the other game development disciplines: code, art and design. We collectively agree on the acceptance criteria for a feature and then test against them. While this scoping is not without its own problems, teams are quick to solve them, and the existence of the term 'definition of done' attests to the focus on getting this right.

Large features are broken down into smaller deliverables, showing clear and understandable progress towards completion of the whole scope of work. Often (though not always) the progress made by other disciplines is also very visible to everyone on the team. They can see and interact with a new feature being built, observe its progress and provide input. The entire process is very transparent to everyone involved*.

Most game teams have project processes like playtests and sprint reviews, which are oriented around reviewing the work done and providing feedback. The primary goal of sprint reviews is to exhibit the progress made during the sprint and collectively agree that it's complete. The sprint structure encapsulates this entire development process, providing starting and stopping points for feature development when a new sprint starts.

All of these attributes mean that it's very clear when any individual can stop working on a feature and move on to something else.

Why is test planning so different?

When we compare this with test planning for a feature, a change or a release, there is much less external scrutiny (outside of the QA discipline) on defining and tracking the work within a test plan. There are often no project-mandated processes or meetings to review that work, agree on its original scope or track completed work against that scope. While embedded QA analysts (‘dev QA’) will conduct component testing within the sprint structure and be held accountable for testing the acceptance criteria of sprint Stories, the majority of test work across a project exists outside of the sprint structure.

This exclusion of test planning work from the sprint process is a big organisational and motivational problem for QA analysts because it removes their work from the scrutiny of the wider team. The scope of a test plan, the adherence to that plan and the velocity of completed test writing are just a few of the many attributes of daily test planning work that go untracked by the wider team. The QA analyst is left to self-organise the entire process, with oversight from their QA manager if they are lucky.

I’ve seen many cases where a test owner gives updates to their development pod during daily meetings that are received at face value with no further questions asked. Not because the team doesn't care, but often because everyone is focusing on their own work and orienting their conversation around the tasks on the Kanban board. If the test owner says "the test plan is done", it must be true. No further questions. No scrutiny of how they have reached their definition of done. The test owner is simply trusted to have put together a plan that covers all of the most important game areas and mitigates the highest feature risks.

This doesn’t even consider the publisher and external QA teams who are writing test plans. These individuals are even further removed from the scrutiny of the sprint process and the wider project team's input. They often create and execute test plans in complete isolation. I’ve spoken to many development team members who aren’t aware of the work publisher and external teams do at all. They’re a black box.

All of this highlights how little oversight there is of the scope of test coverage defined within any test plan. The QA analyst must carefully self-assess their work to produce enough tests to provide confidence in the feature, but not so many that testing becomes an inefficient use of time (or, more likely, that the plan is too large to be completed in the given time). Producing an inefficient test plan that also fails to provide confidence is a real possibility, which can be seen whenever exploratory testing is overused by a QA analyst.

So, we’ve identified that every test plan is limited by the skills and experience of the QA analyst creating it, and that without oversight, the coverage in these plans varies wildly from person to person. Many test writers produce excellent test plans that are concise, comprehensive and well-written; they’re able to make the most of the time given to produce the best possible plan. Others may be slower to identify all the test areas that need coverage and run out of time to write all the tests, or are forced to write less detailed tests to complete them in time. Less experienced test writers may also miss important areas that need coverage within the plan and are deprived of any feedback telling them otherwise.

This lack of oversight means test planners only find out that their plan was lacking when a bug is found after testing is complete. However, there are other ways to provide feedback.

Reviews, reviews, reviews

Most teams have recognised this as a problem and introduced a feedback loop for test planning, even if they haven't identified it in such an explicit way. 

The first line of defence is the QA manager of the analyst creating the test plan. As in any discipline, the manager should have visibility of the work of their reports and have the time and knowledge to provide feedback on their test plans. This would be the equivalent of a lead developer giving feedback on the quality of a developer's code. However, this requires the manager to play a hands-on role in day-to-day project life, which isn't the case for many QA managers. Many managers are not well positioned to conduct such a thorough and incisive analysis because their own work forces them to take a more holistic view of the project. Reviewing the completeness of any test plan also requires the reviewer to conduct a quick version of the full feature test planning process to identify missed areas: a time-consuming task when done well, and one which is easily underestimated.

Peer reviews of test plans by other QA analysts are sometimes used to tackle the inconsistency between individuals and improve the skills of more junior team members. This is one of the most effective tools I've seen for reviewing the scope and coverage of each test plan. Incomplete or poorly organised plans are clearly identified amongst the group. Additionally, analysts are able to improve their own plans through reviewing the work of others; it works both ways. The increased number of possible reviewers also improves the chances that someone is available to review a test plan in a timely manner.

Even better, some teams request dev and pod reviews of their plans. These reviews aim to get the feature author's buy-in on the test plan and ensure the scope and focus of testing is aligned with the developer's views. When conducted well with engaged developers, this can be very effective. However, many teams still struggle to foster a culture where time spent on reviews is seen as time well spent, instead preferring feature authors to maximise their time on their own work. In other cases, review requests are met with unhelpful responses like, "looks good to me, very comprehensive!". Making test plan reviews effective and solving these problems is a larger topic beyond the scope of this article, but reviews remain an important part of the solution.

Conclusions

Because test planning isn't an inherent part of the sprint process, QA are left to define and regulate their own definition of done, choosing how much or how little work goes into each plan. Many teams are trying to tackle this by introducing feedback and review loops for this work, with varying degrees of success. Without a mature feedback loop, QA analysts across the industry are starved of feedback and can spend a long time creating limited and ineffective test plans. The extent and maturity of the QA analyst/QA engineer role still varies across the industry as companies understand more about the depth of the test planning role.

*There is a tangential topic here that I'd like to recognise. It's very possible for a developer to complete a User Story, fulfil all acceptance criteria and have it pass all tests, but still have the code be of poor quality. Like the QA analyst in our scenario above, during development there are many ways to reach a solution, some hackier than others. It's up to the developer to decide how much work to put into the solution, and up to their manager to review and catch poor quality code. This is the closest comparison I can make to the problem I described in this article.

