QA vs. Testing: The role of QA in games and how getting it wrong can create a terrible team culture

Through working on different game teams and talking with colleagues and other QA professionals, I've seen how differently test teams operate, particularly in their use of the term ‘QA’. QA has a very broad scope and includes many quality processes outside the responsibility of the test team, while ‘testing’ has a much more limited scope within QA and has different goals.

I'm writing this article because it's easy (and unfortunately quite common) for game studio leadership to misunderstand the scope of the test team's responsibilities and its goal as a discipline, confusing QA with testing.

If the test team is inexperienced, has weak leadership or only has junior members, then it can easily be led astray and moulded over time in directions the team was never designed to go. The failures aren't always obvious and usually manifest as poor-quality releases, distrust, disrespect and/or a lack of communication between the test team and the rest of the project team. The testers themselves might also get frustrated with the situation, contributing to high staff turnover, which is sometimes dismissed as ‘the way things are’ with testers and ignored by the wider studio leadership as a warning sign.

Making the problem worse is the reality that there are generally many more junior game QA professionals than senior ones, and many companies don't have senior QA staff who could remedy the misunderstanding. Finally, my frustration with QA team misdirection is that it feeds many of the negative stereotypes associated with games testers as a group: QA are always working against us and blocking our releases, QA is just a career-hopping discipline, QA are generally negative about the work created by others, QA have less to add to the project than others, QA are there to take instruction only, and so on.

Without further ado, let's talk about some specifics.

Who’s responsible for quality?

One of the most frequent pitfalls for projects is the assumption that the QA team exists to own quality for the product and is ultimately there to ensure the quality bar for its release. But responsibility for the final product means responsibility for every individual contribution towards that product.

Is it reasonable (or even feasible) for the QA team to be responsible for the quality of work produced by a cross-discipline team of other people?

Of course it isn't; that would be an impossible task.

Even if you had a tester for every contributor to the game, you can't force that contributor to increase the quality of their work; you can only give them feedback on improvements, and they'd still have to agree and be willing to do the additional work. In this theoretical scenario, the tester would also have to understand that person's work so well that they could do the role themselves (be that art, animation, modelling, code, etc.), which is completely unfeasible and unrealistic. Projects that fall into this trap create a culture where the QA team become the ‘guardians of quality’, the gatekeepers who decide if something is good enough to release. This mentality leads to conflict between QA and the other disciplines and creates opposing goals, where the QA team values higher quality while other disciplines may value other business goals. This has a downward spiral effect where the QA team are seen as barriers to success and something to be worked around and undermined where possible.

At this point you might be thinking “Chris, are you saying we shouldn’t be striving for better quality?”. The answer is that we should be, but we can’t do it so blindly and stubbornly that we dismiss all other business requirements to try and achieve it.

Side note: there's a great section in the ISTQB syllabus about the ‘business value of testing’. It talks at length about the reasons it's worth investing time and money into testing and the return you get from that investment, and it does a good job of putting the work into perspective. It's one of the few parts that the syllabus gets right.

Going back to the topic of quality responsibilities, I'm going to give this advice: the best teams I've worked with have the mentality that everyone on the project is responsible for the quality of their own work and for the integration of their work into the wider project. The test engineers are seen as quality consultants who help guide others in quality best practices: work product review types, risk analysis and mitigation, bug and story workflows, the bug lifecycle, bug triage processes, and health and burndown tracking, among many others. Sure, the testers still do a good deal of the work themselves, but quality is seen as a joint venture.

Go or No-go: Who decides?

I mentioned above that QA can sometimes get themselves into a situation where they are seen as the gatekeepers of quality and are the sole stakeholders who need to give the ‘OK’ for a release or update to go live. This situation is a lose-lose: QA are seen as a barrier to release and encounter friction with other disciplines in the run-up to a release, and if failures occur after release, sole responsibility for the failure lands on the QA team and the sign-off they gave. In the worst scenarios, huge knee-jerk email threads kick off from upper management, full of phrases like “how did QA miss this?”. Unfortunately, this is common enough that most of you reading this will probably have been in a team where it has happened at least once – I know that I certainly have.

The correct way of working is that everything created by a game team, whether individual features or entire releases, has stakeholders with an interest in that thing. These people share some part of the ownership of that feature or release and have an inherent interest in its success. Usually you'll also get only a single stakeholder from each discipline (code, art, QA, production, server), depending on the composition of the thing.

When you go through milestones, like release to live, the stakeholders get together and all give a GO or NO-GO. Importantly, QA is just one voice in the group of stakeholders that agree, and they all share in the responsibility for that thing.

In my book I define testing as simply collecting data about the health of the game (through bug reports and test results) so that project stakeholders can make informed project decisions based on that data. Do we release on time and risk active bugs, or do we delay the release and spend the extra time to fix those bugs? Do we release on time and take the risk that some of the tests haven't completed, or do we delay the release and allow the tests to complete?
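To make that definition concrete, here's a minimal sketch of what "collecting data for stakeholders" might look like in practice. All of the names (`HealthReport`, `summarise`, the field names) are hypothetical, invented for illustration; the point is that QA produces the numbers and the stakeholder group makes the call:

```python
from dataclasses import dataclass

@dataclass
class HealthReport:
    """Snapshot of test data ahead of a go/no-go meeting (illustrative fields)."""
    open_blockers: int      # bugs that would stop the release outright
    open_criticals: int     # severe but arguably shippable bugs
    tests_planned: int
    tests_completed: int

    @property
    def test_coverage(self) -> float:
        """Fraction of the planned test effort that has actually run."""
        return self.tests_completed / self.tests_planned

def summarise(report: HealthReport) -> str:
    """Present the data plainly; QA reports it, the stakeholders decide."""
    return (
        f"Blockers: {report.open_blockers}, "
        f"Criticals: {report.open_criticals}, "
        f"Tests complete: {report.test_coverage:.0%}"
    )

print(summarise(HealthReport(open_blockers=1, open_criticals=4,
                             tests_planned=200, tests_completed=180)))
# prints: Blockers: 1, Criticals: 4, Tests complete: 90%
```

Note that the sketch deliberately contains no `is_releasable()` function: whether one blocker and 90% test completion is acceptable is a business decision for the whole stakeholder group, not a threshold QA hard-codes on its own.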

I also emphasise that the timeliness of this data collection and presentation is important; stakeholders need enough time to take corrective action and get poor-health features back on track. Reporting test data on the very last day of a deliverable isn't helpful to anyone.

Many QA members try to take this critical decision-making on themselves and present a stubborn, undiplomatic stance when faced with difficult release decisions. This stubbornness can create huge conflicts when the team gets together to decide what to do, especially if it makes more business sense to take the option the tester is opposing.

Checking vs. Testing

A practice I see quite frequently is other disciplines thinking that the QA team is there to check their work so that they don't have to do it themselves. While peer checking of work is definitely a quality best practice, it's the latter part of that sentence that causes the problems. If the project team (the feature creators) is at capacity, then people want to finish their current task and immediately move on to the next piece of work in their backlog, which means that documentation, database hygiene and checking of their own work can fall by the wayside.

Particularly during these times, some team members think they can commit new work without checking that it does what they intended, and that it's the QA team's responsibility to check it for them.

Some might even tell the tester how they should check the feature. Now, another discipline telling the tester how to test doesn't sound quite right, does it? That's because there's an important distinction here between checking and testing. Checking is the straightforward positive verification that the component does what it was designed to do; testing is everything that comes afterwards, including the integration of that component into the rest of the system and of the system into the environment you're running it in. Testing includes a wide set of other positive (verification) tests and an even wider set of negative (failure) tests. The testing part is all of the expertise that the QA discipline brings to the team; it's their ‘added value’.
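The distinction can be sketched with a toy example. Assume a hypothetical game function, `apply_damage` (invented here purely for illustration): the single happy-path assertion is the *checking* the creator should do themselves before committing; the boundary and failure cases below it are the start of the *testing* that QA's expertise adds:

```python
def apply_damage(health: int, damage: int) -> int:
    """Hypothetical game logic: reduce health by damage, clamped at zero."""
    if damage < 0:
        raise ValueError("damage cannot be negative")
    return max(health - damage, 0)

# Checking: positive verification that the component does what it was designed to do.
assert apply_damage(100, 30) == 70

# Testing: boundary and negative cases probing how the component behaves
# at its edges and when misused.
assert apply_damage(10, 50) == 0   # overkill clamps at zero, never negative health
assert apply_damage(0, 0) == 0     # boundary: already-dead entity takes no damage
try:
    apply_damage(100, -5)          # invalid input must be rejected, not silently heal
    raise AssertionError("expected ValueError for negative damage")
except ValueError:
    pass
```

Real testing goes much further than this, of course (integration with the rest of the system, the environment it runs in), but even this toy shows why the creator's "just check it does X" instruction covers only the first assertion.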

Unfortunately, others sometimes don’t realise this and think that QA is there to pick up their checking.

Not checking your own work is a universally bad practice, and using another person's time to get a build and log bugs for you is an ineffective use of everyone's time. It also creates scenarios where builds fail and entire teams are blocked from working because the work wasn't checked at the point it was created; instead the failure is found further down the flow, often too late. Allowing this practice to continue also undermines the skill of QA staff and lets others think that QA are there just to take instruction. It can also promote the unhealthy mentality that the creator's time is more costly and more important than the tester's time, and that the creator should therefore offload as much of the admin work as possible.

Try to recognise when this situation is occurring in the team and knock it on the head quickly when you see it. I find that the cause is usually non-QA team members not understanding the role of the testers on the team, and QA members not being confident enough to correct them – or not knowing exactly what their roles are. The solution also comes back to making sure the project team has some senior QA roles to guide the team in the type of work they should be doing.

If you're on a games project team and some of these negative points sound familiar, consider getting some of the key project management together to write a Test Policy document. It doesn't have to be an essay; the point is to document the goals of QA for the company and what the QA team are responsible for. It should be project-agnostic and written in general terms. To fix problems with QA being used incorrectly, you'll need the buy-in of project management so the test manager is aligned with other managers when speaking to their teams about issues.

I hope this discussion helps some QA folk identify problems they're having, and I hope any non-QA folk reading this now have a better understanding of the testers on your team.
