Being a technical tester doesn't mean you have to write code

In this article, I'll list the benefits of improving the technicality of a game project QA team and how doing so helps bridge the gap between dev and test by ‘meeting your developer halfway’. I'll also list some of the ways you can be more technical without learning how to write code.

Through writing about game testing, I've spent a good deal of time thinking about the skills and personal attributes of QA analysts around me (people who write test plans). What trends can be seen in those that are celebrated by their teammates, both in the QA discipline and other disciplines? I'm particularly interested in the feedback of developers and other feature authors who QA analysts support with their test plans. Since QA effectiveness is so difficult to measure quantitatively, I tune in to any circumstantial praise and take a mental note of the details.

One trend has become apparent: QA analysts who are more technically competent in the game area they are testing are able to support, understand and communicate more effectively with feature authors.

A more technical understanding allows QA analysts to ask more specific and incisive questions during feature test planning, which provides ‘fresh eyes’ to the implementation and challenges developer assumptions. Technical competence also allows QA analysts to identify feature components that are not explicitly recorded in the user stories or documentation by the developer. Logging, debug and functional details can be read directly from the updated code by the QA analyst, relieving the pressure on the developer to document every detail of their work. This is particularly helpful when functional details diverge from the intended designs or aren’t explicitly recorded in the designs. Developers are frequently required to define small functional details on-the-fly and solve problems during implementation, making it impossible to document every detail to be tested. There is huge value to be gained if QA analysts are able to independently extract this type of implicit information and include it in test plans.

Furthermore, some QA analysts are able to take a share of administrative work from the feature author by assisting with semi-technical tasks that often fall to developers and take their time away from the 'heavy lifting' of development work. To name just a few: creating new builds, reading game logs for bug root cause analysis, setting up test data, setting up the test environment, searching through crash reporting tools, testing in an editor to collect bug artefacts and designing automated tests. There are many small tasks that fall to developers by default but can be safely completed by other disciplines with some guidance.

This 'learn to feed yourself' QA skill is commended because it goes some way to solving the common problem of missing documentation during test planning. Software development theory often paints an idealistic picture of the completeness of user stories and documentation which is rarely seen in real projects. Instead of constantly badgering developers and designers to update their user stories and documentation, this approach accepts that documenting every detail is impossible and seeks the information directly. We all know that repeatedly asking the same person to update their documentation isn’t going to win you much cooperation or respect from that person.

I’m not suggesting that teams don’t need to document their work at all or that the QA analyst should do everything independently. It’s still important to have input from the feature author and formally document feature details. But QA analysts who exhibit these technical skills are seen as independent problem solvers who developers can trust to pick up small details of the work without spelling it out in writing. 

It is these implicit details that can turn a good test plan into a great one.

Semi-technical areas for QA analysts

How can QA analysts be more technical? Here are some areas which can be learnt during regular work and without formal training. All you need is the right access, someone who can answer some questions and regular time allocated to familiarise yourself.

Side note: the first of those requirements, access, is unfortunately where a lot of QA members become blocked. Improving technically often means you need to work within the development team and have access to all the code, processes and tools of the team. QA analysts in publisher and external test teams will be limited here.

Learn to set up and maintain your own development environment

Setting up the game project to build and run on your PC/Mac is the single best way to improve your technical competency and understand the gory details of how your project works ‘under the hood’. Setting up and maintaining a working version of the game on your machine forces you to appreciate and understand the supporting technologies and tools your project relies on. Running a project locally also requires a combination of different tools, providing knowledge about source control clients, graphics engine editors and development environment tools (IDEs) and how they work together.

How do you do it? Most teams have onboarding guides to set up a new dev environment, since everyone contributing to the project will need to do this when they join the team. Comprehensive, plainly-written documentation is required here because many non-developer roles such as artists and designers need these tools to complete their work, and they, like QA, are not in a technical coding role. In addition to the first-time setup, regularly syncing your files and dealing with breaking changes to keep your project up to date will force you to understand the nuances and gotchas of the project setup.

Having a local running version of the game has many more direct benefits too. QA analysts can sync the very latest changes and test them, even if no target device build exists. Editor tools provide access to individual game assets, files, scripts, metadata and even tools without running the game. This level of access allows QA analysts to independently collect new information to create better test plans. An example from a previous project was viewing how our 2D assets had been atlased and the packing system used to organise them. We captured a screenshot of each atlas and included it within the test plan.

Improve your knowledge of the game engine editor tools

The engine's editor isn't just a method of running the game on your local PC/Mac, it's a deep and complex tool. Exploring the different windows in the editor will provide insight into the way the project files and scenes are organised. Exploring each scene in the game provides further insight into how the cameras and UI are set up, providing a ‘backstage’ view on what the player sees. The editor also provides tools to display many raw game assets by directly opening their files: 2D assets and their atlases, 3D assets, UI layouts, particle effects and more. Directly viewing assets individually can help QA understand more about how the game is built but also serves as a lookup if only a file name is known during test planning.

It's common for devs to add tailored scripts and tools to the editor for each project, usually to make things easier for themselves and other non-technical team members who use the editor to conduct their work. These are usually ‘power tools’ that improve productivity by automating repetitive tasks or provide an aggregated view on assets or data. These customisations can be utilised by more technically adept QA members to understand features and changes in more detail.
Having a deeper knowledge in this area doesn’t always provide an immediate and direct benefit to the QA analyst in creating their test plans, but instead provides passive benefits which support technical conversations and problem solving throughout project work. 

However, I’ve had some success using editor tools to set up test data and gather information during test planning. One example from a past project was using the editor to view project atlases when we changed the project's 2D asset packing method to optimise memory usage. I was able to save each atlas from the editor and attach it to the test task, showing the testers which 2D assets were grouped together in each atlas. Another example was when a project was optimising predictive loading between different game scenes: I was able to use the editor to get a list of all scenes instead of playing the game and trying to guess which parts had their own scene.

Learn how to navigate source control tools

Every project has a system for organising the project files, providing a facility for multiple contributors to work on the project simultaneously and allowing the project to be forked into multiple copies (branches) that can be developed in parallel. To eliminate any confusion, I’m talking about tools such as Perforce (P4), Subversion (SVN) and Git.

Having direct access to the project source files provides many direct and indirect benefits to QA analysts. Let’s begin with direct benefits. Feature commits can be read to understand more about the changes and basic information can be extracted without reading a single line of code. Checking how many files were changed and what type of files were changed (data, code, 3D assets, 2D assets, sound files, etc.) can provide a rough, but helpful, measure on the scope of the changes and the risk involved. This information also provides a springboard to ask more targeted questions to feature authors during test planning. To go further, QA analysts can also practise reading the code changes to pick out specific useful information, such as newly added debug, logging or even developer comments. If the files and methods on the project are given sensible human-readable names, then more technically adept QA analysts should be able to identify which game areas have been changed and follow some of the logic without learning how to write code. The first step here is to introduce the habit of scanning through code commits first, even if they make no sense to you the first time you do it. 
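As an illustration of that rough scope measure, here's a small sketch that buckets a commit's changed files by type. The file list, paths and extension mapping are all made up for the example; in practice the list would come from your source control client (e.g. the file list shown for a commit or changelist).

```python
from collections import Counter
from pathlib import PurePosixPath

# Hypothetical file list from a single feature commit; paths are invented.
changed_files = [
    "Assets/Scripts/UI/SettingsMenu.cs",
    "Assets/Scripts/UI/SettingsModel.cs",
    "Assets/Textures/settings_icons.png",
    "Assets/Audio/click.wav",
    "Assets/Prefabs/SettingsMenu.prefab",
]

# Rough, illustrative mapping from file extension to asset type.
ASSET_TYPES = {
    ".cs": "code",
    ".png": "2D asset",
    ".wav": "sound",
    ".prefab": "data",
}

def summarise_scope(files):
    """Count changed files per asset type as a rough scope/risk measure."""
    counts = Counter(
        ASSET_TYPES.get(PurePosixPath(f).suffix, "other") for f in files
    )
    return dict(counts)

print(summarise_scope(changed_files))
# e.g. {'code': 2, '2D asset': 1, 'sound': 1, 'data': 1}
```

Even this crude summary answers useful questions during test planning: mostly code changes suggest logic risk, while mostly asset changes suggest visual or memory risk.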

A major indirect benefit of observing project progress through source control tooling is the exposure to branching strategies and the state of project branches at any single time. There is a lot of hidden complexity and nuance that comes with managing multiple copies of the project files existing in parallel. These complexities gravitate around the points where the branches are forked off and merged together. Merging two project branches often creates conflicts in the files which need to be resolved, a problem which worsens the longer the two branches have been separate. Even if two branches are merged successfully, the result of combining game components and features together often causes new failures at runtime, which surface as bugs. We, as testers, are very interested in bugs and project stability, which by extension makes us interested in branching strategies too. A basic understanding of this allows QA analysts to know where features and fixes are at any given time, as well as plan integration tests after branches are merged; a more comprehensive understanding allows QA analysts to collaborate with developers and provide input to the project’s branching strategy.

Learn to read game logs, callstacks and other bug artefacts

When testers find bugs, they capture a myriad of artefacts which help feature authors diagnose the root cause of the bug and fix the issue. QA members can practise extracting the key information from these artefacts to present a possible root cause to the bug fixer, either as a comment in the bug or via a conversation with the developer. Doing this, and getting it right, saves the developer valuable time reading through all the artefacts and improves the technical competency of the QA team member.

Reviewing bug artefacts also acts as a pre-triage review, allowing QA analysts to identify unhelpful or misleading files, like logs that are far too long or crash files that aren't linked to the bug being reported and were likely generated in an earlier game session. 

Let's look at some details and examples.

For logs, learn the difference between device logs generated by the operating system and game logs generated by the game. Device logs show us the state of the whole device and help us identify bugs related to hardware or resource usage, like game terminations due to memory or CPU utilisation. But they don't contain the detailed in-game logging added by our developers. Game logs, however, report anything explicitly added to the game and so are far more detailed, but don't report on anything outside the game.

QA members would do well to learn the main keywords/tags used in the logs for their project, to quickly identify the area of interest and avoid inefficiently scrolling through the whole log. Different game systems should have their own tags and often need to be enabled through debug settings before they even appear in the game log. Once you are able to identify the logs from the feature in testing, you can CTRL-F the keyword and step through only the lines in the log you are interested in. Do this enough and you should notice patterns and be able to identify what is normal behaviour and what is a failure. Often, what the tester sees as a bug in the running game is one symptom of a root cause failure printed in the log; usually an error, exception or assert. Armed with this information, bug titles can be edited to be more precise in their language, which helps everyone (e.g. “Error XXXXXX is triggered leading to YYYYYYY game behaviour”).
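That tag-filtering habit can be sketched in a few lines. The log fragment, tags and messages below are invented for illustration, assuming a hypothetical format where each line carries a severity and a system tag:

```python
import re

# A made-up fragment of a game log; tags and messages are illustrative only.
game_log = """\
[INFO] [Audio] Loaded bank: music_main
[INFO] [Shop] Opening shop screen
[ERROR] [Shop] Failed to fetch price for item_023
[INFO] [Audio] Ducking music for SFX
[WARN] [Shop] Price cache stale, refetching
"""

def lines_for_tag(log_text, tag):
    """Keep only the log lines for one game system's tag."""
    return [line for line in log_text.splitlines() if f"[{tag}]" in line]

def failures(lines):
    """Pick out lines that look like real failures, not info or warnings."""
    return [
        line for line in lines
        if re.search(r"\[ERROR\]|exception|assert", line, re.IGNORECASE)
    ]

shop_lines = lines_for_tag(game_log, "Shop")
print(failures(shop_lines))
# ['[ERROR] [Shop] Failed to fetch price for item_023']
```

Doing this mentally with CTRL-F achieves the same thing; the point is to narrow a huge log down to the one or two lines that name the root cause.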

For device logs, search for your game name to identify the process ID the operating system has assigned to your game, then search for that too. Some devices only refer to the process ID in relevant lines of the log. Be aware of keywords for your operating system. For example, I see “kill”, “killing” and “win death” as common keywords associated with app terminations and crashes triggered or identified by the operating system. A full example might be: “lowmemorymgr: killing com.mycompany.mycoolgame 30042”, which is an indicator of an out-of-memory termination that I frequently see on mobile devices. Knowing what to search for and identifying root causes like this one empowers the QA team to research bugs more thoroughly before handing them to dev.
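The same two-step search (find the process ID, then filter on it) can be sketched as follows. The device log fragment is made up around the “lowmemorymgr” example above; real device log formats vary by operating system:

```python
# Made-up device log lines in a logcat-like style; formats are illustrative.
device_log = """\
ActivityManager: Start proc 30042:com.mycompany.mycoolgame/u0a123
SurfaceFlinger: Buffer queue created for 30042
lowmemorymgr: killing com.mycompany.mycoolgame 30042
"""

def find_pid(log_text, package):
    """Step 1: find the process ID the OS assigned to the game."""
    for line in log_text.splitlines():
        if package in line and "Start proc" in line:
            # e.g. "Start proc 30042:com.mycompany.mycoolgame/..."
            return line.split("Start proc ")[1].split(":")[0]
    return None

def lines_for_pid(log_text, pid):
    """Step 2: keep every line that mentions that process ID."""
    return [line for line in log_text.splitlines() if pid in line]

pid = find_pid(device_log, "com.mycompany.mycoolgame")
print(pid)
print(lines_for_pid(device_log, pid))
```

Filtering by process ID catches the lines where the OS refers to your game only by number, including the termination line itself.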

Callstacks (stacktraces) are generated for crashes, exceptions and asserts triggered by the game. Callstacks are often printed directly to the game log, but are frequently also recorded in separate crash dump files and reported to external error logging services. Just like game logs, callstacks provide information on the reason for a failure, pinpointing the code class and function where the failure occurred. While callstacks may appear scary and technical to QA, they’re actually very readable to non-technical folk. Here are a few pointers and key learnings:

  • The callstack is a list of code functions that were executed to reach the function that failed. It shows the ‘route’ through the code

  • The top line in the callstack shows the function that failed and each line below it steps backwards through the execution path

  • Callstacks also have a title and subtitle which print what failed within the function. Sometimes these are specific, other times they are too generic to be helpful to us

  • Callstacks need to be 'symbolicated' before the full details can be read. This is done with a symbol file from the build which triggered the crash

By understanding the basic anatomy of a call stack, QA can log more specific bugs and use the information to aid in investigations. A "Crash when loading the settings menu" turns into "A fatal exception is triggered in settings.myUIFunction() when entering settings". While this is a basic, generic example, there is a lot to be gained in logging bugs in a more technical and root-cause style.
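As a sketch of that renaming step, here's how the failure type and failing function might be pulled out of a callstack. The stack text, its top-frame-first format and all names are invented for the example and don't come from any particular engine:

```python
# A made-up, already-symbolicated callstack in a top-frame-first style.
callstack = """\
NullReferenceException: Object reference not set to an instance of an object
  at Settings.MyUIFunction ()
  at UIManager.OpenScreen (System.String name)
  at GameLoop.Update ()
"""

def bug_title(stack_text, player_facing):
    """Combine the failure type and failing function into a precise bug title."""
    lines = stack_text.strip().splitlines()
    failure = lines[0].split(":")[0]          # e.g. "NullReferenceException"
    # First frame under the title is the function that failed.
    top_frame = lines[1].strip().removeprefix("at ").split(" (")[0]
    return f"{failure} in {top_frame} {player_facing}"

print(bug_title(callstack, "when entering settings"))
# "NullReferenceException in Settings.MyUIFunction when entering settings"
```

The extraction itself is something a tester does by eye, not by script; the point is that the two pieces of information in a bug title are sitting on the first two lines of the stack.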

Know the difference between warnings, errors, exceptions, asserts and crashes

Not all bugs are obvious. In fact, many bugs are ‘silent’ failures that are only surfaced in game debug logs and don’t exhibit negative behaviour to the player. These often go undetected by testers because they don't know how to identify what is and isn't a bug, only recording the most obvious of player-facing issues. Debug logs also show information about potential failures: things that aren’t currently bugs but could become them. Some teams may want to record these potential issues in the project bug database because they often indicate technical debt or poor code hygiene that will need to be addressed later in development.

By understanding the different types of logs and how each type is used on your project, QA are able to more effectively identify problems when they see them in the log and not rely solely on player-facing behaviour. 

Let's start with three log categorisations: info, warning and error. These are categorisations that developers on your project can assign to anything printed in the game log. The idea here is that:

  • Info logs are purely informational and do not indicate a failure. This category forms the majority of information in any game log. 

  • Warning logs indicate bad practice or hygiene that could potentially result in a failure but currently doesn’t. Depending on the hygiene of your project, warnings may be commonly seen in logs.

  • Error logs indicate that a failure has occurred. That failure may or may not result in a player-facing bug. Errors should be seen only occasionally in the log. 
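These three categories map directly onto the severity levels most logging frameworks ship with. A minimal sketch using Python's standard logging module; the tag and messages are made up, and a game engine's logging API will differ in detail:

```python
import logging

# Configure an output format similar to a typical game log line.
logging.basicConfig(level=logging.INFO,
                    format="[%(levelname)s] [%(name)s] %(message)s")
log = logging.getLogger("Shop")

log.info("Opening shop screen")                  # informational, not a failure
log.warning("Price cache stale, refetching")     # bad hygiene, no failure yet
log.error("Failed to fetch price for item_023")  # a real failure occurred
```

Because the levels are ordered, logs can be filtered to show only warnings and above, which is exactly the ‘skip the info noise’ habit described earlier.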

Every code team owns how they use these categorisations and how much information they choose to log to each category. The QA team should speak to their project code leads to understand the context of how each category is used on the project and use it to guide bugging practices. For example, the team could agree that all errors triggered should be logged as a bug, but warnings can be ignored. This helps the test team identify more bugs, even if they don’t produce negative player-facing behaviour. This greater understanding of common warnings and errors on the project also helps the QA team be more aware of the inner workings of the game, fuelling more specific and technical future test plans.

Exceptions are often reported in the error category and can indicate both fatal (a crash) and non-fatal failures. Testers will experience a fatal exception as a crash but may not notice non-fatal failures at all. However, it’s important for QA to understand that developers add custom exceptions to their code and trigger them intentionally. This is done to escalate awareness of root cause failures: left unchecked, the failure will likely cause more severe knock-on failures later, which are more difficult to trace back to the root cause. The guidance for exceptions is the same as for other errors: QA should seek to align with dev on the bugging policy.
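To illustrate why a deliberately triggered exception helps, here's a minimal sketch. The exception name, game data and functions are invented for the example:

```python
class MissingItemConfigError(Exception):
    """Raised when an item referenced by the shop has no config entry."""

ITEM_CONFIGS = {"sword_01": {"price": 100}}   # made-up game data

def get_price(item_id):
    config = ITEM_CONFIGS.get(item_id)
    if config is None:
        # Escalate the root cause now, instead of letting a missing value
        # leak into later systems and fail somewhere far from the problem.
        raise MissingItemConfigError(f"No config for item '{item_id}'")
    return config["price"]

print(get_price("sword_01"))   # 100
try:
    get_price("shield_07")
except MissingItemConfigError as e:
    print(f"[ERROR] {e}")      # a precise, searchable line in the game log
```

A tester who recognises that error line can title the bug after the root cause (the missing config) rather than whatever odd shop behaviour it eventually produced.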

Asserts are also added intentionally into the code, but unlike exceptions, asserts are always fatal and will stop game execution. I’ve heard these referred to as ‘tech hangs’ by some testers, indicating that they are triggered by the code. Since they stop execution, testers will always perceive these as crashes and so should be able to identify them easily. However, it’s still useful to understand that an assert has likely been added to stop something worse from occurring, like player save file deletion or corruption, which is far more permanent than a one-off crash. Understanding the difference between asserts and other types of crashes allows the test team to log more specific bugs by naming the assert in the bug title and not simply saying "a crash occurs".
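A minimal sketch of the same idea for asserts; the save-file scenario and names are invented, and in a real engine the assert would halt the game rather than be caught and printed as it is here for demonstration:

```python
def overwrite_save(slot, data):
    # Halt immediately rather than risk corrupting the player's save file:
    # a one-off crash is recoverable, a destroyed save is not.
    assert data, "Refusing to write an empty save file"
    return f"wrote {len(data)} bytes to slot {slot}"

print(overwrite_save(1, b"player-progress"))
try:
    overwrite_save(1, b"")
except AssertionError as e:
    print(f"[ASSERT] {e}")   # the message testers can name in the bug title
```

Naming that assert message in the bug title tells the developer exactly which guard fired, instead of the generic “a crash occurs”.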

Wrapping up

These are just some examples of things that QA can do to teach themselves to be more technical in their work and benefit from it. I’ve chosen these examples because they don’t require formal training or that the person learn how to write code. Instead, knowledge and competence in these semi-technical areas can be improved simply by allocating time to be curious, to explore and inspect these areas. The trick is to try to understand a technical area, then confirm your understanding with a developer (or other expert) you’re working with so you know you’re on the right track. If QA analysts ask small ‘distributed’ questions over time, they’re much more likely to gain knowledge incrementally and successfully; as opposed to asking “can you teach me how to use the game editor” which is a much larger question and can’t be answered easily or quickly.

Teach yourself how to be more technical and your test plans will be better for it.

