Real QA contributions to quality at source

Quality at source has been a buzz phrase for as long as I've been in the industry. Articles, talks, job postings and social media comments are awash with vague recommendations that project efficiency is tied to achieving better ‘quality at source’, whatever that means. As if repeating the words again would give them some meaning. Quality at source. The phrase has become so trite that we've even renamed it ‘shift left’. Nice work. A slow clap for whoever came up with that one.

The goals and intent behind the phrase are still valuable and relevant to modern game development. The topic has just lost the pragmatic details that would help a team take concrete steps towards improving their own quality at source. For example, what can a single QA analyst do in their day-to-day work to practise techniques in this area? Embedded test teams exist for a reason, so clearly companies see the value in having QA close to the source of development, even if they don't know why (yes, quality at source, or QAS, goes beyond just the QA team; more on that shortly).

In this article I'll give a definition of the term and provide examples of where I've been able to contribute to quality at source, often identifying bugs before a single line of code is written. While the full scope of the quality at source initiative extends beyond the test team, I'm going to focus on contributions from QA, since my audience here is game test teams.

What is quality at source?

If you're not familiar, the term describes any measure which improves or prioritises quality at the earliest possible stage of development. Most QAS activities are not thought of as 'testing' at all, because they can be conducted by all disciplines and, unlike most testing, take place before the code is running.

QAS activities focus on proactive bug prevention instead of bug detection.

The scope of QAS is wide and includes activities such as design reviews, technical design reviews, static analysis of code, branching strategy, branch merge approval processes, test driven development, project coding standards and asset pipeline standardisation. This list isn't exhaustive; QAS covers any measure taken to prevent mistakes or errors. You'll also see in my examples below that QAS successes don't always come from a formal process; they can come from casual, unplanned communication.

Astute readers may notice I’ve included some items in that list that occur after the code is written (like the branching strategy). I’ve included these because I think it’s productive to think of quality at source instead as ‘quality as close as possible to source’. Preventing merge mistakes with a solid process is still preferable to finding merge bugs through testing later on.

Without further ado, here are some real project examples of bug prevention that were only possible by having QA members like myself close to the source of development and involved early.

Rebalancing the hard currency cost to re-roll rewards

This example comes from a change that I wasn’t even working on, but I caught a conversation in Slack at the right time. This was a mobile project which ran leaderboard events every weekend. At the end of the event, the player received their semi-randomised rewards, and we had a feature that allowed them to pay hard currency to re-roll their rewards if they didn’t like the first selection. Our product team wanted to reduce the cost of re-rolling rewards to see if it enticed more players to spend their hard currency. This change wouldn’t have required any new code and was already configurable via remote JSON data. The original cost was 300 crystals and the plan was to reduce it to 200 and run it as an AB test. Simples.

However, I’d spent a good deal of time testing these events and changing the remote data to make sure the game was correctly referencing it and updating accordingly. The current functionality actually allowed the player to re-roll their rewards multiple times, and with each re-roll the cost would reduce. The data behind it was a list, something like this: (300, 150, 100, 50). The code would accept a list of any length and would iterate through it for each reward re-roll: the first re-roll costs 300, the second 150, and so on. Once the end of the list was reached, the player could re-roll indefinitely by paying the final value in the list.
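To make that behaviour concrete, here's a minimal sketch of that kind of cost lookup. The names and data are purely illustrative, not the project's real config or code, but the shape of the logic is the same:

```python
# Illustrative sketch only: re-roll costs driven by a remote config list.
REROLL_COSTS = [300, 150, 100, 50]  # cost of the 1st, 2nd, 3rd, 4th+ re-roll

def reroll_cost(reroll_index, costs=REROLL_COSTS):
    """Return the hard currency cost of a re-roll (0-based index).

    Once the index runs past the end of the list, the final value
    repeats, so the player can keep re-rolling at that price.
    """
    return costs[min(reroll_index, len(costs) - 1)]

# The proposed change was a single value of 200. The first re-roll gets
# cheaper (300 -> 200), but every later re-roll gets MORE expensive
# (150/100/50 -> 200), the opposite of the intended effect.
for i in range(5):
    print(i + 1, reroll_cost(i), reroll_cost(i, [200]))
```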

Entering a single value of 200 would have actually increased the re-roll price for any players who hit the button multiple times. Since this was a live ops adjustment, it didn't go through the usual development pipeline; it was being discussed in an instant message chat between product and design, who didn't have access to the raw data. I entered the chat and corrected the assumptions about the current re-roll logic before any changes were made, and got a few “Nice catch!” responses from the team.

The course-correction prevented the new 'bad' design from being implemented in the AB test, and saved the time that would have been wasted setting up and analysing the new data before the mistake was found later in the pipeline.

While this scenario might sound like nothing to brag about, small QAS wins usually are. They're small and consistent contributions that prevent mistakes and assumptions from escalating. 

Closing a long-standing AB test and migrating players to the winning variant

Mobile free-to-play games often exist in many states simultaneously, thanks to overlapping AB tests searching for the most effective balancing and functionality. This example comes from such a project.

We had players in three very different variants of a feature, different enough that the player save data for each group had a different structure. The product team had identified the winning variant, and all we needed to do was migrate the remaining players into the winning config. Easier said than done. The migration required a new client release which would run the migration code on game launch.
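To picture the kind of work being discussed, here's a purely hypothetical sketch of launch-time save migration. The variant structures, field names and version numbers are invented for illustration and aren't the real project's data:

```python
# Hypothetical sketch of migrating older AB test variants to the winner.
WINNING_VERSION = 3  # structure used by the winning variant

def migrate_save(save):
    """Run once on game launch: bring an older variant's save data
    up to the winning variant's structure."""
    version = save.get("feature_version", 1)

    if version == 1:
        # oldest variant stored a flat list of rewards
        save["reward_tiers"] = [{"tier": 1, "rewards": save.pop("rewards", [])}]
        version = 2

    if version == 2:
        # middle variant was missing the re-roll counter the winner relies on
        save.setdefault("reroll_count", 0)
        version = WINNING_VERSION

    save["feature_version"] = version
    return save

# A question worth asking before a line of this is written: what happens
# to a save written by a client that pre-dates all of the variants, or to
# a player who skips several releases in a single update?
print(migrate_save({"rewards": ["coins", "gems"]}))
```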

I had owned the testing for the original AB test, so I knew the data setup for each variant and the deltas between them. The developer working on the migration code started chatting to me in an instant message, more speaking his thoughts out loud than actually asking me a question. I was the rubber duck, there as a sounding board. After some time he was pretty sure he had the logic figured out. However, I had been following his logic closely and asked what would happen in a specific scenario where a player updated from the old client to the new one carrying the migration logic. A failure in the migration, it turned out. Time to go back to the drawing board.

Through discussing the solution out loud we had identified a bug simply by thinking about the problem really hard! No code had been written yet, but we were able to explore how it would work and mentally follow the potential outcomes. In some ways, this was a casual form of test driven development: I was taking the initial design the developer was describing, thinking of tests I would run that might expose bugs, then posing them as questions.

As for quality at source, consider the alternative path: the hours spent implementing the code, writing and running tests, finding and logging the bug, then fixing it. Plus all of the admin and the other team members involved in triaging the bug and tracking it through the production process. This is exactly why QAS is so powerful and sought after.

Testing design documents

Design reviews are common practice on every game team that I’ve worked on, but most are intended to capture feedback on the design, KPI impact or technical feasibility. On one project, the design director was in the habit of sending her UI wireframes and flow diagrams to the embedded QA team for review first, because she knew that we had the best knowledge of the player-facing app. We were able to identify popups, menus and navigational flows that were missed in the wireframes and provide quick feedback so that they could be included. As with most projects, there were flows and popups which weren’t documented and were triggered by interruptive or edge case scenarios instead of standard button-pressing navigation: network disconnection popups which could trigger at any time, various error popups that would trigger for specific failed actions, and unusual navigational flows that weren't displayed in the menu as buttons but could be triggered from interruptions like multiplayer invites.

The design director was keen to capture all of these nuances when designing new features and UI changes, allowing the wireframes to be far more explicit and complete. Without this, some missed areas might have been caught by devs during implementation, but there was a real possibility that some could be missed entirely. If testing later found entire menus, flows or popups not implemented, that would have been far more disruptive to the flow of development than a simple bug, because the code team would only be expecting minor fixes. It would also have left tests blocked and needing to be re-run once the missing development was added.

The focus on getting the designs and wireframes complete the first time saved a lot of time later in development. It was also a good lesson in what you define as 'complete' for a design.

As an addendum to this story, other more recent projects have used 'stressing' meetings to review design documents. These were carried out as separate meetings with each discipline, both to allow each person the time to provide their input and to capture different types of feedback: the code team interested in technical feasibility, the product team in feature KPI performance, and the QA team in inconsistencies, incorrect assumptions and potential risks. While holding several meetings was a lot of work for the designer of a feature, the feedback and discussion raised important questions before code implementation began.

Conclusions

While this is just a short list of more recent examples, I wanted to provide these pragmatic details because I think real stories are far more valuable to you, the reader, than pages on the theory of quality at source. You want something you can take into the office tomorrow and begin looking for opportunities to use. I hope these give you some clues on the kind of things you can look out for or encourage within your teams. It could be only within your feature subteam, or just between you and one other team member. As the examples show, you don’t even need to be assigned ownership of the feature to contribute.


While project-wide quality at source initiatives are the responsibility of QA leadership and the other discipline leads, QA analysts and other team members can still contribute to greater QAS on a smaller scale. You don’t need to wait to be told to do these things. Assess your project’s process before and during code implementation, then consider where you can contribute. You can suggest changes to introduce more reviews or other steps into your feature process, but you don’t even need to do that. Simply review designs and involve yourself in conversations. If your contributions add value, other team members will see it and follow your example.

A final note on this topic. Many quality at source wins rely on a close and healthy relationship between team members, with continuous and open communication at the core of that relationship. This is a strict prerequisite. Fostering and improving this is a bigger topic, but it’s worth calling out here.

Enjoying my content?


Consider buying my book to keep reading or support further articles with a small donation.
