Game Quality Forum ‘23 : MGT Coverage

I recently flew to Toronto to visit some colleagues and was reminded that cannabis is legal there. On seeing my floppy hair and -clearly- chill vibes, the airport security guard gave me a thorough pat down and drug swab on the return journey. Fast forward three weeks and I headed out to Amsterdam to attend this year's Game Quality Forum conference, another location famous for its relaxed drug laws. The conference was nothing but fantastic, but I'm pretty sure my flight details are now flagged on a drug smuggling database somewhere.

"Frequent trips to Canada and Netherlands, sir? What's the reason for your visit sir? Step into this room please sir" 

Cue imminent and invasive body search.

Travel drama aside, how the hell was the conference? Very good. Actually. 

This year's conference was much bigger than in previous years. We had a larger sponsor space with dedicated tables and a wider spread of sponsors, now including automation framework companies and other specialties beyond the usual localisation and testing providers. We had competitions. We had giveaways.

The talks were more varied and insightful too. We had more content from automation engineers, build engineers, developers, automation framework founders and game engine experts, as well as content from DEI experts and employee resource group organisers. On top of this, there was far less sales-y filler from sponsors, where the majority of the talk summarises how big and brilliant their company is.

Also, tasty pilsners, check. Off site party with arcades and laser tag, check. Sunshine, check. 

Day 1 workshops

The workshops are always a favourite of mine. Running 90 minutes each, they provide an interactive deep dive into a specific topic, allowing the speaker to hand-hold you through the content.

Getting started with automated testing using GameDriver

  • Shane Evans, Co-founder and CPO, GameDriver

  • Robert Gutierrez, Co-founder and CEO, GameDriver

  • Ricardo Hoar, Engineering Manager, GameDriver

  • Ethan Evans, Developer, GameDriver

The first workshop this year was an introduction to the automation framework GameDriver. They walked us through the framework: it includes support for both the Unity and Unreal engines, as well as integration with various VR peripherals. It appears to be compatible with mobile, PC and consoles, but we didn't hear specifics about how each of those integrations works. For mobile at least, the framework (like others) leans on Appium to interact with the device and launch the app before the in-engine GameDriver automation can get started.

We saw some nice Unity editor tools for searching and printing the hierarchy path for objects, as well as a hierarchy path debugger. In addition, we saw a recording feature which captures input in the editor and generates a basic automated script. This wasn’t full capture-playback but is intended to provide a base for writing an automated test by cutting out some of the manual effort. All in all, some nice additions.

A large part of the talk also focused on their method for querying objects (‘HierarchyPath’) which includes a multitude of options for finding the object you want to interact with. They correctly called out the difference between interacting with objects which have a static path and can be hardcoded in the autoscript vs. objects which are created at runtime and may be one of many clones of that same object.

I have yet to use the framework myself so I won't comment further, but what was presented looks very promising for Unity developers. More interactivity would have been nice; the team did plan for the workshop to be interactive, but we ran out of time on the day. A minor point in an otherwise insightful workshop.

Applying modern software QA practices to the game industry

  • David King, Director of Technology, EA

  • Adrian Maroiu, Development Director, EA

The second workshop of the day was a series of five lessons in applying modern software practices to games, centred around unit testing and content validation within Unity. David King from EA made a small game and prepared five Unity projects for us to download and follow along with the five lessons of the workshop:

  1. Writing unit tests in Unity using Unity Test Framework

  2. Tackling how to safely unit test legacy code which violates SOLID code principles

  3. Using interfaces to create stubs of singleton classes. Avoiding the problem where unit tests need to load dependent objects to run the test

  4. Test Driven Development (TDD) using unit tests in Unity

  5. Content validation checks against objects and files in the project

The workshop required no coding skills and had us follow along with David, uncommenting parts of the code to see a before-and-after result for each lesson. It was explained that we were following extremely simplified examples of each lesson, which had the benefit of making them massively easier to understand. I managed to follow along in real time throughout the 90 minutes and saw the tests running locally on my machine - magic! Most folks learn better by doing, and this workshop was no exception. Kudos to David for writing the game, setting up each lesson and getting through it all in the time slot.

What of the lessons themselves? Here's a brief breakdown of the solutions to the five lessons:

Solution 1

We defined our tests in Unity by importing the UnityEngine.TestTools namespace and adding the [UnityTest] attribute to identify our test methods. We then used the Test Runner window within Unity to view and execute our unit tests. As an aside, unit tests are small tests that exercise specific game functions in isolation. They don't require the game to be running; instead they execute the code directly and 'statically'.
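As a rough sketch of what that looks like (the Health class and its members are hypothetical, not from the workshop), a Unity Test Framework test file might read:

```csharp
using System.Collections;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.TestTools;

public class ExampleTests
{
    // Plain edit-mode unit test: executes the code directly, no game loop.
    [Test]
    public void Health_IsReducedByDamage()
    {
        var health = new Health(100);   // hypothetical game class
        health.TakeDamage(30);
        Assert.AreEqual(70, health.Current);
    }

    // [UnityTest] runs the test as a coroutine, so it can span frames.
    [UnityTest]
    public IEnumerator Rigidbody_FallsUnderGravity()
    {
        var ball = new GameObject("ball");
        ball.AddComponent<Rigidbody>();
        float startY = ball.transform.position.y;
        yield return new WaitForFixedUpdate();  // advance one physics step
        Assert.Less(ball.transform.position.y, startY);
    }
}
```

Both styles show up in the Test Runner window (Window > General > Test Runner) once the file lives in a test assembly.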

Solution 2

We learned that unit tests require code implemented according to SOLID principles, one of which says that every method should have a single purpose; this is often violated in older legacy code by large, complex functions that perform many different operations. In this lesson we learned how to set up an intermediate test layer that can switch between testing legacy code functions and new implementations in parallel.
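One common shape for such a test layer, sketched here with made-up names rather than the workshop's actual code, is a seam interface with a legacy-backed and a new implementation that the same NUnit fixture can exercise:

```csharp
using System;
using NUnit.Framework;

// Stand-in for the old monolithic code (illustrative only).
public static class LegacyCombat
{
    public static int ResolveTurn(int baseDamage, int armour)
    {
        // imagine 300 lines handling input, damage, audio and saving...
        return Math.Max(0, baseDamage - armour);
    }
}

// Seam interface: tests and callers depend on this, not the monolith.
public interface IDamageCalculator
{
    int Calculate(int baseDamage, int armour);
}

// Wraps the legacy do-everything function behind the seam.
public class LegacyDamageCalculator : IDamageCalculator
{
    public int Calculate(int baseDamage, int armour) =>
        LegacyCombat.ResolveTurn(baseDamage, armour);
}

// Clean re-implementation with a single purpose.
public class NewDamageCalculator : IDamageCalculator
{
    public int Calculate(int baseDamage, int armour) =>
        Math.Max(0, baseDamage - armour);
}

// The same fixture runs against both implementations, so the new code
// is shown to match legacy behaviour before the old path is deleted.
[TestFixture(typeof(LegacyDamageCalculator))]
[TestFixture(typeof(NewDamageCalculator))]
public class DamageCalculatorTests<T> where T : IDamageCalculator, new()
{
    [Test]
    public void Armour_ReducesDamage() =>
        Assert.AreEqual(5, new T().Calculate(10, 5));
}
```

Running the fixture against both types gives you a safety net while the legacy path is strangled out.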

Solution 3

Here we discussed that many methods require a host of dependencies from the game in order to execute, and trying to include those dependencies in unit tests is both unwieldy and widens the scope of the test beyond the target method which is the focus of the test. To combat the dependency issue, we learned how to create an interface which stands in for the original class, avoiding the need to import it. We then set up some fake data within our stub implementation in order to run the test.
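A minimal sketch of the pattern (IAudioService and the surrounding names are illustrative, not David's actual code):

```csharp
using NUnit.Framework;

// The interface hides a singleton that would otherwise drag scene,
// audio and save-game dependencies into the test.
public interface IAudioService
{
    void PlaySound(string id);
}

// Production code depends on the interface, not the concrete singleton.
public class PickupHandler
{
    private readonly IAudioService _audio;
    public PickupHandler(IAudioService audio) { _audio = audio; }

    public void OnPickup() => _audio.PlaySound("pickup");
}

// Test stub with fake data: nothing from the game needs to load.
public class StubAudioService : IAudioService
{
    public int SoundsPlayed { get; private set; }
    public void PlaySound(string id) => SoundsPlayed++;
}

public class PickupHandlerTests
{
    [Test]
    public void OnPickup_PlaysExactlyOneSound()
    {
        var stub = new StubAudioService();
        new PickupHandler(stub).OnPickup();
        Assert.AreEqual(1, stub.SoundsPlayed);
    }
}
```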

Solution 4

In this lesson we followed the core flow of how an engineer would write unit tests first and then implement a feature until their unit tests passed.

Solution 5

Here we learnt how to add new menu items to the Unity editor using the [MenuItem("Menu/Sub menu")] attribute, allowing team members to execute debug tools and validation tests. We also learnt the basics of validating object attributes, surfacing failures as errors in the Unity log and triggering those validations to run automatically at compile time.
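As a rough sketch (the menu path, folder and the Collider rule are invented for illustration), a validation menu item plus a compile-time trigger could look like:

```csharp
using UnityEditor;
using UnityEngine;

public static class ContentValidation
{
    // Adds "Tools > Validate Content" to the Unity editor menu bar.
    [MenuItem("Tools/Validate Content")]
    public static void ValidateContent()
    {
        // Illustrative rule: every enemy prefab must carry a Collider.
        foreach (var guid in AssetDatabase.FindAssets("t:Prefab", new[] { "Assets/Enemies" }))
        {
            var path = AssetDatabase.GUIDToAssetPath(guid);
            var prefab = AssetDatabase.LoadAssetAtPath<GameObject>(path);
            if (prefab.GetComponent<Collider>() == null)
                Debug.LogError($"Validation failed: {path} has no Collider");
        }
    }

    // Re-runs the checks automatically after every script compile.
    [InitializeOnLoadMethod]
    private static void ValidateOnCompile() => ValidateContent();
}
```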

It's difficult for me to share the full depth of these lessons through words, but I'd highly recommend these workshops if you're tempted to attend in the future. 

Day 2 talks

9 years, 11 months and 20 days of automated dev QA

  • Matt Ditton, Mighty Games

  • Ben Britton, Mighty Games

Did I mention this year's conference was heavily focused on automation? No? Well it was!

We got a fresh perspective from the folks at Mighty Games who have spent many years as a support team building automation setups for various small dev teams. We quickly discovered that doing the same thing repeatedly for lots of different projects tends to mature your process and make you pretty damn good at it. We saw this with Mighty Games. 

One of the common claims about automated testing is that you don't get a quick return on your investment and that it's best suited to large and long-running projects. Mighty Games proved that wrong by exhibiting small and effective automation setups for small projects, some of which had a team of only a few people.

They recalled a story where they built a 'test wall' on one of their own projects, which was a wall of screens showing the game being played in various ways, and was always on. New builds were automatically deployed to the wall and auto played. Devs on the team would see and walk past the wall all day, often noticing bugs which they would then fix immediately. The automation was less about verifying logic and more about exposing and presenting the game in different states. 

They explained their approach to helping new projects: the first port of call would be to get every new build automatically deployed and launched on a device. Then they would build autoplay automation so that the daily builds were played and shared on a screen. I really like this pragmatic and low-cost approach to automation. It was a nice example of the various forms that automation can take.

Other nice ideas from their talk included a single screen which showed the same game flow being automated in every supported language across multiple mobile devices, spliced into a single view. This captured screenshots to hand to the localisation test team, saving many hours of manually navigating through the game to find the various localised strings. The multi-language comparison made it easy to see how any single text string appeared in other languages, aiding loc testing.

Their approach can be summarised as: be lazy, and build things so you don't have to do things manually. They also admitted that they found unit tests heavyweight and unhelpful.

A nice talk and a fresh perspective. Now I just want a test wall of my own…

———

The conference boasts three tracks: QA, localisation and customer service. Naturally I followed the QA track, but the agenda showed some great content from other folks. I spoke to peeps from Player XP, who have a solution for collecting community sentiment from across the web on specific topics, using AI to reach websites which don't have direct integration with existing tools. With the move to games-as-a-service, community management applies to more games than ever before. I know our game has more limited reach, only collecting data through direct player questions on HelpShift and through app store reviews, so new tooling like this is less of an unwelcome sales pitch and more of genuine interest.

We had various discussion panels throughout the day, some of which I was involved in, so naturally those were shit. 

I jest. My contributions were shit. Everything else was gold. 

It was great to hear tales from studios around the UK. I wanna give a shout out to a few familiar voices we've heard from in previous conferences: Damien Peter, Head of QA at Sports Interactive, and Carla Condemi, QA and Localisation Manager at Dovetail Games. I've got a lot of time to listen to what these guys have to say. Their stories and insight throughout the conference were a very welcome addition.

If you'll allow it, this leads me neatly down a rabbit hole on the benefits and enjoyment of networking. There were a lot of other UK industry folks who didn't present but whom I've met before. Kat Faiers and the crew from Frontier Developments are a great bunch and arrived with nothing less than a full squad this year. Strong play! I also met some of the team at Sumo Digital, some of whom I'd spoken to online a bit. Shout out to Alex Dorans from Sumo. I spoke with these guys offline and their stories and insights were also great to hear. It turns out you don't need a microphone to make a difference!

Panel: Diversity, equity and inclusion - Pressing play on ambition into action

  • Marina Ilari, CEO, Terra Localisations

  • Tamara Tirjak, Head of Localisation, Frontier Developments

  • Marina Ilinykh, Supervisor, Localisation Operations, Riot Games

  • Jonathan Garcia, Manager - Inclusion, Diversity & Purpose, Discord

We concluded day two with a panel that combined the conference tracks again into a single room. I've seen Tamara talk before and she has a lot of value to add to topics like this, making this a very insightful panel. Particularly valuable was hearing how DEI is viewed from different regions worldwide.

The other point I'd like to raise here is that I consider myself pretty open and aware of DEI, yet I'm continually educated by experts like these, who contribute fresh perspectives and provide food for thought. Throughout the discussion, I liked that Jonathan boiled the topic down to safety: that anyone should be able to walk out of their door and travel to work safely and without fear, something which is not possible for some minorities in parts of the world. I'm soberly reminded of the recent laws in the US.

A welcome end to the day and a change of scenery from the automation topics working their way into my brain.

———

Coffee turns to beer as we head into the evening of day two. Beer turns to more beer, and even more as we head to the evening party.

No. Sorry. Not a party.

"Sponsored drinks". All civilised like. Definitely not a party. 

I chat to people, drink pilsner, watch volleyball, camp during laser tag and lose at old-school Street Fighter to Amie Lawson, Senior Dev Support from PlayStation. To be fair, I won the second round, so it wasn't a clean sweep. All in all, a fantastic evening. Side note: another team avoiding the stereotypical 'QA' name. I like it. 'Dev support' is pretty accurate in my opinion.

Day 3 talks

Day three brought a new selection of talks, discussion panels and breakout groups. We saw a larger variety of talks in the main track, hearing about effective community management for service-based games as well as the pain points of inaccurate game credits for the many disciplines that work on game projects. The talk gave many examples of localisation and test teams who frequently don't get included in game credits, drawing comparisons with the mature and strict process of producing credits for TV and film.

Since we’re here to talk about QA, I’ll give an overview of two talks that stood out.

Case study: Simple as possible - As a mindset for test automation

  • Maël Nagot, Test Automation Lead, Fatshark AB

  • Bilal El Medkouri, Test Automation Engineer, Fatshark AB

Fatshark is the studio behind Warhammer 40,000: Darktide and Warhammer: Vermintide 2. In the talk we heard about their approach to test automation for these projects and how they focused on fewer, higher-quality automated tests in the face of a large and complex project.

They focused their automation efforts on scenarios that were very difficult to test manually, retesting critical crash bugs to ensure they would not regress, and running tests that needed to be run very frequently. Throughout development, the team became bolder about removing tests that were flaky or had become irrelevant over time, freeing up machine time to run other tests and reducing the overall test time.

For clarity, the type of automated tests described here are on-device, end-to-end tests. Some tests they shared were on a single device, while others were across multiple clients communicating together. They gave a hint of unit testing on the team, but it wasn’t the focus of the talk.

The real value I found in this talk was the comprehensive range of screenshots the team shared of their automation frontend, their in-house test management solution. Seeing how the tests were organised, the options for viewing and running tests locally, and the metadata attached to runs was all very insightful. This part of the talk described how the team increased adoption of automation through better Slack reporting and by making it as easy as possible for the wider team to run automated tests locally. They described how people responded more positively to automated alerts and shaming from failed tests than to team members chasing failed tests themselves. They also worked hard to lower the barrier to viewing the details of a failed test and to re-running it locally; both were one-click actions embedded within the automated Slack message.

Their in-house test management tool 'Testify' links to their crash reporting tool 'Crashify' (I sense a theme forming) to provide embedded call stack information, showing all the relevant data in the same place. The screenshots from Testify show all of the automated test runs in red-amber-green coloured rows to show health at a glance. Columns in each test run result show metadata for the run. The tool also allows the user to select subsets of tests to run locally or run on the build machine. 

Unfortunately I can't share it all here, but the deep dive and visual aids really helped explain their setup and provided much food for thought for the attendees. 

Case study: Testing in Nordeus' football engine team

  • Andrija Djuric, Software QA Engineer, Nordeus

  • Svetislav Ponjavic, Senior QA Engineer, Nordeus

The last call for this roundup is this talk from Nordeus on their automation work for Top Eleven, a graphically rich 3D football game on mobile powered by their in-house football engine. The talk centred around testing features built on this engine and the challenges of the non-deterministic nature of each match. Previously, Top Eleven was a simpler 2D game; the business goal was to build a visually stunning experience and grow the audience, with the QA team supporting the increased complexity that came with it.

Their team composition was an interesting and key part of this talk: each of their development pods has a QA engineer who is both a test expert, responsible for the test strategy of each feature, and an automation engineer, capable of writing automated tests for it. This allows the QA team to choose whichever approach is effective to mitigate each risk. During the talk they described some game scenarios which would be very difficult to set up manually within a match. In addition to the player bots being controlled by the game, many actions were coded to occur only a percentage of the time.
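A common way to make such low-chance actions testable, and I should stress this is a generic sketch rather than Nordeus' actual implementation, is to inject the random source so a test can force the rare branch deterministically:

```csharp
using NUnit.Framework;

// Injectable randomness: the game passes a real RNG, tests pass a fake.
public interface IRandomSource
{
    double NextDouble();
}

public class LobDecision
{
    private readonly IRandomSource _rng;
    public LobDecision(IRandomSource rng) { _rng = rng; }

    // In normal play the lob only triggers a small percentage of the time.
    public bool ShouldLob() => _rng.NextDouble() < 0.05;
}

// Test double that always returns a fixed roll.
public class FixedRandom : IRandomSource
{
    private readonly double _roll;
    public FixedRandom(double roll) { _roll = roll; }
    public double NextDouble() => _roll;
}

public class LobDecisionTests
{
    [Test]
    public void LowRoll_ForcesTheLob() =>
        Assert.IsTrue(new LobDecision(new FixedRandom(0.0)).ShouldLob());

    [Test]
    public void HighRoll_NeverLobs() =>
        Assert.IsFalse(new LobDecision(new FixedRandom(0.9)).ShouldLob());
}
```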

The guys from Nordeus shared a story about a feature which allowed the player to lob the keeper with an overhead ball, one of these low-chance actions. They also described a bug where the lob shot was triggered even after the player had dribbled the ball around the keeper. Players discovered the bug and loved it, so the team decided to keep it in the game; regardless, it's a great example of the edge cases and complexities the team had to test for.

I spoke to the team after the talk about how they built their QA team and whether they had trouble finding technical QA to fill their roles. They echoed my own experience: they found it a challenge, had put a lot of effort into university outreach, and had hired engineering graduates who had yet to choose a specialisation. I've heard an increasing number of stories like this across the game testing industry: investing in junior candidates who have the right skills and attitude instead of trying to find candidates already in the roles we want to hire for.

Conclusions

This year's conference was bigger than ever and contained far fewer filler talks and shameless vendor ads, something I really appreciated. There were also a lot of additional talks, discussions and networking which I couldn't include here, as well as everything in the CS and Localisation tracks! Overall a strong play and a lot to learn.

Being held in Amsterdam, the representation from European teams was strong, which made a nice contrast to Qualicon. I got good -different- perspectives and takeaways from each conference.

I hope you enjoyed this overview and that you’ll be tempted to attend in the future. If you do, make sure to come and say hi 🙂

Until next year.

Enjoying my content?


Consider buying my book to keep reading or support further articles with a small donation.

Buy me a coffee