QA plays a sometimes forgotten but hugely important role in making sure the games consumers play are near bug-free and run as smoothly as possible.
As games get bigger and the way developers make them changes – with early access releases and the regular post-launch updates many titles now receive – the QA process has had to adapt.
To find out how developers and quality assurance firms are dealing with these new challenges, we spoke to SCEE director of global first-party QA Dave Parkinson about budgets, new hardware, mobile and games-as-a-service.
Are budgets for QA increasing?
I think what we’re seeing in our business is the increasing polarisation of the portfolio. Previously, in a boxed offline product sense, products were fairly standardised in terms of size – there was not a lot of deviation from one product to the next. Obviously working now on mobile, digital and triple-A, there’s a much greater polarisation. On an individual test budget level and a product-by-product basis, there’s a lot of deviation in between.
But overall, budgets are always a challenge. It’s challenging times. We always have to be as fiscally responsible as we possibly can. This is where things like automation and becoming smarter at how we test, rather than just throwing more and more people at a project, are always going to be key considerations for us.
Do publishers/developers allow enough time and money for QA, or is it still an afterthought?
I wouldn’t say it’s necessarily an afterthought. It’s a key component in terms of the overall development cost. The way we budget is that we essentially provide an estimate for a studio, and there’s a two-way collaboration in terms of how that goes about. It’s not set by development, it’s set by the QA organisation. So there’s still a collaboration, it’s still ultimately their profit and loss that’s impacted, but it’s a collaborative effort.
What challenges do QA teams face when new hardware comes out? How have PS4 and Xbox One impacted how QA teams work?
We’ve obviously been through many hardware iterations over the years. It’s always a challenging situation for us, but it’s hugely motivating for the staff. It evolves the way we do things, gives staff new opportunities, gives the teams a greater insight into newer elements of testing.
Obviously the initial investment in hardware is a budgetary implication for us. We have to then do a lot of fact-finding in terms of getting under the hood of what the hardware can actually do and how the underlying firmware is going to change the way we want to develop and test games.
In terms of training staff, that’s another key component. The thing to understand is hardware is very much an organic thing – it’s no longer a black static box that is never upgradeable. In that regard, with every new firmware release, there’s always going to be new functionality that ultimately interacts with the software in different ways. That increases exponentially the level of interconnectivity and complexity of the software that we test.
What about newer platforms like mobiles, tablets and even virtual reality headsets? Is the testing process different?
In some respects, software is still software. You still have some of the same considerations, same challenges, regardless of the medium or platform.
In terms of things like virtual reality, again, it’s still software. There will still be a lot of the common issues that you’ll have to look out for and report on, but it’s also getting into things such as the actual physical environments you’re involved in.
You have to think about the test labs and how they’re laid out, and even things like health and safety considerations to ensure the safety of staff, because you’re dealing with products and games so immersive that reality could almost become secondary. You have to watch out for the physical wellbeing of your staff.
It’s issues like that you’ve got to consider. We went through a similar process with stereoscopic 3D. Obviously this is taking that immersion to another level, so you have to be aware of that.
How do you think relationships between publishers, developers and QA teams are changing?
I think we’re seeing a trend in the industry of greater integration of QA and testing within development itself. We’re potentially moving away from the premise of a completely independent, centralised QA organisation. We’re seeing tighter, more intimate levels of engagement with development and test. Because really testing is a discipline of product development.
I think we’re in a healthier state than we have been historically. I’m confident about the way forward and the way that testing will be viewed by development across the industry.
Another element is... a lot of QA organisations have historically tried to act as a quality gate. That may have been suitable at a certain point in time in the past, but what we need to move towards is very much a partnership: greater integration and convergence between development and testing, making them one and the same in many respects. As far as a test organisation’s remit is concerned, it’s principally there to provide information that’s going to support stronger, better business decision-making across the organisation.
A lot of games are developed as software-as-a-service. How does that change how QA and localisation occurs?
It becomes a lot less deterministic in terms of lifecycles. That presents budgetary and resource challenges as well. Ultimately, it’s about supporting a very agile process – not just from a development perspective but also from the perspective where you’re able to understand what the consumers’ activity actually is, what their needs and wants are, and be able to rapidly, dynamically put out content that is ultimately designed to support the community you’re trying to aim for.
From a QA perspective, that obviously has an impact. We need to make sure we complement the very agile, very dynamic release pipeline – that’s everything from resourcing and release management to deploying new content. It needs to be a very different pipeline than historically has been seen with boxed offline products in the past.
In terms of free-to-play, there’s an argument that the quality bar should be as high, if not higher, than you’d apply to a triple-A product. If you don’t take the quality of that product seriously, it could have a detrimental impact on the revenue you’re expecting from it.
How does the process differ between digital games and boxed games? Are there different challenges that each format presents?
Well, in many cases you’re seeing examples where you’ll do both. It’s essentially the same software under test, but it’s the deployment and the release elements that are different. From an operational business perspective, that’s where a lot of the really profound differences are: the back-end.
Things like release management, platform standards and certification processes change as a result of having a digital rather than a boxed product, but generally it’s still a game that we ultimately have to test. It’s the release mechanism that varies more than the QA process per se. But I think what the digital market really has allowed for is, like I said before, greater polarisation in your portfolio. You wouldn’t see a lot of the smaller, very casual games on more traditional gaming platforms that you now see on mobile and tablets.
How easy is it to strike a balance between upskilling your workforce to handle QA and budgetary concerns?
Obviously, there’s always going to be a need to provide return on investment, particularly if you’re looking to bring in external hires with very special skillsets that you don’t possess internally. But it’s also a challenge to continually upskill your internal and existing workforce. Things like coaching, mentoring and leadership don’t really cost anything other than time.
What you’ve also got to do is understand all employees as individuals and get a clear understanding about what their career aspirations are so that you’re not investing in training that doesn’t support what their needs or your needs are. It’s about getting smarter in how you pinpoint training.
For example, a thing we initiated within Sony is the internationally recognised software testing qualification – the ISTQB Foundation in Software Testing. It obviously comes at a cost, so what we’ve established is a means for any staff to apply for a mock exam. If they show an appetite and an ability to pass that test, they’ll be put forward for the actual examination process. It’s about separating those who have a genuine interest or need of these things, and those that maybe don’t.
Internally, you can also look at internships from QA into software development disciplines and production roles as well.
Are there any areas of QA that developers aren’t aware enough of?
Testing is a discipline of product development, but in many respects it is treated in this industry – and probably others as well – as a process that takes place towards the end of a product’s development cycle. I think studios are now working in a much more agile, very iterative methodology, and testing needs to be part of that.
I’d like to see testing taken more upstream and become a true lifecycle activity of product development. Budgets are always going to be a consideration, but you want to find defects as early as you can so they’re easier to fix. If there’s one desire I have right now, it’s that testing is evangelised and advocated right across development. Quality assurance is a shared responsibility, not that of a central organisation.