Pokémon Go has, in many ways, united the world. Players across the globe have enjoyed the thrills of heading out into the wild, hunting down Pokémon and hatching eggs in their efforts to ‘catch them all’.
They’ve also probably experienced the disappointment of server downtime, crashes and other annoying technical glitches.
Although it would be churlish to blame Niantic Labs for these problems (after all, how many developers release a game that becomes more popular than Twitter in under a week?), Pokémon Go’s problems have reminded the industry of just how challenging it can be to keep a game stable when millions of players are online at the same time – especially now that games increasingly require constant connectivity and more complex backends.
The most obvious technical problem that has beset Pokémon Go has been server downtime. While this has been an on-and-off problem, Saturday July 16th saw the most notable period of downtime.
After working flawlessly in the UK during the morning, it completely fell over in the afternoon and resolutely refused to let players back in for the best part of six hours.
At the time, it was rumoured that the game had fallen victim to a Distributed Denial of Service (DDoS) attack from a hacking group.
But in all likelihood, the real reason it went down was because the game launched in an additional 26 countries at the same time – adding millions of players to the game within a few hours.
On the one hand, this was obviously a good thing for the developer. Getting the best part of a hundred million people to organically download a game over the course of a few hours is a dream scenario for practically any mobile gaming business.
On the other hand, the rush precipitated the server crash. Whichever backend service Niantic Labs used to support the game did not scale up at the same rate as player demand, causing a complete technical whiteout.
What could Niantic have done differently then? The answer isn’t totally clear, as just about any service provider would struggle with what must rank as an unprecedented amount of traffic heading its way.
Other developers looking to prepare for a surge in demand and usage post-launch should make sure that when load hits, their game will still be able to support it.
To do this, a stable foundation should first be laid on one of the cloud platforms that support gaming backends, such as Amazon Web Services, Microsoft Azure or Google Cloud Platform.
This can then be built upon with a scalable software layer that is able to withstand load well beyond expected levels.
Architecture like this allows a game to scale steeply from zero to a million users, and extensive load testing ahead of launch can prepare for that level of usage – meaning downtime like Pokémon Go’s could potentially have been avoided.
Even if it couldn’t have stopped the downtime completely, it’s worth considering platforms that make scaling up simple, but also offer a higher tolerance to failure, just in case you have a hit on your hands.
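To illustrate the load-testing idea, here is a minimal sketch in Python. The backend, its capacity limit and the endpoint name are all hypothetical stand-ins – the point is simply that firing far more concurrent "logins" than you expect at launch reveals where requests start getting rejected:

```python
import concurrent.futures
import threading
import time

class FakeBackend:
    """Hypothetical stand-in for a game backend with a fixed session capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._active = 0
        self._lock = threading.Lock()

    def handle_login(self):
        # Reject the request outright if too many sessions are in flight.
        with self._lock:
            if self._active >= self.capacity:
                return False  # over capacity: the player sees a failed login
            self._active += 1
        time.sleep(0.05)  # pretend to do some per-session work
        with self._lock:
            self._active -= 1
        return True

def load_test(backend, num_players, concurrency=50):
    """Fire num_players concurrent logins; return the fraction that succeed."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: backend.handle_login(), range(num_players)))
    return sum(results) / num_players
```

Running `load_test` with a player count well above the backend’s capacity shows the success rate dropping below 100% – exactly the kind of failure you want to discover in a test harness rather than on launch day.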
Beyond the frustrations with the server, there were two further technical problems that faced players after launch.
The first is app freezes. These have often occurred when a player is catching a Pokémon – forcing a trainer who has just caught a Gengar to close and reopen the frozen app and pray that the Gengar sits in their caught Pokémon list.
While the cause of the freezes is hard to pinpoint, players who have lost Pokémon they caught before a freeze could be suffering from problems with the game’s cloud save system.
Developers faced with this problem could solve it by making sure that the link between the game state on the client (where, in this case, Pokémon data could theoretically be stored) and the player’s server-side state is tolerant of failure.
When a player catches a Pokémon, the client should be able to synchronise this achievement with the server. However, many uncontrollable variables such as network connectivity can prevent this from happening. In this case the client should store the achievement locally and resend it once the connection has been restored.
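That pattern is essentially a client-side outbox: record the catch locally first, then drain the queue whenever the server is reachable. A minimal sketch, with hypothetical class and callback names standing in for a real game client:

```python
class CatchClient:
    """Client-side outbox: catches are queued locally, then synced to the server."""
    def __init__(self, send_to_server):
        self.send_to_server = send_to_server  # callable; may raise ConnectionError
        self.pending = []   # catches not yet confirmed by the server
        self.synced = []    # catches the server has acknowledged

    def catch(self, pokemon):
        # Record locally first, so a failed sync never loses the catch.
        self.pending.append(pokemon)
        self.flush()

    def flush(self):
        # Retry everything still pending; stop at the first network failure.
        while self.pending:
            try:
                self.send_to_server(self.pending[0])
            except ConnectionError:
                return  # keep it queued; try again when connectivity returns
            self.synced.append(self.pending.pop(0))
```

With this in place, a mid-catch freeze or dropped connection leaves the Gengar safely in the pending queue; the next successful `flush()` – on reconnect or app restart – delivers it to the server.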
IAP Purchases Lost
Lastly, and possibly most severely, some Pokémon Go players who purchased PokéCoins to help them in-game have reported that their purchases have not appeared in their inventory.
This is a big deal. If a consumer pays for premium currency and it does not appear in their virtual wallets, they will feel ripped off.
And if they do, it makes it more likely they will churn and that they will complain – potentially in public – about their treatment.
But what’s causing this and how can it be solved? The most likely reason it is happening is that an intermittent network connection fails to process the IAP request properly – registering the transaction, but not delivering the virtual currency to the player.
The best way to solve this problem is to store unsent IAP requests on the user’s device and deliver the currency once the connection is restored. That way the consumer definitely still receives their purchase, preventing them from feeling ripped off by the game.
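A sketch of that idea, with one important addition: each purchase carries a stable transaction ID so the server can de-duplicate retries and never credit a player twice. All names here are hypothetical, and a real implementation would verify the platform’s billing receipt server-side:

```python
import uuid

class IapQueue:
    """Store unsent IAP grants locally and redeliver when connectivity returns."""
    def __init__(self, deliver):
        self.deliver = deliver   # callable(txn_id, coins); may raise ConnectionError
        self.unsent = []         # (txn_id, coins) pairs awaiting delivery

    def purchase(self, coins):
        # A stable transaction ID lets the server recognise retries,
        # so a re-sent grant can never credit the player twice.
        txn_id = str(uuid.uuid4())
        self.unsent.append((txn_id, coins))
        self.retry()
        return txn_id

    def retry(self):
        while self.unsent:
            txn_id, coins = self.unsent[0]
            try:
                self.deliver(txn_id, coins)
            except ConnectionError:
                return  # keep it queued; try again on reconnect
            self.unsent.pop(0)

class Wallet:
    """Server-side wallet that ignores transaction IDs it has already seen."""
    def __init__(self):
        self.balance = 0
        self.seen = set()

    def credit(self, txn_id, coins):
        if txn_id in self.seen:
            return  # duplicate delivery: do not double-credit
        self.seen.add(txn_id)
        self.balance += coins
```

The queue guarantees the currency eventually arrives, and the server-side de-duplication makes it safe to retry aggressively – the two halves together close the gap between "transaction registered" and "currency delivered".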
Pokémon Go had an extraordinarily successful launch. Even though it had some clear technical issues, it’s hard to think of many small companies successfully bringing on board tens of millions of players within a week without experiencing some teething pains.
Admittedly, not every small company was a start-up within Google, is run by ex-Google engineers or received a $30 million investment from Nintendo, The Pokémon Company and Google ahead of the launch of its third app.
That said, we can all learn from Niantic Labs’ crazy experience to try to anticipate problems before they happen. Not every studio is going to be lucky enough to have a title where players will overlook some initial teething pains.
By working with cloud-based infrastructure that can scale server capacity quickly during a sudden surge of users, testing a game as extensively as possible prior to launch, and thinking ahead to possible problems, development teams can nix some issues before they develop.
But equally, it’s worth remembering that technical problems can happen to any team – however well prepared they are. Accepting that’s the case, and adapting to problems when they appear, is important for any business hoping to succeed in mobile gaming.
Games like Pokémon Go highlight how important global server scalability is and why player management is so vital to ensure a consistently good experience. Pokémon Go is somewhat of an anomaly where the issues actually serve to help promote the game even further. Whenever players find the game is live again, there is a rush of social media activity and excitement, adding to the sense that you need to play it to be part of the cultural moment. That’s pretty rare.