Sony's Vinod Tandon explains the best ways to achieve bandwidth optimisation

Bandwidth Aid

Gone are the days of the arcade and lining up to play against the best hardcore gamers in the neighbourhood.
Technology has brought us into an age where the in-home networked game console and computer have taken over. With online games, players can collect achievements or trophies, view and compare their stats, create clans for team play, and participate in online tournaments. Now the average gamer can try their luck against neighbourhood or distant friends, as well as the best hardcore gamers in the world.

Gamers can even be dynamically matched against other players of similar skill, all from the comfort of their own home, 24/7.

Increasingly, more multiplayer games are running over the internet, and ever-expanding in-game player counts bring the challenge of bandwidth optimisation. Additionally, sending increased amounts of latency-critical data across the internet can be hard to manage. Developers implementing online games in the real world will encounter many technical challenges, such as choosing an effective network topology, determining the normal send rate for the specific type of game, or estimating the bandwidth rate for a targeted territory.

This article highlights the challenges a developer will face and provides the information they require to create and optimise a game title for real-time online play. Whether a title offers competitive or co-operative online gameplay, a robust online implementation that can operate over a multitude of consumer internet connections can increase the longevity of a game title, and can serve as an effective tactic against used-game sales.


Game clients communicate with one another in-game through a selected network topology, which directly affects their bandwidth usage.

While many network topologies exist, the three most widely used on the PlayStation Network are: peer-to-peer, dedicated server and integrated server.

Peer-to-peer (P2P) offers a direct flow of network traffic from all game clients to all other game clients without the need for a central game server. In this topology, each client is connected to every other game client, forming a fully connected mesh.

A dedicated server utilises a centralised game server hosted in the internet cloud.

All clients in-game are connected to this server. Client communication travels to the server before being routed to all other clients in the game.

An integrated server is similar to a dedicated server, except one of the clients in-game is responsible for routing the game data instead of a separate dedicated machine. All clients are connected to this server where game traffic travels to before being relayed to all other clients in-game.

Selection of a network topology is a critical first step in implementing the online component of a multiplayer game, and is predominantly based on player count. Games with large player counts (more than 50 players) require more bandwidth and horsepower, and will choose a dedicated server configuration. Medium-sized games (under 50 players) will generally use the integrated server solution. Games with smaller player counts (under 10 players) can benefit from the P2P topology, which typically boasts lower latency than the integrated server model.
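The rule of thumb above can be captured in a few lines. This is a minimal sketch of the article's player-count thresholds; the function name and return strings are illustrative, not from any SDK:

```python
def choose_topology(player_count: int) -> str:
    # Thresholds follow the article's rule of thumb; names are illustrative.
    if player_count > 50:
        return "dedicated server"    # large games: extra bandwidth and horsepower
    if player_count >= 10:
        return "integrated server"   # medium games: cheaper, hosted on a client
    return "peer-to-peer"            # small games: lowest latency, fully connected mesh
```

Note that in P2P the number of connections grows quadratically (a fully connected mesh of n clients needs n(n-1)/2 links), which is part of why it only suits small player counts.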

Online console games now feature higher player counts than were previously available to consumers. To manage the resources of a 256-player game, titles such as MAG utilise the dedicated server topology and take advantage of cutting-edge workstation hardware. A dedicated server can easily host more players in-game than its integrated server counterpart, which is limited by the resources available on today's generation of consoles. However, dedicated servers cost money to purchase and host, which is the key factor that pushes games with lower player counts toward a cheaper network topology.

The integrated server network topology is utilised by games like Warhawk to reduce hosting costs. In this topology, one player in-game is chosen to be the server to which all in-game clients connect. Before selecting a client to be the server, the game title will try to choose the most appropriate client (the one with the best network connection) to be the integrated server. Hosting costs are reduced since the game is hosted on one player's internet connection.

Although a dedicated server can be costly to host, it generally assures a certain level of connection quality for all players in-game. Dedicated servers rarely disconnect in the middle of a game. Integrated servers, on the other hand, are controlled by a player in-game. Players can quit out of the game or shut down the server at any time. With the integrated server model, developers must devise solutions for when the server prematurely disconnects from the game. This amounts to migrating the server responsibilities and state to another client, which is referred to as host migration.
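The host-selection half of that problem can be sketched simply. In this hypothetical example, each client carries a connection-quality metric (here a `ping_ms` field, an assumption standing in for whatever measure a real title uses), and the remaining client with the best connection becomes the new host:

```python
def pick_new_host(clients, old_host_id):
    """Choose a replacement host after the integrated server disconnects.

    'ping_ms' is an illustrative stand-in for the title's real
    connection-quality metric; real host migration must also transfer
    the server's game state to the chosen client.
    """
    candidates = [c for c in clients if c["id"] != old_host_id]
    return min(candidates, key=lambda c: c["ping_ms"])["id"]
```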

Host migration and the need for NAT traversal are two of the biggest disadvantages for integrated server and P2P network topologies. NAT traversal issues, however, can be mitigated by using an in-game networking middleware with competent NAT traversal mechanisms.

The success of the NAT traversal algorithm in the network middleware depends on a combination of factors: the consumer's NAT type, Universal Plug and Play (UPnP) support, and its port predictability.


Message aggregation is a technique that reduces the transmission frequency by merging information from multiple messages into the same packet. The process of aggregating more data into fewer packets reduces the overall network bandwidth usage since less network packet header information is being sent.

Packet aggregation is an essential component of optimising the server's send rate as well as the client's receive rate. Servers in either an integrated server or a dedicated server topology generally send larger packets – more data – to clients.

The server has the privilege of receiving all the messages from the clients in-game, bundling them up into one packet, and sending them to all clients. The server can also intelligently remove messages deemed inappropriate for other clients; for example, team-based messages are only sent to members of the same team.
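A minimal sketch of that bundling-and-filtering step might look like the following. The message dictionary layout and the two-byte length prefix are assumptions for illustration, not a real wire format:

```python
def aggregate_messages(messages, recipient_team):
    """Bundle queued messages into one packet payload for one recipient,
    filtering out team-scoped messages meant for a different team.
    Each message is length-prefixed so the receiver can split the bundle."""
    payload = bytearray()
    for msg in messages:
        if msg["team"] is not None and msg["team"] != recipient_team:
            continue  # team message for another team: do not relay it
        data = msg["data"]
        payload += len(data).to_bytes(2, "big") + data
    return bytes(payload)
```

Sending one aggregated payload instead of one packet per message means the per-packet header cost is paid once rather than once per message.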

A useful metric for determining how well the data is being aggregated is the sent payload efficiency percentage (PE%): the ratio of in-game data to the entire size of the packet, including network packet headers. This calculation gives the percentage of game data in the packets and signifies the amount of bandwidth consumed by the network packet headers:

Sent PE% = (sent game payload data / total sent packet size) × 100
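As a worked example of the formula, the sketch below assumes UDP over IPv4, whose headers add 28 bytes per packet (20-byte IP header plus 8-byte UDP header); link-layer framing is ignored for simplicity:

```python
# Assumed per-packet overhead: 20-byte IPv4 header + 8-byte UDP header.
HEADER_BYTES = 20 + 8

def sent_payload_efficiency(payload_bytes: int) -> float:
    """Sent PE% = (sent game payload data / total sent packet size) * 100."""
    total_packet_size = payload_bytes + HEADER_BYTES
    return payload_bytes / total_packet_size * 100.0
```

Under these assumptions, ten 10-byte messages sent as separate packets each score 10/38 ≈ 26% payload efficiency, while the same messages aggregated into a single 100-byte payload score 100/128 ≈ 78% – the bandwidth saving that aggregation buys.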

Integrated server and dedicated server network topologies generally have clients with low payload efficiency if they are optimised to send small amounts of data frequently. Nevertheless, the developer should attempt to aggregate as much data as possible and reduce the client's packet send rate to decrease overall bandwidth usage. Clients in heavily optimised console multiplayer games tend to use a packet send rate near 10 packets per second.

The downside of aggregating more data into fewer packets is that it adds latency to the game, since the data is not sent as soon as it is ready to go out.
The trade-off between latency and bandwidth usage is a constant balancing act that any network developer must face, but there are a few techniques that can be used to lessen this problem.


Client-side prediction techniques such as dead reckoning can be employed to hide latency. Employing dead reckoning over best-effort communication (UDP/IP), instead of the more expensive reliable communication (TCP/IP), will smooth over gaps when packets are lost. Thoroughly testing the game with varying latencies, packet loss, and jitter also helps confirm the latency-hiding techniques will work under poor network conditions.
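In its simplest linear form, dead reckoning extrapolates a remote player's position from the last received position and velocity until the next update arrives. A minimal sketch (2D, per-axis linear extrapolation; a real title would also blend smoothly back to the authoritative position when an update lands):

```python
def dead_reckon(last_pos, last_vel, elapsed):
    """Extrapolate a remote player's position from the last received
    position and velocity while waiting for the next network update."""
    return tuple(p + v * elapsed for p, v in zip(last_pos, last_vel))
```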

Jerk, or sudden changes of acceleration and direction, can be a game's worst enemy when making predictions to hide latency.

This can happen when a player is stopped, moves full burst in one direction, stops, and moves full burst in another direction. Slowing gameplay is an effective technique that can be used to hide jerk. This can be done by adding moments of inertia to a player's movement, such as between full-speed starts and stops.
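One simple way to add that inertia is to clamp how fast velocity may change per frame, so a player eases between full speed and a stop instead of switching instantly. This is a sketch under assumed units (velocity per second, `accel_limit` in velocity units per second):

```python
def smooth_velocity(current, target, dt, accel_limit):
    """Clamp per-frame acceleration so velocity eases toward the target
    value instead of changing instantly, adding 'inertia' that hides jerk."""
    delta = target - current
    max_step = accel_limit * dt   # most the velocity may change this frame
    return current + max(-max_step, min(max_step, delta))
```

Because velocity now changes gradually, the dead-reckoned paths other clients predict stay closer to the truth, and corrections are smaller.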


Data in-game can be sent via unreliable (UDP) or reliable (reliable UDP, or TCP) network protocols. Unreliable data is traffic that is sent best-effort: if it is lost on the network, it will not reach its destination. When traffic is sent reliably, an additional sequence number must be sent with the data. Once the reliable data has been received, an acknowledgement must be sent back to the sender to signify that the data was obtained.

If a host does not receive an acknowledgement, it must retransmit the data over the network and wait again for a returned acknowledgement. Data flagged to be sent reliably requires additional memory since the data must be kept until it has been acknowledged, in case it needs to be resent over the network.
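The sender-side bookkeeping described above – sequence numbers, buffering until acknowledgement, and a retransmit set – can be sketched as follows. This is a bare-bones illustration, not a real reliable-UDP implementation (it omits timers, round-trip estimation, and the actual socket I/O):

```python
class ReliableSender:
    """Sender side of reliable UDP: tag each message with a sequence
    number and buffer it until the receiver acknowledges that number."""

    def __init__(self):
        self.next_seq = 0
        self.unacked = {}  # seq -> payload, kept in case a resend is needed

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = payload  # hold a copy until it is acknowledged
        return seq, payload          # what would go on the wire

    def on_ack(self, seq):
        self.unacked.pop(seq, None)  # acknowledged: release the buffered copy

    def pending_retransmits(self):
        return sorted(self.unacked)  # sequence numbers still awaiting an ack
```

The `unacked` buffer is exactly the additional memory cost the text mentions: every reliably flagged payload lives there until its acknowledgement arrives.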
