Reliable Resource Scalability and Building the Game Room
Being a multiplayer online game, Awakening of Heroes is a complex project that includes numerous components and many unknown variables, and it requires a substantial amount of resources. In order to support thousands of users at once without any hiccups, the game must feature a dynamically scalable infrastructure, while the entire project must be approached systematically and with great attention to detail.
Our first task was to establish thorough communication with the team developing the video game and to answer the following questions:
- What all the moving parts of their project are and how they work
- Which components we need to focus on the most
- What our client’s overarching goal is
Once we had all the necessary input, our team at SuperAdmins was able to come up with a workflow strategy and provide a concrete and actionable plan on how to properly execute all the tasks and deploy tangible solutions that would improve the entire project and move it onto the cloud.
We came to the conclusion that the client needed help with one particular aspect of their system in order to quickly launch their project and release it to the global market.
The aspect in question was developing the Game Room for Awakening of Heroes.
Building the Game Room
The Game Room is a component that physically hosts the group of AoH players participating in one gaming session. Our client chose Google Cloud as their cloud platform, while our task was to build servers in three different locations around the globe.
In order for the video game to run smoothly and withstand estimated workloads, the servers required a stable and scalable infrastructure. Since the players come from all corners of the world, they need to be grouped according to their location and placed on the appropriate server.
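Grouping players by location can be illustrated with a minimal sketch. The region names and the continent-to-region mapping below are illustrative assumptions, not the production values used for AoH:

```python
# Hypothetical sketch: assign each player to the nearest game-server
# region based on a coarse continent-to-region mapping. The three
# regions stand in for the three server locations mentioned above.
REGION_BY_CONTINENT = {
    "NA": "us-central",
    "SA": "us-central",
    "EU": "europe-west",
    "AF": "europe-west",
    "AS": "asia-east",
    "OC": "asia-east",
}

def pick_region(player_continent: str) -> str:
    """Return the game-server region for a player's continent code,
    falling back to a default region for unknown codes."""
    return REGION_BY_CONTINENT.get(player_continent, "europe-west")
```

Players resolved to the same region are then placed into Game Rooms on that region's servers.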
From the infrastructure standpoint, our task was to build a scalable environment that could be expanded later on in terms of adding more servers when and where necessary. We needed to find a way to execute this task in a quick, simple yet efficient way and without any backend issues so the players wouldn’t experience any hiccups while playing the game.
Our solution was to build a dynamic server infrastructure that makes seamless scalability possible and allows the platform to integrate new players at any given moment.
This was achieved through automatic monitoring of the number of players and the amount of load, and through adding virtual machines and CPUs whenever the average load crossed certain thresholds. We also made sure the database could handle the increasing workload.
These project aspects were especially vital for tackling load peaks, which are quite common in multiplayer games and depend on the time of day, location, promotion of the game, etc. When these load peaks happen, the number of players can reach unexpected heights, which means the game’s infrastructure must be ready to withstand sudden surges in terms of the number of active players without any downtime. Load drops are also very common, so we had to build a system that is able to automatically shut down unnecessary server instances when they are no longer needed.
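The scale-up and scale-down behaviour described above can be sketched as a simple decision function evaluated on each monitoring cycle. The threshold values here are illustrative placeholders, not the actual limits used in production:

```python
def scaling_decision(avg_load: float, instance_count: int,
                     scale_up_at: float = 0.75,
                     scale_down_at: float = 0.30,
                     min_instances: int = 1) -> int:
    """Return the change in instance count (+1, -1, or 0) for one
    evaluation cycle, based on average load across the server group.
    Thresholds are hypothetical values for illustration."""
    if avg_load > scale_up_at:
        return 1      # load peak: add a virtual machine
    if avg_load < scale_down_at and instance_count > min_instances:
        return -1     # load drop: shut down an unnecessary instance
    return 0          # steady state: no change
```

Keeping a gap between the scale-up and scale-down thresholds prevents the system from oscillating, repeatedly adding and removing the same instance.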
This resulted in a substantial boost in cost-efficiency. By using a dynamically scalable infrastructure, our client now runs only the servers that are necessary at any given moment, and therefore optimizes the amount of money spent on resources.
The Importance of the Load Balancer
The game itself has several vital components, one of which is the control server that determines which player is placed on which server. The load balancer sits in front of these game servers; its purpose is to distribute traffic across them and expose a single endpoint, which is crucial for the scalability described above.
The load balancer also regulates server usage according to the current number of players. Since the number of virtual machines directly depends on the number of players, we concluded that the load balancer component should be placed on the control server.
We deployed Google Cloud Platform load balancers, but instances are not added to the autoscaling group through GCP's native autoscaler. Instead, we created a separate mechanism that adds and removes instances from the autoscaling group, which means this type of scalability now operates at the application level.
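The app-level mechanism can be pictured as a reconciliation loop: the application decides on a desired group size, and a small routine brings the actual group into line. The `CloudClient` class below is a stand-in for whatever GCP API wrapper is actually used; its methods are assumptions made for illustration, not a real library interface:

```python
class CloudClient:
    """Minimal stand-in for a cloud API wrapper (illustrative only).
    A real implementation would call the GCP Compute API instead of
    mutating an in-memory list."""
    def __init__(self):
        self.instances = []

    def create_instance(self, region: str) -> str:
        # Hypothetical naming scheme for game-server instances.
        name = f"game-{region}-{len(self.instances)}"
        self.instances.append(name)
        return name

    def delete_instance(self, name: str) -> None:
        self.instances.remove(name)

def reconcile(client: CloudClient, desired: int, region: str) -> None:
    """Add or remove instances until the group matches the desired size,
    which the application computes from current player load."""
    while len(client.instances) < desired:
        client.create_instance(region)
    while len(client.instances) > desired:
        client.delete_instance(client.instances[-1])
```

Because the application itself drives `reconcile`, scaling decisions can incorporate game-specific signals, such as the number of active Game Rooms, that a generic infrastructure-level autoscaler cannot see.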