- Posted by Rastko Vasiljevic
- On October 18, 2019
How SuperAdmins increased infrastructure automation and optimized the workload
About the Client
Carnegie Technologies d.o.o. Belgrade is an innovative communications company focused on SaaS solutions. With its communication software, the company aims to help entrepreneurs take advantage of the connected future. SuperAdmins is a proud partner of Carnegie Technologies d.o.o. Belgrade on its journey of cloud management, optimization, and DevOps.
Project Details and Overview
A B2B application deployed on AWS in both single-tenant and multi-tenant environments, with custom-built mobile apps for iOS and Android that communicate with the back-end services via an API. Each application is custom-built for a specific customer, and its AWS environment is created automatically from a set of parameters. The back end is written in Go and consists of six microservices, each playing a different role; the services communicate with one another via gRPC and connect to a PostgreSQL database. The front-end application is written in AngularJS.
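As an illustration of how such gRPC service boundaries are typically defined (the actual service contracts are not public, so every name below is a hypothetical assumption), a Protocol Buffers definition for one of the microservices might look like this:

```protobuf
syntax = "proto3";

package tenant.v1;

// Hypothetical contract for one back-end microservice;
// service and message names are illustrative only.
service TenantService {
  rpc GetTenant (GetTenantRequest) returns (Tenant);
}

message GetTenantRequest {
  string tenant_id = 1;
}

message Tenant {
  string tenant_id = 1;
  string name      = 2;
}
```

From a definition like this, `protoc` generates both the Go server stubs for the back-end services and the client code they use to call one another.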
What was the Challenge?
The client wanted to accelerate the production of their next-gen platforms, migrate their existing microservices from bare-metal servers to cloud-native servers, increase automation in infrastructure management, and optimize the distribution of different workloads.
The request was to build a scalable, cost-efficient, and highly available platform for two use cases:
a) single-tenant environments
b) multi-tenant environment
Both environments should be created in a short period of time and accommodate the expected workload.
For this purpose, three different sizes of environments should be available:
1) a demo environment with the latest builds
2) a medium environment where only the core services are highly available
3) a high-traffic environment where all services are load-balanced and highly available
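Sizing presets like these can be expressed directly in Terraform. The sketch below is a minimal illustration, with hypothetical variable names, instance counts, and instance types that are not taken from the actual project:

```hcl
# Hypothetical size presets; names and values are illustrative only.
variable "environment_size" {
  description = "One of: demo, medium, high"
  type        = string
  default     = "demo"
}

locals {
  size_presets = {
    demo   = { api_min = 1, api_max = 1, instance_type = "t3.small" }
    medium = { api_min = 2, api_max = 2, instance_type = "t3.medium" }
    high   = { api_min = 2, api_max = 6, instance_type = "m5.large" }
  }
  preset = local.size_presets[var.environment_size]
}

resource "aws_autoscaling_group" "api" {
  min_size = local.preset.api_min
  max_size = local.preset.api_max
  # Launch template, subnets, health checks, etc. omitted for brevity.
}
```

Keeping the presets in one map means a new environment size is a one-line addition rather than a new set of resources.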
This allows the client to choose a specific environment and size, have everything up and running in a short period of time, and minimize AWS costs. A CI/CD pipeline was required to allow zero-downtime deployments. The main technical challenge was to make sure that gRPC would work in an Auto Scaling environment and that the different microservices could communicate with each other via gRPC.
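One common way to run gRPC behind Auto Scaling on AWS at the time (the article does not specify which approach was used here) is to front each service with a Network Load Balancer, since an NLB forwards traffic at the TCP layer and so passes gRPC's HTTP/2 framing through untouched. A hypothetical Terraform sketch of that pattern:

```hcl
# Hypothetical NLB fronting a gRPC microservice; all names are illustrative.
resource "aws_lb" "grpc" {
  name               = "grpc-nlb"
  load_balancer_type = "network"
  internal           = true
  subnets            = var.private_subnet_ids
}

resource "aws_lb_target_group" "grpc" {
  name     = "grpc-tg"
  port     = 50051   # conventional gRPC port
  protocol = "TCP"   # TCP passthrough keeps HTTP/2 framing intact
  vpc_id   = var.vpc_id
}

resource "aws_lb_listener" "grpc" {
  load_balancer_arn = aws_lb.grpc.arn
  port              = 50051
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.grpc.arn
  }
}
```

One caveat with this pattern: gRPC connections are long-lived, so an NLB pins each client to one target; clients typically need to reconnect periodically (or use client-side load balancing) for traffic to spread evenly across newly scaled-out instances.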
We used Ansible for configuration management to provision the servers for each role, then used Packer to bake the initial Ansible plays into AWS AMIs, producing images ready to be used in Auto Scaling Groups.
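The Packer step can be sketched as a template that runs an Ansible play against a temporary EC2 instance and saves the result as an AMI. The region, base image, and playbook path below are assumptions for illustration, not the project's actual values:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami_filter": {
      "filters": { "name": "ubuntu/images/*ubuntu-bionic-18.04-amd64-server-*" },
      "owners": ["099720109477"],
      "most_recent": true
    },
    "instance_type": "t3.micro",
    "ssh_username": "ubuntu",
    "ami_name": "api-service-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "./plays/api-service.yml"
  }]
}
```

Because the configuration is baked into the AMI up front, instances launched by an Auto Scaling Group come up ready to serve traffic instead of running Ansible at boot.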
Using Terraform, we took care of provisioning and updating the AWS cloud infrastructure resources. We then combined all the elements in Jenkins, where the client's requirements were mapped to specific environments and infrastructure sizes. From there, the client can run new zero-downtime deployments through AWS CodeDeploy.
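The Jenkins side of this workflow can be sketched as a parameterized declarative pipeline that ties the pieces together; the stage names, parameters, file names, and CodeDeploy identifiers below are illustrative assumptions:

```groovy
pipeline {
  agent any
  parameters {
    choice(name: 'ENV_SIZE', choices: ['demo', 'medium', 'high'],
           description: 'Environment size preset')
    string(name: 'TENANT', defaultValue: 'demo-tenant',
           description: 'Target tenant/environment name')
  }
  stages {
    stage('Bake AMI') {
      steps { sh 'packer build api-service.json' }
    }
    stage('Provision infrastructure') {
      steps {
        sh "terraform apply -auto-approve -var environment_size=${params.ENV_SIZE} -var tenant=${params.TENANT}"
      }
    }
    stage('Deploy') {
      // Zero-downtime rollout is delegated to AWS CodeDeploy.
      steps {
        sh "aws deploy create-deployment --application-name ${params.TENANT}-app " +
           "--deployment-group-name ${params.TENANT}-dg " +
           "--s3-location bucket=builds,key=app.zip,bundleType=zip"
      }
    }
  }
}
```

Exposing the environment size and tenant as build parameters is what lets the same pipeline stand up a demo, medium, or high-traffic environment on demand.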
An example of an architecture for the high-availability solution:
Impact and benefits
By deploying cloud-native technologies and modern tools such as Ansible, Packer, and Terraform, we achieved significant improvements in production speed and were able to deliver software and maintain environments as Infrastructure as Code (IaC). Because environment sizes are parameterized, the client can choose the best fit for each custom build and reduce infrastructure costs.
Client quote/project impression
“For the project ‘Creating multiple AWS Environments’, SuperAdmins provided expertise, with on-time analysis, implementation and delivery. They proved to be a valid partner, followed our instructions and suggestions, and provided feedback during each stage of the project.”
Zoran Matic, Project Manager @ Carnegie Technologies doo Belgrade