Cutting costs of microservices in the cloud using containerisation and ECS – part 1/2
26.05.2020 - Read in 3 min.
Cloud solutions vs. microservices architecture? Do you wonder how you could lower costs by using containerisation and ECS? Read the 1st part of our article.
We can all agree that both the business and development teams want to quickly deploy their hard work. It goes without saying that in order to do that, we need great QA and automation processes. However, to make that possible, products must be deployed step-by-step in each environment, while maintaining the highest quality and effectiveness.
Ultimately, it is the end customers who generate profit, and the architecture is designed with them in mind. Moving with the times, we choose microservices, chiefly for their performance and because a failure in one service has limited impact on the entire system. But the number of services tends to grow, and sure enough, companies feel that in their pockets. If you want to start your microservices adventure, check out the free “Microservices or Monolith” e-book. The cloud and microservices can devour money, and as contractors we often fail to see the opportunities for savings: “it has to cost that much because people use it”. I’d like to highlight one commonly missed aspect that can save you up to hundreds of dollars each month.
Amazon Web Services charges on a “pay-as-you-go” basis, so you pay exactly for what you use. Let’s say our software development department runs at least two temporary environments; these costs are known to us and should not vary much from month to month. But what if we could cut them fourfold?
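To see what a fourfold cut means in practice, here is a back-of-the-envelope sketch. The hourly rate, service count, and environment count are illustrative assumptions, not real AWS quotes:

```python
# Back-of-the-envelope cost sketch; all numbers are illustrative assumptions.
HOURLY_RATE = 0.0416     # assumed on-demand price of a small EC2 instance (USD/hour)
HOURS_PER_MONTH = 730    # average hours in a month
SERVICES = 10            # hypothetical count: one EC2 instance per microservice
ENVIRONMENTS = 2         # e.g. development + staging

# Naive setup: a dedicated machine for each service in each environment.
one_machine_per_service = HOURLY_RATE * HOURS_PER_MONTH * SERVICES * ENVIRONMENTS

# The fourfold cut discussed above, achieved by packing containers together.
containerised = one_machine_per_service / 4

print(f"one machine per service: ${one_machine_per_service:.2f}/month")
print(f"after containerisation:  ${containerised:.2f}/month")
```

Even at these modest assumed prices, the difference adds up to hundreds of dollars per month.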
What does microservices architecture cost on AWS?
I won’t describe ways to optimise the costs of a production environment here; those differ across sectors and technologies. Let’s focus instead on ways to optimise the development and staging environments.
There are at least a few ways of maintaining such environments on AWS:
- A machine for each service
This tends to be one of the first choices, and it’s not always the right one. When we’re at the beginning of our microservices journey and haven’t thought the infrastructure out well enough, it’s easy to fall into a trap: the costs can grow high enough to sink the entire project. If you chose this option, keep reading to learn about other approaches that I believe you could easily adopt.
- Elastic Beanstalk – multicontainer environments
Most beginner DevOps engineers are familiar with this tool, and it’s often considered when launching new projects. We use containerisation here, which already reduces our costs considerably. This solution, simple as it is, does not guarantee the best possible outcome. With this option, all our services run in containers on an EC2 machine.
We reach the limit when we try to scale only selected microservices differently (e.g. to run performance tests of a specific service, or to check how a service scales). In that scenario, we need to add another EC2 instance that runs all the apps in identical proportions. Doesn’t sound very cost-efficient? That’s because it’s not. If our services aren’t using all the resources of the first instance, we’d like to tap into that unused capacity to scale just a few services. This cannot be achieved with Beanstalk.
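The difference can be made concrete with a small sketch. The CPU figures below are made-up examples; the point is that an extra copy of one container fits into spare capacity, while the Beanstalk-style approach forces a whole new instance:

```python
# Why per-service scaling saves money; all CPU figures are illustrative.
INSTANCE_CPU = 1024                                   # CPU units on one EC2 instance

# Hypothetical per-container CPU reservations.
services = {"api": 256, "auth": 128, "billing": 256}

used = sum(services.values())                         # units currently reserved
spare = INSTANCE_CPU - used                           # idle capacity on the instance

# Beanstalk multicontainer: we cannot scale "api" alone, so scaling means
# a whole second instance running every container again.
beanstalk_capacity_needed = 2 * INSTANCE_CPU

# Per-service scaling: an extra "api" container fits into the spare units,
# so one instance is still enough.
ecs_capacity_needed = (
    INSTANCE_CPU if services["api"] <= spare else 2 * INSTANCE_CPU
)
```

In this sketch the extra “api” container (256 units) fits comfortably into the 384 idle units, halving the capacity we would otherwise have to lease.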
- ECS – Elastic Container Service
This capability is available in ECS. It operates on a similar principle to Beanstalk in a multicontainer configuration, but lets you do much more. We can utilise all leased EC2 resources by scaling each service separately, configure auto scaling, and considerably reduce our costs with a few tricks.
What is ECS?
Elastic Container Service is an orchestrator for Docker containers in which our services are executed. It excels at scaling, the orchestration itself is free of charge (you pay only for the underlying EC2 resources), and it’s relatively easy to use. You can control it with the AWS console, the CLI, or a dedicated API.
This tool helps us control containers, called Tasks in this context. Those Tasks belong to a Service, which manages how many Tasks our service runs at a given moment. Each Task has a definition that specifies the resources it needs and the Docker image it must use. We also have to define a Cluster that gives us access to the machines. ECS decides which EC2 instance in the cluster each Task is launched on, placing Tasks so that the cluster’s resources are used as efficiently as possible.
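To make the Task-definition concept concrete, here is a minimal sketch of the structure you would pass to the ECS API. The family name, image URL, and resource figures are placeholders, not values from a real project:

```python
# Minimal ECS Task definition sketch; the family name, image URL and
# resource limits are hypothetical placeholders.
task_definition = {
    "family": "orders-service",                 # made-up service name
    "containerDefinitions": [
        {
            "name": "orders",
            # Placeholder ECR image URL:
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders:latest",
            "cpu": 256,                         # CPU units (1024 = one vCPU)
            "memory": 512,                      # hard memory limit in MiB
            "portMappings": [{"containerPort": 8080}],
            "essential": True,                  # stopping this container stops the Task
        }
    ],
}

# With boto3 installed and AWS credentials configured, registering it
# would look roughly like this (not executed here):
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**task_definition)
```

The Service then references this definition and keeps the desired number of Tasks running, while ECS places each Task on whichever cluster instance has room for it.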
In the next part, I will describe a few simple, yet effective ways to cut the costs of your non-production environments with or without ECS.