Embracing Microservices: Deployment Options

This is the second article in the Microservices series.

Overview

Microservices are becoming increasingly popular as a way to build and scale modern applications. However, deploying them can be a challenge, because there are many options to choose from. Two of the most popular are serverless and container-based deployments. In this article, we will compare these two options and discuss their strengths and weaknesses.

This is one of the main concerns for dev teams considering a migration to a cloud-native microservices architecture. Inevitably, the discussion revolves around the optimal approach: containerization, or a serverless “Functions as a Service” model such as Azure Functions or AWS Lambda. To ground the debate, let’s assume that we plan to host our application on Azure. Should we set up an AKS (Azure Kubernetes Service) cluster or ACI (Azure Container Instances) and package each microservice as a container, or should we use Azure Functions and build each microservice as a Function App?

Serverless Computing

Serverless computing is a model where the cloud provider takes care of the infrastructure required to run an application. Developers write code in a language of their choice and deploy it as a function to the cloud provider’s platform. The provider handles everything else, including scaling, load balancing, and provisioning.

Examples include Azure Functions, AWS Lambda, and GCP Cloud Functions.

Benefits of Serverless

One of the main benefits of serverless computing is cost efficiency. With serverless, developers only pay for the resources their functions use while they are running. This means that for applications with low usage, developers can see significant cost savings.
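As a rough illustration, the pay-per-use model lends itself to a back-of-the-envelope calculation. The rates and free grants below are illustrative placeholders loosely modeled on Azure’s consumption plan; always check the current pricing pages before relying on such numbers.

```python
def monthly_function_cost(executions, avg_duration_s, memory_gb,
                          price_per_gb_s=0.000016,
                          price_per_million_exec=0.20,
                          free_gb_s=400_000,
                          free_executions=1_000_000):
    """Estimate a monthly consumption-plan bill (illustrative rates only)."""
    # Compute usage in GB-seconds, the unit consumption plans typically bill on.
    gb_seconds = executions * avg_duration_s * memory_gb
    # Subtract the free monthly grants before applying the rates.
    billable_gb_s = max(0, gb_seconds - free_gb_s)
    billable_exec = max(0, executions - free_executions)
    return (billable_gb_s * price_per_gb_s
            + (billable_exec / 1_000_000) * price_per_million_exec)

# A low-traffic service: 100k executions/month, 200 ms each, 128 MB.
low = monthly_function_cost(100_000, 0.2, 0.125)    # fits inside the free grant
# A busier service: 10M executions/month, 500 ms each, 512 MB.
busy = monthly_function_cost(10_000_000, 0.5, 0.5)
print(f"low-traffic: ${low:.2f}/month, busy: ${busy:.2f}/month")
```

With these example rates, the low-traffic service costs nothing at all, which is exactly the scenario where serverless shines.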

One of the key advantages of serverless computing is its ability to support rapid scale-out. Azure Functions, for instance, can scale very quickly from zero to dozens of servers during times of heavy load, and you only pay for the time your functions are running. Achieving this level of scale-out can require more configuration work on containerized platforms, although container orchestrators do give you more granular control over the specific rules that govern scaling. GCP Cloud Run, for example, can scale out to 300 container instances with a single line of configuration.
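On Cloud Run, that single line of configuration is just a deploy flag (the service and image names here are placeholders):

```shell
# Cap scale-out at 300 container instances for a hypothetical Cloud Run service.
gcloud run deploy orders-api \
  --image=gcr.io/my-project/orders-api \
  --max-instances=300
```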

Serverless computing tends to promote an event-driven model, whereas containers impose no restrictions on the programming model. While containers can perpetuate older development paradigms, such as large and heavyweight services, serverless platforms encourage event-driven approaches that are inherently more scalable. Serverless platforms promote the use of small and lightweight “nanoservices” that can be quickly discarded and rewritten to adapt to changing business requirements, which is a fundamental aspect of microservices architecture.

However, serverless computing also has some downsides. One of the main challenges with serverless is that developers have limited control over the underlying infrastructure. This can make it difficult to optimize performance and troubleshoot issues.

Container-Based Deployment

Container-based deployment involves packaging applications and their dependencies into a container image. The container image is then deployed to a container platform, which can run the container on any system that supports containerization technology.
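As a minimal sketch, the image for a hypothetical Python microservice might be described like this (the file names are assumptions about the project layout):

```dockerfile
# Build a small image for a hypothetical Python service.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

The resulting image runs unchanged on a laptop, in AKS, or in Cloud Run.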

Examples include Docker and Kubernetes, and the cloud services around them: AKS (Azure Kubernetes Service), ACI (Azure Container Instances), AWS ECS (Elastic Container Service), AWS EKS (Elastic Kubernetes Service), GCP GKE (Google Kubernetes Engine), GCP Cloud Run, GCP Compute Engine, and so on.

Benefits of containers

Container-based deployments offer several benefits. One of the main benefits is portability. Developers can package their application and its dependencies into a single container image, which can then be deployed to any system that supports containerization technology.

Containers are especially useful for migrating legacy services. If you already have a batch process or web API implemented, it’s often much easier to get it up and running in a container than it is to rewrite it for serverless computing. Containers provide a lightweight, portable runtime environment that can help to reduce the complexity of moving legacy services to the cloud and provide better resource utilization than traditional virtual machines.

Additionally, containers provide greater flexibility than serverless platforms, allowing you to run a wide range of legacy applications and services without the need for significant code changes.

Containers also offer a simple way to adopt third-party dependencies that may not be easily available, or cost-effective, as PaaS offerings. There is a vast selection of open-source containerized services, such as Redis, RabbitMQ, MongoDB, and Elasticsearch, that you can quickly make use of in your applications. With containers, you have the freedom to choose when and if it makes sense to switch to PaaS versions of these services. For example, a common pattern is to use containerized databases for development and testing environments and a PaaS database like Azure SQL Database in production. This gives you greater flexibility to choose the best solution for your specific needs at each stage of your application’s development lifecycle.
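One way to implement that pattern is to resolve the database endpoint from the environment, so the same code talks to a containerized database locally and a PaaS database in production. The variable names and the default URL below are illustrative, not a standard:

```python
import os

def database_url() -> str:
    """Resolve the database endpoint for the current environment."""
    if os.getenv("APP_ENV", "development") == "production":
        # In production, a PaaS connection string (e.g. Azure SQL Database)
        # is injected at deploy time and must be present.
        return os.environ["DATABASE_URL"]
    # Locally, fall back to the containerized database from docker-compose.
    return os.getenv("DATABASE_URL", "postgresql://localhost:5432/devdb")
```

The deployment pipeline then only has to set two environment variables per environment; the application code never changes.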

Containers offer an excellent solution for local development, especially in scenarios where you are working with multiple microservices. By bundling all of your microservices into a Docker Compose file, you can quickly start up all of your services in one go. This is particularly useful when you’re dealing with complex applications that consist of many microservices. In contrast, serverless platforms can require you to come up with your own strategy for testing microservices in the context of the overall application. This can be more complex, as it may require setting up multiple different functions, APIs, and other resources to create a fully functioning application.
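A minimal sketch of that setup, with hypothetical service names and paths, might look like this:

```yaml
# docker-compose.yml: start every microservice plus its dependencies at once.
services:
  orders-api:
    build: ./src/orders-api
    ports:
      - "5001:8080"
  catalog-api:
    build: ./src/catalog-api
    ports:
      - "5002:8080"
  redis:
    image: redis:7-alpine
```

A single `docker compose up` then brings the whole application up locally.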

Also, a containerized approach can simplify the security requirements of a microservices architecture. With serverless platforms, each microservice is typically exposed publicly on the internet with an HTTP endpoint. This means that every service is potentially vulnerable to attacks, and it’s essential to ensure that only trusted clients can access each service. In contrast, with a Kubernetes cluster, you can choose to expose only specific microservices using an ingress controller. This reduces the attack surface, making it easier to secure the services that are exposed. By limiting public access to only the services that need to be publicly accessible, you can significantly reduce the risk of security breaches.
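For example, an Ingress resource can publish only a gateway service while every other microservice remains reachable solely inside the cluster (the names below are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-entrypoint
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway   # the only publicly exposed microservice
                port:
                  number: 80
```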

Not mutually exclusive options

It’s worth noting that hybrid architectures are a viable option when it comes to leveraging both AKS and Azure Functions (and the corresponding solutions for AWS and GCP), taking advantage of the unique strengths of each platform. Additionally, if you prefer the programming model of Azure Functions, it’s possible to host them in a container. You can also combine AKS with technologies like Azure Container Instances to achieve consumption-based pricing and elastic scale, similar to the benefits of serverless computing.
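For example, the Azure Functions Core Tools can scaffold a Dockerfile for a function project, so the Functions programming model ships as an ordinary container image (the project name and runtime here are assumptions):

```shell
# Create a new Python function project with a Dockerfile included.
func init MyFunctionApp --worker-runtime python --docker

# Build and run it like any other container.
docker build -t myfunctionapp ./MyFunctionApp
docker run -p 8080:80 myfunctionapp
```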

While serverless architectures tend to push users towards PaaS solutions for databases, event brokers, identity providers, and so on, you can achieve the same outcome with containers. There’s no reason why you can’t use PaaS services for these requirements rather than containerizing everything. It’s also worth noting that if you’re migrating from a monolith, your microservices may already be running alongside some legacy virtual machines.

Conclusions

Both containerized and serverless approaches are excellent options for building microservices and are continually incorporating each other’s best ideas. As a result, the differences between the two may become less significant in the future.

So which approach should you choose? For a startup application, where greenfield development is taking place with a small team of developers trying to prove out a business idea, serverless architecture can be an excellent choice. The consumption-based pricing model and rapid scalability can be ideal for startups looking to minimize costs and quickly scale up when necessary. However, for enterprise applications with more components, development teams, and legacy components involved, containerized approaches can be more promising. Containers offer greater flexibility and control, making it easier to manage complex applications with multiple components.

In reality, most systems today are essentially “hybrid” in nature, combining aspects of serverless, containers, and traditional virtual machines. This allows for greater flexibility and the ability to choose the right tool for the job at hand. Ultimately, the choice between containerized and serverless approaches depends on the specific needs of your application and your development team’s expertise.

See you in the next one!