The scalability, flexibility, and reduced cost promised by serverless architecture have driven a massive growth rate of 75%, outpacing other AWS cloud services. That was compelling enough for us to dip our toes into serverless computing. It's not a magic bullet for everything, but there are specific use cases where it outperforms other cloud services.
In this blog, we share our experience with serverless architecture, both the perks and the quirks we have faced.
First, a little introduction. The diagram below shows the deployment architectures we've used over time.
A monolithic application bundles all services, such as authentication, queueing, and the other modules, into a single application on a single server. This used to be our default way of developing applications.
The pros of this architecture are:
- Less maintenance cost
- Easier testing
- Good initial performance
- Enables rapid application development
Later, to support independent modules and larger development teams, we started fiddling with the microservices architecture, and slowly moved every new project to it. It is highly scalable, better organized, and quickly deployable.
Nevertheless, both the monolithic and microservices architectures share some common challenges:
- Network Security
- Infrastructure Pricing
- Operational Cost
- High Development Cost
- Environment Incompatibility
We solved autoscaling and the environment incompatibility between development and production by containerizing with Docker.
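To illustrate how containerization removes that environment drift, a minimal Dockerfile for one of our services might look like the sketch below (the base image, port, and entry point are hypothetical; the point is that the same image runs everywhere):

```dockerfile
# Hypothetical microservice image: the identical artifact runs in dev and production
FROM python:3.9-slim
WORKDIR /app

# Install pinned dependencies for a reproducible environment
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8000
CMD ["python", "server.py"]
```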
However, containers still run on IaaS (EC2 instances), which keeps costing money while the application sits idle. Because storage and compute are pre-provisioned, and often over-provisioned, there is always a cost overhead, and we found it to be substantial. This was one of the reasons we started experimenting with a serverless architecture.
Serverless computing is a cloud-computing model in which the cloud provider runs the servers and dynamically allocates machine resources.
Under the hood, this environment still runs on an OS and uses virtual machines or physical servers, but the responsibility for provisioning and managing that infrastructure belongs entirely to the service provider. Pricing is based on actual resource consumption.
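In AWS Lambda terms, the unit you deploy is just a handler function; the provider takes care of everything around it. A minimal sketch (the event shape here is hypothetical):

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda-style handler: the provider allocates compute
    per invocation and bills only for actual execution time."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same function can be invoked once a day or a million times an hour without any capacity planning on our side.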
Advantages of Serverless
- Less operational complexity
- Scaling within seconds
- High availability
- Support for multiple programming languages
- Lower development cost
- Secure infrastructure
- Easy creation of microservices
- Faster release cycles
Limitations of Serverless
- Latency and concurrency: "warm" and "cold" functions behave differently, and a cold start responds noticeably slower
- Memory and execution limits: not suited for massive computation
- Runaway billing (a "denial-of-wallet" attack) if usage is not properly monitored
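The warm/cold distinction comes from container reuse: module-level code runs once per cold start, and subsequent invocations reuse the already-initialized container until the provider reclaims it. A small sketch to illustrate the mechanism (not AWS-specific; the fields returned are illustrative):

```python
import time

# Module-level initialization: runs once per cold start,
# then is reused across warm invocations of the same container.
_container_started_at = time.time()
_invocation_count = 0

def lambda_handler(event, context):
    """Report whether this invocation hit a cold or a warm container."""
    global _invocation_count
    _invocation_count += 1
    return {
        "cold_start": _invocation_count == 1,
        "invocation": _invocation_count,
        "container_age_s": round(time.time() - _container_started_at, 3),
    }
```

This is also why expensive setup (DB connections, config loading) is usually done at module level: you pay for it once per cold start instead of on every invocation.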
Moving to Serverless
We first experimented with an authentication module developed as a microservice. We hosted it on AWS Lambda and chose AWS's SAM (Serverless Application Model) framework for rapid application development.
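For context, a SAM application is described by a template like the following minimal sketch (the resource name, handler, and API path are hypothetical, not our actual service):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Authentication microservice (illustrative)

Resources:
  AuthFunction:
    Type: AWS::Serverless::Function   # SAM expands this into Lambda + IAM + API Gateway resources
    Properties:
      CodeUri: src/
      Handler: app.lambda_handler
      Runtime: python3.9
      Events:
        AuthApi:
          Type: Api                    # exposes the function over an HTTP endpoint
          Properties:
            Path: /auth
            Method: post
```

From there, `sam build` and `sam deploy` package the code and roll out the stack.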
A few hiccups accompanied our transition to serverless. Here are a few things to consider if you are moving to a serverless architecture on AWS.
- RDS connectivity
- Internet connectivity
- NAT Gateway
- Roles and Policy management
- Secret Credential management
The authentication microservice needed to connect to an existing application database. We quickly figured out that to access RDS from Lambda, we had to attach the function to a VPC. However, a Lambda function attached to a VPC loses internet connectivity.
Finally, we designed a network architecture that gives our Lambda function access to both RDS and the internet. The solution: launching a NAT gateway.
In the diagram below, the AWS Lambda function is attached to a private subnet. All internet-bound traffic from that subnet is routed (via its route table) to the NAT gateway, which we hosted, along with RDS, in a public subnet. The internet gateway then handles internet traffic from the public subnet, including the NAT gateway's. The yellow box indicates that the Lambda function and RDS are in the same network, or VPC.
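In a SAM/CloudFormation template, attaching the function to the private subnets looks roughly like this (the security-group and subnet IDs are hypothetical placeholders):

```yaml
Resources:
  AuthFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.9
      VpcConfig:
        SecurityGroupIds:
          - sg-0123exampleid          # hypothetical: must allow outbound traffic to RDS
        SubnetIds:
          - subnet-0aaaexampleid      # private subnets whose route tables point
          - subnet-0bbbexampleid      # internet-bound traffic at the NAT gateway
```

With `VpcConfig` set, the function gets an elastic network interface inside the VPC, which is what makes RDS reachable and why the NAT gateway is needed for outbound internet access.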
We kept iterating on our serverless app architecture, because we wanted our team to have a proper deployment pipeline with the usual three stages: dev, QA, and production. We finally arrived at the setup below.
Let’s walk through some of the AWS services we’ve used:
S3 Bucket: The blue S3 bucket hosted static content, and the red S3 bucket hosted our AWS Lambda function code.
To simplify credential management, we integrated Vault for storing secrets. To ease deploying and maintaining the code, we designed a CI/CD pipeline in GitLab for the respective git branches.
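Reading a secret from Vault's KV v2 HTTP API can be sketched as below. The Vault address, mount path, and secret names are hypothetical, and the HTTP opener is injected so the logic is testable without a running Vault server:

```python
import json
import urllib.request

VAULT_ADDR = "https://vault.example.internal:8200"  # hypothetical Vault address

def read_kv2_secret(path, token, opener=urllib.request.urlopen):
    """Read a secret from Vault's KV v2 engine mounted at 'secret/'.

    KV v2 inserts 'data' into the API path, and the payload is nested
    under response["data"]["data"].
    """
    url = f"{VAULT_ADDR}/v1/secret/data/{path}"
    req = urllib.request.Request(url, headers={"X-Vault-Token": token})
    with opener(req) as resp:
        payload = json.loads(resp.read())
    return payload["data"]["data"]
```

In a Lambda function, a call like this would typically run at module level (once per cold start), with the token supplied through an environment variable or an IAM-based auth method rather than hardcoded.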
After a few weeks of experimentation, we deployed our app on the serverless architecture. Our development team can now focus solely on writing code.
We have found serverless to be very well suited for adopting a microservices architecture without the hassle of maintaining servers or the scalability and availability headaches.