The core focus of any DevOps team is evolving and improving products rapidly. At Leapfrog, we value delivering applications and services quickly and efficiently to provide higher value to businesses and clients.
For the past couple of months, we have been researching and testing various tools and services to help us be more efficient, scalable, and reliable. Here is a list of DevOps tools and technologies we have been using in our projects that you might find useful.
1. AWS Fargate – Run containers without managing any servers
In the past, we used Amazon EC2 instances for all of our projects. If anyone wanted to deploy a React, Node, or Python app, they were given access to an EC2 instance. We also deployed dockerized apps on EC2.
There isn’t anything wrong with EC2. It’s perfect for testing small applications, but issues come up when you need to scale down the road. To avoid those issues, AWS Fargate has become our go-to service for deploying our dockerized Node/Python/Go/* apps. You build a Docker image for your app, push it to a container registry, and let Fargate do the rest. Your containers will be up and running without you having to manage any servers. The best part: it auto-scales your applications with load, or with the push of a button.
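Since Fargate only needs a container image to run, the entire deployment input can be as small as a Dockerfile. Here is a minimal sketch for a Node app; the file names, port, and `server.js` entry point are placeholders, not from a real project:

```dockerfile
# Build stage: install dependencies with a clean, reproducible install
FROM node:12-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# Runtime stage: carry over only what the app needs to run
FROM node:12-alpine
WORKDIR /app
COPY --from=build /app .
EXPOSE 3000
CMD ["node", "server.js"]
```

Push the resulting image to a registry such as Amazon ECR, point a Fargate task definition at it, and the service takes care of the servers.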
2. Amazon S3 for Frontend Deployments
Most of the frontend applications that we build are Single Page Apps built on React, Angular or Vue. Now, the beauty of Single Page Apps is that they are all rendered client-side. Meaning, your server doesn’t have to do anything to render pages, except for serving static files (HTML, CSS, and JS). The heavy lifting of the view logic is done on the end user’s web browser.
We had been looking for a replacement for EC2 instances for hosting Single Page Apps, because an EC2 instance is genuinely overkill for serving simple static files. The replacement we found was none other than Amazon S3. With S3, we can host static websites without the need for an EC2 instance. Couple it with AWS CloudFront and we have a CDN too. The other great thing about S3 is its scalability.
We now host all of our frontend apps on S3. It significantly minimizes cost and maximizes app scalability.
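For reference, a deployment of this kind needs nothing but the AWS CLI. This is a sketch with placeholder bucket and folder names, assuming the bucket policy already allows public reads:

```
# Build the SPA, then upload the static files to the bucket
npm run build
aws s3 sync build/ s3://example-frontend-bucket --delete

# One-time setup: serve the bucket as a static website, falling back
# to index.html so client-side routing keeps working on deep links
aws s3 website s3://example-frontend-bucket \
  --index-document index.html --error-document index.html
```

From there, a CloudFront distribution can simply point at the bucket's website endpoint.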
3. AWS Secrets Manager – Secure and Centralized Secrets
We deal with many different secret keys when developing an app: database credentials, server credentials, Google Auth keys, and more.
From our experience, once your application starts growing, managing these keys gets harder. Without proper caution, development keys get deployed to production simply because of the sheer number of keys to manage.
This is where AWS Secrets Manager comes in. It stores the secrets for all of your environments in one place and lets you fetch them whenever you need them. We can also add internal access control over who gets to view which environment’s keys.
Using AWS Secrets Manager means writing a few extra lines of code. In return, it vastly increases the security of the application: your secrets remain accessible to those who need them and unavailable to those who don’t. But since integrating AWS Secrets Manager takes that extra time, we built an open-source tool called Envault.
Once set up, our homebrew program Envault fetches the secrets from AWS and injects them into your application process with ease. We have been using this tool in every project, and so far we love it. Not just because we built it, but because it speeds up integrating AWS Secrets Manager into your application.
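To make those “extra lines of code” concrete, here is a small Go sketch of the kind of work a tool like Envault automates: turning a secret stored as a JSON object into environment-style KEY=VALUE pairs for a child process. The `secretsToEnv` helper and the sample secret are hypothetical; a real setup would first fetch the JSON string with the AWS SDK’s `GetSecretValue` call.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// secretsToEnv converts a Secrets Manager JSON secret string,
// e.g. {"DB_HOST":"db.internal","DB_PASS":"s3cret"}, into
// KEY=VALUE pairs suitable for injecting into a process environment.
func secretsToEnv(secretJSON string) ([]string, error) {
	var kv map[string]string
	if err := json.Unmarshal([]byte(secretJSON), &kv); err != nil {
		return nil, err
	}
	env := make([]string, 0, len(kv))
	for k, v := range kv {
		env = append(env, fmt.Sprintf("%s=%s", k, v))
	}
	sort.Strings(env) // deterministic order for logging and tests
	return env, nil
}

func main() {
	env, err := secretsToEnv(`{"DB_HOST":"db.internal","DB_PASS":"s3cret"}`)
	if err != nil {
		panic(err)
	}
	for _, e := range env {
		fmt.Println(e)
	}
}
```

In practice the resulting slice would be passed to the child process (for example via `exec.Cmd.Env`), so secrets never touch a file on disk.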
4. Serverless with Apex Up
Serverless has been a trending topic recently. With serverless, you are charged only for the resources you consume. Now, serverless does not mean that there aren’t any servers running; your code still needs to run on a server. It just means that you don’t have to manage them. In a nutshell, your code runs in a Lambda function, triggered by an API Gateway.
However, to be truly serverless, we would have to change our programming paradigm: each of our services would have to run in a separate Lambda. We don’t want to change how we code, but we do want the serverless benefits. This is where Apex Up comes in. It wraps your app and deploys it in a single Lambda, provisioning all the required resources along the way.
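Up is driven by a small JSON config checked into the project root. A minimal `up.json` might look like this; the app name, AWS profile, and region are placeholders:

```json
{
  "name": "example-api",
  "profile": "example-aws-profile",
  "regions": ["us-east-1"]
}
```

With that in place, running `up` in the project directory builds the app, creates the Lambda function and API Gateway, and deploys, all without changing how the app itself is written.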
5. Logging and Monitoring with Elasticsearch and Kibana
First of all, we needed a logging standard. Most companies already have one: conventions for log formats and for how to properly “log” in applications. But where do we store the logs?
As the heading suggests, all logs are shipped to Elasticsearch and visualized and searched through Kibana. We don’t host Elasticsearch ourselves; we use the managed Elasticsearch service from Elastic Cloud. Logs are no longer kept on the servers where the applications are hosted, where they used to take up most of the disk space.
6. Terraform – Infrastructure as code
We had to keep provisioning the same resources over and over: EC2 instances, S3 buckets, RDS databases, for different environments. And not only create them, but keep modifying them too. That made it hard to manage and keep track of every single resource in a project.
So we planned to use Infrastructure as Code extensively with Terraform. All the resources we provision and modify live as code, version-controlled on GitHub. If we have to create an S3 bucket, we write code for it, which decreases the time needed to get a new environment up.
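For example, the S3 bucket mentioned above can be expressed in a few lines of HCL. This is a minimal sketch with placeholder names and region, not a production-ready configuration:

```hcl
provider "aws" {
  region = "us-east-1"
}

# An S3 bucket for a hypothetical staging frontend
resource "aws_s3_bucket" "frontend" {
  bucket = "example-frontend-staging"
  acl    = "private"
}
```

Running `terraform plan` shows what would change, and `terraform apply` creates the bucket; the same files are reviewed and versioned like any other code.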
7. CI/CD – Travis CI | Circle CI
Most of us already use CI servers to run tests (if you write them) and automate build processes. Some use Travis CI, some CircleCI, VSTS, GitLab CI, or even Jenkins hosted on our own servers.
We now avoid hosting our own CI servers. Jenkins and self-hosted GitLab CI have given us many headaches.
If you would like to try a hosted CI server, here is our breakdown of the two we recommend:
- We prefer Travis CI the most, as it’s easy to set up and use, but it’s only free for public repositories.
- We also use CircleCI; it’s free for private repositories, but with only 1,000 build minutes per month.
If we have to use a CI server, we usually choose between these two.
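To show how little setup a hosted CI server needs, here is a hypothetical `.travis.yml` for a Node project; the Node version and scripts are placeholders:

```yaml
language: node_js
node_js:
  - "12"
install:
  - npm ci
script:
  - npm test
```

Commit this file to the repository, enable the repo on Travis CI, and every push gets a clean build and test run with no server of your own to maintain.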
8. CLI tools and scripts in Go
It was always hard to ship our tools to the team when they were written in Python. Even when building Docker images for Node.js apps, we had to install Python 2 and Python 3, then pip, and then our scripts, which was just a hassle.
Moving forward, we are adopting Go. Go lets us compile a tool into a single binary that can be shipped and executed without dependencies. We have been using Go for the past few months, and so far, we love it.
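As a sketch of why this works so well, here is a tiny hypothetical CLI built with nothing but the standard library. The greeting logic is kept in its own function so it can be tested apart from flag parsing:

```go
package main

import (
	"flag"
	"fmt"
)

// greet builds the output message; separated from main so the
// logic is testable without parsing real command-line flags.
func greet(name string) string {
	return "Hello, " + name + "!"
}

func main() {
	name := flag.String("name", "team", "who to greet")
	flag.Parse()
	fmt.Println(greet(*name))
}
```

`go build` turns this into one self-contained binary, and cross-compiling for a teammate’s machine is just a matter of setting environment variables, for example `GOOS=linux GOARCH=amd64 go build`. There is no interpreter or pip step to ship.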
Like what you read? Here is a list of our favorite open source apps for development.