Complete Guide to End-to-End DevOps Implementation on a MERN Stack Application

TLDR: This blog post provides a comprehensive guide to implementing an end-to-end DevOps pipeline for a MERN stack application, covering infrastructure automation with Terraform, CI/CD with Jenkins and Argo CD, and monitoring with Prometheus and Grafana.

In this blog post, we will explore how to implement an end-to-end DevOps pipeline for a MERN stack application. This guide will cover the entire process, from infrastructure automation to continuous integration and delivery, and finally to monitoring. By the end of this post, you will have a clear understanding of how to set up a complete DevOps workflow.

The MERN stack is a popular technology stack for building web applications. It consists of:

  • MongoDB: A NoSQL database for storing application data.

  • Express.js: A web application framework for Node.js, used for building the backend.

  • React.js: A JavaScript library for building user interfaces, used for the frontend.

  • Node.js: A JavaScript runtime for executing server-side code.

This stack allows developers to use JavaScript throughout the entire application, making it easier to manage and develop.

GitHub Repo for Application Code and Jenkins Files

GitHub Repo for Terraform files to create EKS Cluster

Complete Documentation for the Project

Overview of the Architecture

The architecture for our DevOps implementation is designed to automate the entire process. Here’s a brief overview of the components involved:

  • Terraform: Used for infrastructure automation.

  • Jenkins: Used for continuous integration (CI).

  • Argo CD: Used for continuous delivery (CD) to the Kubernetes cluster.

  • Prometheus and Grafana: Used for monitoring the application.

Jenkins-server instance setup
EC2 → launch instance
server name → Jenkins-server
ubuntu → 20.04 → confirm changes
instance type → t2.2xlarge
key pair → proceed without key pair
network setting → * select existing security group → search and add → 2 ports open → 8080 (Jenkins), 9000 (SonarQube)
configure storage → 30gb
Advanced details —→
→ IAM instance profile → AA-access → [but in the real world you should not provide admin access → create a new IAM role scoped to what is actually needed] → SSH access to Jenkins disabled
→ user data → INSTALL →jdk → jenkins → docker → terraform → aws cli → sonarqube (docker container) → trivy —→ launch instance
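The user-data bootstrap above can be sketched as a script. This is a minimal sketch under assumptions: Ubuntu 20.04 on amd64, and the standard upstream package repositories for Jenkins, HashiCorp, and Trivy; versions and URLs should be adjusted to your environment. The block below only writes the script to a file and syntax-checks it, so it is safe to run anywhere:

```shell
# Sketch of the EC2 user-data script (assumed package set; adjust repos/versions as needed).
cat > user-data.sh <<'EOF'
#!/bin/bash
set -e
apt-get update -y
apt-get install -y openjdk-17-jre-headless unzip docker.io wget gnupg apt-transport-https
# Jenkins (official apt repo)
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key -o /usr/share/keyrings/jenkins-keyring.asc
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list
apt-get update -y && apt-get install -y jenkins
# Terraform (HashiCorp apt repo)
curl -fsSL https://apt.releases.hashicorp.com/gpg | gpg --dearmor -o /usr/share/keyrings/hashicorp.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/hashicorp.list
apt-get update -y && apt-get install -y terraform
# AWS CLI v2
curl -fsSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip -q awscliv2.zip && ./aws/install
# SonarQube as a Docker container on port 9000
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts-community
# Trivy (Aqua Security apt repo)
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor -o /usr/share/keyrings/trivy.gpg
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb generic main" > /etc/apt/sources.list.d/trivy.list
apt-get update -y && apt-get install -y trivy
EOF
bash -n user-data.sh && echo "user-data syntax OK"
```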
to check that the packages were installed on the Jenkins-server correctly
Jenkins-server is created → connect —→ session manager → connect —> terminal is opened
sudo su ubuntu → sudo htop (see the processes running on the machine)
use these commands to verify each package is installed correctly —>
java --version
jenkins --version
docker version
terraform --version
aws s3 help
docker ps
trivy --help
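The per-tool version commands above can be wrapped in a small helper so a missing tool is obvious at a glance. A generic sketch (the tool list mirrors the user-data installs; demonstrated with a shell that always exists and a name that never does):

```shell
# Report whether a command is on the PATH.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

# On the Jenkins-server you would run it over the real tool list:
#   for t in java jenkins docker terraform aws trivy; do check "$t"; done
check sh                # → ok: sh
check no-such-tool-xyz  # → MISSING: no-such-tool-xyz
```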
Jenkins-server ui setup on browser
copy the public IPv4 address of the Jenkins-server → paste it into a new browser tab → add port 8080 —→ eg: 54.159.16.96:8080
systemctl status jenkins.service (or sudo cat /var/lib/jenkins/secrets/initialAdminPassword) —> get the initial admin password → copy it and paste it into the Jenkins setup page in the browser
install suggested plugins → this takes a few minutes
create admin user → username,password,confirm password, full name, e-mail address → save and continue
instance configuration → jenkins url : http://54.159.16.96:8080/ → save and continue
Jenkins is ready → start using jenkins
Jenkins-server install plugin and setup credentials (aws)
Dashboard → manage jenkins (left side) → Plugins (option)
available plugins (left side) → search available plugins → AWS Credentials, Pipeline: AWS Steps, Terraform, Pipeline: Stage View → (select and install all)
→ AWS Credentials → manages AWS credentials securely in Jenkins.
Pipeline: AWS Steps → provides AWS-specific pipeline steps for deploying to and interacting with AWS services.
Terraform → integrates Terraform into Jenkins pipelines. Pipeline: Stage View → improves stage visualization.
Dashboard → manage jenkins (left side) → Credentials (option)
stores scope to jenkins → store (system) : domain (global) → click on global
Global Credentials → add credential
select AWS Credentials from the dropdown (available since we installed the plugin) → scope : global → id : aws-creds → [ access key id : —- → secret access key : —- → from AWS ]
AWS → IAM → Users → select a user from the users list → Security credentials → Access keys → create access key
Jenkins-server setup terraform
Dashboard → manage jenkins (left side) → Tools (option)
terraform installation → add terraform
go to session manager → terminal → whereis terraform (path to the installed terraform binary) → copy it
name : terraform → install directory : paste the path there → apply and save
—> single Jenkins node architecture → Jenkins acts as both controller (master) and agent (slave)
Jenkins-server, create pipeline → set up the stages → run the terraform script → create VPC + subnets, EKS cluster (2 nodes) deployed into the VPC
copy the code from the Jenkinsfile in the Terraform GitHub repo linked above
dashboard → create a job → name : Infrastructure-Job → pipeline → ok
definition : pipeline script → script : paste the copied code in it → apply and save
Infrastructure-Job → status (left side) → stage view (plugin is required)
dashboard → build with parameters → Environment : dev → Terraform_Action : apply → build
properties([
    parameters([
        string(
            defaultValue: 'dev',
            name: 'Environment'
        ),
        choice(
            choices: ['plan', 'apply', 'destroy'], 
            name: 'Terraform_Action'
        )])
])
pipeline {
    agent any
    stages {
        stage('Preparing') {
            steps {
                sh 'echo Preparing'
            }
        }
        stage('Git Pulling') {
            steps {
                git branch: 'master', url: 'https://github.com/AmanPathak-DevOps/EKS-Terraform-GitHub-Actions.git'
            }
        }
        stage('Init') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'us-east-1') {
                    sh 'terraform -chdir=eks/ init'
                }
            }
        }
        stage('Validate') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'us-east-1') {
                    sh 'terraform -chdir=eks/ validate'
                }
            }
        }
        stage('Action') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'us-east-1') {
                    script {    
                        if (params.Terraform_Action == 'plan') {
                            sh "terraform -chdir=eks/ plan -var-file=${params.Environment}.tfvars"
                        }   else if (params.Terraform_Action == 'apply') {
                            sh "terraform -chdir=eks/ apply -var-file=${params.Environment}.tfvars -auto-approve"
                        }   else if (params.Terraform_Action == 'destroy') {
                            sh "terraform -chdir=eks/ destroy -var-file=${params.Environment}.tfvars -auto-approve"
                        } else {
                            error "Invalid value for Terraform_Action: ${params.Terraform_Action}"
                        }
                    }
                }
            }
        }
    }
}

jenkins-code

  1. parameters

    1. string → defaultValue : dev , name

    2. choice → choices: ['plan', 'apply', 'destroy'] , name

  2. pipelines

    1. agent

    2. stages

      1. Preparing

      2. Git Pulling

      3. Init

      4. Validate

      5. Action

        1. plan

        2. apply

        3. destroy

Jump-server instance setup
EC2 → launch instance
server name → Jump-Server
ubuntu → 20.04 → confirm changes
instance type → t2.medium
key pair → proceed without key pair
network setting →vpc : dev-medium-vpc → subnet : dev-medium-subnet-public → auto assigned ip add : enable → * create security group
configure storage → 30gb
Advance details ———>
→ IAM instance profile → AA-access → [but in the real world you should not provide admin access → create a new IAM role scoped to what is actually needed]
→ user data → INSTALL →AWSCLI (get kube config)→ kubectl (manage eks cluster and resources) → helm (monitoring) → eksctl (create service acc) —→ launch instance
to check that the packages were installed on the Jump-Server correctly
Jump-Server is created → connect —→ session manager → connect —> terminal is opened
sudo su ubuntu → sudo htop (see the processes running on the machine)
use these commands to verify each package is installed correctly —>
kubectl version
eksctl
helm
search → eks → dev-medium-eks-cluster
→ kubernetes v1.29
overview (tab) → the OIDC (OpenID Connect) provider is created automatically by Terraform itself
resources (tab) → GUI —→ replica set
compute (tab) → 2 nodes have been created
we are unable to connect to the EKS cluster from outside the VPC
so, apart from the Jump-Server, we can't access the EKS cluster from any other machine (eg: the Jenkins-Server) —→ Jenkins-Server → session manager → terminal → aws configure → AWS access and secret key : —- → region name : us-east-1 → output format : json —→ aws s3 ls ——> aws eks update-kubeconfig --name dev-medium-eks-cluster --region us-east-1 ——> kubectl get nodes → gives an error, not able to connect to the API server itself
the Jump-Server → which is inside the VPC, is the only machine that can connect to the EKS cluster
—→ Jump-Server → session manager → terminal → aws configure → AWS access and secret key : —- → region name : us-east-1 → output format : json —→ aws s3 ls ——> aws eks update-kubeconfig --name dev-medium-eks-cluster --region us-east-1 ——> kubectl get nodes → lists the 2 nodes
Jump-server → an IAM policy grants the EKS Ingress controller pod permission → to create a LoadBalancer
IAM → Access Management → Policies → search → AWSLoadBalancerControllerIAMPolicy
download the IAM policy → curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
create the IAM policy → aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
the EKS cluster runs an Ingress controller pod → so how does the Ingress controller pod create a LoadBalancer (which is another AWS resource)? —→ the IAM policy grants the EKS Ingress controller pod the permission to create the LoadBalancer
(alternatively, create the EKS cluster with eksctl instead of Terraform) → eksctl create cluster --name Three-Tier-K8s-EKS-Cluster --region us-east-1 --node-type t2.medium --nodes-min 2 --nodes-max 2 —→ aws eks update-kubeconfig --region us-east-1 --name Three-Tier-K8s-EKS-Cluster
No need to create OIDC provider → since it is already configured in terraform
kubectl get sa -n kube-system → check the service accounts (the aws-load-balancer-controller service account must exist before the Helm install below, since serviceAccount.create=false)
Jump-server → helm → deploy the ingress controller (the AWS Load Balancer Controller) to the EKS cluster
→ helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
→ helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=dev-medium-eks-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller (set clusterName to your cluster's name)
kubectl get deployment -n kube-system aws-load-balancer-controller → check whether the load balancer controller deployment is up → ready
Argo CD setup → install Argo CD using its Kubernetes manifest file (Helm is not used here)
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
kubectl get pods -n argocd → All pods must be running
kubectl get svc -n argocd → get all services of argocd
kubectl edit svc argocd-server -n argocd —> a yml file opens → edit → change type : ClusterIP → type : LoadBalancer
now the LoadBalancer is created by the cloud controller manager of the EKS cluster
search → load balancer → copy the DNS url → paste it into the browser and open it —> Your connection is not private → Advanced → proceed link → the Argo CD UI opens —→ username : admin → password : —- (paste the generated password) → click on sign in
kubectl get secrets -n argocd → to get password for sign in for argocd
kubectl edit secrets argocd-initial-admin-secret -n argocd —→ a yml file opens → copy the base64-encoded password
decode the password → echo <base64-password> | base64 --decode
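The password stored in argocd-initial-admin-secret is base64-encoded, so it has to be decoded before logging in. A sketch of the decode step (the kubectl one-liner is shown as a comment and assumes a working kubeconfig on the Jump-Server; the sample value below is illustrative):

```shell
# On the Jump-Server the whole fetch-and-decode would be one line:
#   kubectl -n argocd get secret argocd-initial-admin-secret \
#     -o jsonpath='{.data.password}' | base64 --decode
# The decode step itself, shown on a sample value:
encoded="c3VwZXItc2VjcmV0"            # what you would copy out of the secret's yml
decoded=$(echo "$encoded" | base64 --decode)
echo "$decoded"                        # → super-secret
```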

step 2

sonarqube setup
docker ps
curl ifconfig.me → get ip → copy ip → paste on browser with port 9000 → (eg : 54.159.16.96:9000)
username: admin → password : admin → login
→ Administration (tab) → security (dropdown) → users → click on update token (table → tokens(column))
generate tokens→ name: three-tire → expires-in : 30 days → generate —→ copy the token generated —> paste it on googledocs
→Administration (tab)→ configuration (dropdown)→ webhook→ after the code is analyzed , it notifies to Jenkins
create → name : jenkins → url : 54.159.16.96:8080/sonarqube-webhook → secret : none
→Projects (tab)→ create project manually → project display name : frontend → project key : frontend→ main branch name : main → setup
—→ overview → analyze your project locally → use existing token : paste the token → continue —> describe your build? : Other (JS, TS) → os : linux → execute the scanner : commands (copy them) —> paste into Google Docs
→Projects (tab)→ create project manually → project display name : backend → project key : backend→ main branch name : main → setup
—→ overview → analyze your project locally → use existing token : paste the token → continue —> describe your build? : Other (JS, TS) → os : linux → execute the scanner : commands (copy them) —> paste into Google Docs
store the tokens / secrets in Jenkins
Dashboard → manage jenkins (left side) → Credentials (option)
stores scope to jenkins → store (system) : domains (global) → click on global
→ Global Credentials → add credential
secret text (dropdown)→scope : global →secret : <sonarqube-token>→ id : sonar-token → create
→ Global Credentials → add credential
secret text (dropdown)→scope : global →secret : <aws-account-id>→ id : ACCOUNT-ID → create
→ Global Credentials → add credential
secret text (dropdown)→scope : global →secret : frontend→ id : ECR-REPO1 → create
→ Global Credentials → add credential
secret text (dropdown)→scope : global →secret : backend→ id : ECR-REPO2 → create
→ Global Credentials → add credential
Username and Password (dropdown)→scope : global → username : gokul-devops → password : <github-access-token>→ id : GITHUB-APP → create
→ Global Credentials → add credential
secret text (dropdown)→scope : global →secret : <personal-access-token>→ id : github→ create
create ECR repository in aws → frontend and backend
search ecr → create →create repository → repository name: ——/frontend → create repository
create repository → repository name: ——/backend → create repository
install required plugin for jenkins pipelines
Dashboard → manage jenkins (left side) → Plugins (option)
available plugins (left side) → search available plugins → Docker, Docker Commons, Docker Pipeline, Docker API, NodeJS, OWASP Dependency-Check, SonarQube Scanner → (select and install all)
configure tools in jenkins → which we installed above
Dashboard → manage jenkins (left side) → Tools (option)
search → nodejs installations → add nodejs → name : nodejs → install from nodejs.org → version : node 22.5.1
search → sonarqube scanner installations → add sonarqube scanner → name : sonar-scanner → install from maven central→ version : sonarqube scanner 6.10.4477
search → Dependency-Check installations → add Dependency-Check → name : DP-check → install from github.com → version : Dependency-Check 10.0.3
search → docker installations → add docker → name : docker → install from docker.com → version : latest
—> apply and save
setup of webhook for sonarqube in jenkins
Dashboard → manage jenkins (left side) → system (option)
search → sonarqube installations → add sonarqube → name : sonar-server → server url : http://<jenkins-server-ip>:9000 (SonarQube runs on the same host) → server authentication token : sonar-token —> apply and save
create a pipeline in Jenkins for the frontend → code : https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Pipeline-Code/Jenkinsfile-Frontend —> remove jdk from tools, rename the SonarQube Analysis projectName and projectKey → frontend —> use the execute-scanner command saved in Google Docs
dashboard → new item → name : Three-tire-frontend → pipeline → ok
copy the above code —-→definition : pipeline script → script : paste the code there
—> apply and save
→ frontend → deployment.yml → the image tag should be increased by 1 (eg 2 → 3) → so that Argo CD knows a new image version should be deployed to Kubernetes → [image: 407622020962.dkr.ecr.us-east-1.amazonaws.com/frontend:2 → —/frontend:3]
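This tag bump is what the pipeline's "Update Kubernetes Manifests" stage automates. A minimal local sketch (the manifest fragment and sed pattern are illustrative; in Jenkins the new tag would typically come from the ${BUILD_NUMBER} env var):

```shell
# Sample manifest fragment using the registry path from the project
cat > deployment.yml <<'EOF'
spec:
  containers:
    - name: frontend
      image: 407622020962.dkr.ecr.us-east-1.amazonaws.com/frontend:2
EOF

BUILD_NUMBER=3   # supplied by Jenkins in the real pipeline
# Rewrite whatever tag follows "frontend:" with the new build number
sed -i "s|\(image: .*/frontend:\).*|\1${BUILD_NUMBER}|" deployment.yml

grep 'image:' deployment.yml   # tag is now :3
```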
sonarqube → Projects (tab)→ frontend → can see all analysis and can share this report to developers
docker image is pushed to Ecr → frontend
create a pipeline in Jenkins for the backend → code : https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Pipeline-Code/Jenkinsfile-Backend —> remove jdk from tools, rename the SonarQube Analysis projectName and projectKey → backend
dashboard → new item → name : Three-tire-backend → pipeline → ok
copy the above code —-→definition : pipeline script → script : paste the code there
—> apply and save
sonarqube → Projects (tab)→ backend → can see all analysis and can share this report to developers
docker image is pushed to Ecr → backend
argocd → connect to private github code repository
manage your repositories, projects, settings (sidebar → settings icon) —> repositories → connect repo using https →
type : git → project : default → repository url : <> → username : <> → password : <> → connect
code path for the manifest files →
https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/tree/master/Kubernetes-Manifests-file
create namespace → three-tire → (before deploying the application)
aws → session manager →
—> kubectl create ns three-tire —> (creates the namespace)
argocd → create db application —> deploy db through argo → (no ci/cd for mongodb)
(sidebar→ stack icon) → create application → application name : three-tire-database → project name: default → sync policy : automatic (self heal → checked) → [source —→repository url: https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git → revision : HEAD → path : Kubernetes-Manifests-file/Database] → [destination —> cluster : https://kubernetes.default.svc → namespace: three-tire ] → create
view → three-tire-database (application) [status : healthy] → all linked graph view
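The UI clicks used to create the database application can equivalently be expressed as a declarative Argo CD Application manifest. A sketch built from the same field values as above (you would apply it with kubectl against the cluster; the block below only writes the file):

```shell
# Declarative equivalent of the Argo CD UI steps
# (apply with: kubectl apply -f three-tire-database-app.yml)
cat > three-tire-database-app.yml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: three-tire-database
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git
    targetRevision: HEAD
    path: Kubernetes-Manifests-file/Database
  destination:
    server: https://kubernetes.default.svc
    namespace: three-tire
  syncPolicy:
    automated:
      selfHeal: true
EOF
echo "wrote three-tire-database-app.yml"
```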
aws→ session manager →
—> kubectl get all -n three-tire → (can see replicaset,services,deployment,pod → for mongodb)
argocd → create backend application —> deploy backend through argo
(sidebar→ stack icon) → create application → application name : three-tire-backend → project name: default → sync policy : automatic (self heal → checked) → [source —→repository url: https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git → revision : HEAD → path : Kubernetes-Manifests-file/Backend] → [destination —> cluster : https://kubernetes.default.svc → namespace: three-tire ] → create
view → three-tire-backend (application) [status : healthy] → all linked graph view
argocd → create frontend application —> deploy frontend through argo
(sidebar→ stack icon) → create application → application name : three-tire-frontend → project name: default → sync policy : automatic (self heal → checked) → [source —→repository url: https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git → revision : HEAD → path : Kubernetes-Manifests-file/Frontend] → [destination —> cluster : https://kubernetes.default.svc → namespace: three-tire ] → create
view → three-tire-frontend (application) [status : healthy] → all linked graph view
argocd → create ingress application —> deploy ingress through argo → helps to expose application to external world
(sidebar→ stack icon) → create application → application name : three-tire-ingress → project name: default → sync policy : automatic (self heal → checked) → [source —→repository url: https://github.com/AmanPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git → revision : HEAD → path : Kubernetes-Manifests-file/] → [destination —> cluster : https://kubernetes.default.svc → namespace: three-tire ] → create
view → three-tire-ingress (application) [status : healthy] → all linked graph view
ingress host → amanpathakdevops.study → can't open it in the browser yet
the (k8-three-tire) LB was created by the ingress controller via the load-balancer controller
search → load balancer → copy the DNS name of the created (k8-three-tire) LB → paste it into the browser → can't open it in the browser —> add the DNS record on Route 53
search→ Route 53 → hosted zone → amanpathakdevops.study → create record
record type : A → route traffic to : alias to application and classic load balancer, us-east-1 → k8-three-tire-lb → create record
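The same alias record can be created from the CLI with a change batch. A sketch: the hosted-zone ID of the ALB and the load-balancer DNS name below are placeholders you must replace with your own values, and the aws command itself is shown as a comment:

```shell
# Alias A record pointing the domain at the ALB (placeholder values)
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "amanpathakdevops.study",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<alb-hosted-zone-id>",
        "DNSName": "<k8-three-tire-lb-dns-name>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
# aws route53 change-resource-record-sets \
#   --hosted-zone-id <your-hosted-zone-id> --change-batch file://change-batch.json
grep -q '"AliasTarget"' change-batch.json && echo "change-batch written"
```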
wait for 1 to 2 min
amanpathakdevops.study → now we will be able to access website
aws→ session manager →
—> kubectl get all -n three-tire → (can see replicaset,services,deployment,pod → for mongodb,api,frontend)
—> kubectl get ing -n three-tire

step 3

install Prometheus via helm
helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus
helm repo list
kubectl get all → to check pod is up or not (prometheus)
kubectl get pvc
kubectl get deploy
kubectl get svc → to get all service name
kubectl edit svc prometheus-server
—> change type : ClusterIP to LoadBalancer → (exposes it to the external world → we get an external IP) → a new load balancer is created → copy its DNS name → open it in the browser → the Prometheus UI opens
kubectl get svc prometheus-server → external ip
install Grafana via helm
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana
helm repo list
kubectl get all → to check pod is up or not (grafana)
kubectl get pvc
kubectl get deploy
kubectl get svc → to get all service name
kubectl edit svc grafana
—> change type : ClusterIP to LoadBalancer → (exposes it to the external world → we get an external IP) → a new load balancer is created → copy its DNS name → open it in the browser → the Grafana UI opens → [ username : admin , password : from the command below ]
kubectl get secrets
kubectl edit secrets grafana → copy the base64-encoded admin password → decode it (echo <password> | base64 --decode) and use it to log in
create dashboard (using prebuild dashboard)
grafana → data source → prometheus → prometheus server url : <dns-name> → save and test
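If you prefer not to click through the UI, Grafana can also pick up the Prometheus data source from a provisioning file. A sketch (the URL placeholder is the prometheus-server LoadBalancer DNS name from the step above; in the Helm chart this would be mounted under /etc/grafana/provisioning/datasources/):

```shell
# Grafana data-source provisioning file (declarative alternative to the UI step)
cat > prometheus-datasource.yml <<'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://<prometheus-server-lb-dns-name>
    isDefault: true
EOF
grep -q 'type: prometheus' prometheus-datasource.yml && echo "datasource file written"
```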
dashboard (sidebar) → new (dropdown → import)→ [default dashboards → using id (6417) → load]→ select prometheus data source → import
namespace : three-tire
dashboard (sidebar) → new (dropdown → import)→ [default dashboards → using id (17375) → load]→ select prometheus data source → import

Step 1: Automating Infrastructure with Terraform

The first step in our DevOps pipeline is to automate the infrastructure setup using Terraform. This involves creating an AWS environment with the following components:

EC2 → Jenkins + Terraform scripts ——> this EC2 instance is created manually
Jenkins runs the Terraform script → to create the VPC
VPC → EKS cluster → (worker node 1 + worker node 2 + jump server)
jump server → helps us connect to the EKS cluster (to perform any administrative activities) ——→ only the jump server has access to the EKS cluster
  • An EC2 instance to host Jenkins.

  • A Virtual Private Cloud (VPC) to isolate our resources.

  • An Elastic Kubernetes Service (EKS) cluster to run our application.

Setting Up the EC2 Instance

  1. Launch an EC2 instance using Terraform.

  2. Install necessary tools such as Jenkins, Docker, and Terraform on the instance.

  3. Configure security groups to allow traffic on required ports (e.g., Jenkins on port 8080).

Creating the VPC and EKS Cluster

  1. Use Terraform scripts to create a VPC with subnets.

  2. Deploy an EKS cluster within the VPC, ensuring that it is private and secure.

  3. Set up a jump server to manage access to the EKS cluster.

Step 2: Implementing Continuous Integration with Jenkins

Once the infrastructure is set up, we move on to implementing CI using Jenkins. This involves several stages:

Stages of the CI Pipeline

  1. Code Checkout: Pull the application code from a GitHub repository.

  2. Code Quality Analysis: Use SonarQube to analyze the code for quality and security issues.

  3. Dependency Checks: Run OWASP dependency checks to identify vulnerabilities.

  4. File Scanning: Scan the codebase (e.g., a Trivy filesystem scan) for vulnerable files and dependencies.

  5. Docker Image Creation: Build Docker images for both the frontend and backend applications.

  6. Image Push: Push the Docker images to the private ECR repositories.

  7. Image Scanning: Use Trivy to scan the Docker images for vulnerabilities in ECR.

  8. Update Kubernetes Manifests: Update the deployment manifests with the new image tags.

Setting Up Jenkins

  • Install Jenkins and necessary plugins for Docker, SonarQube, and AWS integration.

  • Configure Jenkins credentials for accessing AWS and GitHub.

  • Create a Jenkins pipeline that automates the above stages.

Step 3: Continuous Delivery with Argo CD

After the CI pipeline is successfully executed, we will set up continuous delivery using Argo CD. This involves:

  1. Deploying Applications: Use Argo CD to deploy the frontend, backend, and database applications to the EKS cluster.

  2. Managing Application State: Argo CD will ensure that the deployed applications match the desired state defined in the GitHub repository.

  3. Using Ingress for External Access: Configure Ingress resources to expose the applications to the internet.

Step 4: Monitoring with Prometheus and Grafana

The final step in our DevOps pipeline is to set up monitoring for our applications using Prometheus and Grafana:

  1. Install Prometheus: Deploy Prometheus on the EKS cluster to collect metrics from the applications.

  2. Install Grafana: Deploy Grafana to visualize the metrics collected by Prometheus.

  3. Create Dashboards: Set up Grafana dashboards to monitor application performance and health.

Conclusion

In this blog post, we have covered the complete process of implementing an end-to-end DevOps pipeline for a MERN stack application. From automating infrastructure with Terraform to setting up CI/CD with Jenkins and Argo CD, and finally monitoring with Prometheus and Grafana, you now have a comprehensive understanding of how to manage a modern web application using DevOps practices.

For further details, please refer to the documentation and GitHub repository linked in the description. Happy coding!