Day-10-terraform

Creating an EKS Cluster with VPC Configuration Using Terraform

TLDR: This blog post provides a comprehensive guide on creating an Amazon EKS cluster with VPC configuration using Terraform, including installation steps, configuration details, and best practices for structuring Terraform files.

In this blog post, we will explore how to create an Amazon Elastic Kubernetes Service (EKS) cluster along with a Virtual Private Cloud (VPC) configuration using Terraform. This guide will cover the installation of necessary tools, configuration steps, and best practices for structuring Terraform files.

setup
install AWS CLI → aws --version
install Terraform → terraform version
create an access key and secret access key
aws configure → paste the access key and secret access key
// vpc.tf --> using the community VPC module
provider "aws" {
  region = var.aws_region
}

data "aws_availability_zones" "available" {}

locals {
  cluster_name = "abhi-eks-${random_string.suffix.result}"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.7.0"

  name                 = "abhi-eks-vpc"
  cidr                 = var.vpc_cidr
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

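  # The role tags below let Kubernetes and the AWS Load Balancer Controller
  # discover which subnets to use for internet-facing and internal load balancers.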
  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
// eks-cluster.tf
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "20.8.4"
  cluster_name    = local.cluster_name
  cluster_version = var.kubernetes_version
  subnet_ids      = module.vpc.private_subnets

  enable_irsa = true

  tags = {
    cluster = "demo"
  }

  vpc_id = module.vpc.vpc_id

  eks_managed_node_group_defaults = {
    ami_type               = "AL2_x86_64"
    instance_types         = ["t3.medium"]
    vpc_security_group_ids = [aws_security_group.all_worker_mgmt.id]
  }

  eks_managed_node_groups = {

    node_group = {
      min_size     = 2
      max_size     = 6
      desired_size = 2
    }
  }
}
// security-group.tf 
resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group_rule" "all_worker_mgmt_ingress" {
  description       = "allow inbound traffic from eks"
  from_port         = 0
  protocol          = "-1"
  to_port           = 0
  security_group_id = aws_security_group.all_worker_mgmt.id
  type              = "ingress"
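  # allow only the RFC 1918 private address ranges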
  cidr_blocks = [
    "10.0.0.0/8",
    "172.16.0.0/12",
    "192.168.0.0/16",
  ]
}

resource "aws_security_group_rule" "all_worker_mgmt_egress" {
  description       = "allow outbound traffic to anywhere"
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.all_worker_mgmt.id
  to_port           = 0
  type              = "egress"
  cidr_blocks       = ["0.0.0.0/0"]
}
// variable.tf
variable "kubernetes_version" {
  default     = "1.27" # keep the version quoted as a string; a bare number would turn "1.30" into 1.3
  description = "kubernetes version"
}

variable "vpc_cidr" {
  default     = "10.0.0.0/16"
  description = "default CIDR range of the VPC"
}
variable "aws_region" {
  default = "us-west-1"
  description = "aws region"
}
// output.tf
output "cluster_name" {
  description = "EKS cluster name."
  # In terraform-aws-eks v19+, cluster_id is populated only for EKS-on-Outposts clusters, so expose cluster_name instead.
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.aws_region
}

output "oidc_provider_arn" {
  value = module.eks.oidc_provider_arn
}

output "zz_update_kubeconfig_command" {
  # HCL has no "+" string concatenation; build the command with format() instead.
  value = format("aws eks update-kubeconfig --name %s --region %s", module.eks.cluster_name, var.aws_region)
}
// version.tf
terraform {
  required_version = ">= 1.3" # terraform-aws-eks v20.x needs Terraform 1.3+, so ">= 0.12" was too loose
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">=2.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.68.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.1.0"
    }
    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~> 2.2.0"
    }
  }
}
terraform init
terraform plan
terraform apply --auto-approve
grant permission on the created cluster to view its resources
search → eks → clusters (created by terraform) → compute (tab) → Node groups
nodes are not visible → you don't yet have permission to access/view the cluster's resources
search → eks → clusters (created by terraform) → access (tab) → create access entry
IAM principal ARN: (your IAM user/role ARN) → next → policy name: AmazonEKSClusterAdminPolicy → add policy → next → create
search → eks → clusters (created by terraform) → compute (tab) → Node groups (the node group and nodes are now visible)
terraform destroy → to destroy the created resources and avoid charges

Overview of the Project

The project involves setting up an EKS cluster with worker nodes configured for autoscaling. We will also create security groups to secure the VPC and the EKS cluster. The steps outlined in this post are designed to mimic real-world scenarios commonly encountered in organizations.

Prerequisites

Before we begin, ensure you have the following:

  • An AWS account

  • Basic knowledge of Terraform

  • Access to a terminal or command line interface

Step 1: Install AWS CLI

To interact with AWS services, we first need to install the AWS Command Line Interface (CLI). Here’s how to do it:

  1. Search for "install AWS CLI" in your browser.

  2. Follow the instructions based on your operating system (Windows, Linux, or Mac).

  3. Verify the installation by running the command:

     aws --version
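
     On Linux, for example, the AWS-documented bundled installer looks like this:

     curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
     unzip awscliv2.zip
     sudo ./aws/install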
    

Step 2: Install Terraform

Next, we need to install Terraform. Follow these steps:

  1. Search for "install Terraform" in your browser.

  2. Choose the installation method suitable for your operating system.

  3. Verify the installation by running:

     terraform version
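
     On Linux, for example, you can fetch a release binary directly from HashiCorp (1.7.5 below is just an example version; pick a current release):

     wget https://releases.hashicorp.com/terraform/1.7.5/terraform_1.7.5_linux_amd64.zip
     unzip terraform_1.7.5_linux_amd64.zip
     sudo mv terraform /usr/local/bin/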
    

Step 3: Configure AWS CLI

After installing the AWS CLI, we need to configure it to connect to your AWS account:

  1. Run the command:

     aws configure
    
  2. Enter your AWS Access Key ID, Secret Access Key, default region, and output format (JSON is recommended).
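
     The command prompts for each value in turn; a session looks roughly like this (the key values below are placeholders):

     $ aws configure
     AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
     AWS Secret Access Key [None]: ****************************************
     Default region name [None]: us-west-1
     Default output format [None]: json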

Step 4: Create Terraform Files

Now that we have the necessary tools installed and configured, we can create the Terraform files. It is recommended to break down the configuration into multiple files for better readability and maintenance. Here are the key files you will create:

  • vpc.tf: Configuration for the VPC.

  • eks-cluster.tf: Configuration for the EKS cluster.

  • security-group.tf: Configuration for security groups.

  • variable.tf: Input variables such as the AWS region and Kubernetes version.

  • output.tf: To define outputs after the execution.

  • version.tf: Required Terraform and provider versions.
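
Laid out in a single project directory (the directory name is arbitrary), the files from this post look like:

eks-terraform/
├── vpc.tf
├── eks-cluster.tf
├── security-group.tf
├── variable.tf
├── output.tf
└── version.tf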

Best Practices for Structuring Terraform Files

  1. Modularization: Use modules for reusable code. For example, use the official Terraform AWS modules for VPC and EKS.

  2. Separation of Concerns: Keep different resources in separate files to enhance readability and maintainability.

Step 5: Initialize Terraform

Navigate to your project directory and run:

terraform init

This command initializes the Terraform configuration and downloads the necessary provider plugins.

Step 6: Plan the Deployment

Before applying the changes, it’s a good practice to run:

terraform plan

This command shows you what resources will be created, changed, or destroyed.
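
If you want a stronger guarantee that what you reviewed is exactly what gets applied, you can save the plan to a file and apply that file:

terraform plan -out=tfplan
terraform apply tfplan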

Step 7: Apply the Configuration

To create the resources defined in your Terraform files, run:

terraform apply

You can add the -auto-approve flag to skip the confirmation prompt if you are confident about the changes.

Step 8: Verify the Deployment

Once the resources are created, you can verify the EKS cluster and VPC configuration in the AWS Management Console. Check the EKS section to see your cluster and the VPC section for the network configuration.
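
You can also verify from the command line. Assuming kubectl is installed, update your kubeconfig with the cluster name from the Terraform outputs and list the worker nodes:

aws eks update-kubeconfig --name <cluster-name> --region us-west-1
kubectl get nodes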

Step 9: Grant Permissions

To access the Kubernetes resources, you may need to create access entries in the EKS cluster settings. This can be done through the AWS Management Console or using Terraform.
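
As a sketch of the Terraform route (the principal ARN below is a placeholder for your own IAM user or role, and these resources need a recent AWS provider), the same AmazonEKSClusterAdminPolicy used in the console steps above can be attached with an access entry:

resource "aws_eks_access_entry" "admin" {
  cluster_name  = module.eks.cluster_name
  principal_arn = "arn:aws:iam::123456789012:user/example-admin" # placeholder ARN
}

resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = module.eks.cluster_name
  principal_arn = aws_eks_access_entry.admin.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster" # cluster-wide admin; use "namespace" plus a namespaces list for a narrower scope
  }
}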

Step 10: Clean Up Resources

After you are done testing, remember to clean up the resources to avoid incurring charges. Run:

terraform destroy

This command will remove all the resources created by Terraform.

Conclusion

In this blog post, we covered the steps to create an EKS cluster with VPC configuration using Terraform. We discussed the installation of necessary tools, configuration steps, and best practices for structuring Terraform files. By following these guidelines, you can effectively manage your infrastructure as code and streamline your deployment processes.