Securing Your Database with Network Policies in Kubernetes (5)
TL;DR: This blog post explores how to secure a database in a Kubernetes cluster using network policies, focusing on ingress traffic control to prevent unauthorized access from other namespaces. It provides a practical example of deploying a Redis database and implementing network policies to restrict access effectively.
In the fifth episode of the Kubernetes Troubleshooting Zero to Hero series, we delve into a common security access issue faced by DevOps engineers: securing a database within a Kubernetes cluster. This post will guide you through the theoretical and practical aspects of using network policies to restrict access to a database, ensuring that only authorized applications can connect to it.
Understanding the Use Case
To illustrate the concept, let's consider a Kubernetes cluster with two namespaces:
Secure Namespace - where the database is deployed.
Sandbox Namespace - where other applications or pods are deployed.
In this scenario, we will deploy a database (which could be Redis, MongoDB, MySQL, etc.) in the secure namespace and restrict access to it from any pods in the sandbox namespace. The goal is to ensure that only a specific application within the secure namespace can access the database, thereby enhancing security.
Why Restrict Access?
You might wonder why it is necessary to restrict access between namespaces in the same Kubernetes cluster. The answer lies in security. If a pod in the sandbox namespace is compromised, it could potentially access the database if no restrictions are in place. By implementing network policies, we can prevent unauthorized access and protect sensitive data.
Network policies in Kubernetes are used to control the traffic flow between pods. They are primarily divided into two categories:
Ingress - Controls incoming traffic to a pod.
Egress - Controls outgoing traffic from a pod.
In this post, we will focus on ingress policies to restrict access to our database.
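Before building the policy for our database, it helps to see the simplest possible ingress policy: a deny-all baseline. In the sketch below (the policy name is illustrative), an empty podSelector selects every pod in the namespace, and declaring Ingress with no rules blocks all incoming traffic:

```yaml
# Deny all ingress traffic to every pod in the namespace.
# podSelector: {} selects all pods; listing Ingress in policyTypes
# with no ingress rules means no incoming traffic is permitted.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

A policy like ours then works by adding explicit allow rules on top of this default-deny mindset.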
Practical Implementation
Step 1: Deploying the Database
We will deploy a Redis in-memory database in the secure namespace. Here’s how to do it:
Create the secure namespace:
kubectl create ns secure-namespace
Deploy the Redis database:
kubectl apply -f db.yaml -n secure-namespace
Verify that Redis is running:
kubectl get pods -n secure-namespace
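The contents of db.yaml are not shown in the post; a minimal sketch of what it might contain is a single-replica Redis Deployment, assuming the pod is labeled app: redis, which is the label the network policy in Step 3 selects on:

```yaml
# Hypothetical db.yaml: a single-replica Redis Deployment.
# The app: redis label is what the network policy's podSelector matches.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
```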
Step 2: Simulating Unauthorized Access
Next, we will simulate a scenario where a pod in the default namespace (standing in for the sandbox namespace from our use case, and acting as a hacker pod) tries to access the Redis database:
Deploy the hacker pod:
kubectl apply -f hack.yaml
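Like db.yaml, the contents of hack.yaml are not shown; since the next step execs into deploy/httpd in the default namespace, a plausible sketch is a plain Apache httpd Deployment:

```yaml
# Hypothetical hack.yaml: an ordinary httpd Deployment that plays
# the role of a compromised pod in the default namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - name: httpd
          image: httpd:2.4
```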
Log into the hacker pod:
kubectl exec -it deploy/httpd -n default -- /bin/bash
Install Redis CLI to connect to the Redis database:
apt-get update && apt-get install -y redis-tools
Attempt to connect to the Redis database (the pod's IP can be found with kubectl get pods -n secure-namespace -o wide):
redis-cli -h <redis-ip-address>
If the connection succeeds, it indicates a security gap: any pod in the cluster, in any namespace, can reach the database.
Step 3: Implementing Network Policies
To secure the Redis database, we will create a network policy that restricts ingress traffic:
Create a network policy YAML file (network-policy.yaml):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-redis
  namespace: secure-namespace
spec:
  podSelector:
    matchLabels:
      app: redis
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: redis-member
Apply the network policy:
kubectl apply -f network-policy.yaml
Step 4: Testing the Network Policy
After applying the network policy, attempt to connect to the Redis database again from the hacker pod. This time the connection should hang and eventually time out, demonstrating that the network policy is restricting access. Note that network policies are enforced by the cluster's network plugin (CNI); if your plugin does not support NetworkPolicy (for example, a cluster without Calico, Cilium, or another policy-capable plugin), the policy will be accepted but silently ignored.
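To build intuition for why the hacker pod is now blocked, the label-matching behavior can be simulated in a few lines of Python. This is only a sketch of the selector logic, not how Kubernetes or the CNI plugin actually enforces policies, and the pod labels used here are illustrative:

```python
# Simulate the allow-redis ingress rule: a source pod is allowed only
# if its labels contain every key/value pair in the rule's podSelector.
# A bare podSelector in "from" only matches pods in the policy's own
# namespace, so cross-namespace traffic is denied outright.

ALLOWED_FROM = {"role": "redis-member"}  # from the allow-redis policy

def ingress_allowed(src_labels, src_namespace, policy_namespace="secure-namespace"):
    """Return True if a pod with src_labels may reach the Redis pod."""
    if src_namespace != policy_namespace:
        return False  # a plain podSelector never matches across namespaces
    return all(src_labels.get(k) == v for k, v in ALLOWED_FROM.items())

# An authorized app in the secure namespace gets through:
print(ingress_allowed({"role": "redis-member"}, "secure-namespace"))  # True
# The hacker pod in the default namespace is denied:
print(ingress_allowed({"app": "httpd"}, "default"))  # False
```

Allowing a pod from another namespace would require adding a namespaceSelector to the from clause, which is a useful next experiment once the basic policy works.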
Conclusion
Network policies are a powerful tool for securing sensitive resources in a Kubernetes cluster. By understanding and implementing ingress policies, DevOps engineers can significantly enhance the security of their applications and databases. In this post, we explored how to deploy a Redis database and restrict access using network policies, ensuring that only authorized pods can communicate with it.
For further exploration, consider trying these configurations in your own Kubernetes environment, such as an EKS cluster or OpenShift. Thank you for reading, and stay tuned for more insights in the next episode!