IAM Module (IRSA, OIDC, and Why This Controls Everything)
In the previous part, we built the VPC.
Now we move to the part of EKS setups that causes the most confusion:
IAM
This is not just "permissions".
This module controls:
- how EKS works
- how nodes behave
- how pods access AWS services
If this is wrong:
- the ALB won't work
- CSI drivers fail
- pods can't reach AWS APIs
- debugging becomes painful
Module Files
modules/iam/
├── main.tf
├── variables.tf
└── outputs.tf
variables.tf

```hcl
variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "oidc_provider_arn" {
  description = "ARN of the OIDC provider"
  type        = string
}

variable "oidc_provider" {
  description = "OIDC provider URL"
  type        = string
}
```
What these inputs mean
- cluster_name → used to name the roles
- oidc_provider_arn → comes from the EKS module
- oidc_provider → the OIDC issuer URL (without the https:// prefix), used for IRSA condition matching
Important: this module depends on the EKS module, because the OIDC provider is created there.
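As a sketch, the root module would wire these inputs together like this (the output names on `module.eks` are assumptions — match them to whatever your EKS module actually exports):

```hcl
# Root module: feed the EKS module's OIDC outputs into the IAM module.
module "iam" {
  source = "./modules/iam"

  cluster_name      = var.cluster_name
  oidc_provider_arn = module.eks.oidc_provider_arn
  oidc_provider     = module.eks.oidc_provider # issuer URL without https://
}
```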
main.tf (Core IAM Logic)
1. EKS Cluster Role

```hcl
resource "aws_iam_role" "eks_cluster" {
  name = "${var.cluster_name}-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      }
    ]
  })
}
```
What this actually does
This role is used by the EKS control plane (managed by AWS).
Key line:
Service = "eks.amazonaws.com"
This means: the EKS service is allowed to assume this role.
Attach the policy:

```hcl
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}
```
Why this policy?
AmazonEKSClusterPolicy gives the control plane the permissions it needs to:
- manage the cluster's compute resources
- manage networking resources such as ENIs and security groups
- call other AWS APIs on the cluster's behalf
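For context, this role is consumed by the cluster resource itself, which lives in the EKS module (Part 3). A minimal sketch — the subnet variable name is an assumption carried over from the VPC part:

```hcl
resource "aws_eks_cluster" "this" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster.arn # the cluster role defined above

  vpc_config {
    subnet_ids = var.private_subnet_ids # assumed variable from the VPC module
  }
}
```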
2. Node Group Role

```hcl
resource "aws_iam_role" "eks_nodes" {
  name = "${var.cluster_name}-node-group-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}
```
What this role is for
It is used by the EC2 instances that run as worker nodes.
Key line:
Service = "ec2.amazonaws.com"
This means: EC2 instances can assume this role.
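The node role is handed to a managed node group. A sketch of how that looks (this resource belongs in the node group module; the subnet variable name and sizing are assumptions):

```hcl
resource "aws_eks_node_group" "this" {
  cluster_name    = var.cluster_name
  node_group_name = "${var.cluster_name}-nodes"
  node_role_arn   = aws_iam_role.eks_nodes.arn # the node role defined above
  subnet_ids      = var.private_subnet_ids     # assumed variable from the VPC module

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
```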
3. Node Policies
Now we attach several policies to the node role.
a. Worker Node Policy

```hcl
resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}
```

This allows nodes to:
- join the cluster
- communicate with the control plane
b. CNI Policy

```hcl
resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}
```

This one is very important. It allows:
- pod networking
- ENI and IP address management
Without it, pods won't get IPs.
c. ECR Access

```hcl
resource "aws_iam_role_policy_attachment" "eks_container_registry_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}
```

This allows nodes to pull container images from ECR.
d. SSM Access

```hcl
resource "aws_iam_role_policy_attachment" "eks_ssm_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.eks_nodes.name
}
```

This enables Session Manager access to the nodes, so there is no need to open SSH. This is a production best practice.
4. EBS CSI Driver Role (Most Important Part)

```hcl
resource "aws_iam_role" "ebs_csi_driver" {
  name = "${var.cluster_name}-ebs-csi-driver-role"
  # assume_role_policy is walked through piece by piece below
}
```

This role is NOT for nodes. It is for a Kubernetes pod: the EBS CSI controller.
This is IRSA (IAM Roles for Service Accounts), the core concept:

```hcl
Action = "sts:AssumeRoleWithWebIdentity"
```

This is different from the EC2 role above: it lets a pod exchange its service account token for temporary IAM credentials.
OIDC trust:

```hcl
Principal = {
  Federated = var.oidc_provider_arn
}
```

This links the EKS cluster's OIDC provider to IAM.
The condition (VERY important):

```hcl
Condition = {
  StringEquals = {
    "${var.oidc_provider}:sub" = "system:serviceaccount:kube-system:ebs-csi-controller-sa"
  }
}
```

This means ONLY this service account can assume the role:
kube-system / ebs-csi-controller-sa
Why this matters
This is fine-grained security. Instead of:
❌ giving full access to every node
you do:
✅ give access only to the specific pod that needs it
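Assembled, the full role looks roughly like this sketch. The `:aud` condition is an extra hardening step not shown in the fragments above — it is commonly recommended for IRSA, so treat it as an addition rather than part of the original module:

```hcl
resource "aws_iam_role" "ebs_csi_driver" {
  name = "${var.cluster_name}-ebs-csi-driver-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Principal = {
          Federated = var.oidc_provider_arn
        }
        Condition = {
          StringEquals = {
            # only this service account may assume the role
            "${var.oidc_provider}:sub" = "system:serviceaccount:kube-system:ebs-csi-controller-sa"
            # extra hardening: the token must be intended for STS
            "${var.oidc_provider}:aud" = "sts.amazonaws.com"
          }
        }
      }
    ]
  })
}
```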
Attach the policy:

```hcl
resource "aws_iam_role_policy_attachment" "ebs_csi_policy" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
  role       = aws_iam_role.ebs_csi_driver.name
}
```

What this enables:
- creating EBS volumes
- attaching and detaching volumes
- managing volume lifecycle (snapshots, tags)
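To complete the IRSA wiring, the role ARN must be handed to the EBS CSI add-on so its service account uses it. A sketch — this resource would typically live in the EKS or add-ons module, not in the IAM module itself:

```hcl
# Hand the IRSA role to the EBS CSI driver add-on.
resource "aws_eks_addon" "ebs_csi" {
  cluster_name             = var.cluster_name
  addon_name               = "aws-ebs-csi-driver"
  service_account_role_arn = aws_iam_role.ebs_csi_driver.arn
}
```

EKS then annotates the `ebs-csi-controller-sa` service account with this role, which is exactly what the `sub` condition in the trust policy matches against.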
outputs.tf

```hcl
output "eks_cluster_role_arn" {
  value = aws_iam_role.eks_cluster.arn
}

output "eks_nodes_role_arn" {
  value = aws_iam_role.eks_nodes.arn
}

output "ebs_csi_driver_role_arn" {
  value = aws_iam_role.ebs_csi_driver.arn
}
```
Why outputs matter
These outputs are consumed by:
- the EKS module
- the node group module
- the CSI add-on configuration
Example:

```hcl
cluster_role_arn = module.iam.eks_cluster_role_arn
```

Referencing an output like this makes Terraform track the dependency automatically.
Real Architecture (What You Built)
- EKS control plane → uses the cluster role
- EC2 nodes → use the node role
- Pods (EBS CSI controller) → use the IRSA role via OIDC
⚠️ Real Mistakes People Make
- giving broad IAM permissions directly to nodes (bad security)
- not using IRSA at all
- a wrong OIDC condition, so the role can never be assumed
- forgetting the CNI policy, so pods fail to get IPs
Key Takeaways
- IAM is not optional: it defines how the system behaves
- nodes and pods should have separate roles
- IRSA is the correct way to give pods AWS access
- OIDC is what connects Kubernetes to IAM
Next
In Part 3:
- the EKS Cluster Module
- how the control plane is created
- what OIDC actually does internally
If you understand this module, you understand how AWS and Kubernetes actually connect behind the scenes.