Deploying a Go app on AWS EKS with Terraform, GitHub Actions, and a custom domain.
This project provisions an EKS cluster on AWS using Terraform, deploys a Go application via GitHub Actions, and exposes it at app.jspoth.com behind an Application Load Balancer with HTTPS.
Error: failed to download openapi: the server has asked for the client to provide credentials
So this one was a bit confusing at first. The GitHub Actions IAM role existed in AWS, OIDC was set up,
the workflow ran `aws eks update-kubeconfig` — everything looked right. But `kubectl` was still failing
to connect to the cluster.
Turns out having the IAM role is not enough. With the newer EKS access entries model (which replaced the old
aws-auth ConfigMap), you also need to explicitly grant the role access inside EKS.
Once we added the access entry in Terraform it worked.
```hcl
resource "aws_eks_access_entry" "github_actions" {
  cluster_name  = module.eks.cluster_name
  principal_arn = "arn:aws:iam::<account-id>:role/github-actions-terraform"
  type          = "STANDARD"
}

resource "aws_eks_access_policy_association" "github_actions" {
  cluster_name  = module.eks.cluster_name
  principal_arn = "arn:aws:iam::<account-id>:role/github-actions-terraform"
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}
```
This one took a bit to get right. For a pod to assume an IAM role (e.g. to access DynamoDB), three things need to line up exactly: the role's trust policy must reference the cluster's OIDC provider, the trust policy's condition must name the exact service account (namespace and name), and the service account must be annotated with the role ARN.

modules/irsa in Terraform
The part that got messy was the OIDC provider ID in the trust policy — it has to exactly match
the cluster's OIDC provider URL. Easy to get wrong if you're copying from an old role or a different cluster.
The modules/irsa module takes care of building the trust policy correctly using
var.oidc_provider and var.oidc_provider_arn passed from the EKS module outputs.
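To make the "must match exactly" point concrete, here's a tiny sketch of the condition key/value pairs the trust policy ends up containing. The provider ID, namespace, and service account name below are made up — substitute your cluster's real values:

```shell
# Hypothetical values - substitute your cluster's real OIDC provider ID.
OIDC_PROVIDER="oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
NAMESPACE="default"
SERVICE_ACCOUNT="go-app"

# These must match the pod's service account exactly, or STS rejects the token.
echo "${OIDC_PROVIDER}:sub = system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT}"
echo "${OIDC_PROVIDER}:aud = sts.amazonaws.com"
```

If the provider ID in `sub` comes from a different cluster (say, one you copied the role from), the token exchange fails with an `InvalidIdentityToken` or `AccessDenied`-style error even though everything else looks correct.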
```hcl
# modules/irsa/main.tf
resource "aws_iam_role" "this" {
  name = var.role_name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = var.oidc_provider_arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "${var.oidc_provider}:sub" = "system:serviceaccount:${var.namespace}:${var.service_account}"
          "${var.oidc_provider}:aud" = "sts.amazonaws.com"
        }
      }
    }]
  })
}
```
And the Kubernetes service account annotation:

```yaml
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/go-app-irsa-primary
```
Error: AccessDenied — elasticloadbalancing:DescribeListenerAttributes
The ALB wasn't getting provisioned, and the AWS Load Balancer Controller (LBC) logs showed an AccessDenied.
The IAM policy we had for the controller was simply outdated — a newer LBC release added a requirement
for elasticloadbalancing:DescribeListenerAttributes that wasn't in our policy file.
Added it and reapplied — ALB came up after that.
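For reference, a minimal sketch of the kind of statement that covers the missing action (the `Resource` scope here is an assumption — the upstream LBC policy file is the source of truth for the full policy):

```json
{
  "Effect": "Allow",
  "Action": [
    "elasticloadbalancing:DescribeListenerAttributes"
  ],
  "Resource": "*"
}
```

In general, when upgrading the LBC it's worth re-downloading the matching version of its IAM policy rather than patching actions in one at a time.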
Error: ResourceInUseException: Table already exists
I had initially created the DynamoDB table through the UI, then later decided
to do a DR exercise which required replicating the table to us-west-2. I made those changes in AWS
but forgot to update the Terraform locally. So the next time I ran terraform plan,
it tried to create a table that already existed and failed.
The fix was to import the existing table into Terraform state rather than delete and recreate it:
```shell
terraform import module.dynamodb.aws_dynamodb_table.this app_events
```
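To keep the us-west-2 replica from drifting again, it can be declared in Terraform too. A sketch assuming the table resource lives in `module.dynamodb` — the `replica` and stream attributes are real `aws_dynamodb_table` syntax, but the key schema and billing mode below are assumptions:

```hcl
resource "aws_dynamodb_table" "this" {
  name         = "app_events"
  billing_mode = "PAY_PER_REQUEST" # assumption
  hash_key     = "id"              # assumption

  attribute {
    name = "id"
    type = "S"
  }

  # Replicas (global tables) require streams with NEW_AND_OLD_IMAGES.
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  # Global-table replica in the DR region.
  replica {
    region_name = "us-west-2"
  }
}
```

After the import, a `terraform plan` should come back clean; any remaining diff means the HCL still doesn't match what's in AWS.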
This was probably the most frustrating part of the whole project. A few things caught me out. I wrote a full step-by-step tutorial for this — read it here.
At one point I had created a record for jspoth.com in Route 53 and
it showed up in the console, but dig was returning no answer. Deleting and recreating the record fixed it —
it just hadn't been saved properly the first time.
Another time, dig returned the right answer but
curl was still failing. I had to flush the local DNS cache on macOS:

```shell
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
```