Deploying Containers on EKS Fargate in Private Subnets Behind an ALB
This note describes how to run containers on EKS Fargate in private subnets and expose them securely behind an Application Load Balancer (ALB).

Setting Up VPC
Creating VPC
Create a dedicated VPC with the following commands:
```sh
aws ec2 create-vpc \
  --cidr-block 192.168.0.0/16 \
  --tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=eks-fargate-vpc}]"
```
```sh
aws ec2 modify-vpc-attribute \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --enable-dns-hostnames
```

Make sure DNS hostnames are enabled; they are required for the VPC endpoints created later. For more details, refer to the official documentation.
> If you use custom DNS domain names defined in a private hosted zone in Amazon Route 53, or use private DNS with interface VPC endpoints (AWS PrivateLink), you must set both the enableDnsHostnames and enableDnsSupport attributes to true.
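The enableDnsSupport attribute is set to true by default on newly created VPCs, so it usually only needs attention if it was previously disabled. A minimal sketch, re-enabling it with the same placeholder VPC ID:

```sh
# Only needed if DNS resolution was previously disabled for the VPC.
aws ec2 modify-vpc-attribute \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --enable-dns-support
```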
Adding Subnets
Create private subnets for Fargate pods and a public subnet for the bastion EC2 instance.
```sh
aws ec2 create-subnet \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --availability-zone ap-northeast-1a \
  --cidr-block 192.168.0.0/20 \
  --tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1a}]"
```
```sh
aws ec2 create-subnet \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --availability-zone ap-northeast-1c \
  --cidr-block 192.168.16.0/20 \
  --tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-private-subnet-1c}]"
```
```sh
aws ec2 create-subnet \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --availability-zone ap-northeast-1a \
  --cidr-block 192.168.32.0/20 \
  --tag-specifications "ResourceType=subnet,Tags=[{Key=Name,Value=eks-fargate-public-subnet-1a}]"
```

Adding Internet Gateway
To enable internet access for resources in the public subnet, create an Internet Gateway and attach it to your VPC:
```sh
aws ec2 create-internet-gateway \
  --tag-specifications "ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-eks-fargate}]"
```
```sh
aws ec2 attach-internet-gateway \
  --internet-gateway-id igw-xxxxxxxxxxxxxxxxx \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx
```

Next, create a route table, add a default route to the Internet Gateway, and associate the table with the public subnet:
```sh
aws ec2 create-route-table \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --tag-specifications "ResourceType=route-table,Tags=[{Key=Name,Value=rtb-eks-fargate-public}]"
```
```sh
aws ec2 create-route \
  --route-table-id rtb-xxxxxxxx \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-xxxxxxxxxxxxxxxxx
```
```sh
aws ec2 associate-route-table \
  --route-table-id rtb-xxxxxxxx \
  --subnet-id subnet-xxxxxxxxxxxxxxxxx
```

Adding VPC Endpoints
To enable secure communication for an EKS private cluster, create the necessary VPC endpoints. Refer to the official documentation for detailed information.
| Type | Endpoint |
|---|---|
| Interface | com.amazonaws.region-code.ecr.api |
| Interface | com.amazonaws.region-code.ecr.dkr |
| Interface | com.amazonaws.region-code.ec2 |
| Interface | com.amazonaws.region-code.elasticloadbalancing |
| Interface | com.amazonaws.region-code.sts |
| Gateway | com.amazonaws.region-code.s3 |
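If you want to double-check the exact service names available in your region before creating the endpoints, one option is to list them; the filter below is only an illustration:

```sh
# List the ECR-related interface endpoint service names offered in the current region.
aws ec2 describe-vpc-endpoint-services \
  --query "ServiceNames[?contains(@, 'ecr')]"
```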
Create a security group for the VPC endpoints:
```sh
aws ec2 create-security-group \
  --description "VPC endpoints" \
  --group-name eks-fargate-vpc-endpoints-sg \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=eks-fargate-vpc-endpoints-sg}]"
```
```sh
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxxxxxxxxxx \
  --protocol tcp \
  --port 443 \
  --cidr 192.168.0.0/16
```

Create the Interface VPC endpoints:
```sh
for name in \
  com.amazonaws.<REGION>.ecr.api \
  com.amazonaws.<REGION>.ecr.dkr \
  com.amazonaws.<REGION>.ec2 \
  com.amazonaws.<REGION>.elasticloadbalancing \
  com.amazonaws.<REGION>.sts; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-xxxxxxxxxxxxxxxxx \
    --vpc-endpoint-type Interface \
    --service-name $name \
    --security-group-ids sg-xxxxxxxxxxxxxxxxx \
    --subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx
done
```

Create the Gateway VPC endpoint for S3:
```sh
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxxxxxxxxxxx \
  --service-name com.amazonaws.<REGION>.s3 \
  --route-table-ids rtb-xxxxxxxxxxxxxxxxx
```

By adding these endpoints, your private cluster can securely access AWS services such as ECR, S3, and Elastic Load Balancing.
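To confirm that the endpoints were created, you can describe them for the VPC; a quick check using the same placeholder IDs:

```sh
aws ec2 describe-vpc-endpoints \
  --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
  --query "VpcEndpoints[].{Service:ServiceName,Type:VpcEndpointType,State:State}" \
  --output table
```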
Bastion EC2
To access a private EKS cluster, use a bastion EC2 instance. The bastion host provides secure access to your Kubernetes API server endpoint when public access is disabled.
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access
> If you have disabled public access for your cluster’s Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network.
Creating an Instance IAM Role
To enable the bastion instance to operate securely, create an IAM role and attach the AmazonSSMManagedInstanceCore managed policy for Session Manager access.
Create an IAM role:
```sh
echo '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}' > policy.json
```
```sh
aws iam create-role \
  --role-name eks-fargate-bastion-ec2-role \
  --assume-role-policy-document file://./policy.json
```

Create an instance profile:
```sh
aws iam create-instance-profile \
  --instance-profile-name eks-fargate-bastion-ec2-instance-profile
```
```sh
aws iam add-role-to-instance-profile \
  --instance-profile-name eks-fargate-bastion-ec2-instance-profile \
  --role-name eks-fargate-bastion-ec2-role
```

Attach the AmazonSSMManagedInstanceCore policy to allow Session Manager access:
```sh
aws iam attach-role-policy \
  --role-name eks-fargate-bastion-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```

For broader permissions to set up and manage EKS, EC2, and VPC services, attach an additional policy. Refer to the official documentation for best practices on least-privilege permissions.
```sh
echo '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:DeleteStack",
        "cloudformation:DescribeStacks",
        "cloudformation:DescribeStackEvents",
        "cloudformation:ListStacks",
        "ec2:*",
        "eks:*",
        "iam:AttachRolePolicy",
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateRole",
        "iam:DetachRolePolicy",
        "iam:DeleteOpenIDConnectProvider",
        "iam:GetOpenIDConnectProvider",
        "iam:GetRole",
        "iam:ListPolicies",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:TagOpenIDConnectProvider"
      ],
      "Resource": "*"
    }
  ]
}' > policy.json
```
```sh
aws iam put-role-policy \
  --role-name eks-fargate-bastion-ec2-role \
  --policy-name eks-cluster \
  --policy-document file://./policy.json
```

Starting the Bastion EC2 Instance
Once the IAM role is configured, start the EC2 instance. Ensure that you use a valid AMI ID. Refer to the official documentation for the latest AMI details.
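If you need to look up a current Amazon Linux 2 AMI ID, one option is the public SSM parameter shown below; verify the parameter path and architecture against the documentation for your region:

```sh
# Resolve a current Amazon Linux 2 AMI ID via the public SSM parameter.
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --query 'Parameters[0].Value' \
  --output text
```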
```sh
instanceProfileRole=$(aws iam list-instance-profiles-for-role \
  --role-name eks-fargate-bastion-ec2-role \
  | jq -r '.InstanceProfiles[0].Arn')
```
```sh
aws ec2 run-instances \
  --image-id ami-0bba69335379e17f8 \
  --instance-type t2.micro \
  --iam-instance-profile "Arn=$instanceProfileRole" \
  --subnet-id subnet-xxxxxxxxxxxxxxxxx \
  --associate-public-ip-address \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=eks-fargate-bastion-ec2}]"
```

Connecting to the Instance with Session Manager
To securely access the bastion EC2 instance, use AWS Session Manager. This eliminates the need for SSH key pairs and ensures secure, auditable access.
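You can start a session from the EC2 console, or from the AWS CLI if the Session Manager plugin is installed; the instance ID below is a placeholder:

```sh
# Open an interactive shell on the bastion instance via Session Manager.
aws ssm start-session --target i-xxxxxxxxxxxxxxxxx
```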


After connecting, switch to the ec2-user account using the following command:
```
sh-4.2$ sudo su - ec2-user
```

Ensure that the instance IAM role has the AmazonSSMManagedInstanceCore policy attached for Session Manager connectivity.
Updating AWS CLI to the Latest Version
To ensure compatibility with the latest AWS services, update the AWS CLI to its latest version:
```sh
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
```

Verify the installation:
```sh
aws --version
```

Installing kubectl
To manage your EKS cluster, install kubectl on the bastion instance.
Download the kubectl binary for your EKS cluster version:
```sh
curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl
```

Make the binary executable:
```sh
chmod +x ./kubectl
```

Add kubectl to your PATH:
```sh
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
```

Verify the installation:
```sh
kubectl version --short --client
```

Installing eksctl
Install eksctl to simplify the management of your EKS clusters.
Download and extract eksctl:
```sh
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
```

Move the binary to a location in your PATH:
```sh
sudo mv /tmp/eksctl /usr/local/bin
```

Verify the installation:
```sh
eksctl version
```

Your bastion EC2 instance is now ready to manage and operate your EKS cluster with kubectl and eksctl installed.
EKS
Creating EKS Cluster
Create an EKS cluster using eksctl with the --fargate option specified. This cluster will use Fargate to manage pods without requiring worker nodes.
Refer to the official documentation for detailed instructions.
Creating the cluster may take approximately 20 minutes or more.
```sh
eksctl create cluster \
  --name eks-fargate-cluster \
  --region ap-northeast-1 \
  --version 1.24 \
  --vpc-private-subnets subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx \
  --without-nodegroup \
  --fargate
```

After creation, verify the cluster with the following command:
```
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
```

Appendix: Troubleshooting Cluster Access
Issue 1: Credential Error
If you encounter the error below when running kubectl get svc:
```
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
```

Update the AWS CLI to the latest version:
```sh
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update
```

Retry the command:
```
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
```

Issue 2: Connection Refused
If you see the error below:
```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

Update your Kubernetes configuration file (~/.kube/config) using the following command:
```sh
aws eks update-kubeconfig \
  --region ap-northeast-1 \
  --name eks-fargate-cluster
```

Retry the command:
```
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   20m
```

Adding IAM Users and Roles
To avoid losing access to the cluster, grant access to additional IAM users or roles. By default, only the IAM entity that created the cluster has administrative access.
Refer to the official documentation for best practices.
> The IAM user or role that created the cluster is the only IAM entity that has access to the cluster. Grant permissions to other IAM users or roles so they can access your cluster.
To add an IAM user to the system:masters group, use the following command:
```sh
eksctl create iamidentitymapping \
  --cluster eks-fargate-cluster \
  --region ap-northeast-1 \
  --arn arn:aws:iam::000000000000:user/xxxxxx \
  --group system:masters \
  --no-duplicate-arns
```

This ensures that additional users or roles have administrative access to your EKS cluster.
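To confirm the mapping was added, you can list the identity mappings for the cluster:

```sh
eksctl get iamidentitymapping \
  --region ap-northeast-1 \
  --cluster eks-fargate-cluster
```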
Enabling Private Cluster Endpoint
Enable the private cluster endpoint to restrict Kubernetes API access to within the VPC.
Enabling the private cluster endpoint may take about 10 minutes.
```sh
aws eks update-cluster-config \
  --region ap-northeast-1 \
  --name eks-fargate-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```

Ensure that your EKS control plane security group allows ingress traffic on port 443 from your bastion EC2 instance.
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access
> You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your bastion host.
```sh
sgId=$(aws eks describe-cluster --name eks-fargate-cluster | jq -r .cluster.resourcesVpcConfig.clusterSecurityGroupId)
aws ec2 authorize-security-group-ingress \
  --group-id $sgId \
  --protocol tcp \
  --port 443 \
  --cidr 192.168.0.0/16
```

Test the connectivity between the bastion EC2 instance and the EKS cluster:
```
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   153m
```

Fargate Profile
Create a Fargate profile for your application namespace:
```sh
eksctl create fargateprofile \
  --region ap-northeast-1 \
  --cluster eks-fargate-cluster \
  --name fargate-app-profile \
  --namespace fargate-app
```

Installing AWS Load Balancer Controller
Install the AWS Load Balancer Controller to run application containers behind an Application Load Balancer (ALB).
Create an IAM OIDC provider for the cluster if it does not already exist:
```sh
oidc_id=$(aws eks describe-cluster --name eks-fargate-cluster \
  --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id
```
```sh
# If no response is returned, run the following:
eksctl utils associate-iam-oidc-provider \
  --region ap-northeast-1 \
  --cluster eks-fargate-cluster \
  --approve
```

Download the policy file for the AWS Load Balancer Controller:
```sh
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
```

Create the IAM policy:
```sh
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
```

Create the IAM service account:
```sh
eksctl create iamserviceaccount \
  --region ap-northeast-1 \
  --cluster=eks-fargate-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name "AmazonEKSLoadBalancerControllerRole" \
  --attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
```

Replace 111122223333 with your AWS account ID.

Installing Helm and Load Balancer Controller Add-on
Install Helm v3:
```
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm version --short | cut -d + -f 1
v3.10.3
```

Install the Load Balancer Controller add-on:
```sh
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set region=ap-northeast-1 \
  --set vpcId=vpc-xxxxxxxxxxxxxxxxx \
  --set image.repository=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/amazon/aws-load-balancer-controller \
  --set clusterName=eks-fargate-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set enableShield=false \
  --set enableWaf=false \
  --set enableWafv2=false
```
You need to set enableShield=false, enableWaf=false, and enableWafv2=false because AWS Shield and AWS WAF do not provide VPC interface endpoints, so the controller cannot reach those services from a private cluster. For more information, refer to the official documentation.
> When deploying it, you should use command line flags to set enable-shield, enable-waf, and enable-wafv2 to false. Certificate discovery with hostnames from Ingress objects isn’t supported. This is because the controller needs to reach AWS Certificate Manager, which doesn’t have a VPC interface endpoint.
Verify the deployment:
```
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           105s
```

With the AWS Load Balancer Controller installed, your application containers are ready to run securely behind an Application Load Balancer.
Tagging Subnets
Tag the private subnets to indicate their use for internal load balancers. This is required for Kubernetes and the AWS Load Balancer Controller to identify the subnets correctly.
```sh
aws ec2 create-tags \
  --resources subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
```

Refer to the official documentation for additional details.
> Must be tagged in the following format. This is so that Kubernetes and the AWS load balancer controller know that the subnets can be used for internal load balancers.
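To verify that the tags were applied, you can describe the subnets (same placeholder IDs as above):

```sh
aws ec2 describe-subnets \
  --subnet-ids subnet-xxxxxxxxxxxxxxxxx subnet-xxxxxxxxxxxxxxxxx \
  --query "Subnets[].Tags" \
  --output table
```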
Deploying Application
Building Application
This example uses FastAPI to create a simple API for demonstration purposes.
Define the necessary dependencies for the application:
```
anyio==3.6.2
click==8.1.3
fastapi==0.88.0
h11==0.14.0
httptools==0.5.0
idna==3.4
pydantic==1.10.2
python-dotenv==0.21.0
PyYAML==6.0
sniffio==1.3.0
starlette==0.22.0
typing_extensions==4.4.0
uvicorn==0.20.0
uvloop==0.17.0
watchfiles==0.18.1
websockets==10.4
```

Create a basic API endpoint:
```python
from fastapi import FastAPI

app = FastAPI()


@app.get('/')
def read_root():
    return {'message': 'Hello world!'}
```

Create a Dockerfile to build the application container:
```dockerfile
FROM python:3.10-alpine@sha256:d8a484baabf7d2337d34cdef6730413ea1feef4ba251784f9b7a8d7b642041b3
COPY ./src ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
```

Pushing the Image to ECR
Build and push the application image to ECR:
Create an ECR repository:
```sh
aws ecr create-repository --repository-name api
```

Retrieve the repository URI:
```sh
uri=$(aws ecr describe-repositories | jq -r '.repositories[] | select(.repositoryName == "api") | .repositoryUri')
```

Authenticate Docker to ECR:
```sh
aws ecr get-login-password --region ap-northeast-1 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com
```

Build, tag, and push the image:
```sh
docker build .
docker tag xxxxxxxxxxxx $uri:latest
docker push $uri:latest
```

Replace xxxxxxxxxxxx with the image ID reported by docker build.

Deploying to Fargate
Create a Kubernetes manifest file fargate-app.yaml.
Replace 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest with the actual image URI.
For more information about the AWS Load Balancer Controller v2.4 specification, refer to the official documentation.
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: fargate-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fargate-app-deployment
  namespace: fargate-app
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      containers:
        - name: api
          image: 000000000000.dkr.ecr.ap-northeast-1.amazonaws.com/api:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
      nodeSelector:
        kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: fargate-app-service
  namespace: fargate-app
  labels:
    app: api
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fargate-app-ingress
  namespace: fargate-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fargate-app-service
                port:
                  number: 80
```

Apply the manifest file:
```sh
kubectl apply -f fargate-app.yaml
```

Verify the deployed resources:
```
$ kubectl get all -n fargate-app
NAME                                          READY   STATUS    RESTARTS   AGE
pod/fargate-app-deployment-6db55f9b7b-4hp8z   1/1     Running   0          55s

NAME                          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/fargate-app-service   NodePort   10.100.190.97   <none>        80:31985/TCP   6m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/fargate-app-deployment   1/1     1            1           6m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/fargate-app-deployment-6db55f9b7b   1         1         1       6m
```

Provisioning the ALB may take about ten minutes or longer.
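One simple way to follow the provisioning progress is to watch the Ingress until the ALB address is populated:

```sh
# Watch the Ingress; the ADDRESS column fills in once the ALB is ready.
kubectl get ingress -n fargate-app -w
```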
Testing the API
Retrieve the DNS name of the ALB:
```sh
kubectl describe ingress -n fargate-app fargate-app-ingress
```

Example output:
```
Name:             fargate-app-ingress
Labels:           <none>
Namespace:        fargate-app
Address:          internal-k8s-fargatea-fargatea-0579eb4ce2-1731550123.ap-northeast-1.elb.amazonaws.com
Ingress Class:    alb
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           /     fargate-app-service:80 (192.168.4.97:80)
Annotations:  alb.ingress.kubernetes.io/scheme: internal
              alb.ingress.kubernetes.io/target-type: ip
Events:
  Type    Reason                  Age    From     Message
  ----    ------                  ----   ----     -------
  Normal  SuccessfullyReconciled  4m17s  ingress  Successfully reconciled
```

Test the API endpoint:
```sh
curl internal-k8s-fargatea-fargatea-xxxxxxxxxx-xxxxxxxxxx.ap-northeast-1.elb.amazonaws.com
```

Expected output:
```
{"message":"Hello world!"}
```

Deleting EKS Cluster
If you no longer require the EKS cluster or its associated resources, you can delete them using the steps outlined below.
Remove the deployed application and uninstall the AWS Load Balancer Controller:
```sh
kubectl delete -f fargate-app.yaml
helm uninstall aws-load-balancer-controller -n kube-system
```

Retrieve the ARN of the AWSLoadBalancerControllerIAMPolicy and detach it:
```sh
arn=$(aws iam list-policies --scope Local \
  | jq -r '.Policies[] | select(.PolicyName == "AWSLoadBalancerControllerIAMPolicy").Arn')
aws iam detach-role-policy \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --policy-arn $arn
```

Delete the service account associated with the AWS Load Balancer Controller:
```sh
eksctl delete iamserviceaccount \
  --region ap-northeast-1 \
  --cluster eks-fargate-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller
```

Remove the Fargate profiles created during the setup:
```sh
aws eks delete-fargate-profile \
  --cluster-name eks-fargate-cluster \
  --fargate-profile-name fargate-app-profile

aws eks delete-fargate-profile \
  --cluster-name eks-fargate-cluster \
  --fargate-profile-name fp-default
```

Retrieve and detach the AmazonEKSFargatePodExecutionRolePolicy:
```sh
arn=$(aws iam list-policies --scope AWS \
  | jq -r '.Policies[] | select(.PolicyName == "AmazonEKSFargatePodExecutionRolePolicy").Arn')
aws iam detach-role-policy \
  --role-name eksctl-eks-fargate-cluster-FargatePodExecutionRole-xxxxxxxxxxxxx \
  --policy-arn $arn
```

Use eksctl to delete the cluster:
```sh
eksctl delete cluster \
  --region ap-northeast-1 \
  --name eks-fargate-cluster
```

Appendix: Troubleshooting Deletion Issues
If the Ingress managed by the AWS Load Balancer Controller fails to delete, you may need to remove its finalizers manually:
```sh
kubectl patch ingress fargate-app-ingress -n fargate-app -p '{"metadata":{"finalizers":[]}}' --type=merge
```

This ensures that Kubernetes can finalize the Ingress resource for deletion.