Configuring EKS Managed Node Groups to Use a Proxy with Terraform

In enterprise environments, security and network policies are paramount. It’s common for Amazon EKS worker nodes to run in private subnets without direct outbound internet access. Instead, all egress traffic is routed through a centrally managed and monitored HTTP/HTTPS proxy.
While this enhances security, it introduces a challenge: EKS worker nodes still need to pull container images, communicate with AWS APIs, and run bootstrap tasks. Without proper proxy configuration, these nodes cannot function correctly.
This article shows how to configure EKS managed node groups with Terraform to be fully proxy-aware using the `cloudinit_pre_nodeadm` hook from the popular terraform-aws-modules/eks/aws module.
Why EKS Nodes Need Proxy Configuration
Simply setting `HTTP_PROXY` and `HTTPS_PROXY` environment variables is not enough. A modern EKS node (Amazon Linux 2 or Amazon Linux 2023) has multiple components that must each be aware of the proxy:
- Base Operating System - shell sessions and system utilities.
- Container Runtime (`containerd`) - critical for pulling container images from Amazon ECR, Docker Hub, or other registries.
- EKS Bootstrap Process (`nodeadm`) - communicates with the EKS control plane and must bypass the proxy for cluster join operations.
- Package Managers (`yum`) - required if installing additional software during bootstrapping.
If any of these are misconfigured, your nodes may fail to join the cluster, pull images, or update packages.
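The core problem can be demonstrated locally: exporting proxy variables in a shell only affects that shell and its child processes, which is why each systemd-managed component needs its own configuration. A minimal sketch (the proxy URL is an illustrative placeholder):

```shell
#!/bin/sh
# Exported variables reach child processes of this shell...
export HTTP_PROXY="http://your-proxy-url:3128"   # illustrative endpoint
sh -c 'echo "child shell sees: $HTTP_PROXY"'

# ...but a daemon such as containerd is started by systemd, not by this
# shell, so it never inherits these variables. That is why each component
# (systemd units, yum) must be configured explicitly.
```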
Understanding the Bootstrap Hook: cloudinit_pre_nodeadm
The key to solving this problem is the `cloudinit_pre_nodeadm` parameter available in the Terraform AWS EKS module.
- `cloud-init` - the standard EC2 initialization system that runs on first boot, configuring OS-level settings. When a new EC2 instance boots for the first time, it runs `cloud-init`, which executes a series of user-provided instructions (user data) to configure the operating system, install packages, create files, and set up users before the machine is considered fully "ready".
- `nodeadm` - the official bootstrap agent for EKS AMIs (replacing the older `bootstrap.sh`). It retrieves cluster certificates, configures the kubelet, and joins the node to the cluster.
The `cloudinit_pre_nodeadm` parameter, provided by the `terraform-aws-modules/eks/aws` module, is a powerful hook that lets you run a custom `cloud-init` script at a precise moment: after basic OS initialization but before the `nodeadm` service is started.
This timing is critical. By executing our script before `nodeadm`, we ensure that the foundational network environment, including our proxy settings, is already in place. When `nodeadm`, `containerd`, and the `kubelet` eventually start, they inherit the correct configuration from the environment, allowing them to function properly within the restricted network.
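This inheritance can be sketched with a small local experiment: a file in the `/etc/environment` key=value format, loaded before a process starts, makes the proxy variables visible to that process, which is what the `EnvironmentFile=` directive achieves for `containerd` and `nodeadm`. Paths and values here are illustrative stand-ins:

```shell
#!/bin/sh
set -e
# Write a stand-in for /etc/environment to a temp file (illustrative values)
ENV_FILE="$(mktemp)"
cat > "$ENV_FILE" << 'EOF'
HTTP_PROXY=http://your-proxy-url:3128
NO_PROXY=10.0.0.0/16,localhost,169.254.169.254
EOF

# Load the file before starting a "service" process; set -a exports every
# variable read by the following source command
set -a
. "$ENV_FILE"
set +a

# The child process sees the proxy settings, just as a systemd unit with
# EnvironmentFile=/etc/environment would
sh -c 'echo "service sees NO_PROXY=$NO_PROXY"'
rm -f "$ENV_FILE"
```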
Terraform Solution: Configuring Proxy for EKS Managed Node Groups
Our solution will pass the Kubernetes service CIDR from our Terraform configuration directly into the user data script. This removes hardcoded values and makes the solution reusable across different clusters.
Here is the Terraform code block that implements the complete proxy configuration.
```hcl
locals {
  # Service CIDR defined once so both the cluster and the user data script use it
  cluster_service_cidr = "10.100.0.0/16"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  # Define the service CIDR for the cluster
  cluster_service_ipv4_cidr = local.cluster_service_cidr

  # ... other EKS cluster configuration ...

  eks_managed_node_groups = {
    main_nodes = {
      # ... other node group configuration like instance_types, min_size, etc. ...

      cloudinit_pre_nodeadm = [
        {
          content_type = "text/x-shellscript"
          content      = <<-EOT
            #!/bin/bash
            set -ex

            # Define your proxy endpoint
            PROXY="http://your-proxy-url:3128" # standard proxy port

            # Use IMDSv2 to securely fetch instance metadata
            TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
            MAC=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/mac)
            VPC_CIDR=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks" | xargs | tr ' ' ',')
            K8S_SERVICE_CIDR="${local.cluster_service_cidr}"

            # Define your NO_PROXY list. This is critical for internal and AWS service communication.
            # You might need to add your Pod CIDR as well.
            NO_PROXY="$VPC_CIDR,$K8S_SERVICE_CIDR,localhost,127.0.0.1,169.254.169.254,.amazonaws.com,.svc.cluster.local,.svc,.cluster.local"

            # 1. Configure system-wide proxy settings
            echo "Setting up /etc/environment"
            cat << EOF > /etc/environment
            HTTP_PROXY=$PROXY
            HTTPS_PROXY=$PROXY
            NO_PROXY=$NO_PROXY
            http_proxy=$PROXY
            https_proxy=$PROXY
            no_proxy=$NO_PROXY
            EOF

            # 2. Configure containerd proxy via systemd override
            echo "Configuring containerd service"
            mkdir -p /etc/systemd/system/containerd.service.d
            cat << EOF > /etc/systemd/system/containerd.service.d/http-proxy.conf
            [Service]
            EnvironmentFile=/etc/environment
            EOF

            # 3. Configure nodeadm proxy via systemd override
            echo "Configuring nodeadm service"
            mkdir -p /etc/systemd/system/nodeadm.service.d
            cat << EOF > /etc/systemd/system/nodeadm.service.d/http-proxy.conf
            [Service]
            EnvironmentFile=/etc/environment
            EOF

            # 4. Configure yum proxy
            echo "Configuring yum"
            echo "proxy=$PROXY" >> /etc/yum.conf

            # 5. Reload systemd daemon and restart containerd to apply changes
            echo "Reloading systemd and restarting containerd"
            systemctl daemon-reload
            systemctl restart containerd
          EOT
        }
      ]
    }
  }

  # ... other module configuration ...
}
```
Breaking Down the Script
Let's break down the `content` of the `cloud-init` script to understand the important pieces:
- `set -ex` - this is a standard best practice for shell scripting. `set -e` ensures the script will exit immediately if a command fails, and `set -x` prints each command before it is executed, providing clear debug output in the system logs.
- IMDSv2 Metadata Queries - the script securely and dynamically fetches the VPC CIDR block directly from the EC2 metadata service. This is crucial for the `NO_PROXY` variable, ensuring that any traffic destined for other resources within the VPC bypasses the proxy. It uses the more secure IMDSv2 method, which requires a session token.
- Defining `NO_PROXY` - the `NO_PROXY` variable is just as important as `HTTP_PROXY`. It specifies a comma-separated list of domains and IP ranges that should not use the proxy. Our list includes:
  - The dynamically fetched `$VPC_CIDR` for all internal VPC traffic.
  - The Kubernetes service CIDR passed in from the Terraform configuration via `$K8S_SERVICE_CIDR` (e.g., `10.100.0.0/16`).
  - `localhost` and the metadata service address (`169.254.169.254`).
  - Key AWS and Kubernetes service endpoints to ensure direct communication with the control plane and other AWS APIs. Note the addition of `.amazonaws.com` to ensure services like ECR, S3, and EC2 are accessed directly via VPC Endpoints if they exist.
- `/etc/environment` - this file provides the system-wide environment variables for all users and processes. It's the foundation of our configuration.
- Systemd overrides - the modern, correct way to modify a `systemd` service like `containerd` or `nodeadm` is to create an override file. We create `http-proxy.conf` for both services. The `EnvironmentFile=/etc/environment` directive instructs `systemd` to load all variables from our global configuration file before starting the service. This is cleaner and more maintainable than modifying the main service files directly.
- `yum.conf` - a simple line added to `/etc/yum.conf` ensures package installation works through the proxy.
- Reload and restart - finally, `systemctl daemon-reload` forces `systemd` to re-read its configuration files, and `systemctl restart containerd` applies the new environment variables to the container runtime immediately. `nodeadm` will pick up its configuration when it runs shortly after this script completes.
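The `xargs | tr` pipeline in the metadata query deserves a note: a VPC can have several CIDR blocks, and IMDS returns them one per line, so the script collapses them into the comma-separated form `NO_PROXY` expects. A runnable sketch with sample values (real values come from IMDSv2 and Terraform on the node):

```shell
#!/bin/sh
set -e
# IMDS returns VPC CIDR blocks newline-separated; simulate two blocks here
RAW_CIDRS="10.0.0.0/16
100.64.0.0/16"

# xargs collapses the lines into one space-separated string,
# tr swaps the spaces for the commas NO_PROXY expects
VPC_CIDR=$(echo "$RAW_CIDRS" | xargs | tr ' ' ',')
K8S_SERVICE_CIDR="10.100.0.0/16"   # sample service CIDR

NO_PROXY="$VPC_CIDR,$K8S_SERVICE_CIDR,localhost,127.0.0.1,169.254.169.254,.amazonaws.com"
echo "$NO_PROXY"
```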
Best Practices for Proxy Configuration in EKS
- Always use IMDSv2 instead of IMDSv1 for metadata access (more secure).
- Ensure NO_PROXY includes:
  - VPC CIDR
  - Kubernetes service CIDR
  - `169.254.169.254` (EC2 metadata)
  - `.amazonaws.com` (AWS APIs like ECR, S3, EC2)
- Use systemd overrides instead of editing service files directly (cleaner, upgrade-safe).
- Keep Terraform code parameterized so the solution works across clusters without hardcoding CIDRs.
Final Thoughts
Successfully deploying EKS worker nodes in a private network behind an HTTP proxy is a common enterprise challenge, but it doesn't have to be complicated. By leveraging the `cloudinit_pre_nodeadm` hook within the `terraform-aws-modules/eks/aws` module, you gain precise control over the node's bootstrap sequence.
By injecting a dynamic script that configures the operating system, `containerd`, and `nodeadm` before these critical services start, you ensure nodes join the cluster and pull images reliably. The result is a declarative, repeatable, version-controlled Infrastructure as Code pattern for building secure and compliant Kubernetes platforms on AWS.
Frequently Asked Questions (FAQ)
1. Why do I need a proxy for EKS managed node groups?
In many enterprise environments, worker nodes are deployed in private subnets without direct internet access. A proxy allows nodes to pull container images, access AWS APIs, and install packages while keeping egress traffic monitored and secure.
2. What is the difference between `bootstrap.sh` and `nodeadm` in EKS?
Older EKS AMIs used `bootstrap.sh` to configure and join nodes to a cluster.
Modern EKS AMIs (Amazon Linux 2023 and newer) use `nodeadm`, a dedicated agent that manages certificates, kubelet configuration, and node registration. It provides better reliability and integration with EKS.
3. How do I configure `containerd` to use a proxy in EKS?
The recommended method is to create a systemd override file that points to `/etc/environment`, where the proxy variables are stored. This ensures that `containerd` loads the correct proxy configuration every time it starts.
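An alternative to `EnvironmentFile=` is to set the variables explicitly in the drop-in. The sketch below writes such a file to a temp directory for illustration; on a real node it belongs in `/etc/systemd/system/containerd.service.d/`, and all values shown are placeholders:

```shell
#!/bin/sh
set -e
# Temp dir stands in for /etc/systemd/system/containerd.service.d
DROPIN_DIR="$(mktemp -d)"
cat > "$DROPIN_DIR/http-proxy.conf" << 'EOF'
[Service]
Environment="HTTP_PROXY=http://your-proxy-url:3128"
Environment="HTTPS_PROXY=http://your-proxy-url:3128"
Environment="NO_PROXY=10.0.0.0/16,localhost,127.0.0.1,169.254.169.254,.amazonaws.com"
EOF
cat "$DROPIN_DIR/http-proxy.conf"
```

After placing the real file, `systemctl daemon-reload` followed by `systemctl restart containerd` applies it.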
4. Which domains should I add to NO_PROXY for EKS?
At minimum, include:
- Your VPC CIDR block
- Kubernetes service CIDR
- `localhost`, `127.0.0.1`, and `169.254.169.254` (EC2 metadata)
- `.amazonaws.com` (AWS APIs such as ECR, S3, and EC2)
- `.svc.cluster.local` and `.cluster.local` (Kubernetes services)
5. Can I use the same Terraform script across multiple EKS clusters?
Yes. By parameterizing the Kubernetes service CIDR and dynamically fetching the VPC CIDR via IMDSv2, the script becomes reusable across multiple environments without hardcoding network details.
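One way to parameterize the CIDR, sketched as an HCL fragment (the variable name is illustrative):

```hcl
# Declare the service CIDR once and pass it wherever it is needed
variable "cluster_service_cidr" {
  type        = string
  description = "Kubernetes service CIDR for the EKS cluster"
  default     = "10.100.0.0/16"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  cluster_service_ipv4_cidr = var.cluster_service_cidr
  # ... node groups reference the same value in their user data ...
}
```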
6. Does this method work with both Amazon Linux 2 and Amazon Linux 2023 EKS AMIs?
Yes. For Amazon Linux 2, it works with containerd and the legacy bootstrap flow. For Amazon Linux 2023, it works with the new `nodeadm` bootstrap agent. The `cloudinit_pre_nodeadm` hook ensures proxy settings are in place before either process runs.
Pro tip: You can also implement this solution alongside AWS VPC Endpoints for services like ECR and S3 to further reduce the amount of traffic that must traverse the proxy.