Kubernetes logs unavailable behind a proxy: Diagnosing API server communication issues

In modern infrastructure setups, especially on-premises environments, the presence of outbound proxies is increasingly common. These proxies often enforce organizational access policies and provide observability into network traffic. While container orchestration platforms like Kubernetes can generally operate well under such constraints, some subtle and easily overlooked configuration issues can lead to unexpected behavior.
In this article, I’ll walk through a real-world issue we encountered where Kubernetes pod logs were not retrievable via the API, despite the cluster appearing healthy otherwise. We'll also dive deeper into how proxy behavior interacts with Kubernetes components.
The symptom: cluster looks healthy, but `kubectl logs` fails
We were running a Kubernetes cluster on bare metal within a datacenter. The environment was placed behind a corporate HTTP proxy.
On the surface, everything appeared operational:
- `kubectl get pods`, `kubectl get svc`, and other API queries worked fine,
- etcd connectivity was intact,
- Node and pod statuses reported healthy.
However, when attempting to retrieve logs from running pods using `kubectl logs <pod>`, the command failed with a timeout or a generic connection error. This was observed across all pods and namespaces.
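For illustration, the contrast looked roughly like this (pod name and node IP are placeholders, and the exact error text varies by setup):

```bash
# Cluster-level queries through the API server work:
kubectl get pods -A

# Anything that requires the API server to reach the kubelet hangs:
kubectl logs mypod -n default
# Error from server: Get "https://10.0.0.12:10250/containerLogs/...": i/o timeout
```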
Investigating the root cause
Kubernetes retrieves pod logs through the kubelet on the node where the pod is running. The kube-apiserver acts as a reverse proxy for these requests and must establish a direct connection to the kubelet endpoint (typically via HTTPS on port 10250). This is where the problem manifested.
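You can verify this path by hand by issuing the same request the API server would make. This is a sketch assuming a kubeadm-style layout, where the API server's kubelet client certificate lives under /etc/kubernetes/pki; substitute your own node IP, namespace, pod, and container names:

```bash
# The API server fetches logs from the kubelet at:
#   GET https://<node-ip>:10250/containerLogs/<namespace>/<pod>/<container>
curl -sk \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key /etc/kubernetes/pki/apiserver-kubelet-client.key \
  https://10.0.0.12:10250/containerLogs/default/mypod/mycontainer
```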
We first validated that logs were, in fact, being generated. By SSH-ing into the node where the pod was scheduled and using `crictl logs <container_id>`, we could view logs without issue. This pointed to a problem not with the container runtime or logging configuration, but with how the kube-apiserver was trying to reach the kubelet.
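For reference, the on-node check looked like this (container name is illustrative):

```bash
# Directly on the node, bypassing the API server entirely:
crictl ps --name mycontainer    # look up the container ID
crictl logs <container_id>      # logs print fine, so the runtime side is healthy
```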
Further inspection revealed that the kube-apiserver container was not bypassing the proxy for internal node-to-node communication. This wasn't a failure of the proxy settings: the `NO_PROXY` environment variable had simply never been set in the first place. In environments operating behind a proxy, this omission can critically impair the cluster's internal operations. Without an explicit `NO_PROXY` configuration, even local traffic destined for internal services like the kubelet may be incorrectly routed through the proxy, leading to timeouts and unpredictable failures.
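A quick way to confirm this is to dump the environment of the running kube-apiserver process on the control plane node. In our case the output showed `HTTP_PROXY` and `HTTPS_PROXY` but no `NO_PROXY`:

```bash
# Inspect the live process environment (for a static pod, this reflects whatever
# was injected at startup, not what you might expect from the host shell):
cat /proc/$(pgrep -f kube-apiserver | head -n1)/environ | tr '\0' '\n' | grep -i proxy
```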
How HTTP proxies work in Kubernetes environments
To understand the root of the problem, it's important to grasp how HTTP proxies interact with Kubernetes networking.
When you set the environment variables `HTTP_PROXY` or `HTTPS_PROXY`, applications on the host — including kube-apiserver, container runtime clients, and even the kubelet — will route outbound traffic through the proxy server, unless explicitly told not to via `NO_PROXY`.
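The mechanics are easy to demonstrate with curl from any host that has the variables set (addresses below are illustrative):

```bash
export HTTPS_PROXY=http://proxy.example.com:3128

# Without an exemption, curl tunnels via the proxy (CONNECT), which hangs or
# fails if the proxy cannot reach the internal address:
curl -vk https://10.0.0.12:10250/

# With the destination in NO_PROXY, the connection is made directly; even an
# HTTP 401 here is good news, as it proves TCP/TLS connectivity to the kubelet:
NO_PROXY=10.0.0.12 curl -vk https://10.0.0.12:10250/
```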
In a Kubernetes cluster:
- Most control plane components communicate over private IPs or node-local addresses,
- The kube-apiserver connects to the kubelet to fetch logs, execute commands, or forward ports.
If the API server tries to connect to the kubelet via a private IP (e.g., 10.0.0.12) and this IP is not included in the `NO_PROXY` list, the request is sent to the proxy. Since proxies typically can't route to internal IPs or ports like 10250 (kubelet), the request fails silently or times out.
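On the control plane node, the misrouting is directly observable: while `kubectl logs` is running, traffic appears on the proxy port rather than the kubelet port (3128 here, matching our proxy):

```bash
# Watch both ports while reproducing the failure; packets to 3128 instead of
# 10250 confirm that kubelet-bound requests are being sent to the proxy:
tcpdump -i any -nn 'tcp port 3128 or tcp port 10250'
```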
This is further complicated on bare metal because:
- Cloud provider integrations (e.g., in EKS, GKE) often auto-configure these exemptions,
- Static pods don’t inherit host environment variables unless explicitly defined in their manifests.
The fix: update static pod environment variables
Because kube-apiserver was running as a static pod, its environment variables had to be defined explicitly in the corresponding manifest file, typically located at `/etc/kubernetes/manifests/kube-apiserver.yml`.
We modified this file to include proper proxy settings, specifically ensuring the `NO_PROXY` variable covered:
- All node IP addresses
- The cluster CIDR
- The service CIDR
- Loopback addresses
- Hostnames used by the control plane
Here is a simplified version of the environment variable section added to the static pod spec:
```yaml
env:
  - name: HTTP_PROXY
    value: "http://proxy.example.com:3128"
  - name: HTTPS_PROXY
    value: "http://proxy.example.com:3128"
  - name: NO_PROXY
    value: "127.0.0.1,localhost,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,.cluster.local"
```
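One caveat: Kubernetes components rely on Go's proxy handling, which accepts CIDR ranges in `NO_PROXY` as shown above, but not every tool on the host does (curl, for instance, matches only hostnames and literal IPs), so if other programs share these variables it can be safer to list node IPs individually as well.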
After saving the file, the kubelet automatically reloaded the manifest and restarted the kube-apiserver. Once the API server came back online, `kubectl logs` began returning expected output from all pods.
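Verification was straightforward (pod name is illustrative):

```bash
# The kubelet watches the manifests directory and restarts the static pod:
crictl ps --name kube-apiserver   # wait for the new container to come up

# Then re-test the originally failing path:
kubectl logs mypod -n default
```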
Takeaways and best practices
This experience reinforced the importance of configuring `NO_PROXY` correctly when operating in a proxied environment, particularly on bare metal, where defaults that “just work” in cloud environments may not apply.
Key points to remember:
- The API server needs direct access to kubelets to retrieve logs, exec into pods, and perform port-forwarding
- Failing to set `NO_PROXY` for internal traffic can cause silent and hard-to-debug failures
- Static pods require environment variables to be set in the pod manifest, not in the host shell environment
- Use tools like `crictl`, `curl`, or `tcpdump` on the control plane node to confirm where traffic is being routed
- Explicitly list IP ranges and DNS suffixes used within the cluster in `NO_PROXY`