Forwarding Cookies Using CloudFront: A Workaround for AWS Cache Policy Limitations

When building our Terraform module for deploying Medusa on AWS, we ran into an unexpected challenge with Amazon CloudFront. We wanted to use CloudFront as a simple way to provide HTTPS and a public URL without requiring users to bring their own domain or SSL certificate. However, we discovered that CloudFront's managed cache policies don't forward cookies, headers, or query parameters when caching is disabled - exactly the forwarding our backend API relies on.
The Problem: Managed Cache Policies and CachingDisabled
AWS CloudFront offers managed cache policies that handle common caching scenarios. The "CachingDisabled" policy seems perfect for dynamic content that shouldn't be cached. However, this policy doesn't forward cookies, headers, or query parameters to your origin by default.
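For reference, attaching that managed policy in Terraform looks roughly like this (a minimal sketch; the origin ID is illustrative). The behavior only references the policy by ID, and the CachingDisabled policy itself forwards nothing:

```hcl
# Look up the AWS-managed "CachingDisabled" policy by name
data "aws_cloudfront_cache_policy" "caching_disabled" {
  name = "Managed-CachingDisabled"
}

# Inside the aws_cloudfront_distribution resource
default_cache_behavior {
  target_origin_id       = "backend-alb"   # illustrative origin ID
  viewer_protocol_policy = "redirect-to-https"
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  cache_policy_id        = data.aws_cloudfront_cache_policy.caching_disabled.id
}
```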
For e-commerce platforms like Medusa, this is a dealbreaker. The backend needs:
- Cookies for session management and authentication
- Headers for content negotiation and API functionality
- Query parameters for filtering and pagination
We initially tried to create a custom cache policy with MinTTL=0 (no caching) while specifying header and cookie forwarding behaviors. AWS rejected this with an error:
operation error CloudFront: CreateCachePolicy, https response error StatusCode: 400, InvalidArgument: The parameter HeaderBehavior is invalid for policy with caching disabled.
AWS's validation logic treats forwarding settings as incompatible with disabled caching when you use cache policies. The problem is clear: cache policies won't let you forward data without caching it, but dynamic applications need that data forwarded to work properly.
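For context, the rejected attempt looked roughly like this (a minimal sketch; the policy name and header whitelist are illustrative):

```hcl
resource "aws_cloudfront_cache_policy" "no_cache_forward_all" {
  name        = "medusa-backend-no-cache"   # illustrative name
  min_ttl     = 0
  default_ttl = 0
  max_ttl     = 0

  parameters_in_cache_key_and_forwarded_to_origin {
    cookies_config {
      cookie_behavior = "all"
    }
    headers_config {
      header_behavior = "whitelist"
      headers {
        items = ["Authorization", "Content-Type"]   # illustrative list
      }
    }
    query_strings_config {
      query_string_behavior = "all"
    }
  }
}
```

With all three TTLs at zero, CloudFront rejects the non-"none" HeaderBehavior with the InvalidArgument error shown above.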
Why We Use CloudFront
Before diving into the solution, let's clarify why we chose CloudFront in the first place:
- Free HTTPS with Default Certificate - CloudFront provides a free SSL/TLS certificate via cloudfront_default_certificate = true, giving you a URL like https://d123456abcdef.cloudfront.net
- No Domain Required - Users don't need to purchase a domain, manage DNS records, or provision ACM certificates
- VPC Security - Our Application Load Balancer (ALB) stays in private subnets, accessible only through CloudFront's VPC Origin feature
- Simple Setup - One Terraform resource provides HTTPS, DNS, and secure origin access without additional configuration
For a deployment-focused module, this convenience is valuable. Users get a working HTTPS endpoint immediately after terraform apply.
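The HTTPS piece, for instance, boils down to a single block on the distribution (a minimal sketch):

```hcl
# Inside the aws_cloudfront_distribution resource
viewer_certificate {
  # Use the default *.cloudfront.net certificate - no custom domain or ACM certificate required
  cloudfront_default_certificate = true
}
```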
The Solution: Legacy forwarded_values Configuration
The workaround is to use CloudFront's legacy forwarded_values block instead of modern cache policies. While AWS recommends cache policies for new distributions, the forwarded_values configuration still works and allows zero-TTL caching with full data forwarding.
Here's the configuration we use in our backend module:
```hcl
default_cache_behavior {
  target_origin_id       = local.origin_id
  viewer_protocol_policy = "redirect-to-https"

  # Disable caching by setting all TTLs to zero
  min_ttl     = 0
  default_ttl = 0
  max_ttl     = 0

  forwarded_values {
    query_string = true    # Forward all query parameters
    headers      = ["*"]   # Forward all headers to origin

    cookies {
      forward = "all"      # Forward all cookies to origin
    }
  }

  allowed_methods = ["GET", "HEAD", "POST", "PUT", "PATCH", "OPTIONS", "DELETE"]
  cached_methods  = ["GET", "HEAD", "OPTIONS"]
}
```
Key Configuration Elements
The heart of this solution is the TTL configuration. By setting min_ttl, default_ttl, and max_ttl all to 0, we're telling CloudFront "don't cache anything, ever." Every request goes straight through to the origin, which is essential for dynamic content like user sessions and real-time inventory updates.
Inside the forwarded_values block, we're basically saying "pass everything through." Setting query_string = true ensures that API parameters like ?page=2&limit=20 reach your backend. The headers = ["*"] configuration is particularly important: it forwards every header, including Authorization, Content-Type, and any custom headers your application might use. And crucially, forward = "all" in the cookies block ensures that session cookies make the round trip from browser to CloudFront to your backend and back again.
The allowed_methods array supports the full spectrum of HTTP verbs (GET, POST, PUT, PATCH, DELETE) because Medusa's admin API needs them all. This configuration effectively turns CloudFront into a passthrough proxy with HTTPS termination: not a traditional CDN, but a secure front door for your API.
Trade-offs and Considerations
This approach shines when you're working with dynamic applications that maintain session state: authentication systems, shopping carts, or any API where each request is unique. It's particularly valuable in rapid deployment scenarios where getting HTTPS working quickly matters more than squeezing out every bit of performance optimization. We've also found it a good fit for development and staging environments where managing domains and certificates feels like overkill.
That said, this isn't a one-size-fits-all solution. If you're serving static content like CSS, JavaScript bundles, or images, you're missing out on CloudFront's real strength: global edge caching. Similarly, if you're running a high-traffic production service where caching could significantly reduce origin load and costs, the no-cache approach leaves performance on the table. For applications serving a global audience where edge caching could shave hundreds of milliseconds off response times, you'd want to reconsider this pattern.
For our Medusa module specifically, the no-cache approach makes sense because backend APIs are inherently dynamic: every request involves database queries, authentication checks, and business logic that can't be cached safely. Caching would actually break core functionality like session management and real-time inventory updates. The convenience of instant HTTPS deployment is worth the trade-off, and users always have the option to add a proper CDN layer in front for their static storefront assets if needed.
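If you wanted edge caching for static paths without standing up a separate CDN, one option would be an extra cache behavior on the same distribution. Here's a rough sketch, with the path pattern and lookup name as illustrative assumptions:

```hcl
# AWS-managed policy tuned for static assets
data "aws_cloudfront_cache_policy" "caching_optimized" {
  name = "Managed-CachingOptimized"
}

# Higher-priority behavior: cache static paths at the edge while the
# default behavior keeps forwarding everything uncached
ordered_cache_behavior {
  path_pattern           = "/static/*"   # illustrative path
  target_origin_id       = local.origin_id
  viewer_protocol_policy = "redirect-to-https"
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  cache_policy_id        = data.aws_cloudfront_cache_policy.caching_optimized.id
}
```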
Conclusion
AWS CloudFront's managed cache policies work well for typical CDN use cases, but they have limitations when you need no caching with full data forwarding. The legacy forwarded_values configuration provides a reliable workaround that's been working in production for our Medusa deployments.
While AWS's documentation encourages using modern cache policies, the forwarded_values approach remains supported and is sometimes the pragmatic choice for dynamic applications. As always in infrastructure engineering, the "right" solution depends on your specific requirements; in our case, deployment convenience and session state management won the day.
This article is based on our experience building the terraform-aws-medusajs module for deploying Medusa e-commerce backends on AWS.





