How to Securely Connect to Medusa.js Production Database on AWS?

Let’s imagine something for a second.
You're minding your own business, managing AWS infrastructure for a client with a pretty standard e-commerce setup: a Medusa.js backend, a Next.js storefront, and most importantly for this story, a PostgreSQL RDS instance safely stashed away in a private subnet where nothing from the outside world can touch it. Exactly how the AWS gods intended.
Then, one day, your client says:
- "Hey, I need to get access to the production database"
Now, there are plenty of legit reasons to want this kind of access: analytics, dashboards, audits, maybe some light database spelunking. In this case, it’s for Metabase, which, as far as you can tell, magically turns SQL into colourful charts.
So sure, let's help them out. You're a helpful DevOps engineer. You write Terraform. You breathe YAML. You’ve stared into the void of broken networking configs and lived to tell the tale. This? This is doable.
The only question is: how do we do it securely, without slapping a public IP on the database and calling it a day?
That’s exactly what this post is about: how to securely connect to a Medusa.js production database on AWS, without compromising your infrastructure or your sleep.
The Problem
We’ve got:
- A managed RDS PostgreSQL database (though this applies to pretty much any RDS engine)
- A Metabase instance living somewhere outside the VPC
- A need to connect Metabase to the database
Sounds simple enough, but of course it's not just a one-click "make it public" button.
But wait, some of you might be asking: what's actually stopping us from "just" connecting to the database? Well, our database sits in a private subnet. Which means... (flips through AWS docs) ...it doesn’t have a route to an internet gateway. More importantly, it doesn’t have a public IP address. It’s only reachable via its private IP from within the VPC. Oh, and there’s also the security group, which allows access only from the Medusa backend.
Now, in theory, I could completely disregard all the security concerns, toss the database into a public subnet, give it a public IP, and call it a day. But I'd probably also get tossed out of a job, and fairly so.
So, making the database publicly accessible is off the table. That leaves us with one goal: somehow access that private IP from the outside. Luckily (or unluckily), there are a few ways to make that happen.
So, what are our options?
Option 1: Port Forwarding
The first and most common approach is good old port forwarding, which is basically asking a server inside your private network to kindly act as a middleman and pass your packets along to the database, like a helpful bouncer who also does package delivery.
To make this work, we need what's called a jump box (or bastion host, if you're feeling fancy). This is just a plain old EC2 instance living in your VPC with access to the private subnet and your database. (Don’t forget to update your RDS security group to allow traffic from the jump box; there’s a snippet for that below.)
Here’s a minimal Terraform snippet to spin one up:
resource "aws_instance" "jump_box" { ami = "ami-0abcdef1234567890" # Replace with latest Amazon Linux 2 AMI instance_type = "t3.nano" subnet_id = aws_subnet.private_subnet.id vpc_security_group_ids = [aws_security_group.jump_box_sg.id] key_name = "your-ssh-key" tags = { Name = "jump-box" } } resource "aws_security_group" "jump_box_sg" { name = "jump-box-sg" description = "Allow SSH" vpc_id = aws_vpc.main.id ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["your-ip-address/32"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } }
Now that we’ve got our jump box, let’s explore how we can tunnel traffic through it. For that, we can use the classic way or the AWS way.
Option 1.1: The Classic Way - SSH Port Forwarding
This is the method your senior Linux admin probably used in 2009, and honestly, it still works just fine.
You just need to move your EC2 instance to a public subnet, give it an Elastic IP and an SSH key (if you’re using Terraform, check the key_name parameter of aws_instance), and make sure port 22 is open (preferably only to specific IP addresses).
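If you’re doing that part in Terraform too, a minimal sketch might look like this (it assumes you’ve also switched the jump box’s subnet_id to a public subnet):

resource "aws_eip" "jump_box" {
  domain = "vpc"
}

resource "aws_eip_association" "jump_box" {
  instance_id   = aws_instance.jump_box.id
  allocation_id = aws_eip.jump_box.id
}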
Assuming you’ve got your jump box set up and your private key in hand, here’s the command:
ssh -i ~/.ssh/your-key.pem \
  -N -L 5430:your-db.xxxxx.rds.amazonaws.com:5432 \
  ec2-user@your-jumpbox-public-ip
This sets up a tunnel from localhost:5430 → your RDS instance on port 5432. Just point your client at localhost:5430 and you’re good. (You can change the local port if you need to.)
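For example, with psql (the user and database names here are placeholders):

psql -h localhost -p 5430 -U medusa_user -d medusa_db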
Most SQL tools support built-in SSH tunneling if you don’t want to set up the tunnel manually.
Pros:
- It just works
- It’s compatible with most tools that need to connect to a SQL database
Cons:
- EC2 instance needs patching and monitoring
- You’re exposing an SSH port (even if restricted)
- You’ll start getting unsolicited connection attempts the moment you open the port
- You have to manage SSH keys (on AWS it’s not that big of a deal)
This is the second option we tried, and well, it just worked.
Option 1.2: The AWS Way - SSM Port Forwarding
Ah, good old SSM. In theory, this is the cleaner option. No public access, no open ports, just straight-up AWS magic. You enable SSM, then use aws ssm start-session to port-forward to the database.
Prerequisites:
- SSM Agent installed on your EC2 instance (Amazon Linux already has it preinstalled)
- AWS CLI with SSM plugin installed
- AmazonSSMManagedInstanceCore policy attached to your jump box
- ssm:StartSession and ssm:DescribeInstanceInformation permissions for your account
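If you’re managing the jump box with Terraform, the instance-profile side might look roughly like this (a sketch; the role and profile names are made up):

resource "aws_iam_role" "jump_box" {
  name = "jump-box-ssm-role"

  # Let EC2 assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "jump_box_ssm" {
  role       = aws_iam_role.jump_box.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "jump_box" {
  name = "jump-box-profile"
  role = aws_iam_role.jump_box.name
}

# Then point the jump box at it:
# iam_instance_profile = aws_iam_instance_profile.jump_box.name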
Once you have all of that, you can just run the following command:
aws ssm start-session \
  --target <instance-id> \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["your-db.xxxxx.rds.amazonaws.com"],"portNumber":["5432"],"localPortNumber":["5430"]}'
Like the previous option, this opens a tunnel that lets you connect to the database on localhost:5430.
Pros:
- No public IPs or open ports
- No SSH keys to manage
- IAM-based access
- You can audit access in AWS CloudTrail
- Feels like you’re doing something correct
Cons:
- Requires AWS CLI
- It probably doesn’t actually solve your problem, since most third-party tools can’t initiate SSM sessions
Here’s the thing: while port forwarding with SSM is nice for dev and DevOps access, if you need something like Metabase to connect to your database, then SSM won’t help you.
So yes, we tried this first, and only afterwards found out that the access was needed for Metabase.
Option 2: VPN
In case you somehow don’t know, a VPN (Virtual Private Network) is basically a magic tunnel that lets machines outside your VPC pretend they’re inside it. Once connected, your laptop can access your internal resources, as if they were part of your precious private subnet all along.
This one’s probably overkill for most use cases like ours, but hey, it exists. You can spin up a VPN (e.g. AWS Client VPN or a WireGuard setup) and let your client connect to your internal network that way. Great if you already have a VPN setup, but if you don’t, then do you really want to?
There are a few flavors here:
- AWS Client VPN: The “I want AWS to hold my hand” option.
- Roll-your-own VPN (OpenVPN, WireGuard): AKA “I like pain.”
- Third-party VPNs: Where you pay to inflict the pain on someone else.
We’ll use AWS Client VPN because we’re not trying to impress anyone, we’re just trying to get this over with before lunch.
resource "aws_ec2_client_vpn_endpoint" "example" { description = "Client VPN for private RDS access" client_cidr_block = "10.0.10.0/22" server_certificate_arn = "arn:aws:acm:your-cert-arn" authentication_options { type = "certificate-authentication" root_certificate_chain_arn = "arn:aws:acm:your-root-ca-arn" } connection_log_options { enabled = false } split_tunnel = true vpc_id = aws_vpc.main.id dns_servers = ["8.8.8.8"] # shrug } resource "aws_ec2_client_vpn_network_association" "example" { client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id subnet_id = aws_subnet.private_subnet.id } resource "aws_ec2_client_vpn_authorization_rule" "example" { client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id target_network_cidr = "10.0.0.0/16" authorize_all_groups = true }
Oh right, you’ll need to create certificates. With ACM. Or OpenSSL. Or just write them by hand. Whichever works.
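If you take the mutual-authentication route, the AWS docs lean on easy-rsa. A rough sketch of generating the certs and importing the server one into ACM (paths will vary):

# Generate a CA, a server cert, and a client cert
git clone https://github.com/OpenVPN/easy-rsa.git
cd easy-rsa/easyrsa3
./easyrsa init-pki
./easyrsa build-ca nopass
./easyrsa build-server-full server nopass
./easyrsa build-client-full client1.domain.tld nopass

# Import the server certificate into ACM
aws acm import-certificate \
  --certificate fileb://pki/issued/server.crt \
  --private-key fileb://pki/private/server.key \
  --certificate-chain fileb://pki/ca.crt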
What this actually does:
- Spins up a VPN endpoint (as if we needed more resources to take care of).
- Associates it with your VPC so users can crawl around your private subnets.
- Lets anyone who can connect access your RDS instance as if they were in the same network.
Pros:
- Enterprise-y!
- Secure and scalable
- Useful for broader access needs
- Works great if you already have a VPN setup (we don’t)
Cons:
- You have to use the VPN
- Doesn’t work with Metabase
- Certs, IAM, CIDR blocks, DNS resolution issues
- Can very easily get very expensive
Bonus Consideration: Read Replicas
If your client is doing heavy analytical workloads, consider offloading queries to an RDS read replica. That way, your production DB doesn’t get overwhelmed by analytical queries, and you get some nice isolation between transactional and analytical use.
Keep in mind:
- Read replicas can lag behind the primary DB
- You still need to expose the replica through one of the methods above
But it’s a nice option if performance and reliability matter.
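If you’re curious, in Terraform a same-region replica can be as simple as pointing replicate_source_db at the primary (a rough sketch; identifiers are placeholders):

resource "aws_db_instance" "analytics_replica" {
  identifier          = "medusa-db-replica"
  replicate_source_db = aws_db_instance.main.identifier # placeholder: your primary instance
  instance_class      = "db.t3.small"
  skip_final_snapshot = true
  # Engine and storage settings are inherited from the source
}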
Wrapping Up
So, your client wants access to their Medusa.js database. It’s a valid ask, but you still want to handle it responsibly.
Here’s the quick TLDR:
- Use SSH port forwarding if you need quick, compatible access from outside (e.g. for Metabase).
- Use SSM for internal-only dev/admin access, just don’t expect it to work with tools like Metabase.
- Use a VPN if you’re going full enterprise or need multi-service access.
- Add read replicas if query load is a concern.
And above all: don’t just throw the DB in a public subnet. That way lies sadness, audit findings, and probably a very uncomfortable meeting.