i'd been on render for every project. it was comfortable, it just worked. then a client project pushed me to ec2 and i learned more in one day than i had in weeks of reading docs.
Suprim Khatri
Backend Developer · May 14, 2026
render was fine. it still is. but for this client project i needed more control — custom nginx config, ssl, a real domain wired up properly. ec2 felt like the grown-up move.
i was also just curious. i'd heard ec2 thrown around everywhere and never actually touched it. today felt like the day.
the aws console is intimidating the first time. a lot of options, a lot of menus. but launching an instance is actually straightforward once you know what to pick.
i went with amazon linux, which uses dnf as the package manager and has go available in the repos.

the key pair is where most people get tripped up. you create it once, download the .pem file, and if you lose it you're locked out. i saved mine and immediately ran:
chmod 400 ~/Downloads/my-key.pem

this locks the file down to your user only. ssh refuses to use a .pem whose permissions are too open. learned that the hard way on a previous project.
ssh -i ~/Downloads/my-key.pem ec2-user@your-public-ip

ec2-user is the default user on amazon linux. the first time it asks you to confirm the fingerprint — type yes. then you're in. that moment of seeing the ec2 terminal for the first time hits different.
installed git and go:
sudo dnf install git golang -y

dnf is the package manager on amazon linux. the -y flag auto-confirms everything so you're not sitting there pressing enter repeatedly.
the tutorial i was following had a plain go api. i had a turborepo monorepo with the backend at apps/backend. most of what he did applied but i had to adapt a few things.
building the binary, for example — he ran it from the project root. i had to go into the backend directory first:
cd apps/backend
go build -o server ./cmd/server
chmod +x server

the godotenv.Load() call in my config also looks for .env relative to wherever the binary runs from. so the .env had to live in apps/backend, not the monorepo root. figured that out after getting a DATABASE_URL is required panic on first boot.
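for reference, the relevant bit of config looks roughly like this. a minimal sketch — the struct and function names are illustrative, the godotenv.Load() behavior is the point:

package config

import (
    "os"

    "github.com/joho/godotenv"
)

type Config struct {
    DatabaseURL string
    FrontendURL string
}

func Load() Config {
    // godotenv.Load() with no arguments reads ".env" from the process's
    // working directory, not from the monorepo root and not from wherever
    // the binary file happens to live
    _ = godotenv.Load() // ignoring the error is fine, real env vars still work

    if os.Getenv("DATABASE_URL") == "" {
        // this is the boot failure from earlier: .env in the wrong directory
        panic("DATABASE_URL is required")
    }

    return Config{
        DatabaseURL: os.Getenv("DATABASE_URL"),
        FrontendURL: os.Getenv("FRONTEND_URL"),
    }
}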
i had my prod secrets in .env.production at the monorepo root locally. the scp command to get it onto ec2:
scp -i ~/Downloads/my-key.pem .env.production ec2-user@your-ip:~/your-repo/apps/backend/.env

run that from your local terminal, not inside the ec2 session. i ran it inside ec2 the first time. scp copies between two machines — it needs to be invoked from one of them, talking to the other. classic mistake.
the go binary runs on port 5000. nginx sits in front of it and handles the public-facing traffic on 80/443:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name api.yourdomain.com;

        location / {
            proxy_pass http://localhost:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
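one app-side note on those proxy_set_header lines: gin only honors X-Forwarded-For from proxies it trusts, and it logs a warning if you leave the trust list wide open. nginx proxies from the same box, so trusting loopback is enough. a minimal sketch (SetTrustedProxies is real gin api, the health route is illustrative):

package main

import "github.com/gin-gonic/gin"

func main() {
    router := gin.Default()

    // nginx proxies from the same machine, so trust only loopback.
    // with this set, c.ClientIP() resolves the real client address from
    // X-Forwarded-For / X-Real-IP instead of reporting nginx's address
    if err := router.SetTrustedProxies([]string{"127.0.0.1"}); err != nil {
        panic(err)
    }

    router.GET("/healthz", func(c *gin.Context) {
        c.String(200, "ok from %s", c.ClientIP())
    })

    router.Run(":5000") // the address nginx's proxy_pass points at
}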
certbot then handled ssl automatically — it modified the nginx config itself and set up auto-renewal. one command:

sudo certbot --nginx -d api.yourdomain.com

got the server running. tried logging in from the frontend. cors error.
turns out FRONTEND_URL in my .env was set to https://mysite.com but the frontend runs on https://www.mysite.com. cors does exact origin matching. the www matters.
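the shape of the fix on the go side, sketched with gin-contrib/cors (the method and header lists here are illustrative):

package main

import (
    "os"

    "github.com/gin-contrib/cors"
    "github.com/gin-gonic/gin"
)

func main() {
    router := gin.Default()

    // the browser's Origin header is compared byte for byte:
    // https://mysite.com and https://www.mysite.com are different origins
    router.Use(cors.New(cors.Config{
        AllowOrigins:     []string{os.Getenv("FRONTEND_URL")}, // https://www.mysite.com, www included
        AllowMethods:     []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
        AllowHeaders:     []string{"Origin", "Content-Type", "Authorization"},
        AllowCredentials: true, // needed for the auth cookie to be accepted
    }))

    router.Run(":5000")
}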
fixed that. logged in. cookie wasn't being set.
two issues:
GIN_MODE was still debug — in debug mode gin handles cookies differently. set it to release.

COOKIE_DOMAIN also needs a leading dot:

COOKIE_DOMAIN=.yourdomain.com

without the dot the cookie is scoped to the exact subdomain and won't carry across to the www frontend.
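here's roughly where that value lands in code, a sketch using gin's SetCookie (the handler and cookie name are illustrative):

package main

import (
    "net/http"
    "os"

    "github.com/gin-gonic/gin"
)

func loginHandler(c *gin.Context) {
    token := "opaque-session-token" // illustrative, the real one comes from the session store

    // name, value, maxAge in seconds, path, domain, secure, httpOnly.
    // domain is ".yourdomain.com", so the browser sends the cookie to
    // www.yourdomain.com and api.yourdomain.com alike
    c.SetCookie("session", token, 3600*24*7, "/",
        os.Getenv("COOKIE_DOMAIN"), true, true)

    c.Status(http.StatusNoContent)
}

func main() {
    // gin reads GIN_MODE from the environment on startup, so
    // GIN_MODE=release in the .env needs no code change here
    router := gin.Default()
    router.POST("/login", loginHandler)
    router.Run(":5000")
}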
running nohup ./server & keeps it alive when you disconnect but not when ec2 reboots. learned this the hard way — came back the next morning to a 502.
systemd is the right way:
[Unit]
Description=My Backend
After=network.target
[Service]
Type=simple
User=ec2-user
WorkingDirectory=/home/ec2-user/your-repo/apps/backend
ExecStart=/home/ec2-user/your-repo/apps/backend/server
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Restart=on-failure means it also recovers from crashes, not just reboots. once enabled (sudo systemctl enable --now your-project) it just runs — you never think about it again.
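one more app-side note while we're here: systemctl restart stops the process with SIGTERM, so the server should finish in-flight requests instead of dying mid-deploy. a minimal sketch of the standard library pattern (the router setup is illustrative):

package main

import (
    "context"
    "net/http"
    "os/signal"
    "syscall"
    "time"

    "github.com/gin-gonic/gin"
)

func main() {
    router := gin.Default()
    srv := &http.Server{Addr: ":5000", Handler: router}

    // systemd sends SIGTERM on restart/stop; ctx is cancelled when it arrives
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
    defer stop()

    go func() {
        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            panic(err)
        }
    }()

    <-ctx.Done()

    // give in-flight requests up to 10 seconds to finish
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    srv.Shutdown(shutdownCtx)
}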
the last piece. every push to main that touches apps/backend sshes into ec2, pulls, rebuilds, and restarts:
name: Deploy Backend

on:
  push:
    branches:
      - main
    paths:
      - "apps/backend/**"

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to EC2
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/your-repo
            git pull origin main
            cd apps/backend
            go build -o server ./cmd/server
            chmod +x server
            sudo systemctl restart your-project

the paths filter is important — without it every frontend push would trigger a backend redeploy and a few seconds of downtime.
render abstracts a lot. that's great when you're moving fast but it means you never really understand what's happening under the hood. ec2 makes you wire everything yourself — nginx, ssl, process management, deployment pipelines.
it's more work. it's also more knowledge. i understand what a reverse proxy actually does now. i understand why cookie domains work the way they do. i understand what systemd is managing.
also: always pick the region closest to your users. the default is us-east-1. i'm in nepal — ap-south-1 (mumbai) is the right call. obvious in hindsight, easy to miss when you're just clicking through the setup form.