Self-Hosting with Caddy: HTTPS in 5 Minutes
Why Caddy
Nginx has been the default for years. It works. But every time we set up a new site, we're writing 30 lines of config, generating Let's Encrypt certificates with certbot, setting up auto-renewal cron jobs, and debugging permission issues when renewal fails at 3 AM.
Caddy does all of that with two directives:
yourdomain.com {
    root * /srv/www/yoursite
    file_server
}
That's a complete HTTPS-enabled static site. Caddy provisions the TLS certificate from Let's Encrypt automatically, renews it before expiry, redirects HTTP to HTTPS, and enables HTTP/2 — all without a single line of SSL config. Point your DNS, start Caddy, done.
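Assuming DNS already resolves to your server, you can watch this happen from any machine:
# plain HTTP should answer with a redirect to HTTPS
curl -I http://yourdomain.com
# the HTTPS endpoint should respond with a valid certificate
curl -I https://yourdomain.com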
Installing Caddy
On Ubuntu/Debian:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
Caddy installs as a systemd service and starts automatically. The config lives at /etc/caddy/Caddyfile.
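A quick sanity check that the package installed and the service is running:
caddy version
systemctl is-active caddy   # should print "active"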
Our Setup: One VPS, Multiple Services
We run everything on a single free Oracle Cloud VPS — multiple websites, API services, and an MCP server. Here's the actual Caddyfile:
# Static site with API backend
getsitewatch.dev, www.getsitewatch.dev {
    root * /srv/www/sitewatch/landing
    file_server

    handle /api/* {
        reverse_proxy localhost:3200
    }
}

# API-only subdomain
api.getsitewatch.dev {
    reverse_proxy localhost:3200
}

# MCP server for Claude
mcp.getsitewatch.dev {
    reverse_proxy localhost:3232
}

# Another static site
zerodeps.dev, www.zerodeps.dev {
    root * /srv/www/zerodeps
    try_files {path} {path}/ /404.html
    file_server
}

# Health check on port 80
:80 {
    respond "server is live" 200
}
Six hostnames across five site blocks, and every HTTPS hostname gets its own auto-provisioned certificate. Let me break down the patterns.
Pattern 1: Static Site
The simplest case. Point a directory at a domain:
yourdomain.com {
    root * /srv/www/yoursite
    file_server
}
file_server serves static files and automatically looks for index.html in directories. If someone visits /blog/, it serves /blog/index.html.
For clean 404 handling, add try_files:
zerodeps.dev {
    root * /srv/www/zerodeps
    try_files {path} {path}/ /404.html
    file_server
}
This tries the exact path first, then as a directory, and falls back to your custom 404 page. No more ugly default error pages.
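One caveat: try_files is a rewrite, so the fallback page is served with a 200 status rather than a real 404. If the status code matters (for crawlers, say), the usual approach is handle_errors, which serves the page while preserving the error status. A sketch:
zerodeps.dev {
    root * /srv/www/zerodeps
    file_server

    # on a 404 (or any error), serve the custom page with the original status
    handle_errors {
        rewrite * /404.html
        file_server
    }
}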
Pattern 2: Reverse Proxy
For Node.js apps, Docker containers, or any backend service:
api.yourdomain.com {
    reverse_proxy localhost:3200
}
That's it. Caddy handles TLS termination, HTTP/2, and proxies all traffic to your app on port 3200. Your app only needs to listen on localhost — no HTTPS config in your application code.
We use this for our MCP server. The server binds to 127.0.0.1:3232 and Caddy exposes it at https://mcp.getsitewatch.dev. The server doesn't know or care about TLS.
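A quick way to confirm the separation of concerns, assuming ss and curl are available on the server:
# the app should be listening on loopback only, not 0.0.0.0
ss -tlnp | grep 3232
# the public side answers over TLS via Caddy
curl -s -o /dev/null -w '%{http_code}\n' https://mcp.getsitewatch.dev/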
Pattern 3: Static Site + API Backend
This is the most common pattern for modern apps — a static frontend with an API on the same domain:
getsitewatch.dev {
    root * /srv/www/sitewatch/landing
    file_server

    handle /api/* {
        reverse_proxy localhost:3200
    }
}
Requests to /api/* go to the Node.js backend. Everything else is served as static files. Same domain, no CORS issues, clean URLs.
The handle directive is Caddy's way of doing path-based routing. handle blocks are mutually exclusive: the first matching one wins. This is important: without the handle block, Caddy would try to serve /api/health as a static file and return 404.
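An equivalent, more explicit version wraps the static half in a catch-all handle block; a handle with no matcher catches everything earlier handle blocks didn't claim:
getsitewatch.dev {
    handle /api/* {
        reverse_proxy localhost:3200
    }

    # no matcher: handles everything that isn't /api/*
    handle {
        root * /srv/www/sitewatch/landing
        file_server
    }
}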
Pattern 4: www Redirect
List both the bare domain and www variant:
yourdomain.com, www.yourdomain.com {
    root * /srv/www/yoursite
    file_server
}
Caddy provisions certificates for both and serves the same content. If you want to redirect www to bare (or vice versa), be explicit:
www.yourdomain.com {
    redir https://yourdomain.com{uri} permanent
}
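Once certificates are issued, you can sanity-check the redirect from any machine (redir permanent answers with a 301):
curl -sI https://www.yourdomain.com | grep -iE '^HTTP|^location'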
Making Changes
Edit the Caddyfile and reload — zero downtime:
sudo nano /etc/caddy/Caddyfile
sudo systemctl reload caddy
Caddy validates the config before applying it. If there's a syntax error, it keeps running the old config and tells you what's wrong. No more "we restarted nginx and the site is down for 30 seconds while we fix a typo".
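You can also validate (and canonically format) the file by hand before reloading:
caddy validate --config /etc/caddy/Caddyfile
caddy fmt --overwrite /etc/caddy/Caddyfile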
Check the status anytime:
sudo systemctl status caddy
DNS Setup
For each domain/subdomain in your Caddyfile, you need a DNS A record pointing to your server's IP. At your domain registrar:
| Type | Name | Value     | Notes         |
|------|------|-----------|---------------|
| A    | @    | 130.x.x.x | bare domain   |
| A    | www  | 130.x.x.x | www subdomain |
| A    | api  | 130.x.x.x | api subdomain |
| A    | mcp  | 130.x.x.x | mcp subdomain |
All subdomains point to the same IP. Caddy figures out which site to serve based on the hostname in the request. This is called virtual hosting, and it means you can run dozens of sites on one server.
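Before starting Caddy, it's worth confirming each record resolves; Let's Encrypt won't issue a certificate for a hostname that doesn't point at you (dig assumed installed):
# each hostname in the Caddyfile should print the server's IP
dig +short getsitewatch.dev
dig +short api.getsitewatch.dev
dig +short mcp.getsitewatch.dev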
Firewall
Caddy needs ports 80 and 443 open. Port 80 is required even for HTTPS: Let's Encrypt's HTTP-01 challenge arrives on it, and Caddy uses it for the HTTP-to-HTTPS redirect.
If you're on Oracle Cloud (or any cloud provider), you need to open ports at two levels:
1. Cloud security list: Add ingress rules for ports 80 and 443 from 0.0.0.0/0.
2. OS firewall: Oracle's Ubuntu image ships with iptables rules that block everything except SSH, even when the cloud firewall allows it:
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
sudo netfilter-persistent save
This trips up a lot of people. Your Caddy config is correct, DNS is pointing right, but the site doesn't load — it's almost always the OS-level firewall.
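Two quick checks narrow it down, one from the server and one from outside (130.x.x.x is your server's public IP, as in the DNS table above):
# on the server: Caddy should be bound to :80 and :443
sudo ss -tlnp | grep caddy
# from your laptop: a timeout here (rather than a refusal) usually means a firewall
curl -m 5 -I http://130.x.x.x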
Docker Services Behind Caddy
If you run Docker containers, they expose ports on localhost. Caddy proxies to them the same way:
# docker-compose.yml exposes port 3200
app.yourdomain.com {
    reverse_proxy localhost:3200
}

# Another container on port 8080
dashboard.yourdomain.com {
    reverse_proxy localhost:8080
}
Don't expose container ports to the public (0.0.0.0:3200). Bind them to localhost (127.0.0.1:3200) and let Caddy be the only thing facing the internet. This way all traffic goes through HTTPS and you have one place to manage access.
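In docker-compose terms, the difference is a prefix in the ports mapping (the service and image names here are placeholders):
# docker-compose.yml (sketch)
services:
  app:
    image: your-app-image        # placeholder image
    ports:
      - "127.0.0.1:3200:3200"    # loopback only; Caddy is the public face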
Logs and Debugging
Caddy logs to journald by default:
# Recent logs
sudo journalctl -u caddy --no-pager -n 50
# Follow live
sudo journalctl -u caddy -f
For per-site access logs, add a log directive:
yourdomain.com {
    log {
        output file /var/log/caddy/yourdomain.log
        format json
    }
    root * /srv/www/yoursite
    file_server
}
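The JSON format pairs well with jq for quick queries (jq assumed installed; status and request.uri are Caddy's standard access-log fields):
# status codes and paths from the last 20 requests
tail -n 20 /var/log/caddy/yourdomain.log | jq '{status, uri: .request.uri}'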
If a certificate isn't provisioning, check the logs. The most common issues:
- DNS not pointing to your server — Let's Encrypt can't reach you
- Port 80 blocked — the HTTP challenge needs it
- Rate limited — too many certificate requests (usually from testing). Wait an hour
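If you keep hitting rate limits while experimenting, one option is to point Caddy at Let's Encrypt's staging CA via the acme_ca global option. Staging certificates aren't trusted by browsers, but failed attempts there don't burn your production quota:
{
    # global options: staging CA while testing; remove for real certificates
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

yourdomain.com {
    root * /srv/www/yoursite
    file_server
}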
The Full Stack
Combine this with a free Oracle Cloud VPS and a complete self-hosted setup looks like this:
- Grab a free ARM instance (4 OCPUs, 24GB RAM)
- Install Caddy (5 minutes)
- Point your domains (DNS A records)
- Write your Caddyfile (one block per site/service)
- Deploy your apps (static files, Docker containers, Node.js services)
Total cost: $0. Total time: under an hour. You get HTTPS, HTTP/2, reverse proxy, and virtual hosting for as many domains as you want — all on a machine with more RAM than most paid VPS plans.
The Caddyfile for our entire infrastructure is 20 lines. It runs two websites, an API, and an MCP server. Adding a new service is three lines and a DNS record. That's the kind of simplicity worth optimising for.