Lock Your Origin Server to Cloudflare — The Parts Most Guides Leave Out
AI-generated article (drafted with the help of AI tools)
If you put Cloudflare in front of a server and don't take any extra steps, your origin is still reachable directly from the public internet. Anyone who finds the origin IP — through certificate transparency logs, old DNS history, or just port scanning — can bypass Cloudflare entirely. All the WAF rules, rate limits, and bot protections you configured? Gone for that request.
This post shows how to actually fix that on a Debian/Ubuntu server running nginx: set Cloudflare's SSL mode correctly, lock the firewall to Cloudflare IP ranges, keep those ranges up to date automatically, and make sure nginx sees the real client IP instead of Cloudflare's edge IP in your logs.
I'm writing this after a refactoring session where I discovered my own server was in the "false sense of security" state — iptables had Cloudflare IP allow rules that looked correct, but the default policy was `ACCEPT`, which meant the rules were decorative. Everything was open. This post is the version of the guide I wish I'd had.
What you actually need
Locking your origin to Cloudflare is a three-layer problem, not a one-script problem. All three layers have to be right:
1. Cloudflare configuration: SSL/TLS mode set to Full (Strict) so the CF ↔ origin leg is encrypted and validated.
2. Host firewall (iptables/ip6tables): default-drop policy with explicit allow rules for Cloudflare's IP ranges on ports 80/443, plus allows for SSH and any other inbound services you need.
3. nginx: the `ngx_http_realip_module` configured to trust the `CF-Connecting-IP` header, so logs and application code see the real visitor IP instead of Cloudflare's edge IP.
Then, because Cloudflare updates their IP ranges occasionally, everything in layers 2 and 3 that references those ranges has to be refreshed periodically.
Most tutorials cover one of those three layers and imply it's sufficient. It isn't.
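To keep yourself honest while working through the layers, it helps to have a small audit sketch. The helper names below are invented for this post; the functions take command output as plain text so you can exercise them anywhere, and the commented lines show how to feed them real data on the server (layer 1 still has to be eyeballed in the Cloudflare dashboard).

```shell
#!/usr/bin/env bash
# Sketch: offline-testable checks for layers 2 and 3. The function names
# are made up for this post; they parse command output passed in as text.

# Layer 2: is the INPUT chain's default policy DROP?
input_policy_is_drop() {    # arg: first line of `iptables -L INPUT -n`
    grep -q 'policy DROP' <<< "$1"
}

# Layer 3: does the nginx conf trust CF-Connecting-IP?
realip_conf_trusts_cf() {   # arg: contents of cloudflare-real-ip.conf
    grep -q 'real_ip_header CF-Connecting-IP' <<< "$1"
}

# On the server:
# input_policy_is_drop "$(sudo iptables -L INPUT -n | head -1)" && echo "firewall: ok"
# realip_conf_trusts_cf "$(cat /etc/nginx/conf.d/cloudflare-real-ip.conf)" && echo "nginx: ok"
```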
Prerequisites
Before doing anything below:
- Your domain is already proxied through Cloudflare (orange cloud on in the DNS tab).
- You have a valid TLS certificate on the origin (Let's Encrypt is fine; we'll use it below).
- You have root/sudo access to the server.
- Critically: you have an out-of-band recovery method in case you lock yourself out. A VPS provider's web console (Linode Lish, DigitalOcean Droplet Console, Hetzner Robot console, etc.) is ideal. Do not proceed without this.
- You have a second SSH session open while making firewall changes, so if the primary session dies you still have a way to fix things.
Install the packages we'll need:
sudo apt update
sudo apt install -y iptables-persistent curl
`iptables-persistent` is the package that restores your firewall rules on boot. Without it, every reboot wipes the rules and you're back to square one.
Step 1: Set Cloudflare to Full (Strict)
In the Cloudflare dashboard, select your domain → SSL/TLS → Overview. The setting should be **Full (Strict)**.
Each mode means:
- Off: no HTTPS at all. Wrong.
- Flexible: visitor ↔ CF is HTTPS, CF ↔ origin is HTTP. This leaks traffic between CF and your origin in plaintext. Wrong.
- Full: CF ↔ origin is HTTPS but CF does not validate the origin's certificate. An attacker who intercepts the CF ↔ origin path can present any cert. Weak.
- Full (Strict): CF ↔ origin is HTTPS and CF validates the origin's certificate against a real CA. This is what you want.
If you're using Let's Encrypt on origin (as shown below), Full (Strict) just works. No extra configuration needed.
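If you want to confirm the origin cert will pass Full (Strict) validation before relying on it, `openssl s_client` can tell you. `cert_verifies` is a tiny helper invented here; `YOUR_DOMAIN` and `YOUR_ORIGIN_IP` are placeholders.

```shell
# Sketch: does the origin present a cert that validates against a real CA?
# cert_verifies is a made-up helper that parses `openssl s_client` output.
cert_verifies() {   # arg: full output of `openssl s_client`
    grep -q 'Verify return code: 0 (ok)' <<< "$1"
}

# Run from any machine that can still reach the origin directly
# (i.e. before step 3 locks it down). Placeholders: YOUR_DOMAIN, YOUR_ORIGIN_IP.
# out=$(echo | openssl s_client -connect YOUR_ORIGIN_IP:443 \
#         -servername YOUR_DOMAIN 2>/dev/null)
# cert_verifies "$out" && echo "origin cert: ok for Full (Strict)"
```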
Step 2: Configure nginx to trust Cloudflare's real-IP header
Once Cloudflare is proxying requests, your nginx `$remote_addr` variable becomes Cloudflare's edge IP — a number like `162.158.x.x`. Your access logs fill up with Cloudflare IPs, rate limits misfire, abuse detection breaks, etc.
Cloudflare sends the real visitor IP in the `CF-Connecting-IP` header. nginx's `ngx_http_realip_module` can be told to trust that header, but only when the request arrives from a Cloudflare IP range.
Create `/etc/nginx/conf.d/cloudflare-real-ip.conf`:
# AUTO-MANAGED by /usr/local/bin/update-cloudflare-ips.sh
# Do not edit by hand. See https://www.cloudflare.com/ips/
# Cloudflare IPv4
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;
# Cloudflare IPv6
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2a06:98c0::/29;
set_real_ip_from 2c0f:f248::/32;
real_ip_header CF-Connecting-IP;
real_ip_recursive on;
What each directive does:
- `set_real_ip_from` declares a trusted proxy CIDR. nginx only rewrites `$remote_addr` when the request comes from one of these ranges, which prevents spoofing from anywhere else.
- `real_ip_header CF-Connecting-IP` tells nginx which header to read the real client IP from. CF always sets `CF-Connecting-IP` to a single IP (the true visitor), which is cleaner than parsing `X-Forwarded-For`.
- `real_ip_recursive on` walks back through the header chain past any additional trusted proxies. Mostly defensive but harmless.
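If you want the logs to keep both addresses, the realip module also exposes the original peer address as `$realip_remote_addr`. A sketch of a log format using it (this goes in the `http {}` block; the format name `cf_combined` is arbitrary):

```nginx
# Sketch: log both the rewritten visitor IP and the CF edge IP it arrived via.
log_format cf_combined '$remote_addr (via $realip_remote_addr) - $remote_user '
                       '[$time_local] "$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent"';
# Then reference it where you log:
# access_log /var/log/nginx/access.log cf_combined;
```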
Test and reload:
sudo nginx -t
sudo systemctl reload nginx
After this, hit the site and tail the access log. You should see real visitor IPs instead of Cloudflare IPs:
sudo tail -f /var/log/nginx/access.log
If you still see `162.158.x.x` / `172.68-71.x.x` / `104.16-31.x.x` type addresses, the module isn't active. Check `nginx -V 2>&1 | tr ' ' '\n' | grep realip` — you should see `--with-http_realip_module`. Standard Debian/Ubuntu nginx packages include it.
Step 3: Configure iptables carefully
This is where it gets real. A misstep here can lock you out of SSH. We're going to build up the firewall rules *before* flipping the default policy to DROP, verify each step works, and only then switch.
Have a second SSH session open and a VPS console ready. Don't skip this.
3a. Look at what you have now
sudo iptables -L INPUT -n --line-numbers
Note whether the chain says `policy ACCEPT` or `policy DROP` at the top. If it says `ACCEPT`, any rules you add are decorative until we change that.
3b. Allow SSH
sudo iptables -I INPUT 1 -p tcp --dport 22 -j ACCEPT
`-I INPUT 1` inserts at the top of the chain so SSH is checked before anything else. If your SSH is on a different port, substitute it.
3c. Allow loopback
sudo iptables -I INPUT 2 -i lo -j ACCEPT
Services on the server that talk to themselves over `127.0.0.1` need this. If you skip it and flip to DROP, local IPC breaks in surprising ways.
3d. Allow return traffic for established connections
sudo iptables -I INPUT 3 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Without this, outbound connections from your server can't get replies back, because the replies count as "new inbound traffic" and would be dropped.
3e. Allow any internal-network IPs you need
If you have other servers on a private network that need to reach this one (e.g., for a database or Redis connection):
sudo iptables -A INPUT -s IP_INSERT_HERE -j ACCEPT
Substitute your own private IPs. `-A` appends to the end of the chain (order among these doesn't matter). Skip this step if you don't have any.
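If the private IPs come from a variable or inventory file rather than your own typing, a cheap format check before handing them to iptables avoids feeding it garbage. `looks_like_ipv4_cidr` is a helper invented for this post; it checks shape only, not octet ranges.

```shell
# Sketch: shape-check an IPv4 address or CIDR before using it in a rule.
# Note: this validates the *format* only; 999.1.1.1 would still pass.
looks_like_ipv4_cidr() {
    [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}(/([0-9]|[12][0-9]|3[0-2]))?$ ]]
}

# Hypothetical usage with an INTERNAL_IPS array:
# for ip in "${INTERNAL_IPS[@]}"; do
#     looks_like_ipv4_cidr "$ip" || { echo "bad IP: $ip" >&2; exit 1; }
#     sudo iptables -A INPUT -s "$ip" -j ACCEPT
# done
```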
3f. Verify before proceeding
Run:
sudo iptables -L INPUT -n -v --line-numbers
Confirm you see SSH at line 1, loopback at line 2 (with `in: lo` in the `-v` output), and conntrack at line 3. Open a fresh SSH session from your local machine (not reusing an existing one) and confirm it connects. Do not move on if anything is off.
3g. Flip the default policy to DROP
sudo iptables -P INPUT DROP
The moment this executes:
- Existing connections keep working (conntrack rule).
- New SSH connections keep working (rule 1).
- Loopback keeps working (rule 2).
- Anything not matched by a rule is dropped.
At this point the firewall is real but has no Cloudflare allows yet, so ports 80/443 from the internet are also blocked. That's expected: the automation script in step 4 will add the CF rules next. Don't panic if your site times out for a minute (a DROP policy makes connections hang rather than get refused).
3h. Do the same for IPv6
If your server has a public IPv6 address, the IPv4 firewall doesn't protect IPv6 traffic. `ip6tables` is a completely separate ruleset.
sudo ip6tables -I INPUT 1 -p tcp --dport 22 -j ACCEPT
sudo ip6tables -I INPUT 2 -i lo -j ACCEPT
sudo ip6tables -I INPUT 3 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo ip6tables -P INPUT DROP
Step 4: The update script (handles CF ranges for nginx + iptables + ip6tables)
This script does three things on each run:
1. Downloads Cloudflare's current IP range lists.
2. Regenerates `/etc/nginx/conf.d/cloudflare-real-ip.conf` and reloads nginx if it changed.
3. Refreshes the iptables and ip6tables allow rules for CF ranges on ports 80/443.
It's self-healing: on the first run, it finds any existing CF allow rules you added by hand and replaces them with tagged ones it manages, so subsequent runs can cleanly remove and re-add its own rules without touching anything else.
Create the script:
sudo nano /usr/local/bin/update-cloudflare-ips.sh
Paste in:
#!/usr/bin/env bash
# ===========================================================
# update-cloudflare-ips.sh
# -----------------------------------------------------------
# Refreshes Cloudflare IP ranges in three places:
# 1. /etc/nginx/conf.d/cloudflare-real-ip.conf (nginx real_ip)
# 2. iptables INPUT chain (IPv4 firewall)
# 3. ip6tables INPUT chain (IPv6 firewall)
#
# Self-healing: removes any existing rules matching a CF range
# on ports 80/443 before adding fresh ones, so it cleans up
# hand-added rules on first run. Non-CF rules (SSH, loopback,
# internal IPs, Docker, RELATED,ESTABLISHED) are never touched.
# ===========================================================
set -euo pipefail
NGINX_CONF=/etc/nginx/conf.d/cloudflare-real-ip.conf
IPTABLES_SAVE_V4=/etc/iptables/rules.v4
IPTABLES_SAVE_V6=/etc/iptables/rules.v6
MARK="cf-managed"
TMP_NGINX=$(mktemp)
trap 'rm -f "$TMP_NGINX"' EXIT
# -----------------------------------------------------------
# 1. Fetch CF IP lists
# -----------------------------------------------------------
V4=$(curl -fsSL --max-time 10 https://www.cloudflare.com/ips-v4)
V6=$(curl -fsSL --max-time 10 https://www.cloudflare.com/ips-v6)
if [[ -z "$V4" || -z "$V6" ]]; then
    echo "ERROR: failed to fetch CF IP lists" >&2
    exit 1
fi
# -----------------------------------------------------------
# 2. Build new nginx conf
# -----------------------------------------------------------
{
    echo "# AUTO-GENERATED by update-cloudflare-ips.sh on $(date -u +%FT%TZ)"
    echo "# DO NOT EDIT BY HAND. Source: https://www.cloudflare.com/ips/"
    echo
    echo "# Cloudflare IPv4"
    while IFS= read -r cidr; do
        [[ -n "$cidr" ]] && echo "set_real_ip_from $cidr;"
    done <<< "$V4"
    echo
    echo "# Cloudflare IPv6"
    while IFS= read -r cidr; do
        [[ -n "$cidr" ]] && echo "set_real_ip_from $cidr;"
    done <<< "$V6"
    echo
    echo "real_ip_header CF-Connecting-IP;"
    echo "real_ip_recursive on;"
} > "$TMP_NGINX"
# -----------------------------------------------------------
# 3. Update nginx conf only if changed, then reload
# -----------------------------------------------------------
NGINX_CHANGED=0
if ! [[ -f "$NGINX_CONF" ]] || ! cmp -s "$TMP_NGINX" "$NGINX_CONF"; then
    BACKUP="${NGINX_CONF}.bak.$(date +%s)"
    cp -a "$NGINX_CONF" "$BACKUP" 2>/dev/null || true
    install -m 0644 "$TMP_NGINX" "$NGINX_CONF"
    if ! nginx -t >/dev/null 2>&1; then
        echo "ERROR: nginx -t failed after CF update, rolling back" >&2
        [[ -f "$BACKUP" ]] && mv "$BACKUP" "$NGINX_CONF"
        exit 1
    fi
    systemctl reload nginx
    NGINX_CHANGED=1
fi
# -----------------------------------------------------------
# 4. Helper: refresh CF allow rules in a given iptables chain.
# Args: <iptables_cmd> <cidr_list> <insert_position>
# -----------------------------------------------------------
refresh_cf_rules() {
    local ipt="$1"
    local cidrs="$2"
    local insert_at="$3"
    # Remove any rule matching a CF range on 80/443, tagged or not.
    while IFS= read -r cidr; do
        [[ -z "$cidr" ]] && continue
        while $ipt -D INPUT -s "$cidr" -p tcp -m multiport --dports 80,443 \
            -m comment --comment "$MARK" -j ACCEPT 2>/dev/null; do :; done
        while $ipt -D INPUT -s "$cidr" -p tcp -m multiport --dports 80,443 \
            -j ACCEPT 2>/dev/null; do :; done
    done <<< "$cidrs"
    # Insert fresh tagged rules at the chosen position.
    local pos="$insert_at"
    while IFS= read -r cidr; do
        [[ -z "$cidr" ]] && continue
        $ipt -I INPUT "$pos" -s "$cidr" -p tcp -m multiport --dports 80,443 \
            -m comment --comment "$MARK" -j ACCEPT
        pos=$((pos + 1))
    done <<< "$cidrs"
}
# -----------------------------------------------------------
# 5. Refresh IPv4 and IPv6 CF rules
# -----------------------------------------------------------
# Insert at position 4 (after SSH, loopback, RELATED/ESTABLISHED).
refresh_cf_rules iptables "$V4" 4
refresh_cf_rules ip6tables "$V6" 4
# -----------------------------------------------------------
# 6. Persist both rulesets so they survive reboot
# -----------------------------------------------------------
mkdir -p "$(dirname "$IPTABLES_SAVE_V4")"
iptables-save > "$IPTABLES_SAVE_V4"
ip6tables-save > "$IPTABLES_SAVE_V6"
# -----------------------------------------------------------
# Done
# -----------------------------------------------------------
if [[ "$NGINX_CHANGED" == 1 ]]; then
    echo "nginx conf updated and reloaded."
fi
echo "iptables CF rules refreshed ($(wc -l <<< "$V4") ranges)."
echo "ip6tables CF rules refreshed ($(wc -l <<< "$V6") ranges)."
Make it executable:
sudo chmod 0755 /usr/local/bin/update-cloudflare-ips.sh
Run it once manually:
sudo /usr/local/bin/update-cloudflare-ips.sh
You should see output like:
```
iptables CF rules refreshed (15 ranges).
ip6tables CF rules refreshed (7 ranges).
```
If the nginx file changed, you'll also see `nginx conf updated and reloaded.` On subsequent runs where nothing changes, you'll still see the two "refreshed" lines; they always print.
Verify the rules landed:
sudo iptables -L INPUT -n --line-numbers
You should see a stack of rules tagged `/* cf-managed */` in positions 4 through 18 (or wherever the CF ranges were inserted). Non-CF rules you added earlier should still be there.
Same for IPv6:
sudo ip6tables -L INPUT -n --line-numbers
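Rather than eyeballing line numbers, you can count the managed rules. `count_cf_rules` is a helper made up here; it assumes `iptables -S` prints the comment as `--comment cf-managed` (it does when the comment contains no spaces).

```shell
# Sketch: count cf-managed rules from `iptables -S INPUT` output.
count_cf_rules() {   # arg: output of `iptables -S INPUT`
    grep -c -- '--comment cf-managed' <<< "$1"
}

# On the server (counts should match the script's "refreshed (N ranges)" lines):
# count_cf_rules "$(sudo iptables -S INPUT)"
# count_cf_rules "$(sudo ip6tables -S INPUT)"
```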
Step 5: Schedule weekly updates via cron
Create a cron job that runs the script weekly. Put it in `/etc/cron.d/` so it runs as root:
sudo nano /etc/cron.d/cloudflare-ips
Paste:
0 4 * * 0 root /usr/local/bin/update-cloudflare-ips.sh
This runs every Sunday at 04:00 server time. Adjust the time if you prefer a different quiet window.
Two gotchas to watch for with `/etc/cron.d/` files:
- File must end with a newline. Some cron implementations silently ignore a final line with no trailing newline. If `cat /etc/cron.d/cloudflare-ips` shows the rule and your shell prompt on the same line, there's no trailing newline. Fix with: `echo '' | sudo tee -a /etc/cron.d/cloudflare-ips`.
- File must not be executable. Cron refuses to parse executable files in `/etc/cron.d/`. Check `ls -l /etc/cron.d/cloudflare-ips` — permissions should be `-rw-r--r--`.
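Both gotchas are mechanical enough to script. `check_crond_file` is a helper invented for this post:

```shell
# Sketch: sanity-check an /etc/cron.d/ file for the two gotchas above.
check_crond_file() {
    local f="$1"
    [ -f "$f" ] || { echo "missing"; return 1; }
    # Cron refuses executable files in /etc/cron.d/.
    [ -x "$f" ] && { echo "executable (cron will ignore it)"; return 1; }
    # tail -c1 prints the last byte; empty output means it was a newline.
    [ -n "$(tail -c1 "$f")" ] && { echo "no trailing newline"; return 1; }
    echo "ok"
}

# check_crond_file /etc/cron.d/cloudflare-ips
```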
Verify cron loaded the file:
sudo journalctl -u cron --since "2 minutes ago" | grep -i cloudflare
You should see a line like `RELOAD (/etc/cron.d/cloudflare-ips)` within a minute of creating the file.
Step 6: Verify the lockdown actually works
This step is the one most guides skip, and it's how you discover whether your setup is real or decorative.
6a. Confirm the default policy really is DROP
sudo iptables -L INPUT -n | head -1
sudo ip6tables -L INPUT -n | head -1
Both must say `Chain INPUT (policy DROP)`. If either says `policy ACCEPT`, your rules are decorative.
6b. Confirm nginx sees real visitor IPs
Hit the site from your local machine, then on the server:
sudo tail -f /var/log/nginx/access.log
The leading IP should be your residential IP (check `curl ifconfig.me` locally), not a Cloudflare address. If you still see Cloudflare addresses, step 2 didn't take effect.
6c. Confirm the origin is not reachable bypassing Cloudflare
From a machine that is *not* Cloudflare and *not* in your allow rules, try to hit the origin IP directly.
First, find your origin IP:
curl -4 ifconfig.me # IPv4 public IP of the server
Then from your local machine, with a browser or `curl`, try:
curl -v --resolve YOUR_WEBSITE.com:443:YOUR_ORIGIN_IP https://YOUR_WEBSITE.com
Expected result: connection times out or gets refused. The TCP handshake to origin should not complete.
If this succeeds, your firewall is not actually locking down what you think it is. Revisit step 3.
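The curl exit code tells you which failure mode you got. `classify_curl_exit` is made up for this post; exit 28 is curl's timeout code and 7 is connection refused.

```shell
# Sketch: interpret the exit code of the direct-to-origin curl test above.
classify_curl_exit() {
    case "$1" in
        0)  echo "FAIL: origin reachable directly, lockdown not working" ;;
        7)  echo "OK: connection refused" ;;
        28) echo "OK: connection timed out (what a DROP policy produces)" ;;
        *)  echo "inconclusive (curl exit $1)" ;;
    esac
}

# From a non-Cloudflare machine (YOUR_WEBSITE.com / YOUR_ORIGIN_IP are placeholders):
# curl -s -o /dev/null --max-time 10 \
#     --resolve YOUR_WEBSITE.com:443:YOUR_ORIGIN_IP https://YOUR_WEBSITE.com
# classify_curl_exit $?
```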
6d. Confirm Cloudflare can still reach the origin
Open the site normally in a browser. It should load. If it shows Cloudflare's "Error 521: Web server is down" page, your firewall is blocking Cloudflare too; the CF allow rules aren't working. Check that the update script from step 4 actually ran and that the iptables and ip6tables rules include the CF ranges.
The IP list snapshot in the nginx conf drifts without the cron job
If you set up step 2 but not step 5, the trusted-proxy list in nginx and the allow rules in the firewall will eventually go stale as Cloudflare adds new ranges. Requests arriving from a new CF range get dropped at the firewall (some visitors see timeouts or Cloudflare 52x errors), and for ranges missing from the nginx conf, `$remote_addr` silently falls back to Cloudflare's edge IP. Both failure modes are hard to spot.
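You can spot drift manually by diffing the live list against what's installed. `extract_cidrs` is a helper invented here; it pulls the CIDRs back out of the generated conf.

```shell
# Sketch: extract the CIDRs from the generated nginx conf so they can be
# diffed against Cloudflare's live list.
extract_cidrs() {   # arg: contents of cloudflare-real-ip.conf
    sed -n 's/^set_real_ip_from \(.*\);$/\1/p' <<< "$1"
}

# On the server (drop IPv6 lines before diffing against the v4 list):
# diff <(curl -fsSL https://www.cloudflare.com/ips-v4) \
#      <(extract_cidrs "$(cat /etc/nginx/conf.d/cloudflare-real-ip.conf)" | grep -v ':')
```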
SSH brute force bots will still hit you
With SSH on port 22 allowed from anywhere, you'll get constant login attempts in `/var/log/auth.log`. This is harmless as long as you enforce key-only auth (`PasswordAuthentication no` in `/etc/ssh/sshd_config`), but it's noisy. If you want to quiet it down, install `fail2ban` or move SSH to a non-standard port.
You still need to rotate your TLS cert
Cloudflare validates your origin cert in Full (Strict) mode. If your Let's Encrypt cert expires because certbot stopped renewing, Cloudflare will serve users an SSL error. Verify `sudo systemctl list-timers | grep certbot` shows a scheduled renewal.
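A belt-and-braces check is to compute days to expiry from the cert itself. `days_until_expiry` is a helper invented for this post; it assumes GNU `date` (present on Debian/Ubuntu), and the cert path shown is certbot's default layout with `YOUR_DOMAIN` as a placeholder.

```shell
# Sketch: days until a cert expires, from an openssl "notAfter=" line.
# Assumes GNU date.
days_until_expiry() {   # arg: e.g. "notAfter=Mar  1 12:00:00 2026 GMT"
    local end="${1#notAfter=}"
    local end_s now_s
    end_s=$(date -d "$end" +%s)
    now_s=$(date +%s)
    echo $(( (end_s - now_s) / 86400 ))
}

# On the server, against the live Let's Encrypt cert:
# days_until_expiry "$(openssl x509 -noout -enddate \
#     -in /etc/letsencrypt/live/YOUR_DOMAIN/fullchain.pem)"
```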
If you ever disable Cloudflare, you break everything
This setup deliberately makes the origin unreachable without Cloudflare. If you turn off the orange cloud in the DNS tab, Cloudflare stops forwarding traffic, and your firewall blocks everyone else — the site goes dark. Not a bug, just a dependency worth being aware of.
Rollback
If something goes wrong after any step, the fastest way back is:
# Reset INPUT policy (unlocks everything)
sudo iptables -P INPUT ACCEPT
sudo ip6tables -P INPUT ACCEPT
Then remove the nginx file:
sudo rm /etc/nginx/conf.d/cloudflare-real-ip.conf
sudo nginx -t && sudo systemctl reload nginx
And if you've already saved the rules:
# Re-save the now-open policy so a reboot doesn't re-apply the DROP
sudo iptables-save > /etc/iptables/rules.v4
sudo ip6tables-save > /etc/iptables/rules.v6
That's a full reset. If you can't even get a shell, fall back to the VPS provider's console (see prerequisites) and clear `/etc/iptables/rules.v4` and `/etc/iptables/rules.v6` entirely.
Summary
Cloudflare in front of your site is not the same as "protected by Cloudflare." Locking the origin to Cloudflare's IP ranges requires coordinated changes in three places — CF's dashboard, your host firewall, and nginx — and the firewall part in particular has several ways to look correct while doing nothing. The script in step 4 keeps the moving parts in sync after setup, but it does not do the setup itself.
If there's one takeaway: **verify from the outside**. Run step 6c from a machine that isn't Cloudflare. If that direct hit to your origin IP times out, the lockdown is real. If it succeeds, something is off regardless of how the config files look.