I tried to expose a “content review dashboard” at localhost:8080 through a Cloudflare Tunnel. Discovered mid-verification that the localhost:8080 service was actually an npx serve . process launched from ~/Documents/, bound to 0.0.0.0 (by default), with directory listing on (by default), serving the entire ~/Documents folder - every client workspace, every token in tmp/, every personal file - to whoever could reach it.
The tunnel had been live for ninety seconds with the Cloudflare Access policy not yet propagated to the new hostname. I killed it, cleaned up, pivoted to Tailscale.
Writing this so nobody else (me included) ever does it again.
The Setup
I was building a single dashboard that aggregates the state of every project, pipeline item, task, and service in my workspace. Phase 3 was “make this viewable from my phone.” The natural starting point was the two dashboards I already had running:
- localhost:3847 - a Python daemon task dashboard
- localhost:8080 - a content review tool I was actively using, served via npx serve
The plan: one Cloudflare Tunnel exposes both through two subdomains, with a Cloudflare Access policy gating each to my email. Reasonable-sounding plan. Wrong plan.
What localhost:8080 Actually Was
A process I found by running lsof -iTCP:8080 -sTCP:LISTEN:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 15134 username 14u IPv6 0xe4125ff3358be944 0t0 TCP *:http-alt (LISTEN)
The *:http-alt is the giveaway - the asterisk means “all interfaces.” This wasn’t bound to localhost. It was bound to 0.0.0.0, which means any device on any network my Mac was connected to could reach it.
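The check generalizes: in lsof output the bind address is the NAME field, the token just before the trailing (LISTEN). A tiny helper of my own (not part of lsof) that classifies that field:

```shell
# classify an lsof NAME field: loopback-only vs reachable from the network
# (is_loopback_bound is my own helper name, not a standard tool)
is_loopback_bound() {
  case "$1" in
    127.0.0.1:*|localhost:*|\[::1\]:*) return 0 ;;  # loopback: private to this machine
    \*:*|0.0.0.0:*)                    return 1 ;;  # all interfaces: audit it
    *)                                 return 1 ;;  # anything else: assume exposed
  esac
}

# usage: is_loopback_bound '*:http-alt' || echo 'bound to all interfaces'
```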
The process was npm exec serve . -p 8080 --cors. That’s Vercel’s serve npm package, launched via npx serve . from the ~/Documents/ directory. At some point weeks earlier, I had needed an HTTP origin for an AI content reviewer, spun up npx serve in a terminal, never closed it, and it had been running ever since.
Curl confirmed:
$ curl -s http://localhost:8080/ | head -5
<title>Files within Documents/</title>
An open directory listing of ~/Documents, every folder browsable: every client workspace, plus tmp/ containing API tokens, PAT files, and credentials. All over HTTP with no auth, with CORS enabled, on 0.0.0.0:8080, to anyone on the same network.
The Defaults That Caused This
Three serve defaults and one leftover flag compounded into a footgun:
- serve binds to 0.0.0.0 by default. The package assumes you want your dev server reachable from other devices on your LAN. To bind to localhost you have to pass -l tcp://127.0.0.1:PORT as an explicit listener.
- serve enables directory listing by default. If there’s no index.html, serve helpfully renders a browsable index. Inside ~/Documents/, every folder at that level gets listed.
- serve follows symlinks by default.
- --cors was added at some point (probably because the AI reviewer’s browser-side JS was getting blocked) and never removed. Combined with the 0.0.0.0 binding, any page on any site could JavaScript-fetch my entire Documents folder.
None of these are individually wrong. serve is a dev tool - the defaults are optimized for “I want to quickly share a folder with my teammate on the same WiFi.” The defaults are wrong for “this process will run forever in the background and I’ll forget about it.”
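If I keep using serve at all, the explicit listener flag is the habit to build. A throwaway helper (the name is mine; serve itself only needs the -l value) that makes the loopback listener string hard to mistype:

```shell
# build the loopback listener string that serve's -l flag expects
# (local_listener is my own helper name; port defaults to 8080)
local_listener() {
  printf 'tcp://127.0.0.1:%s' "${1:-8080}"
}

# usage: npx serve -l "$(local_listener 8080)" ./some-folder
```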
The SSL Detour That Made It Worse
Cloudflare’s free-tier Universal SSL covers the apex and a one-level wildcard (*.alcanah.co). It does not cover second-level subdomains like d.ops.alcanah.co. When I tried to curl my second-level subdomain, I got sslv3 alert handshake_failure.
I fixed it by switching to single-level subdomains and reloading the LaunchAgent.
But Cloudflare Access applications are scoped to specific hostnames. When I changed the hostnames, the Access policy I’d configured was still attached to the old subdomains - it didn’t automatically follow. So the new hostnames were live, resolvable, TLS-serving, and ungated. Anyone who knew the URL could load them without authentication.
And I hadn’t yet verified what localhost:8080 served.
The Verification That Saved Me
My first post-SSL-fix test was a simple curl:
curl -sS -o /tmp/response.html \
-w " HTTP %{http_code} | TLS verify %{ssl_verify_result} | size %{size_download}\n" \
https://ops-c.alcanah.co/
Got back HTTP 200 | TLS verify 0 | size 8290. Then I looked at the response body and saw the words “Files within Documents/” and a list of every project folder I own.
Ninety seconds had elapsed between the LaunchAgent reload (which put the ungated hostnames live) and my reading the response body. I killed the LaunchAgent immediately.
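There was a second check I should have run before trusting the hostname at all: an Access-gated hostname answers an unauthenticated curl with a redirect to the team login page, not a 200 from the origin. A sketch of how I'd interpret that status code now (the 302 expectation is my assumption about Access behavior - verify against your own setup):

```shell
# interpret the HTTP status an Access-gated hostname returns to a bare curl
# (assumption: Cloudflare Access redirects unauthenticated requests with a 302)
classify_access() {
  case "$1" in
    302) echo "gated: redirected to login" ;;
    200) echo "OPEN: origin reachable without auth" ;;
    *)   echo "unclear: got $1, inspect manually" ;;
  esac
}

# usage, with the hostname from this post:
# classify_access "$(curl -s -o /dev/null -w '%{http_code}' https://ops-c.alcanah.co/)"
```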
Root Causes, Least to Most Important
Root cause 4 (shallow): Cloudflare Universal SSL doesn’t cover two-level subdomains on the free tier.
Root cause 3: Cloudflare Access policies are hostname-scoped, not tunnel-scoped. When you change a hostname, you have to remember to update the policy.
Root cause 2: I didn’t verify what localhost:8080 actually served before tunneling it. A five-second curl localhost:8080 | head would have revealed the directory listing.
Root cause 1 (the real one): I tunneled a dev server. Any HTTP server built for “run this in my terminal for a few minutes” has defaults optimized for ergonomics, not security. None of those defaults are acceptable for a service you intend to run for weeks at a time in the background, let alone one you intend to expose to the internet.
The rule I want to internalize: if a service’s purpose is “be reachable from outside localhost,” it should be a purpose-built server with an explicit allowlist, not a dev server you forgot to kill.
What I Should Have Done
Before touching the tunnel at all:
- Run a port audit to see what was actually listening (in lsof output the bind address is the field just before the trailing (LISTEN), hence $(NF-1)):

lsof -iTCP -sTCP:LISTEN -n -P | awk '$(NF-1) ~ /^(\*|0\.0\.0\.0):/'

- Verify what each localhost port served before exposing it:

for port in 3847 8080; do curl -sS --max-time 3 http://localhost:$port/ | head -5; done

- Build a purpose-specific server - a small app that serves only what I want exposed, from a dedicated folder, with explicit path allowlisting and no directory listing.

- Use a private network (Tailscale) instead of a public URL for “access my own dashboard from my own phone.” Cloudflare Tunnel is great for public surfaces. Tailscale is great for private ones. I reached for the wrong primitive.
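The purpose-specific-server point deserves a concrete shape. The simplest version isn't even a new server - it's refusing to ever point a server at a working directory. Copy an explicit allowlist of files into a dedicated export folder and serve only that. A sketch (build_export is my own helper name; the paths in the usage line are illustrative):

```shell
# copy ONLY an explicitly named allowlist of files into a dedicated export dir
# (build_export is my own helper; nothing here is serve- or tunnel-specific)
build_export() {
  src=$1; dest=$2; shift 2
  mkdir -p "$dest"
  for f in "$@"; do        # each file named explicitly - no globs, no "everything"
    cp "$src/$f" "$dest/"
  done
}

# usage (illustrative paths):
# build_export ~/Documents/dashboard ~/dashboard-export index.html app.js
```

Then the server - whatever it is - gets pointed at the export directory, never at ~/Documents.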
Protective Measures Going Forward
Weekly port audit cron. A scheduled task that runs lsof -iTCP -sTCP:LISTEN every week, filters for anything bound to * or 0.0.0.0, compares against an allowlist, and alerts if anything new appears.
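A sketch of that audit, with the filtering split into a function. The allowlist path and format are my own convention (one "command address" pair per line), and the $(NF-1) field is the bind address, since lsof appends "(LISTEN)" as the last token:

```shell
# keep only listeners bound to all interfaces; reads lsof output on stdin.
# the NAME field sits just before the trailing "(LISTEN)", hence $(NF-1)
wildcard_listeners() {
  awk '$(NF-1) ~ /^(\*|0\.0\.0\.0):/ {print $1, $(NF-1)}' | sort -u
}

# alert on anything not already approved
# (allowlist path is an assumption; one "command address" pair per line)
allowlist="${HOME}/.config/port-allowlist.txt"
[ -f "$allowlist" ] || allowlist=/dev/null
unexpected=$(lsof -iTCP -sTCP:LISTEN -n -P 2>/dev/null |
  wildcard_listeners | grep -vxFf "$allowlist" || true)
if [ -n "$unexpected" ]; then
  printf 'NEW WILDCARD LISTENERS:\n%s\n' "$unexpected"
fi
```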
serve shell wrapper. A function in .zshrc that intercepts serve without an explicit -l listen flag and refuses to run.
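A sketch of that wrapper (written sh-compatible; adjust to zsh taste). Note it only guards the bare serve name - npx serve and npm exec serve bypass it, which is exactly how the original process got started:

```shell
# refuse to run serve unless a listen address was given explicitly
serve() {
  case " $* " in
    *" -l "*|*" --listen "*|*"--listen="*)
      command serve "$@" ;;    # explicit listener: allow through to the real binary
    *)
      echo "serve: refusing without -l (try: serve -l tcp://127.0.0.1:8080 .)" >&2
      return 1 ;;
  esac
}
```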
Verification checklist before any tunnel or remote-access setup:
- curl -s localhost:PORT/ | head -20 - what does it actually serve?
- lsof -iTCP:PORT -sTCP:LISTEN - what’s it bound to?
- ls <served-root> - what’s in the directory being served?
If any of those returns something unexpected, I don’t expose it. Period.
The Broader Lesson
“Private by default” should mean “private by code, not by attention.” “I’ll remember to configure it correctly” is not a security model. The architecture should make the failure mode safe. A dev server you’re accidentally exposing to the internet is a failure mode you should not be able to achieve by forgetting a flag.
Tailscale made this easy because its failure mode is “unreachable” - the worst thing that happens if you misconfigure Tailscale is a service you wanted to reach doesn’t load. Cloudflare Tunnel’s failure mode is “reachable by whoever finds the URL.” Pick tools whose worst case matches your threat model.
If You’re Reading This Thinking “Am I Doing This Right Now?”
Run this:
lsof -iTCP -sTCP:LISTEN -n -P 2>/dev/null | awk '$(NF-1) ~ /^(\*|0\.0\.0\.0):/'
Every line is a service bound to all network interfaces on your machine. For each one, check the process name. Anything you recognize as a dev server (node serve, python -m http.server, vite, astro, next dev): check what it serves, and kill it if it’s serving more than you thought.
The five seconds it takes to run that command is cheap insurance.