If you use Claude Code, Cursor, or any MCP-based AI tool, check this right now:
```bash
ps aux | grep mcp-server | grep -v grep
```
I found 11 orphaned mcp-server-cloudflare processes running simultaneously. 550% combined CPU. The oldest had been burning cycles for 48+ hours. My machine was crawling and I had no idea why.
This pattern is documented in claude-code issue #1935 - it's a known problem, not something unique to my setup.
The Problem
Claude Code spawns MCP (Model Context Protocol) server processes as child node processes. Each configured MCP server (Cloudflare, Perplexity, etc.) runs as a separate node process for the duration of a session. When a session ends cleanly, these processes are supposed to be terminated. In practice, they frequently survive:
- Session crashes or unexpected termination
- User closes the terminal or IDE without a clean shutdown
- Multiple Claude Code sessions spawn duplicate MCP instances
- A daemon spins up sessions that end abruptly when conversations finish
The result is orphaned node processes that consume CPU indefinitely. They aren’t doing useful work - they’re stuck in event loops with no parent to communicate with.
Observed Impact
On 2026-02-22, I discovered 11 orphaned mcp-server-cloudflare processes running simultaneously, each consuming ~50% CPU (550% combined). The oldest two had been running since Saturday with 387+ minutes of CPU time each. The system was noticeably sluggish.
After killing all 11, a new zombie appeared within hours - a single mcp-server-cloudflare process had already accumulated 192 minutes of CPU at 97.9% usage by the time I checked again.
Why Cloudflare Specifically?
All observed zombies were mcp-server-cloudflare. Other MCP servers (Perplexity, Google Workspace) did not exhibit the same behavior. This may be due to:
- The Cloudflare MCP server’s internal event loop behavior when disconnected from its parent
- How npx caches and spawns the Cloudflare package vs. others
- Coincidence of which sessions crashed (needs more data)
Regardless, the monitor needs to be generic - any MCP server could exhibit this behavior.
Detection Logic
How to distinguish orphans from active processes:
- An active MCP process has a live Claude Code parent process in its process tree
- An orphaned MCP process either has PID 1 as its parent (reparented to init/launchd) or its original parent is gone
- Secondary signal: sustained high CPU (>40%) with no parent - active MCP servers are mostly idle, waiting for tool calls
```bash
# Find MCP processes whose parent is PID 1 (orphaned and reparented)
ps -eo pid,ppid,pcpu,etime,command | grep mcp-server | awk '$2 == 1 {print}'
```
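Putting both signals together, the cleanup pass can be one short script. Here's a minimal sketch - the 40% threshold and the mcp-server name filter are assumptions to tune for your own servers, and the log path matches the Logging section below:

```bash
#!/usr/bin/env bash
# Sketch: kill MCP processes that are orphaned (PPID 1) AND burning CPU.
# The 40% CPU threshold and the name filter are assumptions; tune them.
CPU_THRESHOLD=40
LOG="$HOME/Library/Logs/alcanah-node-monitor.log"

ps -eo pid=,ppid=,pcpu=,etime=,command= | while read -r pid ppid pcpu etime cmd; do
  # Only consider MCP server processes
  case "$cmd" in *mcp-server*) ;; *) continue ;; esac
  # Orphaned (reparented to init/launchd) and burning CPU?
  if [ "$ppid" -eq 1 ] && [ "${pcpu%.*}" -ge "$CPU_THRESHOLD" ]; then
    echo "$(date '+%Y-%m-%d %H:%M:%S') killing $pid: $cmd (${pcpu}% CPU, up $etime)" >> "$LOG"
    kill "$pid" 2>/dev/null
    sleep 2
    # Escalate to SIGKILL only if the process is still alive
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid" 2>/dev/null
  fi
done
```

Note there's no grep in the pipeline, so the script never matches itself; the awk-style PPID filter from the one-liner above becomes the shell test.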
The Fix: Periodic LaunchAgent
A macOS LaunchAgent that fires a cleanup script every 10 minutes.
Why LaunchAgent over cron? LaunchAgents are the macOS-native way to schedule periodic tasks, they survive sleep/wake cycles (cron jobs miss their window), and they integrate with launchctl for easy management.
How launchd scheduling works:
- launchd (PID 1) is always running on every Mac - it's the system process manager
- It reads `.plist` files from `~/Library/LaunchAgents/` at login
- The `StartInterval` key tells launchd "run this script every N seconds, then let it exit"
- No background process sits in memory between runs - launchd manages the timer internally
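Concretely, installing the agent can be as small as this - a sketch where the com.example label, the script path, and the 600-second interval are placeholders for your own setup:

```bash
# Sketch: install a LaunchAgent that runs the cleanup script every 10 minutes.
# Label and script path are placeholders.
cat > ~/Library/LaunchAgents/com.example.mcp-zombie-monitor.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.mcp-zombie-monitor</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/usr/local/bin/kill-orphaned-mcp.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>600</integer>
</dict>
</plist>
EOF

# Load it for the current user session (launchctl load is the classic form;
# newer macOS prefers launchctl bootstrap, but both work here).
launchctl load ~/Library/LaunchAgents/com.example.mcp-zombie-monitor.plist
```

Afterwards, `launchctl list | grep mcp-zombie-monitor` confirms the job is registered, and `launchctl unload` removes it.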
Platform equivalents:
| Platform | Scheduler | Script |
|---|---|---|
| macOS | LaunchAgent (.plist + launchd) | Bash |
| Linux | systemd timer or cron | Bash |
| Windows | Task Scheduler | PowerShell |
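For the Linux row, the shape is similar - a sketch of a systemd user timer, with placeholder unit names and the same hypothetical cleanup-script path:

```bash
# Sketch: systemd user timer equivalent on Linux. Unit names are placeholders.
mkdir -p ~/.config/systemd/user

cat > ~/.config/systemd/user/mcp-zombie-monitor.service <<'EOF'
[Unit]
Description=Kill orphaned MCP server processes

[Service]
Type=oneshot
ExecStart=/usr/local/bin/kill-orphaned-mcp.sh
EOF

cat > ~/.config/systemd/user/mcp-zombie-monitor.timer <<'EOF'
[Unit]
Description=Run MCP zombie cleanup every 10 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=10min

[Install]
WantedBy=timers.target
EOF

systemctl --user daemon-reload
systemctl --user enable --now mcp-zombie-monitor.timer
```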
A small companion check in the Claude Code session-init hook catches any zombies that accumulated since the last session, giving you immediate relief at session start.
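The hook body can literally be one line - a sketch, assuming the same script path as above (the exact event name and config wiring vary by Claude Code version, so check the hooks documentation):

```bash
# Sketch of a session-start hook body: run the same cleanup script,
# but never block or fail session startup because of it.
/usr/local/bin/kill-orphaned-mcp.sh >/dev/null 2>&1 || true
```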
Why This Matters for Daemons
When you run a daemon that triggers Claude Code sessions (Slack bridge, scheduled tasks, anything autonomous), each session spawns MCP processes. If the upstream conversation ends abruptly or the session crashes, those MCP processes become orphans - and the user who triggered the session never sees it happen.
On a daemon host, orphans accumulate from every user’s sessions, not just one person’s local work. The monitor stops being an optimization and starts being essential infrastructure.
Logging
Kills get logged to ~/Library/Logs/alcanah-node-monitor.log with timestamp, PID, process name, CPU usage, and uptime. This lets you:
- Confirm the monitor is working
- Track which MCP servers produce the most zombies (see the one-liner below)
- Notice when the root cause gets fixed upstream (kill frequency drops to zero)
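For the second point, a quick tally works - assuming the log format from the cleanup sketch earlier, where each kill line includes the full process command:

```bash
# Count kills per MCP server name, assuming each log line contains the
# killed process's command string (as in the cleanup sketch above).
grep -o 'mcp-server-[a-z-]*' ~/Library/Logs/alcanah-node-monitor.log \
  | sort | uniq -c | sort -rn
```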
If you’re building local AI infrastructure, this is the kind of thing nobody warns you about until your laptop is screaming.