The fix isn’t to make the daemon smarter about logging. The fix is to make the daemon write to the same files, in the same format, as interactive sessions.

If the daemon appends a [daemon] entry to logs/temp/alcanah-41fred-2026-04-09.md, every downstream consumer - session-logger, status-reporter, goals-weekly-review, any future dashboard - picks it up for free. Zero integration work per consumer.

This is convergent logging: multiple producers, one log stream. The producers don’t know about each other. The consumers don’t care who wrote the entry. The format is the contract.

The Problem

When you add automation to an AI workspace - nightly git syncs, weekly CRM pulls, batch enrichment jobs - you create a second stream of activity that’s invisible to the first.

Interactive sessions write to temp logs. The session-logger consolidates them into master logs. Status reports, weekly reviews, and dashboards all read from master logs. This is the workspace’s nervous system - it’s how continuity works between sessions.

But the daemon runs at 2am while you’re asleep. It commits repos, pulls CRM data, enriches contacts. None of that appears in the logs. The next morning, your session starts cold. The status report says nothing happened overnight. The weekly review misses 7 days of automated work.

This is worse than having no automation at all. At least without automation, you know nothing happened. With invisible automation, you lose track of what your own system is doing.

Why It Happens

The two systems evolved independently:

Interactive logging was designed first. It assumes a human is present, making decisions, and Claude logs as it goes. The format is markdown temp logs, one per project per day, consolidated monthly. Every agent, hook, and dashboard reads from this system.

Daemon logging was bolted on later. The daemon runs as a macOS LaunchAgent - a background process with no terminal, no Claude session, no VS Code. It can’t “decide” to log. So it writes to its own location: a JSONL file for structured data, a separate markdown reports directory for human-readable summaries.

The result is two parallel histories of the same workspace.

What Made It Hard: The FDA Permission Wall

The workspace lives in ~/Documents/Alcanah AI/ - which is actually ~/Library/Mobile Documents/com~apple~CloudDocs/Documents/Alcanah AI/ because iCloud manages the Documents folder. macOS requires Full Disk Access for any process that writes to iCloud-managed directories.

Interactive Claude sessions have FDA because VS Code has it. The daemon’s LaunchAgent does not. Python scripts run by launchd inherit the TCC permissions of their app bundle, not the binary path.

This means:

  • Granting FDA to /usr/bin/python3 doesn’t help - it’s a shim, not the real binary
  • Granting FDA to /Library/.../Python.app doesn’t help if the plist runs /usr/bin/python3
  • The plist must invoke the actual Python.app/Contents/MacOS/Python binary, AND that binary must have FDA
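
A minimal LaunchAgent plist sketch of that last point. The label, framework version, and script path are placeholders - the part that matters is that ProgramArguments points at the real Python.app binary, not the /usr/bin/python3 shim:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.alcanah-daemon</string>
  <key>ProgramArguments</key>
  <array>
    <!-- This binary is the one that must be granted Full Disk Access -->
    <string>/Library/Frameworks/Python.framework/Versions/3.12/Resources/Python.app/Contents/MacOS/Python</string>
    <string>/Users/you/alcanah-daemon/daemon.py</string>
  </array>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```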

I discovered this through four failed attempts. The daemon ran fine, tasks executed, results posted to Slack - but every file write to the workspace silently failed with Operation not permitted. The daemon’s error handling caught it and fell back to writing to its own directory. Nobody noticed until I checked.

The Environment Variable Persistence Trap

macOS launchd preserves environment variables across process restarts within the same session. Python’s python-dotenv library defaults to override=False - it won’t overwrite an env var that already exists. So when I updated .env to fix a broken path, killed the daemon, and let KeepAlive restart it… the old value persisted. The fix looked like it worked (daemon restarted) but the config change was silently ignored.
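
The failure mode is easy to reproduce without launchd. This sketch uses a stand-in function for python-dotenv's default behavior (the real fix is passing override=True to load_dotenv):

```python
import os

def load_env_var(key, value, override=False):
    """Stand-in for python-dotenv's default: with override=False, a
    variable already present in the environment is left untouched."""
    if override or key not in os.environ:
        os.environ[key] = value

# launchd preserved this stale value across the daemon restart
os.environ["WORKSPACE_PATH"] = "/old/broken/path"

# Re-reading .env with the default override=False changes nothing -
# the "fixed" config is silently ignored
load_env_var("WORKSPACE_PATH", "/new/fixed/path")

# The fix, in python-dotenv terms: load_dotenv(override=True)
load_env_var("WORKSPACE_PATH", "/new/fixed/path", override=True)
```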

This is the kind of bug that only appears in LaunchAgent contexts. Interactive Python, Docker containers, and SSH sessions all start with clean environments. LaunchAgents don’t.

The Design

What Converges

| Producer | Format | Location |
| --- | --- | --- |
| Interactive Claude session | ## HH:MM - [Brief description] + bullets | logs/temp/{project}-{username}-{date}.md |
| Daemon task execution | ## HH:MM - [daemon] {job_id} ({status}) + bullets | Same file |

The [daemon] prefix is the only difference. Everything else - timestamp format, bullet structure, file naming - is identical.
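
A sketch of what the daemon's writer could look like, following the file-naming and entry format above. The helper itself is illustrative, not the daemon's actual code:

```python
from datetime import datetime
from pathlib import Path

def append_daemon_entry(workspace, project, username, job_id, status, bullets):
    """Append a [daemon] entry to the same temp log that interactive
    sessions use: logs/temp/{project}-{username}-{date}.md."""
    now = datetime.now()
    path = (Path(workspace) / "logs" / "temp"
            / f"{project}-{username}-{now:%Y-%m-%d}.md")
    path.parent.mkdir(parents=True, exist_ok=True)
    entry = [f"\n## {now:%H:%M} - [daemon] {job_id} ({status})\n"]
    entry += [f"- {b}\n" for b in bullets]
    with path.open("a", encoding="utf-8") as f:
        f.writelines(entry)
    return path
```

Because the entry lands in the shared file, session-logger consolidates it with no awareness that a daemon wrote it.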

Fallback Chain

Daemon completes task
  |
  v
Try: write to workspace logs/temp/ (requires FDA)
  |-- success -> done
  |-- fail
       |
       v
Try: write to ~/alcanah-daemon/reports/ (always writable)
       |
       v
Log a warning (to the daemon log, not the workspace log)

Convergent logging degrades gracefully. On a new machine without FDA configured, the daemon still works - it just writes to a sidecar location that needs manual checking until FDA is granted.
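
The chain above might be sketched like this; paths and the logger name are illustrative:

```python
import logging
from pathlib import Path

logger = logging.getLogger("daemon")

def write_log_entry(entry, workspace_log, fallback_dir):
    """Try the workspace temp log first; on a permission (or any I/O)
    failure, fall back to the always-writable sidecar directory."""
    try:
        with open(workspace_log, "a", encoding="utf-8") as f:
            f.write(entry)
        return Path(workspace_log)
    except OSError:
        # FDA missing (or the path is otherwise unwritable): degrade
        # gracefully to the daemon's own reports directory
        fallback = Path(fallback_dir) / Path(workspace_log).name
        fallback.parent.mkdir(parents=True, exist_ok=True)
        with fallback.open("a", encoding="utf-8") as f:
            f.write(entry)
        logger.warning("workspace write failed; wrote %s instead", fallback)
        return fallback
```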

Key Decisions

Daemon entries use the same file, not a separate one. I considered logs/temp/daemon-{date}.md as a separate stream. Rejected because every consumer would need to know about both files. Using the same file means consumers get daemon entries for free.

[daemon] prefix, not a separate section. I considered grouping daemon entries under ## Automated Tasks at the bottom. Rejected because chronological interleaving is more useful - you want to see “nightly sync ran at 2am, then you started working at 9am” in sequence, not split across sections.
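
A hypothetical day's temp log, entries invented for illustration, showing why interleaving reads better than a separate section:

```markdown
## 02:00 - [daemon] nightly-git-sync (success)
- Committed and pushed 3 repos
- Pulled CRM updates, 42 contacts enriched

## 09:12 - Reviewed overnight automation results
- Confirmed the 2am sync and CRM pull completed
- Started drafting the weekly review
```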

JSONL is kept as a parallel record. The workspace temp log gets a human-readable summary. The JSONL (task-log.jsonl) keeps the structured data: exact timestamps, latency calculations, retry counts, output previews. The dashboard reads JSONL. Session-logger reads markdown. Both are authoritative for their purpose.
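
A sketch of the structured side of that split; the field names here are assumptions, not the daemon's actual schema:

```python
import json

def record_task(jsonl_path, job_id, status, started, finished, output):
    """Append one structured record per task to the JSONL sidecar.
    Timestamps are epoch seconds; latency is derived, not re-measured."""
    record = {
        "job_id": job_id,
        "status": status,
        "started": started,
        "finished": finished,
        "latency_s": round(finished - started, 3),
        "output_preview": output[:200],
    }
    with open(jsonl_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One append per task keeps the file a valid JSONL stream that a dashboard can tail without parsing the markdown at all.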

FDA is granted to Python.app, not /usr/bin/python3. This is a macOS-specific decision. The LaunchAgent runs the Python.app binary directly (no bash wrapper, no shim). This is less portable but correct for the TCC permission model.

Broader Pattern

This isn’t unique to AI workspaces. Any system with both human-driven and automated activity needs convergent logging to maintain a single source of truth. CI/CD pipelines solve this by writing to the same job log regardless of trigger (manual vs webhook vs schedule). Kubernetes solves it by routing all pod logs to the same aggregator. The principle is the same: if two things affect the same system, they should write to the same log.

The specific challenge in AI workspaces is that the “human” side is itself an AI (Claude) writing natural-language logs, and the “automated” side is a daemon running that same AI non-interactively. The format contract - markdown temp logs with timestamps and bullet points - is what makes convergence work without any consumer needing to care about the source.