Ghost Token Scanner
Silent context waste, found and priced
npx promptreports-cli context --ghosts

Extends context with a scanner for waste that never shows in summaries. Finds duplicate tool results across sessions, oversized tool payloads, post-compaction residue, and unused loaded skills. Reports waste and estimated monthly cost.
Your summary tells you what you spent; ghost tokens tell you what you wasted. This scanner walks every .jsonl session file and detects five kinds of silent bloat:
- duplicate tool results (the same large file read or API response returned 3+ times)
- oversized tool input payloads (agents pasting content that should be path references)
- post-compaction residue (content that survived compaction but didn't need to)
- skills loaded but never invoked (each costs ~300 tokens of context per session)
- a bloated CLAUDE.md
Output shows waste severity, total tokens wasted, the daily average, and the estimated monthly dollar cost of the ghosts alone.
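The duplicate-result check above can be approximated with a hash-and-count pass over session lines. A minimal sketch, assuming each .jsonl line is a JSON object with a `toolResult` text field; the real Claude Code schema and field names may differ:

```python
import hashlib
import json
from collections import Counter

def find_duplicate_tool_results(jsonl_lines, min_size=500, min_repeats=3):
    """Hash large tool results and report those repeated min_repeats+ times.

    Assumes each line is a JSON object with a 'toolResult' string field;
    the actual .jsonl schema may use different keys.
    """
    counts = Counter()
    for line in jsonl_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if not isinstance(event, dict):
            continue
        result = event.get("toolResult", "")
        # Only large payloads matter; small results are cheap to repeat.
        if isinstance(result, str) and len(result) >= min_size:
            counts[hashlib.sha256(result.encode()).hexdigest()] += 1
    return {digest: n for digest, n in counts.items() if n >= min_repeats}
```

Repeats beyond the first copy are the waste: a 7K-token result returned three times costs roughly 14K ghost tokens.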
Prerequisites
- At least one Claude Code session in ~/.claude/projects/
Flags & Options
| Flag | Description | Default |
|---|---|---|
| --ghosts | Enable ghost-token scan mode | — |
| --days N | Lookback period in days | 7 |
| --json | Machine-readable findings | — |
Examples
7-day ghost scan
npx promptreports-cli context --ghosts

The default: scans the last 7 days of session files.
30-day ghost scan
npx promptreports-cli context --ghosts --days 30

A wider window reveals more duplicate tool results.
Export for dashboards
npx promptreports-cli context --ghosts --json

Pipe the findings into monitoring or CI.
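The JSON output can feed a CI gate. A sketch with a hypothetical findings shape; the actual promptreports-cli schema is not documented here, so field names like `dailyAvgTokens` are assumptions:

```python
import json

# Hypothetical findings shape; the real --json schema may differ.
sample = json.loads("""{
  "totalWasteTokens": 48200,
  "dailyAvgTokens": 6900,
  "findings": [
    {"severity": "HIGH", "tokens": 22100, "title": "duplicate tool results"}
  ]
}""")

def gate(report, max_daily_tokens=10_000):
    """Pass only when daily ghost waste stays under a token budget."""
    return report["dailyAvgTokens"] <= max_daily_tokens
```

A CI step could run the scan, load its JSON, and fail the build whenever the gate returns False.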
Output
Summary box with scanned sessions, total waste in tokens, daily average, and estimated monthly cost of ghosts. Below it: per-finding cards sorted by severity and token impact, with evidence for each.
┌─ Ghost Token Scan ───────────────────────────────────┐
│ Scanned: 14 session files (last 7 days)              │
│ Total waste: 48.2K tokens                            │
│ Daily avg: 6.9K tokens/day of silent bloat           │
│ Est. cost: $2.20/month just on ghosts                │
└──────────────────────────────────────────────────────┘
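The monthly estimate is simple arithmetic over the daily average. A sketch, with the per-million-token price as an explicit assumption rather than the CLI's real pricing model:

```python
def monthly_ghost_cost(daily_avg_tokens, price_per_mtok):
    """Project the daily token waste to a monthly dollar figure."""
    monthly_tokens = daily_avg_tokens * 30
    return monthly_tokens / 1_000_000 * price_per_mtok

# 6.9K wasted tokens/day at an assumed $3 per million tokens
cost = monthly_ghost_cost(6_900, 3.0)
```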
HIGH 4 tool results repeated 3+ times across sessions ~22.1K tokens
Same large tool output (file reads, API responses) appearing repeatedly.
MED 2 tool(s) with oversized input payloads ~8.4K tokens
Tools: WebFetch, Read. Long inputs suggest pasting over referencing.
LOW 7 skill(s) loaded but never invoked ~2.1K tokens
Never referenced: autoresearch-legal, slack-gif-creator, …

What it reads and writes
Reads
- ~/.claude/projects/**/*.jsonl
- .claude/skills/
- CLAUDE.md
Writes
Nothing (read-only)
Free vs Pro usage
Free tier
- See what's silently eating your context budget
- Identify duplicate tool results that could be cached
- Find skills that are pure tax and should be removed
Pro tier
- Track ghost-token trend over time to verify cleanup is working
- Alert on rising waste via dashboard thresholds
- Benchmark ghost rate across team members to spread best practices
Pro tips
- If duplicate tool results dominate, audit how your agent reads files and teach it to reuse earlier results instead of re-reading them
- Unused skills are the cheapest fix: remove them and save ~300 tokens each per session
- Run with --days 30 monthly to catch slow-accumulating waste
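The unused-skill check behind the second tip can be approximated by testing whether each skill's name ever appears in session transcripts. A sketch over in-memory data, assuming skill names appear verbatim when invoked; in practice the names would come from listing .claude/skills/ and the texts from the session files under ~/.claude/projects/:

```python
def unused_skills(skill_names, session_texts):
    """Return skills never referenced in any session transcript.

    skill_names: directory names found under .claude/skills/
    session_texts: raw contents of the .jsonl session files
    """
    blob = "\n".join(session_texts)
    return sorted(name for name in skill_names if name not in blob)
```

Each name this returns is a candidate for removal, at ~300 tokens of context saved per session per skill.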