I Analyzed 38 Claude Code Sessions. Only 0.6% of Tokens Were Actual Code Output.

Source: DEV Community
I kept hitting Claude Code's usage limits with no idea why, so I parsed the local session files and counted tokens. 38 sessions. 42.9 million tokens. Only 0.6% were Claude actually writing code. The other 99.4%? Re-reading my conversation history before every single response.

Not as scary as it sounds

Input tokens (Claude reading) cost $3 per million on Sonnet. Output tokens (Claude writing) cost $15 per million, so that tiny 0.6% of writing carries 5x the per-token cost. The re-reading is cheap on its own. The problem is compounding: with every message you send, Claude re-reads your entire history. Message 1 reads nothing. Message 50 re-reads messages 1 through 49. By message 100, it's re-reading everything before it. My worst session hit $6.30 in equivalent API cost; the median was $0.41. The difference? I let the worst one run 5+ hours without /clear.

Lazy prompts are secretly expensive

A prompt like "do it" costs nearly the same as a detailed paragraph, because your message is tiny compared to the history being re-read alongside it.
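If you want to run the same count yourself, a minimal sketch looks like this. The session directory path and the JSONL field names (`message.usage.input_tokens` and friends) are assumptions based on my setup; check where your install actually writes its logs before trusting the numbers.

```python
import json
from pathlib import Path

# Assumed location of Claude Code session logs -- adjust for your install.
SESSION_DIR = Path.home() / ".claude" / "projects"

def tally_usage(jsonl_lines):
    """Sum token counts from session JSONL entries.

    Field names (message.usage.input_tokens etc.) are assumptions;
    lines that don't parse or lack a usage block are skipped.
    """
    totals = {"input": 0, "output": 0, "cache_read": 0}
    for line in jsonl_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        usage = entry.get("message", {}).get("usage")
        if not usage:
            continue
        totals["input"] += usage.get("input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
        totals["cache_read"] += usage.get("cache_read_input_tokens", 0)
    return totals

if __name__ == "__main__":
    grand = {"input": 0, "output": 0, "cache_read": 0}
    for path in SESSION_DIR.glob("*/*.jsonl"):
        session = tally_usage(path.read_text().splitlines())
        for key in grand:
            grand[key] += session[key]
    total = sum(grand.values())
    if total:
        print(f"output (writing) share: {grand['output'] / total:.1%}")
```

The "0.6% writing" figure is just `output` divided by the grand total of everything Claude read and wrote.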