Every token has a cost. Every query leaves a trace. How much of the Earth did your AI consume today?
Behind every AI response lies a data center humming with servers. Behind every server lies electricity. Behind every watt lies carbon dioxide floating into the atmosphere.
You can't see it. You can't feel it. But somewhere, a tree is working overtime to absorb what your conversation just released.
This isn't about shame. AI is transformative. It helps us code, create, learn, and solve problems we couldn't before.
But awareness changes behavior. When you see the trees, you start to think. When you think, you start to choose.
We built a tool that reads your AI coding assistant usage and tells you, in trees, what it cost. It supports Claude Code and OpenCode.
No data leaves your machine. Just you and your trees.
Our estimates are based on peer-reviewed research from 2024-2025 on LLM inference energy consumption.
Energy per token is estimated from model size and inference efficiency (see the sketch after the table):
| Model Size | Energy (Wh/token) | Examples |
|---|---|---|
| Huge (~175B+) | 0.001 | GPT-4, Claude Opus |
| Large (~70B) | 0.0003 | Claude Sonnet, GPT-4o |
| Medium (~20B) | 0.0001 | Claude Haiku, GPT-3.5 |
| Small (~7B) | 0.00003 | Mistral Small |
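To make the arithmetic concrete, here is a minimal TypeScript sketch of the tier lookup. The `WH_PER_TOKEN` table and `energyWh` helper are illustrative names, not the tool's actual API.

```typescript
// Hypothetical per-token energy table (Wh/token), mirroring the tiers above.
const WH_PER_TOKEN = {
  huge: 0.001,    // ~175B+ params: GPT-4, Claude Opus
  large: 0.0003,  // ~70B: Claude Sonnet, GPT-4o
  medium: 0.0001, // ~20B: Claude Haiku, GPT-3.5
  small: 0.00003, // ~7B: Mistral Small
} as const;

type ModelTier = keyof typeof WH_PER_TOKEN;

// Energy in watt-hours for a token count at a given model tier.
function energyWh(tokens: number, tier: ModelTier): number {
  return tokens * WH_PER_TOKEN[tier];
}

// Example: 50,000 tokens through a ~70B-class model -> 15 Wh.
console.log(energyWh(50_000, "large")); // 15
```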
Prompt caching reuses previously computed context. Reading cached tokens requires minimal energy:
- `cache_creation`: 100% energy (full computation)
- `cache_read`: 1% energy (memory retrieval only)
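A sketch of how those factors could be applied, assuming a hypothetical `TokenUsage` record (the field names are illustrative, not the tool's schema):

```typescript
// Hypothetical usage record; field names are illustrative, not the tool's schema.
interface TokenUsage {
  input: number;         // fresh prompt tokens (full computation)
  output: number;        // generated tokens (full computation)
  cacheCreation: number; // tokens written to the cache (full computation)
  cacheRead: number;     // tokens served from the cache (memory retrieval only)
}

const CACHE_CREATION_FACTOR = 1.0; // 100% energy
const CACHE_READ_FACTOR = 0.01;    // 1% energy

// Token count weighted by how much computation each kind actually required.
function effectiveTokens(u: TokenUsage): number {
  return (
    u.input +
    u.output +
    u.cacheCreation * CACHE_CREATION_FACTOR +
    u.cacheRead * CACHE_READ_FACTOR
  );
}

// Example: 100,000 cache-read tokens weigh like only 1,000 fully computed tokens.
console.log(effectiveTokens({ input: 2_000, output: 500, cacheCreation: 0, cacheRead: 100_000 })); // 3500
```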
| Factor | Value | Basis |
|---|---|---|
| CO₂ per kWh | 0.5 kg | Global average grid intensity |
| Tree absorption | 14 kg/year | Mature tree average |
| Tree-day | 38.4 g | 14 kg ÷ 365 |
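Putting the factors together, a sketch of the conversion from watt-hours to tree-days; the constants mirror the table above, and the `treeDays` function name is hypothetical.

```typescript
// Conversion constants from the table above.
const KG_CO2_PER_KWH = 0.5;                             // global average grid intensity
const TREE_KG_PER_YEAR = 14;                            // CO2 a mature tree absorbs per year
const TREE_G_PER_DAY = (TREE_KG_PER_YEAR * 1000) / 365; // ~38.4 g per tree-day

// Convert energy in watt-hours to grams of CO2, then to tree-days of absorption.
function treeDays(wh: number): number {
  const gramsCo2 = (wh / 1000) * KG_CO2_PER_KWH * 1000; // Wh -> kWh -> kg CO2 -> g CO2
  return gramsCo2 / TREE_G_PER_DAY;
}

// Example: 15 Wh -> 7.5 g CO2 -> roughly 0.2 tree-days.
console.log(treeDays(15).toFixed(2)); // "0.20"
```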
- How Hungry is AI? - Benchmarked 30 LLMs (May 2025)
- TokenPowerBench - First token-level power benchmark (Dec 2025)
- Epoch AI - Re-evaluated common estimates
Note: These are estimates. Actual consumption depends on hardware, data center efficiency, and model optimization. No AI provider has published official energy figures for their APIs.
Every powerful tool demands responsibility.
What will you do with this knowledge?