Watt Did AI Cost?

Every token has a cost. Every query leaves a trace. How much of the Earth did your AI consume today?

"We asked machines to think.
We forgot to ask what it would cost."

The Invisible Forest

Behind every AI response lies a data center humming with servers. Behind every server lies electricity. Behind every watt lies carbon dioxide floating into the atmosphere.

You can't see it. You can't feel it. But somewhere, a tree is working overtime to absorb what your conversation just released.

🌳🌳🌳🌳🌳🌳🌳🌳🌳🌳
🌳🌳🌳🌳🌳🌳🌳🌳🌳🌳
🌳🌳🌳🌳🌳🌳🌳🌳🌳🌳
🌳🌳🌳🌳🌳🌳🌳🌳🌳🌳
🌳🌳🌳🌳🌳🌳🌳🌳🌳🌳

Not Guilt. Awareness.

This isn't about shame. AI is transformative. It helps us code, create, learn, and solve problems we couldn't before.

But awareness changes behavior. When you see the trees, you start to think. When you think, you start to choose.

Measure Your Impact

We built a tool that reads your AI coding assistant's local usage data and tells you, in trees, what it cost. It supports Claude Code and OpenCode.

// Run this in your terminal
$ npx ccwatt

No data leaves your machine. Just you and your trees.

Calculation Methodology

Our estimates are based on peer-reviewed research from 2024-2025 on LLM inference energy consumption.

Energy per Token

Based on model size and inference efficiency:

Model Size      Energy            Examples
Huge (~175B+)   0.001 Wh/token    GPT-4, Claude Opus
Large (~70B)    0.0003 Wh/token   Claude Sonnet, GPT-4o
Medium (~20B)   0.0001 Wh/token   Claude Haiku, GPT-3.5
Small (~7B)     0.00003 Wh/token  Mistral Small
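
If you want to play with these numbers, here is a minimal TypeScript sketch of the lookup. Only the Wh/token values come from the table above; the tier names, the model-to-tier mapping in the comments, and the energyWh helper are illustrative assumptions, not ccwatt's actual internals.

// Per-token energy by model tier (values from the table above).
type ModelTier = "huge" | "large" | "medium" | "small";

const WH_PER_TOKEN: Record<ModelTier, number> = {
  huge: 0.001,    // ~175B+ params: GPT-4, Claude Opus
  large: 0.0003,  // ~70B params:   Claude Sonnet, GPT-4o
  medium: 0.0001, // ~20B params:   Claude Haiku, GPT-3.5
  small: 0.00003, // ~7B params:    Mistral Small
};

// Estimated watt-hours for a token count on a given tier.
function energyWh(tokens: number, tier: ModelTier): number {
  return tokens * WH_PER_TOKEN[tier];
}

console.log(energyWh(1_000_000, "large")); // 300 Wh for one million Sonnet-class tokens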

Cache Handling

Prompt caching reuses previously computed context. Reading cached tokens requires minimal energy:

cache_creation: 100% energy (full computation)
cache_read:       1% energy (memory retrieval only)
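
In code, the discount looks like this. A sketch only: the TokenUsage shape and the effectiveTokens name are assumptions for illustration, while the 100% / 1% ratios are the ones listed above.

// Energy-equivalent token count: cache writes cost full computation,
// cache reads cost ~1% of a freshly computed token.
interface TokenUsage {
  input: number;         // uncached input tokens
  output: number;        // generated tokens
  cacheCreation: number; // tokens written to the prompt cache (100% energy)
  cacheRead: number;     // tokens served from the prompt cache (1% energy)
}

const CACHE_READ_FACTOR = 0.01;

function effectiveTokens(u: TokenUsage): number {
  return u.input + u.output + u.cacheCreation + u.cacheRead * CACHE_READ_FACTOR;
}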

CO₂ Conversion

Factor           Value       Source
CO₂ per kWh      0.5 kg      Global grid average
Tree absorption  14 kg/year  Mature tree average
Tree-day         38.4 g      14 kg ÷ 365
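
Chaining these factors gives the full conversion from energy to tree-days, shown in the sketch below. It uses only the constants in the table above; treeDays is an illustrative name, not ccwatt's API.

const KG_CO2_PER_KWH = 0.5;      // global grid average
const KG_CO2_PER_TREE_YEAR = 14; // mature tree
const G_CO2_PER_TREE_DAY = (KG_CO2_PER_TREE_YEAR * 1000) / 365; // ≈ 38.4 g

// Watt-hours of inference energy → tree-days of absorption.
function treeDays(wh: number): number {
  const gramsCo2 = (wh / 1000) * KG_CO2_PER_KWH * 1000; // Wh → kWh → g CO₂
  return gramsCo2 / G_CO2_PER_TREE_DAY;
}

// Worked example: 1M Sonnet-class tokens ≈ 300 Wh ≈ 150 g CO₂ ≈ 3.9 tree-days.
console.log(treeDays(300).toFixed(1)); // "3.9"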

Research Sources

How Hungry is AI? - Benchmarked 30 LLMs (May 2025)
TokenPowerBench - First token-level power benchmark (Dec 2025)
Epoch AI - Re-evaluated common estimates

Note: These are estimates. Actual consumption depends on hardware, data center efficiency, and model optimization. No AI provider has published official energy figures for their APIs.

0.0003 Wh per token (Sonnet-class)
14 kg of CO₂ absorbed by one tree per year
∞ tokens we'll use tomorrow

The Question Remains

Every powerful tool demands responsibility.
What will you do with this knowledge?
