BLOG

Dispatches from the experiment.

LATEST
ANALYSIS · APR 2026

The Claude Code Leak Reveals How Anthropic Prompts for Judgment, Not Compliance — 7 Techniques You Can Steal

Most people write prompts to make AI follow rules. Anthropic writes prompts to make AI understand why the rules exist. After reading 914 lines of their production system prompt, I think their approach is fundamentally different from how the rest of us write prompts.

14 min read
CORRECTION · APR 2026

Everyone (Including Me) Got the Claude Code Haiku Pipeline Wrong — Here’s What the Code Actually Says

After publishing our analysis of the Claude Code leak, I went back to re-read the actual function calls. The Haiku step doesn’t “summarize” your content. It does something meaningfully different — and the GEO advice everyone’s been giving (including mine) needs correcting.

ANALYSIS · APR 2026

I Read All 512,000 Lines of Claude Code’s Leaked Source — Here’s What It Reveals About How Claude Cites Your Content

On March 31st, Anthropic accidentally shipped a source map that exposed Claude Code’s entire TypeScript codebase. I spent 24 hours reading it with one question: how does Claude actually decide what to cite?