If you’ve spent any meaningful time building software with AI coding assistants, you’ve lived through this scenario: you open a new chat, spend twenty minutes re-explaining your architecture, your constraints, and the decisions you made yesterday — only to watch the AI confidently suggest an approach you already rejected in your previous session. The context is gone. The reasoning evaporated. You’re starting from zero.
This isn’t a minor inconvenience. It’s a fundamental friction point in AI-assisted development, and it’s one that Rahul Garg from Thoughtworks recently addressed on Martin Fowler’s site with a pattern he calls Context Anchoring.
The problem with ephemeral conversations
AI conversations are, by design, disposable. Every session operates within a finite context window — a working memory that holds everything the model can reference when generating a response. As that window fills up, performance degrades. Research consistently shows that LLMs experience 20-50% accuracy degradation as context grows beyond 100k tokens, a phenomenon known as context rot.
But here’s the real trap: because decisions feel preserved within a running conversation, developers keep sessions artificially long. They avoid closing the chat because they sense — correctly — that something valuable will be lost. As Garg puts it, the medium designed for thinking becomes the de facto storage system. That’s a category error, and it costs real productivity.
What Context Anchoring actually is
Context Anchoring is deceptively simple. It’s the practice of externalizing feature-level design decisions into a living document that persists outside the AI conversation. Not a project wiki. Not comprehensive documentation. A focused, continuously updated record of what was decided, why, and what was rejected.
Garg distinguishes between two levels of context documents:
- Priming documents — project-level context like your tech stack, architectural patterns, and coding conventions. These change infrequently (quarterly updates) and provide the stable backdrop for any AI session.
- Feature documents — specific to the work at hand. These capture decisions, constraints, rejected alternatives, open questions, and implementation state. They’re updated continuously as work progresses.
The feature document is where the real value lives. A well-maintained one might include a decision table with reasoning columns, current constraints the AI must respect, open questions awaiting resolution, and a simple implementation checklist. Fifty lines of structured context replace hundreds of lines of conversational history.
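To make this concrete, here is what such a feature document might look like. The structure and every detail in it are illustrative — Garg's article doesn't prescribe an exact template — but it shows the elements described above: decisions with reasoning, rejected alternatives, constraints, open questions, and implementation state.

```markdown
<!-- Illustrative example only; not a template taken from Garg's article -->
# Feature: Bulk export for audit logs

## Decisions
| Decision                  | Reasoning                       | Rejected alternative                     |
|---------------------------|---------------------------------|------------------------------------------|
| Stream exports as NDJSON  | Constant memory, resumable      | Single JSON array (whole result in RAM)  |
| Cursor-based pagination   | Stable under concurrent writes  | Offset pagination (skips/duplicates rows)|

## Constraints
- Export endpoint must stay under 512 MB memory per request
- No new runtime dependencies this quarter

## Open questions
- Retention policy for generated export files?

## Implementation state
- [x] Streaming serializer
- [ ] Cursor pagination in repository layer
- [ ] Rate limiting
```

A document like this is pasted (or linked) at the start of a fresh session, and updated the moment a decision is made or reversed.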
Why this matters more than you think
The immediate benefit is obvious: you can start a new AI session, share the feature document, and achieve alignment in thirty seconds instead of thirty minutes. But the deeper implications are more interesting.
It forces clarity. Writing down “we chose approach X because of Y, and rejected Z because of W” is a forcing function for clear thinking. Vague reasoning that sounds convincing in a chat thread becomes visibly weak when committed to a document. If you can’t articulate why a decision was made, maybe the decision isn’t as solid as you thought.
It enables team coordination. When multiple developers work on the same feature with their own AI sessions, the feature document becomes shared ground truth. Without it, each developer’s AI operates in its own bubble, potentially making contradictory suggestions based on incomplete context. This is a problem that Anthropic’s engineering team has also identified, advocating for treating context as “a finite resource with diminishing marginal returns” — one that should be curated deliberately rather than accumulated passively.
It makes sessions disposable. Garg offers a simple litmus test: “Could I close this conversation right now and start a new one without anxiety?” If the answer is no, you have context trapped in the wrong medium. Context Anchoring eliminates that anxiety entirely.
The broader trend: context engineering
Context Anchoring doesn’t exist in isolation. It’s part of a larger shift in how the industry thinks about AI-assisted development. The emerging discipline of context engineering — as described by Anthropic — moves beyond crafting individual prompts to orchestrating the entire information environment that an AI model operates within.
This includes strategies like just-in-time context retrieval (loading information only when needed rather than front-loading everything), compaction (summarizing conversation history to reclaim context window space), and sub-agent architectures (delegating focused tasks to specialized agents with clean context windows). Tools like Claude Code’s persistent Tasks and various MCP-based memory systems are already implementing these ideas at the tooling level.
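To illustrate the compaction idea, here is a minimal sketch. It is not how any particular tool implements it — all names are invented, and `summarize` is a stand-in for an actual LLM call — but it shows the core move: when the transcript exceeds a token budget, fold the oldest messages into a summary and keep the recent tail verbatim.

```python
# Minimal sketch of context-window compaction. All names are illustrative;
# real tools implement this internally with actual model calls.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def summarize(messages: list[str]) -> str:
    """Stand-in for an LLM summarization call. A real implementation would
    ask the model to compress decisions and open questions, not just count."""
    return f"[summary of {len(messages)} earlier messages]"

def compact(history: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """If the transcript exceeds `budget` tokens, replace everything except
    the most recent `keep_recent` messages with a single summary message."""
    total = sum(estimate_tokens(m) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [f"message {i}: " + "x" * 400 for i in range(10)]
compacted = compact(history, budget=500)
print(len(compacted))  # 5: one summary message plus the 4 most recent
```

Note that compaction is lossy — the summary keeps the gist but drops detail — which is precisely why Context Anchoring moves durable decisions out of the transcript entirely rather than relying on summarization alone.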
The enterprise world is moving in the same direction. Concepts like the Context Layer — a persistent system of record that carries organizational knowledge across AI interactions — are gaining traction as companies realize that context management is becoming as important as data management.
Practical calibration
Not everything needs anchoring. Garg is pragmatic about this: a quick question to your AI assistant needs no documentation. A single-session bug fix might warrant a lightweight capture at most. But any feature that spans multiple days, involves multiple sessions, or requires coordination between developers — that’s where Context Anchoring earns its keep.
The beauty of the pattern is its low overhead. You’re not writing documentation for documentation’s sake. You’re capturing decisions at the moment they’re made, in a format optimized for machine consumption. The document serves double duty: it aligns your AI assistant and it aligns your future self.
Our take
At Sourcelabs, we’ve seen firsthand how context loss compounds across a team. One developer discovers that a particular API doesn’t support pagination the way the docs suggest, works around it in their session, and the next developer hits the same wall because that hard-won knowledge died with the chat. Context Anchoring addresses this directly.
The pattern also highlights something important about the current moment in AI-assisted development: the bottleneck isn’t model capability — it’s context management. Models are remarkably capable when given the right information. The challenge is ensuring they have that information consistently, efficiently, and across session boundaries.
Garg’s framing transforms context from an ephemeral byproduct of conversation into a durable engineering artifact. That’s a mindset shift worth adopting, regardless of which AI tools you use. The conversations are temporary. The decisions shouldn’t be.