If you’ve spent any time letting an AI agent write code for you, you probably know the feeling: things start great, but somewhere around the 15-minute mark the output starts drifting. Files get messy, requirements get silently dropped, and you end up spending more time fixing things than you saved. This problem has a name: context rot. And GSD is one attempt at solving it.
So what is GSD exactly?
GSD (Get Shit Done) is a meta-prompting and context engineering framework for AI-assisted development. Created by Lex Christopherson, it provides a structured workflow that breaks your development work into phases, each running in a fresh context window with atomic commits.
The idea is simple: instead of throwing vague descriptions at an AI and hoping for the best (sometimes lovingly called “vibecoding”), you follow a repeatable cycle of discuss, plan, execute, verify. Each step produces artifacts like specs and research notes that persist across sessions, so the AI always has the right context loaded when it starts working.
How it works in practice
GSD revolves around a set of slash commands that guide you through five steps:
- Initialize your project with requirements and a roadmap
- Discuss gray areas before jumping into implementation
- Plan by breaking work into atomic XML-structured tasks with clear verification criteria
- Execute those tasks in dependency-aware waves, each with a fresh context window
- Verify the output with human-driven testing
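To make the "atomic XML-structured task" idea from the Plan step concrete, here is a hypothetical sketch of what such a task definition could look like. The element names, IDs, and file paths below are our own illustration of the pattern, not GSD's actual schema:

```xml
<!-- Hypothetical task definition; structure is illustrative, not GSD's real format -->
<task id="auth-03" wave="2">
  <depends-on>auth-01</depends-on>
  <description>Add a password-reset endpoint to the auth service</description>
  <files>
    <file>src/auth/reset.ts</file>
    <file>src/auth/routes.ts</file>
  </files>
  <verification>
    <criterion>POST /auth/reset returns 202 for a known email</criterion>
    <criterion>All existing auth tests still pass</criterion>
  </verification>
</task>
```

Because each task names its dependencies and verification criteria explicitly, an executor can group independent tasks into the same wave and check every task against its own done-criteria before committing.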
The framework maintains a knowledge base of markdown files (project vision, requirements, state, research) that gets loaded into each execution. This means every task starts clean but informed, rather than inheriting a bloated conversation history.
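As a rough illustration of that knowledge base (the directory and file names here are hypothetical, not GSD's actual layout), the persistent artifacts might be organized like this:

```
.gsd/
├── vision.md        # project vision and goals
├── requirements.md  # agreed requirements from the discuss phase
├── state.md         # current progress across waves
└── research/        # research notes loaded into each execution
```

The point is that everything the AI needs lives in small, versioned files rather than in the chat transcript, so a fresh context window can be rehydrated cheaply.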
What GSD is not
It’s not a project management tool. There are no sprints, no story points, no Jira boards. It deliberately avoids enterprise ceremony in favor of a streamlined loop. It’s also not tied to a single AI tool. While it works particularly well with Claude Code, it supports Cursor, Copilot, Windsurf, Cline, and others.
How does it compare to similar approaches?
GSD isn’t the only player in this space. Tools like OpenSpec take a similar angle on spec-driven AI development, focusing on structured specifications to improve AI output quality. The core insight is shared across all of them: giving an AI agent a detailed, well-structured spec produces dramatically better results than ad-hoc prompting.
Where GSD distinguishes itself is in its emphasis on context management. The wave-based execution model, quality gates, and fresh-context-per-task approach are specifically designed to combat the degradation that happens in long AI sessions. It treats your context window as a precious resource rather than an infinite canvas.
Getting started
Installation is straightforward:
```shell
npx get-shit-done-cc@latest
```
The GitHub repo has full documentation, and there’s an active community on Discord.
What’s next for us
We’ve been watching the spec-driven development space with interest, and GSD caught our attention for its pragmatic approach. We’re going to start experimenting with it on some internal projects to see how it holds up in real-world use. Expect a follow-up post where we share our honest experiences, what worked, what didn’t, and whether it actually lives up to the promise of making AI-assisted development more reliable.
