In my last post, Coding with Claude: What I've Learned Building a Cybersecurity Platform with AI, I covered how we built our new analytics platform from scratch using Claude Code. That post was about greenfield development, the clean slate, the fun part. This post is about the other part. The part where you have a decade of production code that real customers depend on and you need to bring it into the AI-assisted world without breaking everything.
Here's the thing: most of the advice floating around about AI-assisted development assumes you're starting fresh. But most of us aren't. We have legacy repos, tribal knowledge buried in commit messages, and code that works for reasons nobody fully remembers. At WitFoo, our Precinct platform has been processing cybersecurity data for years. It's battle-tested. It's also the kind of codebase that makes a new developer (or a new AI) squint and say "wait, what?"
Getting Claude productive on legacy code required a deliberate, phased approach. I'll walk through the four phases that worked for us, including the actual prompts we used.
Phase One: Let Claude Do What Developers Won't
The first step was documentation. I know, I know. Every engineering blog in the world says "document your code" and every engineering team in the world collectively ignores that advice. I've been guilty of it myself. When you're heads-down shipping features, documentation feels like writing a letter to a stranger about a house you already live in. You know where the light switches are.
But Claude doesn't know where the light switches are. And unlike a human developer who can tap someone on the shoulder and ask (or just muddle through until things click), Claude needs context up front. The beautiful irony is that documentation is the one task that human developers resist most and Claude genuinely excels at. Claude will happily read through thousands of lines of code and produce detailed, structured documentation without complaining, procrastinating, or deciding it would rather refactor something instead.
The prompt was almost embarrassingly simple:
Build a plan to create detailed documentation on how this repo works.
Ask me any questions that will assist your research. Build a CLAUDE.md
and a docs/ folder for secondary documentation.

That's it. Claude analyzed the codebase, asked intelligent questions about design decisions and edge cases (things I hadn't thought about in years), and produced documentation that was genuinely useful. Not just API references, but architectural context, data flow explanations, and notes about why things were done in non-obvious ways.
The key insight here is the "ask me any questions" part. Claude's questions surfaced assumptions I'd forgotten I'd made. In several cases, answering Claude's questions helped me realize we had undocumented behaviors that needed to be addressed regardless of AI tooling. The documentation phase doubled as a lightweight audit.
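For a sense of what came out of that process, here is an illustrative skeleton of the kind of CLAUDE.md this produces. The section names are mine, not what Claude actually emitted for our repos:

```markdown
# CLAUDE.md (illustrative skeleton)

## What this repo does
One-paragraph summary of the service and its place in the platform.

## Architecture
- Major components and how data flows between them
- Non-obvious design decisions, with the "why" captured

## Conventions
- Error handling, logging, and database-access patterns to follow

## Gotchas
- Edge cases and legacy behaviors that look wrong but are load-bearing

## Secondary docs
See docs/ for per-subsystem deep dives.
```

The "Gotchas" section is the part a human would never write unprompted, and it is exactly what a fresh Claude session needs most.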
Phase Two: Establishing the WitFoo Way
With documentation in place, we needed to define our target state. What does good look like? Not in the abstract "we should probably write tests" sense, but in a specific, enforceable, machine-readable sense.
We call this the WitFoo Way. It's a living document that captures everything about how we want to build software: test-driven development, well-documented code, secure practices, high-performance routines that optimize resource use. It specifies our frameworks (Golang for the backend, Svelte 5 with Carbon Components for the frontend). It makes explicit decisions about how we access databases, sanitize data, handle errors, and structure APIs.
The prompt looked something like this:
Create a WITFOO_WAY.md that explains our philosophies, architecture
and approaches. They include: [all the things we want to do]. Also
scan our new repos, x, y, z that we built trying to adhere to the
WitFoo Way. Summarize what you have learned and let me decide if your
observations should be added to the WITFOO_WAY.md

The second half of that prompt is where the magic happens. We had already built new repos following these principles (a 2-year manual process). By having Claude scan those repos and compare what it found against our stated philosophies, we got something really valuable: an honest assessment of where our practice matched our preaching and where it didn't.
Claude found patterns in our new code that we hadn't explicitly codified. Things like our preference for certain error handling approaches, the way we structured database access layers, and conventions around logging that had emerged organically. Some of those observations became formal additions to the WitFoo Way. Others highlighted inconsistencies we needed to resolve.
The WitFoo Way document became the Rosetta Stone for the entire migration. It gave Claude (and our human developers) an unambiguous reference for how legacy code should look when we're done with it.
Phase Three: The Claude Refactor
The way we've traditionally onboarded new developers at WitFoo is to give them an area of code and ask them to refactor it. Add comments, write tests, improve logic where they can. It's a great way for someone to learn a codebase while contributing immediately. It's basically saying "here, make this better, and you'll understand it by the time you're done."
We applied the same philosophy to Claude, with one important caveat: we weren't going to spend time and tokens doing a full refactor of legacy code we planned to rebuild anyway. But we still support the legacy platform and needed Claude to be able to understand it well enough to deliver bug fixes and features. So the goal wasn't perfection. It was comprehension and maintainability.
The prompt:
Build a plan that runs linters, unit, system and e2e tests on our
legacy repos. Inventory and document all discovered issues in GitHub.
Once the inventory is complete, create pull requests to update the
code to address the issues.

This is a methodical, three-step process. First, run every automated quality check we have and catalog what breaks. Second, organize those findings into a structured inventory (we used GitHub Issues). Third, systematically address them through proper pull requests.
The results were genuinely impressive. We ended up with cleaned-up code that Claude could work with effectively for ongoing maintenance. Linting issues were resolved. Missing tests were identified and created. Code that had drifted from best practices was brought back in line. Not to the WitFoo Way standard (that's what the new code is for), but to a baseline of clarity and consistency that made the legacy repos workable.
An unexpected benefit: the GitHub Issues inventory became a useful artifact on its own. It gave us a clear picture of our technical debt, prioritized by severity, that informed our migration planning.
Phase Four: The Master Plan
With documentation, standards, and cleaned-up legacy code in place, we were finally ready for the big move: building WitFoo Analytics as the next-generation replacement for our Precinct platform.
The prompt that kicked off the multi-month migration:
We are going to launch a multi-month project to create a new product
called WitFoo Analytics in the analytics repo. It should be developed
using the WITFOO_WAY.md. It should have all of the capabilities of
the legacy Precinct code aligned with the WitFoo Way approaches and
philosophies. It should also incorporate the new features and ideas
in the 7TH_GEN_Evolutions.md file. Start with a master plan then
generate issues that are optimized for Claude Sonnet 4.5 to execute.
This plan is important to get right so spend as much time as needed
asking me questions. Also use the internet-research util to search
the internet for needed clarification. Building this plan will take
several sessions so track the work in CURRENT_WORK.md

There's a lot packed into that prompt, so let me unpack what makes it work.
"Developed using the WITFOO_WAY.md" anchors every decision to our standards document. Claude doesn't have to guess our preferences. "All of the capabilities of the legacy Precinct code" ensures feature parity. We didn't want to lose anything customers rely on. "Aligned with the WitFoo Way approaches and philosophies" means reimplemented properly, not just copied over. "7TH_GEN_Evolutions.md" is our roadmap for new capabilities we want to incorporate during the rebuild rather than bolting on later.
"Optimized for Claude Sonnet 4.5 to execute" tells Claude to generate issues that are scoped, specific, and self-contained enough for another Claude session to pick up and run with. This is the assembly line approach. Opus plans, Sonnet builds.
"Spend as much time as needed asking me questions" is critical. Without it, Claude will make assumptions. With legacy migrations, assumptions are where things go sideways. Claude asked about data migration strategies, backward compatibility requirements, deployment sequencing, and dozens of other questions that forced me to think through edge cases I might have glossed over.
"Building this plan will take several sessions so track the work in CURRENT_WORK.md" acknowledges reality. A multi-month migration can't be planned in a single context window. CURRENT_WORK.md ensures continuity between sessions.
What I Wish I'd Known Earlier
A few hard-won lessons specific to legacy migration work.
Legacy code has ghosts. Code that looks unused sometimes handles edge cases that only appear under specific conditions. Claude can't know this from static analysis alone. Before removing or replacing legacy code paths, I learned to ask Claude to search for all references, check test coverage, and flag anything that might be a hidden dependency. Even then, I review those changes myself.
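A deliberately naive version of that reference check, sketched in Go. A real pass would walk the repo on disk and parse the AST (the go/ast package) to avoid matching comments and substrings; this just scans in-memory file contents for an identifier:

```go
package main

import (
	"fmt"
	"strings"
)

// findReferences returns the names of files whose contents mention
// the given identifier. Naive on purpose: a production check would
// parse each file rather than doing a raw substring search.
func findReferences(files map[string]string, ident string) []string {
	var hits []string
	for name, src := range files {
		if strings.Contains(src, ident) {
			hits = append(hits, name)
		}
	}
	return hits
}

func main() {
	// Hypothetical repo contents keyed by filename.
	files := map[string]string{
		"handler.go": "resp := legacyNormalize(raw)",
		"util.go":    "func legacyNormalize(s string) string { return s }",
		"readme.md":  "nothing relevant here",
	}
	hits := findReferences(files, "legacyNormalize")
	fmt.Printf("%d file(s) still reference legacyNormalize\n", len(hits))
}
```

Even a crude check like this catches the easy ghosts; the hard ones (reflection, string-built queries, config-driven dispatch) are exactly why the human review step stays in the loop.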
Documentation quality compounds. The documentation we created in Phase One made every subsequent phase faster. Claude's questions got sharper. Its plans got more accurate. Its code required fewer corrections. If you're tempted to skip the documentation phase, don't. It's the highest-leverage investment in the entire process.
The WitFoo Way prevents drift. Without a clear standards document, each Claude session makes slightly different decisions. Over time, your codebase becomes a geological record of whatever Claude thought was best practice on any given Tuesday. The WitFoo Way keeps things consistent across sessions, developers, and months of work.
Legacy migration is not a refactor. A refactor improves existing code in place. A migration rebuilds with intention. Trying to refactor legacy code into compliance with modern standards is often more expensive than rebuilding. We used Claude to make legacy code maintainable, not perfect, while building the replacement in parallel. That distinction saved us significant time and tokens.
Wrap Up
The four-phase approach (document, standardize, clean up, rebuild) gave us something I didn't expect when we started: the ability to operate on two fronts simultaneously. Claude helps us maintain and improve our legacy Precinct platform with confidence while building the next-generation Analytics platform from a solid foundation. We didn't have to choose between keeping the lights on and building the future.
If I had to reduce this entire post to a single lesson, it would be this: don't just throw Claude at your legacy code. Prepare the ground first. Document what exists. Define what you want. Clean up the worst of the mess. Then let Claude build the bridge between where you are and where you're going.
The craft of AI-assisted development is still young. We're all figuring this out in real time. But the principles that make human development teams effective (clear documentation, shared standards, systematic testing, honest assessment of technical debt) turn out to be exactly what makes AI-assisted development effective too. The tools are new. The fundamentals aren't.
Charles Herring is co-founder and CEO of WitFoo, a cybersecurity company building collective defense solutions. He is a US Navy veteran and speaks regularly at security conferences including DEFCON, GrrCON, and Secure360. You can find him on LinkedIn.