When the inventor of the world’s most advanced programming agent speaks, Silicon Valley not only listens, but also takes notes.
Last week, the engineering community analyzed a thread on X by Boris Cherny, the creator and head of Claude Code at Anthropic. What started as a casual sharing of his personal terminal setup has turned into a viral manifesto about the future of software development that industry insiders are calling a turning point for the startup.
“If you don’t read Claude Code best practices directly from its creator, you’re behind as a programmer,” wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went a step further, declaring that Anthropic is “on fire” with Cherny’s “groundbreaking updates” and may be facing “their ChatGPT moment.”
The excitement stems from a paradox: Cherny’s workflow is surprisingly simple, yet it allows a single human to work with the capacity of a small engineering department.
Here’s an analysis of the workflow that’s reshaping the way software is built, straight from the architect himself.
How Running Five AI Agents Simultaneously Transforms Programming into a Real-Time Strategy Game
The most striking takeaway from Cherny’s revelation is that he doesn’t code linearly. In the traditional “inner loop” of development, a programmer writes a feature, tests it, and moves on to the next. However, Cherny acts as a fleet commander.
“I run 5 Claudes in parallel in my terminal,” wrote Cherny. “I number my tabs 1 to 5 and use system notifications to know when a Claude needs input.”
By leveraging iTerm2’s system notifications, Cherny effectively manages five concurrent workflows: one agent runs a test suite while another refactors a legacy module and a third drafts documentation. He also runs “5-10 Claudes on claude.ai” in his browser and uses a “teleport” command to hand sessions off between the web and his local machine.
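The notification piece is easy to reproduce: iTerm2 can post a desktop notification when a terminal emits the OSC 9 escape sequence, so a thin wrapper around each agent command is enough. The sketch below is illustrative only; the `claude` invocation and the tab-numbering convention are assumptions, not Cherny’s exact setup.

```shell
# Post an iTerm2 desktop notification via the OSC 9 escape sequence.
notify() {
  printf '\033]9;%s\007' "$1"
}

# Run a command in the current tab, then notify when it needs attention.
# The tab number is just a label matching the "tabs numbered 1 to 5" habit.
run_in_tab() {
  local tab="$1"; shift
  "$@"   # e.g. an agent invocation such as: claude -p "refactor the auth module"
  notify "Tab ${tab}: Claude finished and may need input"
}

# Example (hypothetical CLI usage):
# run_in_tab 3 claude -p "refactor the auth module"
```

Other terminals have equivalents (tmux hooks, `terminal-notifier` on macOS); the point is that the human only context-switches when an agent actually blocks.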
This confirms the “do more with less” strategy formulated by Anthropic President Daniela Amodei earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure buildouts, Anthropic proves that superior orchestration of existing models can lead to exponential productivity gains.
The Counterintuitive Argument for Choosing the Slowest, Smartest Model
In a surprising move for an industry obsessed with latency, Cherny revealed that he works exclusively with Anthropic’s heaviest, slowest model: Opus 4.5.
“I use Opus 4.5 with Thinking for everything,” Cherny explained. “It’s the best coding model I’ve ever used, and although it’s larger and slower than Sonnet, it almost always ends up being faster than using a smaller model because you have to control it less and it’s better at using tools.”
This is a crucial insight for enterprise technology leaders. The bottleneck in modern AI development is not the speed of token generation; it is the human time spent correcting the AI’s mistakes. Cherny’s workflow suggests that paying the “compute tax” for a smarter model up front eliminates the “correction tax” later.
A Shared File Turns Every AI Mistake into a Lasting Lesson
Cherny also explained how his team solves the problem of AI amnesia. Standard large language models do not “remember” a company’s specific coding style or architectural choices from one session to the next.
To address this issue, Cherny’s team maintains a single file called CLAUDE.md in their Git repository. “Every time we see Claude doing something wrong, we add it to CLAUDE.md so Claude knows not to do it next time,” he wrote.
This practice turns the codebase into a self-correcting organism. When a human developer reviews a pull request and discovers a bug, they don’t just fix the code; they also update the AI’s instructions. “Every mistake becomes a rule,” noted Aakash Gupta, a product lead who analyzed the thread. The longer the team works together, the smarter the agent becomes.
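In practice, CLAUDE.md is plain Markdown that Claude Code reads at the start of each session. The entries below are hypothetical examples of the kind of rules a team might accumulate, not Anthropic’s actual file:

```markdown
# CLAUDE.md — project conventions (hypothetical example)

## Style
- Use the repo's structured logger; never leave `console.log` in shipped code.

## Lessons from past mistakes
- Do not add new dependencies without asking first.
- Our API returns snake_case JSON; do not convert keys to camelCase.
- Migrations must be backward-compatible; never drop a column in the same
  release that stops writing to it.
```

Because the file lives in Git, every correction is reviewed like code and benefits every future session across the whole team.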
Slash Commands and Subagents Automate the Most Tedious Parts of Development
The “vanilla” workflow, praised by one observer, is based on the consistent automation of repetitive tasks. Cherny uses slash commands – custom shortcuts checked into the project’s repository – to handle complex operations with a single keystroke.
He highlighted a command called /commit-push-pr that he calls dozens of times a day. Instead of manually entering Git commands, writing a commit message, and opening a pull request, the agent handles the bureaucracy of version control autonomously.
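In Claude Code, custom slash commands are Markdown prompt files checked into the repository (conventionally under `.claude/commands/`, where the filename becomes the command name). The sketch below is a hypothetical reconstruction of what such a command file might contain, not Cherny’s actual prompt:

```markdown
<!-- .claude/commands/commit-push-pr.md (hypothetical sketch) -->
Commit the current changes, push the branch, and open a pull request:

1. Review `git status` and `git diff`, then write a concise commit message
   in the imperative mood.
2. Commit and push the current branch to origin.
3. Open a pull request with a short summary of the change and how it was
   verified.
```

Because the file is versioned with the project, every teammate gets the same `/commit-push-pr` behavior for free.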
Cherny also uses subagents – specialized AI personas – to handle specific phases of the development lifecycle. He uses a code-simplifier agent to clean up the architecture after the main work is done, and an app-verification agent to perform end-to-end testing before anything ships.
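Subagents in Claude Code are likewise defined as Markdown files with YAML frontmatter (conventionally under `.claude/agents/`). The definition below is a hypothetical sketch of a code-simplifier persona, not Cherny’s actual agent:

```markdown
---
name: code-simplifier
description: Simplifies and refactors code after the main change lands.
---
You are a refactoring specialist. After the primary change is complete,
look for duplicated logic, dead code, and overly clever abstractions,
and simplify them without changing observable behavior. Run the existing
test suite after every refactor.
```

Keeping each persona narrow means the main session stays focused on the feature while specialists handle cleanup and verification.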
Why Verification Loops are the Real Key to AI-Generated Code
If there’s a single reason why Claude Code has reportedly reached $1 billion in annual recurring revenue so quickly, it’s probably the verification loop. The AI is not just a text generator; it’s a tester.
“Claude tests every single change I ship to claude.ai/code using the Claude Chrome extension,” Cherny wrote. “It opens a browser, tests the UI, and iterates until the code works and the UI feels good.”
He argues that giving the AI the ability to inspect its own work – whether through browser automation, Bash commands, or test suites – improves the quality of the end result by “two to three times.” The agent doesn’t just write code; it proves that the code works.
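The pattern generalizes beyond the Chrome extension: any check the agent can run becomes a feedback signal. Below is a minimal shell sketch of such a loop, assuming a `claude -p` CLI and an `npm test` suite (both are illustrative assumptions, not a documented interface):

```shell
# Hypothetical verification loop: rerun the fix command until the check
# passes, with a retry cap so a stuck agent hands back to a human.
verify_loop() {
  local check_cmd="$1" fix_cmd="$2" max_attempts="$3"
  local attempt=1
  while ! eval "$check_cmd"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "still failing after $max_attempts attempts"
      return 1
    fi
    eval "$fix_cmd"
    attempt=$((attempt + 1))
  done
  echo "all checks pass"
}

# Example (commands are assumptions):
# verify_loop "npm test" "claude -p 'Read the failing test output and fix the code.'" 3
```

The retry cap is the important design choice: it bounds how long the agent can spin before a human inspects the failure.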
What Cherny’s Workflow Signals About the Future of Software Engineering
The response to Cherny’s thread points to a crucial shift in the way developers think about their craft. For years, “AI coding” meant an autocomplete feature in a text editor – a faster way to type. Cherny has shown that it can now act as an operating system for work itself.
“Read this if you’re already an engineer… and want more power,” summarized Jeff Tang on X.
The tools to multiply human performance by a factor of five already exist. They just require a willingness to stop thinking of AI as an assistant and start thinking of it as a workforce. The programmers who make this mental leap first won’t merely be more productive. They’ll be playing a completely different game – and everyone else will still be typing.

