My Claude Code Setup: 5 Parallel Agents and Why I Still Do Pair Programming (Kinda)
By Mitch Hazelhurst
I rebuilt three services from scratch recently. Solo. Five terminal tabs in iTerm2, five Claude Code agents, one week of focused execution.
I've completely fallen for iTerm2. The split panes, the hotkey window, the ability to keep all five agents visible at once. It's the cockpit that makes this workflow possible. If you're running Claude Code and you're still on the default macOS Terminal, do yourself a favour.
This is the setup I use every day. It's fast, it's opinionated, and it's not enough on its own. Here's how it works and why I built something else to fill the gap.
The setup
Five terminal tabs. Each running a Claude Code instance with a distinct role:
- Orchestrator. Fetches Linear tickets, builds an implementation plan, and decides what can be parallelised across the other agents.
- Front-end service. Owns the UI repo.
- Data service. Owns the data ingestion pipeline.
- Query service. Owns the AI chat and analytical query layer.
- Audit agent. Reviews PRs as they land, flags issues, and kicks work back to the relevant service agent.
The orchestrator is the brain. It pulls tickets from Linear, generates a parallelisation plan, and dispatches work to the service agents. Each agent spins up its own git worktree, so they're not stepping on each other's files. When a commit is ready to PR, the audit agent pulls it, reviews the code, and either approves it or kicks it back with comments. It's a closed loop.
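The per-agent isolation comes straight from git worktrees, which give each agent its own checkout of the same repo on its own branch. A throwaway sketch of the layout; the repo, paths, and branch names here are all illustrative, not my actual setup:

```shell
set -e
# Throwaway demo repo (placeholder; in practice this is a real service repo):
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git -c user.email=agent@example.com -c user.name=agent \
    commit -q --allow-empty -m "init"

# One worktree per service agent, each on its own branch, so parallel
# agents never write into the same working directory:
git worktree add -b agent/frontend ../agent-frontend
git worktree add -b agent/data     ../agent-data
git worktree add -b agent/query    ../agent-query

git worktree list   # main checkout plus one directory per agent
```

Each worktree shares the same object store, so branches, commits, and the audit agent's reviews all flow through one repository while the file systems stay separate.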
The whole thing is glued together by CLAUDE.md files in each repo. These store the rules: implement the ticket, pull sub-tasks, write tests, pass tests, pass linting, run integration tests before committing. No PR without green checks.
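To make that concrete, here is a minimal sketch of what one of these CLAUDE.md files might contain. The rules are the ones described above; the exact wording and structure are illustrative, not a copy of my actual files:

```markdown
# CLAUDE.md — engineering standards for this repo

## Workflow
- Implement the assigned ticket; pull its sub-tasks before starting.
- Write tests for every change.
- All unit tests must pass before committing.
- Linting must pass before committing.
- Run the integration test suite before committing.
- Never open a PR without green checks.
```

Because every Claude Code instance reads the CLAUDE.md in its repo on startup, this is the cheapest place to codify standards once and have all five agents follow them.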
The planning that makes it work
The agents aren't magic. They're fast executors of clear instructions. The actual leverage comes from the planning you do before writing a single line of code.
I go back to first principles every time. What are the customers actually saying? What does the product need to do? I write the vision doc. Then the PRD. Then user journeys. Then an event model. Then the architecture to support it. Then tickets with acceptance criteria and test cases for each one.
The agents aren't guessing what to build. They're executing against a spec that I spent days making airtight. I love documenting everything now, because those documents act as decision records. I can go back, review them, and course-correct.
If your agents are producing garbage, the problem is almost never the model. It's your inputs. Vision, PRD, user journeys, event model, architecture, tickets. In that order. Get those right and the agents become remarkably effective.
What Claude Code gets wrong
It still makes silly mistakes. Wrong AWS Bedrock model naming conventions. Case sensitivity bugs where a query breaks unless you type a customer name with perfect capitalisation. UI elements that are close but not right.
I still do most of my front-end polish manually. Fighting Claude to get pixels right is slower than just doing it myself. The agentic loop is incredible for architecture, refactoring, service scaffolding, test generation, and cross-file changes. It falls apart on visual polish and domain-specific edge cases where the model doesn't have enough context to make the right call.
More importantly: it doesn't tell you what you don't know. It generates code. You accept it. It moves on. There's no moment where it says “hey, the pattern you just accepted has implications you should understand.”
Why I built Pear to run alongside it
I'm a self-taught developer. Four years ago I wrote my first line of code. Now I'm writing backend Go services, managing infrastructure, and shipping production systems. AI tools made that trajectory possible. But they also made it dangerous.
When you're producing agentic code at speed, with a non-traditional background, you need a circuit breaker. Something that watches the code flowing through your repos and says: stop. You need to understand this before it ships.
That's Pear. I run it in three terminal tabs alongside my Claude Code agents, one per repo. It watches file changes and git diffs in real time. When something lands, Pear kicks the diff to an LLM and comes back with: here are the concepts underlying this change, here's something you might have missed on line 200, here's what could break in production if you don't fix it.
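The core of that loop is simple: take the diff, hand it to a model, surface the explanation. A stripped-down sketch, where `explain_diff` is a hypothetical stand-in for the LLM call (the real Pear does far more than count lines) and the repo is a throwaway:

```shell
# Stand-in for the LLM hand-off. The real version posts the diff to a
# model and returns concepts, missed details, and production risks.
explain_diff() {
  echo "reviewing $(wc -l) changed lines..."
}

set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "init"

# Simulate an agent landing a change, then hand the diff to the reviewer:
echo 'package main' > main.go
git add -A
git diff --cached | explain_diff
```

The real tool watches for changes continuously rather than running once, but the shape is the same: diff in, explanation out, before the code ships.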
It's a pair programmer that doesn't write code. It teaches. And when it catches something, I reference Designing Data-Intensive Applications or The Go Programming Language to go deeper. Those two books are my daily references. Pear tells me when to open them.
Dog-fooding the product
I used Pear to critique Pear while I was building it. That sounds recursive, and it is. But it was one of the most useful things I did. The tool caught patterns in its own codebase that I wouldn't have noticed, because I was moving too fast to stop and think about what the code I just accepted actually does.
That's the core problem. AI coding tools optimise for speed. Nobody is optimising for understanding. You can ship an entire service in a day and not be able to explain how half of it works. That's not engineering. That's assembly.
The workflow I'd recommend
If you're using Claude Code seriously, here's what I've learned:
- Spend more time planning than coding. Vision, PRD, user journeys, event model, architecture, tickets. The agents execute your clarity. If the input is vague, the output is garbage.
- Use CLAUDE.md as your engineering standards. Tests, linting, integration checks. Codify your expectations. The agents follow them every time.
- Parallelise by service, not by file. Each agent owns a repo or service boundary. Worktrees keep them isolated. An orchestrator keeps them aligned.
- Automate review, don't skip it. A dedicated audit agent that reviews every PR before you even look at it catches more than you think.
- Run a learning layer alongside your coding layer. This is where Pear fits. The agents ship code. Pear makes sure you understand it. Both matter.
Own your workflow
I see a lot of open-source orchestration frameworks and agentic workflow repos popping up. They're great for inspiration. But I don't believe in adopting someone else's workflow wholesale. The models change fast. What you had to work around six months ago won't be the same constraint tomorrow. Own your workflow. Keep it lean. Build it around how you actually think and work.
The developers who will thrive aren't the ones with the fanciest agent setup. They're the ones who can plan clearly, communicate effectively, and actually understand what their agents shipped. AI lowers the technical barrier to entry. Product thinking, domain knowledge, and the ability to own a problem end to end? Those are the skills that compound.
Ship fast. Understand what you shipped. That's the game now.