
AI Is Making Developers Fast, But Not Good

By Mitch Hazelhurst

Every developer I talk to says the same thing: AI tools made them faster. Ship features in hours instead of days. Generate entire modules from a prompt. Autocomplete that actually works.

But ask them to explain what their code does: why it's structured that way, what trade-offs were made, what would break if the architecture changed. The confidence disappears. They assembled the code; they didn't engineer it.

The data tells the same story

A 2025 METR study found that AI tools actually made experienced open-source developers 19% slower on real-world tasks in repos they knew well, despite those developers predicting a 24% speedup. The tools helped generate code, but the overhead of reviewing, debugging, and correcting AI output ate the gains.

Stanford research on AI-assisted coding found that developers who used AI assistants produced code with more security vulnerabilities than those who didn't, and were simultaneously more confident their code was secure. Speed bred false confidence.

GitClear's analysis of 150 million lines of code showed that AI-assisted codebases have significantly higher churn rates: code that gets written, then quickly rewritten or deleted. More output, but less of it survives. Meanwhile, Stack Overflow traffic dropped over 50% as developers stopped researching and started prompting, trading deep understanding for fast answers.
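To make "churn rate" concrete: GitClear counts code that is revised or deleted shortly after being authored. Here is a minimal sketch of that metric in Python; the 14-day window and the data shape are illustrative assumptions, not GitClear's exact methodology.

```python
from datetime import date, timedelta

# Each record: the day a line was authored and, if it was later
# deleted or rewritten, the day that happened (None if it survived).
lines = [
    (date(2025, 1, 1), date(2025, 1, 5)),   # rewritten after 4 days -> churn
    (date(2025, 1, 1), None),               # still alive -> not churn
    (date(2025, 1, 2), date(2025, 3, 1)),   # revised much later -> not churn
    (date(2025, 1, 3), date(2025, 1, 10)),  # deleted after 7 days -> churn
]

def churn_rate(lines, window=timedelta(days=14)):
    """Fraction of authored lines rewritten or deleted within `window`."""
    churned = sum(
        1 for authored, removed in lines
        if removed is not None and removed - authored <= window
    )
    return churned / len(lines)

print(churn_rate(lines))  # 0.5 -- half these lines didn't survive two weeks
```

The point of the metric: raw output counts both lines in this example that were thrown away within days, while churn rate exposes them.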

Speed without understanding is debt

When you generate code you don't understand, you're not saving time. You're borrowing it. Every function you can't explain is a future debugging session. Every architectural decision you didn't consciously make is a refactor waiting to happen.

This isn't an argument against AI tools. They're genuinely useful and they're not going away. The problem is that the entire ecosystem optimises for a single metric, speed of code production, while ignoring the one that actually determines engineering quality: depth of understanding.

What needs to change

Developers need tools that close the gap between what they ship and what they understand. Not tools that slow them down. Tools that teach them as they go. The best learning happens in context: when you're looking at your own code, solving your own problems, and the explanation is relevant to what you're building right now.

That's why I'm building pear. It's a CLI tutor that sits in your terminal and teaches you software engineering concepts in the context of your actual work. It reads your diffs, understands your project structure, and explains what you're building and why it matters.

It doesn't write code for you. It doesn't autocomplete. It teaches. Because the goal isn't to ship faster. It's to ship faster and actually understand what you shipped.

The developers who will thrive

The next decade belongs to developers who use AI as a lever, not a crutch. The ones who can ship at AI speed but debug at human depth. Who can read a codebase, not just generate one. Who treat understanding as the competitive advantage it is.

AI makes developers fast. The question is whether you're also getting good.