AI Coding Assistants Slow Down Experienced Developers, Study Finds

A surprising new study from AI research nonprofit METR reveals that advanced AI coding tools actually slowed down seasoned software developers working on familiar codebases, contradicting widespread assumptions about AI-driven productivity gains. The randomized controlled trial, conducted earlier this year, tracked experienced open-source developers using Cursor, a popular AI coding assistant, as they completed tasks in projects they knew well. Participants initially believed AI would cut their task time by 24%, and even after finishing they were convinced it had saved them 20%. The data showed the opposite: AI assistance increased completion time by 19%. "We were shocked," said co-author Nate Rush, who had predicted a 2x speedup.

The findings challenge the tech industry's bullish narrative on AI-powered developer efficiency, which has fueled massive investments in coding assistants like GitHub Copilot and Amazon CodeWhisperer. While previous studies reported productivity boosts of up to 56% on general coding tasks, METR's research suggests those gains can evaporate when experts work within complex systems they already know well. The slowdown occurred because developers spent extra time reviewing and correcting the AI's "directionally correct but not exactly right" suggestions, explained lead author Joel Becker. This nuance highlights how AI's value depends heavily on context, a caveat often missing from vendor claims.

The study raises questions about AI's role in professional software development, particularly for senior engineers. While AI may help beginners or developers navigating unfamiliar code, it appears to introduce friction for veterans who already understand their systems intimately. The distinction matters as figures like Anthropic's CEO predict AI could eliminate half of entry-level white-collar jobs within five years. METR's work suggests the technology's impact will be uneven, potentially streamlining junior workflows while complicating senior ones. The authors also note that most benchmark studies overestimate real-world benefits because they rely on simplified, self-contained coding tasks that don't reflect the messiness of complex legacy systems.

Despite the productivity dip, nearly all study participants, including the researchers themselves, continue to use Cursor daily. Developers reported preferring AI assistance because it makes coding "less effortful" and more enjoyable, likening it to editing a draft rather than facing a blank screen. "Developers have goals other than speed," Becker observed, noting that subjective factors like reduced cognitive load may outweigh raw efficiency metrics. The findings underscore a growing realization in tech: AI's true value lies not in blanket productivity claims but in understanding where, and for whom, it actually helps. As coding assistants evolve, their greatest challenge may be learning when to step back as much as when to step in.