Developers Generally Embrace AI Coding Tools Yet Remain Skeptical

Qodo report finds devs trust AI code tools less than productivity gains suggest

Qodo just dropped "The State of AI Code Quality 2025," a survey in which 609 developers across startups and big firms weighed in on AI coding tools.

82% use AI coding tools weekly, and 78% say they’re more productive with them. But confidence in the AI’s output lags far behind adoption.

"Overall, we’re seeing that AI coding is a big net positive, but the gains aren’t evenly distributed," Itamar Friedman, CEO and co-founder of Qodo, told The Register.

"There’s a small minority of power users, who tend to be very experienced developers, who are seeing massive gains – these are the 10Xers. The majority of developers are seeing moderate gains, and there’s a group that’s failing to effectively leverage the current AI tools and is at risk of being left behind."

60% of devs say AI improved or somewhat improved code quality. 20% say AI made their code worse.

Pressure is mounting on team leads and code reviewers, who face more review work as AI swells the volume of code.

"Individual contributors may feel 3x better because they’re shipping more code, but tech leads, reviewers, and those responsible for overall code quality tend to experience more pressure," Friedman added.

"For them, the increase in code volume means more review work, more oversight, and sometimes, more stress."

Trust is the sticking point: 76% never ship AI-suggested code without a human review, and many rewrite AI suggestions or delay merges. Integration remains cautious.

The irony: AI is actually good at code review. Among developers gaining productivity by using AI for code reviews, 81% saw better code quality, versus 55% of those relying on manual reviews.

"Models like Gemini 2.5 Pro are excellent judges of code quality and can provide a more accurate measure than traditional software engineering metrics," Friedman said.

"With the latest model releases, they are getting to the point where they are surpassing any large scale review that can be done by humans. To quantify this, we’ve built a public benchmark to evaluate model-generated pull requests and code changes against quality and completeness criteria."

Hallucinations are frequent: around 75% of devs say AI often makes syntax errors or calls non-existent packages, and only a quarter encounter hallucinations rarely.
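One cheap guard against the non-existent-package failure mode (our illustration, not something from the report) is to statically check that every module an AI suggestion imports actually resolves before you run it:

```python
# Sanity check for hallucinated imports in AI-generated Python.
# Standard library only; a sketch, not a full safety net. It catches
# missing top-level modules, not wrong APIs inside real ones.
import ast
import importlib.util
import sys

def missing_imports(source: str) -> list[str]:
    """Return top-level modules the code imports that can't be found."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return sorted(n for n in names
                  if n not in sys.stdlib_module_names
                  and importlib.util.find_spec(n) is None)

snippet = "import numpy\nimport totally_made_up_pkg\n"
print(missing_imports(snippet))  # ['totally_made_up_pkg'] if numpy is installed
```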

"One good method for dealing with the inherent flaws is to start a session by prompting the agent to review the codebase structure, documentation, and key files, before then giving it the actual development task," Friedman advised.

"Another technique is to give the AI agent a clear specification and have it generate tests that comply with the spec… Only after verifying that the tests match your intent, you have the agent implement it."

"Sometimes it’s best to just start again rather than have the agent double-back to make corrections."

Top dev requests from AI tools: better contextual understanding (26%), fewer hallucinations (24%), and improved code quality (15%).

"Context is key for effectively using AI tools," Friedman said.

"The information that’s fed into the models, what’s in their ‘context window,’ has a direct and dramatic impact on the quality of the code they generate."

Power users feed AI detailed specs, examples, and coding styles to get better results. Friedman suggests automating this context feeding could flatten the learning curve, similar to how Google improves search relevance.
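In practice, that context feeding can be as simple as prepending a style guide and a few representative files to the prompt. The sketch below is hypothetical; the file paths and prompt template are assumptions, not Qodo’s tooling:

```python
# Sketch of assembling a context-rich prompt the way power users do:
# spec, style guide, and representative files go in before the task.
# Paths and the template are hypothetical.
from pathlib import Path

CONTEXT_FILES = ["STYLE.md", "docs/architecture.md", "src/core/models.py"]

def build_prompt(task: str, repo: Path) -> str:
    parts = []
    for rel in CONTEXT_FILES:
        f = repo / rel
        if f.exists():
            parts.append(f"--- {rel} ---\n{f.read_text()}")
    context = "\n\n".join(parts)
    return (f"Project context:\n{context}\n\n"
            f"Follow the conventions above. Task:\n{task}")

print(build_prompt("Add pagination to the /users endpoint.", Path(".")))
```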

As the tools embed deeper into workflows, Qodo warns, companies must ensure the context fed to AI models complies with corporate policies.
