
AI tools are more popular than ever. However, trust in this technology is eroding among developers.
While 84% of developers now use AI tools, a growing number — 46% — say they do not trust the accuracy of the tool’s output, according to a new survey by Stack Overflow. This is a stark rise in distrust from 31% in 2024.
What stood out was “the noticeable decline in developers’ overall perception and trust in AI tools, particularly when compared to previous years,” Erin Yepis, senior analyst, market research and insights at Stack Overflow, told TechRepublic in an email. “Despite AI adoption increasing for the third year in a row, its popularity has slipped — only 60% of respondents now view AI tools favorably or very favorably, down from 72% in 2024 and 77% in 2023.”
Given the increasing media attention and heavy investment in AI, Yepis said the dip in trust and sentiment was unexpected.
“I would have thought that as the tools matured, user confidence would have followed suit,” she said.
Why some developers remain cautious about AI tools
The survey also revealed that a majority of developers are not using AI agents: only 31% are currently using them, 17% plan to use them, and 38% do not plan to adopt them. However, among those developers who use AI agents at work, 69% reported an increase in productivity.
Yepis believes one of the main reasons developers hesitate to adopt AI tools is “because the tools often fail to manage the complexity of real-world coding tasks. This year, just 29% of developers felt that AI tools were capable of handling complex problems, a drop from 35% last year.”
Moreover, 66% cited “AI solutions that are almost right, but not quite” as their top frustration, Yepis noted. “This frequently leads to the second most common complaint, that ‘debugging AI-generated code is more time-consuming.’”
While AI tools are designed to improve productivity and efficiency, Yepis said, “developers are finding themselves spending more time resolving code issues due to the use of AI than if they had just coded it themselves from the start.”
Familiarity improves perception
Interestingly, developers who use AI tools daily tend to view them more positively — 88% of daily users reported favorable views, compared to 64% among weekly users and 60% across all respondents, Yepis said.
“This pattern suggests that developers who don’t currently use AI tools may be reacting to a steep learning curve or subpar early experiences,” she said.
Developers still prefer human touch
There is also a clear preference for human input when trust, ethics, or understanding are at stake, Yepis said. Stack Overflow (84%), GitHub (67%), and YouTube (61%) were the top three community platforms developers reported using in the past year or planning to use, according to the survey.
“A majority of developers said they defer to human help when they don’t trust AI-generated answers, when they have security or ethical concerns, and when they need a deeper understanding of their code,” Yepis observed.
Not feeling the ‘vibe’
While vibe coding has emerged as a trend among less experienced developers, it also requires a high level of trust in the AI-generated output. Some 77% of respondents said vibe coding is not part of their professional development work.
“Vibe coding may lower the barrier to starting projects, but it demands a high degree of faith in the AI’s output,” Ben Matthews, senior director, engineering at Stack Overflow, told TechRepublic in an email. “That tradeoff, sacrificing reliability and code security for speed, makes it less suitable for high-stakes or complex systems.”
As software scales, so do the associated risks, Matthews added.
“Since AI systems aren’t always aware of what they don’t know, they can confidently overlook serious issues,” he explained. “This is likely leading developers to turn to vibe coding for low-risk or exploratory work, but not as a substitute for thoughtful engineering, particularly when performance and long-term maintainability are important.”
Methodology
Stack Overflow said the survey is based on more than 48,000 responses from developers in 177 countries.