81,000 people said they don't feel in control of their AI tools. I built a linter for that.

Source: DEV Community
Yesterday, Anthropic published the results of the largest qualitative study ever conducted on AI — 80,508 interviews across 159 countries. The findings are fascinating, but three numbers jumped out at me:

- 27% worry about unreliability. AI doing unexpected things is the #1 concern — the only category where the negatives outweighed the positives.
- 22% worry about losing autonomy and agency. People fear not understanding what AI is doing under the hood, or feeling like AI is drawing the lines instead of them.
- 16% worry about cognitive atrophy. People fear becoming dependent on tools they don't understand, losing the ability to think critically about what those tools are doing.

These are abstract concerns when we talk about AI in general, but they become very concrete when you look at how developers configure Claude Code. I'm one of those 27% and 22%. I use Claude Code daily, and what worries me isn't the model itself — it's that everything is changing so fast that it's literally hard to follow.