2025 DORA Report
29 comments
· September 23, 2025 · pluc
rsynnott
This seems to be a poll of _users_. "Do people think it has improved _their_ productivity?" is a very different question to "Has it empirically improved aggregate productivity of a team/company/industry." People think _all_ sorts of snake oil improve their productivity; you can't trust people to self-report on things like this.
Pannoniae
There are a few explanations for this, and they're not necessarily contradictory.
1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)
2. AI does improve productivity, but only if you find your own workflow and what tasks it's good for, and many companies try to shoehorn it into things which just don't work for it.
3. AI does improve productivity, but people aren't incentivised to improve their productivity because they don't see returns from it. Hence, they just use it to work less and have the same output.
4. The previous one but instead of working less, they work at a more leisurely pace.
5. AI doesn't improve productivity; people just feel more productive because it requires less cognitive effort than actually doing the task.
Any of these is plausible, yet they have massively different underlying explanations... studies don't really show which is the case. I personally think it's mostly 2. and 3., but it could really be any of these.
rsynnott
(1) seems very plausible, if only because that is what happens with ~everything which promises to improve productivity. People are really bad at self-evaluating how productive they are, and productivity is really pretty hard to externally measure.
ACCount37
"People use AI to do the same tasks with less effort" maps onto what we've seen with other types of workplace automation - like Excel formulas or VBA scripts.
Why report to your boss that you managed to get a script to do 80% of your work, when you can just use that script quietly, and get 100% of your wage with 20% of the effort?
welshwelsh
I think it's 5.
I was very impressed when I first started using AI tools. Felt like I could get so much more done.
A couple of embarrassing production incidents later, I no longer feel that way. I always tell myself that I will check the AI's output carefully, but then end up making mistakes that wouldn't have happened if I wrote the code myself.
pydry
>1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)
It is, from what I've seen. It has the same visible effect on devs as a slot machine paying out coins when it spits out something correct. Their faces light up with delight when it finally nails something.
This would explain the study that showed a 20% decline in actual productivity where people "felt" 20% more productive.
mlinhares
Why not all? I've seen them all play out. There's also the people that are downstream of AI slop that feel less productive because now they have to clean up the shit other people produced.
Pannoniae
You're right, it kinda depends on the situation itself! And the downstream effects. Although, I'd argue that the one you're talking about isn't really caused by AI itself, that's squarely a "I can't say no to the slop because they'll take my head off" problem. In healthy places, you would just say "hell no I'm not merging slop", just as you have previously said "no I'm not merging shit copypasted from stackoverflow".
azdle
It's not even claiming that. It's only claiming that people who responded to the survey feel more productive. (Unless you assume that people taking this survey have an objective measure for their own productivity.)
> Significant productivity gains: Over 80% of respondents indicate that AI has enhanced their productivity.
_Feeling_ more productive is in line with the one proper study I've seen.
thebigspacefuck
The METR study showed even though people feel more productive they weren’t https://arxiv.org/abs/2507.09089
knes
The METR study is a joke. It surveyed only 16 devs, in the era of Sonnet 3.5.
Can we stop citing this study?
I'm not saying the DORA study is more accurate, but at least it surveyed 5,000 developers, globally, and more recently (between June 13 and July 21, 2025), which means it covers the most recent SOTA models.
Foobar8568
Well, I feel more productive, and I am more productive. On coding activities specifically, I'm not convinced: it has basically replaced SO and Google, but at the end of the day I always need and want to check reference material that I may or may not have known existed. Plenty of times, Google couldn't even find it.
So in my case, yes, but not on the activities these sellers are usually claiming.
wiz21c
> This indicates that AI outputs are perceived as useful and valuable by many of this year’s survey respondents, despite a lack of complete trust in them.
Or the respondents have hard times admitting AI can replace them :-)
I'm a bit cynical, but sometimes when I use Claude, it is downright frightening how good it is. Having coded for a lot of years, I'm sometimes a bit scared that my craft can, at times, be so easily replaced... Sure, it's not building all my code, it fails, etc., but it's a bit disturbing to see that something you have been trained in for a very long time can be done by a machine... Maybe I'm just feeling a glimpse of what others felt during the industrial revolution :-)
polotics
Well, when I use a power screwdriver I am always impressed by how much more quickly I can finish easy tasks too. I also occasionally busted a screw or three that I then had to drill out...
pluc
Straight code writing has never been the problem - it's the understanding of said code that is. When you rely on AI, and AI creates something, it might increase productivity immediately, but once you need to debug something that uses that piece of code, it will nullify that gain because you have no idea where to look. That's just one aspect of this false equivalency.
hu3
I also find it great for prompts like:
"this function should do X, spot inconsistencies, potential issues and bugs"
It's eye opening sometimes.
cogman10
So long as you view AI as a sometimes competent liar, then it can be useful.
I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc pretty fast with AI.
However, when I've asked AI "Identify performance problems or bugs in this code" I find it'll just make up nonsense. Particularly if there aren't problems with the code.
And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.
bopbopbop7
Or you aren’t as good as you think you are :-)
Almost every person I worked with that is impressed by AI generated code has been a low performer that can’t spot the simplest bugs in the code. Usually the same developers that blindly copy pasted from stack overflow before.
surgical_fire
In a report from Google, who is heavily invested in AI becoming the future, I actually expect the respondents to sound more positive about AI than they really are.
Much like in person I pretend to think AI is much more powerful and inevitable than I actually think it is. Professionally it makes very little sense to be truthful. Sincerity won't pay the bills.
riffic
DORA stands for "DevOps Research and Assessment" in case anyone was curious.
https://en.wikipedia.org/wiki/DevOps_Research_and_Assessment
mormegil
I was confused, since DORA is also the EU Digital Operational Resilience Act.
philipwhiuk
What the heck is that "DORA AI Capabilities Model" diagram trying to show?
righthand
> AI adoption among software development professionals has surged to 90%
I am proudly part of the 10%!
dionian
So the whole thing is about AI?
Fokamul
2026, year of cybersecurity. Baby, let's goo :D
Every study I've read says nobody is seeing productivity gains from AI use. Here's an AI vendor saying the opposite. Funny.