Forecaster reacts: METR's bombshell paper about AI acceleration
8 comments
April 22, 2025

nojs
I think the author is missing the point of why these forecasts put so much weight on software engineering skills. It’s not because it’s a good measure of AGI in itself, it’s because it directly impacts the pace of further AI research, which leads to runaway progress.
Claiming that the AI can’t even read a child’s drawing, for example, is therefore not super relevant to the timeline, unless you think it’s fundamentally never going to be possible.
Earw0rm
"Forecasters" are grifters preying on naive business types who are somehow unaware that an exponential and the bottom half of a sigmoid look very much like one another.
ec109685
And by 2035, AI will be able to complete tasks that would take millennia for humans to complete.
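This claim is what falls out of extrapolating METR's headline trend, that the length of tasks AI agents can complete has been doubling roughly every 7 months, with a faster ~4-month doubling in recent data. A back-of-envelope sketch (the ~1-hour 2025 starting horizon and the working-year conversion are illustrative assumptions, not figures from this thread):

```python
horizon_hours = 1.0  # assumed ~2025 task-length horizon (illustrative)
years = 10           # 2025 -> 2035

# Long-run (~7 mo) vs recent (~4 mo) doubling rates reported by METR.
for months_per_doubling in (7, 4):
    doublings = years * 12 / months_per_doubling
    future_hours = horizon_hours * 2 ** doublings
    working_years = future_hours / 2000  # assume ~2000 working hours/year
    print(f"doubling every {months_per_doubling} mo: {doublings:.1f} doublings "
          f"-> ~{working_years:,.0f} human working years by 2035")
```

The slower rate gives decades of human effort per task; only the faster recent rate, held for ten straight years, yields the "millennia" figure, which is exactly the kind of naive exponential extrapolation the parent comment is poking at.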
boxed
> This is important because I would guess that software engineering skills overestimate total progress on AGI because software engineering skills are easier to train than other skills. This is because they can be easily verified through automated testing so models can iterate quite quickly. This is very different from the real world, where tasks are messy and involve low feedback — areas that AI struggles on
Tell me you've never coded without telling me you've never coded.
nopinsight
> software engineering skills are easier to train than other skills.
I think the author meant it's easier to train (reasoning) LLMs on [coding] skills than on most other tasks. I agree with that. Data abundance, near-immediate feedback, and near-perfect simulators are why we've seen such rapid progress on most coding benchmarks so far.
I'm not sure if he included high-level software engineering skills such as designing the right software architecture for a given set of user requirements in that statement.
---
For humans, I think the fundamentals of coding are very natural and easy for people with certain mental traits, although that's obviously not the norm (which explains the high wages for some software engineers).
Coding on large, practical software systems is indeed much more complex, with all their inherent and accidental complexity. That complexity helps explain why AI agents for software engineering will require some human involvement until we actually reach full-fledged AGI.
d--b
> About the author: Peter Wildeford is a top forecaster, ranked top 1% every year since 2022.
What?
thih9
Interesting. I’m guessing this is “top 1%” based on some test or within some platform; and the name was omitted.
E.g. he is a board member at Metaculus, an online forecasting platform. Also:
> On his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus
https://theinsideview.ai/peter
> For certain projects Metaculus employs Pro Forecasters who have demonstrated excellent forecasting ability and who have a history of clearly describing their rationales.
This is comparable to another research effort with similar findings at https://ai-2027.com/. I find the proposed timelines aggressive (AGI in ~3 years), but the people behind them are exceptionally thoughtful and well versed in all the related fields.