Scribble-based forecasting and AI 2027
7 comments · June 30, 2025
empiko
To be honest, I expected the punchline to be about how randomly drawing lines is the same nonsense as using simplistic mathematical modeling without considering the underlying phenomenon. But the punchline never came.
Predicting AI is more or less impossible because we have no idea about its properties. With other technologies, we can reason about how small or how fast a component can get, and this gives us physical limitations that we can observe. With AI, we throw in data and we either are or are not surprised by the behavior the model exhibits. With the few datapoints we have, it seems that more compute and more data usually lead to better performance, but that is more or less everything we can say about it; there is no theory behind it that would guarantee us the gains for the next 10x.
crabl
Interesting! My first thought looking at the scribble chart was "isn't this Monte Carlo simulation?" but reading further it seems more aligned with the "third way" that William Briggs describes in his book Uncertainty[1]. He argues we should focus on direct probability statements about observables over getting lost in parameter estimation or hypothesis testing.
[1] https://link.springer.com/book/10.1007/978-3-319-39756-6
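For illustration, here's a minimal sketch of what that Monte Carlo reading might look like: sample trajectories from a few subjective shape families, then make a direct probability statement about the observable itself. The shape families, parameter ranges, and threshold below are all hypothetical.

```python
import math
import random

YEARS = list(range(2025, 2051))

def sigmoid_takeoff(rate, midpoint):
    return [1 / (1 + math.exp(-rate * (y - midpoint))) for y in YEARS]

def plateau(ceiling, rate):
    return [ceiling * (1 - math.exp(-rate * (y - 2025))) for y in YEARS]

def slow_linear(slope):
    return [min(1.0, slope * (y - 2025)) for y in YEARS]

def sample_scribble():
    """One 'scribble': pick a shape family, then pick its parameters."""
    shape = random.choice(["takeoff", "plateau", "linear"])
    if shape == "takeoff":
        return sigmoid_takeoff(random.uniform(0.3, 1.0), random.uniform(2028, 2045))
    if shape == "plateau":
        return plateau(random.uniform(0.3, 0.8), random.uniform(0.05, 0.3))
    return slow_linear(random.uniform(0.005, 0.03))

scribbles = [sample_scribble() for _ in range(10_000)]

# Briggs-style statement about the observable, no parameter estimation:
threshold = 0.9                      # hypothetical capability level
idx_2035 = YEARS.index(2035)
p = sum(s[idx_2035] >= threshold for s in scribbles) / len(scribbles)
print(f"P(capability >= {threshold} by 2035) ~= {p:.2f}")
```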
Fraterkes
I'm sorry, I think the line scribbling idea is neat, but the most salient part of this prediction (how long is this going to take) depends utterly on the scale of the x-axis. If you made x go to 2200 instead of 2050, you could overlay the exact same set of "plausible" lines.
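To make that concrete (the crossing point is made up): the same normalized line implies wildly different arrival dates depending on where the axis ends.

```python
# The same normalized "plausible" shape, read off two different x-axes.
crossing_fraction = 0.6   # the line hits the threshold 60% of the way across

for start, end in [(2025, 2050), (2025, 2200)]:
    year = start + crossing_fraction * (end - start)
    print(f"x-axis ending in {end}: same line implies ~{year:.0f}")
```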
keeganpoppen
this is actually quite brilliant. and articulates the value and utility of subjective forecasting-- something i too find somewhat underrated-- extremely clearly and convincingly. and the same goes for the biases we have toward reducing things to a mathematical model and then treating that model as more "credible", despite the fact that (1) there's an infinite universe of possible models, so you can use them to "say" whatever you want anyway, and (2) modeling complects the thing being modeled with some mathematical phenomenon, which is not always a profitable approach.
the scribble method is, of course, quite sensitive to the number of hypotheses you choose to consider, as it effectively considers them all to be of equal probability, but it also surfaces a lot of interesting interactions between different hypotheses that have nothing to do with each other, but still have effectively the "same" prediction at various points in time. and i don't see any reason that you can't just be thoughtful about what "shapes" you choose to include and in what quantity-- basically like a meta-subjective model of which models are most likely or something haha. that said, there's also some value in the low-res aspect of just drawing the line-- you can articulate exactly what path you have in mind without having to pin that thinking to some model that doesn't actually add anything to the prediction other than fitting the same shape as what is in your mind.
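for illustration, a tiny sketch of that meta-subjective weighting-- every label, weight, and trajectory here is hypothetical:

```python
# Give each scribble a subjective weight instead of counting every line equally.
weighted_scribbles = [
    # (label, subjective weight, trajectory sampled at a few future years)
    ("fast takeoff", 0.1, [0.1, 0.3, 0.9, 1.0]),
    ("steady climb", 0.5, [0.1, 0.2, 0.4, 0.6]),
    ("plateau",      0.4, [0.1, 0.2, 0.25, 0.25]),
]

def weighted_probability(scribbles, year_idx, threshold):
    """P(observable >= threshold at year_idx) under the subjective weights."""
    total = sum(w for _, w, _ in scribbles)
    hit = sum(w for _, w, traj in scribbles if traj[year_idx] >= threshold)
    return hit / total

print(weighted_probability(weighted_scribbles, 2, 0.5))  # 0.1 with these weights
```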
groby_b
At least for me, the core criticism of AI 2027 was always that it was an extremely simplistic "number go up, therefore AGI", with some nice fiction-y words around it.
The scribble model kind-of hints at what a better forecast would've done - you start from the scribbles and ask "what would it take to get that line, and how'd we get there". And I love that the initial set of scribbles will, amongst other things, expose your biases. (Because you draw the set of scribbles that seems plausible to you, a priori)
The fact that it can both guide you towards exploring alternatives and expose your biases, while being extremely simple - marvellous work.
Definitely going to incorporate this into my reasoning toolkit!
ben_w
To me, 2027 looks like a case of writing the conclusion first and then trying to explain backwards how it happens.
If everything goes "perfectly", then the logic works (to an extent, but the increasing rate of returns is a suspicious assumption baked into it).
But everything must go perfectly for that, including all the productivity multipliers being independent and the USA deciding to take this genuinely seriously (not fake seriously, in the form of politicians saying "we're taking this seriously" and not doing much) and therefore rushing the target, no expenses spared, as if it's actually an existential threat. I see no way this would be a baseline scenario.
Another useful trick: plot the same data several ways. E.g. if you were playing with Moore's law, you might plot (log) transistors/cm², ops/sec, clock speed, ops/sec/$, etc., and their inverses, against time, as well as things like "how many digits of π you can compute for $1" or "multiples of total world compute in 1970", and do the same extrapolation trick on each.
You _should_ expect to see roughly comparable results, but often you don't, and when you don't, it can reveal hidden assumptions or flawed thinking.
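A toy version of the check, with invented numbers and a naive least-squares trend on log values, might look like:

```python
# Derive several views of one series, fit the same naive log-linear trend
# to each, and compare the implied doubling times. All numbers are invented.
import math

years = [2000, 2005, 2010, 2015, 2020]
transistors = [4e7, 3e8, 2e9, 8e9, 4e10]   # hypothetical transistor counts
cost_usd = [500, 400, 350, 300, 300]        # hypothetical chip prices

views = {
    "transistors":       transistors,
    "1/transistors":     [1 / t for t in transistors],
    "transistors per $": [t / c for t, c in zip(transistors, cost_usd)],
}

def log_linear_slope(xs, ys):
    """Least-squares slope of log(y) vs x -- the naive extrapolation trend."""
    logs = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(logs) / n
    num = sum((x - mx) * (ly - my) for x, ly in zip(xs, logs))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

for name, ys in views.items():
    slope = log_linear_slope(years, ys)
    print(f"{name:>17}: doubling time ~ {math.log(2) / abs(slope):.1f} years")
```

Here the raw count and its inverse agree by construction, while the per-dollar view drifts slightly; a larger gap of that kind is exactly the hidden-assumption signal.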