Outcome-Based Reinforcement Learning to Predict the Future
15 comments · May 27, 2025 · valine
lumost
Tokens are an awfully convenient way to describe an event.
phyalow
Tokens are just discretized state representations.
ww520
It’s the next state. So instead of spitting out words, it will spit out a whole movie, or a sequence of world states in a game or simulation.
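A minimal sketch of that idea, assuming a toy one-dimensional world state and an arbitrary 256-token vocabulary (the bin edges, range, and counts here are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical sketch: quantize a continuous world state into a fixed
# token vocabulary. Vocabulary size and state range are arbitrary choices.
N_BINS = 256                                  # token vocabulary size
EDGES = np.linspace(-1.0, 1.0, N_BINS - 1)    # uniform bin edges over [-1, 1]
CENTERS = np.linspace(-1.0, 1.0, N_BINS)      # representative value per token

def state_to_token(x: float) -> int:
    """Map a continuous state value to a token id in [0, N_BINS)."""
    return int(np.digitize(x, EDGES))

def token_to_state(t: int) -> float:
    """Map a token id back to a representative state value (lossy)."""
    return float(CENTERS[t])

# A trajectory of world states becomes a token sequence that a model can
# predict autoregressively, exactly like text.
trajectory = [0.0, 0.12, 0.25, 0.31]
tokens = [state_to_token(x) for x in trajectory]
print(tokens)   # four token ids in [0, 255]
```

Under that framing, "next state prediction" and "next token prediction" differ only in what the vocabulary indexes.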
ctoth
Do you want paperclips? Because this is how you get paperclips!
Eliminate all agents, all sources of change, all complexity - anything that could introduce unpredictability, and it suddenly becomes far easier to predict the future, no?
JoshTriplett
> Do you want paperclips? Because this is how you get paperclips!
Don't^W worry, there are many other ways of getting paperclips, and we're doing all of them.
sitkack
Even explaining how not to get paper clips gets you paper clips when you can invert the loss function. Paper clips for everyone!
vlovich123
I don't know. Paperclips are awfully useful. Would it be so bad to build more of them?
Ygg2
That's all fun and games until paperclip maximizers start looking at your blood as a source of iron.
jldugger
From the abstract
> A simple trading rule turns this calibration edge into $127 of hypothetical profit versus $92 for o1 (p = 0.037).
I'm lazy: is this hypothetical shooting fish in a barrel, or is it a real edge?
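For reference, a minimal sketch of what a "simple trading rule" on a binary prediction market might look like, assuming the model emits a probability per event and trading is frictionless. The threshold, stake, and function names are illustrative, not from the paper:

```python
# Hypothetical sketch of a "simple trading rule": bet on the side where the
# model's probability diverges from the market price by more than a
# threshold. All numbers are illustrative and not taken from the paper.
def trade(p_model: float, p_market: float,
          threshold: float = 0.05, stake: float = 1.0) -> float:
    """Return a signed position: +stake buys YES, -stake buys NO."""
    edge = p_model - p_market
    if edge > threshold:
        return stake        # model thinks YES is underpriced
    if edge < -threshold:
        return -stake       # model thinks NO is underpriced
    return 0.0              # no edge, no trade

def settle(position: float, p_market: float, outcome: int) -> float:
    """Profit for a binary contract that pays 1 if the event happens."""
    if position > 0:        # bought YES at p_market
        return position * (outcome - p_market)
    if position < 0:        # bought NO at (1 - p_market)
        return -position * ((1 - outcome) - (1 - p_market))
    return 0.0

# Example: model says 70%, market says 60%, event happens.
pos = trade(0.70, 0.60)
print(settle(pos, 0.60, outcome=1))   # 0.4: bought YES at 0.60, it paid 1.0
```

Whether a backtested edge like this is capturable in practice depends on fills, fees, and market depth that a hypothetical P&L ignores.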
nyrikki
Note the 'hypothetical profit' part. I know of several groups looking for opportunities to skim off LLM traders, exploiting their limited sensitivity, limited expressiveness, and loss of tail data.
Predictive AI is problematic no matter what tool you use. Great at demoware that doesn't deliver.
I am sure there are use cases, but as augmentation, not as a reliable approach by itself.
So instead of next-token prediction it's next-event prediction. At some point this just loops around, and we're back to teaching models to predict the next token in the sequence.