
Q-learning is not yet scalable

19 comments · June 15, 2025

lalaland1125

This blog post is unfortunately missing what I consider the bigger reason why Q learning is not scalable:

As the horizon increases, the number of possible states (usually) increases exponentially. This means you need exponentially more data to have any hope of training a Q function that can handle those states.

This is less of an issue for on-policy learning, because only near-policy states matter, and on-policy learning explicitly samples only those states. So even though there are exponentially many possible states, your training data is laser-focused on the important ones.
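A back-of-the-envelope sketch of that coverage gap, with made-up numbers for branching factor, horizon, and episode count (purely illustrative, not figures from the post):

```python
# Hypothetical numbers: branching factor b, horizon H, N on-policy episodes.
# Worst-case reachable states grow like b**H, while N on-policy episodes
# visit at most N * H states.
b, H, N = 4, 50, 1_000_000

reachable = b ** H      # worst-case number of distinct reachable states
visited = N * H         # upper bound on states actually sampled on-policy

print(f"worst-case reachable states: {reachable:.2e}")
print(f"states visited on-policy:    {visited:.2e}")
print(f"coverage fraction:           {visited / reachable:.2e}")
```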

elchananHaas

I think the article's analysis of overestimation bias is correct. The issue is that, because of the max operator in Q-learning, noise gets amplified across timesteps. Methods that reduce this bias, such as Double Q-learning (https://arxiv.org/abs/1509.06461), have been successful in improving RL agents' performance. Studies have found that the effect is even stronger for states the network has rarely visited.
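A toy numerical illustration of that max-induced bias and of the Double Q-learning idea of decoupling action selection from evaluation (the setup is made up; it is not an experiment from the article or the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: every action has true value 0, but each Q estimate carries
# zero-mean noise. Taking a max over noisy estimates is biased upward;
# selecting the argmax with one estimator and evaluating it with an
# independent one (the Double Q-learning trick) removes most of that bias.
n_actions, n_trials = 10, 100_000
q_a = rng.normal(0.0, 1.0, size=(n_trials, n_actions))  # estimator A
q_b = rng.normal(0.0, 1.0, size=(n_trials, n_actions))  # estimator B

single = q_a.max(axis=1)                         # max_a Q_A(s, a)
best_a = q_a.argmax(axis=1)                      # argmax chosen by A ...
double = q_b[np.arange(n_trials), best_a]        # ... evaluated by B

print("true max value:         0.0")
print(f"single-estimator mean: {single.mean():+.3f}")  # ~ +1.5 for 10 actions
print(f"double-estimator mean: {double.mean():+.3f}")  # ~  0.0
```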

An exponential number of states only matters if there is no pattern to them. If there is structure the network can learn, then it can perform well; that is a strength of deep learning, not a weakness. The trick is getting the right training objective, which the article claims Q-learning does not provide.

I do wonder if MuZero and other model-based RL systems are the solution to the author's concerns. MuZero can reanalyze prior trajectories to improve training efficiency, and Monte Carlo Tree Search (MCTS) is a principled way to perform horizon reduction by unrolling the model multiple steps. The max operator in MCTS could cause similar issues, but the search progressing deeper counteracts this.
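As an aside on what "horizon reduction by unrolling" buys: with an n-step bootstrap target, a trajectory of length H needs roughly H/n bootstrap (max) operations instead of H. A minimal sketch, with hypothetical function names and a made-up discount factor:

```python
GAMMA = 0.99  # hypothetical discount factor

def one_step_target(r, max_q_next):
    """Standard Q-learning target: bootstrap after a single step."""
    return r + GAMMA * max_q_next

def n_step_target(rewards, max_q_boot):
    """n-step target: sum n real rewards, then bootstrap once.

    rewards:    [r_t, ..., r_{t+n-1}] observed along the trajectory
    max_q_boot: max_a Q(s_{t+n}, a) at the state where we bootstrap
    """
    n = len(rewards)
    g = sum(GAMMA ** k * r for k, r in enumerate(rewards))
    return g + GAMMA ** n * max_q_boot
```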

jhrmnn

Is this essentially the same as the difference between vanilla regular-grid integration and importance-sampling Monte Carlo integration?

Ericson2314

https://news.ycombinator.com/item?id=44280505 I think that thread might help?

Total layman here, but maybe some tasks are "uniform" despite being "deep" in such a way that poor samples still suffice? I would call those "ergodic" tasks. But surely there are other tasks where this is not the case?

lalaland1125

Good clarification. I have edited my post accordingly.

There are situations where the number of states grows much more slowly than exponentially.

Those situations are a good fit for Q-learning.

arthurcolle

Feel the Majorana-1

whatshisface

The benefit of off-policy learning is fundamentally limited by the fact that data from ineffective early exploration isn't very useful for improving later, more refined policies. It's clear if you think of a few examples: chess blunders, spasmodic movement, or failing to solve a puzzle. This becomes especially clear once you realize that data only becomes off-policy when it describes something the policy would not do. I think the solution to this problem is (unfortunately) related to the need for better generalization / sample efficiency.

getnormality

Doesn't this claim prove too much? What about the cited example of the robot dog that learned to walk in 20 minutes with off-policy learning? Or are you making a more nuanced point?

paraschopra

Humans actually do both. We learn on-policy by exploring the consequences of our own behavior. But we also learn off-policy, say from expert demonstrations (the difference being that we can tell good behaviors from bad, and learn from a filtered set of what we consider good behaviors). In most off-policy RL, a lot of the behaviors are bad, yet they still end up in the training set, which leads to slower training.

s-mon

While I like the blog post, I think the use of unexplained acronyms undermines its chance to be useful to a wider audience. Small nit: make sure acronyms and jargon are explained.

briandw

This post assumes that you know quite a bit about RL already. If you really want to dig into RL, this intro course from David Silver (DeepMind) is excellent: https://youtu.be/2pWv7GOvuf0?si=CmFJHNnNqraL5i0s

ArtRichards

Thank you for this link.

andy_xor_andrew

The magic thing about off-policy techniques such as Q-learning is that they will converge on an optimal policy even if they only ever see sub-optimal training data.

For example, you can take a dataset of chess games from agents that move completely at random (with no strategy at all), use it as input for Q-learning, and it will still converge on an optimal policy (albeit more slowly than if you had higher-quality data).
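A minimal tabular sketch of that property on a toy chain MDP rather than chess (environment, dataset size, and hyperparameters here are invented for illustration): transitions are collected with a uniformly random behavior policy, yet Q-learning on that fixed dataset still recovers the optimal "always go right" policy.

```python
import random

# Toy deterministic chain MDP: states 0..4, actions 0=left / 1=right,
# reward 1 for reaching state 4 (terminal).
N_STATES, GOAL, GAMMA, ALPHA = 5, 4, 0.9, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

# 1) Collect transitions with a uniformly random behavior policy.
random.seed(0)
dataset = []
for _ in range(300):
    s, done = 0, False
    while not done:
        a = random.randint(0, 1)
        s2, r, done = step(s, a)
        dataset.append((s, a, r, s2, done))
        s = s2

# 2) Off-policy Q-learning on that fixed, randomly generated dataset.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(50):                          # sweep the dataset repeatedly
    for s, a, r, s2, done in dataset:
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])

greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print("greedy policy (1 = right):", greedy)  # expect 1 for states 0..3
```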

Ericson2314

I would think this being true is the definition of the task being "ergodic" (distorting that term slightly, maybe). But I would also expect non-ergodic tasks to exist.

andy_xor_andrew

The article mentions that AlphaGo/MuZero were not based on Q-learning. I'm no expert, but I thought AlphaGo was based on DeepMind's "deep Q-learning"? Is that not right?

energy123

DeepMind's earlier success with Atari was based on off-policy deep Q-learning (DQN).

AndrewKemendo

Q-learning isn't scalable because of the stream barrier; however, streaming DRL (TD(λ)) is scalable:

https://arxiv.org/abs/2410.14606

Note that this is from Turing Award winner Richard Sutton's lab at the University of Alberta.

RL works
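For reference, here is the incremental update TD(λ) refers to, in its classic tabular form with eligibility traces on a toy random walk; this is a sketch of the textbook algorithm, not the deep streaming method from the linked paper (hyperparameters are made up).

```python
import random

# Classic tabular TD(lambda) with accumulating eligibility traces on a
# 5-state random walk: states 0..6, with 0 and 6 terminal, reward 1 for
# terminating on the right. True values of states 1..5 are 1/6 .. 5/6.
N, GAMMA, LAM, ALPHA = 5, 1.0, 0.9, 0.05
V = [0.0] * (N + 2)

random.seed(0)
for _ in range(5000):
    e = [0.0] * (N + 2)       # eligibility traces, reset each episode
    s = 3                     # start in the middle
    while s not in (0, N + 1):
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == N + 1 else 0.0
        delta = r + GAMMA * V[s2] - V[s]   # TD error
        e[s] += 1.0                        # accumulate trace for current state
        for i in range(N + 2):             # one incremental, online update
            V[i] += ALPHA * delta * e[i]
            e[i] *= GAMMA * LAM            # decay all traces
        s = s2

print([round(V[i], 2) for i in range(1, N + 1)])  # ≈ [0.17, 0.33, 0.5, 0.67, 0.83]
```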

getnormality

But does this address scaling to long-horizon tasks, which is what the article is about?

Onavo

Q-learning is great as a "hello world" RL project for teaching undergraduates.