
Just Ask for Generalization (2021)

xg15

(2021), but still very interesting. The "post-overfitting" training strategy in particular is unexpected.

luckystarr

I vaguely remember this being observed when training GPT-3 (probably?) as well. Training just went on and on, and the error rose and then came down again, like a phase transition in the model.
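(The up-then-down error curve described here matches the "grokking" phenomenon reported by Power et al. (2021) on small algorithmic tasks, which the article discusses. Below is a minimal sketch of that setup, assuming a PyTorch environment: a small network on modular addition, trained far past the point where training accuracy saturates, with strong weight decay. All hyperparameters are illustrative, not the paper's.)

# Sketch of a grokking-style run: train long past overfitting and watch
# validation accuracy, which typically jumps much later than train accuracy.
import torch
import torch.nn as nn
import torch.nn.functional as F

P = 97  # modulus for the synthetic task: predict (a + b) mod P
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
train_idx, val_idx = perm[:split], perm[split:]

def encode(idx):
    # One-hot encode both operands and concatenate into a 2P-dim input.
    return F.one_hot(pairs[idx], num_classes=P).float().reshape(len(idx), 2 * P)

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Weight decay matters here: without regularization the delayed
# generalization often never shows up.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

x_tr, y_tr = encode(train_idx), labels[train_idx]
x_va, y_va = encode(val_idx), labels[val_idx]

for step in range(50_000):  # keep going long after train error hits zero
    opt.zero_grad()
    loss = loss_fn(model(x_tr), y_tr)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr_acc = (model(x_tr).argmax(-1) == y_tr).float().mean()
            va_acc = (model(x_va).argmax(-1) == y_va).float().mean()
        # Typical trace: train accuracy reaches ~1.0 early while validation
        # accuracy sits near chance, then climbs abruptly much later.
        print(f"step {step}: train acc {tr_acc:.2f}, val acc {va_acc:.2f}")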

esafak

The low sample efficiency of RL is well explained.
