Busy beaver hunters reach numbers that overwhelm ordinary math
59 comments
·August 22, 2025
xelxebar
oersted
Numberphile has also built up a fantastic collection of videos over the years about unfathomably large finite numbers:
https://youtube.com/playlist?list=PLt5AfwLFPxWJ_FwchNAjB3xa8...
The tools required to probe and compare these monsters are so interesting.
lisper
oersted
Cool wiki linked there: https://googology.fandom.com/wiki/Googology_Wiki
Workaccount2
There is something awesome about incredibly large finite numbers. People gush about infinity, but I find it to be a pretty boring concept compared to finite numbers too large to even be written in this universe.
lisper
> People gush about infinity, but I find it to be a pretty boring
You need to look at this:
Waterluvian
Yeah!
Like, there is a perfectly finite number, but is so large that there simply isn’t enough information in the universe to encode it in any format. How cool is that to just think about for a while?
BlackFly
I think such a number is going to have strange properties: some number bigger than that unencodable number may itself be encodable, because a special pattern allows a special non-surjective recursive function to encode it. I am just wondering if there really is a smallest number for which no greater number is encodable.
It is not obvious to me that the definition of an encodable function has bounded growth: is it true that f(1) - f(0) for encodable f always has a bound given by the amount of data used to encode f? What is that bound?
chii
but couldn't you encode it as "the smallest number that cannot be encoded within the universe's matter/entropy"?
weregiraffe
You are stretching the definition of "is" here. Almost all finite numbers are impossible to actually count up to, so they exist only in the same sense that infinite numbers exist.
charcircuit
There is enough information if you assume reality is continuous. Pick a point A to be the origin. Then you can encode the number N by placing something 1/N meters away from the origin.
xelxebar
Indeed! Funnily enough, there seems to be something similar going on between defining large finite numbers and defining large countable ordinals.
dr_dshiv
And I’m assuming they can be converted to incredibly small finite numbers just as easily?
kurlberg
If you have a child who likes math I highly recommend "Really Big Numbers" by Richard Schwartz. Tons of nice illustrations of how to "take bigger and bigger steps".
"Infinity is farther away than you thought."
tocs3
I really like the Busy Beaver stuff. I wish I had been exposed to it (at least enough to play with it some) in high school. It reminds me a bit of Jorge Luis Borges' story "The Library of Babel".
Does anybody know of other interesting problems in the Busy Beaver space?
twiceaday
https://www.scottaaronson.com/papers/bb.pdf
This paper contains many conjectures around BB that could be interesting to some.
adgsfhj
Fictionally, maybe the Mandelbrot Maze mentioned in Arthur C. Clarke’s 3001:
> Their approach was more subtle; they persuaded their host machine to initiate a program which could not be completed before the end of the universe, or which - the Mandelbrot Maze was the deadliest example - involved a literally infinite series of steps.
https://archive.org/stream/SpaceOdyssey_819/3001_The_Final_O...
db48x
I loved that part of the book. I thought it was so cool how they thoroughly scrubbed the Internet of these dangerous programs, and then stored the very last copy on a disk in an ancient lava tube on the moon, just in case.
gpm
Collatz conjecture - and I would have said that even before BB was solved up to the point where it reduces to Collatz-like problems.
noiv
I wonder whether there's anything interesting on the tape after a BB(5/6) machine halts?
wodenokoto
Numberphile just did a video on subcubic graph numbers, which grow much, much faster than TREE numbers.
Do we know if they grow faster than busy beavers?
hxtk
The algorithm for BB(N) is (with some hand waving) “for each N-state Turing machine, check if it halts on an initially empty tape. If it does not, skip it. If it does, run it until it halts and count the number of steps, k. BB(N) is the maximum value of k you find.”
The problem is that the naive algorithm above requires an oracle for the Halting Problem, which every CS student learns is impossible in general for a computational model that isn’t more powerful than a Turing Machine.
So if a function can be expressed in terms of a Turing machine, busy beaver has to grow faster because eventually for some value of N, you can just encode the candidate function as a Turing machine and have Busy Beaver simulate it.
Busy Beaver hunters on the low end (trying to find the next number) have to exhaustively prove whether each Turing machine eventually halts or not at that number of states so that they can evaluate the longest running one with finite execution.
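The "run it until it halts and count the steps" part is easy to sketch; it's the halting-oracle part that is impossible. Here is a minimal toy version, assuming a machine encoding and names of my own invention, where a step budget stands in for the oracle (so it can only ever certify lower bounds, never BB(n) itself):

```python
# A Turing machine as {(state, symbol): (write, move, next_state)};
# halting is signaled by next_state == "H". The step budget replaces
# the impossible halting oracle, so a None result is inconclusive.
from collections import defaultdict

def run(machine, max_steps):
    """Run on an all-zero tape; return the step count if it halts, else None."""
    tape = defaultdict(int)
    state, head = "A", 0
    for steps in range(1, max_steps + 1):
        write, move, state = machine[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        if state == "H":
            return steps
    return None  # did not halt within the budget

# The 2-state busy beaver champion, which halts after exactly 6 steps:
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
```

Here `run(bb2, 100)` returns 6, which is BB(2). The real hunt is exactly the part this sketch punts on: proving, machine by machine, that every non-halter really never halts.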
Certhas
No. The busy beaver function grows faster than any computable function.
What's more, for any N > 549, you cannot prove (in the usual universe of mathematics) that any computable number is equal to or larger than BB(N). (At least that's my understanding of the independence results (https://wiki.bbchallenge.org/wiki/Independence_from_ZFC))
francoi8
Busy Beaver is non-computable and grows much faster than the various subcubic graph numbers (you could easily encode a subcubic graph number computation in a relatively small Turing machine).
andrepd
TREE(3) is so crazy to me. You would expect it to either be a reasonable number, or to be infinite (i.e. there's an infinite sequence of such trees).
But no. It's just an _impossibly, unfathomably large number_. Eventually there are no more trees with fewer than $i$ vertices, but only at such a stupidly, incredibly large $i$ that it makes other "googology" numbers vanish by comparison.
tromp
> it makes other "googology" numbers vanish by comparison.
I consider TREE to be medium sized in the googology universe. It obliterates Graham's Number of course, as well as Goodstein sequences and simple hydras. But it's easily bested by an ingenious matrix generalization of hydras known as the Bashicu Matrix System, even when fixed at 2 rows. 3-row BMS goes far beyond that, let alone n-row. BMS is so simple that it can be encoded in under 50 bytes [1], where TREE would take significantly more. Beyond that there are vastly greater numbers still, such as Loader's Number. I shouldn't mention Rayo's because that's no longer computable in the usual sense.
[1] https://codegolf.stackexchange.com/questions/18028/largest-n...
xenotux
There is a downvoted comment that reads "ah yes the totally new math of exponentiation". The snark is uncalled for, but that's actually the essence of this article: it talks about repeated exponentiation as if it were some profound mathematical discovery.
It isn't. The article neglects to explain what makes busy beaver numbers interesting in the first place. And I think it's symptomatic of the Quanta Magazine articles that feature on HN several times a week: a profound-sounding title and pleasant writing, but not much substance beyond that.
defrostr
I usually get drawn into their posts also and didn’t realize they were low value; my IQ must not be high enough to differentiate. What would you recommend as alternative sources?
lutusp
Don't let specialists detract from your enjoyment of Quanta-level articles. This one is well-written, makes no egregious errors, and only omits one important fact. And that fact is ...
All this talk about exponentiation, tetration and pentation has its roots in Peano arithmetic, which for reasons of logical clarity defines precisely one function -- the "successor function":
s(n) = n+1
Using just this function, Peano defines addition, multiplication and exponentiation with great clarity. Since Peano's time, people have been making adjustments, like including zero among the counting numbers, and by extending exponentiation into tetration, pentation and a few more exotic operations fully discussed in the linked article and elsewhere.
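The bootstrapping chain is short enough to write out. A sketch (function names are mine; each operation is defined purely as iteration of the one below it, with the successor at the bottom):

```python
# Hyperoperations built from nothing but the successor function.
# Each level is "apply the previous level b times".
def succ(n):
    return n + 1

def add(a, b):          # a + b: apply succ to a, b times
    return a if b == 0 else succ(add(a, b - 1))

def mul(a, b):          # a * b: add a to 0, b times
    return 0 if b == 0 else add(a, mul(a, b - 1))

def power(a, b):        # a ** b: multiply 1 by a, b times
    return 1 if b == 0 else mul(a, power(a, b - 1))

def tetrate(a, b):      # a tetrated b times: a ** (a ** (... ** a))
    return 1 if b == 0 else power(a, tetrate(a, b - 1))
```

For instance `tetrate(2, 3)` is 2^(2^2) = 16; already `tetrate(3, 3)` = 3^27 would take this naive recursion an absurdly long time, which is the whole point of the hierarchy.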
I personally would have liked to see a more complete exposition of the logical roots of the Busy Beaver challenge, and I think the missing parts would have made the article a better read, even for non-specialists. Maybe especially for those readers.
But Quanta articles are perfectly suited to their audience, people who may be inspired to look for more depth elsewhere.
QuesnayJr
There isn't really anything better. The Notices of the AMS has one feature article each issue, but those are sometimes too technical.
Anyway, this article seems fine to me. The "exponentiation" comment seems like a bizarre misreading. The article is just trying to explain how big BB(6) is. Before that, it explains what BB(n) is. To think it's solely about exponentiation you have to skip that entire section.
oersted
Indeed, I would have at least liked to get a rough understanding of the tricks they use to classify and discard Turing machines, and how they construct these enormous runtime calculations. They are clearly computing something but they are obviously not actually running the machines normally for those numbers of steps.
sfn42
It's crazy to me that we're now writing articles about the fact that a large number is large.
Hey guess what, I can imagine a number even larger. It's BB(6) + 1, isn't that amazing and interesting? Wow imagine BB(6) multiplied by a googolplex, amazing. Wow, such number.
What's the point? Numbers are infinite, what else is new?
lutusp
Hmm, what a shame that the LaTeX content wasn't properly rendered in the linked article. When I noticed this I assumed someone had forgotten to include the MathJax JavaScript library, which, when present, converts LaTeX notation into properly rendered math client-side.
As it turns out, the MathJax library is called from the article's HTML (in line 1765), but for some reason it's not working.
There's a long-running debate over whether LaTeX rendering should be a default browser feature, so far without resolution. In the meantime MathJax works pretty well unless someone disables JavaScript client-side.
Reasonable people may differ, but not being able to render LaTeX is egregious and wrong.
oersted
Try reloading, the endpoint that loads MathJax might be a bit overloaded, but it does work.
poulpy123
Ah yes the totally new math of exponentiation
brap
At some point you gotta wonder if there’s even a difference between “stops after N” and “never stops”.
I mean obviously there is, it’s the same difference between N and infinity. But… is there really?
lanternfish
In a mathematical sense - absolutely. You can tie the halting problem to many very tangible qualities - like whether a (proved) statement is true or false. A (large-n) halting program is closer to an instantly halting program not just because n is always closer to 0 than inf, but because 'large-n halting' and 'instantly halting' are ontologically similar in a way they just aren't with non-halting programs.
fuckaj
Well since we got off the Gold standard, we might need those big numbers.
dwohnitmok
Welcome to ultrafinitism!
mistercow
> In 1936, the pioneering logician Alan Turing proved that there’s no universal procedure for answering this question, which became known as the halting problem. Any method that works for some programs will fail for others, and in some cases, no method will work.
This is weirdly stated. The first sentence is correct. But trivially, either a machine that always says “yes” or a machine that always says “no” will give the correct answer for any given case.
Given a particular set of axioms, there are machines which do not halt, but which cannot be proven not to halt, and maybe that’s what the article means. But that’s going beyond the halting problem.
JohnKemeny
"this question" is Given the code of a computer program, can you tell whether it will eventually stop or run forever?
It is indeed the Halting problem. (Except that they forgot to state what the input is: the code itself.)
akoboldfrying
I think your quantifiers are in the wrong order.
Because you wrote "But trivially, either a machine...", I think you must think that the second sentence in your quote means "For every program p, there does not exist any method m that, when applied to p, determines whether p halts" -- which would clearly be false, since it allows every program to have its own method (and as you say, just 2 "constant-valued methods" suffice). But I interpret that second sentence in your quote as "There does not exist a single method m that, when applied to any program p, determines whether p halts", which is correct.
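Written out with explicit quantifiers (m ranging over methods/machines, p over programs), the two readings are:

```latex
% Trivially true: every program has *some* machine that answers correctly
% (one of the two constant machines always works):
\forall p \,\exists m \,\bigl( m(p) = \mathrm{Halts}(p) \bigr)

% Turing's theorem: no single machine answers correctly for all programs:
\neg \exists m \,\forall p \,\bigl( m(p) = \mathrm{Halts}(p) \bigr)
```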
>Given a particular set of axioms, there are machines which do not halt, but which cannot be proven not to halt, and maybe that’s what the article means.
I also think that's what the article means, and I think it's a very nice way of putting it, though it's an interesting enough wrinkle to discuss further.
Pentation? How quaint. For other Large Number fans, David Metzler has a wonderful playlist that goes way down the rabbit hole of the fast growing hierarchy:
https://www.youtube.com/playlist?list=PL3A50BB9C34AB36B3
Highly recommended.
Almost all of these mind-bogglingly large numbers are built around recursion, like the Ackermann function, which effectively has an argument for the number of Knuth up arrows to use. Then you can start thinking about feeding the Ackermann function into that slot and enjoy the sense of vertigo at how insane that becomes.
I find it fascinating how quickly the machinery of specifying large numbers probes the limits of what's definable within the mathematical systems we use.