Surprises in Logic (2016)
24 comments
April 22, 2025
Xcelerate
Another weird one related to Gödel’s theorems is Löb’s theorem: given a formal system F satisfying the usual derivability conditions (Peano arithmetic, say) and a sentence s, if F proves “if s is provable in F, then s,” then F also proves s. That is:
If F ⊢ Prov_F(“s”) → s, then F ⊢ s
Which is strange because you might think that proving “if s is provable, then s” would be possible regardless of whether s is actually provable. But Löb’s theorem shows that such self-referential statements can only be proven when s itself is already provable.
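For reference, the standard proof runs through the Hilbert–Bernays–Löb derivability conditions; a sketch (writing □ for Prov_F, with the usual diagonal-lemma fixed point) goes roughly like this:

```latex
% Derivability conditions, writing \Box for Prov_F:
%   D1: if F \vdash s, then F \vdash \Box s
%   D2: F \vdash \Box(s \to t) \to (\Box s \to \Box t)
%   D3: F \vdash \Box s \to \Box\Box s
\begin{align*}
&\text{Assume } F \vdash \Box s \to s;\ \text{by the diagonal lemma pick } p
  \text{ with } F \vdash p \leftrightarrow (\Box p \to s).\\
&F \vdash \Box p \to \Box(\Box p \to s) &&\text{(D1, D2 on the fixed point)}\\
&F \vdash \Box p \to (\Box\Box p \to \Box s) &&\text{(D2)}\\
&F \vdash \Box p \to \Box s &&\text{(D3)}\\
&F \vdash \Box p \to s &&\text{(the assumption)}\\
&F \vdash p &&\text{(fixed point, right to left)}\\
&F \vdash \Box p &&\text{(D1)}\\
&F \vdash s &&\text{(modus ponens)}
\end{align*}
```

The self-referential sentence p ("if I am provable, then s") does all the work, which is why the theorem feels so strange.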
dvt
> If F ⊢ Prov_F(“s”) → s, then F ⊢ s
This is called "Löb’s hypothesis" and it's an incredible piece of logical machinery[1]. If you truly understand it, it's pretty mind-blowing that it's actually a logically sound statement. It's one of my favorite ways to prove Gödel's second incompleteness theorem.
[1] https://categorified.net/FreshmanSeminar2014/Lobs-Theorem.pd...
nyrikki
One of the huge sources of map/territory issues is that we use systems that let us avoid the problem discussed here.
Definitely worth spending time on.
vajrabum
I would have thought that the proof shows this problem is unavoidable. How in your view do we avoid this problem?
nyrikki
You don't avoid it, you realize that there are fundamental limits and try to find ways to still get work done.
While you can't have the territory itself, road, plat, and topographical maps may each be incomplete, but all have their uses.
jxmorris12
I'm currently a machine learning grad student taking a meta-complexity class and came across this blog post. I found the whole thing very interesting. In particular the idea that some things are uncomputable seems fundamentally unaddressed in ML.
We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data. Clearly there's more to the story here.
dwohnitmok
> We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data.
I don't think the assumption is that strong. The assumption is rather that human learning is computable and therefore a machine equivalent of it should be too.
nyrikki
While I am too old to say what is taught today, this is exactly the map-vs-territory issue I mentioned in my other comment.
It is all there in what you would have been taught, but hidden, because we tend to avoid the hay-in-the-haystack problems and focus on the needles; we don't have access to the hay.
As an example that can cause huge arguments: if you used Rudin for analysis, go look for an equality operator on the reals in that book. Because every construction of the reals yields a measure-zero set, it is actually impossible to prove the equality of two arbitrary real numbers. ZFC uses constructibility, Spivak uses Cauchy sequences, etc.
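To make that concrete, here is a toy Python sketch of my own (not from any textbook): represent a computable real as an oracle that returns rational approximations. Inequality of two such reals is semidecidable, but equality can never be confirmed in finite time.

```python
from fractions import Fraction

def apart(x, y, max_steps=50):
    """Semidecide x != y for computable reals given as approximation oracles:
    x(n) returns a rational within Fraction(1, 2**n) of the true value.
    Returns True once a witness of inequality appears; returns None after
    max_steps -- equality itself can never be confirmed in finite time."""
    for n in range(1, max_steps + 1):
        eps = Fraction(1, 2 ** n)
        # If the approximations differ by more than the combined error bound,
        # the underlying reals must genuinely differ.
        if abs(x(n) - y(n)) > 2 * eps:
            return True
    return None

# Two oracles for the *same* real (1/3), computed differently:
third_exact = lambda n: Fraction(1, 3)
third_decimal = lambda n: Fraction(10 ** n // 3, 10 ** n)  # truncated decimal

# An oracle for a different real:
half = lambda n: Fraction(1, 2)

print(apart(third_exact, half))           # True: inequality is witnessed
print(apart(third_exact, third_decimal))  # None: equality never confirmed
```

No matter how many steps you run, a `None` never distinguishes "equal" from "not yet unequal", which is the point.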
If you look at the paper[1] on the equivalence of PAC learning and VC dimension, it is all there: framing learning as a decision problem, set shattering, etc.
Richardson's theorem, especially in light of more recent papers, is another lens.
With the idea that all models are wrong but some are useful, curricula do seem to leave these concepts to postgraduate classes, often hiding or ignoring them; in descriptive complexity theory they should have been front and center, IMHO.
Assuming something is computable or learnable is important for finding pathological cases that are useful; just don't confuse the map with the territory.
We have enough examples to know the neuron-inspired model is wrong, but the proposed models we have found aren't useful yet, and what this post presents suggests that may always hold.
Physics and other fields make similar assumptions, such as the assumption that Laplacian determinism is true, despite counterexamples.
Gödel, Rice, Turing, etc. may be proven wrong some day, but right now the halting problem ~= the frame problem ~= symbol grounding ~= system identification in the general case is the safe bet.
But that doesn't help get work done, or possibly find new math that changes that.
mcphage
I don't think you need anything fancy to tackle the "surprise examination" or "unexpected hanging" paradox. This is my take on it, at least:
> The teacher says one day he'll give a quiz and it will be a surprise. So the kids think "well, it can't be on the last day then—we'd know it was coming." And then they think "well, so it can't be on the day before the last day, either!—we'd know it was coming." And so on... and they convince themselves it can't happen at all.
> But then the teacher gives it the very next day, and they're completely surprised.
The students convince themselves that it can't happen at all... and that's well and good, but once they admit that as an option, they have to include it in their argument, and if they do so, their entire argument falls apart immediately.
Consider the first time through: "It can't be on the last day, because we'd know it was coming, and so couldn't be a surprise." Fine.
Now compare the second time through: "If we get to the last day, then either it will be on that day, or it won't happen at all. We don't know which, so if it did happen on that day, it would count as a surprise." Now you can't exclude any day; the whole structure of the argument falls apart.
Basically, they start with a bunch of premises, arrive at a contradiction, and conclude some new possibility. But if you stop there, you just end up with a contradiction and can't conclude anything.
So you need to restart your argument, with your new possibility as one of the premises. And now you don't get to a contradiction at all.
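As a toy sketch of this in Python (a model of my own framing, nothing standard): treat the possible outcomes as a set, and repeatedly eliminate any day that would be predictable on its own morning. Once "no quiz at all" is admitted as an outcome, the backward elimination never gets started.

```python
def eliminate(outcomes):
    """Backward-elimination toy model. `outcomes` is the set of epistemically
    possible outcomes: quiz days as ints, plus None for "no quiz at all".
    A day is eliminated ("predictable") when, on its morning, it is the only
    outcome still possible; repeat until nothing more can be eliminated."""
    possible = set(outcomes)
    changed = True
    while changed:
        changed = False
        days = sorted(d for d in possible if d is not None)
        if days:
            last = days[-1]
            # On the morning of the last live day, all earlier days have passed;
            # what remains is that day itself, or "no quiz" if that's an option.
            remaining = {o for o in possible if o is None or o >= last}
            if remaining == {last}:
                possible.discard(last)
                changed = True
    return possible

week = {1, 2, 3, 4, 5}
print(eliminate(week))           # set(): every day eliminated, the "paradox"
print(eliminate(week | {None}))  # nothing eliminated with "no quiz" as a premise
```

With only days 1-5 as outcomes, the loop empties the set (the students' contradiction); add None as a premise and no day is ever the sole remaining outcome.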
jerf
I can't help but think the "surprise examination paradox" rests too much in English equivocation for it to be a properly logical paradox. In particular, the fact that "surprise" changes over time, and the fact that if I've logically deduced that it is "impossible" for the test to occur on the last day then it is ipso facto a surprise if it happens then.
Sit down and make the argument really rigorous as to the definition of "surprise" and the fuzz disappears. You can get several different results from doing so, and that's really another way of saying the original problem is inadequately specified and not really a logical conundrum. As "logical conundrums" go, equivocation is endlessly fascinating to humans, it seems, but any conundrum that can be solved merely by being more careful, up to merely a normal level of mathematical rigor, isn't logically interesting.
astrobe_
It is like the infamous 0.999999... = 1. That one uses sloppy notation (what is "..."?) to make students think and talk about math.
jjmarr
It's not sloppy notation. It's an unambiguous infinite series of the form sum_n=1^infinity 9/10^n that converges to 1.
It's the same reason that 0.333... = 1/3. It's an infinite series that converges on 1/3.
Students learn repeating decimals before they understand infinite series.
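A quick sketch with exact rationals shows the partial sums closing in on 1; after n nines the gap is exactly 1/10^n:

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms of 9/10 + 9/100 + ...
    i.e. 0.99...9 with n nines, as an exact rational."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 5, 20):
    gap = 1 - partial_sum(n)
    print(n, gap)  # the gap is exactly 1/10**n, shrinking toward 0
```

The limit of the partial sums is 1, and "0.999..." denotes that limit, not any finite partial sum.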
mcphage
I'm not sure the "..." is sloppy notation; it can be made rigorous pretty easily. The surprise is that students expect that if two decimal expressions are distinct, the real numbers they correspond to must be distinct as well. (Even there, students have already gotten used to trailing zeros being irrelevant.)
ogogmad
You did not understand the paradox.
The word "surprise" here means that the prisoner won't know his date of execution until he is told.
[Edited]
bongodongobob
I agree. The premise itself is spurious. I've never liked this paradox because I don't think it makes sense from the get go.
robinhouston
I would encourage anyone who's intrigued by this paradox to read Timothy Chow's comprehensive paper on the subject (https://arxiv.org/abs/math/9903160).
In particular, he discusses what he calls the meta-paradox:
> The meta-paradox consists of two seemingly incompatible facts. The first is that the surprise exam paradox seems easy to resolve. Those seeing it for the first time typically have the instinctive reaction that the flaw in the students’ reasoning is obvious. Furthermore, most readers who have tried to think it through have had little difficulty resolving it to their own satisfaction.
> The second (astonishing) fact is that to date nearly a hundred papers on the paradox have been published, and still no consensus on its correct resolution has been reached. The paradox has even been called a “significant problem” for philosophy [30, chapter 7, section VII]. How can this be? Can such a ridiculous argument really be a major unsolved mystery? If not, why does paper after paper begin by brusquely dismissing all previous work and claiming that it alone presents the long-awaited simple solution that lays the paradox to rest once and for all?
> Some other paradoxes suffer from a similar meta-paradox, but the problem is especially acute in the case of the surprise examination paradox. For most other trivial-sounding paradoxes there is broad consensus on the proper resolution, whereas for the surprise exam paradox there is not even agreement on its proper formulation. Since one’s view of the meta-paradox influences the way one views the paradox itself, I must try to clear up the former before discussing the latter.
> In my view, most of the confusion has been caused by authors who have plunged into the process of “resolving” the paradox without first having a clear idea of what it means to “resolve” a paradox. The goal is poorly understood, so controversy over whether the goal has been attained is inevitable. Let me now suggest a way of thinking about the process of “resolving a paradox” that I believe dispels the meta-paradox.
mcphage
That sounds interesting—thanks for sharing, I'll check it out.
munchler
> So you need to restart your argument, with your new possibility as one of the premises. And now you don't get to a contradiction at all.
It’s amusing that you stopped here without giving an actual solution. Please do tell us, which day is the test on?
griffzhowl
But it's stipulated that the test will happen on one of the days - it's not a possibility that it won't happen at all, hence the paradox.
One resolution is that what the teacher stipulates is impossible. It should really be
"You'll have a test within the next x days but won't know which day it'll be on (unless it's the last day)"
ogogmad
> The students convince themselves that it can't happen at all... and that's well and good, but once they admit that as an option, they have to include that in their argument—and if they do so, their entire argument falls apart immediate.
Your critical thinking is bad. The first paradox happens when the prisoner concludes, by an apparently rational deduction, that the judge lied. A second paradox happens when it transpires that the judge told the truth.
The final part of Baez's article has an interesting bit:
> So, conceivably, the concept of 'standard' natural number, and the concept of 'standard' model of Peano arithmetic, are more subjective than most mathematicians think. Perhaps some of my 'standard' natural numbers are nonstandard for you! I think most mathematicians would reject this possibility... but not all.
It's probably worth elaborating why the majority of logicians (and likely most mathematicians) believe that standard natural numbers are not subjective (although my own opinion is more mixed).
Basically the crux is: do you believe that universally quantified statements such as "this machine will never halt" or "this machine will always halt" have objective truth values?
If you do, then you are implicitly subscribing to a view that the standard natural numbers objectively exist and do not depend on subjective preferences.
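A concrete illustration (my own toy example, using Goldbach's conjecture): "this program never halts" is exactly such a universally quantified statement about the naturals.

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def violates_goldbach(n):
    """True if even n >= 4 is NOT a sum of two primes."""
    return n >= 4 and n % 2 == 0 and not any(
        is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search():
    """Halts iff Goldbach's conjecture is false."""
    n = 4
    while not violates_goldbach(n):
        n += 2
    return n

# "search() never halts" is a Pi_1 statement: "for all even n >= 4,
# n is a sum of two primes." Believing it has a definite truth value
# is believing in a definite standard model of the naturals.
print(any(violates_goldbach(n) for n in range(4, 1000, 2)))  # False
```

No finite computation settles the statement; its truth value, if it has one, lives in the standard model.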