Think of a Number
June 15, 2025

BlackFingolfin
A follow-up post is at https://xenaproject.wordpress.com/2025/03/16/think-of-a-numb...
AnotherGoodName
A great example of this is to ask an AI to ingest an advanced maths paper and restate it with detailed annotations. This should be simple, but the AI fails at it.
A lot of maths is terse. It can take years to grok a very advanced topic. For example, the ABC conjecture is supposed to have been solved by https://en.wikipedia.org/wiki/Inter-universal_Teichm%C3%BCll..., but that theory is tough even for the smartest minds, so whether it's actually solved is still up in the air: not enough mathematicians grok it yet to have a consensus. It hasn't been dismissed as nonsense, and the paper appears to make sense. It's just a very advanced topic that takes years to understand.
So, as someone wanting to understand such topics, you may be tempted to have an AI read the paper and produce annotations and summaries. You might also be tempted to ask it for numeric examples of the formulas.
Guess what happens? COMPLETE AND TOTAL FAILURE. The AI can't do it. Because nobody has posted worked numeric examples or annotations of the paper online, there's nothing for the AI to go on. It produces numeric examples with mistakes that don't even match the statement they're meant to illustrate. Often it gives up with statements like, "At this point the numeric example fails to solve the solution but you can imagine if it did". You can ask it to try again and again, but it just keeps failing. Even simple, well-known papers generally don't work unless someone has already posted a simple explanation online that it can regurgitate.
Which is pretty damning, right? Reading a paper, giving numeric examples of what it states, and giving plain-English summaries of its densest portions should be exactly what a language-processing system does best. We're not even asking it to come up with original ideas here; we're asking it to summarise well-known mathematical papers. The only time I've seen it succeed is when someone has already posted such an explanation on MathOverflow.
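(For contrast, here is the kind of numeric example being asked for, written by hand for the basic inequality behind the abc conjecture — a sketch, not taken from any paper; the search bound of 1000 is arbitrary:)

```python
from math import gcd

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

# The abc conjecture concerns coprime triples a + b = c with
# c > rad(a*b*c); such "abc hits" should be rare. Search a small
# range for them.
hits = [(a, c - a, c)
        for c in range(3, 1000)
        for a in range(1, c // 2 + 1)
        if gcd(a, c - a) == 1 and c > rad(a * (c - a) * c)]

print(hits[0])  # smallest hit: (1, 8, 9), since rad(1*8*9) = rad(72) = 6 < 9
```

Checking an example like (1, 8, 9) by hand takes seconds — which is what makes it so striking when a language model can't produce one correctly.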
jordigh
> It's not disproven as nonsense, the paper appears to make sense
Not obviously utter nonsense, but a couple of mathematicians who have studied it have claimed to have found gaps and were unsatisfied with the resolution to those gaps that Mochizuki offered.
It's kind of like, well, LLM output. It has the right shape, but under scrutiny it seems to fall apart. Plausible-looking, but probably nonsense.
Mathematics is such a wide field, and the questions asked here are ill-defined.
If the claim is "the AI founder bros are hyping it up and it's not as good as they claim", I think we all agree that's true. LLMs are good, but exactly how good depends on many subjective points.
If the question is "can we come up with questions that are easy for some tiny niche set of experts but basically impossible for an LLM", I think the answer will always be "yes", especially if you make the set of experts more and more niche every time.
If the question is "will mathematicians be unemployed in a few years", the answer is obviously "no".
If the question is "can LLMs be used to speed up mathematics research", the answer is "yes and no, depending on what you're doing".