0.9999 ≊ 1
107 comments · June 2, 2025
lcrz
Both ways are just notation. There’s nothing more real about 3/10 compared to 0.3.
Telling you otherwise might have worked as an educational “shorthand”, but there are no mathematical difficulties as long as you use good definitions of what you mean when you write them down.
The issues people have with 0.333… and 0.999… are due to two things: not understanding what the notation means and not understanding sequences and limits.
tsimionescu
I agree that ultimately both are just notations. I do think the fractional notation has some definite advantages and few disadvantages, so I think it's better to regard it as more canonical.
I disagree though that it's necessary or even useful to think of 0.99... or 0.33... as sequences or limits. It's of course possible, but it complicates a very simple concept, in my opinion, and muddies the waters of how we should be using notions such as equality or infinity.
For example, it's generally seen as a bad idea to declare that some infinite sum is actually equal to its limit, because that only applies when the series of partial sums converges. It's more rigorous to say that sigma(1/2^n) for n from 1 going to infinity converges to 1, not that it is equal to 1; or to say that lim(sigma(1/2^n)) for n from 1 to infinity = 1.
So, to say that 0.xxx... = sigma(x/10^n) for n from 1 to infinity, and to show that this is equal to 1 for x = 9, muddies the waters a bit. It still gives this impression that you need to do an infinite addition to show that 0.999... is equal to 1, when it's in fact just a notation for 9/9 = 1.
It's better in my opinion to show how to calculate the repeating decimal expansion of a fraction, and to show that there exists no fraction whose decimal expansion is 0.9... repeating.
WorldMaker
> The issues people have with 0.333… and 0.999… are due to two things: not understanding what the notation means and not understanding sequences and limits.
Also a possible third thing: not enjoying working in a Base that makes factors of 3 hard to write. Thirds seem like common enough fractions "naturally" but decimal (Base-10) makes them hard to write. It's one of the reasons there are a lot of proponents of Base-12 as a better base for people, especially children, because it has a factor of 3 and thirds have nice clean duodecimal representations. (Base-60 is another fun option; it's also Babylonian approved and how we got 60 minutes and 60 seconds as common unit sizes.)
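A quick way to see the base-12 point is long division in an arbitrary base; this is a minimal Python sketch (the `expand` helper is mine, using only the stdlib fractions module):

```python
from fractions import Fraction

def expand(frac, base, max_digits=10):
    """Return the fractional digits of `frac` (0 < frac < 1) in `base`
    via long division, stopping early if the expansion terminates."""
    digits = []
    rem = frac
    for _ in range(max_digits):
        rem *= base
        d = int(rem)          # next digit in this base
        digits.append(d)
        rem -= d
        if rem == 0:          # expansion terminates
            break
    return digits

expand(Fraction(1, 3), 10)    # [3, 3, 3, ...] -- never terminates in decimal
expand(Fraction(1, 3), 12)    # [4]            -- one clean duodecimal digit
```

In base 12, 1/3 is exactly 0.4 (since 4/12 = 1/3), which is the "nice clean duodecimal representation" the comment refers to.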
tsimionescu
You get the same problem with 0.44... + 0.55... - I don't think that makes it any easier to anyone who is confused. It's more likely just that 0.33... and 0.66... are very common and simple repeating decimals that lead to this issue.
leereeves
The fact that people are still debating whether 0.9999.... = 1 suggests that one notation is less confusing than the other.
Nobody debates whether 9/9 = 1.
olau
I was taught something of the same.
But I think it was misguided. I'll note that 1/3 is not a number, it's a calculation. So more complicated.
And fractions are generally much more complicated than the decimal system. Beyond some simple fractions that you're bound to experience in your everyday life, I don't think it makes sense to drill fractions. In the end, when you actually need to know the answer to a computation as a number, you're more likely to make a mistake because you spend your time juggling fractions instead of handling numerical instability.
Decimal notation used to be impractical because calculating with multiple digits was slow and error-prone. But that's no longer the case.
volemo
> I'll note that 1/3 is not a number, it's a calculation. So more complicated.
1/3 is a calculation the same way 42 is a calculation (4*10^1 + 2*10^0). Nothing is real except sets containing sets! /j
DemocracyFTW2
Yes, true. *BUT* 1/3 is a fraction with denominator 3. 1/5 is a fraction with another denominator, and 1/7 has yet another. So how much is 1/3 + 1/5 + 1/7? You can't just add up, you first have to multiply to get to common ground. The decimal expansions of these use the same base and are readily comparable.
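The "multiply to get to common ground" step can be made explicit; a small Python sketch with the stdlib fractions module (the variable names are mine):

```python
from fractions import Fraction
from math import lcm

a, b, c = Fraction(1, 3), Fraction(1, 5), Fraction(1, 7)

# Rewrite every fraction over the least common denominator before adding
common = lcm(a.denominator, b.denominator, c.denominator)   # 105
total = Fraction(sum(common // f.denominator * f.numerator
                     for f in (a, b, c)), common)

assert total == a + b + c == Fraction(71, 105)
```

So 1/3 + 1/5 + 1/7 = 35/105 + 21/105 + 15/105 = 71/105, which is exactly the extra bookkeeping the decimal expansions avoid.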
tsimionescu
This is ultimately a matter of definitions, and neither defining the fractions nor the decimals as the "true" representation of rationals is more or less correct.
But, operations on fractions are definitely easier than operations on decimals. And fractions have the nice property that they have finite representations for all rational numbers, whereas decimal representations are infinite even for very simple numbers, such as 1/3.
Also, if you are going to do arithmetic with infinite decimal representations, then you have to be aware that the rules are more complex than simply doing digit-by-digit operations. That is, 0.77... + 0.44... ≠ 1.11... even though 7+4 = 11. And it gets even more complex for more complicated repeating patterns, such as 0.123123123... + 0.454545... (that is, 123/999 + 45/99). I doubt there is any reason whatsoever to attempt to learn the rules for these things, given that the arithmetic of fractions is much simpler and follows from the rules for division. The fact that a handful of simple cases work in simple ways doesn't make it a good idea to try.
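Both examples above reduce mechanically once the repeating decimals are written as fractions (a repeating block of k digits equals block/(10^k − 1)); a short Python check:

```python
from fractions import Fraction

# 0.77... = 7/9 and 0.44... = 4/9; digit-by-digit addition fails because
# of the carries: the true sum is 11/9 = 1.222..., not 1.111...
assert Fraction(7, 9) + Fraction(4, 9) == Fraction(11, 9)

# The harder pattern: 0.123123... = 123/999 and 0.454545... = 45/99
s = Fraction(123, 999) + Fraction(45, 99)
assert s == Fraction(577668, 999999)   # i.e. 0.577668577668..., period 6
```

Note the period of the sum (6) is the least common multiple of the periods of the addends (3 and 2), which is part of why the digit-wise rules get complicated.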
anthk
Rationals are numbers, not calculations. They can evaluate to themselves as members of a set.
lmm
> Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions.
With that attitude how do you handle e.g. pi or sqrt(2), which it's perfectly legitimate to do arithmetic with?
tsimionescu
This is a very strange question. With repeating decimals, it is technically possible, though very complicated, to do arithmetic directly on the representations. You have to remember a bunch of extra rules, but it can be done.
However, with numbers that have non-repeating infinite decimal expansions, it is completely impossible to do arithmetic in the decimal notation. I'm not exaggerating: it's literally physically impossible to represent on paper the result of doing 3pi in decimal notation in an unambiguous form other than 3pi. It's also completely impossible to use the decimal expansion of pi to compute that pi / pi = 1.
Here, I'll show you what it would be like to try:
pi / pi
= 3.14159265358979323846264338327950288419716939937510... / 3.14159265358979323846264338327950288419716939937510...
= ?
Now, of course you can do arithmetic with certain approximations of pi. For example, I can do this:
pi / pi
≈ 3.1415 / 3.1415
= 1
Or even 3 × pi
≈ 3 × 3
= 9
But this is not doing arithmetic with the decimal expansion of pi, this is doing arithmetic with rational numbers that are close enough to pi for some purpose (that has to be defined).
anthk
pi/pi would evaluate to 1 as most proper languages would deal with pi symbolically and not arithmetically.
qsort
You don't. You keep them in symbolic form until they simplify and you do arithmetic at the last possible moment.
lmm
Sure, but when you reach that "last possible moment", what then?
AndrewDucker
Once you're dealing with irrational numbers you have to understand that all results are approximations.
lmm
Well, sure, but you should still be able to ask and answer questions like "Is pi + sqrt(2) less than or greater than 4.553?"
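Questions like this can be settled with a few known digits and exact rational bounds, never touching the full expansions; a minimal Python sketch (the pi digits are taken as given, and the names are mine):

```python
from fractions import Fraction

# Four decimal digits of each constant, as rational lower bounds:
pi_lo = Fraction(31415, 10000)     # 3.1415 < pi (known digits of pi)
r2_lo = Fraction(14142, 10000)     # candidate lower bound for sqrt(2)

# The sqrt(2) bound is certified by squaring -- pure integer arithmetic:
assert r2_lo ** 2 < 2

# The lower bounds alone already settle the comparison:
assert pi_lo + r2_lo > Fraction(4553, 1000)   # so pi + sqrt(2) > 4.553
```

The point is that "is pi + sqrt(2) > 4.553?" only needs a finite, certified approximation, whereas computing pi/pi = 1 through the expansions never terminates.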
qayxc
Not really. Like the sibling comment said - you simply keep the symbolic values. I.e. instead of 4.442882938158... you write π√2, just like you would write ⅚ and not 0.8333... In both cases you preserve the exact values. Decimal (or any other numbering system, really) approximations are only useful when you never want to do any further arithmetic with the result.
rob_c
Less approximations and more representations of complex things at times. (Just my opinion)
I prefer comparing it to complex numbers where I can't have "i" apples but I can calculate the phase difference between 2 power supplies in a circuit using such notation.
Nobody really cares about the 3rd decimal place when talking about a speeding car at a turn, but they do when talking about electrons in an accelerator, so accuracy and precision always feel mucky to talk about when dealing with irrationals (again my opinion).
cwmma
in American math classes (as opposed to science classes) you almost never expand PI or sqrt(2), you either cancel them out or leave them in the answer until the end. Maybe if it's a word problem you sub them in the very last step but the problem itself is almost certainly going to be designed so it's not an issue.
Suppafly
>in American math classes (as opposed to science classes) you almost never expand PI
Except we have some fascination with memorizing the digits of pi and having competitions for doing so for some reason.
anthk
You would use rational approximations good enough for different scales and roundings.
Suppafly
>So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.
I wonder if students from Romania are hamstrung in more advanced mathematics from being taught this way.
tsimionescu
I don't think decimals, especially repeated decimals or other infinite decimal expansions, show up much if at all in any advanced math subjects (beyond the study of decimal expansions themselves, of course). Higher math is almost exclusively symbolic. You're more likely to need to learn that "1" is just a notation for the set which contains the empty set than to learn that it's OK to add 0.22... + 0.44... = 0.66...
leereeves
I don't think they would be; I think they might even have an advantage. They'd understand that the only numbers that have infinite repeating expansions are rationals, and that decimals are, in general, just approximations.
mcphage
> we were always taught that fractions are the "real" representation of rational numbers
There is no "real" representation of rational numbers, and fractions are no more real—or fake—than decimals.
> And there simply is no number whose notation would be 0.999...
There is, though. It's 1.
anthk
Forth and Lisp users often try to use rationals first and floats later. On Scheme Lisps, you have exact->inexact and inexact->exact functions which convert rationals to floats and vice versa.
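The same exact/inexact split exists in Python's stdlib, which makes for an easy sketch of what those Scheme conversions do:

```python
from fractions import Fraction

# exact -> inexact (Scheme's exact->inexact): Fraction to float.
# 1/2 is a binary rational, so this particular conversion is lossless:
assert float(Fraction(1, 2)) == 0.5

# inexact -> exact (Scheme's inexact->exact): recover the exact value
# the float actually stores. 0.1 is NOT 1/10 in binary:
assert Fraction(0.1) == Fraction(3602879701896397, 36028797018963968)
assert Fraction(0.1) != Fraction(1, 10)
```

This is exactly the "rationals first, floats later" discipline: stay exact as long as possible, because the inexact->exact round trip exposes what the float approximation silently changed.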
bubblyworld
There's an extremely subtle point here about the hyperreals that the author glosses over (and is perhaps unaware of):
If you take 0.999... to mean the sum of 9/10^n where n ranges over every standard natural, then the author is correct that it equals 1-eps for some infinitesimal eps in the hyperreals.
This does not violate the transfer principle because there are nonstandard naturals in the hyperreals. If you take the above sum over all naturals, then 0.999... = 1 in the hyperreals too.
(this is how the transfer principle works - you map sums over N to sums over N* which includes the nonstandards as well)
The kicker is that as far as I know there cannot be any first-order predicate that distinguishes the two, so the author is on very confused ground mathematically imo.
(not to mention that defining the hyperreals in the first place requires extremely non-constructive objects like non-principal ultrafilters)
im3w1l
So something I was thinking of: a number in decimal notation can be seen as a function from the integers to {0,1,2,3,4,5,6,7,8,9}, where the digit at position n contributes digit(n) × 10^(-n) (so non-positive n give the digits left of the decimal point and positive n the digits right of it), such that only finitely many non-positive positions map to a non-zero digit.
Could you generalize this to include the hyperreals by lifting the restrictions on finitely many, and also adding in some transfinite ordinals to the domain of the function?
bubblyworld
I suspect yes - no need to introduce transfinite ordinals, you simply map from the set Z*, which is the integers but including the nonstandard ones. In fact you don't even need to remove the finiteness hypothesis, the transfer principle should guarantee that every hyperreal has such a representation since you can prove that every real does for the standard version.
(if the finiteness thing seems confusing, remember that there are infinitely large nonstandard integers in the hyperreals, and you can't tell them apart from the others "from the inside")
cbolton
The right way to approach this is to ask a question: what does 0.999... mean? What is the mathematical definition of this notation? It's not "what you get when you continue to infinity" (which is not clear). It's the value you are approaching as you continue to add digits.
When applying the correct definition for the notation (the limit of a sequence) there's no question of "do we ever get there?". The question is instead "can we get as close to the target as we want if we go far enough?". If the answer is yes, the notation can be used as another way to represent the target.
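The "as close as we want" criterion can even be checked mechanically for 0.999...; a small sketch with exact rationals (the helper names are mine):

```python
from fractions import Fraction

def partial(n):
    """0.999...9 with n nines, as an exact rational: 1 - 10**-n."""
    return 1 - Fraction(1, 10) ** n

def digits_needed(eps):
    """Smallest number of nines whose distance from 1 is below eps."""
    n = 1
    while 1 - partial(n) >= eps:
        n += 1
    return n

# For any target eps, enough digits push the distance below it:
eps = Fraction(1, 10 ** 12)
assert 1 - partial(digits_needed(eps)) < eps

# And no partial sum ever reaches 1 -- "do we ever get there?" is the
# wrong question; the limit is still exactly 1.
assert all(partial(n) < 1 for n in range(1, 50))
```

For this sequence the answer to "can we get as close as we want?" is always yes (13 digits suffice for eps = 10^-12), which is precisely what the limit definition asks.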
quitit
Where school kids tend to get stuck is that they'll hold contradictory views on how fractions can be represented.
First it'll be uncontroversial that ⅓ = 0.333... usually because it's familiar to them and they've seen it frequently with calculators.
However, they'll then get stuck on 0.999... and posit that it is not equal to 1/1, because there must "always be some infinitesimally small amount of difference from one".
Herein lies the contradiction: on one hand they accept that 0.333... is equal to ⅓, and not some infinitesimally small amount away from ⅓, but on the other hand they won't extend that standard to 0.999...
Once you tackle the problem of "you have to be consistent in your rules for representing fractions", then you've usually cracked the block in their thinking.
Another way of thinking about it is to suggest that 0.999... is indistinguishable from 1.
Suppafly
>However, they'll then get stuck on 0.999... and posit that it is not equal to 1/1, because there must "always be some infinitesimally small amount of difference from one".
Honestly teachers are half of the problem because they seem to make a game out of pointing out these sorts of contradictions instead of teaching the idea that you need "to be consistent in your rules for representing fractions".
That and every next step in math classes is the teacher explaining that most of how you were taught to think about math in the previous step was incorrect and you really should think about it this way, only to be told that again the next year.
lcrz
So the author tries to be rigorous, but falls into the same traps that the people who claim 0.9… != 1 fall into.
“0.999… = 1 - infinitesimal”
But this is simply not true. Only then do they get back to a true statement:
“Inequality between two reals can be stated this way: if you subtract a from b, the result must be a nonzero real number c”.
This post doesn’t clear things up, nor is it mathematically rigorous.
Pointing towards hyperreals is another red herring, because again there 0.999… equals 1.
hinkley
I don’t like any of his examples at the top. Look, it’s not that hard:
x = 0.999…
2x = 1.999…
2x - x = 1
x = 1
Multiplying by ten just confused things and the result doesn’t follow for most people.
derbaum
Whether you multiply by 10 or 2, the same "counter" argument from the article stands. Only now you don't have a trailing zero after infinite nines, you have a trailing 8.
hinkley
There is no eight. This is something I’ve heard actual mathematicians complain about to other actual mathematicians: the non-math public misunderstands infinite series as “imagine a number so big you can’t fathom it, and add 1 more to it.” That’s not how things work.
Going as far as you can imagine and a little farther is an infinitesimal of the real infinite.
ndsipa_pomu
I don't understand how you can even have a trailing zero after an infinite number of nines. Surely any place that someone would want to put the zero can be refuted by correctly stating that a nine goes there (it's an infinite number of them, after all) and there is literally no "last" place.
anthk
Technically you don't have an '8': you keep doing a carried sum forever. Think about it: the last 8 keeps getting bumped up to a 9 as a new 8 is appended after it, so in the limit you get the repeating 1.999... in practice.
mr_mitm
Any confusion about this should go away as soon as you make clear what exactly you are talking about. If you construct the real numbers using Cauchy sequences and define the* decimal representation of a number using a Maclaurin series at x=1/10 then it's perfectly clear that 0.9... and 1.0... are two different representations of the same number. So it's the same equivalence class, but not the same representation. Thus, if you're talking about the representation of the abstract number 1, they're not equal but equivalent. If you're talking about the numbers they represent, they're equal.
* As the example shows, the decimal representation isn't unique, so perhaps we should say "_a_ decimal representation".
dagw
The intersection between people who are both confused by this and are comfortable working with Cauchy sequences, Maclaurin series and equivalence classes, is probably pretty small.
fouronnes3
To me the most obvious proof is that there are no numbers in between 0.999... and 1. Therefore it must be the same number.
blackbear_
The fact that there are no numbers in between is not obvious at all, and has to be proven formally!
In fact, there is a (rational) number between any two distinct real numbers, therefore your proof attempt only works if you assume that 0.999... equals 1. As that is circular reasoning, it is not a valid proof.
thaumasiotes
> your proof attempt only works if you assume that 0.999... equals 1. As that is circular reasoning, it is not a valid proof.
No, his proof is fine. Take the standard definition of > as applied to decimal numbers when they're represented as strings. It's very easy to show that no x simultaneously satisfies x > 0.9999... and 1.0000... > x.
blackbear_
You are right, that works if you assume that every number can be represented as a decimal string.
That is indeed true for real numbers, but not for hyper-reals (https://en.m.wikipedia.org/wiki/Hyperreal_number), which is what I had in mind when I originally said that it was not obvious.
throwaway31131
I usually use this idea to show that 0.999... is not less than 1 (or more simply, there is no nonzero number you can add to 0.999... to make it 1), then because it’s not greater than 1, and there are only three possibilities (>, <, =), they must be equal.
murkle
Exactly, add them up and divide by 2. What's the answer?
fouronnes3
TFA goes into this somehow but I fail to see why it's so hard to grasp that they are the same. Maybe I should read more crackpot blogs!
HourglassFR
I don't get what the author is trying to do here. I mean, he complains that talking about the limit of a sequence is too abstract and unfamiliar to most people, so the explanation is not satisfying. But then he name-drops the notion of an Archimedean group and introduces, with a big ol' handwave, the hyperreals to solve this very straightforward high-school math problem…
Now don't get me wrong, it is nice and good to have blogs presenting these math ideas in an easy if not rigorous way by attaching them to known concepts. Maybe that was the real intent here, the 0.99… = 1 "controversy" is just bait, and I am too out of the loop to get the new meta.
A_D_E_P_T
FWIW, there's an old Arxiv paper with this same argument:
https://arxiv.org/abs/0811.0164
It feels intuitively correct is what I'll say in its favor.
neeeeeeal
This is why I love HN. One post about advanced SQL ACID concepts, the next about mathematics, yet another about history.
What a community.
quchen
It baffles me how there are still blogposts with a serious attitude about this topic. It’s akin to discussing possible loopholes of how homeopathy might be medicinally helpful beyond placebo, again and again.
Why are hyperreals even mentioned? This post is not about hyperreals or non-standard math, it’s about standard math, very basic one at that, and then comes along with »well under these circumstances the statement is correct« – well no, absolutely not, these aren’t the circumstances the question was posed under.
We don’t see posts saying »1+2 = 1 because well acktchually if we think modulo 2«, what’s with this 0.9… thing then?
tsimionescu
I think it's worse than this. Even with hyperreals, 0.999... = 1, I believe, since they have to obey all laws of arithmetic that are true for the reals. At the very least, 3 × 0.333... = 1, and not 0.999... even for the hyperreals.
qayxc
IMHO the confusion arises, because the author failed to recognise that N cannot be a natural number if they go down the nonstandard analysis path. N would have to be elevated to a hyperinteger as well, which would eliminate the infinitesimal they end up with.
Tistron
You're saying that 0.999...=1, and simultaneously you are saying that 3 × 0.333... = 1 and not 0.999...
What? How can it be that a=b and a≠c when b=c?
tsimionescu
I'm saying that, in the hyperreals as well as the reals, I am 100% certain that 3 × 0.33... = 1. I am not as sure that 0.999... = 1 with the hyperreals, BUT, if it's true as the author claims that 0.99... ≠ 1 in the hyperreals, then it must follow that 3 × 0.33... ≠ 0.99... in the hyperreals.
LiKao
I still think that the distinction is very important. With standard math (e.g. real numbers) we obviously have 0.9999... = 1 and this is actually very easy to prove using the assumptions that you are taught during high school math.
However, in higher math you are taught that all this is just based on certain assumptions and it is even possible to let go of these assumptions and replace them with different assumptions.
I think it is important to be clear about the assumptions one is making, and it is also important to have a common set of standard assumptions. Like high school math, which has its standard assumptions. But it is just as possible to make different assumptions and still be correct.
This kind of thinking has very important applications. We are all taught the angle sum in a triangle is 180 degrees. But again this is assuming (default assumption) euclidean geometry. And while this is sensible, because it makes things easy in day to day life, we find that euclidean geometry almost never applies in real life, it is just a good approximation. The surface of the earth, which requires a lot of geometry only follows this assumption approximately, and even space doesn't (theory of relativity). If we would have never challenged this assumption, then we would have never gotten to the point where we could have GPS.
It is easy to assume that someone is wrong because they got a different result. But it is much harder to put yourself into someone's shoes and figure out if their result is really wrong (i.e. it may contradict their own assumptions or be a non-sequitur) or if they are just using different assumptions. And to figure out what these assumptions are and what they entail.
For this assumption: yes, you can construct systems where 0.9999... != 1, but then you also must use 1/3 != 0.33333... or you will end up contradicting yourself. In fact, when you assume 1 = 0.999999... + eps, then you most likely also must use 1/3 = 0.33333... + eps/3 to avoid contradicting yourself (I haven't proven the resulting axiom system is free of contradiction; this is left as an exercise to the reader).
dominicrose
I think rational thinking just doesn't work when it comes to infinity math. I'd say the same thing about probabilities.
ps: based on the title I thought this would be about IEEE 754 floats.
tsimionescu
The way I was taught decimals in school (in Romania) always made 0.99... seem like an absurdity to me: we were always taught that fractions are the "real" representation of rational numbers, and decimal notation is just a shorthand. Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions. So, for example, if a test asked you to calculate 2 × 0.2222... [which we notated as 2 × 0,(2)], then the right solution was to expand it: 2 × 0,(2) = 2 × 2/9 = 4/9 = 0,(4).
Once you're taught that this is how the numbers work, it's easy(ish) to accept that 0.999... is just a notational trick. At the very least, you're "immune" to certain legit-looking operations, like multiplying 0.999... by 10 and subtracting, instead of converting everything to fractions first. So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.