Quantum mechanics provide truly random numbers on demand
44 comments
June 14, 2025 · btilly
JKCalhoun
For simple electronic circuits, reverse-biasing a transistor past its breakdown voltage will give you "noise" — sampling it with an ADC yields random values.
I don't know how statistically random it is — I suspect it's quantum in nature, though, since we're dealing with transistors.
(EDIT: checked with ChatGPT, has a sense of humor: "Be careful not to exceed the maximum reverse voltage ratings, or you’ll get more “magic smoke” than white noise.")
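A minimal sketch of whitening that kind of avalanche noise in software (the biased bit stream here is simulated; on real hardware you'd take the least significant bit of each ADC sample). The von Neumann extractor is one standard debiasing step:

    import random

    def von_neumann(bits):
        """Debias independent but biased bits: 01 -> 0, 10 -> 1, 00/11 -> discard."""
        it = iter(bits)
        for a, b in zip(it, it):       # consume the stream pairwise
            if a != b:
                yield a

    # Stand-in for LSBs sampled off the noisy junction (simulated 70/30-biased bits)
    raw = (1 if random.random() < 0.7 else 0 for _ in range(100_000))
    unbiased = list(von_neumann(raw))  # unbiased, provided the input bits are independent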
sandworm101
Most any sensor attached to a real-world physical system can produce sufficient randomness. Put a vibration sensor on my clothes dryer, plug the output into an MD5 hash, and voilà. Or set up a webcam aimed at a tree blowing in a breeze. Or pour out some M&Ms onto a table and photograph that. We don't need to go quantum when sufficiently random systems like turbulence exist in the macro world.
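A sketch of that extract-by-hashing idea, using SHA-256 in place of MD5 (MD5 works for this too, but SHA-256 is the safer modern default); the frame bytes are a hypothetical stand-in for real camera or sensor output:

    import hashlib

    def seed_from_sensor(frames):
        """Condense noisy sensor/camera bytes into a 256-bit seed."""
        h = hashlib.sha256()
        for frame in frames:    # raw bytes from a webcam, vibration sensor, etc.
            h.update(frame)
        return h.digest()       # close to uniform if the input carried >= 256 bits of entropy

    seed = seed_from_sensor([b"...frame bytes of the tree in the breeze..."])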
jasperry
How is random.org publicly verifiable? As far as I know, there's no way to prove that a certain set of numbers was produced by random.org at a certain time.
The public verifiability is the real "quantum" advance of this research; probably the title should say that. Of course, it's true that when you don't need public verifiability, your OS's entropy pool + PRNG is good enough for any currently known scenario.
btilly
The purpose of https://www.random.org/draws/ (which is unfortunately currently down) is to do exactly that.
Also, it is possible for any group to agree that they will all sign messages at a given time about a given source and stick them on a blockchain. That is proof that this group all agreed on what was displayed, at that time — a kind of public verification of what was there.
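A minimal sketch of that attestation idea, assuming the Python cryptography package; the source, value, and key handling here are illustrative, not any real service's protocol:

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Each group member signs a statement about what the source displayed and when.
    statement = {"source": "https://www.random.org/draws/",
                 "value": "17 23 42", "time": int(time.time())}
    message = json.dumps(statement, sort_keys=True).encode()

    key = Ed25519PrivateKey.generate()   # in practice, each member's long-lived key
    signature = key.sign(message)

    # Publishing (message, signature, public key) on a chain timestamps the claim;
    # anyone can later verify it (raises InvalidSignature on mismatch):
    key.public_key().verify(signature, message)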
zokier
NIST has operated a public randomness beacon since at least 2013, and the League of Entropy has operated a distributed beacon since 2019.
Public randomness does have uses in cryptography; crypto is not only about secret keys.
btilly
Can you illuminate what uses public randomness has in cryptography?
If I think about it, I can come up with some. But they seem pretty niche relative to secret keys.
dist-epoch
Some crypto algorithms need some random data in their construction. Typically "nothing up my sleeve" numbers are used: digits of pi, sqrt(2), ...
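For example, SHA-256's initial hash values are built exactly this way, from the first 32 fractional bits of the square roots of the first primes; a sketch of the construction:

    from math import isqrt

    def nums_constant(n: int, bits: int = 32) -> int:
        """First `bits` fractional bits of sqrt(n), a "nothing up my sleeve" number."""
        root = isqrt(n << (2 * bits))    # sqrt(n) with `bits` fractional bits
        return root & ((1 << bits) - 1)  # keep only the fractional part

    # SHA-256's first initial value is the fractional part of sqrt(2):
    assert hex(nums_constant(2)) == "0x6a09e667"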
lacoolj
"...is something that nothing in the universe can predict in advance"
The universe is a swirling vortex of entropy. In theory, with enough data, you can predict anything, at any point in time. There is no such thing as "truly random"
perching_aix
How would we know ontic randomness when we see it? I can understand how we would know epistemic randomness, but not ontic.
mjburgess
Ontic randomness, which may be better called physical indeterminism, is given as the best explanation for epistemic randomness for which no conditional variable exists (in the best theories of physics, etc.) to remove the epistemic randomness.
So, for a given epistemically random Y with 0 < P(Y) < 1: Y is ontically random iff there is no X s.t. P(Y|X) = 1 or P(Y|¬X) = 1, where dim(X) may be arbitrarily large.
The existence of X is not epistemic, and is decided by the best interpretation of the best available science.
Bell's theorem limits the conditions on `X` so that either (X does not exist) or (X is non-local).
If you take the former branch then ontic-randomness falls out "for free" from highly specific cases of epistemic; if you take the latter, then there is no case in all of physics where one implies the other.
Personally, I lean more towards saying there is no case of ontic randomness, only "ontic vagueness" or measurement-indeterminacy -- which gives rise to a necessary kind of epistemic randomness due to measurement.
So that P(Y|X) = 1 if X were known, but X isn't in principle knowable. This is a bit of a hybrid position which allows you to have the benefits of both: reality isn't random, but it necessarily must appear so because P(X|measure(X)) is necessarily not 1. (However this does require X to be non-local still).
This arises, in my view, because I think there are computability constraints on the epistemic P(Y|X, measure(X)), i.e., there has to be some f: X -> measure(X) which is computable -- but reality isn't computable, i.e., functions of the form f : Nat -> Nat do not describe reality.
This is not an issue for most macroscopic systems, because they have part-whole reductions that make "effectively computable" descriptions fine. But in systems where these part-whole reductions don't work, including QM, the non-computability of reality creates a necessary epistemic randomness in any possible description of it.
abdullahkhalids
Physicists have thought long and hard about this. This is very far outside my area, but here is a ten-year-old review paper that discusses some of these issues [1].
perching_aix
89 pages is intimidating, but I guess if this bothers me so much, I may as well dive in. Thanks, will take a peek in a bit.
dr_dshiv
Report back please!
ang_cire
> When researchers measure an individual particle, the outcome is random, but the properties of the pair are more correlated than classical physics allows, enabling researchers to verify the randomness.
Is this not possibly just random-seeming to us, because we do not know or cannot measure all the variables?
> The process starts by generating a pair of entangled photons inside a special nonlinear crystal. The photons travel via optical fiber to separate labs at opposite ends of the hall.
> Once the photons reach the labs, their polarizations are measured. The outcomes of these measurements are truly random.
I understand that for our purposes (e.g. for encryption) this is obviously safely random, but from a pure science perspective, have we actually proven that the wave function collapsing during measurement is "truly random"?
How could we possibly assert that we've accounted for all variables that could be affecting this? There could be variables at play that we don't even know exist, when it comes to quantum mechanics, no?
A coin toss is completely deterministic if you can account for wind, air resistance, momentum, starting state, mass, etc. But if you don't know that air resistance or wind exists, you could easily conclude it's random.
I ask this as a layman, and I'm really interested if anyone has insight into this.
613style
Bell's Theorem (1964) describes an inequality that should hold if quantum mechanics' randomness can be explained by certain types of hidden variables. In the time since, we've repeatedly observed that inequality violated in labs, leading most to presume that the normal types of hidden variables you would intuit don't exist. There are some esoteric loopholes that remain possibilities, but for now the position that matches our data the best is that there are not hidden variables and quantum mechanics is fundamentally probabilistic.
ang_cire
So to make sure I am understanding correctly: the normal distribution of the outcomes is itself evidence that other hidden factors aren't at play, because those factors would produce a less normal distribution?
I.e. if coin toss results skew towards heads, you can conclude some factor is biasing it that way, therefore if the results are (over the course of many tests) 'even', you can conclude the absence of biasing factors?
Workaccount2
Basically they get to measure a particle in superposition twice, by using an entangled pair. Two detectors each measure one of the particle's 3 possible spin directions, which are known to be identical (usually you only get to make 1 measurement, but now we can essentially measure 2 directions). We then compare in a chart how the different spin directions agree or disagree with each other.
15% of the time they get combination result A, 15% of the time they get combination result B. Logically we would expect a result of A or B 30% of the time, and combination result C 70% of the time (there are only 3 combinatorial output possibilities: A, B, C).
But when we set the detectors to rule out result C (so they must be either A or B), we get a result of 50%.
So it seems like the particle is able to change its result based on how you deduce it. A local hidden variable would almost certainly be static regardless of how you determine it.
This is simplified and dumbified because I am no expert, but that is the gist of it.
nathan_compton
Not really. The shape of the distribution of whatever random numbers you are getting is just a result of the physical situation and has nothing to do with the question posed by Bell.
Let me take a crack at this. Quantum mechanics works like this: we write down an expression for the energy of a system using position and momentum (the precise nature of what constitutes a momentum is a little abstract, but the physics 101 intuition of "something that characterizes how a position is changing" is OK). From this definition we develop both a way of describing a wave function and a way of time-evolving it. The wave function encodes everything we could learn about the physical system if we were to make a measurement, and thus is necessarily associated with a probability distribution from which the universe appears to sample when we make a measurement.
It is totally reasonable to ask the question "maybe that probability distribution indicates that we don't know everything about the system in question and thus, were that the case, and we had the extra theory and extra information we could predict the outcome of measurements, not just their distribution."
Totally reasonable idea. But quantum mechanics has certain features that are surprising if we assume that is true (that there are so-called hidden variables). In quantum mechanical systems (and in reality), when we make a measurement, all subsequent measurements of the system agree with the initial measurement (this is wave function collapse: before measurement we do not know what the outcome will be, but after measurement the wave function indicates just one state, which subsequent measurements necessarily produce). However, measurements are local (they happen at one point in spacetime), but in quantum mechanics this update of the wave function from the pre- to post-measurement state happens all at once for the entire quantum mechanical system, no matter its physical extent.
In the Bell experiment we contrive to produce a system which is extended in space (two particles separated by a large distance) but for which the results of measurement on the two particles will be correlated. So if Alice measures spin up, then the theory predicts (and we see), that Bob will measure spin down.
The question is: if Alice measures spin up at 10am on Earth and then Bob measures his particle at 10:01am Earth time on Pluto, do they still get results that agree, even though the wave function would have to collapse faster than the speed of light for the two measurements to agree (since it takes much longer than 1 minute for light to travel from Earth to Pluto)?
This turns out to be a measurable fact of reality: Alice and Bob always get concordant measurements no matter when the measurements occur or who does them first (in fact, because of special relativity, there appears to be no fact of the matter about who measures first in this situation: which observer measures first depends on the reference frame).
Ok, so we love special relativity and we want to "fix" this problem. We wish to eliminate the idea that the wave function collapse happens faster than the speed of light (indeed, we'd actually just like to have an account of reality where the wave function collapse can be totally dispensed with, because of the issue above) so we instead imagine that when particle B goes flying off to Pluto and A goes flying off to earth for measurement they each carry a little bit of hidden information to the effect of "when you are measured, give this result."
That is to say that we want to resolve the measurement problem by eliminating the measurement's causal role and just pre-determine locally which result will occur for both particles.
This would work for a simple classical system like a coin. Imagine I am on Mars and I flip a coin, then neatly cut the coin in half along its thin edge. I mail one side to Earth and the other to Pluto. No matter who opens their envelope first, and no matter when, if Alice gets the heads side, Bob will get the tails side.
This simple case fails to capture the quantum mechanical system, because Alice and Bob have a choice of not just when to measure, but how (which orientation to use on their detector). So here is the rub: the correlation between Alice's and Bob's measurements depends on the relative orientation of their detectors, and even though both detectors measure a random result, that correlation is correct even if Alice and Bob just randomly choose orientations for their measurements. That means quantum mechanics describes the system correctly even in cases where the results would have had to be totally determined, for all possible pairs of measurement settings, ahead of time at the point the particles were separated.
Assuming that Alice and Bob are actually free to choose a random measuring orientation, there is no way to pre-decide the results of all pairs of measurements without knowing, at the time the particles are created, which way Alice and Bob will orient their detectors. That shows up in the Bell inequality, which shows that certain correlations between Alice's and Bob's detectors are impossible in a purely classical universe.
Note that in any given single experiment, both Alice's and Bob's results are totally random. QM only governs the correlation between the measurements, so neither Alice nor Bob can communicate any information to each other.
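A sketch of the arithmetic behind the standard CHSH form of that inequality: for polarization-entangled photons, quantum mechanics predicts the correlation E = cos 2(a - b) between detectors at angles a and b, and the usual four angle settings push the combined statistic past the bound of 2 that any local hidden-variable model must obey:

    from math import cos, pi

    def E(a: float, b: float) -> float:
        """QM-predicted correlation for polarization measurements at angles a, b."""
        return cos(2 * (a - b))

    # Standard CHSH angle choices (radians)
    a1, a2, b1, b2 = 0, pi / 4, pi / 8, 3 * pi / 8
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(S, "vs classical bound", 2)   # ~2.828 = 2*sqrt(2) > 2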
goatlover
Or the Many Worlds Interpretation is correct. It is deterministic, we just don't know which branch we're in.
vonneumannstan
>I ask this as a layman, and I'm really interested if anyone has insight into this.
Another comment basically answered this, but you are touching on hidden-variable theories in QM: the idea that there could be missing variables we can't currently measure that explain the seeming randomness of QM. Various tests have been run, and most physicists agree that hidden variables are very unlikely at this point.
SAI_Peregrinus
Local hidden variables are impossible. Non-local hidden variables are perfectly possible. Aesthetically displeasing, since it requires giving up on locality, but not logically impossible. Non-local interpretations of quantum mechanics give up on locality instead of giving up on hidden variables. You can't have both, but either one alone is possible.
vonneumannstan
We're getting close to Super-determinism at that point, which may in fact be correct but I don't think the poster was getting at that.
Strilanc
It could still be a pseudo random number generator behind the scenes. For example, a typical quantum circuit simulator would implement measurements by computing a probability then asking a pseudo random number generator for the outcome and then updating the state to be consistent with this outcome. Bell's theorem proves those state updates can't be local in a certain technical sense, but the program has arbitrary control over all amplitudes of the wavefunction so that's not a problem when writing the simulator code.
If the PRNG were weak, then the quantum circuit being simulated could be a series of operations that solve for the seed being used by the simulator. At which point collapses would be predictable. Also, it would become possible to do limited FTL communication. An analogy: some people built a redstone computer in Minecraft that would detonate TNT repeatedly, record the random directions objects were thrown, and solve for the PRNG's seed [1]. By solving at two times, you can determine how many calls to the PRNG had occurred, and so get a global count of various actions (like breaking a block) regardless of where they happened in the world.
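A minimal sketch of that style of seed recovery, using the LCG behind Java's java.util.Random (the generator Minecraft's Java edition uses): given two consecutive 32-bit outputs, the 16 hidden low bits of the 48-bit state can simply be brute-forced:

    A, C, M = 0x5DEECE66D, 0xB, (1 << 48) - 1   # java.util.Random's LCG constants

    def crack(out1: int, out2: int):
        """Recover the full 48-bit internal state from two consecutive next(32) outputs."""
        o1, o2 = out1 & 0xFFFFFFFF, out2 & 0xFFFFFFFF
        for low in range(1 << 16):               # brute-force the hidden low bits
            state = (o1 << 16) | low
            nxt = (state * A + C) & M
            if nxt >> 16 == o2:
                return nxt                       # all future outputs now predictable
        return None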
perching_aix
This is a difference between the ontological (as-is) and the epistemological (as-modeled). I asked pretty much the same thing; you might find some of the responses I got illuminating. [0]
stiglitz
I don’t think I’ll ever be convinced that there’s some kind of fundamental “randomness” (as in one that isn’t a measure of ignorance) in the world. Claiming its existence sounds like claiming to know what we don’t know.
EastLondonCoder
Many years ago I used to work for a company in the gambling domain. There was a story going around, from years before I joined, that hardware TRNGs were used. And one day they failed. I can't remember the details precisely, but heat was involved: the failure mode they encountered was caused by overheating, and the device repeatedly gave an endless series of ones. A switch to PRNGs was promptly introduced.
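That stuck-at failure is exactly what health tests on entropy sources are for; a sketch in the spirit of NIST SP 800-90B's repetition count test (the cutoff of 34 is illustrative; the real value is derived from the source's assessed entropy):

    def repetition_count_test(samples, cutoff=34):
        """Raise if the source emits the same value `cutoff` times in a row."""
        run, prev = 0, object()
        for s in samples:
            run = run + 1 if s == prev else 1
            prev = s
            if run >= cutoff:
                raise RuntimeError(f"entropy source stuck: {run} repeats")

    repetition_count_test([1] * 40)   # raises: the overheating failure described above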
jasperry
Thanks, this is a great story to illustrate why there's almost never any advantage to using a TRNG over a cryptographic-strength PRNG. That's also why Linux removed the blocking RNG from the kernel; there was no attack model where it gave more security.
Of course, PRNGs should still be seeded with real entropy from the outside world, but even if that fails at some point, your PRNG will still be producing effectively unpredictable numbers for a long time.
7e
With a PRNG the seed must be kept secret and non-reverse-engineerable. Isn't that a real disadvantage compared with a TRNG?
jasperry
Once a seed is fed to a PRNG, it can be deleted. But you still have a point, because the state of an OS PRNG can be saved and restored, for example when the machine sleeps, and a hacker could potentially access this to reproduce generated bits. But whenever the entropy pool is seeded with new entropy, any previous state values become useless.
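A toy sketch of that property (illustrative only, not any real kernel's construction): the state ratchets through a one-way hash, so captured output can't be run backwards, and mixing in fresh entropy severs any link to a previously captured state:

    import hashlib, os

    class HashDRBG:
        """Toy forward-secure PRNG with reseeding."""
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(b"init" + seed).digest()

        def reseed(self, entropy: bytes):
            # Fresh entropy makes any previously stolen state useless
            self.state = hashlib.sha256(self.state + entropy).digest()

        def generate(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hashlib.sha256(self.state + b"out").digest()
                self.state = hashlib.sha256(self.state + b"next").digest()  # one-way ratchet
            return out[:n]

    drbg = HashDRBG(os.urandom(32))
    token = drbg.generate(16)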
gregfjohnson
One possible definition of "random" in this context: Is there any conceivable algorithm, perhaps one that models the entire universe in all of its particulars, that predicts the next string produced by the NIST quantum beacon?
nathan_compton
Yes, but it has a variety of very unappealing physical properties. For one thing, no one has all that information in the first place, but it would also be a theory where large-scale correlations between outcomes would exist at space-like separations, which would be weird, though clearly not impossible. 't Hooft's cellular automata approach has these properties and I guess it's valid, although I don't know if it can be used to make non-trivial predictions.
goatlover
Depends on the interpretation of QM. Many Worlds and Bohmian (Pilot Wave) are deterministic, but most interpretations are not. For MWI, you'd need your universal quantum computer to calculate all the branches/worlds. There's also Superdeterminism, which means you'd have to calculate everything from the big bang.
JKCalhoun
Write an array of random values to a hard drive — terabytes of them.
Dupe the drive.
You now have a matching pair of "one-time pads" for what is, I have heard, the hardest form of encryption to decrypt. I would expect there is a business already doing this.
Someone
Used properly, encryption using one time pads produces data streams that are indistinguishable from uniformly distributed random noise and cannot be cracked (https://en.wikipedia.org/wiki/One-time_pad)
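The scheme itself is a single XOR; a minimal sketch (the pad must be truly random, as long as the message, and never reused):

    import os

    def otp(data: bytes, pad: bytes) -> bytes:
        """One-time pad: XOR against the shared pad; the same call decrypts."""
        assert len(pad) >= len(data)
        return bytes(d ^ p for d, p in zip(data, pad))

    pad = os.urandom(32)               # both parties hold identical copies
    ct = otp(b"meet at dawn", pad)     # indistinguishable from random noise
    assert otp(ct, pad) == b"meet at dawn"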
cortesoft
There aren't a ton of use cases for this that aren't met better by other cryptographic solutions.
dist-epoch
It's harder to ensure that no one messed with the drives during transport than to give a small private key to the other party.
oneshtein
> "Mathematic model of quantum world" provide truly random numbers on demand
m-watson
I'm not really sure if this is what you are getting at, but it is using physical properties associated with QM to create the random numbers, not just a mathematical model.
"This is the first random number generator service to use quantum nonlocality as a source of its numbers, and the most transparent source of random numbers to date."
pharrington
This is a press release for the University of Colorado's CURBy - CU Randomness Beacon @ https://random.colorado.edu/
jmyeet
"Random" is a really interesting concept because it's intuitive yet hard to define. It's really a definition by exclusion, that is if you can't describe something in any way then it's random by default. But how do you know you just haven't found the way to define it yet?
This is somewhat related to the idea of complexity. So if you have a sequence of "random" numbers, how do you know they're random? Take a look at a Mandelbrot Set and you wouldn't guess it's not that complex.
I really like the idea of Kolmogorov complexity [1], which is to say that the complexity of an object (including a sequence of numbers) is defined by the shortest program that can produce that result. So a sequence of numbers generated by a PRNG isn't complex, because an infinite sequence of such numbers can be reduced to the (finite) size of the program and initial seed.
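The Mandelbrot point can be made concrete: the whole picture compresses to a few lines of code, so its Kolmogorov complexity is bounded by the length of a tiny program; a sketch:

    # An intricate-looking object from a tiny program: its Kolmogorov
    # complexity is at most the length of this script.
    for y in range(24):
        row = ""
        for x in range(64):
            c = complex(x / 24 - 2, y / 12 - 1)
            z = 0
            for _ in range(30):
                z = z * z + c
                if abs(z) > 2:
                    break
            row += " " if abs(z) > 2 else "#"
        print(row)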
There are various random number generators that use quantum effects to create random numbers. One interesting implication of this is that it ends the debate about whether quantum effects can affect the "classical" or "macro" world.
kazinator
s/provide/provides/
It isn't quantum, but as far as I know https://www.random.org/ is sufficiently random for any purpose that I can think of for publicly verifiable random numbers.
(Most of the demand for random numbers, of course, comes from cryptography. In which case public verifiability of what the random thing was is the last thing that you want.)