Derivation and Intuition behind Poisson distribution
20 comments · May 1, 2025
quirino
I really like the Poisson distribution. A very interesting question I've come across once is:
A given event happens at a rate of one every 10 minutes on average. We can see that:
- The expected length of the interval between events is 10 minutes.
- At a random moment in time, the expected wait until the next event is 10 minutes.
- At the same moment, the expected time passed since the last event is also 10 minutes.
But then we would expect the interval between two consecutive events to be 10 + 10 = 20 minutes long. Yet we know intervals are 10 minutes long on average. What happened here?
The key is that by picking a random moment in time, you're more likely to fall into a bigger interval. Sampling a random point in time, the average interval you fall into really is 20 minutes long; sampling a random interval, it is 10.
Apparently this is called the Waiting Time Paradox.
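A quick simulation makes the paradox concrete. A minimal sketch in Python (the rate matches the 10-minute example; the horizon and sample counts are arbitrary choices of mine):

```python
import bisect
import random

lam = 1 / 10  # one event every 10 minutes on average
horizon = 1_000_000.0

# Simulate event times of a Poisson process on [0, horizon).
events, t = [0.0], 0.0
while t < horizon:
    t += random.expovariate(lam)
    events.append(t)

# Mean length of a randomly chosen interval between events:
gaps = [b - a for a, b in zip(events, events[1:])]
print(sum(gaps) / len(gaps))  # ~ 10

# Mean length of the interval that contains a random *moment* in time:
picked = []
for _ in range(100_000):
    m = random.uniform(0.0, events[-2])  # stay strictly inside the process
    i = bisect.bisect_right(events, m)
    picked.append(events[i] - events[i - 1])
print(sum(picked) / len(picked))  # ~ 20: the waiting-time paradox
```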
fc417fc802
> What happened here?
You went astray when you declared the expected wait and the expected time passed.
Draw a number line. Mark it at intervals of 10. Uniformly randomly select a point on that line. The expected average wait and time passed (i.e. the forward and reverse directions) are both 5, not 10. The range is 0 to 10.
When you randomize the event occurrences but keep the interval as an average, you change the range maximum and the overall distribution across the range, but not the expected average values.
yorwba
When you randomize the event occurrences, you create intervals that are both shorter and longer than average. A random point is then more likely to land in a longer interval, so the expected length of the interval containing a random point is greater than the expected length of a random interval.
To see this, consider just two intervals of length x and 2-x, i.e. 1 on average. A random point is in the first interval x/2 of the time and in the second one the other 1-x/2 of the time, so the expected length of the interval containing a random point is x/2 * x + (1-x/2) * (2-x) = x² - 2x + 2, which is 1 for x = 1 but larger everywhere else, reaching 2 for x = 0 or 2.
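A minimal numerical check of that algebra (Python; the function name and trial count are my own):

```python
import random

def expected_containing_length(x, trials=100_000):
    """Expected length of whichever of the two intervals (lengths x and 2 - x)
    contains a uniformly random point on [0, 2)."""
    total = 0.0
    for _ in range(trials):
        p = random.uniform(0.0, 2.0)
        total += x if p < x else 2.0 - x
    return total / trials

for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(x, round(expected_containing_length(x), 3), x**2 - 2*x + 2)
```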
pfedak
If it wasn't clear, their statements are all true when the events follow a Poisson process, i.e. have exponentially distributed waiting times.
jwarden
The way I understand it is that with a Poisson process, at every small moment in time there’s a small chance of the event happening. This leads to, on average, lambda events occurring during every (larger) unit of time.
But this process has no “memory”, so no matter how much time has passed since the last event, the number of events expected during the next unit of time is still lambda.
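A minimal sketch of that memorylessness, assuming exponentially distributed waiting times (the rate and sample size are arbitrary):

```python
import random

lam = 1 / 10  # one event per 10 minutes on average, as in the example above
waits = [random.expovariate(lam) for _ in range(200_000)]

# Unconditional mean wait until the next event:
print(sum(waits) / len(waits))  # ~ 10

# Mean *remaining* wait, given that 5 minutes have already passed eventless:
remaining = [w - 5 for w in waits if w > 5]
print(sum(remaining) / len(remaining))  # still ~ 10: no "memory"
```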
meatmanek
Poisson distributions are sort of like the normal distribution for queuing theory for two main reasons:
1. They're often a pretty good approximation for how web requests (or whatever task your queuing system deals with) arrive into your system, as long as your traffic is predominantly driven by many users who each act independently. (If your traffic is mostly coming from a bot scraping your site that sends exactly N requests per second, or holds exactly K connections open at a time, the Poisson distribution won't hold.) Sort of like how the normal distribution shows up any time you sum up enough random variables (central limit theorem), the Poisson arrival process shows up whenever you superimpose enough uncorrelated arrival processes together (see the sketch after this list): https://en.wikipedia.org/wiki/Palm%E2%80%93Khintchine_theore...
2. They make the math tractable -- you can come up with closed-form solutions for e.g. the probability distribution of the number of users in the system, the average waiting time, average number of users queuing, etc: https://en.wikipedia.org/wiki/M/M/c_queue#Stationary_analysi... https://en.wikipedia.org/wiki/Erlang_(unit)#Erlang_B_formula
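The superposition effect in point 1 shows up even in a toy simulation. A sketch, assuming strictly periodic (hence individually very non-Poisson) sources with random phases; all numbers are arbitrary choices for illustration:

```python
import random
from collections import Counter

# Superimpose many independent, decidedly non-Poisson sources: each sends
# exactly one request per `period`, at a random phase.
n_sources, period, horizon = 200, 50.0, 10_000

arrivals = []
for _ in range(n_sources):
    t = random.uniform(0.0, period)
    while t < horizon:
        arrivals.append(t)
        t += period

# Arrival counts per unit of time. Combined rate = n_sources / period = 4.
counts = Counter(int(t) for t in arrivals)
mean = sum(counts.values()) / horizon
var = sum((counts.get(i, 0) - mean) ** 2 for i in range(horizon)) / horizon
print(mean, var)  # mean ≈ variance ≈ 4, a Poisson signature
```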
PessimalDecimal
There is another extremely important way in which they are like the normal distribution: both are maximum entropy distributions, i.e. each is the "most generic" within their respective families of distributions.
[1] https://en.wikipedia.org/wiki/Poisson_distribution#Maximum_e...
[2] https://en.wikipedia.org/wiki/Normal_distribution#Maximum_en...
emmelaich
Useful for understanding load on machines. One case I had was -- N machines randomly updating a central database. The database can only handle M queries in one second. What's the chance of exceeding M?
Also related to the Birthday Problem and hash-bucket collisions, though with those you're typically only interested in keeping collisions low. With some queues (e.g. the database above) you might be interested in when collisions hit a high number.
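For the database question above, if the machines' combined updates form a roughly Poisson stream (per the Palm-Khintchine comment), the overflow chance is a Poisson tail probability. A sketch with made-up numbers:

```python
import math

def p_exceed(lam, m):
    """P(Poisson(lam) > m): chance of more than m arrivals in one second,
    assuming the combined update stream is Poisson with rate lam per second."""
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(m + 1))
    return 1.0 - cdf

# Hypothetical numbers: 100 machines each updating once per 20 s on average
# gives lam = 5/s; say the database tops out at m = 10 queries/s.
print(p_exceed(5.0, 10))  # ~ 0.014
```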
Rant423
An application of the Poisson distribution (1946)
https://garcialab.berkeley.edu/courses/papers/Clarke1946.pdf
hammock
Poisson, Pareto/power-law/Zipf, and normal distributions are really important. The top 3 for me. (What am I missing?) And often misused (most often the normal). It’s really good to know which to use when.
FilosofumRex
It's surprising that so few people bother to use non-parametric probability distributions. With today's computational resources, there is no need for parametric closed-form models (maybe with the exception of the normal, for historical reasons); each dataset contains its own distribution.
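One concrete version of that idea is the bootstrap: resample from the dataset's own empirical distribution rather than from a fitted closed form. A minimal sketch with made-up data:

```python
import random

data = [3.1, 0.4, 7.9, 2.2, 5.5, 1.8, 9.0, 4.4]  # hypothetical observations

# The non-parametric route: treat the data itself as the distribution
# and resample from it.
def bootstrap_means(data, reps=10_000):
    n = len(data)
    return [sum(random.choice(data) for _ in range(n)) / n for _ in range(reps)]

means = bootstrap_means(data)
mu = sum(means) / len(means)
sd = (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5
print(mu, sd)  # mean and its sampling variability, no parametric model needed
```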
klysm
It’s easier to do MCMC when the distributions at hand have nice analytic properties, so you can take derivatives etc. You should also have a very good understanding of the standard distributions and how they all relate to each other.
klysm
The normal is overused, though sometimes for sensible reasons. The CLT is really handy when you have to consider sums.
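A quick illustration of that (arbitrary numbers): sums of 100 uniform draws already look very normal:

```python
import random

# Sums of many independent draws look normal (CLT), whatever the source
# distribution. Here: sums of 100 uniforms.
sums = [sum(random.random() for _ in range(100)) for _ in range(50_000)]
mu = sum(sums) / len(sums)                          # ~ 50
var = sum((s - mu) ** 2 for s in sums) / len(sums)  # ~ 100/12 ≈ 8.33
sd = var ** 0.5
within_1sd = sum(abs(s - mu) < sd for s in sums) / len(sums)
print(mu, var, within_1sd)  # within_1sd ~ 0.68, matching a normal
```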
jwarden
> What am I missing?
Beta
joe_the_user
I can understand a message that JavaScript needs to be enabled for your ** site.
But permanently redirecting so I can't see this even after I enable JavaScript is just uncool, and might not endear one on a site like HN, where lots of folks disable JS initially.
Edit: and anonymizing, disabling, and reloading... It's just text with formatted math. Sooo many other solutions to this, jeesh guys.
digger495
Steve, le
DAGdug
What’s special about this treatment? It’s the 101 part of a 101 probability course.