
Normalizing Ratings

50 comments · May 2, 2025

nlh

Similarly - one of my biggest complaints about almost every rating system in production is how just absolutely lazy they are. And by that, I mean everyone seems to think "the object's collective rating is an average of all the individual ratings" is good enough. It's not.

Take any given Yelp / Google / Amazon page and you'll see some distribution like this:

User 1: "5 stars. Everything was great!"

User 2: "5 stars. I'd go here again!"

User 3: "1 star. The food was delicious but the waiter was so rude!!!one11!! They forgot it was my cousin's sister's mother's birthday and they didn't kiss my hand when I sat down!! I love the food here but they need to fire that one waiter!!"

Yelp: 3.7 stars average rating.

One thing I always liked about FourSquare was that they did NOT use this lazy method. Their score was actually intelligent - it checked things like how often someone would return, how much time they spent there, etc. and weighted a review accordingly.
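Roughly, that kind of engagement-weighted average might look like this (the signals and weights below are invented for illustration, not Foursquare's actual formula):

    def weighted_rating(reviews):
        # reviews: list of (stars, repeat_visits, minutes_spent).
        # Hypothetical weighting: repeat visits and dwell time boost a
        # review's influence, so a drive-by 1-star counts for less.
        total = weight_sum = 0.0
        for stars, visits, minutes in reviews:
            w = 1.0 + 0.5 * visits + minutes / 60.0
            total += w * stars
            weight_sum += w
        return total / weight_sum

    reviews = [(5, 4, 90), (5, 2, 45), (1, 0, 10)]
    print(round(weighted_rating(reviews), 2))  # ~4.45 vs. the naive 3.7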

Hizonner

I buy a lot of "technical things", and you constantly see one or two star ratings from people who either don't know what the thing is actually supposed to do, or don't know how to use it.

My favorites: A power supply got one star for not simultaneously delivering the selected limit voltage and the selected limit current into the person's random load. In other words, literally for not violating the laws of physics. An eccentric-cone flare tool got one star for the cone being off center. "Eccentric" is in the name, chum....

stevage

Or worse, a 1 star rating for a product they loved but there was a problem with delivery.

derefr

I take this not as people being dumb, but as a clear conflict of interest: people want to be able to rate the logistics provider separately from the product, but marketplaces don't want to give people the option to do that — as that would reveal that the marketplace will sometimes decide to use "the known-shitty provider" for some orders. (And make no mistake, the marketplace knows that that provider is awful!)

BeFlatXIII

Or shopping for USB charging bricks. No matter where in the quality spectrum you look, there is a constant percentage of one-star reviews for “this overheated and burned my house down.”

anon7000

Totally. I’ve noticed lots of sites moving away from comments on reviews too. For example, Amazon reviews on mobile can be “helpful” or I can report them.

Why can’t I downvote or comment on it? As a user, I just want more context.

But obviously, it’s not in Amazon’s interest to make me not want to buy something.

esperent

> for not violating the laws of physics.

I would personally frame that as a review of poor documentation. A device shouldn't expect users to know the laws of physics to understand its limitations.

Hizonner

If you don't know that particular law of physics, you have no business messing with electricity. You'll very likely damage something, and quite possibly damage someone.

We're talking about a general-purpose device meant to drive a circuit you create yourself. I'm not sure what a good analogy would be. Expecting the documentation for a saw to tell you you have to cut all four table legs the same length?

nmstoker

This feels like a stretch, all the more so given they were specifically talking about "technical things". Assumptions around documentation, reading of docs and widespread physics knowledge seem like they could all be different here.

zzo38computer

I think numeric ratings (especially when only a single number can be given, then averaged or fed into other statistics) are less useful than actually reading the reviews: you can see whether a review addresses your specific concerns, and judge any specific complaints or praise for yourself, according to your own intentions.

nlh

Agreed. And particularly with the advent of LLMs, this is something that could be done quite easily. Don't even give users the option of a numeric/star rating - just let people write a sentence or two (or 10), and have the LLM do the sentiment analysis and aggregate a score.
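As a sketch of that pipeline - the keyword scorer below is just a stand-in for the LLM call, which would return a sentiment in [-1, 1]:

    def sentiment_score(text):
        # Placeholder for the LLM/sentiment-model call.
        text = text.lower()
        pos = sum(w in text for w in ("great", "love", "delicious", "again"))
        neg = sum(w in text for w in ("rude", "awful", "terrible"))
        return max(-1.0, min(1.0, 0.5 * (pos - neg)))

    def aggregate_stars(reviews):
        # Map mean sentiment in [-1, 1] onto a 1-5 star scale.
        mean = sum(sentiment_score(r) for r in reviews) / len(reviews)
        return round(3.0 + 2.0 * mean, 1)

    # The mixed review reads as neutral instead of dragging toward 1 star.
    print(aggregate_stars(["Everything was great!",
                           "I'd go here again!",
                           "The food was delicious but the waiter was rude!"]))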

kazinator

-1 stars! They forgot it was my cousin's sister's mother's birthday, and the obnoxious waiter snarkily pointed out that my cousin's sister is just another cousin, and her mother is just my aunt.

theendisney

With averages: to have 5 stars you need a hundred 5-star ratings for every 1-star rating ((100*5 + 1)/101 ≈ 4.96, which still rounds up to 5).

If you normalized the ratings, they could change without you doing anything: a former customer may start giving good ratings elsewhere, making yours worse, or poor ones, improving yours.

Maybe the relevance of old ratings should decline.

ajmurmann

Is that actually bad? What happened is that we learned more about the customer's rating system. I might never have had Cuban food and love it the first time I try it in Miami, but then keep eating it and find that the first restaurant was actually not as good as I thought - I just really like Cuban food.

This actually somewhat goes into another pet peeve of mine with rating systems: I'd like to see ratings for how much I will like it. An extreme but simple example might be that the ratings of a vegan customer of a steak house might be very relevant to other vegans but very irrelevant to non-vegans. More subtle versions are simply about shared preferences. I'd love to see ratings normalized and correlated with other users to create a personalized rating. I think Netflix used to do stuff like this back in the day - you could request your personal predicted score via API - but now that's all hidden and I'm instead shown different covers of the same shows over and over.
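A toy version of that personalization: weight other users' mean-centered ratings by how well their history correlates with mine (made-up data; certainly not Netflix's actual method):

    from statistics import mean

    def pearson(a, b):
        # Correlation over the items both raters have rated.
        common = [k for k in a if k in b]
        if len(common) < 2:
            return 0.0
        ma, mb = mean(a[k] for k in common), mean(b[k] for k in common)
        num = sum((a[k] - ma) * (b[k] - mb) for k in common)
        da = sum((a[k] - ma) ** 2 for k in common) ** 0.5
        db = sum((b[k] - mb) ** 2 for k in common) ** 0.5
        return num / (da * db) if da and db else 0.0

    def predict(me, others, item):
        # My baseline, shifted by like-minded users' opinion of the item.
        num = den = 0.0
        for other in others:
            if item in other:
                w = pearson(me, other)
                num += w * (other[item] - mean(other.values()))
                den += abs(w)
        return mean(me.values()) + (num / den if den else 0.0)

    me = {"steakhouse": 1, "vegan_cafe": 5}
    others = [{"steakhouse": 1, "vegan_cafe": 5, "salad_bar": 5},
              {"steakhouse": 5, "vegan_cafe": 1, "salad_bar": 2}]
    print(predict(me, others, "salad_bar"))  # 4.0: weighted toward the like-minded rater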

kayson

The normalization doesn't have to be "live". You could apply the factor at time of rating and then not change it.

theendisney

Then you could let everyone start with 100 one-star ratings. When they rate their first thing, it counts as 1/101 of a vote, and if that first rating is one star, it will be their highest ever.
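One reading of that seeding, as a Bayesian-style prior (a sketch with these numbers, not anything a production system actually does):

    def seeded_average(ratings, prior_n=100, prior_stars=1.0):
        # Each user starts with prior_n phantom one-star votes, so early
        # real ratings move their personal average only slightly.
        total = prior_n * prior_stars + sum(ratings)
        return total / (prior_n + len(ratings))

    print(seeded_average([]))   # 1.0 before any real votes
    print(seeded_average([5]))  # (100*1 + 5) / 101 ~= 1.04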

Alternatively you could apply the same rating to the customer and display it next to their user name along with their own review counter.

What also seems a great option is to simply add up all the stars :) Then the grumpy people won't have to do anything.

tibbar

One of my favorite algorithms for this is Expectation Maximization [0].

You would start by estimating each driver's rating as the average of their ratings, then estimate the bias of each rider by comparing the average rating they give to the estimated scores of their drivers. Then you repeat the process iteratively until both quantities (driver rating and rider bias) converge.

[0] https://en.wikipedia.org/wiki/Expectation%E2%80%93maximizati...
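A minimal sketch of that loop on toy data (plain alternating updates standing in for full EM):

    def em_ratings(ratings, iters=20):
        # ratings: list of (rider, driver, stars)
        riders = {r for r, _, _ in ratings}
        drivers = {d for _, d, _ in ratings}
        bias = {r: 0.0 for r in riders}
        score = {d: 3.0 for d in drivers}
        for _ in range(iters):
            # Driver score: mean of that driver's ratings, rider bias removed.
            for d in drivers:
                xs = [s - bias[r] for r, d2, s in ratings if d2 == d]
                score[d] = sum(xs) / len(xs)
            # Rider bias: mean gap between what they give and the estimates.
            for r in riders:
                xs = [s - score[d] for r2, d, s in ratings if r2 == r]
                bias[r] = sum(xs) / len(xs)
            # Re-center biases to sum to zero; the split is otherwise only
            # identified up to a shared offset.
            shift = sum(bias.values()) / len(bias)
            bias = {r: b - shift for r, b in bias.items()}
        return score, bias

    data = [("harsh", "a", 3), ("harsh", "b", 4),
            ("kind", "a", 5), ("kind", "b", 5)]
    print(em_ratings(data))  # converges: a=4.0, b=4.5; harsh=-0.75, kind=+0.75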

stevage

I like rating systems from -2 to +2 for this reason.

The big rating problem I have is with sites like boardgamegeek where ratings are treated by different people as either an objective rating of how good the game is within its category, or subjectively how much they like (or approve of) the game. They're two very different things and it makes the ratings much less useful than they could be.

They also suffer a similar problem in that most games score 7 out of 10. 8 is exceptional, 6 is bad, and 5 is disastrous.

homeonthemtn

I'd rather we just did a 3-point rating:

1. Bad
2. Fine
3. Great

2 and 4 are irrelevant and/or a wild guess or user defined/specific.

Most of the time our rating systems devolve into roughly this state anyways.

E.g.

5 is excellent, 4.x is fine, <4 is problematic.

And then there's a sub domain of the area between 4 and 5 where a 4.1 is questionable, 4.5 is fine and 4.7+ is excellent

In the end, it's just 3 parts nested within 3 parts nested within 3 parts nested within....

Let's just do 3 stars (no decimal) and call it a day

Retric

All rating systems are relative to other ratings on the platform. So it doesn’t matter if you dumb things down or not.

The trick is collecting enough ratings to average out the underlying issues, and keeping context - i.e. you want rankings relative to the area, but also on some kind of absolute scale, and also relative to the price point, etc.

homeonthemtn

I'd argue that a 3-star system makes it easier to average or otherwise compare, vs. a 5- or (for whatever insane reason) 10-based system.

Retric

The fewer choices you have, the more random noise you get from rounding.

A reviewer might round up a 7/10 to a 3 because it's better than average, while someone else might round down an 8/10 because it's not at that top tier. Both systems are equally useful with 1 or 10,000 reviews, but I'm not convinced they are equivalent with, say, 10 reviews.
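A quick simulation of that rounding noise (parameters invented for illustration):

    import random

    def trial(points, n=10, true=0.72, noise=0.1):
        # n reviewers perceive `true` quality plus personal noise, then
        # round to the nearest notch of a points-level scale on [0, 1].
        votes = []
        for _ in range(n):
            perceived = min(1.0, max(0.0, random.gauss(true, noise)))
            votes.append(round(perceived * (points - 1)) / (points - 1))
        return abs(sum(votes) / n - true)

    random.seed(1)
    for points in (3, 10):
        err = sum(trial(points) for _ in range(2000)) / 2000
        print(points, "levels -> mean error in the average:", round(err, 3))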

Also, most restaurants that stick around are pretty good but you get some amazingly bad restaurants that soon fail. It’s worth separating overpriced from stay the fuck away.

Retr0id

> I'm genuinely mystified why it's not applied anywhere I can see.

I wonder if companies are afraid of being accused of "cooking the books", especially in contexts where the individual ratings are visible.

If I saw a product with 3x 5-star reviews and 1x 3-star review, I'd be suspicious if the overall rating was still a perfect 5 stars.

mzmzmzm

A problem with accounting for "above average" service is sometimes I don't want it. If a driver goes above and beyond, offering a water bottle or something else exceptional, occasionally I would rather be left alone during a quiet, impersonal ride.

parrit

For Uber you don't need a rating at all. The tracking system knows if they were late, if they took a good route, and if they dropped you off at the wrong location.

Anything really bad can be dealt with via a complaint system.

Anything exceptional could be asked by a free text field when giving a tip.

Who is going to read all those text fields and classify them? AI!

healsdata

Counterpoint -- Lyft attempted to charge me a late fee when a driver went to the wrong spot in a parking garage.

parrit

Star rating doesn't help here

rossdavidh

I have often had the same thought, and I have to believe the reason is that the companies' bottom line is not impacted the tiniest bit by their rating systems. It wouldn't be that hard to do better, but anything that takes a non-zero amount of attention and effort to improve has to compete with all those other priorities. As far as I can tell, they just don't care at all about how useful their rating system is.

Alternatively, there might be some hidden reason why a broken rating system is better than a good one, but if so I don't know it.

adrmtu

Isn't this basically a de-biasing problem? Treat each rider’s ratings as a random variable with its own mean μᵤ and variance σᵤ², then normalize. Basically compute z = (r – μᵤ)/σᵤ, then remap z back onto a 1–5 scale so “normal” always centers around ~3. You could also add a time decay to weight recent rides higher to adapt when someone’s rating habits drift.
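As a sketch (the half-life and the sigma floor below are arbitrary choices, not anything from a live system):

    import math

    def make_normalizer(history, half_life_days=180.0):
        # history: list of (days_ago, stars) for one rater.
        # Exponential time decay: recent ratings weigh more.
        w = [math.exp(-math.log(2) * d / half_life_days) for d, _ in history]
        mu = sum(wi * s for wi, (_, s) in zip(w, history)) / sum(w)
        var = sum(wi * (s - mu) ** 2 for wi, (_, s) in zip(w, history)) / sum(w)
        sigma = max(var ** 0.5, 0.5)  # floor so one-note raters don't blow up
        def rescale(stars):
            z = (stars - mu) / sigma            # z = (r - mu_u) / sigma_u
            return min(5.0, max(1.0, 3.0 + z))  # "normal" recenters to ~3
        return rescale

    # A habitual 5-star rater: their rare 4 reads as a real complaint.
    rescale = make_normalizer([(10, 5), (40, 5), (90, 5), (200, 4)])
    print(round(rescale(4), 2), round(rescale(5), 2))  # ~1.31 ~3.31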

Has anyone seen a live system (Uber, Goodreads, etc.) implement per-user z-score normalization?

nmstoker

Does anyone else get that survey rating effect where you start off thinking the company is reasonable, so you give a 4 or 5, then the next page asks why you chose that, and as you think it through you realise more and more shitty things they did, so you go back and bring them down to a 2 or 3? Effectively, by asking in detail, they undermine your perception of them.

enaaem

Check the bad reviews. If the 1-2 star reviews are mostly about the rude owner, then you know the food is good.