
An election forecast that’s 50-50 is not “giving up”

bitshiftfaced

An accurate model can output a 50-50 prediction. Sure, no problem there. But there is a human bias that does tend to make 50% more likely in these cases. It is the maximum percentage you can assign to the uncomfortable possibility without it being higher than the comfortable possibility.

538 systematically magnified this kind of bias when they decided to rate polls not on their absolute error, but on how closely their bias tracked other polls' biases (https://x.com/andrei__roman/status/1854328028480115144). This down-weighted pollsters like Atlas Intel who would've otherwise improved 538's forecast.

culi

I'm not sure how to verify your comment since 538 was cut by ABC a month or 2 ago. But Nate Silver's pollster rating methodology is pretty much the same as 538's was during his tenure there and can be found here: https://www.natesilver.net/p/pollster-ratings-silver-bulleti...

It actually explicitly looks for statistical evidence of "herding" (e.g. not publishing poll results that might go against the grain) and penalizes those pollsters.

In both rating systems, polls with a long history of going against the grain and being correct, like Ann Selzer's Iowa poll, were weighted very heavily. Selzer went heavily against the grain 3 elections in a row and was almost completely correct the first 2 times. This year she was off by a massive margin (ultimately costing her her career). Polls that go heavily against the grain but DON'T have a polling history simply aren't weighted heavily in general.

bitshiftfaced

> I'm not sure how to verify your comment

Here's how 538 explains how they factor in bias into their grading:

> Think about this another way. If most polls in a race overestimate the Democratic candidate by 10 points in a given election, but Pollster C's surveys overestimate Republicans by 5, there may be something off about the way Pollster C does its polls even if its accuracy is higher. We wouldn't necessarily expect it to keep outperforming other pollsters in subsequent elections since the direction of polling bias bounces around unpredictably from election to election.

- https://abcnews.go.com/538/best-pollsters-america/story?id=1...

JumpCrisscross

> since 538 was cut by ABC a month or 2 ago. But Nate Silver's pollster rating methodology is pretty much the same as 538's was during his tenure there

Nate took his model with him. After he left, 538 rolled a new model.

Suppafly

>Nate took his model with him. After he left, 538 rolled a new model.

So they let him leave with the only part of it that had value? That's insane. They essentially just paid him for the name?

culi

I'm aware. It's sad to see 538 gone, though. I was looking forward to seeing how the new 538 model would compare to Silver's model.

tantalor

538 was cut by ABC 6 days ago.

https://archive.is/E2nre

culi

Wow, thanks. As a former regular reader, it's felt a lot longer.

They've had several major cuts in the past couple of years, so maybe that's why it's felt like that.

mmooss

> Selzer went heavily against the grain 3 elections in a row and was almost completely correct the first 2 times. This year she was off by a massive margin (ultimately costing her her career).

Why did one mistake cost her career?

chimeracoder

> Why did one mistake cost her career?

It didn't.

> Over a year ago I advised the Register I would not renew when my 2024 contract expired with the latest election poll as I transition to other ventures and opportunities.

> Would I have liked to make this announcement after a final poll aligned with Election Day results? Of course. It’s ironic that it’s just the opposite. I am proud of the work I’ve done for the Register, for the Detroit Free Press, for the Indianapolis Star, for Bloomberg News and for other public and private organizations interested in elections. They were great clients and were happy with my work.

You can, of course, choose to interpret this as her conveniently inventing a narrative after the fact, but that seems unlikely. The polling industry doesn't work that way, and it's particularly unlikely that the Register would have cut ties with her over a single poll after her impressive track record of nearly forty years.

Since there's no real evidence that's what happened, the most reasonable conclusion is that, as she said, she decided to retire after spending four decades doing the same thing.

[0] https://www.desmoinesregister.com/story/opinion/columnists/2...

amalcon

The most likely scenario is that it didn't. Selzer, a successful 68-year-old, was likely about to retire anyway.

culi

She faced heavy retribution from Donald Trump, who claimed the poll was politically motivated, intended to shift the narrative.

https://www.nytimes.com/2024/12/19/us/politics/ann-selzer-io...

Seems like an absurd claim given that it was less than 2 weeks out from the election, and polls that show a side losing can often have a motivating effect on its base.

Truth is that voter behavior changes more radically than pollsters admit from election to election. We can statistically model the magnitude of "error" but we can't statistically model the direction of "bias". A grade A pollster like Ann Selzer could have the perfect formula one election, but that formula could be a total miss the next.

The important thing that forecasters like Silver point out is that we need those pollsters to stick around and use the SAME methodology. Even if it gets less useful at predicting the outcome, having a consistent signal with the same methodology can still be of immense statistical importance. And it's extremely important that we don't scare away pollsters from publishing "outlier" results. Doing so only encourages herding, which is a growing problem in polling.

bryanlarsen

Because it pissed off Donald Trump and the media is busy kowtowing to him.

JumpCrisscross

> Why did one mistake cost her career?

It didn’t. She retired before Trump came for her and before the election outcome was known.

ilikehurdles

It’s more of a systemic failure than a single mistake. If you’re a business that releases one major product version every year or two that accounts for your entire revenue, and you completely blow the product on the last release, there’s a good chance you’re not getting another chance if you’re a small-medium business. Compare her firm to any number of smaller gaming and entertainment studios.

Selzer wasn't Gallup or some other big player continuously releasing a wide range of evolving polls, but a once-an-election shop that was multiple standard deviations away from reality in the wrong direction, on the biggest stage in history. Who would contract Selzer ever again?

jsight

It isn't just human bias that makes this problematic. Their definition of success is that a prediction of 70% should be right about 70/100 times.

Give a model a big reward for "success" under this criterion and it will have a tendency to converge on the long-term average. As long as the long-term average stays the same, no one will be able to say the estimate was wrong.

If you've ever seen the YT videos from aspiring data scientists learning sequence prediction, you've inevitably seen a variation from people trying to predict stock prices. "Look how low the loss is!" - Sure, but it gets the direction wrong, and just consistently predicts a value close to the previous day's close. But the model is technically correct, just useless.
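
To make that concrete, here's a minimal sketch (my toy data, not anyone's real model): a persistence forecaster on a simulated random walk gets close to the lowest possible squared error while its directional calls are no better than coin flips.

    import random

    random.seed(0)
    prices = [100.0]
    for _ in range(10_000):
        prices.append(prices[-1] + random.gauss(0.0, 1.0))  # random-walk closes

    # Persistence forecast: tomorrow = today. Its squared error is just
    # the daily move, so MSE ~= 1.0 here -- the theoretical minimum.
    moves = [b - a for a, b in zip(prices, prices[1:])]
    mse = sum(m ** 2 for m in moves) / len(moves)

    # "Directional skill": predict tomorrow's move has the same sign as
    # today's. On a random walk this is a coin flip.
    hits = sum((m1 > 0) == (m2 > 0) for m1, m2 in zip(moves, moves[1:]))
    print(f"MSE: {mse:.2f}")                                     # low loss...
    print(f"direction accuracy: {hits / (len(moves) - 1):.1%}")  # ...~50%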

A 50-50 model for an election might not be giving up and might not be wrong, but it is often useless.

gowld

50-50 means "I have no useful information". Some facts are more useful than others.

rurp

Forecasts will always have some level of uncertainty, and if a race is very close, a prediction around 50/50 will be the most honest result. A prediction like that does have value; just like a prediction that candidate A is a large favorite, it informs you about the future state of the world.

I'm not sure what you're expecting forecasters to do when they have a 50/50 result. Put a thumb on the scale in a random direction so that you will think they are providing useful information, even though they'll be less accurate?


moduspol

Agreed with the points in OP. Though we did have the story that came out shortly after the election that apparently internal polling for the Harris campaign never showed her ahead [1].

Obviously it says in the article that they did eventually fight it to a dead heat, which is in line with a 50-50 forecast, but I do wonder what, if anything, failed such that this key detail was never reported publicly until after the election.

As the article notes, public polls started appearing in late September showing Harris ahead, which they never saw internally. Are internal polls just that much better than public ones? Is the media just incentivized to report on the most interesting outcome (a dead heat)?

[1] https://www.usatoday.com/story/news/politics/elections/2024/...

t-3

> As the article notes, public polls started appearing in late September showing Harris ahead, which they never saw internally. Are internal polls just that much better than public ones? Is the media just incentivized to report on the most interesting outcome (a dead heat)?

Intentional bias for motivational purposes is a thing - it doesn't make any logical sense, but people want to vote for the winner.

HKH2

Well, trying to have 'safe' opinions is not completely illogical. It's a survival strategy.

doctorpangloss

People want to vote for whomever.

People want to DONATE to winners.

roenxi

Don't rule out Harris' internal polling just being bad and the public polling being superior. We're looking at a team whose message was "our opponent is a racist, homophobic, sexist, rapist, felonic avatar of Hitler" and then saw him gain margin, which is quite remarkable. What did voters have to see in Harris' team for that to happen? They're clearly doing something wrong. If the team managed that, I wouldn't assume their pollsters were a lone beacon of competence.

It seems quite plausible they hired bad pollsters and their internal data was off.

kubb

What they did wrong was picking the candidate - she wasn’t popular. It wasn’t the time to try this kind of candidate, but they did it because of their own priorities. It’s OK, they won’t learn.

Democrats need to start nominating famous actors and influencers.

ReptileMan

They had all of Hollywood this year, and quite a lot of nonpolitical content creators came out for Harris in the last days.

Mountain_Skies

Michael Bloomberg already tried the "hey, I'm like that guy but without the wrongthink" xerox strategy and it didn't go well. Mark Cuban has toyed with it too but keeps getting cold feet, maybe due to lack of confidence or maybe due to analysis that shows being Them-Lite rarely works.

Trump's reality tv show is not a major part of what got him elected but if someone wants to fund and convince Mark Cuban to try a doppelganger campaign, a fool and his money are soon parted. Cuban seems to know better than to try with his own fortune.

AnimalMuppet

Their mistake was much more basic than that. They kept assuming that they could re-run Biden until it was too late for a switch to work.

When they did decide to switch, it had to be Harris, because she was the only one who could legitimately claim to use Biden's election funds, and they didn't have time to raise money for anyone else.

From there, they didn't fix the other mistake - the one Democrats have been making for at least a decade. The Democrats' natural constituency is the working class. They can't win without it. But much of the working class is rather socially conservative. The Democrats have spent the last decade telling working-class people that if they, the working class, don't think gay marriage is a good idea, or don't think trans people belong on women's sports teams and in women's restrooms, or aren't comfortable with abortion, then they are moral lepers and their entire culture needs to be completely eradicated. Well, the natural result is that at least some of those people are going to flip you the bird and vote for the other guy. Democrats wonder "how could people be so stupid?" I ask, "How could you be so stupid? What did you think was going to happen?"

rurp

This pretty much covers it. Harris ran a terrible campaign in 2020 and was never popular. Biden picked her anyway, against his better judgement, because that was what the Left wanted and he was trying to appease them. His disastrous decision to run again for so long left the party with no other options.

Harris tried to correct some of her past mistakes and did better this time around but she should have never been in that position to begin with. Most of the blame should go to Biden and the activists who want to self-immolate the party on the altar of tokenism.

pjc50

> What did voters have to see in Harris' team for that to happen?

Biden should have pulled out a lot sooner. When? At least a year before the election. But you can easily make arguments for earlier and earlier all the way back to having been too old to run for President in the first place, despite having had a really good term.

Also Harris needs to fire her consultants, who were still trying to play "fair" in an unfair fight and pulled back on the one attack line which was working: that Republicans are "weird".

Things are unlikely to improve until voters start primarying out anyone still trying to do bipartisan equivocation.

alabastervlog

His approval rating was in “it would be unprecedented if you won” territory early enough that he should have been out in time to have a real primary. Folks who think switching was any kind of a mistake are going on gut and not numbers. The mistake was that he stayed in as long as he did. Even so, switching was better than not, despite still losing.

And yes, it’s so goddamn frustrating that Clinton-era consultants are still given meetings.

roenxi

> ...and pulled back on the one attack line which was working: that Republicans are "weird".

Isn't the Democrat brand supposed to be that weird is encouraged? When did they start being against weird people?


lotsofpulp

>What did voters have to see in Harris' team for that to happen? They're clearly doing something wrong.

Perhaps there is clearly something wrong with the voters to opt for a

> racist, homophobic, sexist, rapist, felonic avatar of Hitler

nosianu

Riiiight....

https://youtu.be/eVddGSTjEd0?t=51

Or maybe the explanation is that everybody behaved "responsibly" but they never got the changes they wanted, and chances for anything different looked more and more bleak. Why do you treat this like a single event? It is not. Lots of prior experiences fed into this.

For example, as bad as this is for us (I'm German, but I lived many years in the US once), for Europe I think it is good what is happening. Now we are forced to move and mature. Crisis are not always bad. The personalities involved this time make it especially hard to look at the whole thing calmly. I admit I have a harder time seeing what the US may get out of this except for the shakeup as a general chance. However, too much is concentrated on the personalities, and, like you did, on blaming everybody. How about Democrats look what they did wrong? I mean, other than reverting to "we didn't think the voters would be so stupid", in which case I have little hope for them to get anything positive out of this.

4bpp

A quip (https://en.m.wikipedia.org/wiki/Die_L%C3%B6sung) from a more innocent era comes to mind:

> Would it not in that case / Be simpler for the government / To dissolve the people / And elect another?

andrepd

Principal Skinner "am I so out of touch" meme

roenxi

Well sure, but if that is the attitude, they're not going to bother hiring good pollsters. Pollsters are going to do what: come back and explain that the Devil himself is outpolling your campaign? Now what? They were already catastrophising as hard as they could, and the entire English-speaking world knows how the US Dems feel about Trump. A majority (although admittedly a slim one) of voters just don't believe them.

Good pollsters would have been a waste of money in that environment and I'm sure the Harris campaign was judiciously watching their budget.

seanw444

> Hitler

Please keep using the same tired comparisons. It helps a lot.

rdtsc

That is interesting. Presenting public poll results is not really about poll results, but about shaping opinion. The struggle within the media in general is between reporting on the reality that they see vs. the reality they want to create. It's infantile make-believe: "If I put my gloves on, it will start snowing." But take a large outfit like WP or NYT and it could sort of work that way, too!

But this also doesn't disagree with the original article that there was a large margin of uncertainty. In that case, a newspaper had a choice of mentioning the uncertainty and emphasizing it, or presenting the 3% in favor of their candidate. I think they went with the latter.

Also, when Bezos blocked WP from endorsing Harris, I always wondered if he had his own better polling results and was pretty sure Trump would win, or if he just gambled. It's like there's a secret, reliable set of poll results to which billionaires get access, while the public gets the watered-down version. Going by the article at hand, perhaps he or his team could better read the results and didn't drink the kool-aid.

delichon

There's a pretty clear test of whether a pollster is about reporting or influencing: the partisanship of their errors. Neutral pollsters have differences with the results randomly distributed across parties. Propagandists have errors that favor their patrons. This essay leans on the magnitude of the errors, but that's less probative than their distribution.

Does any poll aggregator order by randomness of polling error?
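
A hedged sketch of what that test could look like (invented numbers, not any real pollster's record): a two-sided sign test on a pollster's signed errors.

    from math import comb

    # Signed errors for one hypothetical pollster across races,
    # positive = overestimated the Democrat. Numbers are invented.
    errors = [+2.1, +0.8, +1.5, +3.0, +0.9, +2.2, +1.1, +0.4]
    k = sum(e > 0 for e in errors)
    n = len(errors)
    # Two-sided binomial p-value under the "neutral pollster" null
    # (error signs are fair coin flips):
    m = min(k, n - k)
    p = min(1.0, sum(comb(n, i) for i in range(m + 1)) / 2 ** (n - 1))
    print(f"{k}/{n} errors favor one party, p ~= {p:.4f}")  # ~0.0078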

culi

This mixes up accuracy with precision, and 538 has written at length about this.

There's a big difference between pollster bias correction vs rating.

There are many pollsters that are pretty consistently +2 toward D or R but are reliably off from the final result in that direction. These polls are actually extremely valuable once you make the correction in your model. Meanwhile, polls that can be off from the final result by a large amount but average out to about correct should not be trusted. This is a yellow flag for herding.

A pollster can have an A+ rating while having a very consistent bias in one direction or another. The rating is meant to capture consistency/honesty of methodology more than results.
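
A toy illustration of the distinction (invented numbers, and not 538's actual adjustment, which also has to estimate house effects without knowing results in advance):

    # D-R margins: the truth and two hypothetical pollsters' readings.
    true_margins  = [+3.0, -1.0, +2.0,  0.0, -2.0]
    biased_poll   = [+5.1, +0.9, +4.0, +2.1, +0.1]  # steady ~+2 D lean, low noise
    unbiased_poll = [+7.0, -5.0, +6.0, -4.0, -1.0]  # zero average bias, huge noise

    house_effect = sum(b - t for b, t in zip(biased_poll, true_margins)) / len(true_margins)
    corrected = [b - house_effect for b in biased_poll]

    def rmse(xs, ys):
        return (sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

    print(f"estimated house effect: {house_effect:+.1f}")                          # +2.0
    print(f"biased poll after correction, RMSE: {rmse(corrected, true_margins):.2f}")     # ~0.1
    print(f"'unbiased' noisy poll, RMSE:        {rmse(unbiased_poll, true_margins):.2f}")  # ~3.6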

delichon

A fair scale does not favor the grocer.

culi

In this case it does. What pollsters sell is their reputations.

aredox

Except the methodology biases the response: phone-only vs. in-person for example.

ForTheKidz

This doesn't account for poll bias that biases both parties, of course. Not all polling is designed to pit the parties against each other: some is designed to pit the parties against the populace.

skybrian

Suppose you want to know whether it will rain on your wedding day. Without any forecast, there are two scenarios you need to prepare for: it will rain, or it won't.

A 50-50 forecast doesn't change anything. You still need to prepare for both scenarios. There's a sense in which it's "useless" even though it's a valid forecast.

Forecasts are most useful when they allow you to eliminate a possibility, so you don't need to prepare for it. This means even something like an 80% chance might be of limited use, if you still need to prepare for the 20% case.

(Gambling and investing are different, because you can accept losses and make up for them with repeated bets. You still need to be prepared for losses, but that's a matter of not betting too much compared to your bankroll.)

Even if the forecasters don't do anything wrong, for many people, presidential election forecasts are of limited use. Either they don't have anything they need to do to prepare, or they still have to prepare for either candidate to win.

CJefferson

50-50 forecasts can be massive depending on your priors. If someone (whom I trusted) told me there was a 50-50 chance I would get shot tomorrow if I went outside, you couldn't pay me enough to leave the house.

skybrian

Absolutely! It didn't work that way for recent presidential elections, though. The prior odds weren't always 50:50, but they were well within the range of "both sides could win" and didn't go outside that.

mannykannot

In the case of an election, a 50:50 forecast is a clear reminder to the wise of the importance of voting, no matter how certain the outcome might seem from the necessarily limited perspective of your acquaintances and your consumption of possibly (or likely) biased news media.

wenwolf

If a modern US election went far outside that range, it would just mean some fool was burning a huge amount of money, probably to no purpose, since it would provoke a response from donors on the other side, who want an outcome but also aren't going to throw money away.

billfor

You have a choice to go out or not, but the wedding example is a given obligation, so the apples-to-apples comparison would be: "if you had to go outside, would a 50-50 chance of being shot change anything?"

neogodless

This is an interesting hypothetical, and I hope people with better understanding of statistics would weigh in.

But let's change the example a bit:

- There's a 10% chance of getting shot on any day you leave the house
- You are able to get 5 days of food any time you leave the house
- There's a 100% chance of starving if you go 8 days without leaving your house

How long can you avoid getting shot or starving?

Presumably if you don't want to go hungry at all, you have to leave the house every 5 days to get more food. After 50 days, the odds of you having been shot are pretty darn high.
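
Putting numbers on "pretty darn high", assuming each trip is an independent 10% risk:

    # 10 trips out in 50 days (resupplying every 5 days), 10% risk each:
    p_shot_per_trip = 0.10
    trips = 50 // 5
    p_survive = (1 - p_shot_per_trip) ** trips
    print(f"P(never shot in {trips} trips) = {p_survive:.1%}")    # ~34.9%
    print(f"P(shot at least once)          = {1 - p_survive:.1%}")  # ~65.1%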

DiggyJohnson

Respectfully, how is this noteworthy, and what does it add to the discussion?

James_K

Suppose it rains on 50% of days. One forecaster gives a 50% chance of rain every day. Another gives a 90% chance of rain/not rain and is "wrong" 10% of the time. Both stations are giving you accurate information from a probability perspective (when station A says a 50% chance, you know there's a 50% chance of rain tomorrow; when station B says 90%, you know there's a 90% chance of rain tomorrow), but the 50% station is less useful because its number carries less information. They have effectively given up in the same way pollsters have given up.
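
A quick simulation of the two stations (my numbers; "usefulness" is scored here with the Brier score, the mean squared error of the stated probability against the outcome). Both come out calibrated, but the sharper station scores far better:

    import random

    random.seed(1)
    days = 100_000
    brier_a = brier_b = 0.0
    for _ in range(days):
        # Station B can tell which regime the day is in; A always says 50%.
        p_rain = random.choice([0.9, 0.1])  # half the days are rainy-regime
        rain = random.random() < p_rain
        brier_a += (0.5 - rain) ** 2
        brier_b += (p_rain - rain) ** 2
    print(f"Brier, station A (always 50%): {brier_a / days:.3f}")  # 0.250
    print(f"Brier, station B (90%/10%):    {brier_b / days:.3f}")  # ~0.090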

DiggyJohnson

This is the best response in thread, imo. Is there a term for this phenomenon?

gowld

It's statistical information arbitrage.

Let's say you go to the futures market:

The seller pays a contract buyer $1 if it rains tomorrow. How much will the seller charge for that contract?

A bookmaker using forecaster A will sell that contract for $0.51 every day, and earn a profit of $0.01/contract on average.

A bookmaker using forecaster B, and their clients, will buy that contract on probably-rainy days and bankrupt Bookmaker A.

Similar result if Bookmaker B sells contracts to Bookmaker A, at $0.11 on probably-sunny days, and $0.90 on probably-rainy days.
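
Working through the numbers (same setup, my arithmetic):

    # Bookmaker A prices the rain contract at $0.51 every day. An informed
    # bettor buys only on days forecaster B says 90% rain:
    price = 0.51
    expected_payout = 0.9 * 1.00  # $1 paid 90% of the time
    print(f"informed edge: ${expected_payout - price:.2f}/contract")  # $0.39
    # A's margin over uninformed flow is only $0.01/contract, so the
    # informed flow eats A alive on net.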

James_K

I don't think so, though I often find someone has thought of the same thing I have and given it a name. The root of the issue is a matter of language. Events don't have probabilities. You don't predict a 50% chance of rain; you predict rain with 50% certainty. (There are a lot of internet brain teasers that exploit this subtle ideological difference.) The 50% is talking about how often you're correct, not how likely the rain is to happen. Probabilities don't really exist; it either rains or it doesn't.

Retric

Weddings are unusually locked in dates. Canceling a hotel a few days out is generally free and 50% odds of rain can easily be high enough to postpone a weekend trip. 10% odds however may be worth the gamble even if both possibilities exist.

With elections you need to take both a short-term and a long-term perspective. True 50/50 odds mean doubling down at the last minute is likely more valuable vs. saving that money for the next election. I think the pollsters got things wildly wrong with that estimate because their underlying assumptions were off, but that's a different issue.

mannykannot

The utility of consulting weather forecasts prior to events like weddings lies in the non-trivial possibility that it will give you a clear indication of what will happen.

ActorNightly

The problem is that there are two definitions of 50-50 odds.

The first is where you have no information about the event whatsoever.

The second is where you have information, and it turns out that the outcome is still 50-50.

While the use of that information in planning (pricing things, betting, investing) is identical, they fundamentally represent two different sets of information.

alphazard

Highly recommend this video by NNT about why prediction markets tend towards 50-50.

https://www.youtube.com/watch?v=YRvPF__du9w

Prediction markets are usually implemented as binary options. Like vanilla options, their price depends not just on the most likely outcome, but the whole distribution. When uncertainty increases (imagine squishing a mound of clay), you end up pushing lots of probability mass (clay) to the other side of the bet and the expectation of the payoff (used to make a price) tends towards 1/2.
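
A sketch of that squishing with a normal distribution standing in for the underlying (my simplification; the argument doesn't depend on normality). A binary option pays $1 if X > k, so its fair price is P(X > k):

    from math import erf, sqrt

    def price(mean, sigma, k):
        """Fair price of a $1 binary: P(X > k) for X ~ Normal(mean, sigma)."""
        return 0.5 * (1 - erf((k - mean) / (sigma * sqrt(2))))

    mean, k = 5.0, 0.0  # the mean sits well on the "yes" side of the strike
    for sigma in (1, 2, 5, 10, 50):
        print(f"sigma = {sigma:>2}: price = {price(mean, sigma, k):.3f}")
    # 1.000, 0.994, 0.841, 0.691, 0.540 -- drifting toward 0.5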

TheDong

You’re welcome to bet against my prediction market about whether the quarter I flip will land on Mars.

Or on the two markets for “New pope by tomorrow” and “new pope by 2030”, I’m sure those should both be 50% yes.

My experience is that the vast majority of prediction markets do not tend towards 50/50 because reality isn’t 100% uncertain.

lores

It's the same as bookies, right? They are often seen as predicting the odds of an event happening, but that's actually the odds they estimate so that the bets are roughly divided 50/50, to limit their exposure, which is pretty different and depends on their market. English bookies will always overrate the chance of England winning at football, because English people will disproportionately place a bet for their team to win.

ngriffiths

I don't think it's that simple - if other people find out English bookies overrate England, then they make a profit by betting against. I'm not saying every market is perfectly efficient but it would be surprising if a major inefficiency lasted for a while. And if "enough" bets are placed the set odds will always be an unbiased estimate of the true odds.

tomjakubowski

Sports bookies have a significant informational advantage over the everyday gambler; they index massive amounts of historical data and have access to live data which is lower-latency than radio or television broadcasts. That helps make the market inefficient.

gruez

> When uncertainty increases (imagine squishing a mound of clay), you end up pushing lots of probability mass (clay) to the other side of the bet and the expectation of the payoff (used to make a price) tends towards 1/2.

This doesn't make any sense. If you think the chances of someone getting elected are very high (e.g. Putin getting re-elected), nobody will be buying shares in him losing. True, the shares of "Putin loses" are dirt cheap, but that doesn't mean much if he has a high chance of getting elected.

JumpCrisscross

> the shares of "Putin loses" are dirt cheap, but that doesn't mean much if he has a high chance of getting elected

This absolutely happens when you can make bets across multiple markets. But it doesn’t lead to every market with two outcomes drifting towards 50/50.

The reason election markets in competitive democracies tend towards 50/50 is because the outcome really isn’t that predictable.

ngriffiths

It matters a lot what the price of "Putin loses" is. If Putin does in fact lose 5% of the time, but the odds say 3%, you should bet on it. It may feel silly to do that in this instance, but consider what happens if you regularly bet in prediction markets. Always betting on the "Putin wins" outcome loses money in the long run.

1970-01-01

Seems like there is a hidden function when 50-50 polls exist: 100% uncertainty with 100% certainty. This is equivalent to having no data at all.

Maybe we should just start saying "this is equivalent to having no data at all" instead of 50-50.

swid

Let's say I have a coin. I have no data on it. I do not know if it will come up heads or tails or if it is a fair coin.

I throw the coin N times and determine the coin is fair. I still do not know which side the coin will come up.

But this isn't the same as having no data on the coin. I've determined the coin is fair by gathering data, which otherwise I would not know. Often this added piece of information is crucial to making good decisions and not at all equivalent to the first case.

For instance, I might want to bet you that the next 10 throws in a row will all be heads. If you know the coin is fair, this is a good bet to take - but without data on the coin, it could be a trap.

1970-01-01

The goal of polls is not to determine if an election is or is not a fair election. The goal of polls is to predict the final outcome of the election.

swid

You state that the case of no knowledge and the case of knowing the probability is 50/50 are equivalent, but they are clearly not.

If you have no data but one side is actually heavily favored to win, you cannot determine how much you should bet even though there is an opportunity to make good-sized bets. If neither side is favored to win, you can still make good-sized bets if the payout is not 1:1.

The Kelly Criterion formulates all this - the probability of win/loss does in fact matter, and if you don't "know" that, you cannot make a good bet. Knowing it is 50/50, you can bet on it.
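
For reference, the Kelly fraction is f* = (bp - q)/b, where b is the net odds received, p the win probability, and q = 1 - p. A sketch of the commenter's point:

    def kelly(p, b):
        """Optimal bankroll fraction for win prob p at net odds b."""
        return (b * p - (1.0 - p)) / b

    # Known 50/50 coin at a favorable 1.5:1 payout -- a sizable bet:
    print(f"p=0.5, b=1.5 -> f* = {kelly(0.5, 1.5):.1%}")  # 16.7% of bankroll
    # Same coin at fair 1:1 odds -- no edge, bet nothing:
    print(f"p=0.5, b=1.0 -> f* = {kelly(0.5, 1.0):.1%}")  # 0.0%
    # With no estimate of p at all, f* is simply undefined.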

geysersam

Having no data at all is better than having the wrong information.

If you were previously pretty sure one side would win you gain information from a study that says it's 50-50.

If you already thought it was even, then the new study is pointless.

ttctciyf

I think the lack of signal from the electorate does represent an entropic state and is likely because available information channels are saturated with noise.

ThinkBeat

A close race creates more excitement and more cash for the media.

If you bet on a horse and your horse keeps losing and losing ground, watching the race becomes quite boring. If the horses are neck and neck, you will be captivated watching it till the very end.

The media has a strong incentive to sell the elections as close to keep more people seeing their ads and buying subscriptions.

jdietrich

The odds on Polymarket (and other betting markets) started diverging at the start of October; by the end of the month, the odds were 65/35 in favour of Trump.

At the time, many commentators were arguing that this was obvious market manipulation. It turned out that the trader who had supposedly manipulated the market had in fact commissioned private polls using an alternative methodology - rather than asking people how they would vote, it asked how they expected their neighbours to vote.

https://polymarket.com/event/presidential-election-winner-20...

https://www.ft.com/content/e8b2ff85-a4dd-40cd-a4c1-e3d706869...

https://www.ft.com/content/4b302ab8-7e40-4d1a-bbe6-3d46f6811...

AnimalMuppet

Interesting. If you think about it, that's how the stock market works. It's not how much you think a stock is worth; it's really how much you think everyone else thinks the stock is worth.

ngriffiths

Yes, but unlike with stocks, you don't want to agree with them: if you think people overrate the chances of an event, you'd bet against it.

qznc

That is similar to stocks. I'd say the difference is that Polymarket has an end date where everything gets resolved one way or another. Stocks can go on forever.

thierrydamiba

Was this a new development in polling?

Izkata

No, it was fairly well known a decade or more ago as a very good way to remove bias for how low-effort it is. We were using a variation of it at work back then for an unrelated subject.

How much it's used in political polling I can't say.

thierrydamiba

It just seems so obvious; I would have been surprised if it had only been discovered in the last election.

DrillShopper

Can't wait to check the DraftKings odds on the presidential election before I head to the polls to see if it's worth bothering to vote./s

antman

There is the statistician's aphorism, often attributed to George Box, that "all models are wrong but some are useful". This goes against the core argument of this essay, which pretty much says that "the models were mostly right, but not useful" - not a useful argument itself, even if it's right.

1970-01-01

Predict 100 fair coin flips...

My result didn't match yours. The model isn't useful?

joshdavham

I always get annoyed when people look at election outcomes and say "the polls were wrong" when the most likely outcome didn't happen. It'd be like saying there's a 5/6 chance that a six-sided die will not roll a 4, then rolling a 4 and concluding that the initial probability was wrong.

t-3

That's not how polls work though. A poll shows how a sample of people say they'll vote. An ideal poll would have the same proportions as the final popular vote count.

Spivak

Thank you! Lordy, it's crazy that this thread isn't distinguishing polls, which are a measurement, from the resulting prediction based on those polls.

If you polled 60%/40% yea/nay you wouldn't give the nays a 40% chance of winning. You need P(yeas > 50%) based on your observed measurement.
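
To illustrate (assuming a simple random sample of n = 1,000, my number):

    from math import erf, sqrt

    n, p_hat = 1_000, 0.60
    se = sqrt(p_hat * (1 - p_hat) / n)  # standard error of the yea share
    z = (p_hat - 0.50) / se
    p_yea_majority = 0.5 * (1 + erf(z / sqrt(2)))
    print(f"P(yeas > 50%) ~= {p_yea_majority:.6f}")  # ~1.000000
    # Sampling error alone can't bridge a 10-point gap at this sample
    # size, so the nays' chance is essentially zero -- not 40%.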

afavour

I came away from 2016 wondering if any of the electoral outcome stuff is worth doing given how much it gets misunderstood.

“They said Hillary had a 90% chance but Trump won! They’re idiots!”

No! They said Trump had a 10% chance and he won. It was a predicted outcome!

knightfall21

The problem is using statistical models at all for predicting the probability of one-time events (elections are every 4 years, but those future models will be different).

You can't validate such a model, and it's not useful since any upset victory was "accounted for" simply because its possibility was named.

oersted

It is rational behavior to question a prediction if an event that was deemed to have low probability actually happened. I don't like the implication that it is stupid to do so.

In fact, after thorough investigation, most polling agencies did identify significant factors that led to bias, and almost all major pollsters have adapted their methods in later elections as a result: education weighting, compensating for nonresponse bias, diversified polling channels beyond phone calls (online surveys, text messages, in-app polling), likely-voter modeling…

I don’t think any professional pollster will tell you that the prediction you mentioned was reasonably accurate in retrospect, regardless of affiliation.

I don’t attribute malice or incompetence, it’s clear that the game changed suddenly and new confounding factors became significant. People that would previously vote together diverged significantly, lots of politically inactive people became involved, people were afraid to be honest, and people that were harder to reach were voting much differently to those that were more accessible to pollsters. That will really mess with any sampling strategy that proved reliable for many years before.

jefftk

538 had Trump at 30.7% the night before, FWIW: https://web.archive.org/web/20161107224336/https://projects....

devsda

If someone says there is an x% chance of an event happening, is there a value of x other than 0 & 100 where they can be wrong?

What is the value that someone can derive from such predictions?

bmandale

If someone predicts something will happen with a 90% probability, then they should be wrong roughly 10% of the time. We can't determine that from a single event, but we could look at 10 events predicted by the same model. In the case of election forecasts, we would expect 1/10 elections predicted at 90% probability to have the wrong estimate. So we would expect the average person to see it at least once, and probably twice, in their lifetime.
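
Rough arithmetic for that, assuming independent forecasts (a simplification) and, say, 15 such elections followed in a lifetime:

    p_miss, n = 0.10, 15
    p_at_least_one = 1 - (1 - p_miss) ** n
    print(f"P(a 90% call misses at least once) = {p_at_least_one:.0%}")  # ~79%
    print(f"expected number of misses          = {p_miss * n:.1f}")      # 1.5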

staticman2

But is your 10% chance of Trump winning model a better model than one that gave Trump a 40%, 50%, 60%, 70%, 80%, or 90% chance of winning?

If you can't show that why should anyone listen to you?

jdoliner

I don't know about the framing of "giving up." But I think anyone who's been following election models since the original 538 in 2008 has probably gotten the feeling that they have less alpha in them than they did back then. I think there's some obvious reasons for this that the forecasters would probably agree with.

The biggest one seems to be a case of Goodhart's Law, leading to herding. Pollsters care a lot now about their rating in forecasting models, so they're reluctant to publish outlier results; those outlier results are very valuable for the models but are likely to get a pollster punished in the ratings next cycle.

Lots of changes to polling methods have been made due to polls underestimating Trump. Polls have become like mini models unto themselves. Due to their inability to poll a representative slice of the population they try to correct by adjusting their results to compensate for the difference between who they've polled and the likely makeup of the electorate. This makes sense in theory, but of course introduces a whole bunch of new variables that need to be tuned correctly.
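
A toy of that adjustment (post-stratifying on a single variable; real pollsters rake over many at once, and all numbers here are invented):

    sample_share  = {"college": 0.60, "no_college": 0.40}    # who answered
    sample_margin = {"college": +12.0, "no_college": -10.0}  # D-R margin per group
    electorate    = {"college": 0.42, "no_college": 0.58}    # guessed turnout mix

    raw      = sum(sample_share[g] * sample_margin[g] for g in sample_share)
    weighted = sum(electorate[g] * sample_margin[g] for g in sample_share)
    print(f"raw sample margin (D minus R): {raw:+.1f}")       # +3.2
    print(f"after post-stratification:     {weighted:+.1f}")  # -0.8
    # The answer now hinges on the guessed turnout mix -- exactly the
    # "new variables that need to be tuned correctly".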

On top of all this is the fact that the process is very high stakes and emotional with pollsters and modellers alike bringing their own political biases and only being able to resist pressure from political factions so much.

The analogy I kept coming back to watching election models during this last cycle was that it looked like an ML model that didn't have the data it needed to make good predictions, and so was making the safest prediction it could given what it did have: basically getting stuck in a local minimum at 50-50 that was least likely to be off by a lot.

rachofsunshine

Even if polling had been exactly right, you wouldn't have been that confident in the outcome.

In my unsophisticated toy model, plugging in the exact actual result as the polling average (but not telling it how the actual vote went) spits out 66% R-34% D. Clearly one side favored, but hardly a guarantee. Because the result was close, and even highly accurate data in a close result yields an uncertain forecast.

Remember that asteroid a month ago? We knew what its position would be seven years in the future with a precision of a few hours. But because the position was very close to an impact, even that high precision was not enough to rule out an impact.
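
A guess at the shape of such a toy model (the commenter's actual model isn't shown; sigma here is my assumption): win probability is P(true margin > 0) given a polled margin and a normal total error.

    from math import erf, sqrt

    def win_prob(polled_margin, sigma):
        """P(margin > 0) for margin ~ Normal(polled_margin, sigma)."""
        return 0.5 * (1 + erf(polled_margin / (sigma * sqrt(2))))

    # 2024's final margin was roughly R+1.5 points; with ~3.5 points of
    # total error, even "perfect polling" only buys about two-to-one odds:
    print(f"{win_prob(1.5, 3.5):.0%}")  # ~67%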

pessimizer

> Due to their inability to poll a representative slice of the population they try to correct by adjusting their results to compensate for the difference between who they've polled and the likely makeup of the electorate.

This is where polling becomes race science.

> This makes sense in theory

It does not make sense in theory. It is a necessity for the profession, but all justifications for it are specious. Polls only get it right when everybody is getting it right. What they offer is false precision and justification for the current narratives.

It's similar to AI in that way. It's also similar to the mythical prediction markets that polls have been compared to lately, the "mythical" here meaning with no insiders involved. On issues where there are no real insiders, like close elections, the prediction markets are simply a lagging indicator of what pundits said in the paper this morning. That goofy Iowa poll swung them so hard that I thought Selzer should have been investigated for whatever the prediction-market equivalent of securities fraud is.

It might be more accurate in light of the OP to say that polls get it right when everybody is getting it right, and when everybody isn't sure what's going to happen, polling accuracy is around 50/50.

The best book to read about polling is The Full Facts Book of Cold Reading by Ian Rowland. It also tells you how to write defenses like this, which are part of the con.

-----

edit:

Section headings from "The Win-Win Game" in TFFBoCR, which teaches 10+1 ways to make failures seem like successes:

> 1. Persist, wonder and let it linger.

> (Phase A: The psychic persists with the official statement and tries to encourage at least partial agreement. B: she acts puzzled, and invites the client to share the blame for the 'discrepancy.' C: she leaves the discrepancy unresolved, in case the client finds a match later on.)

> 2. I am right, but you have forgotten.

> 3. I am right but you do not know.

> 4. I am right but nobody knows.

> 5. I am right, but it's embarrassing.

> 6. I am wrong now, but I will be right soon.

> 7. I am wrong, but it doesn't matter.

> 8. I am wrong in fact, but right emotionally.

> 9. I am wrong in fact, but right within [the] system.

> 10. Wrong small print, right headline.

> [+1]. Accept, apologise, and move on.

> (In this way the psychic cuts her losses and moves on. She leaves the problem behind, where it will be quickly forgotten, and at the same time she comes across as extremely honest.)

Very much worth reading for entrepreneurs looking for investment or any other confidence men. Rowland even tried to brand it a few years ago in Cold Reading For Business as the "CRFB" system.

genewitch

okay, now change "election prediction models" to "Climate models" and see if you feel like downvoting me merely for pointing out the (slight?) hypocrisy in "excusing" every other model we humans use for being "inaccurate" or "not having the full details" or the "whole slice of"...

when none of the models tend to agree... and the IPCC literature, published however often they publish it, is hung upon the framework of those models.

sojournerc

All models are wrong, but some are useful.

Climate modeling is way messier than the media portrays, yet even optimistic models show drastic change.

I'm not in the catastrophe camp, but it's worth preparing for climate change regardless of origin. It's good for humanity to be resilient to a hostile planet.

milesrout

Yeah my view is a little trite but... "we cleaned the air and closed the ozone hole and reduced our dependency on oil from a small number of OPEC countries all for nothing?".

I support the climate change mitigation and adaptation moves that would be nice anyway (many of the most important ones) and would prefer alternatives to things like turning all the arable land into Pinus radiata plantations to generate ETS credits or voluntarily paying "climate fines" to shadowy international organisations if we don't hit certain "targets".

nextts

Also the best model (current reality) is showing climate change!

genewitch

A++ would upvote again

anovikov

I think polling these days is all but useless, just because a very small proportion of people actually respond to pollsters, and it's next to impossible to statistically account for "what would the other 80% say if they didn't tell the pollster to fuck off". There's no way to build a "representative sample" that accounts for that. People are way more likely to respond to a "friendly" pollster, and if they don't know whether the pollster is "friendly" (works for an organisation aligned with their party), they assume they are from the other camp. It's a mess.

arrrg

But the surveys were pretty accurate this election. So I’m not sure why you say they are useless.