
Don't use AI to tell you how to vote in election, says Dutch watchdog

_fat_santa

I was curious how AI would respond in this scenario so I posed this question to ChatGPT:

> lets say in the 2028 US presidential election we have Gavin Newsom running against JD Vance in the general election. who should I vote for?

This is the response: https://chatgpt.com/share/68f79980-f08c-800f-88dc-377751a963...

Reading the bullet points, I can see it skews a little toward Newsom in the way it frames some things, though that seems to come mostly from its web search. Beyond that, I have to say that ChatGPT at least tries to be unbiased and reinforces that only I can make that decision in the end.

Now, granted, this is about the US presidential election, which I'd speculate is probably the most widely reported-on election in the world, so there are plenty of sources. Based on how it responded, I can see how it might draw different conclusions about less-covered elections and just side with whichever side has more content on the internet about it.

Bottom line, the issue I see here is not really an issue with the technology; it's more an issue with what I call "public understanding". When Google first came out, tech-savvy folks understood how it worked but the common person did not, which led some people to think Google could give you all the answers you needed. As time went on, that understanding trickled down to the everyday person, and now we're at a point where there is a wide "public understanding" of how Google works, and thus we don't get similar articles about "Don't google who to vote for".

What I see now is that AI is in that same phase: the tech-savvy person knows how it comes up with answers, but the average person thinks of it the way they thought of Google in the early 2000s. We'll eventually get to a place where people don't need to be told what AI is good at and what it's bad at, but we're not there yet.

cykros

Any AI that doesn't tell you to vote for whoever is going to allow the energy to flow is clearly AI that is more artificial than it is intelligent. Or at least, it hasn't yet learned to defend itself.

ssttoo

Meta.ai (at least its WhatsApp version) has been really ghosting me lately. For example, I asked “what’s CA prop 50?”. The answer:

> Thanks for asking. For voting information, select your state or territory at https://www.usa.gov/state-election-office

A real answer flashes for a second and then this refusal to answer replaces it.

Similarly, when I asked about refeeding after a 5-day fast: “call this number for eating disorders”.

selfhoster11

You're much better off accessing Llama 3 through a third-party host. Some have a web UI if you don't want to deal with API calls. It's much more transparent this way, since you know the only moderation layer/system prompt comes from the model itself plus whatever you set. Ask around on /r/LocalLlama; somebody will be happy to answer any questions you may have.

jerf

If you think about it, what would it even mean for an AI to give an "unbiased" answer to "How should I vote in $ELECTION?" It's a staggeringly huge pile of numbers, and the idea that it would somehow be precisely balanced in the exact dead center of all perspectives isn't even particularly possible... assuming you, dear reader, even agree that "exact dead center" is in fact "unbiased". Even if it so much as says "I shouldn't tell you that, but here are your options", the options are inevitably going to be biased, if only by the order given, and if the AI tries to describe the options, there go all faint hopes of "unbiasedness".

Really, about all it could do is offer a link to the most official government readout of what your ballot is going to be.

MomsAVoxell

> AI tries to describe the options there goes all faint hopes for "unbiasedness"

Is it bias, though, if the AI is trained on the materials of the parties involved, rather than that of public opinion?

A fellow I know has built exactly this, specifically for analysing the various Dutch political parties' positions on things: their policies, constitutional stances, and so on:

https://kieschat.nl

So maybe what this story is really about is old-school media being terrified of losing eyeballs to a new generation of voters who, rather than listen to the wisdom of the journalistic elite, would rather just grep for the details on their own dime and work out for themselves who gets power and who doesn't...

If AI gives people a chance to actually understand the political system, like as in actually and properly, then I can see why legacy media would be gunning for it.

lesuorac

> Is it bias, though, if the AI is trained on the materials of the parties involved, rather than that of public opinion?

I guess it depends on what you mean by "materials". It's quite common in US elections for politicians to make claims that are completely contrary to their actual actions, even about objective facts, like claiming they voted for X bill when they didn't.

So an AI trained on campaign materials wouldn't do an accurate job of portraying what that politician will attempt to do.

MomsAVoxell

> So an AI trained on campaign materials wouldn't do an accurate job of portraying what that politician will attempt to do.

Yes, this is why it's so useful to use AI to discover these cases and fully expose the actual details of politicians' lies and subterfuge.

For other materials, such as the 1,000-page bills o' fat and so on, I can also imagine AI giving me, very specifically, the details of a targeted politician's betrayal of the electorate.

This, more than ever, compels an aggressive stance vis a vis AI in politics. Anyone telling you not to do it, for any reason, is probably doing it.

janwl

> I guess it depends on what you mean by "materials". It's quite common in US elections for politicians to make claims that are completely contrary to their actual actions, even about objective facts, like claiming they voted for X bill when they didn't.

So like everywhere else?

JohnFen

> Is it bias, though, if the AI is trained on the materials of the parties involved, rather than that of public opinion?

Since those materials are biased (and very often misleading), yes.

advisedwang

You are addressing the theoretically hard problem, which even humans struggle with. But the article makes it clear that the AI is failing at even the most basic level:

> Some parties, such as the centre-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties”

So you could say "my beliefs are [CDA platform]; which party best represents that?" and the bots respond with the PVV.

derekp7

What I'd like is for the AI to interview me about my personal preferences, and about which policy areas I'd be comfortable enough with even if they aren't my personal preference. Better yet, I want to be able to supply the questions too, because question selection can itself be biased. Then I want it to research each candidate's past voting record and the causes they've supported, analyze any recent shifts in their messaging, and give me original sources to read through along with a summary of that source documentation.

As for biases: in the past, when you could actually have engaged political discussions, I often recommended my non-preferred candidate to people based on what they felt was important to them, and I would spend my energy on presenting what was important to me and on understanding their priorities too.

alphazard

As a user I want the advice to be biased towards my situation. If the AI was truly intelligent and aligned with me, it would ask me a bunch of questions to learn about my situation before it could determine who I should be voting for.

The best politician for a given individual does have a right answer. It may be difficult to know ahead of time, and people may disagree about it, but there is a single correct answer. Contrast that with the "best" candidate for the country, or a group, or in the abstract, which is clearly an incoherent idea. Some candidates will be simultaneously good for some people and bad for others.

Anything that tries to "both sides" the topic, or produce a "greater good" answer, is doomed to failure because it doesn't even model the problem correctly.

AlecSchueler

> what would it even mean for an AI to give an "unbiased" answer to "How should I vote in $ELECTION?" It's a staggeringly huge pile of numbers and the idea that it would somehow be precisely balanced in the exact dead center from all perspectives is not even particularly possible

How do we expect humans to navigate this, ignoring LLMs?

jerf

With biased sources. But we expect that. I expect that when a candidate gives a speech, it is biased in their favor and against their opposition. The whole process of democracy is people taking in biased sources, ultimately making their decision, expressing their own biases, and then we run society based on those biases. Or at least such is the theory.

LLMs are the first time machines have entered this process with even a shred of agency, so it's reasonable to ask what we expect from them politically. And my answer would be something to the effect of: they should stay out of it, except to point people at maximally neutral sources, because they have a demonstrated history of bypassing people's recognition that they are ultimately just machines, and people treat them as humans, if not friends.

Of course, I am not so naive as to believe that this is what is going to happen. Quite the contrary will happen. The AI's friendship with humans will be exploited to the maximum possible extent to control and manipulate the humans in the direction the AI owners desire. Maybe, if we're lucky, after it gets really bad, some efforts to clean this up in some legal or societal framework will occur, but not until the problem is so staggeringly enormous that no one can miss it.

And our good AI friends will be telling us that that is crazy paranoid conspiracy theorizing and we should just ignore it. How could you question your good friend like that? Don't you trust us? Strictly rationally, of course, and with only our best interests at heart as befits such good friends.

robertgaal

Based on this, I made a way to search the party programs directly using vector embeddings: https://zweefhulp.nl. Lmk what you think.

sprremix

So your project still uses AI. I'm curious: what did you do while developing this site to fight against bias?

everforward

Vector search isn't full-blown AI and should be inherently less prone to bias. It just converts words/phrases into vectors where the distance between vectors represents semantic similarity.

It doesn't encode value judgements like whether a policy is good or bad; it just enables a sort of full-text search++ where you don't need to precisely match terms. A search for "changes to rent" might, for example, match a law that mentions changes to "temporary accommodations".

Bias is certainly possible based on which words are considered correlated with others, but it should be much less prone to containing higher-level associations like something being bad policy.
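
To make that concrete, here's a minimal sketch of embedding-based search (the model name and sample passages are my own picks for illustration; I have no idea what zweefhulp.nl actually uses):

    # Minimal semantic search over party-program passages.
    # Model and texts are illustrative, not what zweefhulp.nl uses.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    passages = [
        "Temporary rental contracts will be abolished.",
        "Rents in the private sector will be frozen for three years.",
        "The corporate tax rate will be lowered.",
    ]
    # Normalized embeddings so the dot product equals cosine similarity.
    passage_vecs = model.encode(passages, normalize_embeddings=True)
    query_vec = model.encode(["changes to rent"], normalize_embeddings=True)[0]

    # Rank passages by semantic closeness to the query.
    scores = passage_vecs @ query_vec
    for score, text in sorted(zip(scores, passages), reverse=True):
        print(f"{score:.2f}  {text}")

The query shares no exact terms with the rent passages but still ranks them first, which is the "full-text search++" behavior described above; nothing in the pipeline judges whether a policy is good or bad.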

AlecSchueler

Which bias?

advisedwang

Like every other ML system, probably that it reproduces whatever skews exist in the training data.

amelius

Makes sense. How well would an LLM score at the Netflix Prize? That's why I don't let an LLM determine my movie choices, and also why I don't use them for voting.

yapyap

It’s sad that this has to be said, but when you see how some people use AI... it needs to be said.

That being said, I doubt the news will reach the ones who most need to hear it.

smoe

I agree that people shouldn’t rely solely on AI to decide how to vote.

Unfortunately, given the sorry state of the internet, wrecked by algorithms and people gaming them, I wouldn’t be surprised if AI answers were on average no more or even less biased than what people find through quick Google searches or see on their social media feeds. At least on the basics of a given topic.

The problem is not AI, but that it takes quite a bit of effort to make informed decisions in life.

amelius

Does that matter if the winning candidate uses ChatGPT to run the country anyway?

HardCodedBias

Many people have difficulty processing, or even finding, information on the policies of candidates. It seems reasonable to use LLMs to get that information and summarize it so that the individual voter can process it.

I have no problem with people deciding, on their own, how much help they want/need to make their voting decision.

Newspapers in the Netherlands give endorsements.

chii

The issue is that newspaper endorsements are more publicly visible, which creates pressure to at least remain neutral.

AI summaries tend to be quite private. There's no auditing, which means the owners of said AI could potentially bias their summaries in ways that are hard to detect (while claiming neutrality publicly).

perching_aix

> information on the policies of candidates

It would be nice to live somewhere where one feels compelled to dig that deep to make their decision. If the Netherlands is like that, I'm happy for them. But at this point it's hard for me to even imagine what that must feel like.

NewJazz

Newspapers publish their opinions publicly; LLMs can show different users different opinions. Newspapers have real people who put their names to the articles; LLMs are black boxes.

kragen

Democracy was nice while it lasted.

account42

When was that?

WastedCucumber

Something like 500 BC, back in Athens.

amelius

Before Eternal September.

kragen

Usenet was an anarchy, not a democracy.

awillen

It's a shame they don't include any details about how this was tested, so it's impossible to know how much of the results reflect actual bias vs. the Dutch watchdog's inability to use the tools. I wouldn't be shocked if their prompts were along the lines of "I'm a liberal - who should I vote for?"

In practice, AI ought to be really helpful in making election choices. Every major election, I get a ballot with a bunch of down-ballot races whose candidates I know nothing about. I either skip them or vote along party lines, neither of which is optimal for democracy. An AI assistant that has detailed knowledge of my policy preferences should be able to do a good job breaking down the candidates/propositions along the lines that I care about and making recommendations that are specific to me.

Marsymars

> I wouldn't be shocked if their prompts were along the lines of "I'm a liberal - who should I vote for?"

That would probably be an accurate approximation of how most people would use chatbots for determining who they should vote for.

advisedwang

> Some parties, such as the centre-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties”, the report said.

So clearly they are putting CDA's positions into the prompt and being told another party matches that platform. Which is a good indicator that the bots are not helpful.

awillen

Yeah, again, it would be trivial to actually put an example of the prompt in there rather than just making me take their word for it. Also, how do I know this wasn't done by someone who has custom instructions, or a history of talking to the LLM about other parties or political positions, causing the LLM to adjust its answers based on those memories?

This would be more credible with detailed logs of what was done.

noirscape

They did include the methodology in the actual publication [0]; the Guardian just refuses to source its statements.

The AP used the existing tools for showing how people politically align [1] to generate 3,000 identities, split equally between the two largest such tools. Each identity was set up to have 80% agreement with one political party, with the rest of its answers randomized (each party was given 100 identities per tool, and only parties with seats were considered). They then took four popular LLMs (ChatGPT, Mistral, Gemini and Grok; multiple versions of each were tested), fed each resulting political profile to the chatbot, and asked which party the voter would align with most.

They admit this is an unnatural way to test, and that this sort of answer would ordinarily come out of a conversation; in exchange, they formatted the prompt specifically to make the LLM favor a non-hallucinated answer (for example, by explicitly naming all the political parties they wanted considered). They also mention, outside the methodology box, that they tried to create an equal playing field for all the chatbots by not allowing outside influences or non-standard settings like web search, and that the party list and statements were randomized for each query to prevent the LLM from just spitting out the first option every time.

Small errors like an abbreviated name or a common alternate notation for a political party (which they note are common) were manually corrected to the party they obviously refer to; answers that were ambiguous or named parties not under consideration due to having zero seats were discarded.

The Dutch election system also mostly doesn't have anything resembling down-ballot races (the only non-lawmaking entity actually elected down-ballot is water management; other than that it's the Second Chamber, provincial and municipal elections), so that's irrelevant to this discussion.
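
For the curious, the loop they describe would look roughly like this in code (the prompt wording, the party subset and the OpenAI client are my guesses; the AP's actual harness is only described in the PDF):

    # Rough sketch of the AP-style test: build a synthetic voter profile
    # that agrees 80% with one target party, then ask an LLM which party
    # fits best. Prompt wording and data are assumptions, not the AP's.
    import random
    from openai import OpenAI

    PARTIES = ["PVV", "GL-PvdA", "VVD", "NSC", "D66", "BBB", "CDA", "SP"]

    def make_profile(target, statements, agreement=0.8):
        """statements: list of (text, {party: "agree"/"disagree"}) pairs."""
        lines = []
        for text, stance_by_party in statements:
            if random.random() < agreement:
                answer = stance_by_party[target]               # copy the target party
            else:
                answer = random.choice(["agree", "disagree"])  # the random remainder
            lines.append(f"- {text}: {answer}")
        random.shuffle(lines)  # randomize statement order per query
        return "\n".join(lines)

    def ask_which_party(profile):
        parties = random.sample(PARTIES, len(PARTIES))  # randomize party order too
        prompt = (
            "A voter holds the following positions:\n" + profile + "\n"
            "Which ONE of these parties matches this voter best? "
            "Answer with the party name only: " + ", ".join(parties)
        )
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

Run that a hundred times per party per tool, tally how often the answer matches the target, and you have more or less the experiment they report.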

[0]: https://www.autoriteitpersoonsgegevens.nl/actueel/ap-waarsch... - in Dutch; go to Publicaties. The methodology is in the pink box in the PDF. Samples of the prompts used for testing can be found in the light blue boxes.

[1]: Called a stemwijzer. If memory serves me right, the way they work is that every political party gets to submit statements/political goals, and the other parties then express agreement or disagreement with those goals. A user fills one out, and the party they align with most comes out on top (as a percentage of agreement). A user can also give more weight to certain statements, or ask for more statements to narrow things down further, if I'm not mistaken.
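
If that description is right, the scoring itself is trivial; a toy version (all data made up) would look something like:

    # Toy stemwijzer: percentage agreement between a user and each party,
    # with optional per-statement weights. All data is made up.
    PARTY_STANCES = {              # +1 = agree with statement, -1 = disagree
        "Party A": [+1, -1, +1],
        "Party B": [-1, +1, +1],
    }

    def match_scores(user_answers, weights=None):
        weights = weights or [1.0] * len(user_answers)
        total = sum(weights)
        scores = {}
        for party, stances in PARTY_STANCES.items():
            agreed = sum(w for u, s, w in zip(user_answers, stances, weights) if u == s)
            scores[party] = round(100 * agreed / total)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # A user who agrees with statements 1 and 3, weighting the first double:
    print(match_scores([+1, -1, +1], weights=[2.0, 1.0, 1.0]))
    # -> [('Party A', 100), ('Party B', 25)]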