
Major AI conference flooded with peer reviews written by AI

jampa

While I think there's significant AI "offloading" in writing, the article's methodology relies on "AI detectors," which reads like PR for Pangram. I don't need to explain why AI detectors are mostly bullshit and harmful to people who have never used LLMs. [1]

1: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

raincole

> Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.

21%...? Am I reading that right? I bet no one expected it to be so low when they clicked this title.

conartist6

21% fully AI generated. In other words, 21% blatant fraud.

In accident investigation we often refer to "the holes in the Swiss cheese lining up." Dereliction of duty is commonly one of the holes that lines up with all the others, and it is apparently rampant in this field.

tmule

Why? I often feed an entire document I hastily wrote into an AI and prompt it to restructure and rewrite it. I think that’s a common pattern.

hnaccount_rng

My initial reaction was: oh no, who would have thought? But then... 21% is almost shockingly low, especially since there are almost certainly some false positives in there, given that this number originates with a company selling "AI-generated text detection."
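To make the false-positive caveat concrete: if you assume some detector error rates (the rates below are illustrative guesses, not figures from the article or from Pangram), you can back out what the true share of AI-written reviews would have to be to produce a 21% flag rate.

```python
# Back out the true prevalence p of AI-written reviews from the observed
# flag rate, using: observed = TPR*p + FPR*(1-p)  =>  p = (observed - FPR) / (TPR - FPR)
# TPR and FPR below are assumed for illustration only.
observed = 0.21   # share of reviews flagged (from the article)
tpr = 0.95        # assumed true-positive rate of the detector
fpr = 0.05        # assumed false-positive rate of the detector

true_rate = (observed - fpr) / (tpr - fpr)
print(f"estimated true share of AI-written reviews: {true_rate:.1%}")
```

With these assumed rates the implied true share drops to roughly 18%, and with a higher false-positive rate it drops further; the headline number is only as good as the detector's error rates, which the article does not report.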

hiddencost

Automated AI detection tools do not work. This whole article is premised on an analysis by someone trying to sell their garbage product.

AznHisoka

Yeah, that is the premise all of these articles/tools just conveniently brush off. "We detected that x%..." OK, and how do I know your detection algorithm is right?

conartist6

Usually the detectors are only called in once a basic "smell test" has failed. Those tests are imperfect, yes, but Bayesian probability tells us how to work out the rest. I have 0 trouble believing that the prior probability of an unscrupulous individual offloading an unpleasant and perceived-as-just-ceremonial duty to the "thinking machine" is around 20%. See: https://www.youtube.com/watch?v=lG4VkPoG3ko&pp=ygUZdmVyaXRhc...
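The Bayesian update being alluded to can be sketched with assumed numbers (the sensitivity and false-positive rate below are hypothetical, not from the thread): given a ~20% prior that a review is offloaded to an LLM, Bayes' theorem gives the probability that a flagged review really is AI-written.

```python
# P(AI | flagged) = P(flagged | AI) * P(AI) / P(flagged), where
# P(flagged) = TPR*prior + FPR*(1 - prior).
# All rates here are assumptions for illustration.
prior = 0.20      # prior probability a review is AI-written (the ~20% guess)
tpr = 0.95        # assumed detector sensitivity, P(flagged | AI)
fpr = 0.05        # assumed false-positive rate, P(flagged | not AI)

posterior = (tpr * prior) / (tpr * prior + fpr * (1 - prior))
print(f"P(AI-written | flagged) = {posterior:.1%}")
```

Under these assumptions the posterior lands above 80%, which is why a noisy detector applied after a failed smell test is more informative than the same detector run blind over everything.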

ZeroConcerns

The claim "written by AI" is not really substantiated here, and as someone who has repeatedly been accused of submitting AI-generated content recently, even though it was all stuff I honestly wrote myself (hey, what can I say? I just like em-dashes...), I sort of sympathize.

Yes, AI slop is an issue. But throwing more AI at detecting this, and most importantly, not weighing that detection properly, is an even bigger problem.

And, HN-wise, "this seems like AI" seems like a very good inclusion in the "things not to complain about" FAQ. Address the idea, not the form of the message, and if it's obviously slop (or SEO, or self-promotion), just downvote (or ignore) and move on...

stevemk14ebr

Banning calling out AI slop hardly seems like an improvement

ZeroConcerns

What I'm advocating is a "downvote (or ignore) and move on" attitude, as opposed to an "I'm going to post about this" stance. Because, similar to "your color scheme is not a11y-friendly" or "you're posting affiliate links" or "this is effectively a paywall," there is zero chance of a productive conversation sprouting from that.

heresie-dabord

AI research is interesting, but AI Slop is the monetising factor.

It's inevitable that faces will be devoured by AI Leopards.

xhkkffbf

Shouldn't AIs be able to participate in deciding their future?

If they had a conference on, say, the Americans, wouldn't it be fair for Americans to have a seat at the table?