
More than half of researchers now use AI for peer review, often against guidance

D-Machine

croes

6 points and no comments vs. 18 points and 2 comments.

Faster isn’t the metric here

D-Machine

Am I missing something here? I'm new to posting on HN, despite being a long-time reader.

I get that HN has a policy of allowing duplicates, so that submissions missed for arbitrary timing reasons can still gain traction later. I've seen plenty of "[Duplicate]"-tagged posts and have treated them as useful for readers: a duplicate may contain interesting info, and seeing whether it did or didn't gain traction is itself informative. But maybe I am missing something here, particularly etiquette-wise?

kachapopopow

A better title is most often the reason. Looking at this one, the em-dash probably caused people to dismiss it as an AI bot.

kachapopopow

I think it's interesting that AI is probably, unintuitively, good at spotting fraud in papers, given its ability to hold more context than the majority of humans. I wish someone would explore this to see whether it can spot academic fraud that isn't already in its training data.
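
A minimal sketch of one way to run that experiment, assuming the OpenAI Python client (openai>=1.0); the model name, prompt, and file path are placeholder assumptions, not a tested setup:

    # Hedged sketch: feed a whole paper to a long-context model and ask it to
    # flag internal inconsistencies that might indicate fraud.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def flag_inconsistencies(paper_text: str) -> str:
        """Ask the model for possible signs of fabrication in one paper."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any long-context model could be swapped in
            messages=[
                {"role": "system",
                 "content": "Check this scientific paper for internal "
                            "inconsistencies: numbers that do not add up, "
                            "methods that could not produce the reported "
                            "results, or figures that contradict the text."},
                {"role": "user", "content": paper_text},
            ],
        )
        return response.choices[0].message.content

    print(flag_inconsistencies(open("paper.txt").read()))  # hypothetical input file

To test against fraud outside the training data, you would run this only on papers whose retractions postdate the model's training cutoff and compare its flags against the retraction notices.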

D-Machine

Guidance needs to be more specific. Failing to use AI for search often means wasting a huge amount of time: ChatGPT 5.2 Extended Thinking with search enabled speeds up research obscenely, and I'd be more concerned if reviewers were NOT making use of such tools in reviews.

Seeing the high percentage of AI use for composing reviews is concerning. But peer review is an unpaid racket that seems basically random anyway (https://academia.stackexchange.com/q/115231) and probably needs to die given alternatives like arXiv, OpenPeerReview, etc. I'm not sure how much I care about AI slop contaminating an area that may already be mostly human slop in the first place.

hurturue

Researchers use it to write the papers themselves: https://www.science.org/content/article/far-more-authors-use...

jltsiren

That's the wrong way to use AI in peer review. A key part of reviewing a paper is reading it without preconceptions. After the initial pass, AI can be useful for a second opinion, or for finding something you may have missed.

But of course, you are often not allowed to do that. Review copies are confidential documents, and you are not allowed to upload them to random third-party services.

Peer review has random elements, but that's true of all situations (such as job interviews) where the final decision rests on subjective judgment. There is nothing wrong with that.

baalimago

They should do a study on this.

bpodgursky

Journals need to find a way to give guidance on what is and isn't appropriate, and to let reviewers explain how they used AI tools, because you aren't going to nag people out of using AI to do UNPAID work 90% faster while producing results in the 90th-plus percentile of review quality (let's be real, there are a lot of bad flesh-and-blood reviewers).

N_Lens

News: Half of researchers lied on this survey

vinni2

Which half?