
Please Commit More Blatant Academic Fraud (2021)

ploynog

Double-blind review is a mirage that does not hold up. While I was in academia I reviewed a paper that turned out to be a blatant case of plagiarism. It was a clear Level 1 copy according to the IEEE plagiarism levels (Uncredited Verbatim Copy of more than 50% of a single paper). As my review, I submitted all of these findings along with the original paper and a breakdown of which parts were copied (essentially all of it).

A few days later I got an email from the author (some professor) who wanted to discuss this with me, claiming that the paper was written by some of his students who were not credited as authors. They were inexperienced, made a mistake, yaddah yaddah yaddah. I forwarded the mail to the editors and never heard about the case again. I don't expect that anything happened; the corrective actions for a Level 1 violation are pretty harsh and would have been hard to miss.

The fact that this person was able to obtain my name and contact info shattered any trust I had in the "blind" part of the double-blind review process.

The other two reviewers had recommended accepting the paper without revisions, by the way.

0_____0

This seems like an issue of administration rather than an issue with the idea of double-blind review. If you conduct a review that isn't properly blinded, and whose findings have no observable effect, can it really be called a double-blind review?

velcrovan

Maybe it's more that a non-idealistic model of the real world, and common direct experience, show that the incentives strongly favor an administrative approach that compromises the double blind.

0_____0

Unless there's a better way to do it, I think this shows a need for better structures for governance and auditing of review boards... Information and science care not for our human folly; it's up to us to seek and execute them properly.

rors

I remember attending ACL one year, where the conference organisers ran an experiment to test the effectiveness of double-blind review. They asked reviewers to identify the institution that submitted the anonymised paper. Roughly 50% of reviewers were able to correctly identify the institutions, and I think a double-digit percentage were able to identify the authors as well.

The organisers then made the argument that double blind was working because 50% of papers were not identified correctly! I was amazed that even with strong evidence that double blind was not working, the organisers were still able to convince themselves to continue with business as usual.

wtallis

You're saying "not working" when you only have presented evidence for "not perfect".

That experiment showed that even when asked to put effort into identifying the source of an anonymized paper—something that most reviewers probably don't put any conscious effort into normally—the anonymization was having a substantial effect compared to not anonymizing the papers.

Am I missing some obvious reason why double-blind reviews should only be attempted if the blinding can be achieved with a near-perfect success rate, or are you just setting the bar unreasonably high?

gopher_space

The subtext to this whole comment chain is that you need to have hands-on experience with qualitative to quantitative conversions if you want to reason about the scientific process.

> Am I missing some obvious reason why double-blind reviews should only be attempted if the blinding can be achieved with a near-perfect success rate, or are you just setting the bar unreasonably high?

OP thinks you are looking at either signal or noise, instead of determining where the signal begins for yourself.

smallmancontrov

When we criticize without proposing a fix or alternative, we promote the implicit alternative of tearing something down without fixing it. This is often much worse than letting the imperfect thing stand. So here's a proposal: do what we do in software.

No, really: we have the same problem in software. Software developers under high pressure to move tickets will often resort to the minor fraud of converting unfinished features into bugs by marking them complete when they are not in fact complete. This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same. Often it's more efficient in both cases to just ignore the problem, which will generally self-correct with time. If not, we have to think about intervention -- but in software this story has played out a thousand times in a thousand organizations, so we know what intervention looks like.

Acceptance testing. That's the solution. Nobody likes it. Companies don't like to pay for the extra workers and developers don't like the added bureaucracy. But it works. Maybe it's time for some fraction of grant money to go to replication, and for replication to play a bigger role in gating the prestige indicators.

null

[deleted]

ramblenode

> This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same.

I completely disagree.

For one, academic standards of publishing are not at all the same as the standards for in-house software development. In academia, a published result is typically regarded as a finished product, even if the result is not exhaustive. You cannot push a fix to the paper later; an entirely new paper has to be written and accepted. And this is for good reason: the paper represents a time-stamp of progress in the field that others can build off of. In the sciences, projects can range from 6 months to years, so a literature polluted with half-baked results is a big impediment to planning and resource allocation.

A better comparison for academic publishing would be a major collaborative open source project like the Linux kernel. Any change has to be thoroughly justified and vetted before it is merged because mistakes cause other people problems and wasted time/effort. Do whatever you like with your own hobbyist project, but if you plan for it to be adopted and integrated into the wider software ecosystem, your code quality needs to be higher and you need to have your interfaces spec'd out. That's the analogy for academic publishing.

The problems in modern academic publishing are almost entirely caused by the perverse incentives of measuring academic status by publication record (number of publications and impact factor). Lowering publishing standards so academics can play this game better is solving the wrong problem. Standards should be even higher.

buescher

Yeah, the alternative to a "double-blind" review that isn't actually double-blind is a real double-blind review.

The alternative to not enforcing existing rules against plagiarism is to enforce them.

The alternative to ignoring integrity issues, i.e. "minor fraud", in the workplace is to apply ordinary workplace discipline to them.

null

[deleted]

tensor

Seems to me that the review worked: you caught the plagiarism, even though the other two missed it. It's disturbing that somehow the paper author found your contact information though!

GuestFAUniverse

I am a co-author on a paper I never asked for, but my supervisor insisted, because the petty idea upgrading his desperate try to the point of "considerable at all" came from me. It was a chair which normally prided itself on only publishing in the most highly regarded journals of its field (internally graded A, B, C). They had a few years without a viable paper. Desperate to publish. From my POV this was a D. The paper is worthless crap, hastily put together within two weeks. It should have been obvious to the reviewers.

I feel ashamed that my name is on it. I wish I could retract it.

So, yes please: make it hard to impossible for paper mills and kill the whole publish or perish approach.

osrec

I can't make sense of your first sentence. Can you rephrase it please?

lqet

OP's supervisor wrote a paper without any merit. OP then provided a "petty idea" that made the paper "considerable at all". That's how he ended up as co-author.

GuestFAUniverse

Thanks, exactly as I wanted it to be understood.

autoexec

Maybe we need an "Alan Smithee" (https://en.wikipedia.org/wiki/Alan_Smithee) for research papers.

tovej

I have a similar experience. We had a truly terrible paper written as a collaboration with a team from the US on a software project, integrating their "novel" and "innovative" component. The component took 1 hour to compile, the architecture made no sense, and the US professor constantly talked about nothing but high-flying marketing concepts. I managed to hack together a demo using their component, fixing build bugs and design flaws (the ones I could do something about).

The proof of concept worked, but it wasn't doing anything new. We were just doing what we used to do, but now this terrible component was involved in it, making everything slower and more complicated.

Somehow that became a paper, and somehow this paper passed review without a single comment (my feeling is it's because of the professor's name recognition). I'm ashamed to have my name on that paper.

cgcrob

If it’s any consolation I split up with an ex partner after she wanted to put me as a co-author on a pseudoscience bullshit paper that she was working on to try and hit her quota. Her entire field, in the social sciences, is inventing a wild idea and using meta analysis to give it credibility. Then flying to conferences and submitting expenses.

I contributed nothing other than a statistical framework which was discarded when it broke their predefined conclusion.

anoncow

When research is just a means to an end...

I think if, as children, we are taught what earning a living means, people who only want to make ends meet would try to do it using other, less damaging methods. For example, sales and marketing are not bad places for such people. When it comes to research, people should know that perhaps the money will not be great.

It is because we aren't aware of the full picture as children that we follow our passions (or we follow cool passions), then realise that money is also important, and then resort to unethical means to get that money. Let's be transparent with children about hard fields so that when they enter such fields they know what they are getting into.

somenameforme

I suspect there are a lot of people who end up pursuing research because they enjoy college and learning, the idea of seeking out a job sounds rather less enjoyable, and more education will just equal more $$$ in said job anyhow, right?

In the past this wasn't an issue because university was seen as optional; now in most places it's ostensibly required to obtain a sufficiently well-paying job, and so much more of society ends up on a treadmill that they may not really want to hop off of.

acuozzo

> so that when they enter such fields they know what they are getting into

I don't think this would help. IMO, it's a money vs. effort thing. Yes, real research is hard, but if someone learns early on that the system can be easily gamed, then the required effort is relatively low.

Plus, there's the friction factor. Moving from undergrad to grad to post-grad to professor keeps you within an institution you know.

The game is this: get hired at a research university and pump out phony papers which look legit enough to not raise any suspicions until you get tenure. Wrap the phoniness of each paper in a shroud of plausible deniability. If anything comes out after you're tenured, then just deny and/or deflect any wrongdoing.

cgcrob

Yeah that's about right.

I think in some fields you walk into them with some kind of noble ideology, possibly driven by marketing, but then you find out it's all bullshit and by then you're n years into your educational investment. Your options are to shrug and join in or write everything off and walk away.

I don't blame people for taking advantage of it, but in some areas, particularly health-related ones, there are consequences to society beyond financial concerns.

archi42

I treat all social science degrees as "likely bullshit" these days. Could as well be astrology.

A few computer science friends of mine worked at a social science department during university. Their tasks included maintaining the computers, but also supporting the researchers with experiment design (if computers were involved) and statistical analysis. They got into trouble because they didn't want to use unsound or incorrect methods.

The general train of thought was not "does the data confirm my hypothesis?" but "how can I make my data confirm my hypothesis?" instead. Often experiments were biased to achieve the desired results.

As a result, this kind of scientific misconduct was business as usual, and the guys eventually quit.

BeetleB

Let me introduce you to theoretical condensed matter physics, where no one cares if the data confirms the hypothesis, because they are writing papers about topics that very likely can never be tested.

At least in the social sciences there is an expectation of having some data!

cess11

Sounds like economics.

Research fraud is common pretty much everywhere in academia, especially where there's money, i.e. adjacent to industry.

cgcrob

Glad to know they quit. That's exactly what I observed, except it was probably worse if I think back at it. I'm a mathematician "by trade" so I was sort of pulled into this by proxy, because they were out of their depth in a tangle of SPSS. Not that I wasn't, but at least I have a conceptual framework in which to do the analysis. I had no interest or knowledge of the field, but when you're with someone in it you have to toe the line a little bit.

Observations: Firstly, inventing a conclusion is a big problem. I'm not even talking about a hypothesis that needs to be tested, but a conclusion. A vague, ambiguous hypothesis which was likely true was invented to support the conclusion, and the relationship inverted. Then data was selected and fitted until there was a level of confidence where it was worth publishing. Secondly, they were using very subjective data collection methods carried out by extremely biased people, then mangling and interpolating the data to make it look like there was more observational data than there was. Thirdly, honest research goes unpublished, because it looks bad to say that the entire field is compromised with the conference coming up that everyone is really looking forward to and has already booked flights and hotels for.

If you want to read some of the hellish bullshit, look up critique of the Q methodology.

meindnoch

Luckily this made-up social science trash won't be used as evidence when shaping our policies, so it's pretty harmless! /s

comfysocks

To be fair, lobbyists will use phony science from any field to influence policy, not just the social sciences. Think of the tobacco industry.

schnable

The science is settled, bro.

saagarjha

As a co-author are you not able to do so?

psychoslave

In theory, $SYSTEM is the most excellent thing that humanity could ever hope for, and everyone knows that by acting in accordance with the stated expected behaviors, they will act in the best way they can think of to achieve the best result for everybody.

In practice, people see that $SYSTEM is rotten and most likely to doom everyone in the long run, with increasingly absurd actions accepted silently along the way. But they also have the firm conviction that not bending the knee, being brave, and saying out loud what's on everyone's mind will only put them on the fast track to playing the scapegoat while changing nothing else overall.

Think about it: over-reporting of grain production was a major factor in the Great Chinese Famine.

https://en.wikipedia.org/wiki/Great_Chinese_Famine

wadadadad

Thank you for providing the link for this - it's greatly interesting to see how such a failure could occur through human means and the significant impact it had, and how it can directly relate to academia (really, to many topics, anywhere there is a '$SYSTEM').

The cover-ups in the article were also interesting: a deliberate staging for Mao to prevent the truth from being uncovered. I'm not sure how directly this compares (is there a centralized authority with the power to fix the issue that is being lied to, compared to the decentralized "rotten" system, where the status quo is understood and 'accepted'?).

black_puppydog

Technically being able to isn't the same as your career surviving you actually going through with it.

ithkuil

Damned if you do and damned if you don't

null

[deleted]

proto-n

"For the first time, researchers reading conference proceedings will be forced to wonder: does this work truly merit my attention? Or is its publication simply the result of fraud? [...] But the mere possibility that any given paper was published through fraud forces people to engage more skeptically with all published work."

Well... spending a few weeks reproducing a shiny conference paper that simply doesn't work and is easily beaten by any classical baseline will do that to you in the first few months of your PhD imo. I've become so skeptical over the years that I assume almost all papers to be lies until proven otherwise.

"This surfaces the fundamental tension between good science and career progression buried deep at the heart of academia. Most researchers are to some extent “career researchers”, motivated by the power and prestige that rewards those who excel in the academic system, rather than idealistic pursuit of scientific truth."

For the first years of my PhD I simply refused to partake in the subtle kinds of fraud listed in the second paragraph of the post. As a result, I barely had any publications worth mentioning. Mostly papers shared with others, where I couldn't stop the paper from happening by the time I realized that there was too little substance for me to be comfortable with it.

As a result, my publication history looks sad and my career looks nothing like I wished it would.

Now, a few years later, I've become much better at research and can now get my papers to the point where I'm comfortable submitting them with a straight face. I've also come to terms with overselling something that does have substance, just not as much as I wish it had.

huijzer

> I've become so skeptical over the years that I assume almost all papers to be lies until proven otherwise.

I couldn't agree more. I have read a lot of psychology papers during my PhD and I think there is very little signal in them. Many empirical papers, for example, use basically the same "gold standard" analysis, which is fundamentally flawed in many ways. One problem is that if you used another statistical model, the conclusions would often be wildly different. Another is that the signal is often so weak that you can't use it to predict much (to be useful). If you try to select individuals, for example, the only thing you can tell is that one group is, on average, less neurotic; for any given individual, you have no better chance of picking the right one than picking at random. The point of a good paper is to take these sketchy analyses and write a beautiful story around them with convincing speculation. It sounds absurd, but take a random quantitative psychology paper and check what percentage of the claims made in the discussion are actually based on the data in the paper.
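To make the "weak signal" point concrete, here's a tiny sketch (my own illustrative numbers, not taken from any particular paper) of how little a typical group-level effect helps when you try to pick individuals. Assuming normal scores with equal variance and a hypothetical Cohen's d of 0.3, the chance that a randomly chosen person from the "better" group actually beats a randomly chosen person from the other group is only about 58%:

    import numpy as np
    from scipy.stats import norm

    d = 0.3  # hypothetical group-level effect size (Cohen's d)
    # Common-language effect size: probability that a random member of the
    # higher-scoring group outscores a random member of the other group,
    # for normal distributions with equal variance: Phi(d / sqrt(2)).
    p_correct_pick = norm.cdf(d / np.sqrt(2))
    print(round(p_correct_pick, 2))  # ~0.58, barely better than a coin flip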

But the worst part about this is that these problems have existed for literally decades. Nobody cares. The funding agencies grade people not on correctness but on the number of citations. As a result, you see many subcultures whose sole purpose is promoting the importance of their own subculture. It is quite common in academia to cite someone in the introduction just to "prove" that some idea is worth pursuing. But does it work? Doesn't matter. Just keep writing papers.

So I'm not saying that all research is bad. I'm saying that indeed most papers are not very useful or correct. Many researchers try, but the incentives are extremely crooked.

pbronez

How could this be corrected? The scientific community has lost a lot of credibility with the public, and the backlash is obvious in recent policy changes. Fast forward four years. Assume Trump and RFK jr have successfully destroyed the current system. What should replace it?

How could the Federal government ensure that public monies only fund high quality research? Could policy re-shape the incentives and unlock a healthy scientific sector?

iinnPP

Consequences with the current system would suffice.

Ignorance as a defense needs to go too. It is too powerful, and we should rebalance it towards hurting the supposedly ignorant rather than everyone else. Basically, a redefinition of wilful ignorance so that it's balanced as stated.

huijzer

I don’t know but these are exactly the right questions to ask!

ngriffiths

I had two experiences at polar opposite ends of the spectrum - one research team I worked on had very high standards and was comfortable being patient for material that had value. The other involved an approach that obviously stood no chance of being useful to anyone.

Some differences:

- The first one was in a space with more low hanging fruit

- The first one was after large effect sizes, not the kind where you can massage the statistical model

- The second one was a topic with far higher public interest

- The second one was primarily an analytic project, whereas the first one was primarily experimental

I feel like bad science lives in the middle of a spectrum - on one end you have young fields/subfields with boring but impressive experimental breakthroughs, and on the other end you have highly political questions that have been argued to death without resolution. Bad science is about borrowing some of the strategies used in politics because all the important experiments have already been done.

nis0s

> Proclaiming that your work is a “promising first step” in your introduction, despite being fully aware that nobody will ever build on it.

Science produces discrete units which can be used in different ways, if not in their exact form from the preceding research. I am not sure it's reasonable to say that existing ideas, even if not cited, are not inspirational (to the researchers themselves). Peer review isn't perfect, but I think that all accepted papers have something academically or scientifically relevant, even if there's no guarantee that the paper will generate hundreds of subsequent citations. I think improving your subsequent work is more important, which includes mentioning why you think some previous work may not be as relevant anymore. This last step is often missing from many research papers.

I think the author is right that it doesn’t quite make sense to publish anything you know isn’t quite correct. But I can think of several papers in different fields which someone may think are “not quite correct”, but the goal of such papers, I think, is to demonstrate the power of low probability scenarios, or edge cases. Edge cases are important because they break expected behavior, and are often the root cause of system fragility, system evolution, or poor generalization in other systems.

jimbokun

> Most researchers are to some extent “career researchers”, motivated by the power and prestige that rewards those who excel in the academic system, rather than idealistic pursuit of scientific truth.

This is the funny part. There is little to no power and prestige to be had in the academic system. To a first approximation no one outside academia cares.

I was just working as a staff programmer and taking grad courses with my tuition benefit, and found myself getting caught up in the mentality of needing a PhD to really be successful and valuable. Then I got a job in industry making far more money and realized how academia is a small self-contained world with status hierarchies irrelevant outside that small world.

BeetleB

In the social sciences, there is a lot of prestige to be had. Do some groundbreaking work, engage with the public via bestselling books, and then get invited by the president to work on policy.

Even if they don't work for the administration, there are plenty of other bodies that will value them and pay large sums of money (or let them have large influence).

Very common amongst economists, and more and more common amongst disciplines like psychology.

Even in technical fields, if you can manage to become a big name, you can do consulting work and get paid quite well.

> Then got a job in industry making far more money

This is not a healthy way to look at it.

The average mechanical engineer isn't making tons of money in industry. A ME professor at a top university likely makes more. A biology major with just a BS degree will make less than the average biology associate professor.

But more importantly, there's a simpler reason why money is a poor metric to measure: You can always get more money in finance or medicine than as a mechanical engineer. Does it make sense to denigrate a whole profession just because one can make more money elsewhere?

vonneumannstan

>This is the funny part. There is little to no power and prestige to be had in the academic system. To a first approximation no one outside academia cares.

They have power over their students and relative power over other professors. That's plenty enough incentive for most. There can also be fame and fortune for the most famous among them. See Francesca Gino, Dan Ariely, etc.

bjackman

> Submitting a paper to a conference because it’s got a decent shot at acceptance and you don’t want the time you spent on it go to waste, even though you’ve since realized that the core ideas aren’t quite correct.

I don't see a problem with this? If papers are the vehicle for conference entries, why should the fact that it's wrong stop authors from submitting? Conferences are for discussion. So go there and discuss it... "My paper says XYZ, but since I wrote it I realised ABC" - sounds like a good talk to me?

(Naivety check: I am not an academic)

sideshowb

Yes. As the saying goes, if we knew what we were doing it wouldn't be research. Finished papers often have flaws; if you try to write something perfect you may never finish it. They're called limitations, and you list them in the conclusions and suggest addressing them in future work.

(Experience check: I is one)

lgeorget

In fields other than computer science, that would be more the case, I think, because conferences are not given as much importance. Journals are what matter, and since those publications take more time and are usually more selective (for the well-known journals at least), you tend to have better science in them. Computer science has few journals, and the standard venue of publication is the conference.

light_hue_1

That's not what papers are for. But I can see how not being an academic would make you think that.

What you're describing are workshops with what we would call non-archival proceedings. Places where you write whatever you want and then talk about it.

Publications, conference or journal, are supposed to be what are called archival. They are a record of what we've discovered and want to share with the world. They are supposed to be sent into the world after we carefully complete a line of work.

Publications are not supposed to spam the system with half-baked junk. Sadly, that's what a lot of people are doing these days.

foldr

Some fields do have the opposite problem, though, where standards for publication are so high that they prevent publication of useful ideas or results that could be built on by other researchers. I don’t think a published paper should have to meet some kind of gold standard of completeness and correctness. It just has to report something new or interesting with any appropriate caveats attached.

genewitch

I don't know. I'm sure Monsanto put out[0] a lot of papers about how effective glyphosate is, but another team decided to test glyphosate against the "inert" ingredients in Roundup and found glyphosate was actually the weakest pesticide of the group.

Now, my pet theory is that they knew glyphosate wasn't that great, but talked it up in papers as a sacrificial anode sort of thing "gee shucks it looks like glyphosate based pesticides are harmful to humans (or bees, or fish, or) so we'll stop manufacturing that formulation."

But, possibly due to academia, they have fanboys and cheerleaders, and I think that's why it's still around and in heavy use even though we're not sure it's a good idea.

[0] Bayer Monsanto funds studies at agricultural universities.

P. S. Just watch.

michaelt

> And we must ensure that explicit suggestions to modify one’s science in the service of one’s career – “you need to do X to be published”, “you need to publish Y to graduate”, “you need to avoid criticizing Z to get hired” – carry social penalties as severe as a suggestion of plagiarism or fraud.

One of the pernicious things in this area is that, even as we teach young researchers how to avoid making mistakes and engage sceptically with the work of others and that scientific fraud is a nontrivial issue, we also tell them how to commit fraud themselves and that their competition is doing it.

"Watch out for P-hacking, that's where the researcher uses a form of analysis that has a small chance of a false positive, and analyses loads of subsets of your dataset until a false positive arises and just publishes that one"

"Watch out for over-fitting to benchmarks, like a car taking the speed crown by sacrificing the ability to corner"

"Watch out for incomplete descriptions of test setups, like testing on a 'continent-scale map' but not mentioning how detailed a map it was"

"Watch out for citations where the cited paper doesn't say what is claimed, some people will copy-and-paste citations without reading the source paper"

"Watch out for papers using complicated notation, fancy equations and jargon to make you feel this looks like a 'proper' paper"

"Watch out for deceptive choice of accurate numbers, like a study with a 25% completion rate including the drop-outs in the number of participants"

"Watch out for simulations with inaccurate noise models, if the noise is gaussian in the simulation but a random walk in reality, great simulated results won't transfer to reality"

I've made no suggestion at all that you should modify your science or commit fraud - but I've also just trained you in how to do it.
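To make the first "watch out" above concrete, here is roughly what the P-hacking recipe looks like in practice - a toy sketch with made-up variable names, not anyone's actual analysis. The outcome is pure noise, yet slicing the data into enough subgroups all but guarantees a "significant" result to publish:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200
    outcome = rng.normal(size=n)           # no real effect anywhere
    treated = rng.integers(0, 2, size=n)   # "treatment" assigned at random
    subgroups = {name: rng.integers(0, 2, size=n)
                 for name in ["male", "over_40", "urban", "smoker", "left_handed"]}

    hits = []
    for name, flag in subgroups.items():
        for value in (0, 1):               # slice the data every way we can
            mask = flag == value
            p = stats.ttest_ind(outcome[(treated == 1) & mask],
                                outcome[(treated == 0) & mask]).pvalue
            if p < 0.05:
                hits.append((name, value, round(p, 3)))

    # Ten tests of pure noise: the chance of at least one p < 0.05 is about
    # 1 - 0.95**10, roughly 40%. Reporting every test (or correcting for
    # multiple comparisons) is fine; publishing only the lucky subset is the fraud.
    print(hits)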

proto-n

It's really not that hard to come up with ways to commit fraud if you want to. On the other hand, it's very easy to make such mistakes if you don't know to avoid them. This characterization is very misguided IMO.

michaelt

Ah, perhaps I wasn't clear about what I'm trying to say. I don't think we should stop training researchers in common mistakes and fraudulent methods to watch out for.

I'm just saying: I don't believe anyone actually tells budding researchers that they should commit fraud. Instead I think the process is probably more like this:

Year 1: Statistics/research training. Here are a load of subtle mistakes to watch out for and avoid. Scientific fraud happens sometimes. Don't do it, it's very dishonest.

Year 2: Starting research. Gee a lot of these papers I'm reading are hard to reproduce, or unclear. Maybe fraud is widespread - or maybe they're just smarter or better equipped than me.

Year 3: "You really ought to have published some papers by now, the average student in your position has 3 papers. If you don't want to flunk out you really need to start showing some progress"

proto-n

I still disagree. It's more like "omg, I should have published at least a few papers by now, what am I doing", and then you start frantically looking for provable things in the dataset. You find one that you can also support with a nice story. Now either a) you were not taught how or why this is wrong, and you publish the paper, or b) you were, and you know that you should collect a separate dataset to test the hypothesis. But also, there is a huge existential pressure to just close your eyes and roll with it.

It's not that you need to be taught how to cheat, it's that you need to be taught how to avoid unintentionally cheating.

lqet

> * A group of colluding authors writes and submits papers to the conference.

> * The colluders share, amongst themselves, the titles of each other's papers, violating the tenet of blind reviewing and creating a significant undisclosed conflict of interest.

> * The colluders hide conflicts of interest, then bid to review these papers, sometimes from duplicate accounts, in an attempt to be assigned to these papers as reviewers.

Is it that common that conference reviewers also submit papers to the conference? Wouldn't that alone already be a conflict of interest? (After all, you then have an interest in dismissing as many papers as possible to increase the likelihood of your own paper being accepted). And how do you create "duplicate accounts"? The conferences I have submitted to, and reviewed for, all had an invitation-like process for potential reviewers.

michaelt

Many bodies that fund academic work will happily pay for you to fly to a conference and stay at a hotel if you're presenting a paper at the conference - but they'll be a lot less willing if you aren't presenting anything. So a decent % of attendees will be presenting papers.

And finding reviewers who know their stuff, who'll work for free, and who'll review thoroughly in a short timescale isn't easy.

proto-n

Not only is it common, it's become a requirement to review papers if you submit one yourself. Yeah, it's not ideal for multiple reasons (what you said + prompt-engineering grad students dismissing proper papers without having the slightest idea about the field), but the number of submissions is so incredibly huge that it's impossible to do it any other way.

lqet

Then I guess I should be grateful that my academic niche is so small.

twic

Even if reviewers weren't allowed to be submitters, if there is more than one conference, or the conference runs for more than one year, the same mechanism can be used.

nicwilson

Hmmm, I wonder if you could turn this into a sport and have like one paper per year per group of total BS, and shame on the reviewers/conference/journal if they don't catch it, and kudos to the submitters the more blatant it is.

Come to think of it, is there a "Journal of Academic Fraud"?

Peteragain

I agree with the analysis completely, but the solution is depressing. I keep thinking that publications on arxiv might be a better source of knowledge given the motivation for publishing there is not a contribution to career progression. Keyword search over arxiv papers? But perhaps we should bring back the idea of anonymous publishing:-0

jarbus

I don't have top-tier publications, and I haven't gotten any awards. I've seen people get awards for bullshit and farming prestigious publications. I only have one citation for work that is 100% mine. But every time I read something like this, I feel proud that I don't bullshit, and at least try to do real science. I truly believe in all my work and every sentence I write.

That being said, the insane emphasis on venue is what's pushing me out of academia. I can't compete with people like this.

tmaly

I am just thinking about what will happen when these papers are used to train LLMs