
It's Not the Incentives (2018)


53 comments

April 28, 2025

jgeada

It is the incentives.

Maybe you're moral and keep to the straight and narrow. However, the system hires all types and the ones that just follow the incentives will do better. They get promoted, have more power, hire more people like them. Eventually the moral types will just be the exception and no longer affect the average.

Incentives exist because they change the behavior of the whole; they work as intended. Just that what is intended isn't always desirable or even a good idea.

Edman274

People try to maximize the good and minimize the bad consequences of their actions. They might not do it using utils or with actual quantification, but they are doing it. And definitionally, there's no way to get rid of an incentive to defect, because getting rid of it creates a new incentive to defect in a different way. For the purposes of talking about this article, "incentive" could be shorthand for "any reason you could come up with to do something wrong to get ahead," but it could also more broadly be defined as "the expected good results of a choice."

As an example, as long as money is important in society, there is always going to be an "incentive" to rob a bank. That can't be removed. What we can do is make it harder to rob a bank, and impose reputational damage and jail on thieves. Creating a society where money doesn't matter might be possible, but then there'd be no bank.

By the same token, there will always be an incentive to fake data. We can make it harder to fake data and impose reputational damage on people who fake it, but the incentive to fake will still exist. The only way it wouldn't exist would be if we made it so the outcomes of research didn't matter at all, but it's hard to imagine a functioning society where any research happened if no outcomes mattered. If that were the case, then high school dropouts would try to get research grants for baking-soda-and-vinegar volcanoes. The only way to prevent that would be a system where people have to justify their research without caring about the results, but then you've reintroduced "incentives", just different ones that can still be cheated.

Arguing that it's people's moral character that's the problem, and not mere incentives, reintroduces one key disincentive: the reputational damage I alluded to earlier. Most people don't rob banks not because there's no incentive, but because the disincentive (jail, reputational damage) is so high as to make that course of action seem stupid. But if you argue that it's incentives and not moral character to blame, you remove the disincentive of making defectors suffer reputational damage. You can't remove an incentive entirely. You can only change incentives and add disincentives. Reputational harm is one of those disincentives, and so are things like pre-registering experiments, open-access journals, etc.

treetalker

I agree with you. It's the law of large numbers; individuals make free choices, yet in the aggregate the incentives make the likelihood of decisions fairly predictable. This is the essence of "nudging" and choice architecture (and dark patterns, etc.).
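The "free individually, predictable in aggregate" point can be sketched numerically. This is an illustrative toy model, not anything from the thread; the 30% defection probability and the population sizes are made-up parameters:

```python
import random

def defection_rate(population, p_defect, seed=0):
    """Simulate `population` independent free choices, each defecting
    with probability `p_defect`, and return the observed rate."""
    rng = random.Random(seed)
    choices = [rng.random() < p_defect for _ in range(population)]
    return sum(choices) / population

# Any one individual's choice is unpredictable, and small groups vary
# a lot, but the aggregate rate converges on the incentive-driven
# probability -- the law of large numbers at work.
print(defection_rate(10, 0.3, seed=1))       # noisy
print(defection_rate(100_000, 0.3, seed=1))  # very close to 0.3
```

This is why nudging works on populations even though it guarantees nothing about any given person: shifting `p_defect` slightly shifts the aggregate outcome almost deterministically.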

glitchc

I suggest reading The Selfish Gene. A system where only the selfish can thrive is shown to collapse with certainty, no matter how many times one runs the experiment.
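The book's point about all-selfish populations can be illustrated with a toy iterated prisoner's dilemma. This is a conventional textbook setup (the payoff values and strategies below are standard illustrations, not taken from The Selfish Gene itself):

```python
# Toy iterated prisoner's dilemma: a population of pure defectors
# earns far less than reciprocators, even though defection beats
# cooperation in any single encounter.
PAYOFF = {  # (my move, their move) -> my payoff; standard values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs for two strategies over repeated rounds."""
    score_a = score_b = 0
    last_a = last_b = "C"  # tit-for-tat opens by cooperating
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

always_defect = lambda opp_last: "D"
tit_for_tat = lambda opp_last: opp_last  # copy opponent's last move

# Two defectors grind out the punishment payoff every round...
print(play(always_defect, always_defect))  # (100, 100)
# ...while two reciprocators sustain cooperation and do far better.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
```

In a population where everyone defects, everyone is stuck at the worst sustained outcome, which is the collapse the parent comment describes.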

asdsadasdasd123

This idea that we can just set up a good system with good incentives falls flat on its face before we even need to consider its merits, because there are always snakes involved in the development of these systems. The most important thing for societal survival is moral character AND THEN the system itself.

Jensson

So your argument is that capitalism is not about selfishness but about mutually beneficial trades, as otherwise it would have collapsed by now? If you mean it is just a matter of time, every system will collapse sooner or later, so that doesn't say much.

bigstrat2003

No, it's people being unwilling to accept agency and own up to their faults. If we're evaluating a system of rules and how to improve them, trying to understand perverse incentives is important. But it is not an excuse for any individual who violates moral standards to do so because they had an incentive.

derektank

The problem is that much perverse behavior is not inherently immoral. Not all forms of p-hacking are unjustifiable, nor are attempts to increase citations through publication fragmentation, nor is appealing to grant providers with research that you consider duplicative. But all of these things make science worse.


masfuerte

Shame, or its avoidance, is an incentive.

When researchers become shameless cheats, science suffers.

The depressing conclusion is that we need to change the incentives to work in a world without shame. This may work to some extent, but the result will certainly be worse than a world in which most researchers try to do the right thing.

beepbopboopp

It's funny: the whole presenting oneself as an objective beacon of morality is a response to an incentive system in itself. For whatever reason (parents, school), the author was convinced that this type of morality would yield a better life, or peace of mind, or some optimal outcome.

codetrotter

> convinced that this type of morality would yield a better life, or peace of mind, or some optimal outcome

Yeah. For example a society where people work together for the benefit of all, instead of having some people exploit the others.

leereeves

It's interesting that this concept of ethics evolved at all.

"Nature is red in tooth and claw." It's a brutal, heartless competition for resources and survival.

And yet through the vicious process of evolution, we developed empathy and a sense of fairness. Those instincts must have some cold, rational benefit for the survival of the group.

bitmasher9

In systems that quickly filter out most people (like Academia) the incentive structure is even more selective.

So maybe it’s how limited the resources are (limited grants, limited tenure positions) more so than the incentives.

bsder

And, even moreso for the short term.

If the scientist in question doesn't get found out for 30 years but then becomes a pariah, it doesn't matter. They displaced a better scientist for their entire career. There is no retroactively fixing that.

There is a huge incentive to cough up something that will make you "famous". If it doesn't make you famous, well, you can bury it and simply be a pedestrian scientist--no harm, no foul. If it does make you "famous", well, you might make it out the other side without anybody being able to pin anything decisive on you. And, if you get caught and become a pariah in 10 years, well, you likely earned way more than you would have in 30 years anyway.

Lying, in this case, almost always comes out ahead.

kelseyfrog

It's the incentives, or more accurately it's the fact that we can't change incentives because when that conversation starts, a hundred people come out of the walls and say, "But changing the incentives will make it worse!", "Don't you see this edge case that someone will exploit?", and "You can't complain about fixing the incentives unless you have an air-tight plan that has no flaws!"

It demoralizes everyone until we all just give up and continue to blame the incentives. The incentives will continue until the incentive to change grows stronger than the incentive to keep things as they are, and for the folks with the opinions above, the incentive to change will have to grow very large indeed.

rickdicker

How do we fix that? Perhaps it can't be done democratically, and you just need someone to come into a high position of power who is willing to be dramatic and disruptive?

justin_oaks

Generally it isn't done because the people in power benefit from the status quo.

To fix it, you need collective action. And now you have a collective action problem: https://en.wikipedia.org/wiki/Collective_action_problem

lelandbatey

One way to address it: change very slowly. You can strangle the bad incentives by reducing their perceived utility by introducing alternatives. The bad incentive exists because it's trying to do something. Ideally, you set up a companion incentive that gets the system a little more in the direction everyone wants, you systemically allow folks to effectively "choose their incentive", and eventually you phase out the old incentive. Problems with this are: it's glacial, and it doesn't solve everything instantaneously. Folks don't like either of those things, so you probably need to use other tools of political change to make those things more palatable.

In general, it's a lot of work, it's all in the details, and it takes forever. So lots of folks will fall back on the "why not just give the small dictator(s) all the power?"

rickdicker

Thanks for the thoughtful reply. It must be sad for the people who dedicate their lives to slow-change to see everything get steamrolled by a quick-fix-salesman dictator. Do you have faith in this kind of approach or do you think it's more of a "better than nothing" longshot? Maybe it's the kind of thing where it can't really be done for big, politically "hot" issues, but for other, niche problems that are less visible to the news-watching layman, it's still an effective way of making change?

paulorlando

From Munger in The Psychology of Human Misjudgment: "And human nature, with its version of what I call incentive-caused bias, causes this terrible abuse. And many of the people who are doing it you would be glad to have married into your family compared to what you’re otherwise going to get. "Now there are huge implications from the fact that the human mind is put together this way, and that is that people who create things like cash registers, which make most behavior hard, are some of the effective saints of our civilization. And the cash register was a great moral instrument when it was created. And Patterson knew that, by the way. He had a little store, and the people were stealing him blind and never made any money, and people sold him a couple of cash registers and it went to profit immediately."

huijzer

Yes, exactly. Charlie has been saying again and again not to underestimate the power of incentives. I'm a big fan of Tal Yarkoni, but I think this article is probably not correct.

bitmasher9

> You’re (probably) not going to “change things from the inside”

I’ve had this exact conversation with so many students across so many industries. A good chunk of them end up as middle managers keeping their heads down, never feeling empowered enough to make the changes they want to see. Those who end up actually making impactful changes were disrupting from the beginning.

kubb

This talks mainly about research.

But if you work at a megacorp, like Facebook, and lobotomize people for money, is it your fault, and are you a bad person for doing it?

tejohnso

The "I'm just doing my job" excuse goes a long way with people. Every once in a while you'll hear about someone who just couldn't stomach it anymore. Working in megacorp finance for example. Or insurance, where there are plenty of unkind incentives. Doesn't happen very often though. For the most part people have a job to do and they do it.

Centigonal

yes, and maybe? I think it's important to hold the tension that you are doing something that is both good and harmful (like both building a really nice chat app you give away for free and using the network effects of that app to collect everyone's data and influence them, or protecting people from fraud but also improving the bottom line of Wall Street banks that are contributing to the increasing financialization of everything).

I think acknowledging incentive structures is alright, but ending the process there and not leaning into understanding and manipulating them is the real issue.

__MatrixMan__

Yes and yes.

JohnFen

Very well said. It's not just scientists; I've seen the same thing happen across the board, including here on HN.

Yes, incentives exist that can make doing the wrong thing easy and even personally advantageous. That doesn't excuse or forgive doing the wrong thing -- that thing is still wrong and people doing it are responsible for their own behavior regardless of "incentives".

smallmancontrov

Blame the system not the person => person skates => bad but not catastrophic.

Blame the person not the system => system skates => catastrophic.

If you want to punish wrongdoers on your own time, great, we have no quarrel. Ideally we would blame both, but 80% of the time when someone is advocating blaming the person, I find that they have a conflict of interest and secretly want to preserve the system. "Small town morality" sounds good but does not scale, and this combination of facts is easily exploited to divert attention away from important system maintenance. There was a time when I felt obliged to extend the benefit of the doubt on this matter, but after having said benefit exploited very intentionally on two different occasions, I now consider it a bad policy. So: first we worry about fixing the system. That is not negotiable.


skybrian

I think the article is about a different scenario:

Blame the system => person skates and the system doesn’t change

That is, blaming the system often isn’t about changing it.

You need the power to actually fix things and a plan to fix them.

ttoinou

    A random bystander who happened to eavesdrop on a conversation between a group of scientists kvetching about The Incentives could be forgiven for thinking that maybe, just maybe, a bunch of very industrious people who generally pride themselves on their creativity, persistence, and intelligence could find some way to work around, or through, the problem.

That's exactly what they did. They understood the incentives better than others and stayed away from The Officially Approved Science. Reducing talent quality is a good way to help solve this problem: your system sucks = smart, useful people go somewhere else.

rgyams

I totally understand the frustration with the common excuse of: oh it's the incentives, but I believe it oversimplifies the issue. While it's true that perverse incentives exist, we still have the agency to choose how we respond to them. If we can't resist cutting corners in small ways, we shouldn't be surprised when others do so in far more damaging ways.

__MatrixMan__

Yes, let's all take this to heart. But once we get to the corollary:

> It's not the incentives, it's them

What are we to do? Shame them, sure, but until a critical mass of us is prepared to interfere, to inject ourselves without consent into the business of others, then it comes back to fixing the incentives.

justin_oaks

Avoiding shame is an incentive.

It boils down to: "Why are people violating these unenforced rules? Sure it benefits them, but don't they feel bad?"

atian

> Show me the incentive, I’ll show you the outcome.

readthenotes1

I enjoyed reading the book "Punished by Rewards", which warned that when we are rewarded for doing an activity, we come to confuse the reward with the activity itself.

Think about sports. People play to win, when we really should be playing to play.

It may be "me", but I have to remind my self to play as well as I can and not think about win/loss

parrit

If you don't follow the incentives are you out of a career?

And if so, what is the pragmatic thing to do to change things. Because getting a job in tech instead just means you swap to another set of bad incentives.

I wonder if unionizing would help?

Otherwise how does the idealist eat?

taylorallred

Question from genuine ignorance: are scientists not incentivized to do good work even if it doesn't result in cool/useful/desired outcomes? I would think that even "the data from the experiment was inconclusive" should still be considered a valid contribution to the field as long as it was done correctly.

sashank_1509

Yes, to a gross approximation, scientists cannot and do not advance their career by giving a negative result. That is just how it is, anyone who’s tried being a scientist for a day realizes this.

In ML, for example, if you try some weird idea and it does not beat baseline methods on any specific benchmark, then 9.99 times out of 10 your paper will not get accepted. In fact, I don't think I've ever seen a negative-result paper get accepted into a good ML conference. At the very least, the authors make up their own benchmark and claim their method is best on it; the reviewers then quibble about whether such a benchmark is relevant, and after some back and forth they come to a decision on whether to accept the paper.