"A computer can never be held accountable"
304 comments
February 3, 2025

_Algernon_
The other side of this coin is that there is an incentive for decision makers to use computers, precisely to not be held accountable. This is captured pretty well by this quote by Neil Postman in Technopoly:
>[B]ureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear. We cannot dismiss the possibility that, if Adolf Eichmann had been able to say that it was not he but a battery of computers that directed the Jews to the appropriate crematoria, he might never have been asked to answer for his actions.
How does one counteract that self-serving incentive? Doesn't seem like we've found a good way considering we seem to be spearheading straight into techno-feudalism.
TeMPOraL
> How does one counteract that self-serving incentive? Doesn't seem like we've found a good way considering we seem to be spearheading straight into techno-feudalism.
The EU is trying right now. Discussed over here:
https://news.ycombinator.com/item?id=42916849
See: https://artificialintelligenceact.eu/chapter/1/ and https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
AI systems are defined[0] in the Act in such a way as to capture the kind of "hands-off" decision-making systems where everyone involved could plead ignorance and put the blame on the system working in mysterious ways; it then proceeds to straight up ban a whole class of such systems, and classifies some of the rest as "high risk", to be subject to extra limitations and oversight.
This is nowhere near 100% of a solution, but at least in these limited areas it sets the right tone: it's unacceptable to have automated systems observing and passing judgement on people based on mysterious criteria that "emerged" in training. Whatever automation is permitted in these contexts is basically forced to be straightforward enough that you can trace back from the system's recommendation to the specific rules that were executed, and to the people who put them in.
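To make that traceability point concrete, here's a minimal sketch (hypothetical rule names, authors, and fields - nothing from the Act itself) of what such a forced-to-be-straightforward system looks like: every recommendation carries the list of explicit rules that fired, each with a named author, so a decision can be traced back to the people who wrote it.

    # Minimal sketch of a traceable, rule-based decision system.
    # All rule names, fields, and authors below are hypothetical.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        author: str                      # the person accountable for this rule
        applies: Callable[[dict], bool]  # does the rule fire for this case?
        verdict: str                     # what it recommends when it fires

    RULES = [
        Rule("income_below_threshold", "j.doe",
             lambda c: c["income"] < 20_000, "approve_subsidy"),
        Rule("missing_tax_records", "a.smith",
             lambda c: not c["tax_records"], "reject"),
    ]

    def decide(case: dict) -> tuple[str, list[Rule]]:
        """Return a verdict plus the audit trail of rules that fired."""
        fired = [r for r in RULES if r.applies(case)]
        verdict = fired[0].verdict if fired else "refer_to_human"
        return verdict, fired

    verdict, trail = decide({"income": 15_000, "tax_records": True})
    for rule in trail:
        # Every verdict traces back to named rules and named authors.
        print(f"{verdict}: rule '{rule.name}' (author: {rule.author})")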
--
alephnerd
The EU AI regulations have some very broad carve-outs added for law enforcement, such as real-time facial recognition. And individual EU states still have the authority to carve out exceptions as they wish.
Like every other form of tech regulation done by the EU, it's all bark and no bite, because politicians of individual countries have more power than MEPs.
What's to stop Fidesz, PiS (if they return to power), etc. from carving out broad exceptions for their own Interior Ministries? They've already done this with spyware like Pegasus.
Instead of techno-feudalism, it's basically techno-paternalism, which is essentially the same thing. In both cases, individual agency is being limited by someone else.
dasil003
At least someone is trying. I don't see anyone doing better anywhere else in the world.
dimal
Somehow, I think that using a bureaucracy to fix this bureaucratic tendency won't solve it. Not that it's not well-intentioned, but at some point we need to expect individual humans to behave ethically, or else we're doomed. Over and over, we try to fix society by adding more and more bureaucracy, and the situation only gets worse.
TeMPOraL
We keep adding layers of bureaucracy precisely because we can't expect individual humans to behave ethically at scale.
Personal ethics aren't fully personal in nature - they're just as much a function of social environment as of one's "heart of hearts". Ethical behavior being expected and respected makes it easier for any individual to stick to their principles in the face of conflicting factors, like desires or basic self-preservation instinct.
Individual ethics in a social context is a powerful organizer, and can keep small groups of people (30-50) stable and working together for a common goal. Humans evolved to cooperate at that size - such groups can self-govern and self-police just through everyone's sense of right and wrong. But it quickly breaks down past that size. Letting some individuals assume a leadership role and direct others only stabilizes the group up to ~150 people (the Dunbar number); past that, the group loses cohesion and ends up splitting.
Long story short: larger groups have a survival advantage, both against the environment and over smaller groups. Because of that, we eventually learned how to scale human societies to arbitrary sizes. The trick is hierarchical governance to overcome the Dunbar limit, and explicit rules to substitute for intimate relationships. By stacking layer upon layer, we grew from tribes of 150 governed by the spoken word of their chiefs, through kings managing networks of nobles via mutual obligation, to modern nation states that encompass millions of citizens in a hierarchy that's 4+ levels deep (central government -> provinces -> regions -> towns), slowly building up more levels as nation states group into blocs. With each layer we added, the complexity of explicit governance grew, giving rise to what we now call bureaucracy.
The modern bureaucracy isn't some malignant growth or unfortunate side effect - it's the very glue that allows human societies to scale up to millions of people and more. It's the network of veins and nerves of a modern society - it's what allows me, one tiny cell, to benefit from the contributions of other cells and contribute my own, and it's what keeps all the cells working as a whole. This means that, yes, as society grows and faces new challenges, the solution usually really is more bureaucracy.
elihu
The defense against this is to have very clear legal principles that identify the person or people fully accountable for the machine's decisions.
Admittedly this may have some strange consequences if followed to its logical conclusion, like a product manager at a self-driving car company being the recipient of ten thousand traffic tickets.
DrillShopper
> The defense against this is to have very clear legal principles that identify the person or people fully accountable for the machine's decisions.
Be careful how hard you push for this - this is how prosecutors in the British Post Office (Horizon) scandal drove subpostmasters out of business and drove a few to suicide.
JTbane
That case was absolutely crazy - imagine getting accused of fraud and embezzlement just because of a computer bug.
daveguy
> product manager at a self-driving car company being the recipient of ten thousand traffic tickets.
In the case of self-driving cars, the company itself could be held liable. Everyone invested in a company that puts a bad product on the market should be financially impacted. The oligarchy we are heading for wants no accountability or oversight -- all profit, no penalty.
lenerdenator
> The defense against this is to have very clear legal principles that identify the person or people fully accountable for the machine's decisions.
Most legal principles are designed to reduce liability. That's the whole point of incorporation, for example.
ajb
Incorporation is meant to reduce liability for debt. It's not supposed to reduce liability for criminal negligence, or other criminal offences.
dsr_
No, legal principles are designed to specify liability.
Xmd5a
>identify the person or people fully accountable for the machine's decisions.
Which requires more tech, not less.
dartos
Not necessarily.
Tbh I'm in favor of holding the C-suite responsible for the actions of their company, unless the company has extremely clear bylaws regarding accountability.
If, say, a health insurance provider was using an entirely automated claim review process that falsely denies claims, I think the C-level people should be responsible.
pj_mukh
Technology has nothing to do with this.
Before technology there was "McKinsey told me to do this". Abrogation of liability is a tale as old as time.
bluefirebrand
At least with a fully human chain of responsibility, the buck stops with someone
A computer cannot ever be made to atone for its misdeeds
Humans can
WJW
Someone has never encountered bureaucracy, I see. Human chains of responsibility manage to dissolve blame into nothingness all the time.
greentxt
Not really. The vast majority of adults are not at all accountable in today's society. We blame.
codr7
Aka the Nuremberg defense; sometimes it works, sometimes not so much.
jimbokun
Sarbanes-Oxley required CEOs and CFOs to certify company financial statements specifically to prevent them from pleading that rogue employees acted independently.
Seems like a similar legal framework could require that decision makers are held accountable for any decisions made by an AI under their control.
dragonwriter
> Seems like a similar legal framework could require that decision makers are held accountable for any decisions made by an AI under their control.
If you delegate a decision to an AI, you are simply making the decision while trusting the AI, and should not be any less responsible than if you made the decision by any other means.
(If you are directed by a superior authority to allow an AI to make a decision but are assigned nominal responsibility anyway, though, that superior authority is, in fact, making the decision by delegating to the AI bypassing you, to anticipate the obvious “install a human scapegoat organizationally between the people actually making the decision to use the AI and the AI itself” response.)
Pet_Ant
> If you are directed by a superior authority to allow an AI to make a decision but are assigned nominal responsibility anyway, though, that superior authority is, in fact, making the decision by delegating to the AI bypassing you, to anticipate the obvious “install a human scapegoat organizationally between the people actually making the decision to use the AI and the AI itself” response.
The solution is that the company won't hire you unless you are willing to take the blame and rubber-stamp the AI's decisions. Unemployed people are not in a position to protest.
Also, the point of AI is that the decisions are too complex to justify. They are grey and iffy. We don't usually hold people accountable for anything that nebulous. Wrongly deny someone insurance coverage and they die? No consequences, even without AI.
Sadly, at the scale of the world, we take shortcuts and need to be effective. As anyone with a rare disease will tell you, doctors do this for ages until the patient gets a proper diagnosis.
BiteCode_dev
Precisely what Americans state you should not do: regulate.
There should be laws stating who has skin in the game - maybe by stating that if you take responsibility for the profit by having a high salary, you also take responsibility for the damage, with prison.
buran77
> Precisely what Americans state you should not do: regulate.
Everyone thinks the same until they're screwed over, and then they want someone to do something about it. The big misunderstanding is that "regulation" is just the stuff you don't like. In reality it's everywhere the state gets involved. Every rule that the state ever put in place is regulation. Even the little ones. Even the ones that you like.
Computers cannot be held accountable any more than a car, or a gun, or an automated assembly line can. That's why you have a human there no matter what, being legally accountable for everything. The human's rank and power define how much of the risk they are allowed to, or must, take.
immibis
Libertarians love certain regulations - mostly the ones where the government allocates some stuff as "theirs" and uses violence to prevent other people from using it without paying them a fee.
crabbone
> Precisely what Americans state you should not do
Regulations, on a large scale, were pioneered by America as a response to the Great Depression. For a long time Europe was behind the US on this front.
Regulations actually worked miracles for the US. But two things happened: early success that prevented further improvements (medical care), and mechanistic misapplication of the practice (over-regulating businesses like hairdressing, etc.). Blinded by the latter, a lot of Americans believe that regulations in general are bad. Well, now we see a small group of people who stand to gain a lot from deregulating many aspects of American life about to rob blind the remaining, very large group of Americans :|
lloeki
> [B]ureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control
Isn't the bureaucrat still responsible for their use of a computer system, and for whatever its output is?
"GPS said I should drive off of a cliff" doesn't seem like a very potent argument to dismiss decisional responsibility. The driver is still responsible for where the car goes.
The only case where the responsibility would shift to the computer - or rather to the humans who made the computerized thingy - would be a Pentium FDIV-class bug, i.e. the computer system produces incorrect output+ from correct input, on which an earnest decision is then based.
+ assuming it is indistinguishable from correct output.
_Algernon_
The difference is that bureaucrats make decisions on behalf of others, so incentives are less aligned or not aligned at all.
If you drive a car and the GPS tells you to drive off the cliff, you won't do it, because you don't want to die.
If some bureaucrat rejects somebody's health care claim leading to them dying prematurely, it's just a normal Tuesday.
svilen_dobrev
> "GPS said I should drive off of a cliff"
for a bureaucrat, it's "GPS said WHOEVER-ELSE should drive off of a cliff." Their problem.
Adding: have a good day.
TeMPOraL
Try to do something with a bank in a branch office.
Clerk: can't do anything about it, the system doesn't let me. I can get you the manager.
Branch manager: well, I can't do anything about it, "the computer says no". Let me make a call to regional office ... (10 minutes of dialing and 30 minutes of conversation later) ... The system clearly says X, and the process is such that Y cannot be done before Z clears. Even the regional office can't speed Z up.
You: explains again why this is all bullshit, and Z shouldn't even be possible to be triggered for you
Branch manager: I can put a request with the bank's IT department to look into it, but they won't read it until at least tomorrow morning, and probably won't process it earlier than in a week or so.
At this point, you either give up or send your dispute to the central office via registered mail (and, depending on its nature, might want to retain a lawyer). Most customers who didn't give up earlier will give up here.
Was the system wrong? Probably. Like everyone, banks too have bugs in the system, on top of a steady stream of human errors. Thanks to centralized IT systems, the low-level employees are quite literally unable to help you navigate weird "corner case" scenarios. The System is big, complicated, handles everything, no one except a small number of people is allowed to touch it, and those people are mostly techies and not bank management. In this setup, anyone can individually claim they're either powerless or not in the position to make those decisions, and keep redirecting you around from one department to another until you either get tired or threaten to sue.
yencabulator
The real difference is that automated decisions in bank software systems are risk-mitigation failsafes, and only make "negative decisions" - the kind that refuse something the bank is at liberty to refuse. So they never need to be held accountable in this sense; they never choose the risky path.
Bank automation is like a human first making a "yes" decision, then calling a lawyer to ask if they should go ahead with it, and changing their mind if the lawyer advises against it.
orig | auto | result
no   | n/a  | no
yes  | yes  | yes
yes  | no   | no
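In code, that pattern is just a veto - a boolean AND in which the automated check can only turn a "yes" into a "no", never the reverse. A minimal sketch (illustrative only, not any bank's actual logic):

    # The "negative decisions only" pattern: automation as a pure veto.
    def final_decision(original_yes: bool, automated_check_passes: bool) -> bool:
        # The automated check can refuse, but it can never approve on its own.
        return original_yes and automated_check_passes

    assert final_decision(False, True) is False  # no  | n/a | no
    assert final_decision(True, True) is True    # yes | yes | yes
    assert final_decision(True, False) is False  # yes | no  | no (the veto)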
forgetfreeman
Nothing short of a neo-luddite social movement would do the trick, and even that would at best help the vanishingly small minority of the populace willing to make the kind of lifestyle changes necessary to support the ideology. Taken as a large enough group, people kinda suck: they're kinda useless, and definitely lazy. It's a problem our species has been struggling with at least as long as we've had systems of writing available to document the failures of society. One could argue society is nothing more than our attempt to make people suck less.
Xmd5a
It can also dramatically lower corruption. Can't receive a bribe from someone whose visa has expired when that person's arrest was recorded in a database.
somat
This is why I have doubts about self-driving cars: it changes the accountability from the driver to the manufacturer. And I have a hard time believing the manufacturer would want that liability, no matter how well they sold.
This is also the main reason for promoting chip cards: sure, they are more secure, but the real reason the banks like it is that it moves credit card fraud accountability from being the bank's problem to your problem.
Same with identity theft: there is no such thing as identity theft, it is bank fraud. But by calling it identity theft, it changes the equation from a bank problem to your problem.
Companies hate accountability. And to be fair, everyone hates accountability.
maeil
Re: Autonomous driving
If this becomes a thing, you'll very quickly see insurance products created for those manufacturers to derisk themselves. And if the self-driving cars are very unlikely to cause accidents - or more accurately, if the number of times they get successfully sued for accidents is low - it will be only a small part of the cost of a car.
The competitive advantage is too big for them to just not offer it when a competitor will, especially when the cat's out of the bag when it comes to development of such features. Look at how much money Tesla made from the fantasy that if you buy their car, in a few years it would entirely drive itself. There's clearly demand.
silvestrov
Another method is to create a lot of small companies that can go up in smoke when sued.
Supermarket delivery here is like that: the online supermarket does not own any delivery vans itself and does not hire any delivery workers. Everything is outsourced to very small companies, so problems with working conditions and bad parking are never the fault of the online supermarket.
enragedcacti
In California (one of the few places that's issued an L3 permit) the regulations place all of the requirements on the manufacturer. There is probably a workaround where the sacrificial company "installs" the self driving system (i.e. plugs in a USB drive) but then they would be the manufacturer and get saddled with tons of other regulations. Just for L3 driving alone they would need to get their own permit and their own proof of insurance or bond worth $5,000,000. Even then IDK if this would work given the department has a lot of leeway to reject applications on the basis of risk to public safety.
https://www.law.cornell.edu/regulations/california/title-13/...
satvikpendem
> Look at how much money Tesla made from the fantasy that if you buy their car, in a few years it would entirely drive itself. There's clearly demand
To be fair, their FSD really does feel like magic and is apparently leagues ahead of most other manufacturers [0].
nikitaga
> This is why I have doubts about self driving cars, it changes the accountability from the driver to the manufacturer. And I have a hard time believing the manufacturer would want that liability, no matter how well they sold.
Under current laws, perhaps. But you can always change the laws to redirect or even remove liability.
For example, in BC, we recently switched to "no-fault insurance", which is really a no-fault legal framework for traffic accidents. If you are rear-ended, say, you cannot sue the driver who hit you, or anyone for that matter. The government will take care of your injuries (on paper - people's experiences vary), pay you a small amount of compensation, and that's it. The driver who hit you will have no liability at all, aside from somewhat increased insurance premiums. The government-run insurance company everyone has to buy from won't have any liability either, aside from what I mentioned above. You will get what little they are required to provide you, but you can't sue them for damages beyond that.
At least, you may still be able to sue if the driver has committed a criminal offence (e.g. impaired driving).
Don't believe me? https://www.icbc.com/claims/injury/if-you-want-to-take-legal...
This drastic change was brought upon us to save, on average, a few hundred dollars per year in car insurance fees. So now we pay slightly less, but the only insurance we can buy won't come close to making us whole, and we are legally prevented from seeking any other recourse, even for life-altering injuries or death.
So, rest assured, if manufacturers' liability becomes a serious concern, it will be dealt with, one way or another. Bigger changes have happened for smaller reasons.
fluoridation
>So now [...] the only insurance we can buy won't come close to making us whole
"So"? I don't see what one thing has to do with the other. Why would a lack of liability imply an insurance that doesn't fully compensate a claim? It's not a given, for example for insurance against natural events.
nikitaga
EDIT: Sorry, I think I misread your question. Let me answer it more directly:
Driver insurance in BC is offered by ICBC, a "crown corporation", i.e. a monopoly run by the government. You have to buy this insurance to drive in BC. This insurance gives you some benefits (healthcare and some small compensation) in case you get in an accident. As a matter of fact, those benefits are often not enough to make you whole. They pay much less for pain and suffering, loss of income, etc. than a court would grant you if you could sue. But – you can't sue anymore. So, who is there to make sure that the government-run insurance monopoly will make you whole? Nobody. Because you don't have the legal right to be made whole anymore. And since there are no checks on the government, the government does not pay enough. Because, why would they, if they don't have to? They only have to pay you as much as their policy says they should pay you. You can not challenge the policy on the basis that it does not make you whole, because you don't have the right to be made whole anymore.
--- Original comment:
Natural events are nobody's fault, that's why you aren't made whole, that's why you can't sue anyone for them, with or without insurance. [ETA: you can only sue your private insurance company for what they promised you, which may or may not make you whole, depending on coverage].
BC government made the "idiot rear ending you" scenario into a "natural event", so to speak, so that you can't sue the idiot, or their insurance, or anyone, to recover damages. You will only get what the government-run insurance monopoly will give you, which is not much.
This isn't directly about insurance. This is about the government declaring that liability for most traffic accidents does not exist anymore. Which is the part that is relevant to this conversation. If liability can be extinguished wholesale for all drivers like this, then this can surely be done for self-driving cars. Not saying that it's a good idea, just that this option is on the table.
zerd
The old system was horrible though: you had to sue to get compensation, and could get sued for a fender-bender. The old system was _great_ for lawyers.
nikitaga
In the old system, you only had to sue for compensation if / when the government wasn't offering you what you were due. It was entirely the government's choice to drag so many cases through the courts instead of paying. But at least the judicial system eventually made you whole, if you were able to navigate it. If the government cared about us so much that they wanted to fix the system, they could have simply chosen to pay what was due from the start, saving everyone the time and the legal expenses. But they didn't.
Fraud was another concern. Huge payouts from parking lot whiplash were indeed not uncommon, with the help of lawyers. However, I fail to see how the new system was the best solution for that. They went from one extreme, where fraud was rampant, to another extreme, where we have no rights. At least the first extreme cost us only a few hundred bucks per year on average. The new extreme saves you a bit of money but leaves people injured for life with no meaningful compensation for the harm done to them.
Kind of beside the point though, regarding self-driving cars.
draugadrotten
Volvo saw this coming in 2019 and their CEO said they would accept full liability.
https://www.thedrive.com/tech/455/volvo-accepting-full-liabi...
zuhsetaqi
Well, let's see when they launch a fully self-driving car with which they accept full liability. It's easy to promise it and never deliver such a car.
BonoboIO
Full agreement.
For a fender bender, well, money can fix a lot of things. But what happens when the car kills a mother and her toddler?
CEO goes to jail?
prmoustache
As we say: promises only bind those who believe in them.
Unless it is written and signed in some form on paper given when the vehicle is sold, it doesn't mean anything legally.
moolcool
That's good optics, but can you actually do that, though? Like, you can declare "I claim responsibility!", but in real life, doesn't a court have to actually find you liable?
enragedcacti
Basically yes, it's effectively just a promise, but statements like this could probably be used as evidence if it came to that. Your insurance would talk to their insurance and tell their insurance to talk to Volvo. Volvo would settle or maybe fight the case but they pinky promise not to try to push it back to you or your insurance.
Attrecomet
I DECLARED responsibility!
Rygian
Isn't that precisely what Mercedes advertises as a selling point with their self-driving technology? "Manufacturer assumes liability when Drive Pilot is on"
Lanolderen
inb4: Drive Pilot disengages when the situation is deemed unsaveable, demanding manual input, and it was on until 250ms before the crash. It was mentioned on page 436 of the ToS, so get bent unless it's a tiny fender bender.
enragedcacti
It's a funny idea, but we have the manual for Drive Pilot and any reasonable reading shows there is no exception like that. When the system is active, the person in the driver's seat is considered the "fallback-ready user" and is explicitly encouraged to watch videos or do work while the system is active. In the event of a takeover request, the user is told to first "gather your bearings" before taking over, and there is a "maximum allotted time of 10 seconds" to respond before the vehicle puts on its hazards and comes to a stop.
In California, where Drive Pilot is approved, the manual is required to be included in the permit application, and any "incorrect or misleading information" would at the absolute minimum be grounds for revocation of MB's permit.
https://www.mbusa.com/content/dam/mb-nafta/us/owners/drive-p...
BonoboIO
Oh I see, the Tesla Defense.
Disengage to deflect responsibility for a crash.
amiga386
> This is also the main reason for promoting chip cards, sure they are more secure, but the real reason the banks like it, is that it moves credit card fraud accountability from the banks problem to your problem.
It depends on the jurisdiction. Banks like it because it improves security, i.e. the card was physically present for the transaction, if not the cardholder or the cardholder's authority. It eradicates several forms of fraud, such as magnetic stripe cloning. Contactless introduced opportunities for fraud, if someone can get within a few cm of your card, but that's generally balanced by how convenient it is, which increased the overall volume of transactions and therefore fees. It's more secure against fraud than a cardholder-not-present transaction... and for CNP, you can now see banks and authorities mandating 2FA to improve security too.
Liability is completely separate, and depends on how strong your financial regulator is.
Banks obviously would like to blame and put the liability on customers for fraud, identity theft, etc., it's up to politicians not to let them. For example, in the UK we have country-wide "unauthorised payments" legislation: https://www.legislation.gov.uk/uksi/2017/752/regulation/77 -- for unauthorised payments (even with a chip and pin card), if it is an unauthorised payment, the UK cardholder is only liable for a £35 excess, and even then they are not liable for the excess if they did not know the payment took place. The cardholder is only liable if they acted fraudulently, or were "grossly negligent" (and who decides that is the bank initially, then the Financial Ombudsman if the cardholder disagrees)
There is similarly a scheme now in place even for direct account-to-account money transfers, since last October: https://www.moneysavingexpert.com/news/2023/12/banks-scam-fr... -- so even if a crook scams you into logging into your bank's website and completely securely transferring money to them, banks are liable for that and must refund you up to £415,000 per claim, but they're allowed to exclude up to £100 excess per claim, but they can't do that if you're a "vulnerable customer" e.g. old and doddery. Also, the £100 excess is intentionally there to prevent moral hazard where bank customers get lax if they think they'll always get refunded. Seems to me like the regulator has really thought it through. The regulator also says they'll step in and change the rules again if they see that the nature of scamming changes to e.g. lots of sub-£100 fraudulent payments, so the customer doesn't report it because they think they'll get nothing back.
skybrian
For public transportation, the service provider is liable. This isn’t going to be very comforting if your plane crashes.
But having a system where the accident rate gets driven down to near zero (like air travel) is pretty good. Waymo seems to be on that path?
shinryuu
There are more reasons to be sceptical about self-driving cars. See https://www.youtube.com/watch?v=040ejWnFkj0
theoreticalmal
Can you summarize?
trhway
>I have a hard time believing the manufacturer would want that liability, no matter how well they sold.
I guess you haven't watched Fight Club :)
Barrin92
There's the old joke:
"It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter."
Removing yourself one or more degrees from decision making isn't only an accident; it is, and increasingly will be, done intentionally to divert accountability. "The algorithm malfunctioned" is already one of the biggest get-out-of-jail-free cards, and with autonomous systems I'm pretty pessimistic that it's only going to get worse. It's always been odd to me that people focus so much on what broke and not on who deployed it in the first place.
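The joke lands even better as (purely tongue-in-cheek) code - the refactor is textbook good practice, generic and reusable, yet it doesn't remove the unconscionable decision, it only relocates it to the call site:

    # Tongue-in-cheek sketch of the joke above: the "ethical" refactor
    # doesn't remove the decision, it just moves it to the call site.
    def destroy_city(city: str) -> None:
        print(f"*** destroying {city} ***")  # stand-in for the actual effect

    # The accountability question now lives wherever this line is written:
    destroy_city("Baghdad")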
miltonlost
It's why I could never work at Meta, knowing how responsible I would feel for aiding various genocides around the world. How any engineer there is able to ethically live with themselves is beyond me (but then, I also don't make that Meta money)
Terr_
I've been getting cold-emails from them lately, and I've been toying with the idea of regretfully informing them that I don't think I could bring enough of the "Masculine Energy" their CEO has been talking about.
DaSHacka
Arguably, depending on your country, paying your taxes is significantly worse...
miltonlost
Yes, this thing I MUST DO if I don't want to go to jail, is somehow worse than voluntarily working for a private company. Yes, we do live in a society.
EtCepeyd
> why I could never work at Meta [...] How any engineer there is able to ethically live with themselves is beyond me
https://cacm.acm.org/opinion/i-was-wrong-about-the-ethics-cr...
wombatpm
RSUs can be very comforting.
EncomLab
"What people want from the automated calculator is not more accurate sums, but a box into which they may place their responsibility for acting on the results." - Norbert Wiener
His book God & Golem Inc. is incredibly prescient for a work created at the very beginning of the computer age.
52-6F-62
I wonder if they really want that, or if that is what is being actively peddled as the new and better way and they’re just ignorantly buying it up.
andy_ppp
It's funny because the management where I work operate the policy:
"A manager can be held accountable"
"Therefore a manager must never make a management decision"
\shrugs
andrewla
I think philosophically this is a good rule of thumb; the problem is that the euphemism treadmill (or whatever) has done its work.
"Accountable" is meaningless therapy-speak now.
A CEO says "oh, this was a big problem, that we leaked everyone's account information and murdered a bunch of children. I hold myself accountable", but then doesn't resign or face any consequences other than the literal act of saying they are taking "accountability".
jonahx
In contrast Law 229 of Hammurabi's Code:
"If a house builder built a house for a man but did not secure/fortify his work so that the house he built collapsed and caused the death of the house owner, that house builder will be executed."
While extreme, this is the only type of meaningful accountability: the type that causes pain for the person being held accountable. The problem is that (for better and worse) in the corporate world the greatest punishment available for non-criminal acts is firing. Depending on the upside for the bad act, this may not be nearly enough to disincentivize it.
satvikpendem
Studies show that sentences like the death penalty do not actually reduce crime. There's a reason there's a saying that pushes back on Hammurabi's Code: "an eye for an eye makes the whole world blind."
jonahx
My understanding is that this is not a settled issue, and there is dispute about the evidence, conflicting studies, and so on.
Regardless, to the extent that it's true, it's IMO likely more to do with the nature of crimes of passion, and it's certainly not evidence against the effects of incentives in general. Death penalty aside, if you are doing any kind of work and you know you will be held accountable for errors in that work in ways that matter to you (whether SLA agreements, getting sued, losing payment, etc.), it will change the way you approach it. At least for most people.
Nasrudith
The flipside is that I have seen "accountable" used basically as an arbitrary threat: hold someone accountable for whatever I don't like, without regard to any mitigating circumstances.
noobermin
Just a thought: this already happened with "the algorithms" before the current hype cycle with AI.
macintux
True, but at least the algorithms were deterministic, planned, and could be deciphered and improved if necessary.
crabbone
Around 2010 I was invited to interview for Google's SRE team. In the course of preparing for the interview, the Google HR person assigned to me gave me a list of questions I'd have to prep for. One of the questions was "what should be the next large project for Google?".
My answer, unironically, was "GoogleGovernment". The idea was to build an SAP-like suite of programs that a country could then buy or rent, and have a fully digital government to run the country...
Luckily, that question never came up in the interview, and remained an anecdote I share with other coffee drinkers around the coffee machine.
My younger self believed (inspired by a chapter of the UK citizenship act translated into Prolog) that this success could be expanded much further (I didn't bother reading the accompanying paper, at least not at that time).
Others here have already mentioned that there will be people unable to overpower the computer system making bureaucratic decisions, as well as those who'd use it to avoid responsibility... I think that's because the readers here tend to be older. It's hard to appreciate the enthusiasm with which a younger person might believe that such a computer system could be a boon to society. That it could be made to function better than the humans currently in charge. And that mistakes, even if discovered, would be addressed much more efficiently than live humans would manage, and fixed in a centralized and timely fashion.
rzzzt
What does the backside say? I can make out the title at the bottom: "THE COMPUTER MANDATE", but not much else.
Terr_
Others have tried to figure out exactly what actual paperwork that particular image might be from (e.g. a memo or presentation flashcards) but AFAIK it's still inconclusive.
A plausible transcription:
> THE COMPUTER MANDATE
> AUTHORITY: WHATEVER AUTHORITY IS GRANTED IT BY THE SOCIAL ENVIRONMENT WITHIN WHICH IT OPERATES.
> RESPONSIBILITY: TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO
> ACCOUNTABILITY: NONE WHATSOEVER.
Sophira
There's also other text that's reversed:
> A MANDATE WITH TOO LITTLE AUTHORITY DOES NOT PROVIDE THE TOOLS REQUIRED TO TAKE ADVANTAGE OF THE LEVERAGE
> A MANDATE WITH TOO LITTLE RESPONSIBILITY PROVIDES TOO LITTLE LEVERAGE FOR THE RISKS
There's also other text that can't be read properly because the visible text overlays it, but that starts "A MANDATE WI" and ends with "FORM OF SUICIDE", with some blurred words before it. I imagine it's quite likely that the line is something along the lines of:
> A MANDATE WITH TOO LITTLE ACCOUNTABILITY IS AN INSTANT FORM OF SUICIDE
[edit: I accidentally mistyped a word in the transcription.]
layer8
See: https://mastodon.social/@mhoye/112459155499812117
Which also links to the earlier: https://infosec.exchange/@realn2s/111717179694172705
And that in turn to the somewhat related: https://www.ibm.com/blogs/think/be-en/2013/11/25/the-compute...
rzzzt
Thanks for the link. So most of the visible pages were deciphered!
shakna
The first word of the paragraph appears to be "authority".
I can't quite make out the first paragraph's contents.
But a bit after that comes under another semi-title "responsibility" and part of it reads:
> TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO
This [0] small link might make it easier to read bits.
malwrar
…therefore the operator is responsible?
Seems like the clearest legal principle to me; otherwise we'd ban matches to prevent arson.
andrewflnr
This principle also works for security vulnerabilities in open source software, by the way. That is, the responsibility for preventing security exploits rests on the party who operates or deploys the software. Don't want the "risk" of open source? Feel free to use something more expensive. But it might be cheaper to pay the original developer for patches.
platz
the owner
andrewflnr
Whoever is responsible for putting the computer in a position where its decisions mattered. Whether that's the owner or their agent is a question for which we already have a couple centuries of mostly-adequate legal precedent.
WaitWaitWha
Interestingly, this is an axiom of digital forensics, used in reverse of sorts.
In court, digital forensics investigators can attest to what was performed on the devices, the timeline, details, and such. But it should never be about a named person. The investigator can never tell who was sitting at the keyboard pushing the buttons, or whether some new and unknown method was used to implant those actions (or evidence).
It is always jarring to laypeople when they are told by the expert that there is a level of uncertainty, when throughout their lives computers appear very deterministic.
fennecfoxy
Tbf, do we even really care about this issue?
So many private companies and individuals haven't been held accountable, in so many ways and for so long. We'd rather squabble over race, religion, sexuality, migrants, etc. than address the elephant in the room.