A computer can never be held accountable
110 comments
·February 3, 2025
0xDEAFBEAD
The implication here is that unlike a computer, a person or a corporation can be held accountable. I'm not sure that's true.
Consider all of the shenanigans at OpenAI: https://www.safetyabandoned.org/
Dozens of employees have left due to lack of faith in the leadership, and they are in the process of converting from nonprofit to for-profit, all but abandoning their mission to ensure that artificial intelligence benefits all humanity.
Will anything stop them? Can they actually be held accountable?
I think social media, paradoxically, might make it harder to hold people and corporations accountable. There are so many accusations flying around all the time, it can be harder to notice when a situation is truly serious.
chefandy
There's a big difference between can and will. We absolutely can hold people and corporations accountable, but we often don't. We cannot hold a computer responsible for anything. It's a computer. No matter how complex or abstracted, its output is entirely based on instructions and data given to it by humans, interpreted and executed as those humans designed. It can't be discouraged or punished: a computer doesn't care if it's on or off; whether it's the most important computer to have ever existed or a DOA Gateway 486 from the early 90s that sat in a dumpster from the day after it was born until the day it was smashed to bits in a garbage compactor in a transfer station. It doesn't care because it can't care. Anything beyond that is anthropomorphization.
unparagoned
Also, in the justice system, a judge can be racist, and the sentences they give have been shown to correlate with how hungry they are, etc.
Would I rather be at the whims of how hungry someone is, or of a model that can be tested and evaluated?
h0l0cube
> might make it harder to hold people and corporations accountable
The problem is that someone (or some organization) chose to employ that system, and if the errant system won't oblige by having itself replaced with a new one, or by being amenable to change, the responsibility rebounds back to whoever controls that system, whether that be at the level of the source code or of the circuit breaker.
michael1999
Corporations are regularly "held accountable". Remember that "accountable" just means "required or expected to justify actions or decisions; responsible."
When you sue a corporation, discovery demands that they share their internal communication. You can depose key actors and require they describe the events. These actors can be cross-examined. A trial continues this. This is the very definition of "accountable".
The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.
0xDEAFBEAD
>The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.
This is not a good description of the incident. The employees I mention in my comment, who quit due to lack of faith in Sam Altman, were presumably on the board's side in the Sam vs board drama.
There is still a chance that OpenAI's conversion to for-profit will be blocked. The site I linked is encouraging people to write letters to relevant state AGs: https://www.safetyabandoned.org/#outreach
I think there's a decent argument to be made that the conversion to a for-profit is a violation of OpenAI's nonprofit charter.
michael1999
I hid my point behind the snark. Apologies.
My point is: accountability is NOT an abstract property of a thing. It is a relationship between two parties. I am "accountable" to you IF you can demand that I provide an explanation for my behaviour. I am accountable to my boss. I am accountable to the law, should I be sued or charged criminally. I am NOT accountable to random people in the street.
Sam Altman is accountable to the board. The board can demand he explain himself (and did). Management is generally NOT accountable to employees in the USA. This is because labor rarely has a legal right to demand an accounting. In serious labour cultures (e.g. Germany), it is normal for the unions to hold board seats. These board seats are what makes management accountable to the employees.
OpenAI employees took happy words from sama at face value. That was not a legal relationship that provided accountability. And here we are. The decision to convert from a not-for-profit is accountable to the board, and maybe to the chancellors of Delaware corporate law.
peterldowns
Per Landian-Accelerationist theory, companies are already artificial intelligences. As we've seen, they can be held accountable, and the law (at least in the US) does distinguish in a variety of ways between corporate responsibility and personal responsibility. As you point out, there are lots of failure cases here, and it's something I expect to see continue to be litigated over the coming century.
gom_jabbar
Correct, for Nick Land "Business ventures are actually existing artificial intelligences"[0] and the failure cases will increase with the ongoing autonomization of capital and eventually the concept of "capital self-ownership"[1] will have to be recognized.
[0] Nick Land (2014). Odds and Ends in Collapse Volume VIII: Casino Real. p. 372.
michael1999
To be "accountable" means you can be called to "explain yourself". A dictionary definition is "required or expected to justify actions or decisions;".
Don't confuse this with judgement, punishment, firing, etc. Those are all downstream. But step one is responding to the demand that you "make an account of the facts". That a computer or a company doesn't have a body to jail has nothing to do with fundamental accountability.
The real problem is that most computer systems cannot respond to this demand: "explain yourself!" They can't describe the inputs to an output, the criteria and thresholds used, the history of how thresholds have changed, or the reliability of the upstream data. They just provide a result: computer says no.
What's interesting is that LLMs are beginning to form this capacity. What damns them is not that they can't provide an accounting, but that their account is often total confabulation.
Careless liars should not be placed in situations of authority; not because they can't be held accountable, but because they lie when they are.
nostrademons
> The real problem is that most computer systems cannot respond to this demand: "explain yourself!" They can't describe the inputs to an output, the criteria and thresholds used, the history of how thresholds have changed, or the reliability of the upstream data. They just provide a result: computer says no.
By this definition, many computer systems can. The answers are all in the logs and the source code, and the process of debugging is basically the act of holding the software accountable.
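As a minimal sketch (all names and thresholds here are hypothetical), a decision routine that records its inputs, criteria, and thresholds can later be made to "explain itself" straight from its own log:

    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("decisions")

    # Hypothetical threshold; a real system would keep a versioned history of it.
    MIN_CREDIT_SCORE = 640

    def decide_loan(application_id: str, credit_score: int, income: float) -> bool:
        """Approve or deny, logging everything needed to reconstruct the decision."""
        approved = credit_score >= MIN_CREDIT_SCORE and income > 0
        log.info(json.dumps({
            "application_id": application_id,
            "inputs": {"credit_score": credit_score, "income": income},
            "criteria": {"min_credit_score": MIN_CREDIT_SCORE},
            "decision": "approve" if approved else "deny",
        }))
        return approved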
It's true that the average layperson cannot do this, but there are many real-life situations where the average layperson cannot hold other people accountable. I cannot go up to the CEO of Boeing and ask why the 737 MAX I was on yesterday had a mechanical failure, nor can I go up to Elon Musk and ask why his staff are breaking into the Treasury Department computer systems. But the board of directors of Boeing or the court system, respectively, can, at least in theory.
michael1999
You get it. To make the concept of accountability operational requires some standard of accounting. Historically, accountability for a business agent meant literally presenting oneself and providing an explanation. That's where the term "accounting" comes from. But different contexts have different systems of account. Firm management is accountable to the board via budgets, reports, presentations, and interviews as described in the incorporation documents. Publicly-traded firms are accountable via quarterly filings. Legal disputes require physical presence and verbal interrogation.
And your example of a debugging session, or logs and register traces, is also an accounting! Just not one admissible in traditional forums. They usually require an expert witness to provide the interpretation and voice I/O for the process.
The reason you can't accost the CEO of Boeing isn't because they aren't accountable. It's because they aren't accountable to you! Accountability isn't a general property of a thing, it is a relationship between two parties. The CEO of Boeing is accountable to his board. Your contract was with Delta (or whoever) to provide transport. You have no contract with Boeing.
You are 100% right that the average consumer often has zero rights to accountability. Between mandatory arbitration, rights waivers, web-only interfaces, and 6-hour call-centre wait times, big companies do a pretty good job of reducing their accountability to their customers. Real accountability is expensive.
dullcrisp
I don’t know, when asked to explain my actions I don’t typically just provide an MRI scan of my brain. An explanation is something different than just the sum of the inputs that produced something.
nostrademons
These days it seems like we can't hold humans accountable either.
leptons
Not true, there are plenty of poor people being held accountable.
nostrademons
Even here, the concept is decaying. Accountability, as explained elsewhere on the thread, is about being asked to explain and justify your actions. If a poor person gets arrested and shows up to court, frequently nobody listens to their explanation. The mere fact that they're poor and in court is evidence of guilt. That's not accountability; that's just punishment.
Accountability requires a common standard of conduct. People have to agree on what the rules are. If they can't even do that, the concept ceases to have meaning, and you simply have the exercise of power and "might makes right".
bamboozled
I thought the same thing. People do literally whatever they want: tax evasion, annexation of sovereign territory, coups, war crimes, pedophilia. It seems to just be getting worse.
I honestly feel like a moron for paying taxes.
throwitaway222
AI will definitely, without a doubt, make executive decisions. It already makes lower-level decisions. The company that runs the AI can be held accountable (meaning less likely OpenAI or the foundational LLM, and more likely the company calling LLMs to make decisions on car insurance, etc.).
themanmaran
Thing is, the chain of responsibility gets really muddled over time, and blame is hard to dish out. Let's think about denying a car insurance claim:
The person who clicks the "Approve" / "Deny" button is likely an underwriter looking at info on their screen.
The info they're looking at gets aggregated from a lot of sources. They have the insurance contract. Maybe one part is an AI summary of the police report. And another part is a repair estimate that gets synced over from the dealership. A list of prior claims this person has. Probably a dozen other sources.
Now what happens if this person makes a totally correct decision based on their data, but that data was wrong because the _syncFromMazdaRepairShopSFTP_ service got the quote data wrong? Who is liable? The person denying the claim, the engineer who wrote the code, AWS?
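One hedged sketch of a partial answer: if every field carried a provenance tag, a wrong quote would at least be traceable to the service that produced it (the type and function names below are hypothetical; the SFTP name comes from the example above):

    from dataclasses import dataclass

    @dataclass
    class SourcedValue:
        """A value tagged with the upstream service that produced it."""
        value: float
        source: str  # e.g. "syncFromMazdaRepairShopSFTP"

    def decide_claim(repair_quote: SourcedValue, coverage_limit: float) -> str:
        # The underwriter's decision can be entirely correct given the data...
        if repair_quote.value > coverage_limit:
            # ...and if the quote was wrong, the tag at least says where to look.
            return f"deny: quote {repair_quote.value} from {repair_quote.source} exceeds {coverage_limit}"
        return "approve"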
In reality, it's "the company" in so far as fault can be proven. The underlying service providers they use don't really factor into that decision. AI is just another tool in that process that (like other tools) can break.
almosthere
_syncFromMazdaRepairShopSFTP_ failing is also just as likely to cause a human to deny a claim.
Just because an automated decision system exists, does not mean an OOB (out of band) correctional measure should not exist.
In other words, if AI fixes a time sink for 99% of cases but fails on 1%, then let the 50% of that 1% who are angry enough to email the staff get a second decision. That fallback still saves the company millions per year.
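To make that arithmetic concrete, here's a rough back-of-the-envelope in code; every number below is a hypothetical assumption, not data:

    # All figures are hypothetical assumptions.
    claims_per_year = 1_000_000
    cost_manual = 25.00     # fully manual review, per claim
    cost_automated = 0.50   # AI-assisted decision, per claim

    failure_rate = 0.01     # the 1% the AI gets wrong
    appeal_rate = 0.50      # half of those customers email for a second decision

    all_manual = claims_per_year * cost_manual
    ai_with_appeals = (claims_per_year * cost_automated
                       + claims_per_year * failure_rate * appeal_rate * cost_manual)

    print(f"all-manual:   ${all_manual:,.0f}")       # all-manual:   $25,000,000
    print(f"AI + appeals: ${ai_with_appeals:,.0f}")  # AI + appeals: $625,000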
chasing
Executives have always used decision-making tools. That’s not the point. The point is that the executive can’t point to the computer and say “I just did what it said!” The executive is the responsible party. She or he makes the choice to follow the advice of the decision-making tool or not.
owlbite
The scary thing for me is when they've got an 18-year-old drone operator making shoot/no-shoot decisions on the basis of some AI metadata analysis tool (phone A was near phone B, we shot phone B last week...).
You end up with "Computer says shoot" and so many cooks involved in the software chain that no one can feasibly be held accountable except maybe the chief of staff or the president.
h0l0cube
More than any other organization, the military can literally get away with murder, and they're motivated to recruit and protect the best murderers. It's only by political pressure that they may uphold some moral standards.
freeone3000
There is not a finite amount of blame for a given event. Multiple people can be fully at fault.
stavros
Yeah but it's fine because nobody cares if you kill a few thousand brown people extra.
hosh
A description of Promise Theory, in an article published in the Linux Journal in 2014:
"IT installations grow to massive size in data centers, and the idea of remote command and control, by an external manager, struggles to keep pace, because it is an essentially manual human-centric activity. Thankfully, a simple way out of this dilemma was proposed in 2005 and has acquired a growing band of disciples in computing and networking. This involves the harnessing of autonomous distributed agents." (https://www.linuxjournal.com/content/promise-theory%E2%80%94...)
What are autonomous agents in promise theory?
"Agents in promise theory are said to be autonomous, meaning that they are causally independent of one another. This independence implies that they cannot be controlled from without, they originate their own behaviours entirely from within, yet they can rely on one another's services through the making of promises to signal cooperation." (https://en.wikipedia.org/wiki/Promise_theory#Agents)
Note: the Wikipedia article is off because it frames agents in terms of obligations instead of promises. Promises make no guarantee of behavior, and it is up to each autonomous agent to decide how much it can rely on the promises of other autonomous agents.
So to circle back to the original post through the lens of Promise Theory: being held accountable comes from a theory of obligations rather than a theory of promises. (There is a promise made by the governing body to hold the bad actor responsible.) More crucially, we are treating AIs as _proxies_ for autonomous agents, namely humans. Human engineers and, potentially, regulatory bodies are promising certain performance from the AIs, but the AIs have exceeded the engineers' capability for bounding behaviors.
To make that next leap, we would basically be having AIs make their own promises, and either holding them to it, or considering that that specific autonomous agent is not reliable in its promises.
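As a loose illustration of that last idea (a sketch, not an implementation of Promise Theory proper): each agent can only promise its own behavior, and each agent keeps its own local estimate of how reliable another agent's promises are:

    class Agent:
        """A promise-theory-style agent: it can only promise its own behavior,
        and it forms its own local view of how reliable other agents are."""

        def __init__(self, name: str):
            self.name = name
            self.history: dict[str, list[bool]] = {}  # other agent -> promises kept/broken

        def observe(self, other: str, kept: bool) -> None:
            self.history.setdefault(other, []).append(kept)

        def reliability(self, other: str) -> float:
            """No obligation is enforced from outside; this is one agent's own assessment."""
            kept = self.history.get(other, [])
            return sum(kept) / len(kept) if kept else 0.0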
lrvick
Meanwhile I work in reproducible builds and remote attestation. We absolutely can and must hold computers accountable, now that we have elected them into positions of power in our society.
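For flavor, the verification step can be as simple as comparing digests. This is a minimal sketch with a placeholder digest; real remote attestation involves signed quotes from hardware, not just a hash check:

    import hashlib

    def artifact_digest(path: str) -> str:
        """SHA-256 of a build artifact, streamed so large files are fine."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # With reproducible builds, independent builders should all publish the
    # same digest; a mismatch is the computer failing its accounting.
    EXPECTED = "placeholder-digest-published-by-independent-builders"

    def verify(path: str) -> bool:
        return artifact_digest(path) == EXPECTED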
h0l0cube
Surely the company that is making profit out of said build systems and providing attestations holds some accountability. Someone wrote the code. Someone paid for the code to be written to a particular standard, under particular budget and resourcing constraints. Someone was responsible for ensuring the code was adequately audited. Someone claimed it was fit for purpose, and likely insured it as such, because they are ultimately responsible.
byteknight
You can only hold computers accountable if you can guarantee no outside modification, and as far as I'm aware we still haven't ever built a system that isn't "pop-able".
rzzzt
What does the backside say? I can make out the title at the bottom: "THE COMPUTER MANDATE", but not much else.
Terr_
Others have tried to figure out exactly what actual paperwork that particular image might be from (e.g. a memo or presentation flashcards) but AFAIK it's still inconclusive.
A plausible transcription:
> THE COMPUTER MANDATE
> AUTHORITY: WHATEVER AUTHORITY IS GRANTED IT BY THE SOCIAL ENVIRONMENT WITHIN WHICH IT OPERATES.
> RESPONSIBILITY: TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO
> ACCOUNTABILITY: NONE WHATSOEVER.
shakna
The first word of the paragraph appears to be, "authority".
I can't quite make out the first paragraph's contents.
But a bit after that comes under another semi-title "responsibility" and part of it reads:
> TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO
This [0] small link might make it easier to read bits.
layer8
See: https://mastodon.social/@mhoye/112459155499812117
Which also links to the earlier: https://infosec.exchange/@realn2s/111717179694172705
And that in turn to the somewhat related: https://www.ibm.com/blogs/think/be-en/2013/11/25/the-compute...
a3w
Wisdom from '79!
Could also be wisdom from the fifties, found again.
azlev
Relevant example here: https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_al...
canterburry
Isn't accountability simply meant to prevent repeat bad behavior in the future... or is it meant to be punitive, without any other expectations?
If meant to prevent repeat bad behavior, then simply reprogramming the computer accomplishes the same end goal.
Accountability is really just a means to an end, and with machines that end can be accomplished in other ways that aren't possible with humans.
brap
Right, but as long as you have humans, you will probably need accountability.
If a human decided to delegate killing enemy combatants to a machine, and that machine accidentally killed innocent civilians, is it really enough to just reprogram the machine? I think you must also hold the human accountable.
(Of course, this is just a simplified example, and in reality there are many humans in the loop who share accountability, some more than others)
miltonlost
You fundamentally don’t understand either accountability or what people mean by “computers can’t be held accountable”. Who is at fault when a computer makes a mistake? That is accountability.
You cannot put a computer in jail. You cannot fine a computer. Please, stop torturing what people mean because you want AI to make decisions to absolve you of guilt.
chgs
What is the purpose of putting a person in jail or fining them?
Retribution? Reformation? Prevention?
cmgriffing
Consider the Volkswagen scandal, where code was written that fudged the results when running in an emissions-testing environment.
The only person to see major punishment for that was the software dev who wrote the code, but the decision to write that code involved far more people up the chain. THEY should be held accountable in some way, or else nothing prevents them from using some other poor dev as a scapegoat.
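Schematically, the kind of logic at issue looked something like this; a hypothetical reconstruction for illustration, not VW's actual code:

    def emissions_mode(steering_angle_deg: float, seconds_running: float) -> str:
        """Hypothetical defeat-device logic: dyno test cycles run a fixed
        duration with no steering input, which software can detect."""
        on_test_stand = steering_angle_deg == 0.0 and seconds_running < 1200
        if on_test_stand:
            return "full emissions controls"   # behaves itself for the test
        return "reduced emissions controls"    # normal road driving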
willy_k
All of the above. Whether or not one agrees with it, humans have a need for retribution, or as we prefer to call it to feel better about it, justice. And you cannot get retribution on LLMs.
echoangle
In this context, prevention. So people see what happens if they screw up in a negligent way and make sure to not do it themselves.
miltonlost
Mixture of all three, but for the purposes of “accountability”, prevention of the behavior in the first place. But I don’t want to debate prisons when that’s derailing the larger point of “accountability in AI/computers”.
canterburry
What is the purpose of accountability?
miltonlost
To stop people from making illegal decisions ahead of time, and not just to punish them after. If there is no accountability for an AI, then a person making a killer robot would have no reason not to make a killer robot. If they were liable to be imprisoned for making a killer robot, then they would be less likely to make one.
In a world without accountability, how do you stop evil people from doing evil things with AI as they want?
Macha
> If meant to prevent repeat bad behavior, then simply reprogramming the computer accomplishes the same end goal.
Note the bad behaviour you're trying to prevent is not just the specific error that the computer made, but delegating authority to the computer to the level that it was able to make that error without proper oversight.
1shooner
This sounds like a conflation of responsibility with accountability. A machine responsible for emitting a certain amount of radiation on a patient can and should be reprogrammed. The company and/or individuals that granted a malfunctioning radiation machine that responsibility need to be held accountable.
Terr_
I think you're confusing the tool with the user.
Improving the tool's safety characteristics is not the same as holding the user accountable because they made stupid choices with unsafe tools. You want them to change their behavior, no matter how idiot-proofed their new toolset is.
wmf
In practice they will try to avoid acknowledging errors and will never reprogram the computer. That's why a human appeals system is needed.
maxbond
This makes sense if the computer was programmed that way accidentally. If the computer is a cut-out to create plausible deniability, then reprogramming it won't actually work. The people responsible will find a way to reintroduce a behavior with a similar outcome.
chasing
You’ve set up an either-or here that fails to take into account a wide spectrum of thought around accountability and punishment.
When it comes to computers, the computer is a tool. It can be improved, but it can’t be held any more accountable than a hammer.
At least that’s how it should be. Those with wealth will do whatever they feel they need to do to shun accountability when they create harm. That will no doubt include trying to pin the blame on AI.
gwern
I would suggest an updated version, more germane to the current fast-developing landscape of AI agents:
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE WE MUST NEVER DENY THAT
COMPUTERS CAN MAKE DECISIONS
Terr_
I disagree, that's throwing away the 1979-era qualifier of management decision, as distinct from the decisions made by an hourly employee (or computer) following a pre-made checklist (or program.) It's not the same as FizzBuzz "deciding" to print something out.
Related qualifiers might be "policy decision" or "design decisions".
> a computer must never make a management decision.
This is a little too weak for my taste.
In reality it should read "a computer can't make a management decision". As in: the sun can't be prevented from rising, or the laws of thermodynamics can't be broken.
"Must" implies that you really shouldn't, but technically it's feasible. Like "you must not murder".
A computer, like a dog, can't be held accountable; only its owner can.
Edit
If anyone tries to do this they are simply laundering their own accountability.