Google drops pledge not to use AI for weapons or surveillance
104 comments
February 4, 2025

a_shovel
I initially thought that this was an announcement for a new pledge and thought, "they're going to forget about this the moment it's convenient." Then I read the article and realized, "Oh, it's already convenient."
Google is a megacorp, and while megacorps aren't fundamentally "evil" (for some definitions of evil), they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise.
Retric
> while megacorps aren't fundamentally "evil" (for some definitions of evil),
I think megacorps being evil is universal. It tends to be corrupt cop evil vs serial killer evil, but being willing to do anything for money has historically been categorized as evil behavior.
That doesn’t mean society would be better or worse off without them, but it would be interesting to see a world where companies pay vastly higher taxes as they grow.
mananaysiempre
Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
BrenBarn
> Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
I think if you look at quality of life and happiness ratings in Norway it's pretty clear it's far from "entirely undesirable". It's good for people to do things for reasons other than money.
giantg2
"the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating"
Sounds like the effort needed for bonuses here in the US. Why try, when the amount is largely arbitrary and, once you factor in all the extra hours, generally below your base salary's hourly rate? Everything is a sham.
sweeter
We're talking about corporations here. Where are they going to go? If you had a competent government, you would say, "Fine, then leave. But your wealth and business are staying here." At some point the government has to do its job. These corporations pull in trillions of dollars; it's wild to me to suggest that suddenly everyone would stop working and making money because they were taxed at a progressive rate. It's an absurd assumption to begin with.
We could literally have high speed rail, healthcare, the best education on the planet and have a high standard of living... and it would be peanuts to them. Instead we have a handful of people with more wealth than 99% of everyone else, while the bottom 75% of those people live in horrifying conditions. The fact that medical bankruptcy is a concept only in the richest country on earth is deeply embarrassing and shameful.
zelon88
You're talking about pre-Clinton consumerism. That system is dead. It used to dictate that the company that could offer the best value deserved to take over most of the market.
That's old thinking. Now we have servitization. Now the business who can most efficiently offer value deserves the entire market.
Basically, iterate until you're the only one left standing and then never "sell" anything but licenses ever again.
Ekaros
The bait-and-switch model is absolutely amazing as well. Start by offering a service covered with ads. Then add a paid tier to get rid of the ads. Next add a tier with both payment and ads. And finally add ads back to every possible tier. Not to forget keeping ads embedded in the content itself the whole time.
thomassmith65
Is that referring to 'customer focus'? That was still going strong in the Clinton era. It probably peaked around the year 2000.
ericmay
My problem with this take is that it forgets that corporations are made up of people. For the corporation to be evil, you have to take into account the aggregate desires and decision-making of the employees and shareholders and, frankly, call them all evil. Calling them evil is kind of a silly thing to do anyway, but you cannot divorce the actions of a company from those who run and support it, and I would argue you can't divorce those actions from those who buy the products the company puts out, either.
So in effect you have to call the employees and shareholders evil. Well those are the same people who also work and hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it not true, you are setting up your "problem" so that it can't be addressed because you're only moralizing over the abstract corporation and not the physical manifestation of the corporation either. What do you do about the abstract corporation being evil if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?
I've noticed similar behavior with respect to climate change advocacy and really just "government" in general. If you can't take personal responsibility, or even try to change your own habits, volunteer, work toward public office, organize, etc. it's less than useless to rail about these entities that many claim are immoral or need reform if you are not personally going to get up and do something about it. Instead you (not you specifically) just complain on the Internet or to friends and family, those complaints do nothing, and you feel good about your complaining so you don't feel like you need to actually do anything to make change. This is very unproductive because you have made yourself feel good about the problem but haven't actually done anything.
With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) more or less evil. What if Google pays more taxes and that tax money funds (insert really bad thing you don't like)? Paying taxes isn't inherently a moral good or a moral bad.
Retric
> made up of people
People making meaningful decisions at megacorporations aren't a random sample of the population; they are self-selected to care a great deal about money and/or power.
Honestly, if you wanted to filter the general population to quietly discover who was evil, I'd have a hard time finding something more effective. It doesn't guarantee everyone there is actually evil, but actually putting your kids first is a definite hindrance.
The morality of the average employee, on the other hand, is mostly irrelevant. They aren't setting policy, and if they dislike something they just get replaced.
BrenBarn
I don't really agree with some of your assumptions. At many companies, many of the people also are evil. Many people who hold shares and public office are also evil.
I don't think it's necessary to conclude that because a company is evil then everyone who works at the company is evil. But it's sort of like the evilness of the company is a weighted function of the evilness of the people who control it. Someone with a small role may be relatively good while the company overall can still be evil. Someone who merely uses the company's products is even more removed from the company's own level of evil. If the company is evil it usually means there is some relatively small group of people in control of it making evil decisions.
Now, I'm using phraseology here like "is evil" as a shorthand for "takes actions that are evil". The overall level of evilness or goodness of a person is an aggregate of their actions. So a person who works for an evil company or buys an evil company's products "is evil", but only insofar as they do so. I don't think this is even particularly controversial, except insofar as people may prefer alternative terms like "immoral" or "unethical" rather than "evil". It's clear people disagree about which acts or companies are evil, but I think relatively few people view all association with all companies totally neutrally.
I do agree with you that taking personal responsibility is a good step. And, I mean, I think people do that too. All kinds of people avoid buying from certain companies, or buy SRI funds or whatever, for various ethically-based reasons.
However, I don't entirely agree with the view that says it's useless or hypocritical to claim that reform is necessary unless you are going to "do something". Yes, on some level we need to "do something", but saying that something needs to be done is itself doing something. I think the idea that change has to be preceded or built from "saintly" grassroots actions is a pernicious myth that demotivates people from seeking large-scale change. My slogan for this is "Big problems require big solutions".
This means that it's unhelpful to say that, e.g., everyone who wants regulation of activities that Company X does has to first purge themselves of all association with Company X. In many cases a system arises which makes such purges difficult or impossible. As an extreme, if someone lives in an area with few places to get food, they may be forced to patronize a grocery store even if they know that company is evil. Part of "big solutions" means replacing the bad ways of doing things with new things, rather than saying that we first have to get rid of the bad things to get some kind of clean slate before we can build new good things.
sweeter
You could use this logic to posit that any government, group, system, nation-state, militia, business, or whatever isn't "evil" because you haven't gauged the thoughts, feelings, and actions of every single person who comprises that system. That's absurd.
If using AI and other technology to uphold a surveillance state, wage war, practice imperialism, and commit genocide isn't evil, then I don't know if you can call anything evil.
And the entire point of taxes is that we all collectively decide we'd be better off pooling our labor and resources so that we can have things like basic education, healthcare, roads, police, and bridges that don't collapse. Politicians and corporations have directly broken and abused this social contract in a multitude of ways. One of those ways is using loopholes to avoid paying taxes at anywhere near the rate everyone else pays; another is paying off politicians and lobbying so that those loopholes never get closed, and in fact the opposite happens. So yes, taxing Google and other megacorporations is a single, easily identifiable action that can be taken to remedy this problem. There is no way around solving the core issue at hand, but people have to be able to identify that issue first.
nirav72
They’re not evil, they’re amoral and are designed to maximize profits for their investors. Evil is subjective.
moralestapia
>Evil is subjective.
This is a meme that needs to die, for 99% of cases out there the line between good/bad is very clear cut.
Dumb nihilists keep the world from moving forward wrt. human rights and lawful behavior.
dylan604
What is Googs going to do, leave money on the table?
And if Googs doesn't do it, someone else will, so it might as well be Googs making the money for its shareholders. Technically, couldn't activist shareholders come together and claim that, by not going after this market, the leadership should be replaced with people who would? After all, share price is the only metric that matters.
stevage
I don't buy that argument. There are things Google does better than competitors, so them doing an evil thing means they are doing it better. Also, they could be spending those resources on something less evil.
dylan604
Remember when the other AI companies wanted ClosedAI to stop "for humanity's sake," when all that meant was letting them catch up? None of these companies are "good." They all know that as soon as one company does it, they all must follow, so why not lead?
layer8
“Drop” has really become ambiguous in headlines.
abeppu
I guess a question becomes, how does dropping these self-imposed limitations work as a marketing exercise? Probably most of their customers or prospective customers won't care, but will a cheery multi-colored new product land a little differently? If Northrop Grumman made a smart home hub, you might be reluctant to put it in your living room.
HPMOR
They are dropping these pledges to avoid securities lawsuits. “Everything is securities fraud” and presumably if they have a stated corporate pledge to do something, and knowingly violate it, any drop in the stock price could use this as grounds.
a_shovel
Being a defense contractor isn't a problem that a little corporate rearrangement can't fix. Put the consumer division under a new subsidiary with a friendly name and you're golden. Even among the small percentage who know the link, it's likely nobody will really care. For certain markets ("tacticool" gear, consumer firearms) being a defense contractor is even a bonus.
lenerdenator
Marketing doesn't matter to oligarchs.
kqr
Not evil, perhaps, but run by Moloch[1] -- which is possibly just as bad. Their incentives are set up to throw virtually all human values under the bus because even if they don't, they will be out-marginal-profited by someone that does.
[1]: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
quesera
"We won't use your dollars and efforts for bad and destructive activities, until we accumulate enough of your dollars and efforts that we no longer care about your opinions".
lenerdenator
The market solves all problems.
... or at least that's what these people have to be telling themselves at all times.
smallmancontrov
The market's objectives are wealth-weighted.
This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.
amarcheschi
Anduril already asked this question with a strong "fuck yes"
johnnyanmac
It is partially the market's fault. If companies were demonized for this, there'd at least be a veneer of trying to look moral. Instead they can simply go full mask-off. That's why you shouldn't tolerate the intolerant.
kelseyfrog
I have full faith that the market[1] will direct the trolley onto the morally optimal track. Its invisible hand will guide mine when I decide whether or not to pull the lever. Either way, I can be sure that the result is maximally beneficial to the participants, myself included.
1. https://drakelawreview.org/wp-content/uploads/2015/01/lrdisc...
mystified5016
The magic market fairy godmother has decided that TVs with built-in ads and spyware are good for you. The market fairy thinks this is so good for you that there are no longer any alternatives to a smart TV besides "no TV."
The market fairy has also decided that medication commercials on TV are good for you. And that your car should report your location, speed, and driving habits to your insurer, your car manufacturer, and their 983,764 partners at all times.
Maximally beneficial indeed.
causal
One of my chief worries about LLMs for intelligence agencies is the ability to scale textual analysis. Previously there at least had to be an agent taking an interest in you; today an LLM could theoretically read all text you've ever touched and flag anything from legal violations to political sentiments.
Etheryte
This was already possible long before LLMs came along. I also doubt that an LLM is the best tool for this at scale; if you're talking about sifting through billions of messages, it gets very expensive very fast.
causal
LLMs can do more than whatever we had before. Sentiment analysis and keyword searches only worked so well; LLMs understand meaning and intent. Cost and scale won't be bottlenecks for long.
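A toy sketch of the gap being described here, with a hypothetical watch list and hypothetical messages: classic keyword flagging only matches surface tokens, so it over-flags benign text while missing evasive intent that an LLM-style reader would catch.

```python
# Toy illustration: why naive keyword flagging falls short of intent analysis.
# The keyword list and sample messages below are made up for the example.
KEYWORDS = {"protest", "leak", "encrypt"}

def keyword_flag(message: str) -> bool:
    """Flag a message if any token matches a watch-listed keyword."""
    tokens = set(message.lower().split())
    return bool(tokens & KEYWORDS)

messages = [
    "Let's encrypt the backup drives before the audit.",      # benign, yet flagged
    "Meet at the usual place; bring the documents quietly.",  # evasive, yet not flagged
]

for m in messages:
    print(keyword_flag(m), m)
```

The first message trips the filter on the innocuous word "encrypt"; the second sails through despite its evasive phrasing, which is exactly the class of signal a model that reads for meaning could pick up.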
karaterobot
Is this more or less ethical than OpenAI getting a DoD contract to deploy models on the battlefield less than a year after saying that would never happen, with the excuse being, well, we obviously only meant certain kinds of warfare or military purposes? I guess my question is: isn't there something more honest about an open heel-turn, like Google's, compared to one where you maintain the fiction that you're still trying to do the right thing?
ddtaylor
Google has already made multiple commitments like this and broken them. One example would be their involvement in operating a censored version of Google.cn for the Chinese government from 2006 to 2010.
stevage
Someone should make a website tracking tech companies' moral promises that then get broken.
thih9
Something to remember next time Google makes a pledge. I.e. when they pledge not to do something, it just means they pledge to make a prior indirect notification before doing that thing.
bufferoverflow
I doubt this will change anything. It's not like Google's AI has some secret sauce; it's all published. So any military contractor can have cutting-edge AI in its weapons for a relatively low cost.
mihaaly
With the help of Google's resources and knowledge from now on, for some dollars of course. AI will not develop itself just yet, right? So those military contractors need humans for that, preferably ones who are already experienced, or better yet, the ones who built it. I have a hunch it will help them quite a bit.
By the way, about humans: the "principles page includes provisions that say the company will use human oversight." Which human? Trump? Putin is human too, but I guess he is busy elsewhere. Definitely not someone like Mother Teresa; she is dead anyway, and I cannot think of anyone from recent years playing in the same league. Somehow that end of the spectrum is not well represented these days.
1970-01-01
I give it 2 years until we see
"Google Petard, formerly known as Boombi, will be shutting down at the end of next month. Any existing explosion data you have in your Google Account will be deleted, starting on May 1, 2027."
atlasunshrugged
I'm guessing this will be a somewhat controversial view here, but I think this is net good. The world is more turbulent than at any other time in my life, there is war in Europe, and the U.S. needs every advantage it can get to improve its defense. Companies like Google, OpenAI, Microsoft, can and should be working with the government on defense projects -- I would much rather the Department of Defense have access to the best tools from the private sector than depend on some legacy prime contractor that doesn't have any real tech capabilities.
croes
> the U.S. needs every advantage it can get to improve its defense
That's one of the reasons for the turbulent times. Let's face the truth: most defense technology can easily be used for offense, and given the state of online security, every advance eventually ends up in the wrong hands.
Maybe it's time to pause, to make things more difficult for those wrong hands.
atlasunshrugged
I guess you could put that on the U.S.'s plate, and no doubt America has caused many issues around the world, but I think in general it's a good actor. Biggest conflicts today: Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.; Sudan -- maybe it's my lack of knowledge, but I don't think it's fair to place much responsibility on the U.S. (especially relative to other actors); ditto DRC/Rwanda.
Yes, many defensive technologies can be used for offense. When I say defense, I also include offense, because I don't believe a purely defensive posture alone can maintain one's defense; you need deterrence too. Personally I'm quite happy to see many in Silicon Valley embrace defense tech and build missiles (e.g. a recent YC company), munitions, and dual-use tech. The world is a scary and dangerous place, and awful people will take advantage of the weakness of others if they can. Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S., with all our faults, to another actor like China or Russia being dominant.
stickfigure
Just how do you propose to remove those tools from Putin's, Xi's, Khomeini's, or Kim Jong-Un's hands?
wayathr0w
It's surprising to me that they ever made such a pledge, considering...you know.
Clamchop
At what point does a public promise carry any legal weight whatsoever? If it carries none, then why not leave it in place and lie? If it carries some, for how long and who has standing to sue?
Genuine questions. Unlike "don't be evil," this promise has a very narrow and clear interpretation.
It would be nice if companies weren't able to just kinda say whatever when it's expedient.
telotortium
Absolutely no legal weight.
However, when you change a promise publicly, you signal a change in direction. It is much more honest than leaving it in place but violating it behind the scenes. If the public really cares, they can pass a law via their democratic representatives (or Google can swear a public oath before God I suppose).
Cheer2171
It's an ad.
nprateem
Because then investors won't invest.
xbar
I don't think those kids understand "pledge."
https://archive.ph/hfrKY