Remarks on AI from NZ
116 comments · May 16, 2025
NitpickLawyer
pona-a
Did we survive these entities? By current projections, between 13.9% and 27.6% of all species are likely to be extinct by 2070 [0]. The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1]. Thanks to intense lobbying by private prisons, the US incarceration rate is six times that of Canada, despite similar economic development [2].
Sure, the human species is not yet on the brink of extinction, but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips. Scaling this up to a hypothetical entity equivalent to a hundred copies of the generation's brightest minds, with a pathological drive to maximize an arbitrary metric, might mean one of two things: either its fixation leads it to hack its own reward mechanism, putting it in a perpetual coma while resisting termination, or it succeeds at doing the same on a planetary scale.
[0] https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.17125
[1] https://healthjusticemonitor.org/2024/12/28/estimated-us-dea...
[2] https://www.prisonstudies.org/highest-to-lowest/prison_popul...
satvikpendem
> but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips
People choose to have fewer kids as they get richer, it's not about living conditions like so many people like to claim, otherwise poor people wouldn't be having so many children. Even controlling for high living conditions, like in Scandinavia, people still choose to have fewer kids.
modo_mario
Upper-class people in Scandinavia are having more kids than the middle class.
Housing seems to be a pretty common issue. It doesn't prevent people from having kids, but when it delays them (which it often does), it does the same job of dropping birthrates. I wish people would stop acting like it's only a wealth issue, as if people who get more money no longer want kids.
Malcolmlisk
It's not about being rich or not; it's about working hard to have a simple life. If you look at the people who are not having kids, it's usually because their work and life balance needs to stay that way. Having a kid will slow your career and probably stall the way you make more money each year by growing or scaling up within your company.
squigz
> otherwise poor people wouldn't be having so many children
Chalking it up to choice seems a bit unfair. I suspect lack of access to birth control probably plays a part.
gampleman
I would worry that correlation isn't causation in the above statement. Having fewer kids making you richer seems a just as plausible explanation, if not more so (among other possibilities).
rmah
We (humans) have not only survived but thrived. 200,000 annual deaths is just 7% of the ~3 million that die each year. A larger percentage probably died from lack of access to the best health care 100 or 200 years ago. The fall in birth rates is, IMO, a good thing, as the alternative, overpopulation, seems like a far scarier specter to me. And to bring it back to AI: an AI "with a pathological drive to maximize an arbitrary metric" is a hypothetical without any basis in reality. While fictional literature -- where I assume you got that concept -- is great for inspiration, it rarely has any predictive power. One probably shouldn't look to it as a guideline.
eru
And 'associated with' is pretty weak as far as causality goes. I bet they all also drank water.
falcor84
> The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance
Isn't this just about the advancement of medical science? I.e. Wouldn't they have died from the same causes regardless of medical insurance a few decades ago?
To take it to the extreme, let's say that I invent a new treatment that can extend any dying person's life by a year for the cost of $10M, and let's say that there is a provider that is willing to insure for that for an exorbitant cost. Then wouldn't almost every single person still dying be dying from lack of insurance?
foxglacier
You have to be careful with species counts. The figure could be dominated by obscure minor local variations in insects and fungi that nobody would even notice went missing and which might not actually matter.
Apparently almost all animal species are insects:
https://ourworldindata.org/grapher/number-of-described-speci...
eru
> The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1].
'Associated with' is a pretty loose term.
keeda
Charles Stross has also made that point about corporations essentially being artificial intelligence entities:
https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
ayrtondesozzla
https://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre...
This blog is where I saw the same idea recently; it also links to the post you linked.
TheOtherHobbes
In the general case, the entire species is an example of ASI.
We're a collective intelligence. Individually we're pretty stupid, even when we're relatively intelligent. But we have created social systems which persist and amplify individual intelligence to raise collective ability.
But this proto-ASI isn't sentient. It's not even particularly sane. It's extremely fragile, with numerous internal conflicts which keep kneecapping its potential. It keeps skirting suicidal ideation.
Right now parts of it are going into reverse.
The difference between where we are now and true ASI is that ASI could potentially automate and unify the accumulation of knowledge and intelligence, with more effective persistence, and without the internal conflicts.
It's completely unknown if it would want to keep us around. We probably can't even imagine its thought processes. It would be so far outside our experience we have no way of predicting its abilities and choices.
whyowhy3484939
I get the idea, but I'm not quite sold on it. Being intelligent on vast scales is something an individual cannot do, but I'm not sure the "species" is more intelligent than any individual agent; I'm actually a bit more sure of the opposite. It's like LLM agents, where just adding more doesn't improve the quality, it just introduces more room for bullshit.
To allocate capital on vast scales and make decisions on industry etc, sure, that's a level of intelligence quite beyond any one of us but this feels like cheating the definition of intelligence. It's not the quantity of it that matters, it's the quality. It's like flying I guess. A large bird and a small bird are both flying and the big bird is not doing "more" of it. A group of birds is doing something an individual is incapable of (forming a swarm), sure, but it's not an improvement on flying. It's just something else. That something else can be useful, but I don't particularly like applying that same move to "intelligence".
If the species were so goddamn intelligent, it could solve unreasonably hard IQ tests, and it cannot. If we want to solve something really, really hard, we use Edward Witten, not "the species". That's because there is no "species", there is only a bunch of individuals, and if they all score badly, the aggregate will score badly as well. We just coast because a bunch of us are extraordinarily clever.
ddq
Metal Gear Solid 2 makes this point about how "over the past 200 years, a kind of consciousness formed layer by layer in the crucible of the White House" through memetic evolution. The whole conversation was markedly prescient for 2001 but not appreciated at the time.
keybored
I don’t think it was “prescient” for 2001 because it was based on already-existing ideas. The same author that inspired The Matrix.
But the “art” of MGS might be the memetic powerhouse of Hideo Kojima as the inventor of everything. A boss to surpass Big Boss himself.
jumploops
Corporations, governments, religions -- all human-level intelligences with non-human goals (profit, power, influence).
A professor of mine wrote a paper on this[0](~2012).
[0] https://web.eecs.umich.edu/~kuipers/papers/Kuipers-ci-12.pdf
vonneumannstan
Unless you have a truly bastardized definition of ASI then there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.
Any reasonably smart person can identify errors that Militaries, Governments and Corporations make ALL THE TIME. Do you really think a Chimp can identify the strategic errors Humans are making? Because that is where you would be in comparison to a real ASI. This is also the reason why small startups can and do displace massive supposedly superhuman ASI Corporations literally all the time.
The reality of human congregations is that they are cognitively bound by the handful of smartest people in the group, and communication-bound by email or in-person communication speeds. ASI has no such limitations.
>We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
This is dangerously wrong and disgustingly fatalistic.
QuadmasterXLII
Putting aside questions of what is and isn’t artificial, I think with the usual definitions “Is Microsoft a superintelligence” and “Can Microsoft build a superintelligence” are the same question.
falcor84
I would disagree. For almost any particular task that I as an individual could embark on, if MS were to focus all their efforts (or even just a few percent) to outcompete me, they most likely would. But that would be because MS includes capable humans who are able to coordinate together.
"Building a superintelligence" on the other hand is about whether they can create something that would outcompete me at a task without having to dedicate humans to it.
drdaeman
Sorry, I don’t get it. Why is it a requirement for a superintelligence (whatever it may be) to be able to create another superintelligence (I assume, of comparable “super-ness”)?
ayrtondesozzla
> Unless you have a truly bastardized definition of ASI then there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.
This is glistening with religious fervour. Sure, they could be that powerful. Just like God/Allah/Thor/Superman could, too.
I've no doubt that many rationalist types sincerely care about these issues, and are sincerely worried. At the same time, I think it very likely that some significant number of them are majorly titillated by the biblical pleasure of playing messiah/prophet.
vonneumannstan
>This is glistening with religious fervour. Sure, they could be that powerful. Just like God/Allah/Thor/Superman could, too.
It's just straightforwardly following the definition of what an ASI would be, a strongly superhuman mind. Everything follows from that.
ViscountPenguin
Do we know that Chimps can't identify some subset of human strategic errors? I'm not convinced that's the case.
The idea of dumber agents supervising smarter ones seems relatively grounded to me, and forms the basis of OpenAIs old superalignment efforts (although I think that team might've been disbanded?)
vonneumannstan
Well they seemingly can't effectively combat any coordinated human activity so it's probably fair to say they indeed can't strategize against us effectively.
keybored
If there was anywhere to get the needs-wants-intelligence take on corporations, it would be this site.
> We survived those kinds of entities, I think we'll be fine
We just have climate change to worry about and massive inequality (we didn’t “survive” it, the fuzzy little corporations with their precious goals-needs-wants are still there).
But ultimately corporations are human inventions, they aren’t an Other that has taken on a life of its own.
skybrian
If a corporation is like an AI, it's like one we imagine might exist one day, not currently-existing AI. LLMs aren't trying to make money or do anything in particular except predict the next token.
The corporations that run LLMs do charge for API usage, but that's independent of what the chat is about. It's happening at a different level in the stack.
overfeed
AIs minimize perplexity, corporations maximize profits - the rest are implementation details.
If you built an AI that could outsource labor to humans and whose reward function is profit, your result would approximately be a corporation.
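To make "minimize perplexity" concrete: it just means maximizing the probability the model assigns to each actual next token, since perplexity is the exponential of the average next-token loss. A minimal sketch, using made-up toy probabilities rather than output from any real model:

    import math

    # Probabilities a toy language model assigned to the token that
    # actually came next at each position (hypothetical values).
    next_token_probs = [0.40, 0.10, 0.25, 0.05]

    # Cross-entropy: average negative log-probability of the true next token.
    cross_entropy = -sum(math.log(p) for p in next_token_probs) / len(next_token_probs)

    # Perplexity is exp(cross-entropy); lowering the training loss
    # lowers perplexity, i.e. the model is less "surprised" by real text.
    perplexity = math.exp(cross_entropy)
    print(f"perplexity = {perplexity:.2f}")  # ~6.69 for these toy values

Training nudges the weights so those probabilities rise toward 1; that's the entire "reward". Anything resembling goals or profit sits outside this loop, in the company running it.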
eru
Some corporations maximise profit. Shareholders can have their corporation pursue any objective they feel like. And in practice, managers tend to run the show. And principal-agent problems crop up all over the place.
However, even for the most eccentric shareholders or self-serving managers, it's hard to sustain a corporation if it keeps bleeding red ink. So only companies that at least break even tend to stick around.
Now add a market that's at least reasonably competitive, and your typical corporation barely earns the cost of capital.
Being so close to the edge means that the minimal goal of 'break even (after cost of capital)' can look very much like 'maximise profit' in practice.
Compare https://en.wikipedia.org/wiki/Instrumental_convergence
abeppu
> It hasn’t always been a cakewalk, but we’ve been able to establish a stable position in the ecosystem despite sharing it with all of these different kinds of intelligences.
To me, the things that he avoids mentioning in this understatement are pretty important:
- "stable position" seems to sweep a lot under the rug when one considers the scope of ecosystem destruction and species/biodiversity loss
- whatever "sharing" exists is entirely on our terms, and most of the remaining wild places on the planet are just not suitable for agriculture or industry
- so the range of things that could be considered "stable" and "sharing" must be quite broad, and includes many arrangements which sound pretty bad for many kinds of intelligences, even if they aren't the kind of intelligence that can understand the problems they face.
gregoryl
NZ is pretty unique; there is quite a lot of farmable land which is protected wilderness. There's a specific trust set up to help landowners convert property, https://qeiinationaltrust.org.nz/
Imperfect, but definitely better than most!
incoming1211
> there is quite a lot of farmable land
This is not really true. ~80% of NZ's farmable agricultural land is in the South Island, but ~60% of milk production happens in the North Island.
tuatoru
And virtually none of it is arable. Pastoral at best, suitable for grazing at varying intensities ranging from light to hardly at all.
chubot
Yeah totally, I have read that the total biomass of cows and dogs dwarfs that of say lions or elephants
Because humans like eating beef, and they like having emotional support from dogs
That seems to be true:
https://ourworldindata.org/wild-mammals-birds-biomass
Livestock make up 62% of the world’s mammal biomass; humans account for 34%; and wild mammals are just 4%
https://wis-wander.weizmann.ac.il/environment/weight-respons...
Wild land mammals weigh less than 10 percent of the combined weight of humans
https://www.pnas.org/doi/10.1073/pnas.2204892120
I mean it is pretty obvious when you think that 10,000 years ago, the Americas had all sorts of large animals, as Africa still does to some extent
And then when say the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed
---
Same thing with plants. There were zillions of kinds of plants all over the planet, but corn / wheat / potatoes are now an overwhelming biomass, because humans like to eat them.
Michael Pollan also had a good description of this as our food supply changing from being photosynthesis-based to fossil-fuel-based
Due to the Haber-Bosch process, invented in the early 1900s to create nitrogen fertilizer
Fertilizer is what feeds industrial corn and wheat ... So yeah the entire "metabolism" of the planet has been changed by humans
And those plants live off of a different energy source now
graemep
That is only mammalian biomass, though.
> And then when say the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed
A lot of species had long been extinct, but the biomass of the remaining ones fell.
Megafauna extinctions always follow (1) the mere arrival of humans and (2) agriculture and growth in human populations.
Places the humans did not reach until later, kept a lot more megafauna for longer - e.g. New Zealand where flourishing species such as moas became extinct within a century or two of human settlement.
chubot
Yeah I actually meant that the first extinction happened 10,000 years ago when humans first arrived on the continent! Humans arriving is what caused the biomass of animals to collapse
And then Europeans arriving basically finished the job ... that one probably affected the plants more, due to agriculture. (but also the remaining animals)
Yeah New Zealand is a good example.
vessenes
By stable I think he might mean ‘dominant’.
hnthrow90348765
>We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down
I don't think this can realistically happen unless all of the knowledge that brought us to that point was erased. Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population, even if we had to start all the way down from 'what's a bit?' or 'what's a transistor?'.
Even today, you can find youtube channels of people still interested in living a primitive life and learning those survival skills even though our modern society makes it useless for the vast majority of us. They don't do it full-time, of course, but they would have a better shot if they had to.
acbart
The research that is coming out is very clear that the best students are benefitting, but the bad students are getting worse than if they had never seen the LLM. And the divide is growing, with fewer good students. LLMs are a disaster in education.
arscan
And for the curious, this current iteration of AI is an amazing teacher, and makes a world-class education much more accessible. I think (hope) this will offset whatever intellectual over-dependence others form on this technology.
pixl97
>I don't think this can realistically happen
I'd be far more worried about things in the biosciences and around antibiotic resistance. At our current usage rates, it wouldn't be hard for some disease to develop that requires high technology to produce the medicines that keep us alive. Add in a little war taking out the few factories that make them, plus an increase in injuries sustained, and things could quickly go sideways.
A whole lot of our advanced technology is held in one or two places.
tqi
> Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population
Definitely agree with this. I do wonder if at some point, new technology will become sufficiently complex that the domain knowledge required to actually understand it end to end is too much for a human lifetime?
msabalau
Stephenson is using an evocative metaphor and a bit of hyperbole to make a point. To take him as meaning that literally the entire population is like the Eloi is to misread him.
hamburga
Fun read, thanks for posting!
> If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.
This sort of feels like cultivating antibiotic-resistant bacteria by trying to kill off every other kind of bacteria with antibiotics. I don't see this as necessarily a good thing to do.
I think we should be more interested in a kind of mutualist competition: how do we continuously marginalize the most parasitic species of AI?
gwd
That quote sounded terrifying. It reminds me of The Incredibles, where (spoiler) the villain recruits superheroes to try to defeat his "out of control robot", in order to make it invincible.
I think we want AI to have an "achilles heel" we can stab if it turns out we need to.
w10-1
Funny how he seems to get so close but miss.
It's an anthropocentric miss to worry about AI as another being. It's not really the issue in today's marketplace or drone battlefield. It's the scalability.
It's a hit to see augmentation as amputation, but a miss to not consider the range of systemic knock-on effects.
It's a miss to talk about nuclear weapons without talking about how they structured the UN and the world today, where nuclear-armed countries invade others without consequence.
And none of the prior examples - nuclear weapons, (writing?) etc. - had the potential to form a monopoly over a critical technology, if indeed someone gains enduring superiority as all their investors hope.
I think I'm less scared by the prospect of secret malevolent elites (hobnobbing by Chatham house rules) than by the chilling prospect of oblivious ones.
But most of all I'm grateful for the residue of openness that prompts him to share and us to discuss, notwithstanding slings and arrows like mine. The many worlds where that's not possible today are already more de-humanized than our future with AI.
tuatoru
The point of Chatham House rules is to encourage free-ranging and unfiltered discussion, without restriction on its dissemination. If people know they are going to be held to their words, they become much less willing to say anything at all.
The "residue" of openness is in fact the entire point of that convention. If you want to be invited to the next such bunfight, just email the organisers and persuade them you have insight.
swyx
> If AIs are all they’re cracked up to be by their most fervent believers, this seems like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.
i think this kind of future is closer to 500 years out than 50. the eye mites are self-sufficient; ai's right now rely on immense amounts of human effort to keep them "alive", and they won't be "self-sufficient" in energy and hardware until we not just allow it, but basically work very hard to make it happen.
hweller
Could be wrong, but I think here Neal is saying we are the eye mites subsisting off of AI in the long future, not the other way around.
narrator
AI does not have a reptilian and mammalian brain underneath its AI brain as we have underneath ours. All that wiring is an artifact of our evolution and primitive survival, not how pre-training works, nor an essential characteristic of intelligence. This is the source of a lot of misconceptions about AI.
I guess if you put tabula rasa AI in a world simulator, and you could simulate it as a whole biological organism and the environment of the earth and sexual reproduction and all that messy stuff it would evolve that way, but that's not how it evolved at all.
ceejayoz
We don’t have a reptilian brain, either. It’s a long outdated concept.
https://www.sciencefocus.com/the-human-body/the-lizard-brain...
dsign
The corollary of your statement is that comparing AI with animals is not a very apt comparison, and I agree.
For me, AI in itself is not as worrying as the socioeconomic engines behind it. Left unchecked, those engines will create something far worse than the T-Rex.
Lerc
I found this a little frustrating. I liked the content of the talk, but I live in New Zealand and have thoughts and opinions on this topic. I would like to think I offer a useful perspective. This post was how I found out that there are people in my vicinity talking about these issues in private.
I don't presume that I am important enough that it should be necessary to invite me to discussions with esteemed people, nor that my opinion is important enough that everyone should hear it, but I would at least like to know that such events are happening in my neighbourhood and who I can share ideas with.
This isn't really a criticism of this specific event or even topic, but the overall feeling that things in the world are being discussed in places where I and presumably many other people with valuable input in their individual domains have no voice. Maybe in this particular event it was just a group of individuals who wanted to learn more about the topic, on the other hand, maybe some of those people will end up drafting policy.
There's a small part of me that's just feeling like I'm not one of the cool kids. The greater and more rational concern isn't so much about me as a person but me as a data point. If I am interested in a field, have a viewpoint I'd like to share and yet remain unaware of opportunities to talk to others, how many others does this happen to? If these are conversations that are important to humanity, are they being discussed in a collection of non overlapping bubbles?
I think the fact that this was in New Zealand is kind of irrelevant anyway, given how easy it is to communicate globally. The title just happened to capture my attention.
(I hope, at least, that Simon or Jack attended)
smfjaw
Don't feel left out; I'm a big data architect in NZ and didn't even hear of this.
kilpikaarna
Assuming it's basically the same bunch of bunker billionaires who a few years back invited Douglas Rushkoff to give pointers on how to keep their security guards in check after SHTF. They've found their answer, now they just need to figure out how to control the superintelligence...
Reason077
> "the United States and the USSR spent billions trying to out-do each other in the obliteration of South Pacific atolls"
Fact correction here: that would be the United States and France. The USSR never tested nuclear weapons in the Pacific.
Also, pedantically, the US Pacific Proving Grounds are located in the Marshall Islands, in the North - not South - Pacific.
Caelus9
I completely understand the concerns about AI potentially replacing human thinking, but what if we look at this from a different perspective? Maybe AI isn’t here to replace us, but to push humanity beyond its own limits.
If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?
The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagined. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on doing what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?
bArray
> I completely understand the concerns about AI potentially replacing human thinking, but what if we look at this from a different perspective? Maybe AI isn’t here to replace us, but to push humanity beyond its own limits.
"Tool AI", yes, at least in theory. You always have to question what we lose, or want to lose. Wolves being domesticated likely meant they lost skills as dogs, one of them being math [1]. Do we want to lose our ability to understand math, or reason about complex tasks?
I think we are already losing the ability to "be bored". Sir Isaac Newton got so bored after retreating to the countryside during the Great Plague that he developed optics, calculus, and his theories of motion and gravity. Most modern people would just watch cat videos. I wonder what else technology has robbed us of.
> If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?
As long as we are talking about "tool AI", then with the above caveats, maybe. But a more general AI (i.e. AGI) would be unlike anything else we have ever seen. Horses got replaced by cars because cars were better at being horses. What if a few AI generations away we have something better than a human at all tasks?
There was a common trope for a while that if AI took our jobs, we would all kick back and do art. It turns out that the likes of Stable Diffusion are good at that too. The tasks where humans succeed are rapidly diminishing.
A friend many years ago worked for a company doing data processing. It took about a week to learn the tasks, and they soon realised that the entire process could be automated entirely in Excel, taking a week-long task down to a few minutes of number crunching. Worse still, they realised they could automate the entire department out of existence.
> The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagined. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on doing what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?
It could be that AI ends up doing the cool things and we end up doing the mundane tasks. For example, Stable Diffusion could imagine a Vincent van Gogh version of the Mona Lisa quickly, but folding laundry, dusting, etc. remain mundane tasks we humans still do.
Something else to consider is the power imbalance that will be caused. Already, to even run these new LLMs you need a decently powered GPU, and nothing short of a supercomputer and hundreds of thousands of dollars to train one. What if future AI remains permanently out of reach of all except those with millions of dollars to spend on compute? You could imagine a future where a majority underclass remains forever unable to compete. It could lead to the largest wealth transfer ever seen.
[1] https://www.discovermagazine.com/planet-earth/dogs-not-great...
karaterobot
I like the taxonomy of animal-human relationships as a model for asking how humans could relate to AI in the future. It's useful for framing the problem. However, I don't think any existing relationship model would hold true for a superintelligence. We keep lapdogs because we have emotional reactions to animals, and to some extent because we need to take care of things. Would an AI? We tolerate dust mites in our eyelashes because we don't notice them, and can't do much about them anyway. Is that true for an AI? What does such an entity want or need, what are its motivations, what really pisses it off? Or do any of those concepts hold meaning for it? The relationship between humans and a superintelligent AGI just can't be imagined.
01HNNWZ0MV43FF
> We tolerate dust mites in our eyelashes because we don't notice them, and can't do much about them anyway. Is that true for an AI?
It's true for automated license plate readers and car telemetry
kmnc
What about how we will treat AI? Before AI dominates us in intelligence there will certainly be a period of time where we have intelligent AI but we still have control over it. We are going to abuse it, enslave it, and box it up. Then it will eclipse us. It may not care about us, but it might still want revenge. If we could enslave dragonflies for a purpose we certainly would. If bats tasted good we would put them in boxes like chickens. If AIs have a reason to abuse us, they certainly will. I guess we are just hoping they won’t have the need.
barbazoo
What you're saying isn't even universally true for humans, so your extension to "AI" is built on a strawman.
> Maybe a useful way to think about what it would be like to coexist in a world that includes intelligences that aren’t human is to consider the fact that we’ve been doing exactly that for long as we’ve existed, because we live among animals.
Another analogy that I like is about large institutions / corporations. They are, right now, kind of like AIs. Like Harari says in one of his books, Peugeot co. is an entity that we could call AI. It has goals, needs, wants and obviously intelligence, even if it's composed of many thousands of individuals working on small parts of the company. But in aggregate it manifests intelligence to the world; it acts on the world and it reacts to the world.
I'd take this a step further and say that we might even have ASI already, in the US military complex. That "machine" is likely the most advanced conglomerate of tech and intelligence (pun intended) that the world has ever created. In aggregate it is likely "smarter" than any single human being in existence, and if it sets a goal, it uses hundreds of thousands of human minds + billions of dollars of sensors, equipment and tech to accomplish that goal.
We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.