You are the scariest monster in the woods
267 comments
· October 15, 2025
myrmidon
Aerroon
> I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
Humans (and other "agents") have persistent state. If we learn something, we can commit it to long-term memory and have it affect our actions. This can enable us to work towards long-term goals. Modern LLMs don't have this. You can fake long-term memory with large context windows and feed the old context back to it, but it doesn't appear to work (and scale) the same way living things do.
mediaman
In Context Learning (ICL) is already a rapidly advancing area. You do not need to modify their weights for LLMs to persist state.
The human brain is not that different. Our long-term memories are stored separately from our executive function (prefrontal cortex), and specialist brain functions such as the hippocampus serve to route, store, and retrieve those long term memories to support executive function. Much of the PFC can only retain working memory briefly without intermediate memory systems to support it.
If you squint a bit, the structure starts looking like it has some similarities to what's being engineered now in LLM systems.
Focusing on whether the model's weights change is myopic. The question is: does the system learn and adapt? And ICL is showing us that it can; these are not the stateless systems of two years ago, nor is this the simplistic approach of "feeding old context back to it."
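To make the ICL point concrete, here is a minimal sketch (call_llm and the made-up mapping are hypothetical stand-ins, not any particular API): the "learning" lives entirely in the prompt, with no weight update anywhere.

```
# Sketch of in-context learning: the "learning" lives entirely in the prompt.
# call_llm is a placeholder for any chat/completion API; no weights change here.

def call_llm(prompt: str) -> str:
    return "(model completion)"  # stub

def icl_prompt(examples, query):
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

# A made-up mapping no base model was trained on; given a few demonstrations in
# context, a capable model will usually continue the pattern (here "YELLOW-7")
# even though its weights never changed.
examples = [("blue", "BLUE-7"), ("red", "RED-7"), ("green", "GREEN-7")]
prompt = icl_prompt(examples, "yellow")
print(prompt)
print(call_llm(prompt))
```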
santadays
It seems like there are a bunch of research results and working implementations that allow efficient fine-tuning of models. Additionally, there are ways to tune a model toward outcomes rather than individual training examples.
Right now the state of the world with LLMs is that they try to predict a script in which they are a happy assistant as guided by their alignment phase.
I'm not sure what happens when they start getting trained in simulations to be goal oriented, i.e. their token generation is based not on what they think should come next but on what should come next in order to accomplish a goal. Not sure how far away that is, but it is worrying.
zamadatix
> During that answer the LLM has state, but once it's done the state is gone.
This is an operational choice. LLMs have state, and you never have to clear it. The problems come from the amount of state being extremely limited (in comparison to the other axes) and the degradation of quality as the state scales. Because of these reasons, people tend to clear the state of LLMs. That is not the same thing as not having state, even if the result looks similar.
observationist
No, they don't - you can update context, make it a sliding window, create a sort of register and train it on maintaining stateful variables, or various other hacks, but outside of actively managing the context, there is no state.
You can't just leave training mode on, which is the only way LLMs can currently have persisted state in the context of what's being discussed.
The context is the percept, the model is engrams. Active training allows the update of engrams by the percepts, but current training regimes require lots of examples, and don't allow for broad updates or radical shifts in the model, so there are fundamental differences in learning capability compared to biological intelligence, as well.
Under standard inference only runs, even if you're using advanced context hacks to persist some sort of pseudo-state, because the underlying engrams are not changed, the "state" is operating within a limited domain, and the underlying latent space can't update to model reality based on patterns in the percepts.
The statefulness of intelligence requires that the model, or engrams, update in harmony with the percepts in real-time, in addition to a model of the model, or an active perceiver - the thing that is doing the experiencing. The utility of consciousness is in predicting changes in the model and learning the meta patterns that allow for things like "ahh-ha" moments, where a bundle of disparate percepts get contextualized and mapped to a pattern, immediately updating the entire model, such that every moment after that pattern is learned uses the new pattern.
Static weights means static latent space means state is not persisted in a way meaningful to intelligence - even if you alter weights, using classifier-free guidance or other techniques, stacking LoRAs or alterations, you're limited in the global scope by the lack of hierarchical links and other meta-pattern level relationships that would be required for an effective statefulness to be applied to LLMs.
We're probably only a few architecture innovations away from models that can be properly stateful without collapsing. All of the hacks and tricks we do to extend context and imitate persisted state do not scale well and will collapse over extended time or context.
The underlying engrams or weights need to dynamically adapt and update based on a stable learning paradigm, and we just don't have that yet. It might be a few architecture tweaks, or it could be a radical overhaul of structure and optimizers and techniques - transformers might not get us there. I think they probably can, and will, be part of whatever that next architecture will be, but it's not at all obvious or trivial.
messe
> They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
That's solved by the simplest of agents. LLM + ability to read / write a file.
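A minimal sketch of that kind of agent, with hypothetical names throughout (call_llm is a stub and agent_memory.txt is an arbitrary scratch file): each model call is stateless, but the file persists between calls.

```
from pathlib import Path

NOTES = Path("agent_memory.txt")  # arbitrary scratch file; the only persistent state

def call_llm(prompt: str) -> str:
    return "NOTE: user prefers short answers"  # stub for a real model call

def step(user_message: str) -> str:
    notes = NOTES.read_text() if NOTES.exists() else ""
    reply = call_llm(f"Notes so far:\n{notes}\n\nUser: {user_message}")
    # Each call to the model is stateless, but appending to the file means the
    # next call sees everything kept so far.
    with NOTES.open("a") as f:
        f.write(reply + "\n")
    return reply

print(step("Remember that I prefer short answers."))
```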
aniviacat
But they can only change their context, not the model itself. Humans update their model whenever they receive new data (which they do continuously).
A live-learning AI would be theoretically possible, but so far it hasn't been done (in a meaningful way).
reactordev
The trick here is never turning it off so the ICL keeps growing and learning to the point where it’s aware.
fullstackchris
but even as humans we still don't know what "aware" even means!
throwuxiytayq
I have no words to express how profoundly disappointed I am to keep reading these boring, shallow, short-termist, unimaginative takes that get invalidated by a model/arch upgrade next week - or, in this case, were invalidated years ago, since pretty much all big LLM platforms are already augmented by RAG and memory systems. Do you seriously think you're discussing a serious long-term limitation here?
KronisLV
> pretty much all big LLM platforms are already augmented by RAG and memory systems
I think they're more focusing on the fact that training and inference are two fundamentally different processes, which is problematic on some level. Adding RAG and various memory add-ons on top of the already-trained model is an attempt to work around that, but it is not really the same as how humans or most other animals think and learn.
That's not to say that it'd be impossible to build something like that out of silicon, just that it'd take a different architecture and approach to the problem, something to avoid catastrophic forgetting and continuously train the network during its operation. Of course, that'd be harder to control and deploy for commercial applications, where you probably do want a more predictable model.
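For illustration only, here is a toy of what "continuously train during operation" can look like: online SGD on a one-parameter model with a small rehearsal/replay buffer, one classic way to blunt catastrophic forgetting. All names and numbers are made up for the sketch.

```
import random

# Toy continual-learning sketch: a one-parameter model y ~ w * x trained online,
# replaying a few stored examples alongside each new one -- a classic way to
# reduce catastrophic forgetting. Purely illustrative.

w = 0.0
replay = []  # (x, y) pairs seen so far

def sgd_step(x, y, lr=0.05):
    global w
    grad = 2 * (w * x - y) * x   # gradient of (w*x - y)^2 with respect to w
    w -= lr * grad

def observe(x, y, rehearse=8):
    replay.append((x, y))
    sgd_step(x, y)                                          # learn the new example
    for rx, ry in random.sample(replay, min(rehearse, len(replay))):
        sgd_step(rx, ry)                                    # rehearse old examples

# "Task A" (y = 2x) followed by "task B" (y = 3x): without rehearsal w would end
# up near 3; with it, w settles between the tasks instead of forgetting task A.
for _ in range(300):
    x = random.uniform(-1, 1); observe(x, 2 * x)
for _ in range(300):
    x = random.uniform(-1, 1); observe(x, 3 * x)
print(round(w, 2))
```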
wolrah
> Human cognition was basically bruteforced by evolution--
"Brute forced" implies having a goal of achieving that and throwing everything you have at it until it sticks. That's not how evolution by natural selection works, it's simply about what organisms are better at surviving long enough to replicate. Human cognition is an accident with relatively high costs that happened to lead to better outcomes (but almost didn't).
> why would it be impossible to achieve the exact same result in silicon
I personally don't believe it'd be impossible to achieve in silicon using a low level simulation of an actual human brain, but doing so in anything close to real-time requires amounts of compute power that make LLMs look efficient by comparison. The most recent example I can find in a quick search is a paper from 2023 that claims to have simulated a "brain" with neuron/synapse counts similar to humans using a 3500 node supercomputer where each node has a 32 core 2 GHz CPU, 128GB RAM, and four 1.1GHz GPUs with 16GB HBM2 each. They claim over 126 PFLOPS of compute power and 224 TB of GPU memory total.
At the time of that paper, that computer would have been in the top 10 on the Top500 list, and it took one to two minutes of real time to simulate one second of the virtual brain. The compute requirements are absolutely immense, and that's the easy part. We're pretty good at scaling computers if someone can be convinced to write a big enough check for it.
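For scale, the quoted figures imply a slowdown of roughly two orders of magnitude relative to real time:

\[
\frac{60\text{--}120\ \text{seconds of wall-clock time}}{1\ \text{second of simulated brain time}} \approx 60\text{--}120\times\ \text{slower than real time}.
\]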
The hard part is having the necessary data to "initialize" the simulation into a state where it actually does what you want it to.
> especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Creating convincing text from a statistical model that's devoured tens of millions of documents is not intelligent use of language. Also every LLM I've ever used regularly makes elementary school level errors w/r/t language, like the popular "how many 'r's are there in the word strawberry" test. Not only that, but they often mess up basic math. MATH! The thing computers are basically perfect at, LLMs get wrong regularly enough that it's a meme.
There is no understanding and no intelligence, just probabilities of words following other words. This can still be very useful in specific use cases if used as a tool by an actual intelligence who understands the subject matter, but it has absolutely nothing to do with AGI.
random3
I think dismissing the possibility of evolving AI is simply ignorance (and a huge blind spot).
This said, I think the author's point is correct. It's more likely that unwanted effects (risks) from the intentional use of AI by humans are something that precedes any form of "independent" AI. It already happens, it always has; it's just getting better.
Hence ignoring this fact makes the "independent" malevolent AI a red herring.
On the first point - LLMs have sucked almost all the air in the room. LLMs (and GPTs) are simply one instance of AI. They are not the beginning and most likely not the end (just a dead end) and getting fixated on them on either end of the spectrum is naive.
ectospheno
The worst thing Star Trek did was convince a generation of kids anything is possible. Just because you imagine a thing doesn’t make it real or even capable of being real. I can say “leprechaun” and most people will get the same set of images in their head. They aren’t real. They aren’t going to be real. You imagined them.
dsr_
That's not Star Trek, that's marketing.
Marketing grabbed a name (AI) for a concept that's been around in our legends for centuries and firmly welded it to something else. You should not be surprised that people who use the term AI think of LLMs as being djinn, golems, C3PO, HAL, Cortana...
random3
Do you maybe have a better show recommendation for kids - maybe Animal Farm?
How is convincing people that things within the limits of physics are possible wrong, or even "the worst thing"?
Or do you think anything that you see in front of you didn't seem like Star Trek a decade before it existed?
jncfhnb
I think you could make AGI right now tbh. It’s not a function of intelligence. It’s just a function of stateful system mechanics.
LLMs are just a big matrix. But what about a four-line loop of code that looks like this:
```
while True:
    update_sensory_inputs()
    narrate_response()
    update_emotional_state()
```
LLMs don’t experience continuous time and they don’t have an explicit decision making framework for having any agency even if they can imply one probabilistically. But the above feels like the core loop required for a shitty system to leverage LLMs to create an AGI. Maybe not a particularly capable or scary AGI, but I think the goalpost is pedantically closer than we give credit.
MountDoom
> I think you could make AGI right now tbh.
Seems like you figured out a simple method. Why not go for it? It's a free Nobel prize at the very least.
jncfhnb
Will you pay for my data center and operating costs?
nonethewiser
Where is the "what the thing cares about" part?
When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.
Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe it's more interesting if it stays the same.
I'm not arguing artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.
jncfhnb
That is more or less the concept I meant to evoke by updating an emotional state every tick. Emotions are in large part a subconscious system dynamic to organize wants and needs. Ours are vastly complicated under the hood but also kind of superficial and obvious in their expression.
To propose the dumbest possible thing: give it a hunger bar and desire for play. Less complex than a sims character. Still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.
Bots are already pretty good at figuring out environment navigation to goal-seek towards complex video game objectives. Give them an alternative goal to maximize certainty towards emotional homeostasis, and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.
Edit: specifically, I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That's how you get to AGI. Human minds are not brains. They're systems driven by sensory and hormonal interactions. The brain does encoding and decoding, information retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.
LLM-only approaches that are not part of a system-loop framework ignore this important step. It's NOT about raw intellectual power.
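A rough sketch of the loop being described, with hypothetical names throughout (the sensors, drives, and call_llm are stand-ins, not a real API): salience falls out of diffing sensory input against simple drives, and the LLM is only the narration/reasoning module.

```
import random
import time

def read_sensors():
    # Stand-in for real sensory input: a handful of numeric channels.
    return {"light": random.random(), "sound": random.random(), "battery": random.random()}

def salient_changes(prev, curr, threshold=0.2):
    # Naive "git diff" of sensory inputs; salience here is a fixed threshold,
    # where a learned agent would weight changes by how they affect its drives.
    return {k: (prev[k], curr[k]) for k in curr if abs(curr[k] - prev[k]) > threshold}

def call_llm(prompt):
    # Placeholder for a call to a language model used as the "reasoning" module.
    return f"(model narration for: {prompt[:60]}...)"

def run_agent(steps=5):
    emotions = {"hunger": 0.0, "boredom": 0.0}   # crude drives: hunger bar + desire for play
    prev = read_sensors()
    for _ in range(steps):
        curr = read_sensors()
        changes = salient_changes(prev, curr)
        prompt = f"Emotional state: {emotions}. Salient changes: {changes}. What should I do next?"
        print(call_llm(prompt))
        # Drives drift each tick; acting on the world would push them back toward homeostasis.
        emotions["hunger"] = min(1.0, emotions["hunger"] + 0.1)
        emotions["boredom"] = min(1.0, emotions["boredom"] + (0.0 if changes else 0.1))
        prev = curr
        time.sleep(0.1)

if __name__ == "__main__":
    run_agent()
```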
jjkaczor
... well, humans are not always known for making correct, logical or sensical decisions when they update their input loops either...
48terry
Wow, who would have thought it was that easy? Wonder why nobody has done this incredibly basic solution to AGI yet.
jncfhnb
The framework is easy. The implementation is hard and expensive. The payoff is ambiguous. AGI is not a binary thing that we either have or don’t. General intelligence is a vector.
And people are working on this.
zeroonetwothree
This seems to miss a core part of intelligence, which is a model of the world and the actors in it (theory of mind).
jncfhnb
That is an emergent property that the system would learn to navigate the world with as a function of sensory inputs and “emotional state”.
Video game bots already achieve this to a limited extent.
Jensson
> while True: update_sensory_inputs(); narrate_response(); update_emotional_state()
You don't think that has already been made?
jncfhnb
Sure, probably, to varying levels of implementation details.
lambaro
"STEP 2: Draw the rest of the owl."
jncfhnb
I disagree.
Personally I found the definition of a game engine as
```
while True:
    update_state()
    draw_frame()
```
To be a profound concept. The implementation details are significant. But establishing the framework behind what we’re actually talking about is very important.
tantalor
Peak HN comment. Put this in the history books.
fullstackchris
Has anyone else noticed that HN is starting to sound a lot like reddit / discussion of similar quality? Can't hang out anywhere now on the web... I used to be on here daily but with garbage like this it's been reduced to 2-3 times per month... sad
fullstackchris
you do understand this would require re-training billions of weights in realtime
and not even "trainingl really.... but a finished and stably functioning billion+ param model updating itself in real time...
good luck, see you in 2100
in short, what I've been shouting from a hilltop since about 2023: LLM tech alone simply won't cut it; we need a new form of technology
roxolotl
The point of the article isn’t that abstract superintelligent AGI isn’t scary. Yes, the author says that’s unlikely, but that paragraph at the start is a distraction.
The point of the article is that humans wielding LLMs today are the scary monsters.
irjustin
But that's always been the case? Since we basically discovered... Fire? Tools?
otikik
Yes but the narrative tries to make it about the tools.
"AI is going to take all the jobs".
Instead of:
"Rich guys will try to delete a bunch of jobs using AI in order to get even more rich".
gamerdonkey
Those are examples that are discussed in the article, yes.
snarf21
The difference in my mind is scale and reach and time. Fire, tools, war are localized. AGI could have global and instant and complete control.
boole1854
If anyone knows of a steelman version of the "AGI is not possible" argument, I would be curious to read it. I also have trouble understanding what goes into that point of view.
omnicognate
If you genuinely want the strongest statement of it, read The Emperor's New Mind followed by Shadows of the Mind, both by Roger Penrose.
These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.
Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.
nonethewiser
Can you critique my understanding of his argument?
1. Any sufficiently powerful, consistent formal system (including computers) has true statements that cannot be proven within that system.
2. Humans can see the truth of some such unprovable statements.
Which is basically Gödel's Incompleteness Theorem. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
Maybe a more ELI5
1. Computers follow set rules
2. Humans can create rules outside the system of rules they follow
Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just haven't been able to fully express the rules under which humans operate.
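For reference, an informal rendering of Gödel's first incompleteness theorem; the Penrose–Lucas move is the extra premise that humans can "see" that the Gödel sentence is true:

\[
F \ \text{consistent, effectively axiomatized, and able to express basic arithmetic}
\;\Longrightarrow\; \text{there is a sentence } G_F \text{ that is true but } F \nvdash G_F.
\]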
myrmidon
Gonna grab those, thanks for the recommendation.
If you are interested in the opposite point of view, I can really recommend "Vehicles: Experiments in Synthetic Psychology" by V. Braitenberg.
Basically builds up to "consciousness as emergent property" in small steps.
Chance-Device
To be honest, the core of Penrose’s idea is pretty stupid: that we can understand mathematics despite the incompleteness theorem being a thing, therefore our brains must use quantum effects allowing us to understand it. Instead of just saying, you know, that we use a heuristic and simply guess that it’s true. I’m pretty sure a classical system can do that.
nemo1618
AI does not need to be conscious for it to harm us.
amatecha
My layman thought about that is that, with consciousness, the medium IS the consciousness -- the actual intelligence is in the tangible material of the "circuitry" of the brain. What we call consciousness is an emergent property of an unbelievably complex organ (that we will probably never fully understand or be able to precisely model). Any models that attempt to replicate those phenomena will be of lower fidelity and/or breadth than "true intelligence" (though intelligence is quite variable, of course)... But you get what I mean, right? Our software/hardware models will always be orders of magnitude less precise or exhaustive than what already happens organically in the brain of an intelligent life form. I don't think AGI is strictly impossible, but it will always be a subset or abstraction of "real"/natural intelligence.
walkabout
I think it's also the case that you can't replicate something actually happening, by describing it.
Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.
We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.
Chance-Device
The only thing I can come up with is that compressing several hundred million years of natural selection of animal nervous systems into another form, but optimised by gradient descent instead, just takes a lot of time.
Not that we can’t get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.
And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.
danielbln
I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.
alexwebb2
> correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute
Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.
Lab grown diamonds are a thing.
squidbeak
Even this is a weak idea. There's nothing that restricts the term 'AGI' to a replication of animal intelligence or consciousness.
sdenton4
The overwhelming majority of animal species never developed (what we would consider) language processing capabilities. So AGI doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply to not die for long enough to reproduce...
itsnowandnever
the penrose-lucas argument is the best bet: https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument
the basic idea being that either the human mind is NOT a computation at all (and it's instead spooky unexplainable magic of the universe) and thus can't be replicated by a machine, OR it's an inconsistent machine with contradictory logic. and this is a deduction based on Gödel's incompleteness theorems.
but most people that believe AGI is possible would say the human mind is the latter. technically we don't have enough information today to know either way but we know the human mind (including memories) is fallible so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. but that's also kind of a paradox because that "belief" in unproven information is a cornerstone of consciousness.
throw7
The steelman would be that knowledge is possible outside the domain of Science. So the opposing argument to evolution as the mechanism for us (the "general intelligence" of AGI) would be that the pathway from conception to you is not strictly material/natural.
Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.
slow_typist
In short, by definition, computers are symbol manipulating devices. However complex the rules of symbol manipulation, it is still a symbol manipulating device, and therefore neither intelligent nor sentient. So AGI on computers is not possible.
progbits
A computer can simulate a human brain at the subatomic level (in theory). Do you agree this would be "sentient and intelligent" and not just symbol manipulating?
If yes, everything else is just optimization.
disambiguation
I suppose intelligence can be partitioned as less than, equal to, or greater than human. Given the initial theory depends on natural evidence, one could argue there's no proof that "greater than human" intelligence is possible - depending on your meaning of AGI.
But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.
foxyv
I think the best argument against us ever finding AGI is that the search space is too big and the dead ends are too many. It's like wandering through a monstrously huge maze with hundreds of very convincingly fake exits that lead to pit traps. The first "AGI" may just be a very convincing Chinese room that kills all of humanity before we can ever discover an actual AGI.
The necessary conditions for "Kill all Humanity" may be the much more common result than "Create a novel thinking being." To the point where it is statistically improbable for the human race to reach AGI. Especially since a lot of AI research is specifically for autonomous weapons research.
BoxOfRain
Is there a plausible situation where a humanity-killing superintelligence isn't vulnerable to nuclear weapons?
If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity don't get me wrong, but it'd be a damn sight better than extinction.
me_again
"As soon as profit can be made" is exactly what the article is warning about. This is exactly the "Human + AI" combination.
Within your lifetime (it's probably already happened) you will be denied something you care about (medical care, a job, citizenship, parole) by an AI which has been granted the agency to do so in order to make more profit.
gota
> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
I'm on board with being skeptical that LLMs will lead to AGI, but there being no possibility at all seems like such a strong claim. Should we really bet that there is something special (or even 'magic') about our particular brain/neural architecture/nervous system + senses + gut biota + etc.?
Don't, like, crows (or octopuses, or elephants, or ...) have a different architecture and display remarkable intelligence? OK, maybe not different enough (not 'digital') and not AGI (not 'human-level'), but already -somewhat- different, which should hint at the fact that there -can- be alternatives.
Unless we define 'human-level' to be 'human-similar'. Then I agree - "our way" may be the only way to make something that is "us".
kulahan
We still haven’t figured out what intelligence even is. Depending on what you care about, the second-smartest animal in the world varies wildly.
squidbeak
This is a bogus argument. There's a lot we don't understand about LLMs, yet we built them.
0xffff2
There's a lot _I_ don't understand about LLMs, but I strongly question whether there is a lot that the best experts in the field don't understand about LLMs.
simianparrot
We built something we don’t understand by trial and error. Evolution took a few billion years getting to intelligence, so I guess we’re a few sprints away at least
pixl97
> only way to make something that is "us".
Which many people seem to neglect: instead of making us, we make an alien.
Hell, making us is a good outcome. We at least somewhat understand us. Setting off a bunch of self-learning, self-organizing code to make an alien, you'll have no clue what comes out the other side.
ImPleadThe5th
I'm kind of shocked by this thread. I can't get over the hubris that we think concepts that were introduced to society at large by science fiction in and before the 19th century are _inevitable_ just because we made a really really good predictive text engine in the 21st century.
Just because a concept exists in a Star Trek episode does not guarantee technology moving in that direction. I understand art has an effect on reality, but how hard are we spinning our gears because some writer made something so compelling it lives in our collective psyche?
You can point to the communicator from Star Trek and I'll point to the reanimation of Frankenstein's monster.
raldi
The author comes so close to getting it, with the paragraph about how if you drop a human into an environment, they inevitably take over as the deadliest and most powerful creature.
But the next step is to ask why; in the case of the Gruffalo it was obvious: fangs, claws, strength, size…
In the case of humans, it’s because we’re the most intelligent creature in the forest. And for the first time in our history, we’re about to not be.
lif
yes, and:
ruthlessness + strength + WMD =/= intelligence
us-merul
"Humans will do what they’ve always tried to do—gain power, enslave, kill, control, exploit, cheat, or just be lazy and avoid the hard work—but now with new abilities that we couldn’t have dreamed of." -- A pretty bleak, and also accurate, observation of humanity. I have to hope that the alternative sentence encompassing all of the good can lead to some balance.
Flamingoat
No, it is not accurate at all. There are some people who do all of these, sure; however, the vast majority of people live pretty ordinary lives where they do very little of what is described.
I actually think it is very intellectually lazy to be this cynical.
ViktorRay
I think it is partially true. The vast majority of human beings don’t act like that. But it seems the ones in power or close proximity to power do.
This is why it is important to have societies where various forms of power are managed carefully. Limited constitutional government with guaranteed freedoms and checks and balances for example. Regulations placed on mega corporations is another example. Restrictions to prevent the powerful in government or business (or both!) from messing around with the rest of us…
cdirkx
The problem is that technology exponentially increases the negative effects of bad actors. The worst a sociopath could do in the stone age was ruin his local community; while today there are many more dystopian alternatives.
Flamingoat
I don't think that is true either. There have been despots throughout all of human history that have killed huge amounts of people with technology that is considered primitive now.
Whereas much of the technology we have today has a massive positive benefit. Simply having access to information today is amazing; I have learned how to fix my own vehicles and bicycles and do house repairs just from YouTube.
As I said being cynical is being intellectually lazy because it allows you to focus on the negatives and dismiss the positives.
adornKey
I don't think it's that accurate. Evil people are rare - and lazy people usually don't cause problems. The most real damage comes from human stupidity - from the mass of people that just want to help and do something good. Stupid people blindly believe anything they're told. And they do a lot of really bad things not because they're evil and lazy, but because they want to help achieve even the most stupid goal. Usually even nasty propagandist leaders aren't that evil - often they're just an intellectual failure, or have some mental issues. They themselves don't do much practical evil - the mob of nice, stupid people does the dirty work, because they just want to help.
wmeredith
I really liked this article, but it is pessimistic. Unfortunately that seems to be the culture du jour. Anger and fear drive engagement effectively, as it always has. "If it bleeds, it leads" has been a thing in news organizations since at least the 70s.
If we ignore the headlines peddled by those who stand to benefit the most from inflaming and inciting, we live in a miraculous modern age largely devoid of much of the suffering previous generations were forced to endure. Make no mistake there are problems, but they are growing exponentially fewer by the day.
An alternate take: humans will do what they’ve always tried to do—build, empower, engineer, cure, optimize, create, or just collaborate with other humans for the benefit of their immediate community—but now with new abilities that we couldn’t have dreamed of.
pixl97
I mean, any article that doesn't include both is incomplete.
> "If it bleeds, it leads" has been a thing in news organizations since at least the 70s.
The term yellow journalism is far older.
mannanj
this is accurate because of the few who do it. however, the cautionary and hopeful tale behind it is that the majority, when they stand up against it, can change the distribution of power. today, however, we're comfortable and soft and too scared to act - so posts like this remind us to gain some courage to stand up for change.
woeirua
There's this weird disconnect in tech circles, where everyone is deathly afraid of AGI, but totally asleep on the very real possibility of thermonuclear war breaking out in Europe or Asia over the next 10 years. There's already credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine which likely would've spiraled out of control. AGI might happen, but the threat of nuclear war keeps me up at night.
runjake
100%.
Except the things that kill most of us[1] keep me up at night.
1. https://ourworldindata.org/does-the-news-reflect-what-we-die...
masfuerte
I'm more concerned with the ways that other people can kill me than with the ways they kill themselves.
nonethewiser
>There's already credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine which likely would've spiraled out of control.
I do agree nukes are a far more realistic threat. So this is kind of an aside and doesn't really undermine your point.
But I actually think we widely misunderstand the dynamic of using nuclear weapons. Nukes haven't been used for a long time and everyone kind of assumes using them will inevitably lead to escalation which spirals into total destruction.
But how would Russia using a tactical nuke in Ukraine spiral out of control? It actually seems very likely that it would not be met in kind. Which is absolutely terrifying in its own right. A sort of normalization of nuclear weapons.
Stevvo
There is no evidence that use of tactical nuclear weapons in Ukraine would spiral out of control. I like to think that the US/UK/France would stay out of it, if only because the leaders value their own lives if not those of others.
j2kun
> everyone is deathly afraid of AGI
I think this is a vast overstatement. A small group of influential people are deathly afraid of AGI, or at least using that as a pretext to raise funding.
But I agree that there are so many more things we should be deathly afraid of. Climate change tops my personal list as the biggest existential threat to humanity.
squidbeak
Other doomsday risks aren't any reason to turn our heads away from this one. AI's much more likely to end up taking an apocalyptic form if we sleep on it.
woeirua
We should worry more about doomsday risks that are concrete and present today. Despite the prognostications of the uber wealthy, the emergence of AGI is not guaranteed. It likely will happen at some point, but is that tomorrow or 200 years in the future? We can’t know for sure.
teucris
But this isn’t a suggestion to turn away from AI threats - it’s a matter of prioritization. There are more imminent threats that we know can turn apocalyptic that swaths of people in power are completely ignoring and instead fretting over AI.
zeroonetwothree
Everyone thinks their own field is the most important and deserving of attention and funding. Big surprise.
jjtheblunt
> credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine
I've not seen that. can you link to it?
proto-n
Well, one of these is something that most reasonable people work on avoiding, while the other is something that a huge capitalist industrial machine is working to achieve like their existence depends on it.
jvanderbot
About as helpful as "Guns don't kill people ... "
And equally rebutted by Eddie Izzard's "Well, I think the gun helps".
seniortaco
My thought as well. Nuclear weapons are also horrifying.
And with LLMs, it's difficult to prevent the proliferation to bad actors.
It seems like we're racing towards a world of fakery where nothing can be believed, even when wielded by good actors. I really hope LLMs can actually add value at a significant level.
rootusrootus
> It seems like we're racing towards a world of fakery where nothing can be believed
Spend a couple minutes on social media and it is clear we are already there. The fakes are getting better, and even real videos are routinely called out as fake.
The best that I can hope for is that we all gain a healthy dose of skepticism and appreciate that everything we see could be fake. I don't love the idea of having to distrust everything I see, but at this point it seems like the least bad option.
But I worry that what we will experience will actually be somewhat worse. A sufficiently large number of people, even knowing about AI fakery, will still uncritically believe what they read and see.
Maybe I am being too cynical this morning. But it is hard to look at the state of our society today and not feel a little bleak.
fruitworks
All of the solutions to AI "safety" are analogous to gun control: it is a centralization of power.
pixl97
That is assuming that ASI doesn't centralize power itself. I mean, if you are a non-human part of the animal kingdom you'd probably say that humans have centralized power around themselves.
fruitworks
I don't assert that there is a political solution to AI. It's possible that both avenues result in a total centralization of power.
Cthulhu_
Are you claiming a lack of gun / AI control is democratizing? That's not working for (the lack of) gun control in the US at the moment though.
Compare also with capitalism; unchecked capitalism on paper causes healthy competition, but in practice it means concentration of power (monopolies) at the expense of individuals (e.g. our accumulated expressions on the internet being used for training materials).
fruitworks
> Are you claiming a lack of gun / AI control is democratizing?
This is obviously the case. It results in a greater distribution of power.
> That's not working for (the lack of) gun control in the US at the moment though.
In the US, one political party is pro gun-control and the other is against. The party with the guns gets to break into the Capitol, and the party without the guns gets to watch. I expect the local problem of AI safety, like gun safety, will also be self-solving in this manner.
Eventually, gun control will not work anywhere, regardless of regulation. The last time I checked, you don't need a drone license. And what are the new weapons of war? Not guns. The technology will increase in accessibility until the regulation is impossible to enforce.
The idea that you can control the use of technology by limiting it to some ordained group is very brittle. It is better to rely on a balance of powers. The only way to secure civilization in the long run is to make the defensive technology stronger than the offensive technology.
rootusrootus
> Compare also with capitalism; unchecked capitalism on paper causes healthy competition
Is that not conflating capitalism with free markets? I have way more confidence in the latter than the former.
bilater
I sort of agree, but I hate the premise of this article because it sneakily focuses only on the potential harm of human + AI collaboration, without acknowledging the good.
That said, I agree that human + AI can cause damage and it’s precisely why, from a game theory perspective, the right move is to go full steam ahead. Regulation only slows down the good actors.
Valhalla is within reach, but we have to leap across a massive chasm to get there. Perhaps the only way out of this is through: accelerating fast enough to mitigate the “mid-curve” disasters, such as population revolt due to mass inequality or cyberattacks caused by vulnerabilities in untested systems.
Timsky
> We don’t need to worry about AI itself, we need to be concerned about what “humans + AI” will do. Humans will do what they’ve always tried to do—gain power, enslave, kill, control, exploit, cheat, or just be lazy and avoid the hard work—but now with new abilities that we couldn’t have dreamed of.
Starting with the AI itself: LLMs sold as AI are the greatest misdirection. Text generation using Markov chains is not particularly intelligent, even when it is looped back through itself a thousand times and appears like an intelligent conversation. What is actually being sold is an enormous matrix trained on terabytes of human-written, high-quality texts, obviously in violation of all imaginable copyright laws.
Here is a gedanken experiment to test if an AI has any intelligence: until the machine starts to determine and resolve contradictions in its own outputs w/o human help, one can sleep tight. Human language is a fuzzy thing that is not quite suitable for a non-contradictory description of the world. Building such a machine would require resolving all the contradictions humanity has ever faced in a unified way. Before it happens, humanity will be drowned in low-quality generated LLM output.
Insanity
So “guns don’t kill people, people with guns kill people”.
But for AI I’m not sure that proposition will hold indefinitely. Although I do think we are far away from having actual AGI that would pose this threat.
Still, the author has a good but obvious point.
supermatt
> guns don’t kill people
* SIG P320 enters the chat *
kazinator
> Just like a hammer, sword, or a rifle lying on a ground is nothing to be feared, so too is AI.
AI is not just going to lie on the ground until someone picks it up to do harm.
An intelligent rifle with legs is something to be feared.
You cannot compare AI to inanimate objects under human control which have no agency. Especially not if you are bringing the imaginary AGI into the conversation.
The idea that AGI is just a hammer is absurd.
JonathanRaines
I agree with what you're worried about - humans using AI to do bad things. However, have you considered also being worried about what AI can do? (Can you ever have too much existential dread?) You cast it as a tool; it may be the first thing we've made that goes beyond that.
I struggle to understand how people can just dismiss the possibility of artificial intelligence.
Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.