Stone Soup AI (2024)
27 comments
February 25, 2025 · hansonkd
jpadkins
The analogy breaks down because physical property and intellectual property are different. When we input creative works into training sets, we do not withhold those works from anyone else! Digital copies are different from scarce resources. *
Also, all the AI ToS I've read have stated they will use my inputs to improve their services. I haven't seen an AI service state they won't use my inputs.
* Against Intellectual Property is a good book that explores this idea https://cdn.mises.org/15_2_1.pdf
dfltr
And to top it all off, they're charging us for the soup, and it's getting more expensive every time we give them another ingredient.
RodgerTheGreat
It would be more accurate to imagine a version of the tale where the stone soup chef rifles through people's houses to collect ingredients without permission (if they were against it surely they would've opted out of his services and obtained guard dogs?), and then opened a stand to sell the soup in the town square at premium prices while tainting the wares of his fellow vendors with his leftover slop.
lsy
Adopting this perspective would improve the quality of efforts around this technology. Instead of thinking of it as somehow creating an "intelligence", seeing it as a complex lens on the training data that is controlled by the prompt helps you understand that the output isn't generated by the model, but by people. And various existing pieces of human effort are brought into focus and collimated by nudging the lens in different directions with a "prompt". The user then gives those pieces meaning and determines whether the result is useful or not.
This makes certain things more clear: notions of "truth" are not in play beyond statistical happenstance, certain efforts to make outputs uniform are more trouble than they're worth, and valuable use cases are strongly correlated with the ability and convenience of the user to confirm the usefulness of the result.
K0balt
I think this is relevant and adjacent at least: https://open.substack.com/pub/ctsmyth/p/the-generative-ai-re...
kridsdale1
I see the models as oracular seeing-stones like a wizard might use.
Ponder the orb! Probe its secrets!
Holographically, all our text is encoded in there. If you know how to query.
xpe
Using various metaphors carefully and fluidly is key. No single metaphor is sufficient -- not this one, nor any other.
I say: go back to basics. One good foundational point is dispelling confusion and conflation around "intelligence". So many people have woefully narrow and unexamined notions of "intelligence". It wouldn't be unfair to say many people have broken definitions. Broken because they just aren't good enough to make meaningful progress in a modern world where many kinds of agents display many different kinds of intelligence. Such broken definitions are often too specific; too arbitrary; too rooted in binary thinking.
Many of our current language patterns are liabilities. Not to mention corporate and organizational cultures where hazy definitions slide around and few people will admit that they don't really know what others mean by the term. Sometimes it feels like a big charade where no one wants to hurt anyone's feelings nor appear uninformed. And so it goes, some kind of elaborate mystical ritual where the confused participants lead each other further into madness.
With this in mind, I find tremendous value in Stuart Russell's definition of intelligence: the ability of an agent to solve some task. An agent is anything that makes a decision: a human, an animal, a system of any kind. This definition intentionally leaves out any notion of (a) humans; (b) consciousness; (c) some arbitrary quality line. This usage cuts through so much bullsh*t. I highly recommend finding a way to shift conversations towards it wherever possible. This isn't easy in my experience. We have so much baggage and crufty thinking, even when we're able to put aside our baser instincts.
One might say that Russell's definition just "kicks the can down the road". I don't think so. It encourages people to define their metrics a bit more clearly -- hopefully out loud or on paper -- for a particular context. It is one step closer to clarifying things. One step in the right direction -- to stop pretending like we all know what each other means -- and instead actually pose an answerable question.
Now, what about "general" intelligence, you say? Well, one step at a time. Wait until a group of people have demonstrated some ability to find some kind of consensus on particular tasks. It is hard work to socialize these ideas. Defining general intelligence in meaningful ways is really hard and contentious. It often becomes a lightning rod for any number of other disagreements.
As one example, look at the shitstorm around various sociological attempts to measure the general aspects of intelligence in humans. Without attempting to summarize it in any detail, there has been a huge dumpster fire involving: poor statistical understanding, shoddy research, tone-deaf communication, willful misinterpretation, accusations of racism, and so on. There are pockets of truth in there, but even trying to find the core nuggets of useful truth makes everything radioactive, depending on the context. A typical person in modern culture is usually unable to calmly make sense of these issues, and who can blame them? Statistical understanding doesn't grow on trees. The same goes for understanding machine learning theory.
aamar
I would ask anyone making these kinds of deflationary arguments to explain whether the same argument can be applied to the best of human creative work. Humans also use the raw materials of others, whether that's words, musical scales, genres, idioms, or anecdotes.
Where is the line between recapitulation and innovation? Is it a line that we think current LLMs are definitely not crossing, and definitely will not cross in the near future? If so, make that argument.
sdwr
> Good artists copy, great artists steal
beepbooptheory
From TFA:
> To be fair, although the story is intended to be debunking, the folktale also has a positive moral that applies to AI. The collective resources of many humans can make something that no individual could, and that really is magical.
It's not deflationary; it's just about reframing, reattributing what is so impressive about LLMs. We get so caught up in the tech itself -- that it exists at all, understandably, considering the way the discourse goes -- that we don't stop to appreciate how it's even possible at all; that is, all of us (broadly).
So many people just can't get past the sci-fi mentality: they make the current AI into a kind of weird but promising baby, but we can also, much more easily and nicely, consider it a beautiful reflection of human writing at large.
And what's even with all this constant pressure for it to be more than that? All the arguments, philosophical gotchas, weird Skinnerism... It's like you're given a perfectly good hamburger and all you can say is "this is pretty much a steak if you squint".
wbakst
i like this so much
"stone soup" could be seen as a trick (to get the villagers to provide that which they were previously unwilling), but i like that it's multiple different villagers who provide individual ingredients -- it's the coming together of everyone and their individual contributions that ultimately makes the soup so good
htrp
This was one of the talks at NeurIPS 2024 in December -- highly recommend it.
kridsdale1
The thesis appears to be that the CEOs are hoodwinking the populace into giving up their cultural wealth to build proprietary systems.
But TFA also mentioned Wikipedia. Crowd-RLHF trained models are the same. The people know they are volunteering their own labor and information to improve the model because the model gives them value and they want to share the value with humankind.
Everybody enjoys the soup.
xerox13ster
We gave Wiki the info. AI took the info. These things are not the same.
bigfishrunning
AI is only stone soup if a) you get charged for the soup after adding your carrots and b) they heat the water by burning your house down
pzh
This comparison overlooks the fact that, in the original folktale, the stone soup remains a soup -- it never turns into a ribeye steak. Similarly, in the AI version, an LLM will always remain an LLM.
Hasu
A couple of thoughts:
1) The story of stone soup is the story of how some grifters got a free meal. I don't think it's moral instruction, or an example to be learned from, unless you are a grifter.
2) In the stone soup example and in cases like Wikipedia, the soup is freely shared with everyone, regardless of their contributions. Is AI like that, or in the AI stone soup story, are the travelers charging everyone for a bowl of soup? Doesn't that change the story quite a bit?
sdwr
If you take off your cynicism-tinted glasses, it's the story of how community is more than the sum of its parts, and how it sometimes needs a "beautiful lie" as a catalyst (like justice, or freedom!)
Hasu
If you think that community needs a group of strangers to con them into coming together and being more than the sum of its parts, you are more cynical than I am.
debo_
I thought this was going to be about the NPC/monster AI in Dungeon Crawl Stone Soup.
card_zero
It made me search to see if an actual AI has been trained to play DCSS, and inevitably yes.
jncfhnb
I assumed the same
rezmason
I think that if we tried, we could come up with a pretty large cookbook of stone soups.
kridsdale1
Every soup is a stone soup if you consider the metal pot as a stone.
In the soup story the villagers freely gave up their carrots and onions and the travelers didn't give any guarantees that they wouldn't be consumed.
In the AI analogy, it is a bit closer in my mind if the travelers would say "Don't worry your onions and carrots and garnishes won't be consumed by us! Put them in the pot and we will strain them out, they are still yours to keep!"
We, the villagers, are dumping our data into the AI soup with a promise that it won't be used when we are using the API or check a little "private mode" box.