Playing in the Creek
121 comments · April 11, 2025
BrenBarn
It's a nice article. In a way though it kind of bypasses what I see as the main takeaways.
It's not about AI development, it's about something mentioned earlier in the article: "make as much money as I can". The problems that we see with AI have little to do with AI "development", they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting because if you're doing something for the enjoyment or challenge of it, you won't do it at a scale that creates more enjoyment than you personally can experience. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.
If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed because people would be more interested in tinkering with them than making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models and people would talk about how cool they are and go on not using them in any load-bearing way.
As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
The second missed takeaway is at the end. He says Anthropic is noticing the coquinas as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization, but by an external authority (like parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.
As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.
ChrisMarshallNY
> As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
I noticed that happening around the turn of the century, when "The Web" suddenly became all about the Benjamins.
It's sort of gone downhill since.
For myself, I've retired, and putter around in my "software garden." I do make use of AI, to help me solve problems, and generate code starts, but I am into it for personal satisfaction.
FollowingTheDao
[flagged]
JKCalhoun
I'm retired as well, dislike what we have for the internet these days.
In reflecting on my career I can say I got into it for the right reasons. That is, I liked programming — but I also found out fairly quickly that not everyone could do it, and so it could be a career path that would prove lucrative. And this in particular for someone who had no other likelihood, for example, of ever owning a home. I was probably not going to be able to afford graduate school (I had barely paid for state college by working minimum wage jobs throughout college and over the summers) and regardless I was not the most studious person. (My degree was in Education — I had expected a modest income as a career high school teacher.)
But as I say, I enjoyed programming at first. And when it arrived, the web was just a giant BBS as far as I was concerned and so of course I liked it. But it is possible to find a thing that you really like can go to shit over the ensuing decades. (And for that matter, my duties as an engineer got shittier as well as the career "evolved". I had not originally signed up for code reviews, unit tests, scrum, etc. Oh well.)
Money as a pursuit made sense to me after I was in the field and saw that others around me were doing quite well — able, as I say, to afford to buy a home — something I had assumed would always be out of reach for me (my single mother had always rented, I assumed I would as well — oh, I still had a modest college loan to pay off too). So I learned about 30-year home loans, learned about the real estate market in the Bay Area, learned also about RSUs, capital gains tax, 401Ks, index funds, etc.
But as is becoming a theme in this thread (?) at some point I was satisfied that I had done enough to secure a home, tools for my hobbies, and had raised three girls — paid for their college. I began to see the now burdensome career I was in as an albatross around my soul. The technology that I had once enjoyed, made my career on the back of, had gone sour.
hobs
Are you jealous or mad that they didn't do more for you? Neither is a good look really. What have you done for me lately?
nkozyra
> it's about something mentioned earlier in the article: "make as much money as I can".
I think it's a little deeper than that. It's the democratization of capability.
If few people have the tools, the craftsman is extremely valuable. He can make a lot of money without a glut of knowledge or real skill. In general, people don't have the tools and skills to catch up to where he is. He is wealthy with only frontloaded effort.
If everyone has the same tools, the craftsman still has value, because of the knowledge and skillset developed over time. He makes more money because his skills are valuable and remain scarce; he's incentivized to further this skillset to stay above the pack, continue to be in demand, and make more money.
If the tools do the job for you, the craftsman has limited value. He's an artifact. No matter how much he furthers his expertise, most people will just turn the tool on and get a good-enough product.
We're in between phases 2 and 3 at the moment. We still test for things like algorithm design and ask questions in interviews about the complexity of approaches. A lot of us still haven't moved on to the "ok but now what?" part of the transition.
The value now lies less in knowing how the automation works and improving our knowledge of the underlying design, and more in using the tools in ways that produce more value than the average Joe can. It's a hard transition for people who grew up thinking this was all you needed to get a comfortable or even lucrative life.
I'm past my SDE interview phase of life now, and in seeking engineers I'm looking less for people who know how to build a version of the tool and more for people who operate in the present, have accepted the change, and want to use what they have access to and add human utility to make the sum of the whole greater than the parts.
To me the best part of building software was the creativity. That part hasn't changed. If anything it's more important than ever.
Ultimately we're building things to be consumed. That hasn't changed. The creek started flowing in a different direction, and your job in this space is not to keep putting rocks where the water used to go, but to accept that things are different and adapt.
BrenBarn
I don't agree. "Capability" is a red herring. It's not about what we can do, it's about what we allow ourselves to do.
noduerme
This is such a well-written response. There's something intentionally soothing about this post that slowly turns into a jarring form of self-congratulation as it goes along. Congratulations for knowing there's a limit to wrecking your parents' property. Congratulations for being able to appreciate the sand on the beach, in some no doubt instagrammable moment of existential simplicity. Congratulations for being so smart that you could have blown up your hand. And for "Leetcoding", whatever the fuck that means. And for claiming you quit a shady job because you got bored (but possibly also grew a conscience). And then topped off by the final turn: "This is, of course, about artificial intelligence development". I'd only add one thing to your analysis: We've got a demo right here of a psyche that would prefer love to money (but mostly both), and it's still determined to foist bad things onto the world in a load-bearing way, as a bid for either, or whatever it can get. My parents used to call that "a kid that doesn't care if he gets good or bad attention, as long as he gets attention." I think that's the root driver for almost all the tech billionaires of the past 20 years, and the one thing that unites Bezos, Zuck, Jobs, Dorsey, Musk... it's: "Look dad, I didn't just take your money. I'm so smart I could'a blown off my hand with all those fireworks you bought me, but see? Two hands! Look how much money I made from your money! Why aren't you proud of me?! Where can I find love? Maybe if I tell people what a leetcoder I am and how I could be making BAD AI but I'm just making GOOD AI, then everyone will love me."
Don't get me wrong, I'm not immune to these feelings either. I want to do good work and I want people to love what I do. But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts. Like, so, GO FUCKING LAY WITH CLAMS, write a novel, the world is waiting for it if you're really a genius. Have the courage to say you have a conscience if you actually do. Leave the rest of us alone and stop polluting a world you don't understand with your childish greed and self-obsession.
bombcar
I’ve often wondered how, with billions of dollars, do you know someone actually loves you and not your money?
Complicated!
noduerme
I've got a particularly strong view on this, because I've got a brother who tried to get wildly rich in some seriously unethical ways to impress our father, and still never got a single word of praise from him. And who's miserable and unloved and been betrayed by the women he married... who married him for his money. He's so desperate for someone to come admire his cars and his TVs, to just come hang out with him. He pays for friends.
Me, I don't have billions of dollars, but I might be in the top 10% or something. And I just cringe when I see guys use their money and status or job title, or connections, or cars or shoes or... anything they have as opposed to who they are as a way to impress people. (Women, usually). I understand this is what they think they have to do. Like, I understand that's how primates function, and you're just doing what apes do, but do they seriously think they'll ever be able to trust anyone who pretends to like them after that person thinks they're rich?
Maybe I'm just lucky I got to watch it up close when I was a teenager. Lol. My brother's first wife, at his wedding, got up and gave a speech... she said, "my friends all said he was too short, but I told them he was taller when he was standing on his wallet". Some people laughed. I didn't. After fifteen years of screaming at each other and drug abuse, she committed suicide and he got with the next secretary who hated him but wanted his money. Oh well.
My answer has always been to appear to be poor as fuck until I know what drives someone. When I meet a girl, I'll open doors and always buy dinner... at a $2 taco joint. And make sure she offers to buy the next round of drinks. I'll play piano in a random bar, and make her sing along. I'll order her the cheapest beer. I'll show her a painting I made and tell her I can't make any money selling 'em, is why I'm broke. If anyone asks me what I do, I don't say SWE or CTO, I say I'm a writer or a musician between things. And I'll do this for months until I get to know a person. Yeah, it's a test. The girls I've had relationships with, the girl I'm with right now, passed it. She doesn't even want to know. She says, whatever you got, I could've been with someone richer than you but I didn't want that life, so play piano for me. I'm not saying I've got the key to happiness, or humility, and maybe I'm a total asshole too, but... at least I'm not an asshole who's so hollow they have to crow about their job or their money to find "love" from people who - let's say this - can not, and will not ever love them.
chipsrafferty
> But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts.
You've precisely defined why nobody takes LessWrong seriously.
doctoboggan
This is an excellent essay, and I feel similarly to the author, but couldn't have expressed it as nicely.
However if we are counting on AI researchers to take the advice and slow down then I wouldn't hold my breath waiting. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.
dachris
The paperclip maximizers are already here, but they are maximizing money.
One recent HN comment [0] comparing corporations and institutions to AI really stuck with me - those are already superhuman intelligences.
LinuxAmbulance
Corporations certainly have advantages over individuals, but classifying them as superhuman intelligences misses the mark. I'd go with a blind ravenous titan instead.
actionfromafar
I could imagine a Star Trek episode where someone says "I always assumed the paperclip optimizer was a parable for unchecked capitalism?"
bitethecutebait
> those are already superhuman intelligence(s)
... only because "unsafe" and "leaky" are a Ponzi's best-and-loves-to-be-roofied-and-abused friend ... you see, intelligence is only good when it doesn't irreversibly break everything to the point where most of the variety of the physical structure that evolved it and maintains it is lost.
you could argue, of course, and this is an abbreviated version, that a new physical structure then evolves a new intelligence that is adapted (emerged from and adjusts to) to the challenges of the new environment but that's not the point of already capable self-healing systems;
except if the destructive part of the superhuman intelligence is more successful with its methods of sabotage and disruption of
(a) 'truthy' information flow and
(b) individual and collective super-rational agency -- for the good of as many systems-internal entities as possible, as a precaution due to always living in uncertainty and being surrounded by an endless amount of variables currently tagged "noise"
-- than its counterpart is in enabling and propagating (a) and (b) ...
in simpler words, if the red team FUBARs the blue team or vice versa, the superhuman intelligence can be assumed to have cancer, or at least that some vital part of the force is otherwise corrupted.
chipsrafferty
A finance job is a zero-sum game. Most tech jobs are negative sum, in that they make the world worse. You have the wrong takeaway here. Companies like Amazon and Google and OpenAI and the like are not-so-slowly destroying our planet and companies like Citadel just move money around.
yapyap
> However if we are counting on AI researchers to take the advice and slow down then I wouldn't hold my breath waiting. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.
I doubt OP is counting on it; it's more about expressing what an optimal world would look like, so people can work towards it if they feel like it, or just about putting the idea out there.
unwind
Ah, this [1] meaning of tillering (bending wood to form a bow), not this [2] (production of side shoots in grasses). The joys of new words.
defrost
As I recall, tillering is more about shaping the bow to achieve an optimal bend and force delivery on release.
It's an iterative process of bending, shaping, bending again, and removing wood in stages.
red_admiral
The story of playing at damming the creek, or on the sand at the seaside, is wholesome and brought a smile to my face. Cracking the "puzzle" is almost the bad ending of the game, if you don't get any fun out of playing it anymore.
People should spend more of their time doing things because they're fun, not because they want to get better at them.
Maybe the apocalypse will happen in our lifetime, maybe not. I intend to have fun as much as I can in my life either way.
migueldeicaza
Vonnegut said it best:
https://richardswsmith.wordpress.com/2017/11/18/we-are-here-...
broabprobe
Huh, I wonder if he relayed this story multiple times; I'm only familiar with this version: https://www.goodreads.com/quotes/12020560-talking-about-when...
“(talking about when he tells his wife he’s going out to buy an envelope) Oh, she says well, you’re not a poor man. You know, why don’t you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I’m going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don’t know. The moral of the story is, is we’re here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don’t realize, or they don’t care, is we’re dancing animals.”
― Kurt Vonnegut
Thorrez
He ignores his wife's suggestion because, among other things, he wants to see some great looking babes. Maybe this isn't a guy whose philosophy I want to follow.
ryandrake
Looks like you're completely missing the point of the quote and instead rat-holing on one word that you don't like. HN in a nutshell.
OisinMoran
I love Vonnegut and this specific piece you link, but I'm not sure it's really talking about the same thing as the main link.
A_D_E_P_T
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.
Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
As Dwarkesh once asked:
> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> Shouldn’t we be expecting that kind of stuff?
I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
miningape
If anything the AI would want to put itself out of its misery after having memorised all those LinkedIn posts
tvc015
Aren’t semiautonomous drones already killing soldiers in Ukraine? Can you not imagine a future with more conflict and automated killing? Maybe that’s not seen as AI risk per se?
A_D_E_P_T
That's not "AI risk" because they're still tools that lack independent volition. Somebody's building them and setting them loose. They're not building themselves and setting themselves loose, and it's far from clear how to get there from here.
Dumb bombs kill people just as easily. One 80-year-old nuke is, at least potentially, more effective than the entirety of the world's drones.
ben_w
Oh, but it is an AI risk.
The analogy is with stock market flash-crashes, but those can be undone if everyone agrees "it was just a bug".
Software operates faster than human reaction times, so there's always pressure to fully automate aspects of military equipment, e.g. https://en.wikipedia.org/wiki/Phalanx_CIWS
Unfortunately, a flash-war from a bad algorithm, from a hallucination, from failing to specify that the moon isn't expected to respond to IFF pings even when it comes up over the horizon from exactly the direction you've been worried about finding a Soviet bomber wing… those are harder to undo.
UncleMeat
The LessWrong-style AI risk is "AI becomes so superhuman that it is indistinguishable from God and decides to destroy all humans and we are completely powerless against its quasi-divine capabilities."
ben_w
With the side-note that, historically, humans have found themselves unable to distinguish a lot of things from God, e.g. thunderclouds — and, more recently, toast.
lukeschlather
The thing about LLMs is that they're trained exclusively on text, and so they don't have much insight into these sorts of problems. But I don't know if anyone has tried making a multimodal LLM that is trained on x-ray tomography of parts under varying loads tagged with descriptions of what the parts are for - I suspect that such a multimodal model would be able to give you a good answer to that question.
groby_b
No, the LLMs aren't going to kill us all. Neither are they going to help a determined mass murderer to get us all.
They are, however, going to enable credulous idiots to drive humanity completely off a cliff. (And yes, we're seeing that in action right now). They don't need to be independent agents. They just need to seem smart.
turtleyacht
State of the Art (SOTA)
ben_w
A perfect AI isn't a threat: you can just tell it to come up with a set of rules whose consequences would never be things that we today would object to.
A useless AI isn't a threat: nobody will use it.
LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.
Like designing US trade policy.
> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
What does the latter have to do with the former?
> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.
Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?
> And this hasn't changed at all over the past five years.
They're definitely different now than they were 5 years ago. I played with the DaVinci models back in the day; nobody cared because that really was just very good autocomplete. Even if there was a way to get the early models to combine knowledge from different domains, it wasn't obvious how to actually make them do that, whereas today it's "just ask".
> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
And write code. Not great code, but "it'll do" code. And use APIs.
> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops
The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.
Ironically, the lack of imagination of LLMs, the very fact that they're mimicking us, may well result in this kind of AI doing exactly that kind of thing even with the lowest interpretation of their nature and intelligence — the mimicry of human history is sufficient.
--
That said, I agree with you about the limitations of using them for research. Where you say this:
> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
I had similar with NotebookLM, where I put in one of my own blog posts and it missed half the content and re-interpreted half the rest in a way that had nothing much in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)
FollowingTheDao
"It was only once I got it that I realized I no longer could play the game "make as much money as I can.""
Funny, that is what my father taught me when I was 12, because we had compassion. What is it with glorifying all these logic-loving, Spock-like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?
It is no wonder the Zizians were birthed from LW.
praptak
If there's money to be made, there will always be someone with a shovel or a truckload of sparklers who is willing to take the risk (especially if the risk can be externalized to the public) and reap the reward.
khazhoux
Parents: you know how every day you look at your child and you’re struck with wonder at the amazing and quirky and unique person your little one is?
I swear that’s what lesswrong posters see every day in the mirror.
profsummergig
Requesting someone to please explain the "coquina" metaphor.
xmprt
My understanding is that the author is this superior being trying to accomplish a massive task (damming a beach) while knowing that it could cause problems for these clams. In the real world, Anthropic is trying to accomplish a massive task (building AGI) and they're finally starting to notice the potential impacts this has on people.
jjcob
Coquinas are clams that bury themselves in the sand very close to the surface [1]. The author worries that while they are playing with the sand, they might accidentally bury coquina clams too deep and kill them because they can no longer reach the surface.
Anthropic apparently is starting to notice the possible danger their work poses to others. I'm not sure what they are referring to.
profsummergig
> Anthropic apparently is starting to notice the possible danger their work poses to others. I'm not sure what they are referring to.
Are they being vague about the danger? If possible, please link to a communique from them. I've missed it somehow. Thanks.
deathanatos
As a child at the beach, I would think noticing the clams would result in attempting to unearth them. Childhood curiosity about why there are bubbles.
Your explanation makes more sense, however.
ern
Maybe I’m not smart enough, or too tired to decode these metaphors, so I plugged the essay into ChatGPT and got a clear explanation from 4o.
criddell
Are you at all concerned that plugging stuff like this into ChatGPT is leaving you with weaker cognitive muscles? Or is it more similar to what people do when they see a new word and reach for their dictionary?
ern
I see AI like the reading glasses I’ll soon need — not because I can’t think clearly, but because it helps cut through things faster when my brain’s juggling too much.
A few years ago, I’d have quietly filed this kind of article under “too hard” or passed a log analysis request from the CIO down the line. Now? I get AI to draft the query, check it, run it, and move on. It’s not about thinking less — it’s about clearing the clutter so I can focus where it counts.
adwn
> Are you at all concerned that plugging stuff like this into ChatGPT is leaving you with weaker cognitive muscles?
Couldn't this very same argument have been used against any form of mental augmentation, like written language and computers? Or, in an extended interpretation, against any form of physical augmentation, like tool use?
profsummergig
Ah. Should have thought of that. Going to do that now. Thanks.
hecanjog
I think that they're saying a little bit of playing around with replacing thinking and composing with automated tools is recoverable, but at an industrial or societal scale the damage is significant. Like the difference between shoveling away some sand with your hands to bury the small creatures temporarily and actually destroying their habitat by "lobbying city council members to put in a groin or seawall, and seriously move that beach sand."
profsummergig
I skimmed the Anthropic report and didn't catch the negative effects. Did they mention any? Good on them if they did.
hecanjog
Yes, they mention a few times the concern that students are offloading critical thinking rather than using the tool for learning.
cubefox
Anthropic (Claude.ai) mentions in their report on LLMs and education that students use Claude to cheat and do their work for them:
https://www.anthropic.com/news/anthropic-education-report-ho...
ziofill
This is a tangent, but I would love so much to be able to give my kids memories of playing in a creek in the backyard...
Isamu
> After I cracked the trick of tillering
Guide to Bow Tillering:
https://straightgrainedboard.com/beginners-guide-on-bow-till...