Thoughts on thinking
185 comments · May 16, 2025 · abathologist
fennecbutt
99% if not 100% of human thought and general output is derivative. Everything we create or do is based on something we've experienced or seen.
Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.
Writers made elves by adding pointy ears to a human. That's it.
musicale
> Writers made elves by adding pointy ears to a human. That's it.
Thousands of years of mythology about supernatural beings might have something to do with it too.
Even the word "elf" dates back at least to Old English, Old Norse, etc. and seems to have roots in oral tradition.
don_neufeld
Completely agree.
From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.
I’m very glad my kid’s school has hardcore banned them. In some classes they only allow students to turn in work that was done in class, under the direct observation of the teacher. There has also been a significant increase in “on paper” work vs. work done on a computer.
Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.
Understanding the effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you’re learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.
zdragnar
Anyone who has learned a second language can tell you that you aren't proficient just by memorizing vocabulary and grammar. Having a conversation and forming sentences on the fly just feels different, either as a different skill or as one using a different part of the brain.
I also don't think the nature of LLMs as a crutch is new knowledge per se; when I was in school, calculus class required a graphing calculator, but the higher-end models (TI-92, etc.) that had symbolic equation solvers were banned, for exactly the same reason. Having something that can give you the answer fundamentally undermines the value of the exercise in the first place, and cripples your growth while you use it.
fennecbutt
Feels different, comes naturally, without conscious thought, just like we don't focus on beating our hearts.
And I agree that learning by practicing a skill is best. But you and I both know the school system has run on rote memorisation for hundreds of years at least, and still does.
JackFr
Well I can extract a square root by hand. We all had to learn it and got tested on it.
No one today learns that anymore. The vast, vast majority have no idea how, and I don’t think people are dumber because of it.
That is to say, I think it’s not cut-and-dried. I agree you need to learn something, but sometimes it’s okay to use a tool.
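(For the curious, here's a minimal sketch of Heron's method, one of the classic by-hand routines; an illustration only, not necessarily the digit-by-digit procedure we were actually tested on.)

    def sqrt_by_hand(n: float, iterations: int = 8) -> float:
        """Heron's method: repeatedly average the guess with n / guess."""
        if n < 0:
            raise ValueError("expected a non-negative number")
        if n == 0:
            return 0.0
        guess = (n + 1) / 2              # any positive starting guess converges
        for _ in range(iterations):
            guess = (guess + n / guess) / 2
        return guess

    print(sqrt_by_hand(2))               # ~1.4142135623730951

A few iterations with pencil and paper gets you surprisingly close, which is rather the point: the tool replaces a procedure you could, in principle, carry out yourself.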
zdragnar
Extracting a square root by hand is rather different in scope from reducing/simplifying equations entirely. The TI-92 could basically do all of your coursework for you up to college level, if memory serves.
The real question isn't "is it okay to use a tool" but "how does using a tool affect what you learn".
In the cases of both LLMs and symbolic solving calculators, I believe the answer is "highly detrimental".
smcleod
I very much agree with your sentiment here.
I actually tried to encapsulate that to some degree in something I wrote recently (perhaps poorly?) - https://smcleod.net/2025/03/the-democratisation-paradox-what...
skydhash
Same with drawing, which is easy to teach but hard to master because of the coordination between eyes and hand. You can trace a photograph, but that just bypasses the whole point, and you don’t exercise any of the knowledge.
flysand7
Another case in point: memorizing vocabulary and grammar, although it may seem like an efficient way to learn a language, is incredibly unrewarding. I've been learning Japanese from scratch, using only real speech to absorb new words, without dictionaries or much else. The first feeling of reward came immediately, when I learned that "arigatou" means thanks (I terribly misheard how the word sounded, but hey, at least I heard it). Then after 6 months, when I could catch and understand some simple phrases. After 6-7 years I can understand about 80% of any given speech, which is still far from full comprehension, but I gotta say it was a good experience.
With LLMs giving you ready-made answers, I feel like it's the same. It's not as rewarding, because you haven't obtained the answer yourself. Although it did feel rewarding when I was interrogating an LLM about how CSRF works and it said I'd asked a great question when I asked whether CSRF only applies to forms, since fetch seems to have a different kind of browser protection.
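(For anyone curious about the mechanics: the classic defence is a per-session token that a forged cross-site form post can't know. A minimal, framework-agnostic sketch; the function names here are just illustrative:)

    import hmac
    import secrets

    def issue_csrf_token(session: dict) -> str:
        # The server stores a random token in the session and embeds it
        # in the form as a hidden field.
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token

    def verify_csrf_token(session: dict, submitted: str) -> bool:
        # An attacker's page can't read the token, so it can't echo it
        # back; compare in constant time to avoid timing leaks.
        expected = session.get("csrf_token", "")
        return bool(expected) and hmac.compare_digest(expected, submitted)

(And cross-origin fetch is additionally constrained by CORS and SameSite cookie rules, which is presumably the "different kind of browser protection" the LLM meant.)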
layer8
How many hours would you estimate you watched (I assume it was video, not just audio) over those years? What kind of material? Just curious.
hammock
> I’m very glad my kid’s school has hardcore banned them.
What does that mean? I’m curious.
The schools and university I grew up in had a “single-sanction honor code” which meant if you were caught lying or cheating even once you would be expelled. And you signed the honor code at the top of every test.
My more progressive friends at other schools that didn’t have an honor code happily poo-pooed it as a repugnantly harsh, old-fashioned standard. But today I don’t see a better way of enforcing “don’t use AI” in schools.
don_neufeld
The school has an academic honesty policy which explicitly bans it, under “Cheating”, which includes:
“Falsifying or inventing any academic work, including the use of AI (ChatGPT, etc)”
Additionally, as mentioned, the school is taking actions to change how work is done to ensure students are actually doing their own work - such as requiring written assignments be completed during class time, or giving homework on physical paper that is to be marked up by hand and returned.
Apparently this is the first year they have been doing this, as last year they had significant problems with submitted work not being authored by students.
This is in an extremely competitive Bay Area school, so there can be a lot of pressure from parents on students to make top grades, and sometimes that has negative side effects.
garrickvanburen
I don’t see the problem.
I’m not sure LLM output is distinguishable from Wikipedia or World Book.
Maybe? And if the question is “did the student actually write this?” (which is different from “do they understand it?”), there are lots of different ways to assess whether a given student understands the material…ways that don’t involve submitting typed text but still involve communicating clearly.
If we allow LLMs, like we allow calculators, just how poor LLMs are will become far more obvious.
hammock
If LLMs are allowed, then sure. But the case I am talking about is when LLMs are explicitly banned from use.
avaika
This reminds me how back in my school days I was not allowed to use the internet to prepare research on some random topic (e.g. a history essay). It was the late 90s, when the internet was starting to spread. Anyway, teachers forced us to use offline libraries only.
Later, in university, I studied engineering. We were forced to prepare all the technical drawings manually in the first year of study, literally with pencil and ruler, even though computer graphics were widely used and were the de facto standard.
Personally I don't believe a hardcore ban will help with any sort of thing. It won't stop the progress either. It's much better to help people learn how to use things than to force them to deal with "old school" stuff only.
don_neufeld
I was expecting some response like this, because schools have “banned” things in the past.
While this is superficially similar, I believe we are talking about substantially different things.
Learning (the goal) is a process. In the case of an assignment, the resulting answer / work product, while it is what is requested, is critically not the goal. However, it is what is evaluated, so many confuse it with the goal (“I want to get a good grade”)
Anything which bypasses the process makes the goal (learning) less likely to be achieved.
So, I think it is fine to use a calculator to accelerate your use of operations you have already learned and understand.
However, I don’t think you should give 3rd graders calculators that just give them the answer to a multiplication or division when they are learning how those things work in the first place.
Similarly, I think it’s fine to do research using the internet to read sources you use to create your own work.
Meanwhile, I don’t think it’s fine to do research using the internet to find a site where you can buy a paper you can submit as your own work.
Right now, LLMs can be used to bypass a great deal of process, which is why I support them not being used.
It’s possible, maybe even likely that we’ll end up with a “supervised learning by AI” approach where the assignment is replaced by “proof of process”, a record of how the student explored the topic interactively. I could see that working if done right.
johnisgood
You can learn a lot from LLMs though, same with, say, Wikipedia. You need curiosity. You need the desire to learn. If you do not have it, then of course you will get nowhere, LLMs or no LLMs.
layer8
From the article:
“The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. The output from AI answers questions. It teaches me facts. But it doesn’t really help me know anything new.”
I think the thesis is that with AI there is less need and incentive to “put the work in” instead of just consuming what the AI outputs, and that in consequence we do the needed work less and atrophy.
azinman2
Never underestimate laziness, or willingness to take something 80% as good for 1% of the work.
So most are not curious. So what do you do for them?
johnisgood
You have to somehow figure out the root cause of the laziness, and whether it really is laziness and not something else, e.g. a mental health issue.
Plus, many kids fail school not because of laziness, but because of a toxic environment.
snackernews
Can you learn a lot? Or do you get instant answers to every question without learning anything, as OP suggests?
calebkaiser
You can learn an incredible amount. I do quite a bit of research as a core part of my job, and LLMs are amazing at helping me find relevant research to help me explore ideas. Something like "I'm thinking of X. Does this make sense and do you know of any similar research?" I also mentor some students whose educational journey has been fundamentally changed by them.
Like any other tool, it's more a question of how they're used. For example, I've seen incredible results for students who use ChatGPT to interrogate ideas as they synthesize them. So, for example: "I'm reading this passage PASSAGE and I'm confused about phrase X. The core idea seems similar to Y, which I am familiar with. If I had to explain X, I'd put it like this: ATTEMPT. Can you help me understand what I'm missing?"
The results are very impressive. I'd encourage you to try it out if you haven't.
johnisgood
You can learn a lot, if you want to. I can ask it a question about the pharmacodynamics of some medication, then ask more and more questions, and learn. Similarly, I could pick up a book on pharmacology, but LLMs can definitely make learning easier.
jebarker
> nothing I make organically can compete with what AI already produces—or soon will.
No LLM can ever express your unique human experience (or even speak from experience), so on that axis of competition you win by default.
Regurgitating facts and the mean opinion on topics is no replacement for the thoughts of a unique human. The idea that you're competing with AI on some absolute scale of the quality of your thought is a sad way to live.
steamrolled
More generally, prior to LLMs, you were competing with 8 billion people alive (plus all of our notable dead). Any novel you could write probably had some precedent. Any personal story you could tell probably happened to someone else too. Any skill you wanted to develop, there probably was another person more capable of doing the same.
It was never a useful metric to begin with. If your life goal is to be #1 on the planet, the odds are not in your favor. And if you get there, it's almost certainly going to be unfulfilling. Who is the #1 Java programmer in the world? The #1 topologist? Do they get a lot of recognition and love?
harrison_clarke
a fun thing about having a high-dimensional fitness function is that it's pretty easy to not be strictly worse than anyone
bconsta
pareto adequate
taylorallred
Among the many ways that AI causes me existential angst, you've reminded me of another one. That is, the fact that AI pushes you towards the most average thoughts. It makes sense, given the technology. This scares me because creative thought happens at the very edge. When you get stuck on a problem, like you mentioned, you're on the cusp of something novel that will at the very least grow you as a person. The temptation to use AI could rob you of that novelty in favor of what has already been done.
MoonGhost
> AI pushes you towards
That's an interesting point. But here's the thing: you are supposed to drive, not the AI god. Look at it as an assistant whom you can interrupt, instruct, correct, and ask to redo. While focusing on the 'what', you can delegate it some of the 'how' problems.
worldsayshi
I feel there's a glaring counterpoint to this. I have never felt more compelled to try out whatever coding idea pops into my head. I can make Claude write a PoC in seconds to make the idea more concrete, and I can turn it into a good-enough tool in a few afternoons. Before this, all those ideas would simply never have materialized.
I get the existential angst, though. There's a lot of uncertainty about where all this is heading. But, and this is really a tangent, I feel that the direction of it all lies at the intersection of politics, technology, and human nature. I feel like "we the people" hand a walkover to powerful actors if we do not use these new powerful tools in service of the people. For one: to enable new ways to coordinate and organise.
perrygeo
Good point. It's not that AI is "pushing us" towards anything. AI can be a muse that elevates our creativity. IF we use it that way. But do we use it that way? I think there will be some who do.
The majority of users seem to want convenience at any expense. Most are unconcerned with a loss of agency, almost enthusiastic about it if it removes the labor of thinking.
ay
Very strange. Either the author uses some magic AI, or I am holding it wrong. I have been using LLMs for a couple of years now, as a nice tool.
Besides that:
I have tried using LLMs to create cartoon pictures. The first impression is “wow”; but after a bunch of pictures you see the evidently repetitive “style”.
Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.
Using LLMs to render music is amazing at first, but after a while you can see the evidently repetitive style - for both rhymes and music.
Using NotebookLM to create podcasts at first feels amazing, as if about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is still out here.
Again, with generated texts, they take on a distinct metallic taste that is hard to ignore after a while.
The search function is okay, but with a little nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”; I always recheck it, and try to run two competing queries in which I influence the LLM into taking opposing viewpoints, and learn from both.
Using AI to generate code: simple things are okay, but for non-trivial items it introduces pretty subtle bugs, which require me to make sure I understand every line. This bit is the most fun; the bug quest is actually entertaining, as they are often the same bugs humans would make.
So, I don’t see the same picture, but something close to the opposite of what the author sees.
Having an easy outlet to bounce quick ideas off, and a source of relatively unbiased feedback, brought me back to the fun of writing; so it’s literally the opposite effect compared to the article’s author…
fennecbutt
>evidently repetitive “style”.
Use LoRAs, write better prompts. I've done a lot of diffusion work, and especially in 2025 it's not difficult to get something quite good out of it.
Repetitive style is funny, because that's what human artists do for the most part. I'm a furry; I look at a lot of art, and individual styles are a well-established fact.
jstummbillig
Maybe you are not that great at using the most current LLMs, or you don't want to be? I increasingly find that to be the most likely answer whenever somebody makes sweeping claims about the impotence of LLMs.
I get more use out of them every single day, and certainly with every model release (mostly for generating decidedly non-trivial code), and it's not subtle.
ay
Could totally be the case, that, as I wrote in the very first sentence, I am holding it wrong.
But I am not saying LLMs are impotent. The other week Claude happily churned out ~3,500 lines of C code for me that implemented a prototype capture facility for network packets, with flexible filters and saving of the contents into pcapng files. I had to fix a couple of bugs it made, but overall it was certainly at least a 5x-10x productivity improvement compared to typing those lines of code by hand. I don’t dispute that it’s a pretty useful tool for coding, or as a thinking assistant (see the last paragraph of my comment).
What I challenged is the submissive, self-deprecating adoration across the entire spectrum.
abathologist
What kind of problems are you solving day-to-day where the LLMs are doing heavy lifting?
steamrolled
I think the article describes a real problem in that AI discourages thought. So do other things, but what's new about AI is that it removes an incentive to try.
It used to be that if you spent your day doomscrolling instead of writing a blog post, that blog post wouldn't get written and you wouldn't get the riches and fame. But now, you can use AI to write your blog post / email / book. If you don't have an intrinsic motivation to work your brain, it's a lot easier to wing it with AI tools.
At the same time... gosh. I can't help but assume that the author is just depressed and that it has little to do with AI. The post basically says that AI made his life meaningless. But you don't have to use AI tools if they're harming you. And more broadly, life has no meaning beyond what we make of it... unless your life goal is to crank out text faster than an LLM, there's still plenty of stuff to focus on. If you genuinely think you can't possibly write anything new and interesting, then dunno, pick a workshop craft?
smcleod
For me it lowers the barrier to trying and testing new thoughts. Never have I felt more empowered to try out new avenues that in the past would have been too time-consuming or expensive to explore and then discard.
xigency
Humans are social creatures. The existence of a tool that can replace humans is not nearly so depressing as the realization that a loud and powerful group of people are zealous and joyful to use it to such ends. The assumption that people come first is rapidly becoming a logical fallacy in a world that seeks to optimize paperclips first.
Anyway, the pendulum will swing the other way eventually, but it's a rough ride hanging on until then.
Glad to see stimulating discussion here falling on both sides.
paintboard3
I've been finding a lot of fulfillment in using AI to assist with things that are (for now) outside of the scope of one-shot AI. For example, when working on projects that require physical assembly or hands-on work, AI feels more like a superpower than a crutch, and it enables me to tackle projects that I wouldn't have touched otherwise. In my case, this was applied to physical building, electronics, and multimedia projects that rely on simple code that are outside of my domain of expertise.
The core takeaway for me is that if you have the desire to stretch your scope as wide as possible, you can get things done in a fun way with reduced friction, and still feel like your physical being is what made the project happen. Often this means doing something that is either multidisciplinary or outside of the scope of just being behind a computer screen, which isn't everyone's desire and that's okay, too.
sanderjd
Yeah I haven't found the right language for this yet, but it's something like: I'm happy and optimistic about LLMs when I'm the one doing something, and more anxious about them when I'm supporting someone else in doing something. Or: It makes me more excited to focus on ends, and less excited to focus on means.
Like, in the recent past, someone who wanted to achieve some goal with software would either need to learn a bunch of stuff about software development, or would need to hire someone like me to bring their idea to life. But now, they can get a lot further on their own, with the support of these new tools.
I think that's good, but it's also nerve-wracking from an employment perspective. But my ultimate conclusion is that I want to work closer to the ends rather than the means.
apsurd
Interesting, I just replied to this post recommending the exact opposite: to focus on means vs ends.
The post laments how everything is useless when any conceivable "end state" a human can do will be inferior to what LLMs can do.
So an honest attention toward the means of how something comes about—the process of the thinking vs the polished great thought—is what life is made of.
Another comment talks about hand-made bread. People do it and enjoy it even though "making bread is a solved problem".
sanderjd
I saw that and thought it was an interesting dichotomy.
I think a way to square the circle is to recognize that people have different goals at different times. As a person with a family who is not independently wealthy, I care a lot about being economically productive. But I also separately care about the joy of creation.
If my goal in making a loaf of bread is economic productivity, I will be happy if I have a robot available that helps me do that quickly. But if my goal is to find joy in the act of creation, I will not use that robot because it would not achieve that goal.
I do still find joy in the act of creating software, but that was already dwindling long before chatgpt launched, and mostly what I'm doing with computers is with the goal of economic productivity.
But yeah I'll probably still create software just for the joy of it from time to time in the future, and I'm unlikely to use AIs for those projects!
But at work, I'm gonna be directing my efforts toward taking advantage of the tools available to create useful things efficiently.
curl-up
> The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.
So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others not being able to create? I find this to be a very unhealthy relationship to creativity.
My mixer can mix dough better than I can, but I still enjoy kneading it by hand. The incredibly good artisanal bakery down the street did not reduce my enjoyment of baking, even though I cannot compete with them in quality by any measure. Modern slip casting can make superior pottery by many different quality measures, but potters enjoy throwing it on a wheel and producing unique pieces.
But if your idea of fun is tied to the "no one else can do this but me", then you've been doing it wrong before AI existed.
ebiester
Let's frame it more generously: The reward is based on being able to contribute something novel to the world - not because nobody else can but because it's another contribution to the world's knowledge.
fennecbutt
Let's be honest, humans have been creating slop for much longer than machines have. Not a bad thing, but don't put it all on a pedestal.
curl-up
If the core idea intended to be broadcast to the world was the "contribution", and the LLM simply expanded on it, then I would view the LLM as simply a component in that broadcasting operation (just as the internet infrastructure is), and the author's contribution would still be intact, and so should be his enjoyment.
But his argument does not align with that. His argument is that he enjoys the act of writing itself. If he views his act of writing (regardless of the idea being transmitted) as his "contribution to world's knowledge", then I have to say I disagree - I don't think his writing is particularly interesting in and of itself. His ideas might be interesting (even if I disagree), but he obviously doesn't find the formation of ideas enjoyable enough.
mionhe
It sounds as if the reward is primarily monetary in this case.
As some others have commented, you can find rewards that aren't monetary to motivate you, and you can find ways to make your work so unique that people are willing to pay for it.
Technology forces us to use the creative process to more creatively monetize our work.
Viliam1234
Now you can contribute something novel to the world by pressing a button. Sounds like an improvement.
drdeca
If one merely presses a button (the same button, not choosing which button to press based on context), I don’t see what it is that one has contributed. One of those tippy-bird toys can press a button.
lo_zamoyski
The primary motivation should be wisdom. No one can become wise for you. You don't become any wiser yourself that way. And a machine isn't even capable of being wise.
So while AI might remove the need for human beings to engage in certain practical activities, it cannot eliminate the theoretical, because by definition, theory is done for its own sake, to benefit the person theorizing by leading them to understanding something about the world. AI can perhaps find a beneficial place here in the way books or teachers do, as guides. But in all these cases, you absolutely need to engage with the subject matter yourself to profit from it.
kelseyfrog
> So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others not being able to create? I find this to be a very unhealthy relationship to creativity.
People realize this at various points in their life, and some not at all.
In terms the author might accept, the metaphor of the stoic archer comes to mind. Focusing on the action, not the target, is what relieves one of the disappointment of outcome. In this case, the action is writing while the target is having better thoughts.
Much of our life is governed by the success at which we hit our targets, but why do that to oneself? We have a choice in how we approach the world, and setting our intentions toward action and away from targets is a subtle yet profound shift.
A clearer example might be someone who wants to make a friend. Imagine they're at a party: if they go in with the intention of making a friend, they're setting themselves up for failure; they have relatively little control over that outcome. However, if they go in with the intention of showing up authentically (something people tend to appreciate, and something they have full control over), the chances of them succeeding increase dramatically.
Choosing one's goals - primarily grounded in action - is an under-appreciated perspective.
sifar
>> Focusing on the action, not the target is what relieves one of the disappointment of outcome.
The primary reason is not that it relieves us of the disappointment, but that worrying about the outcome increases our anxiety and impairs our action, which hampers the outcome.
wcfrobert
I think the article is getting at the fact that in a post-AGI world, human skill is a depreciating asset. This is terrifying because we exchange our physical and mental labor for money. Consider this: why would a company hire me if, with enough GPUs and capital, it could copy-and-paste 1,000 AI agents much smarter than me to do the work?
With AGI, Knowledge workers will be worth less until they are worthless.
While I'm genuinely excited about the scientific progress AGI will bring (e.g. curing all diseases), I really hope there's a place for me in the post-AGI world. Otherwise, like the potters and bakers who can't compete in the market with cold-hard industrial machines, I'll be selling my python code base on Etsy.
No Set Gauge had an excellent blog post about this. Have a read if you want a dash of existential dread for the weekend: https://www.nosetgauge.com/p/capital-agi-and-human-ambition.
9dev
That seems like a very narrow perspective. For one, it is neither clear that we will end up with AGI at all (we could have reached, or could soon reach, a plateau in what LLM technology makes possible), nor that it will work the way you’re describing; the energy requirements might not be feasible, for example, or usage might be so expensive that it’s just not worth applying to every mundane task under the sun, like writing CRUD apps in Python. We know how to build flying cars, technically, but it’s just not economically sustainable to use them. And finally, you never know what niches are going to be freed up, or created, by the ominous AGI machines appearing on the stage.
I wouldn’t worry too much yet.
Animats
> With AGI, Knowledge workers will be worth less until they are worthless.
"Knowledge workers" being in charge is a recent idea that is, perhaps, reaching end of life. Up until WWII or so, society had more smart people than it had roles for them. For most of history, being strong and healthy, with a good voice and a strong personality, counted for more than being smart. To a considerable extent, it still does.
In the 1950s, C.P. Snow's "Two Cultures" became famous for pointing out that the smart people were on the way up.[1] They hadn't won yet; that was about two decades ahead. The triumph of the nerds took until the early 1990s.[2] The ultimate victory was, perhaps, the collapse of the Soviet Union in 1991. That was the last major power run by goons. That's celebrated in The End of History and the Last Man (1992).[3] Everything was going to be run by technocrats and experts from now on.
But it didn't last. Government by goons is back. Don't need to elaborate on that.
The glut of smart people will continue to grow. Over half of Americans with college educations work in jobs that don't require a college education. AI will accelerate that process. It doesn't require AI superintelligence to return smart people to the rabble. Just AI somewhat above the human average.
[1] https://en.wikipedia.org/wiki/The_Two_Cultures
[2] https://archive.org/details/triumph_of_the_nerds
[3] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...
senordevnyc
This is only terrifying because of how we’ve structured society. There’s a version of the trajectory we’re on that leads to a post-scarcity society. I’m not sure we can pull that off as a species, but even if we can, it’s going to be a bumpy road.
GuinansEyebrows
the barrier to that version of the trajectory is that "we" haven't structured society. what structure exists, exists as a result of capital extracting as much wealth from labor as labor will allow (often by dividing class interests among labor).
agreed on the bumpy road - i don't see how we'll reach a post-scarcity society unless there is an intentional restructuring (which, many people think, would require a pretty violent paradigm shift).
patcon
Yeah, I think you're onto something. I'm not sure the performative motivation is necessarily bad, but def different
Maybe AI is like Covid, where it will reveal that there were subtle differences in the underlying humans all along, but we just never realized it until something shattered the ability for ambiguity to persist.
I'm inclined to say that this is a destabilising thing, regardless of my thoughts on the "right" way to think about creativity. Multiple ways could coexist before, and now one way no longer "works".
gibbitz
I think the point is that, until now, part of the value of a work of art has been the effort (or lack of effort) involved in its creation. Evidence of effort has traditionally been a sign of the quality of thought put into a work, as a product of the time spent creating it. LLMs short-circuit this instinct in evaluation, making some people think AI-generated works are better than they are, while simultaneously making those who create see it as a devaluation of their work (which is the demotivator here).
I'm curious why so many people see creators and intellectuals as competitive people trying to prove they're better than someone else. This isn't why people are driven to seek knowledge or create Art. I'm sure everyone has their reasons, but from the outside it feels like insecurity.
Looking at debates about AI and Art outside of IP often brings out a lot of misunderstandings about what makes good Art, and why Art is a thing man has been compelled to make since the beginning of the species. It takes a lifetime to select the techniques and thought patterns that define a unique and authentic voice. A lifetime of working hard on creating things adds up to that voice. When you start to believe that work is in vain because the audience doesn't know the difference, it certainly doesn't feel rewarding to do.
quantumgarbage
I think you are way past the argument the writer is making.
movpasd
Sometimes the fun is in creating something useful, as a human, for humans. We want to feel useful to our tribe.
garrettj
Yeah, there’s something this person needs to embrace about the process rather than being some kind of modern John Henry, comparing themselves to a machine. There’s still value in the things a person creates, despite what AI can derive from its training set of Reddit comments. Find peace in the process of making and you’ll continue to love it.
tutanosh
I used to feel the same way about AI, but my perspective has completely changed.
The key is to treat AI as a tool, not as a magic wand that will do everything for you.
Even if AI could handle every task, leaning on it that way would mean surrendering control of your own life—and that’s never healthy.
What works for me is keeping responsibility for the big picture—what I want to achieve and how all the pieces fit together—while using AI for well-defined tasks. That way I stay fully in control, and it’s a lot more fun this way too.
wuj
A good analogy is lifting. We lift to build strength, not because we need that extra strength to lift things in real life; there is plenty of machinery to do that for us. We do it for the sense of accomplishment of hitting our goals when we least expect it, for seeing physical changes, and for the feeling that we are getting healthier, rather than for chasing the utility benefits. If we perceive lifting as a utility, we realize it's futile and meaningless. Instead, if we see it as a routine with positive externalities sprinkled on top, we feel a lot less pressured to do it.
As kelseyfrog already commented, the key is to focus on the action, not the target. Lifting is not just about hitting a number or getting bigger muscles (though they are great extrinsic motivators); it's more of an action we derive growth from. I have internalized the act of working out so that those targets are baked into the unconscious. I don't overthink when I'm lifting. My unconscious takes the lead, and I just follow. I enjoy seeing the results show up unexpectedly. It lets me grow without feeling the constant pressure of my conscious mind.
The lifting analogy can be applied to writing and other effortful pursuits. We write for the pleasure of reconciling internal conflicts and restoring order to our chaotic mind. Writing is the lifting of our mind. If we do it for comparison, then there's no point in lifting, or writing, or many other things we do after all our technological breakthroughs. Doing what we do is a means to an end, not the other way around.
dvrp
What a coincidence! I think we both commented noting the phenomenon of abundance and the repercussions for us humans at the individual level. Especially from a fulfillment and autonomy point of view.
iamwil
OP said something similar about writing blog posts when he found himself doing twitter a lot, back in 2013. So whatever he did to cope with tweeting, he can do the same with LLMs, since it seems like he's been writing a lot of blog posts since.
> I’ve been thinking about this damn essay for about a year, but I haven’t written it because Twitter is so much easier than writing, and I have been enormously tempted to just tweet it, so instead of not writing anything, I’m just going to write about what I would have written if Twitter didn’t destroy my desire to write by making things so easy to share.
and
> But here’s the worst thing about Twitter, and the thing that may have permanently destroyed my mind: I find myself walking down the street, and every fucking thing I think about, I also think, “How could I fit that into a tweet that lots of people would favorite or retweet?”
smcleod
Perhaps they simply are someone who struggles with finding identity and value in change and adaptation.
montebicyclelo
My thoughts are that it's key that humans know they will still get credit for their contributions.
E.g. imagine you could write a blog post, with some insight, in some niche field – but you know that traffic isn't going to get directed to your site. Instead, an LLM will ingest it, and use the material when people ask about the topic, without giving credit. If you know that will happen, there's not much incentive to write the post in the first place. You might think, "what's the point".
Related to this topic: computers have been superhuman at chess for two decades, yet strong human chess players still get credit, recognition, and, I would guess, satisfaction from the level they achieve. Although, obviously, the LLM situation is on a whole other level.
I guess the main (valid) concern is that LLMs get so good at thought that humans just don't come up with ideas as good as them... And can't execute their ideas as well as them... And then what... (Although that doesn't seem to be the case currently.)
datpuz
> I guess the main (valid) concern is that LLMs get so good at thought
I don't think that's a valid concern, because LLMs can't think. They generate tokens one at a time, calculating the most likely token to appear based on the arrangements of tokens seen in their training data. There is no thinking; there is no reasoning. If they seem like they're doing these things, it's because they are producing text based on unknown humans who actually did those things once.
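(Schematically, the loop in question looks like this; `model` here is a hypothetical stand-in that returns a probability for each token in the vocabulary, not any real API:)

    import random

    def generate(model, prompt_tokens, max_new_tokens=50):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = model(tokens)  # P(next token | everything so far)
            next_token = random.choices(range(len(probs)), weights=probs)[0]
            tokens.append(next_token)  # the "one at a time" step
        return tokens

Whether anything deserving the name "thinking" can hide inside that inner call is exactly what's in dispute.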
montebicyclelo
> LLMs can't think. They are generating tokens one at a time
Huh? They are generating tokens one at a time - sure that's true. But who's shown that predicting tokens one at a time precludes thinking?
It's been shown that the models plan ahead, i.e. think more than just one token forward. [1]
How do you explain the world models that have been detected in LLMs? E.g. OthelloGPT [2] is just given sequences of games to train on, but it has been shown that the model learns to have an internal representation of the game. Same with ChessGPT [3].
For tasks like this (and with words), real thought is required to predict the next token well; e.g. if you don't understand chess at the level of Magnus Carlsen, how are you going to predict Magnus Carlsen's next move...
...You wouldn't be able to, even from looking at his previous games; you'd have to actually understand chess and think about what would be a good move (and in his style).
[1] https://www.anthropic.com/research/tracing-thoughts-language...
[2] https://www.neelnanda.io/mechanistic-interpretability/othell...
[3] https://adamkarvonen.github.io/machine_learning/2024/01/03/c...
datpuz
Yes, let's cite the most biased possible source: the company that's selling you the thing, which is banking on a runway funded on keeping the hype train going as long as possible...
I think we are going to be seeing a vast partitioning in society in the next months and years.
The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready made, then encoded and decoded from permutations of tokens, or, worse, who have no room to think of reasoning or conceptualization at all -- they will be automated away.
I don't mean that their jobs will be automated: I mean that they will cede sapience and resign themselves to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).
I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual-transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.