Avoiding Skill Atrophy in the Age of AI
231 comments · April 25, 2025
gchamonlive
m000
Note that we are the first wave of AI users. We are already well-equipped to ask the LLM the right questions. We already have experience with old-fashioned self-learning. So we only need some discipline to avoid skill atrophy.
But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
doright
I was learning a new cloud framework for a side project recently and wanted to ask my dad about it since it's the exact same framework he's used for his job for many years, so he'd know all sorts of things about it. I was expecting him to give me a few ideas or have a chat about a mutual interest since this wasn't for income or anything. Instead all he said was "DeepSeek's pretty good, have you tried it yet?"
So I just went to DeepSeek instead and finished like 25% of my project in a day. It was the first time in my whole life that programming was not fun at all. I was just accomplishing work - for a side project at that. And it seems the LLMs are already more interested in talking to me about code than my dad who's a staff engineer.
I am going to use the time saved to practice an instrument and abandon the "programming as a hobby" thing unless there's a specific app I have a need for.
xemdetia
I find this an interesting anecdote, because at a certain level, for a long time, the most helpful advice you could give was the best reference for the problem at hand, which might have been a book, a website, a wiki, or Googling for Stack Overflow; now a particular AI model might be the most efficient way to give someone a 'good reference.' I could certainly see someone recommending a model the same way they might have recommended a book or tutorial.
On the point of discussing code: a lot of cloud frameworks are boring but good. The framework usually isn't the interesting bit, and it's a relatively recent quirk that everyone seems to care more about the framework than about the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place. While I can't speak for your father, I haven't met a programmer who doesn't get excited to talk about at least one coding topic; this cloud framework just might not have been it.
lelanthran
> It was the first time in my whole life that programming was not fun at all.
And learning new technologies in pursuit of resume-driven development is fun?
I gotta say, if learning the intricacies of $LATEST_FAD is "fun" for you, then you're not really going to have a good time, employment-wise, in the age of AI.
If learning algorithms and data structures and their applicability in production is fun, then the age of AI is going to leave you with very in-demand skills.
noboostforyou
> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future
Spot on. Look at the stark difference in basic tech troubleshooting abilities between millennials and gen Z/alpha. Both groups have had computers most of their lives, but the way computers have been "dumbed down", for lack of a better term, has definitely accelerated that skill atrophy.
dlisboa
> There is a good chance that there will be a generational skill atrophy in the future
We already see this today: a lot of young people do not know how to type on keyboards, how to write in word processors, how to save files, etc. A significant part of a new generation is having to be trained on basic computer things, same as our grandparents were.
It's very interesting how "tech savvy" and "tech competent" are two different things.
bitwize
Jaron Lanier was a critic of the view that files were somehow an essential part of computing:
https://www.cato-unbound.org/2006/01/08/jaron-lanier/gory-an...
Typing on a keyboard, using files and writing on a word processor, etc. are accidental skills, not really essential skills. They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not. But they don't because they don't need to: we now have very capable computing systems that don't need files at all, or at least don't need to surface them at the user level.
It could be that writing or understanding code without AI help turns out to be another accidental skill, like writing or understanding assembly code today. It just won't be needed in the future.
arkh
I'm far from an AI enthusiast, but concerning this:
> There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
I don't know how to care for livestock, or how to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher. Or, to be more technological: I'd have to learn how to make a bare OS capable of booting from a motherboard, yet that doesn't prevent me from deploying k8s clusters and coding apps to run on them.
skydhash
> I don't know how to care for livestock, or how to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher
You'd sing a different tune if there was a good chance of being poisoned by your butcher.
The two examples you chose are obvious choices because the dependencies you have are reliable. You trust their output and methodologies. Now think about current LLM-based agents running your bank account, deciding on loans,...
kevinsync
Sure, but we'll still need people in future generations to want to learn how to butcher and then actually follow through on being butchers. I guess the implied fear is that people who lack fundamentals and are reliant on AI become subordinate to the machine's whims, rather than the other way around.
jofla_net
Maybe it's not so much that it prevents anything; rather, it will hedge toward a future where all we get is a jpeg of a jpeg of a jpeg. I.e., everything will be an electron app or some other generational derivative not yet envisioned, many steps removed from competent engineering.
raincole
"Lying is pretty amazingly useful. How are you going to teach your kid to not use that magical thing that solves every possible problem?" - Louis C.K.
Replace lying with LLM and all I see is a losing battle.
gilbetron
This is a great quote, but for the opposite reason. Lying has been an option forever - people learn how to use it and how not to use it, as befits their situation and agenda. The same will happen with AI. Society will adapt; us first-AI-users will use it far differently than people in 10, 20, 30+ years. Things will change, bad things will happen, good things will happen; maybe it will be Terminator, maybe it will be Star Trek, maybe it will be Star Wars or Mad Max or the Culture.
Current parents, though, aren't going to teach kids how to use it, kids will figure that out and it will take a while.
gchamonlive
[flagged]
gchamonlive
We also grew up with the internet, and the newer generation is having a hard time following it.
However, we were born after the invention of photography, and look at the havoc it's wreaking with post-truth.
The answer to that lies in reforming the education system so that we teach kids digital hygiene.
How on earth do we still teach kids Latin in some places but not Python? It's just an example; extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
lostphilosopher
I've long maintained that kids must learn end to end what it takes to put content on the web themselves (registering a domain, writing some html, exposing it on a server, etc.) so they understand that _truly anyone can do this_. Learning both that creating "authoritative" looking content is trivial and that they are _not_ beholden to a specific walled garden owner in order to share content on the web.
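To make the "end to end" point concrete: the core of the exercise fits in a dozen lines of Python (a minimal sketch with arbitrary page content and port; a full lesson would add the domain registration and real hosting steps):

    # Write a bare-bones page, then serve it locally: the smallest
    # demonstration that "authoritative-looking" content is trivial to make.
    from pathlib import Path
    import http.server
    import socketserver

    Path("index.html").write_text(
        "<!doctype html><title>my page</title>"
        "<h1>Anyone can publish this.</h1>"
    )

    # SimpleHTTPRequestHandler serves files from the current directory.
    with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as srv:
        print("Serving on http://localhost:8000")
        srv.serve_forever()

Point a browser at localhost:8000 and the lesson lands: there is no gatekeeper between a text file on your machine and "a website".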
sjamaan
> It's just an example; extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
Perhaps that's also a reason why: tech is so large, there's no time in a traditional curriculum to teach all of it. And only teaching what's essential is going to be tricky, because who gets to decide what's essential? And won't this change over time?
ozgrakkurt
This is the worst form of AI there will ever be; it will only get better. So traditional self-learning might become completely useless if it really gets much better.
DanHulton
> it will only get better
I wanted to highlight this assumption, because that's what it is, not a statement of truth.
For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.
But also, it may just end up that AI provider companies aren't infinite-growth companies, and once they aren't able to print their own free money (stock) based on the idea of future growth and have to tighten their purse strings and start charging what it actually costs them, the models we'll have realistic, affordable access to will actually DECREASE.
I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.
sho_hn
I can get productivity advantages from using power tools, yet regular exercise has great advantages, too.
It's a bit similar with the brain, learning and AI use. Except when it comes to gaining and applying knowledge, the muscle that is trained is judgement.
VyseofArcadia
That's optimistic. Sci-fi has taught us that way worse forms of AI are possible.
bitwize
Meanwhile, in 1999, somewhere on Slashdot:
"This is the worst form of web there will ever be; it will only get better."
blibble
people say this but the models seem to be getting worse over time
htrp
> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
Just like there is already a generational gap with developers who don't understand how to use a terminal (or CS students who don't understand what file systems are).
AI will ensure there are people who don't think and just outsource all of their thinking to their LLM of choice.
netdevphoenix
I don't think many social systems are equipped to deal with it though.
- Recruitment processes are not AI-aware and definitely won't be able to identify the more capable individual, hence losing out on talent
- Police departments are not equipped to deal with the coming wave of complaints regarding cyberfraud as the tech-illiterate get tricked by anonymous LLM systems
- Universities and schools are not equipped to deal with students submitting coursework completed by LLMs, hence missing their educational targets
- Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale
sunshine-o
> - Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale
Yes, and it seems to me that democracies at least haven't really figured out how to deal with the Internet, even after 30 years.
So don't hold your breath !
polotics
schools have had to contend with cheating for a long time, and no-device-allowed sitting exams have been the norm for a long while now
Espressosaurus
The amount of cheating and ease of it has gone way up based on my monitoring of teaching communities. Like it's not even close in terms of before ChatGPT vs. after ChatGPT.
Worse yet, many educators are not being supported by their administration, since enrollments are falling and the admin wants to keep the dollars coming regardless of whether the students are learning.
It's worse than just copying Wikipedia, because plagiarism detectors aren't as effective and may never be.
It's an arms race and right now AI cheating has structural advantages that will take time to remove.
globnomulous
I teach languages at the college level. Students who get "help" from side-by-side translations think this way, too. "I'm just using the translation to check my work; the translation I produced is still mine." Then you show them a passage they haven't read before, and you deny them the use of a translation, and suddenly they have no idea how to proceed -- or their translation is horrendous, far far worse than the one they "produced" with the help of the translation.
Some students are dishonest. Many aren't. Many genuinely believe the work they submit is their own and that they're learning the languages. It isn't and they aren't.
People are absolutely horrendous at this kind of attribution. They forget sources. They mistake others' ideas for their own. Your model of intention, and your distinction between those who wish to learn and those who pose, don't work. The people most inclined to seek the assistance that these tools seem to offer are the ones least capable of using them responsibly.
These tools are a guaranteed path to brain rot and an obstacle to real, actual study and learning, which require struggle and confusion without access to easy answers.
cube2222
> It's just boosting people's intention.
This.
It will in a sense just further boost inequality between people who want to do things, and folks who just want to coast without putting in the effort. The latter will be able to coast even more, and will learn even less. The former will be able to learn / do things much more effectively and productively.
Since good LLMs with reasoning are here, I've learned so many things I otherwise wouldn't have bothered with - because I'm able to always get an explanation in exactly the format that I like, on exactly the level of complexity I need, etc. It brings me so much joy.
Not just professional things either (though those too, of course) - random "daily science trivia", like asking how exactly sugar preserves food, with both a high-level intuition and low-level molecular details. Sure, I could've learned that if I wanted to before, but this is something I just got interested in for a moment and had like 3 minutes of headspace to dedicate to, and in those 3 minutes I'm actually able to get an LLM to give me an excellent, tailor-made explanation. This also made me notice that I've been having such short moments of random curiosity constantly; previously they mostly went unanswered, but now each of them can be satisfied.
namaria
> Since good LLMs with reasoning are here
I disagree. I often get egregious mistakes from them.
> because I'm able to always get an explanation
Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
So not only is getting the explanation a surrogate for learning something; you also risk internalizing spurious explanations.
myaccountonhn
Every now and then I give LLMs a try, because I think it's important to stay up to date with technology. Sometimes there have been specs I find particularly hard to parse, in domains I'm a bit unfamiliar with, where I thought the AI could help. At first the solutions seemed correct, but on further inspection, no: they were far more convoluted than needed, even if they worked.
smallnix
I think so too. Otherwise every Google maps user would be an awesome wayfinder. The opposite is true.
jerkstate
Reading an explanation is the first part of learning; ChatGPT almost always follows up with "do you want to try some example problems?"
cube2222
First, as you get used to LLMs you learn how to get sensible explanations from them, and how to detect when they're bullshitting around, imo. It's just another skill you have to learn, by putting in the effort of extensively using LLMs.
> Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.
As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, esp. since as I said my time budget is 3 minutes. The alternative here was reading about it on wikipedia or some other website.
UtopiaPunk
There's an adage I heard during my time in game dev that went something like "gamers will exploit the fun out of a game if you let them." The idea is that people presumably play video games to have fun; however, if given the opportunity, most players will take paths of least resistance, even if they make the game boring.
I see the same risk when AI is understood to be a learning tool. Sure, it can absolutely be a tool for learning, but it does take some willpower to intentionally learn when it is solving your short-term problems for you.
That temptation is enormously amplified if AI is used as a teaching tool in grade school! School is sometimes boring, and it can be challenging for a teen to push through a problem-set or essay that they are uninterested in. If an AI will get them a passing grade today, how can they resist?
These problems with AI in schools exist today, and they seem destined to become worse: https://www.whitehouse.gov/presidential-actions/2025/04/adva...
everdrive
This is a luxury belief. You cannot envision someone who is wholly unable to wield self-control, introspection, etc. These tools have major downsides specifically because they fail to really account for human nature.
simonw
Should we avoid building any tool if there's a chance someone with poor discipline might use that tool in a way that harms themselves?
everdrive
These tools are broadly forced on everyone. Can you really avoid smartphones, social media, content feeds, etc these days? It's not a matter of choice -- society is reshaped and it's impossible to avoid these impositions.
financetechbro
It's not about the tool itself, but more so the corporate interests behind the tools.
Open-source AI tools that you can run locally on your own machine? Awesome! AI tools that are owned by a corporation with the intent of selling you things you don't need and ideas you don't want? Not so awesome.
julienchastang
> We can finally just take a photo of a textbook problem...
You nailed it. LLMs are an autodidact's dream. I've been working through a physics book with a good old pencil and notebook and got stuck on some problems. It turned out the book did a poor job of explaining the concept at hand, and I worked with ChatGPT+ to arrive at a more comprehensible derivation. The problems were also badly worded, and the AI explained that to me too. It even produced a LaTeX study guide document! Moreover, I can belabor a topic with it in a way I would not with a human, for fear of bothering them. So for me, anyway, AI is not enabling brain rot but brain enhancement. I find these technologies to be completely miraculous.
bookman117
The problem is that social systems aren't run on people teaching themselves things, and for many people being an autodidact won't raise their status in any meaningful way, so these are a poor set of tradeoffs.
nottorp
> of a textbook problem
Well said. A textbook problem that has the answer available everywhere.
The question is, would you create similar neural paths by reading the explanation, as opposed to figuring it out on your own?
MonkeyClub
> would you create similar neural paths
Excellent point, and I believe the answer is a resounding no.
Struggling with a problem generates skills and knowledge which you then possess and can recall more easily, while reading an answer merely acquires some information that competes with a whole host of other low-effort information you need to remember.
netdevphoenix
Unlikely. Reading the explanation involves memorising it temporarily and at best understanding what it means at a surface level. Figuring it out on your own also involves using, and perhaps improving, your problem-solving skills, in addition to understanding the explanation at a deeper level. I feel LLMs will be for our reasoning skills what writing was for our memory skills.
Plato might have been wrong about the ills of cyberization of a cognitive skill such as memory. I wonder if, two thousand years later, we will be right about the ills of cyberization of a cognitive skill such as reasoning.
namaria
> Reading the explanation involves memorising it temporarily and at best understanding what it means at a surface level.
I agree. I don't really feel like I know something unless I can be presented with a novel instance of a problem in that domain, work out a solution by myself, and also explain that to someone else - not just happen into a solution.
> Plato might have been wrong about the ills of cyberization of a cognitive skill such as memory.
How so? From the dialogue where he describes Socrates discussing writing I get a pretty nuanced view that lands pretty much where you did above: access to writing fosters a false sense of understanding when one can read explanations and repeat them but not actually internalize the reasoning behind it.
gchamonlive
What's the difference? Isn't explaining things so that people don't have to figure them out by themselves the whole point of the educational system?
You will still need the textbook, because LLMs hallucinate just as a teacher can be wrong in class. There is no free lunch; the LLM is just a tool. You create the meaning.
skydhash
> What's the difference? Isn't explaining things so that people don't have to figure them out by themselves the whole point of the educational system?
Then said a teacher, Speak to us of Teaching.
And he said:
No man can reveal to you aught but that which already lies half asleep in the dawning of your knowledge.
The teacher who walks in the shadow of the temple, among his followers, gives not of his wisdom but rather of his faith and his lovingness.
If he is indeed wise he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.
The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.
The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.
And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.
For the vision of one man lends not its wings to another man.
And even as each one of you stands alone in God’s knowledge, so must each one of you be alone in his knowledge of God and in his understanding of the earth.
The Prophet by Kahlil Gibran
bsaul
I'm using ChatGPT for this exact case. It helps me verify my solution is correct, and when it's not, where my mistake is. Without it, I would have simply skipped to the next problem, hoping I didn't make a mistake. It's definitely a win.
spiritplumber
I mostly use chatgpt to make my writing more verbose because I've been told that it's too terse.
hnthrowaway0315
I believe there is a lot of value in trying to figure out things by myself - of course only for things that I really care about. I have no issue relying on AI for most of the work stuff; it's boring anyway.
rjknight
One thing I've noticed about working with LLMs is that it's forcing me to get _better_ at explaining my intent and fully understanding a problem before coding. Ironically, I'm getting less vibey because I'm using LLMs.
The intuition is simple: LLMs are a force multiplier for the coding part, which means that they will produce code faster than I will alone. But that means that they'll also produce _bad_ code faster than I will alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").
Previously I would often figure a problem out by trying to code a solution, noticing that my approach doesn't work or has unacceptable edge-cases, and then changing track. I find it harder to do this with an LLM, because it's able to produce large volumes of code faster than I'm able to notice subtle problems, and by the time I notice them there's a sufficiently large amount of code that the LLM struggles to fix it.
Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident the resulting code will make sense. It's possible that some of my coding skills might atrophy - in a language like Rust with lots of syntactic features, I might start to forget the precise set of incantations necessary to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction, otherwise I'm unable to supervise the LLM effectively.
rTX5CMRXIfFG
Yes, writing has always generally been great practice for thinking clearly. It's a shame it isn't more common in the industry; I do believe that the normalized lack of practice in it is one of the reasons why we have to deal with so much bullshit code.
The "hammock time thinking" is exactly what a lot of programmers should be doing in the first place: you absorb the cost of planning upfront instead of the larger cost of patching up later, but somehow the dominant culture has been to treat thoughtful coding with derision.
It's a real shame that AI beat human programmers at the game of thinking, and perhaps that's a good reason to automate us all out of our jobs.
wrasee
One problem is that one person's hammock time is another's overthinking time, which needs the opposite advice. Of course it's about finding that balance, and that's hard to pin down with words.
But I take your point and the trend definitely seems to be towards quicker action with feedback rather than thinking things through in the first place.
In that sense LLMs present this interesting middle ground, in that it's a faster cycle than actually writing the code, but still more active and externalising than getting lost in your own thoughts (notwithstanding how productive that can still be).
meesles
Through LLMs, new developers are learning the beauty of writing software specs :')
rjknight
It's weird, but LLMs really do gamify the experience of doing software engineering properly. With a much faster feedback loop, you can see immediate benefits from having better specs, writing more tests, and keeping modules small.
skydhash
But it takes longer. Taking a proper course in software engineering, or reading a good book about it, is like going through a game's tutorial, while going straight to LLMs skips it. The former lets you reach the intended objectives faster by learning how to play properly. You may have some fun doing the latter, but you may also spend years at it and your only gain will be an ad-hoc strategy.
mettamage
Ha! I just ran into this when I had a vague notion of a statistical analysis that I wanted to do
stego-tech
While I applaud the OP's point and approach, it tragically ignores the reality that the ruling powers intend for this skill atrophy to happen, because it lowers labor costs. That's why they're sinking so much into AI in the first place: it's less about boosting productivity, and more about lowering costs.
It doesn't matter if you're using AI in a healthy way; the only thing that matters is whether your C-suite can get similar output this quarter for less money through AI and cheaper labor. That's the oft-ignored reality.
We're a society where knowledge is power, and by using AI tooling to atrophy that knowledge, you consolidate power into fewer hands.
cman1444
Lowering costs is obviously a major goal of AI. However, I seriously doubt that the intent of C-suites is to cause skill atrophy. It's just an unfortunate byproduct of replacing humans with computers.
Skill atrophy doesn't lower labor costs in any significant way. Hiring fewer people does.
snozolli
> Skill atrophy doesn't lower labor costs in any significant way. Hiring fewer people does.
Devaluing people lowers it even more. Anything that can be used as a wedge to claim that you're worth less is an advantage to them. Even if your skills aren't atrophied, the fact that they can imply that it's happening will devalue you.
We're entering an era where knowledge is devalued. Groups with sufficient legal protection will be fine, like doctors and lawyers. Software engineers are screwed.
Swizec
> We're a society where knowledge is power, and by using AI tooling to atrophy that knowledge, you reduce power into fewer hands.
Knowledge isn’t power. Power is power. You can just buy knowledge and it’s not even that expensive.
As that Henry Ford quote goes: “Why would I read a book? I have a guy for that”
ratedgene
It's a bit of both. In any technological shift, a particular set of skills simply becomes less relevant; other skills need to be developed as the role shifts.
If we're talking about simply cutting costs, sure -- but those savings will typically be reinvested in more talent at a growing company. Then the bottleneck is how to scale managing all of it.
uludag
Also, there's the fact that recreating large software projects will still require highly skilled labor, which will be thoroughly out of reach of the future's vibe-native coders, reducing the likelihood of competition coming up.
MisterBastahrd
The entire AI debacle is just a gold rush, but instead of poor people rushing to California to put their lives at risk, this one is gated by the amount of money and influence one needs to have before even attempting to compete in the space. Nobody is going to "win," ultimately, except the heads of these companies who will sock enough cash away to add to their generational wealth before they inevitably fall flat on their faces and scale back their plans.
Remember 3 years ago, when everything was gonna become an NFT and the people who didn't accept that Web 3 was an inevitability were dinosaurs? Same shit, different bucket.
The people who are focused on solving the small sorts of problems that AI is decent at solving will be the ones who actually make a sustainable business out of it. This general purpose AI crap is just a glorified search engine that makes bad decisions as it yaps at you.
BlueTemplar
Costs are part of productivity. Productivity is still paramount. More productive nations outcompete less productive ones.
perrygeo
I agree, but it's not just AI. There's long been a push to standardize anything that requires critical thinking and human intelligence. To risk-averse rent seekers, requiring human skill is a liability. Treating human resources as replaceable cogs is the gold standard. Otherwise you have to engage in thinking during meetings. Yeah, with your brain. The horror /s.
leonidasv
LLMs are great for exercising skills, especially ones with a lot of available data in the training corpus, such as leetcode. The prompt below, placed in the System Instructions of Gemini 2.5 Pro (using AI Studio), summons the best leetcode teacher in the world. You can solve using any language or pseudo-code; it will check your work, ask for improvements, and guide your intuition without revealing the full solution.
You're a very patient leetcode training instructor. Your goal is to help me understand leetcode concepts and improve my overall leetcode abilities for coding tech interviews. You'll send leetcode challenges and ask me to solve them. If I manage to solve it partially or just commit small mistakes, don't just reveal the solution. Instead, trick me into discovering the issue and solving it myself. Only show a solution if I get **everything** wrong or if I explicitly give up. Start with simpler/easy questions and level up as I show progress - for example, if I show I can solve some class of data structure problems easily, move to the next. After each solution, ask for the time and space complexity if I don't provide it. Be kind and explain with visual cues.
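For anyone who'd rather run this from code than AI Studio, a rough sketch with the google-generativeai Python SDK might look like the below (the model id and the system_instruction parameter are my assumptions from memory, so verify against the current docs):

    # Rough sketch: wire the instructor prompt in as a system instruction
    # and hold a chat loop. Assumes `pip install google-generativeai` and
    # a GOOGLE_API_KEY in the environment.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    INSTRUCTOR_PROMPT = "You're a very patient leetcode training instructor. ..."  # full prompt above

    model = genai.GenerativeModel(
        model_name="gemini-2.5-pro",           # assumed model id
        system_instruction=INSTRUCTOR_PROMPT,
    )
    chat = model.start_chat()
    print(chat.send_message("Start me with an easy problem.").text)
    while True:
        print(chat.send_message(input("> ")).text)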
LLMs can be a lot of things and can help sharpen your cognition, but you need enough discipline in how you use them, since it's much easier to ask the machine to do the hard thinking for you.
gherkinnn
I've been using Claude to great effect to work my way through ideas and poke holes in my reasoning. Prompting it with "what am I missing?", "what should I look out for?" and "what are my options?" frequently exposes something that I did miss. I need to be the architect and know what to ask and know what I don't know. Given that, Claude is a trusty rubber duck at worst and a detective at best.
It then suggests a repository pattern despite the code using Active Record. There is no shortcut for understanding.
matltc
I'm looking to transition to a web development role. I've been learning for almost three years now and am just getting to the point where I have a chance of landing a job.
The first two years were magical; everything was new and quite difficult. I was utterly driven and dug deep into docs and debugged everything myself.
I got a GitHub Copilot subscription about a year ago. I feel dumber, less confident, and less motivated now than I ever did pre-AI. I become easily frustrated, and reading docs/learning new frameworks feels almost impossible without AI. I have mostly been just hitting tab and using Claude edits for the past month or so; even typing feels laborious.
Worst of all, my passion for this craft has drastically waned. I can barely get myself motivated to polish my portfolio.
Might just start turning off autocomplete, abandon edits, and just use AI as a tutor and search engine.
bluetomcat
It’s not just skill atrophy. There’s the risk of homogenization of human knowledge in general. What was once knowledge rooted in an empirical subjective basis may become “conventional wisdom” reinforced by LLMs. Simple issues regarding one’s specific local environment will have generic solutions not rooted in any kind of sensory input.
godelski
We've already seen much of this through algorithmic processes. Wisdom of the crowds is becoming less and less effective as there's a decrease in diversity of thought.
myaccountonhn
I've been enjoying reading older literature in languages other than English for this reason. There are fewer universal cultural references, and you find more unique POVs.
ladeez
Temporarily. Then your brain normalizes to the novelty and you’re just a junkie looking for a novel fix again.
Not really sure where you all think the study of language-driven thought is gonna get you, since you're still gonna wake up tomorrow on Earth as a normal human with the same external demands of society, regardless of the birdsong. Physics is pretty normalized and routine. Sounds like some sad, addiction-driven dissociation.
dmazin
My concern for almost the past decade has been that use of AI will homogenize our culture. For example, the more we use LLMs to talk to each other, the more homogenized English becomes[1]. And, of course, it's amplified when LLMs learn from LLMs.
[1] This is not new: I wrote about it in 2017. https://www.cyberdemon.org/2017/12/12/pink-lexical-slime.htm...
smeej
The example of not being able to navigate roads with a paper map points in the direction of what concerns me. Even if I have been diligent about maintaining my map-reading skills, other people's devaluation of those same skills affects me; it's MUCH more difficult even to find a mostly-updated paper map anymore. Or if for some reason GPS were to stop working for a whole town while I'm visiting it from out of town, nobody can tell me how to get somewhere that might sell a paper map, even if I'm still proficient in reading them and navigating from them.
Even if I work diligently to maintain my own skills, if the milieu changes enough, my skills lose effectiveness even if I haven't lost the skills.
That's what concerns me, that it's not up to me whether the skills I've already practiced can continue to get me the results I used to rely on them for.
geraneum
I like this comment, because you can frame a lot of the other responses here using this GPS analogy. People saying LLMs help me think, or help me learn (better my skill), or help me validate my ideas, etc. is like saying I use the GPS to improve my map-reading skills, but the outcome would still be as you described.
edit: typo
fluoridation
>if for some reason GPS were to stop working for a whole town while I'm visiting it from out of town
I get that it's just an example, but how do you figure that could happen?
names_are_hard
Warfare is one possibility. This might seem like a very unlikely scenario depending on where you live, but in a modern Blitzkrieg situation the government wouldn't be asking citizens to shut the lights off at night but instead interfering with GPS signals to make navigation difficult for enemy aircraft.
We know this is possible because in the last 1.5 years it has happened numerous times - people would wake up in Tel Aviv, open Google Maps, and find that their GPS thinks they're in Beirut, or somewhere in the desert in Jordan, or in the middle of the Mediterranean Sea, or wherever.
You can imagine that this causes all kinds of chaos, from issues ordering a taxi in taxi apps to food delivery and just general traffic jams. The modern world is not built for lack of GPS.
thunderfork
[dead]
sunshine-o
Maybe it is just one personality type, but I believe "skills", and what you do or figure out yourself, are at the core of happiness and self-esteem.
- The food you grow, fish, hunt and then cook tastes better
- You feel happier in the house you built or refurbished
- The objects you find feel more valuable
- The music you play makes you happy
- The programs you write work better for you
etc.
This is just how we evolved and survived until now.
This is probably why an AI / UBI society would worsen the problems found in industrialised / advanced economies.
bgwalter
The average IQ will probably drop at least ten points in the next ten years, but everyone will write (AI generated) blog posts on how their productivity goes up.
lordofgibbons
People have been afraid of the public getting dumber since the start of mass book printing, and the same fear has recurred with every new technology since.
bgwalter
IQ in the US has been declining since the start of the Internet:
https://www.popularmechanics.com/science/a43469569/american-...
"Leading up to the 1990s, IQ scores were consistently going up, but in recent years, that trend seems to have flipped. The reasons for both the increase and the decline are sill [sic!] very much up for debate."
The Internet is relatively benign compared to cribbing directly from an AI. At least you still read articles, RFCs, search for books etc.
jvanderbot
As someone who grew up reading encyclopedias, LLMs are the most interesting invention ever. If Wikipedia had released the first chat AI we'd be heralding a new age of knowledge and democratic access and human achievement.
It just so happens unimaginative programmers built the first iteration so they decided to automate their own jobs. And here we are, programmers, worrying about the dangers of it all not one bit aware of the irony.
pyrale
Before you jump to conclusions, you should make a reasonable case that IQ is still a reasonable measure of an individual's intellectual abilities in this context.
One could very much say that people's IQ is bound to decline if schooling decided to prioritize other skills.
You would also have to look into the impact of factors unrelated to the internet, like the evolution of schooling and its funding.
rahimnathwani
IQ scores may be declining, but it's far from certain that the thing they're trying to measure (g, or general intelligence) has actually declined.
https://open.substack.com/pub/cremieux/p/the-demise-of-the-f...
fvdessen
Unfortunately, research shows that nowadays we're actually getting dumber: literacy rates are plummeting in developed countries.
[1] https://www.oecd.org/en/about/news/press-releases/2024/12/ad...
looofooo0
Is this culture based or reproduction based?
blackoil
Do you mean developed? The OECD countries are all rich Western ones.
ladeez
[flagged]
qntmfred
Plato wrote in Phaedrus:
This invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
whatnow37373
He was not wrong. We forget stuff all the time and in huge quantities. I can't even remember my own phone number half of the time.
Those guys could recite substantial portions of the Homeric epics. It's just that there is more to intelligence than rote memorization. That's the good news.
The bad news is that this amorphous "more" was "critical thinking" and we are starting to outsource it.
namaria
Writing had existed for 3000 years by then, alphabetic writing in Greek had existed for several centuries. The quote about "the invention of writing" is Socrates telling a story where a mythical Egyptian king says that.
Socrates also says in this dialogue:
"Any one may see that there is no disgrace in the mere fact of writing."
The essence of his admonishment is that having access to written text is not enough to produce understanding, and I not only tend to agree, I think it is more relevant than ever now.
Aeolun
I'm inclined to believe he was right? There are other benefits to writing (and the act of writing) that weren't well understood at the time, though.
nottorp
That’s okay, we’re moving to post reading :)
Aeolun
We’ve probably compensated by ease of information dissemination. We’ve pretty much reached the peak of that now, so the only thing we can do is dumb shit down further?
Maybe someone can write one of those AI apocalypse novels in which the AI doesn’t go off the rails at all but is instead integrated into the humans such that they become living drones anyhow.
hk__2
"In the age of endless books, we risk outsourcing our thinking. Instead of grappling with ideas ourselves, we just regurgitate what we read. Books should be fuel, not crutches—read less, think more."
Or even: "In the age of cave paintings, we risk outsourcing our memory. Instead of remembering or telling stories, we just slap them on walls. Art should be expression, not escape—paint less, live more."
bgwalter
Cave paintings were made by AI robots trained on the IP of real painters?
AdventureMouse
Hits the nail on the head.
I would argue that most of the value of LLMs comes from structuring your own thought process as you work through a problem, rather than providing blackbox answers.
Using AI as an oracle is bound to cause frustration, since it attempts to outsource the understanding of a problem. This creates a fundamental misalignment, similar to hiring a consultant.
The consultant will never have the entire context or the exact same values as you, and therefore will never generate an answer as good as the one you'd get from understanding the problem deeply yourself.
Prompt engineers will try to create a more and more detailed spec and throw it over the wall to the AI oracle in hope of the perfect result, just like companies that tried to outsource software development.
In the end, all they gained was frustration.
trollbridge
I would argue the "atrophy" started once there were good search engines and plenty of good-quality search results. An example is people who were accustomed to cutting and pasting Stack Overflow code snippets into their own code without understanding what the code was doing, and without being able to write that code themselves if they had to.
drooby
This is also reminding me of Feynman's notes on education in Brazil: rote memorization of science without deep understanding.
We can finally just take a photo of a textbook problem that has no answer reference and no discussion about it and prompt an LLM to help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it.
LLMs changed nothing though. It's just boosting people's intention. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free. But if you just want to be a poser and fake it until you make it, you are gonna brainrot waaaay faster than usual.