Developing our position on AI
68 comments
July 23, 2025 · nicholasjbs
itwasntandy
Thank you Nick.
As a Recurse alum (s14 batch 2), I loved reading this. I loved my time at Recurse and learned a lot. This highlight from the post really resonates:
“Real growth happens at the boundary of what you can do and what you can almost do. Used well, LLMs can help you more quickly find or even expand your edge, but they risk creating a gap between the edge of what you can produce and what you can understand.
RC is a place for rigor. You should strive to be more rigorous, not less, when using AI-powered tools to learn, though exactly what you need to be rigorous about is likely different when using them.”
PaulHoule
Kinda funny, but my current feeling about it is different from a lot of people's.
I did a lot of AI assisted coding this week and I felt, if anything, it wasn't faster but it led to higher quality.
I would go through discussions about how to do something: it would give me a code sample, I would change it a bit to "make it mine", ask if I got it right, get feedback, and so on. Sometimes it would use features of the language or the libraries that I didn't know about before, so I learned a lot. With all the rubber ducking I thought through things in a lot of depth, asked a lot of specific questions, and usually got good answers -- I checked a lot of things against the docs. It would help a lot if it could give me specific links to the docs and also specific links to code in my IDE.
If there is some library that I'm not sure how to use, I will load up the source code into a fresh copy of the IDE and start asking questions in that IDE, not the one with my code. Given that it can take a lot of time to dig through code and understand it, having an unreliable oracle can really speed things up. So I don't see it as a way to get things done quickly, but like pairing with somebody who has very different strengths and weaknesses from me, and, as with pair programming, you get better quality. This week I walked away with an implementation that I was really happy with, and I learned more than if I'd done all the work myself.
andy99
> I did a lot of AI assisted coding this week
Are you new to it? There's a pretty standard arc that starts with how great it is and ends with all the "giving up on AI" blog posts you see.
I went through it too. I still use a chatbot as a better Stack Overflow, but I've stopped actually having AI write any code I use. It's not just the quality; it's the impact on my thinking and understanding that ultimately doesn't improve outcomes over just doing it myself.
PaulHoule
I've been doing it for a while. I never really liked Stack Overflow, though; it always seemed like a waste of time versus learning how to look up the real answers in the documentation. I never really liked agents because they go off for 20 minutes and come back with complete crap. But if I can ask a question, get an answer in 20 seconds, and iterate again, I find that's pretty efficient.
I've usually been skeptical about people who get unreasonably good results, and not surprised when they wake up a few weeks later and are disappointed. One area where I am consistently disappointed is when there are significant changes across versions: I had one argue about whether I could write a switch in Java that catches null (you can in JDK 21), and I've had lots of trouble with SQLAlchemy in Python, which changed a lot between versions. I shudder to think what would happen if you asked questions about react-router, but actually I shudder to think about react-router at all.
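(As a concrete illustration of that version drift: a minimal sketch of the SQLAlchemy 1.x-to-2.0 query-style change, my own example rather than anything from the comment; the User model here is hypothetical.)

    from sqlalchemy import String, create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    # Hypothetical mapped class, for illustration only.
    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(String(50))

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # Legacy 1.x style, abundant in older training data:
        legacy = session.query(User).filter(User.name == "pat").all()
        # 2.0 style: select() plus Session.scalars():
        modern = session.scalars(select(User).where(User.name == "pat")).all()

A model trained mostly on pre-2.0 code can easily produce the first form, or mix the two.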
whynotminot
When's the last time you "went through the loop"? I feel like with this stuff I have to update my priors about every three or four months.
I've been using AI regularly since GPT-4 first came out a couple of years ago. Over that time, various models from Sonnet to Gemini to 4o have generally been good rubber ducks: good to talk through approaches and tradeoffs with, and better in general than Google + Stack Overflow + poring over verbose documentation.
But I couldn't really "hand the models the wheel." They weren't trustworthy enough, easily lost the plot, failed to leverage important context right in front of them in the codebase, etc. You could see that there was potential there, but it felt pretty far away.
Something changed this spring. Gemini 2.5 Pro, Claude 4 models, o3 and o4-mini -- I'm starting to give the models the wheel now. They're good. They understand context. They understand the style of the codebase. And they of course bring the immense knowledge they've always had.
It's eerie to see, and to think about what comes with the next wave of models coming very soon. And if the last time you really gave model-driven programming a go was 6 months or more ago, you probably have no idea what's about to happen.
ryandrake
Just one person's opinion: I can't get into the mode of programming where you "chat" with something and have it build the code. By the time I have visualized in my head and articulated into English what I want to build and the data structures and algorithms I need, I might as well just type the code in myself. That's the only value I've found from AI: It's a great autocomplete as you're typing.
To me, programming is a solo activity. "Chatting" with someone or something as I do it is just a distraction.
andy99
Interesting point; I agree that things change so fast that experience from a few months ago is out of date. I'm sceptical there has been a real step change (especially based on the snippets I see Claude 4 writing in answer to questions), but it never hurts to try again.
My most recent stab at this was Claude code with 3.7, circa March this year.
To be fair, though, a big part of the issue for me is that, having not done the work or properly thought through how a project is structured and how the code works, it comes back to bite me later. A better model doesn't change this.
otabdeveloper4
> just try a newer model bro
Fundamentally, none of the issues inherent in LLMs will be fixed by increasing parameter count or better weights.
Shilling for the newest model is a dead end; better to figure out how to best put LLMs to use despite their limitations.
resonious
I've been back and forth, and currently heavily relying on AI-written code. It all depends on knowing what the AI can and can't do ahead of time. And what it can do often overlaps with grunt work that I don't enjoy doing.
positron26
> There's a pretty standard arc that starts with how great it is and ends with all the "giving up on AI" blog posts you see.
Wouldn't be shocked if this is related to getting ramped up.
Switch to a language you don't know. How will that affect your AI usage? Will it go up or down over time?
I had similar experiences during the early days of SO, and when diving into open source projects. Suddenly you go from being stuck in your bubble to learning all kinds of things. Your code gets good. Then the returns diminish, and you no longer curiously read unless the library is doing something you can't imagine.
gerdesj
"I would go through discussions about how to do something"
Have you compared that to your normal debugging thought processes? I get that you might be given another way to think about the problem, but another human might be best for that, rather than a next-token guesser.
I have a devil of a time getting my team and the wider org, the younger ones mainly, to pick up a phone instead of sending emails or chats or whatever. A voice chat can solve a problem within minutes or even seconds, instead of the rather childish game of email ping-pong. I do it myself too (email etc.), and I even encourage it, despite what I said earlier. Effective use of comms is a skill, but you do need to understand when to use each variety.
furyofantares
This is great. It's so easy to get into "go fast" mode that this potential gets overlooked a lot.
x86x87
Viewing it as an assistant is the way to go. It's there to help you, like an overpowered autocomplete, but not there to think for you.
dbtc
> It would help a lot if it could give me specific links to the docs
Just a super quick test: "what are 3 obscure but useful features in python functools. Link to doc for each."
GPT 4o gave good links with each example.
(its choices were functools.singledispatch, functools.total_ordering, functools.cached_property)
Not sure about local code links.
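(For reference, a minimal sketch of those three features -- my own illustration, not the model's output; all three are documented at https://docs.python.org/3/library/functools.html. The Version and Config classes are made up for the example.)

    from functools import cached_property, singledispatch, total_ordering

    @singledispatch              # dispatch on the type of the first argument
    def describe(value):
        return f"something: {value!r}"

    @describe.register
    def _(value: int):
        return f"an integer: {value}"

    @total_ordering              # fills in <=, >, >= from __eq__ and __lt__
    class Version:               # illustrative class, not a real library type
        def __init__(self, major: int, minor: int):
            self.major, self.minor = major, minor
        def __eq__(self, other):
            return (self.major, self.minor) == (other.major, other.minor)
        def __lt__(self, other):
            return (self.major, self.minor) < (other.major, other.minor)

    class Config:                # illustrative class
        @cached_property         # computed on first access, then stored on the instance
        def settings(self) -> dict:
            return {"debug": True}

    assert describe(3) == "an integer: 3"
    assert Version(2, 1) >= Version(2, 0)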
steveklabnik
I've had this return great results, and I've also had this return hallucinated ones.
This is one area where MCPs might actually be useful, https://context7.com/ being one of them. I haven't given it enough of a shot yet, though.
Shorel
Some people copy and paste snippets of code without knowing what they do, and in a sense, they spread technical debt around.
LLMs reduce the technical debt spread by the clueless, bringing it down to a lower baseline.
You were part of the clueless, so an LLM improved your code and lowered the technical debt you would have spread.
vouaobrasil
> RC is a place for rigor. You should strive to be more rigorous, not less, when using AI-powered tools to learn, though exactly what you need to be rigorous about is likely different when using them.
This brings up an important point about a LOT of tools, which many people don't talk about: namely, with a tool as powerful as AI, there will always be a minority of people with a healthy and thoughtful attitude towards its use, but a majority who use it improperly, because its power is too seductive and human beings on average are lazy.
Therefore, even if you "strive to be more rigorous", you WILL be part of a minority helping to drive a technology that is just too powerful to make any positive impact on the majority. The majority will suffer, because they need an environment where they are forced not to cheat in order to learn and gain basic competence, which I'd argue is far more crucial to a society than the top few having a lot of competence.
The individualistic will say that this is an inevitable price for freedom, but in practice, I think it's misguided. Universities, for example, NEED to monitor the exam room, because otherwise cheating would be rampant, even if there is a decent minority of students who would NOT cheat, simply because they want to maximize their learning.
With such powerful tools as AI, we need to think beyond our individualistic tendencies. The disciplined will often tout their balanced philosophy as justification for tool use, as this Recurse post does here, but what they forget is that promoting such a philosophy lends more legitimacy to the use of AI, which the world at large is not capable of handling.
In a fragile world, we must take responsibility beyond ourselves, and not promote dangerous tools even if a minority can use them properly.
This is why I am 100% against AI – no compromise.
ctoth
Wait, you're literally advocating for handicapping everyone because some people can't handle the tools as well as others.
"The disciplined minority can use AI well, but the lazy majority can't, so nobody gets to use it" I feel like I read this somewhere. Maybe a short story?
Should we ban calculators because some students become dependent on them? Ban the internet because people use it to watch cat videos instead of learning?
You've dressed up "hold everyone back to protect the incompetent" as social responsibility.
I never actually thought I would find someone who read Harrison Bergeron and said "you know what? let's do that!" But the Internet truly is a vast and terrifying place.
vouaobrasil
A rather shallow reply, because I never implied that there should be enforced equality. For some reason, I constantly get these sorts of "false dichotomy" replies here, where the dichotomy is strongly exaggerated. Maybe it's due to the computer scientist's constant use of binary, who knows.
Regardless, I only advocate for restricting technologies that are too dangerous, much in the same way that atomic weapons are highly restricted while people can still own knives and even use guns in some circumstances.
I have nothing against the most intelligent using their intelligence wisely and doing more than the less intelligent, provided wise use is even possible. In the case of AI, I submit that it is not.
jononor
Why is "AI" (current LLM-based systems) a danger on a level comparable to nukes? Not saying that it is not; I just would like to understand your reasoning.
ctoth
Who decides what technologies are too dangerous? You, apparently.
AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?
You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it."
If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.
usernamed7
Why are you putting down a well-reasoned reply as being shallow? Isn't that... shallow? Is it because you don't want people to disagree with you or point out flaws in your arguments? You seem to take an absolutist black/white approach and disregard any sense of nuance.
atq2119
> Wait, you're literally advocating for handicapping everyone because some people can't handle the tools as well as others.
No, they're arguing on the grounds that the tools are detrimental to the overwhelming majority in a way that also ends up being detrimental to the disciplined minority!
I'm not sure I agree, but either way you aren't properly engaging with their actual argument.
vouaobrasil
Second reply to your expanded comment: I think some technologies are just versions of the prisoner's dilemma, where no one is really better off with the technology. And one must decide case by case, similar to how the Amish decide what is best for their society.
Again, even your expanded reply shrieks with false dichotomy. I never said ban every possible technology, only ones that are sufficiently dangerous.
jononor
I agree with your reasoning. But the conclusion seems to be throwing the baby out with the bathwater?
The same line of thought can be used for any (new) tool, say a calculator, a computer, or the internet. Shouldn't we try to find responsible ways of adopting LLMs that empower the majority?
seabass
> One particularly enthusiastic user of LLMs described having two modes: “shipping mode” and “learning mode,” with the former relying heavily on models and the latter involving no LLMs, at least for code generation.
Crazy that I agreed with the first half of the sentence and was totally thrown off by the end. To me, "learning mode" is when I want the LLM. I'm in a new domain and I might not even know what to google yet, what libraries exist, or what key words or concepts are relevant. That's where an LLM shines: I can see basic, generic code that's well explained and quickly get the gist of something new. Then there's "shipping mode", where quality is my priority and subtle, sneaky bugs really ought to be avoided, the kind I encounter so often with AI-written code.
JSR_FDED
The e-bike analogy in the article is a good one. Paraphrasing: use it if you want to cover distance with low effort. But if your goal is fitness, then the e-bike is not the way to go.
tokioyoyo
But there's also something in between, an e-assisted bike, which covers a lot of distance but still requires some extra effort. And it helps a bit with fitness too. That's how I would categorize AI-assisted coding right now.
ben-schaaf
That's what an e-bike is. If the motor is doing all of the work, it's called a motorcycle.
lazyasciiart
There are some that can switch now: pedal and it will e-assist you, or just hold the lever and it will run without pedaling.
viccis
It is a good one. I'm going to keep it in my pocket for future discussions about AI in education, as I might have some say in how a local college builds policy around AI use. My attitude has always been that it should be proscribed in any situation in which the course is teaching what the AI is doing (Freshman writing courses, intro to programming courses, etc.) and that it should be used as little as possible for later courses in which it isn't as clearly "cheating". My rationale is that, for both examples of writing and coding, one of the most useful aspects of a four year degree is that you gain a lot from constantly exercising these rudimentary skills.
layer8
The analogy doesn't work too well, in my opinion. An e-bike can get you, with low effort, basically anywhere a regular bike can. The same is not true for AI vs. non-AI in its current state. AI is limited in which goals you can reach with it with low effort, and it will steer you towards those goals if you don't want to expend much effort. There's a quality gradient with AI, depending on how much extra effort you want to spend, that isn't there in the e-bike analogy of getting from A to B.
brunooliv
It's a thin line to walk for me, and the whole "skill atrophy" aspect is the hardest thing not to slip into. What I've personally liked about these tools is that they give me ample room to explore and experiment with different approaches to a particular problem, because translating a valid one into "the official implementation" is then very easy.
I'm a guy who likes to DO things to validate assumptions: if there's some task where something should be written concurrently to be efficient, and then we need some post-processing to combine the results, and so on, well, before Claude Code I'd write a scrappy prototype (think a single MVC "slice" of all the distinct layers, but all in a single Java file) to experiment, validate assumptions, and uncover the unknown unknowns.
It’s how I approach programming and always will. I think writing a spec as an issue or ticket about something without getting your hands dirty will always be incomplete and at odds with reality. So I write, prototype and build.
With a "validated experiment", I'd still need a lot of cleanup and post-processing to make it production-ready. Now it's a prompt! The learning is still the process of figuring things out and validating assumptions, but the "translation to formal code" part is basically solved.
Obviously, it's also a great unblocking mechanism when I'm stuck on something, be it a complex query or me FEELING an abstraction is wrong but not seeing a good one.
Karrot_Kream
(Full disclosure: I have a lot of respect for RC and have thought about applying to attend myself. This will color my opinion.)
I really enjoyed this article. The numerous anecdotes from folks at RC were great. In particular, thanks for sharing this video of voice coding [1].
This line in particular stood out to me; I use it to think about LLMs myself:
"One particularly enthusiastic user of LLMs described having two modes: “shipping mode” and “learning mode,” with the former relying heavily on models and the latter involving no LLMs, at least for code generation."
Sometimes when I use Claude Code I either put it in Plan Mode or tell it to not write any code and just rubber duck with it until I come up with an approach I like and then just write the code myself. It's not as fast as writing the plan with Claude and asking it to write the code, but offers me more learning.
tqi
> Thoughtful, extremely capable programmers disagree on what models can do today, and whether or not they’re currently useful.
Is anyone John Henry-ing this question and having parallel teams build the same product at the same time?
npinsker
Such a thoughtful and well-written article. One of my biggest worries about AI is its impact on the learning process of future professionals, and this feels like a window into the future, hinting at the effect on unusually motivated learners (a tiny subset of people overall, of course). I appreciated the even-handed, inquisitive tone.
foota
I really want to spend some time at the Recurse Center, but the opportunity cost feels so high
betterhealth12
Right now, the opportunity cost is probably as high as it's ever been (unrelated, but the same also applies to people considering business school, etc.). What got you looking into it?
pyb
In what sense?
zoky
The problem is that in order to spend time at the Recurse Center, you first have to spend time at the Recurse Center.
entaloneralie
I feel like John Holt, author of Unschooling, who is quoted numerous times in the article, would not be too keen on seeing his name in a post that legitimizes a technology that uses inevitabilism to insert itself into all domains of life.
--
"Technology Review," the magazine of MIT, ran a short article in January called "Housebreaking the Software" by Robert Cowen, science editor of the "Christian Science Monitor," in which he very sensibly said: "The general-purpose home computer for the average user has not yet arrived. Neither the software nor the information services accessible via telephone are yet good enough to justify such a purchase unless there is a specialized need. Thus, if you have the cash for a home computer but no clear need for one yet, you would be better advised to put it in liquid investment for two or three more years." But in the next paragraph he says "Those who would stand aside from this revolution will, by this decade's end, find themselves as much of an anachronism as those who yearn for the good old one-horse shay." This is mostly just hot air.
What does it mean to be an anachronism? Am I one because I don't own a car or a TV? Is something bad supposed to happen to me because of that? What about the horse and buggy Amish? They are, as a group, the most successful farmers in the country, everywhere buying up farms that up-to-date high-tech farmers have had to sell because they couldn't pay the interest on the money they had to borrow to buy the fancy equipment.
Perhaps what Mr. Cowen is trying to say is that if I don't learn how to run the computers of 1982, I won't be able later, even if I want to, to learn to run the computers of 1990. Nonsense! Knowing how to run a 1982 computer will have little or nothing to do with knowing how to run a 1990 computer. And what about the children now being born and yet to be born? When they get old enough, they will, if they feel like it, learn to run the computers of the 1990s.
Well, if they can, then if I want to, I can. From being mostly meaningless, or, where meaningful, mostly wrong, these very typical words by Mr. Cowen are in method and intent exactly like all those ads that tell us that if we don't buy this deodorant or detergent or gadget or whatever, everyone else, even our friends, will despise, mock, and shun us: the advertising industry's attack on the fragile self-esteem of millions of people. This using of people's fear to sell them things is destructive and morally disgusting.
The fact that the computer industry and its salesmen and prophets have taken this approach is the best reason in the world for being very skeptical of anything they say. Clever they may be, but they are mostly not to be trusted. What they want above all is not to make a better world, but to join the big list of computer millionaires.
A computer is, after all, not a revolution or a way of life but a tool, like a pen or wrench or typewriter or car. A good reason for buying and using a tool is that with it we can do something that we want or need to do better than we used to do it. A bad reason for buying a tool is just to have it, in which case it becomes, not a tool, but a toy.
"On Computers," Growing Without Schooling #29, September 1982, by John Holt.
nicholasjbs
I don't agree with your characterization of my post, but I do appreciate your sharing this piece (and the fun flashback to old, oversized issues of GWS). Such a tragedy that Holt died shortly after he wrote it; I would have loved to hear what he thought of the last few decades of computing.
entaloneralie
Same. Your post sent me down a rabbit hole of reading all sorts of guest articles he did left and right, and it really made me wonder what he'd think of all this. I feel like his views on technology changed over his lifetime. He got more... I dunno, cynical over time?
viccis
>author of Unschooling
You say this like it should give him more credibility. He created a homeschooling methodology that scores well below structured homeschooling in academic evaluations. And that's generously assuming it's being practiced in earnest, rather than the way I've seen people practice it (effectively just child neglect with high-minded justification).
I have absolutely no doubt that a quack like John Holt would love AI as a virtual babysitter for children.
nicholasjbs
(Author here.)
This was a really fascinating project to work on because of the breadth of experiences and perspectives people have on LLMs, even when those people all otherwise have a lot in common (in this case, experienced programmers, all Recurse Center alums, all professional programmers in some capacity, almost all in the US, etc). I can't think of another area in programming where opinions differ this much.