
Cursor told me I should learn coding instead of asking it to generate it

andai

This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]

With programming, the same basic tension exists as with the more effective smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.

So the problem seems to boil down to: how we can convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.

So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?

In terms of integrating that approach into your actual work (so you can stay sharp throughout your career), it's even worse than just laziness; it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.

And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.

[0] GPT-4.5

sycren

I think this boils the problem down solely to the individual. What if companies realised this issue and made a period of time during the day devoted solely to learning, knowledge management and innovation. If their developers only use AI to be more productive, then the potential degradation of intellect could stifle innovation and make the company less competitive in the market. It would be interesting if we start seeing a resurgence of Google's mythical 10% rule with companies more inclined to let any employee create side projects using AI (like Cursor) that could benefit the company.

TeMPOraL

> With programming, the same basic tension exists as with the more effective smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.

That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.

That's literally the entirety of human technological advancement, in a nutshell. We'd ideally avoid all effort that's incidental to the goal, but if we can't (we usually can't), we invent tools that reduce this effort - and iterate on them, reducing the effort further, until eventually, hopefully, the human effort goes to 0.

Is that a "basic human (animal?) instinct to take the path of least resistance"? Perhaps. But then, that's progress, not a problem.

DanHulton

_Kinda sorta?_

There's actually two things going on when I'm coding at work:

1) I'm writing the business code my company needs to enable a feature/fix a bug/etc.

2) I'm getting better as a programmer and as someone who understands our system, so that #1 happens faster and is more reliable next time.

Using AI codegen can (arguably, the jury is still out on this if we include total costs, not just the costs of this one PR) help with #1. But it is _appreciably bad_ at #2.

In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2, and, again, arguably #1 as well), it is just a problem in disguise.

TeMPOraL

> In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2 (...)

Plagiarism is a great analogy. It's great because what we call plagiarism in an academic setting, in real-world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and such.

So the question really is: how much do we value #2? Or rather, which aspects of it do we value, because I see at least two:

A) "I'm getting better as a programmer"

B) "I'm getting better as someone who understands our system"

As much as I hate it, the brutal truth is, you and I are not being paid for A). The business doesn't care, and they only get so much value out of it anyway. As for B), it's tricky to say whether and when we should care - most software is throwaway, and the only thing that happens faster than that is the programmers working on it changing jobs. Long-term, B) has very little value; short-term, it might benefit both the business and the programmer (the latter by virtue of making the job more pleasant).

I think the jury is still out on how LLMs affect A). I feel that it's not making me dumber as a programmer, but I'm from the cohort of people with more than a decade of programming experience before even touching a language model, so I have a different style of working with LLMs than people who started using them with less experience, or people who never learned to code without them.

alonsonic

It's ironic to see people say this kind of thing and not think about old software engineering practices that are now obsolete because, over time, we have created more and more tools to simplify the craft. This is yet another step in that evolution. We are no longer using punch cards or writing assembly code, and in the future we might not write actual code at all, just instruct AIs to achieve goals. This is progress.

boredhedgehog

> We are no longer using punch cards or writing assembly code

I have done some romhacks, so I have seen what compilers have done to assembly quality and readability. When I hear programmers complain that having to debug AI written code is harder than just writing it yourself, that's probably exactly how assembly coders felt when they saw what compilers produce.

One can regret the loss of elegance and beauty while accepting the economic inevitability.

TeMPOraL

Elsewhere in this discussion thread[0], 'ChrisMarshallNY compares this to feelings of insecurity:

> It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it, is a big part of growing up (...)

I believe he's right.

It makes me think back to my teenage years, when I first learned to program because I wanted to make games. Within the amateur gamedev community, we had this habit of sneering at "clickers" - Klik&Play & other kinds of software we'd today call "low-code", that let you make games with very little code (almost entirely game logic, and most of it "clicked out" in GUI), and near-zero effort on the incidental aspects like graphics, audio, asset management, etc. We were all making (or pretending to make) games within scope of those "clickers", but using such tools felt like cheating compared to doing it The Right Way, slinging C++ through blood, sweat and tears.

It took me over a decade to finally realize how stupid that perspective was. Sure, I've learned a lot; a good chunk of my career skills date back to those years. However, whatever technical arguments we levied against "clickers", most of them were bullshit. In reality, this was us trying to feel better, special, doing things The Hard Way, instead of "taking shortcuts" like those lazy people... who, unlike us, actually released some playable games.

I hear echoes of this mindset in a lot of "LLMs will rot your brain" commentary these days.

--

[0] - https://news.ycombinator.com/item?id=43351486

lr4444lr

Things like memory-safe languages and JS DOM-managed frameworks are limited-scope, solved problems for most business computing needs, outside of some very marginal edge cases.

AI generated code? That seems a way off from being a generalized solved problem in an iterative SDLC at a modern tech company trying to get leaner, disrupt markets, and survive in a complex world. I for one am very much in support of it for engineers with the unaided experience under their belt to judge the output, but the idea that we're potentially going to train new devs at unfamiliar companies on this stuff? Yikes.

HeatrayEnjoyer

Progress is more than just simplistic effort reduction. The attitude of more efficient technology = always good is why society is quickly descending into a high-tech dystopia.

cursor_it_all

> that’s progress, not a problem.

Agreed. Most of us aren’t washing our clothes with a washboard, yet the washboard not long ago was a timesaver. Technology evolves.

Now, if AI rises up against the government and Cursor becomes outlawed, then maybe your leet coder skills will matter again.

But when a catastrophic solar storm takes out the grid, a washboard may be more useful, to give you something to occupy yourself with while the hard spaghetti of your dwindling food supply slowly absorbs tepid rainwater, and you wish that you’d actually moved to Siberia and learned to live off the land as you once considered while handling a frustrating bug in production that you could’ve solved with Claude.

fransje26

> [..] but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.

No. In learning, there is no substitute for practice. Once you jump to the conclusion, you stop understanding the "why".

Throughout various stages of technological advancement, we've come up with tools to help relieve us of tedious efforts, because we understood the "why", the underlying reason of what the tools were helping us solve in the first place. Those that understood the "why" could build upon those tools to further advance civilization. The others were left behind to parrot about something they did not understand.

But - and ironically so, in this case - as with most things in life, it is about the journey and not the destination. And in the grand scheme of things, it doesn't really matter if we take a more efficient road, or a less efficient road, to reach a life lesson.

Enjoy your journey!

rhdsgF

Indeed, taking the work of others, stripping the license and putting it on a photocopier is the most efficient way of "work". This is progress, not a problem.

brulard

> because they're letting AI do all the work

This is unnecessary hyperbole. It's like saying that your reportees do all the work for you. You need to put in effort to understand the strengths and weaknesses of AI, put it to good work, and make sure to double-check its results. Low-skill individuals are not going to get great results for moderately complex tasks with AI. It's absurd to think it will do "all the work". I believe we are at the point of SW engineering skills shifting from understanding all the details of programming languages and tooling toward higher-level thinking and design.

Although I do see that, without proper processes (code reviews, guidelines, etc.), use of AI can get out of hand to the point of a very bloated and unmaintainable code base. Well, as with any powerful technology, it has to be handled with care.

moffkalast

Those damn kids, compilers and linters doing all the work for them. Back in my day we punched bits into a card by hand. /s

It's just people ranting about another level of abstraction, like always.

4ndrewl

Your ability to do programming may decline, but that's not the aim here, is it? The aim is to solve a problem for a user, not to write code. If we solve that using different tools, maybe our cognitive abilities will just focus on something else?

brulard

Exactly. If there is a tool that can do a lot of the low level work for us, we are free to do more of the higher level tasks and have higher output overall. Programming should be just a means to an end, not the end itself.

GoblinSlayer

The purpose of the least-resistance instinct is to conserve the organism's resources due to scarcity of food. Consequently, in the absence of food scarcity, this instinct is suboptimal.

TeMPOraL

Even in abundance, least-resistance instinct is strictly beneficial when applied to things you need to do, as opposed to things you want to do.

HPsquared

Time is always scarce.

GoblinSlayer

For short term wins you pay with long term losses.

42lux

Are you going to take my washing machine next? AI is a gateway to more time to do whatever you want. It's your decision whether to let your brain rot away or not.

raincole

> effectiveness is a function of effort

In other words, the Industrial Revolution was a mistake?

williamcotton

Is humanity itself a mistake? The fall of man himself? Are we to be redeemed through blog posts in the great 21st century AI flamewars?

There's a reason these technologies are so controversial. They question our entire existence.

These threads of conversation border on religious debate.

nonrandomstring

> against the basic human (animal?) instinct to take the path of least resistance.

Fitness. Nobody goes for a run because they need to waste half an hour. Okay, some people just have energy and time to burn, and some like to code for the sake of it (I used to). We need to do things we don't like from time to time in order to stay fresh. That's why we have drills and exercises.

larodi

Interestingly, many here fail to note that development of code is a lot about debugging, not only about writing. It's also about being able to dig/search/grok the code, which is like... reading it.

It is the debugging part to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and integration. THIS is what teaches you; writing alone teaches you nothing... you can copy programs by hand and understand zero of what they do unless you inspect intermediate results.

Hand-crafting a house is super romantic and nice, etc. It's something people did for ages, usually not alone - with family and friends. But people today live in houses/apartments whose foundations were produced by automated lines (robots) - the steel, the mixture for the concrete, etc. And people still live in houses built this way, designed with computers that automated the drawing. I fail to understand why this is bad.

jeandesuis

Sheesh, I didn't expect my post to go viral. Little explanation:

I downloaded and ran Cursor for the first time when this "error" happened. It turned out I was supposed to use the agent instead of the inline Cmd+K command, because inline has some limitations while the agent has far fewer.

Nevertheless, I was surprised that the AI could actually say something like that, so just in case I screenshotted it - some might think it's fake, but it's actually real, and it makes me wonder if in the future AIs will start giving attitude to their users. Oh, welp. For sure I didn't expect it to blow up like this; since it was all new to me I thought it was maybe an easter egg or just a silly error. Turned out it hadn't been seen before, so there we are!

Cheers

cm2187

It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves. Not even just about coding, about any other knowledge.

motorest

> It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves.

I am from the generation whose only options on the table were RTFM and/or read the source code. Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore, but chatbots are.

I welcome chatbots. They greatly simplify research tasks. We are no longer bound to stale/poorly written docs.

I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.

usrbinbash

> Your blend of comment was also directed at the likes of Google and StackOverflow.

No, it wasn't.

What such comments were directed at, and with good reason, were 'SO-"Coders"', aka people who, when faced with any problem, just googled a vague description of it, copy-pasted the code from the highest-scoring SO answer into their project, and called it a day.

SO is a valuable resource. AI systems are a valuable resource. I use both every day, just as I almost always have one screen dedicated to some documentation page.

The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of one's own.

jstummbillig

But skill will be needed. It's everything that is still necessary between nothing and (good) software existing. It will just rapidly become something that we are not used to, and the rate of change will be challenging, especially for those with specialized, hard-earned skills and knowledge that become irrelevant.

strken

Yes, they were, for reasons that have turned out to be half-right and half-wrong. At least by some people. Ctrl-c ctrl-v programming was widely derided, but people were also worried about a general inability to RTFM.

I had the good fortune to work with a man who convinced me to go read the spec of some of the programming languages I used. I'm told this was reasonably common in the days of yore, but I've only rarely worked on a team with someone else who does it.

Reading a spec or manual helps me understand the language and aids with navigating the text later when I use it as documentation. That said, if all the other programmers can do their jobs anyway, is it really so awful for them to learn from StackOverflow and Google? Probably not.

I imagine the same is true of LLMs.

johnisgood

I agree. Claude is very useful to me because I know what I am doing and I know what I want it to do. Additionally, I keep telling my friend who is studying data science to use LLMs to his advantage. He could learn a lot and be productive.

ookblah

at risk of sounding like a grandpa, this is nothing like SO. AI is just a tool for sure, one that "can" behave like a super-enhanced SO and Google, but for the first time ever it can actually write for you - and not just piddly lines but entire codebases.

i think that represents a huge paradigm shift that we need to contend with. it isn't just "better" research. and i say this as someone who welcomes all of this that has come.

IMO the skill gap just widens exponentially now. you will either have the competent developers who use these tools to accelerate their learning and/or output by some X factor, and on the other hand you will have literal garbage being created, or people who figure out they can now expend 1/10 the effort and time to do something and just coast, never bothering to even understand what they wrote.

just encountered that in some interviews where people can now scaffold something up in record time but can't be bothered to refine it because they don't know how. (ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying to re-prompt and micromanage the LLM to get the right output, when it should only take another minute for someone experienced.)

vaylian

> this is nothing like SO

Strong agree. There have been people who blindly copied answers from Stack Overflow without understanding the code, but most of us took the time to read the explanations that accompanied the answers.

While you can ask the AI to give you additional explanations, these explanations might be hallucinations and no one will tell you. On SO other people can point out that an answer or a comment is wrong.

TiredOfLife

> this is nothing like SO

Agree. AI answers actually work and are less out of date.

opan

The issues I see are that private chats are information black holes, whereas public stuff like SO can show up in a search and help more than just the original asker (it can also be easily referenced/shared). Also the fact that these chatbots are wrong / make stuff up a lot.

fowlie

Are you saying that black holes don't share any information? xD

andyferris

I had never thought about that - I guess the privacy cuts both ways.

block_dagger

LLMs are trained on public data though.

nerdponx

> Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore

And it kind of was a problem. There was an underclass of people who simply could not get anything done if it wasn't already done for them in a Stackoverflow answer or a blog post or (more recently and bafflingly) a Youtube video. These people never learned to read a manual and spent their days flailing around wondering why programming seemed so hard for them.

Now with AI there will be more of these people, and they will make it farther in their careers before their lack of ability will be noticeable by hiring managers.

eMPee584

.. for the brief period of time before machines take care of the whole production cycle.

Which is a great opportunity btw to drive forward a transition to a post-monetary, non-commercial post-scarcity open-source open-access commons economy.

oneeyedpigeon

I think it's unwise to naively copy code from either stack overflow or an AI, but if I had to choose, I'd pick the one that had been peer-reviewed by other humans every time.

eMPee584

Ooh, the quality of "review" on SO also varies a whole lot..

dijksterhuis

> We are no longer bound to stale/poorly written docs.

from what i gather, the training data often contains those same poorly written docs and often a lot of poorly written public code examples, so… YMMV with this statement as it is often fruit from the same tree.

(to me LLMs are just a different interface to the same data, with some additional issues thrown in, which is why i don’t care for them).

> I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.

it’s a question of trust for me. with great power (new tools) comes great responsibility — and juniors ain’t always learned enough about being responsible yet.

i had a guy i was doing arma game dev with recently. he would use chatgpt and i would always warn him about not blindly trusting the output. he knew it, but i would remind him anyway. several of his early PRs had obvious issues that were just chatgpt not understanding the code at all. i’d point them out at review, he’d fix them and beat himself up for it (and i’d explain to him it’s fine don’t beat yourself up, remember next time blah blah).

he was honest about it. he and i were both aware he was very new to coding. he wanted to learn. he wanted to be a “coder”. he learned to mostly use chatgpt as an expensive interface for the arma3 docs site. that kind of person using the tools i have no problem with. he was honest and upfront about it, but also wanted to learn the craft.

conversely, i had a guy in a coffee shop recently claim to want to learn how to be a dev. but after an hour of talking with him it became increasingly clear he wanted me to write everything for him.

that kind of short sighted/short term gain dishonesty seems to be the new-age copy/pasting answers from SO. i do not trust coffee shop guy. i would not trust any PR from him until he demonstrates that he can be trusted (if we were working together, which we won’t be).

so, i get your point about doom and gloom naysaying. but there's a reason for the naysaying from my perspective. and it comes down to whether i can trust individuals to be honest about their work and how they got there and being willing to learn, or whether they just want to skip to the end.

essentially, it’s the same copy/pasting directly from SO problem that came before (and we’re all guilty of).

ChrisMarshallNY

Oh, heck. We didn’t need AI to do that. That’s been happening forever.

It’s not just bad optics; it’s destructive. It discourages folks from learning.

AI is just another tool. There’s folks that sneer at you if you use an IDE, a GUI, a WYSIWYG editor, or a symbolic debugger.

They aren’t always boomers, either. As a high school dropout, with a GED, I’ve been looking up noses, my entire life. Often, from folks much younger than me.

It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it, is a big part of growing up, so a lot of older folks are actually a lot less likely to pull that crap than you might think.

dagw

Apparently from what I've read, universities are already starting to see this. More and more students are incapable of acquiring knowledge from books. Once they reach a point where the information they need cannot be found in ChatGPT or YouTube videos they're stuck.

thinkingemote

I wonder if Google Gemini is trained on all the millions of books that were scanned and that Google was not able to use for their original purpose?

https://en.m.wikipedia.org/wiki/Google_Books

As other AI companies argue that copyright doesn't apply when training, it should give Google a huge advantage to be able to use all the world's books they scanned.

harvey9

Interesting if that's literally true since you have to _search_ YouTube, unless maybe people ask chatgpt what search terms to use.

nerdponx

It's about putting together individual pieces of information to come up with an idea about something. You could get 5 books from the library and spend an afternoon skimming them, putting sticky notes on things that look interesting/relevant, or you could hope that some guy on Youtube has already done that for you and has a 7 minute video summarizing whatever it is you were supposed to be looking up.

roygbiv2

If AI can't find it, how do we?

pif

The actual intelligence in artificial intelligence is zero. Even idiots can do better than AI, if they want. Lazy idiots, on the other hand...

zekica

By using your brain and a web search engine / a searchable book index in your library / taking the time to ask a question somewhere public?

Inviz

I have been unable to acquire knowledge from books for the last 35 years. I had to get by with self-directed learning. The result is patchy understanding, but a lot of faith in myself.

buckyfuller

How does this make sense when you can put any book inside of an LLM?

ljm

Learning isn't just about rote memorisation of information but the continuous process of building your inquisitive skills.

An LLM isn't a mind reader so if you never learn how to seek the answers you're looking for, never curious enough to dig deeper, how would you ever break through the first wall you hit?

In that way the LLM is no different than searching Google back when it was good, or even going to a library.

dagw

Just because you can put information into an LLM, doesn't mean you can get it out again.


Beretta_Vexee

I recently interviewed a candidate with a degree in computer science. He was unable to explain to me how he would have implemented the Fibonacci sequence without chatGPT.

We never got to the question of recursive or iterative methods.

The most worrying thing is that LLMs were not very useful three years ago when he started university. So the situation is not going to improve.

IanCal

This is just the world of interviewing though, it was the same a decade ago.

The reason we ask people to do fizzbuzz is often just to weed out the shocking number of people who cannot code at all.

brulard

Yep, I still cannot understand how programmers unable to do fizzbuzz still have SW engineering careers. I have never worked with one like that, but I have seen so many of them in interviews.

jstummbillig

But it is. Some knowledge and questions are simply increasingly outdated. If the (correct) answer on how to implement Fibonacci is one LLM query away, then why bother knowing? Why should a modern day web developer be able to write assembly code, when that is simply abstracted away?

I think it will be a hot minute before nothing has to be known and all human knowledge is irrelevant, but, especially in CS, there is going to be a tremendous amount of rethinking to do about what is actually important to know.

Beretta_Vexee

Not everyone does web dev. There are many jobs where it is necessary to have a vague idea of memory architecture.

LLMs are very poor in areas such as real-time systems and industrial automation, as there is very little data available for training.

Even if the LLM were good, we will always need someone to carry out tests, formal validation, etc.

Nobody wants to get on a plane or in a car whose critical firmware has been written by an LLM and proofread by someone incapable of writing code (don't give Boeing any ideas).

The question about Fibonacci is just a way of gently bringing up other topics.

Shorel

The answer to how to implement Fibonacci is so simple that it is used in coding interviews.

Any difficult problem will take the focus out of coding and into the problem itself.

See also fizz-buzz, which is even simpler, and people still fail those interview questions.

ghssds

Not outdated. If you know the answer to how to implement Fibonacci by heart, you are doing it wrong. Inferring the answer from being told (or remembering) what a Fibonacci number is should be faster than asking an LLM or memorizing it.

asddubs

the point is that it's an easy problem that basically demonstrates you know how to write a loop (or recursion), not the sequence itself

dherikb

Well, I found these candidates long before ChatGPT

protocolture

I interviewed a guy with a CCIE who couldn't explain what a port forward was. 3 years ago.

iddan

I'm 10 years into software and never bothered to remember how to implement it the efficient way, and I know many programmers who don't know even the inefficient way but kick ass.

I once got that question in an interview for a small startup and told the interviewer: with all due respect, what does that have to do with the job I'm going to do? And we moved on to the next question (still passed).

sarchertech

You don’t need to memorize how to compute a Fibonacci number. If you are a barely competent programmer, you should be capable of figuring it out once someone tells you the definition.

If someone tells you not to do it recursively, you should be able to figure that out too.

Interview nerves might get in your way, but it’s not a trick question you need to memorize.

JaumeGreen

But I'm sure there would be some people who, given the following question, would not be able to produce any code by themselves:

"Let's implement a function to return us the Nth fibonnaci number.To get a fib (fibonacci) number you add the two previous numbers, so fib(N)=fib(N-1)+fib(N+2). The starting points are fib(0)=1 and fib(1)=1. Let's assume the N is never too big (no bigger than 20)."

And that's a problem if they can't solve it.
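
For reference, the whole thing is only a handful of lines. A minimal iterative sketch (Rust here purely for illustration, using the fib(0)=fib(1)=1 convention from the question above):

  // Nth Fibonacci number with fib(0) = 1 and fib(1) = 1, as stated above.
  // Iterative, so no recursion-depth or exponential-blowup concerns for small N.
  fn fib(n: u32) -> u64 {
      let (mut a, mut b) = (1u64, 1u64);
      for _ in 0..n {
          let next = a + b;
          a = b;
          b = next;
      }
      a
  }

  fn main() {
      assert_eq!(fib(5), 8); // 1, 1, 2, 3, 5, 8
      println!("fib(20) = {}", fib(20)); // 10946
  }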

OTOH about 15 years ago I heard from a friend who interviewed candidates that some people couldn't even count all the instances of 'a' in a string. So in fact not much has changed, except that it's harder to spot these kinds of people.

dijksterhuis

i'm around 10 years as well and i can't even remember how the fibonacci sequence progresses off hand. I'd have to wikipedia it to even get started.

genewitch

If some interviewer asked me what recursion was or how to implement it, I'd answer, and then ask them if they can think of a good use case for Duff's device.

dagw

Duff's device hasn't been relevant for 25+ years, while recursion is still often the right answer.

brulard

Why? It looks like a reasonable interview question.

RGamma

Becoming the single point of failure for every societal function is the goal of VC. It's working brilliantly thus far.

escapecharacter

They’ll just mumble something about the context window needing to be larger.

arrowsmith

Should we call them the “AI generation generation”?

adammacleod

I vote for GenAI

mcny

Is GenAI after gen alpha? I think it depends on whether agents become a thing. Assuming agents become a thing before the end of this decade, we could see a divide between people born before we had ai agents and after.

globular-toast

I already consider reading to be a superpower. So few people seem capable these days. And we're talking about people born decades before ChatGPT.

datadeft

The biggest problem I have with using AI for software engineering is that it is absolutely amazing for generating the skeleton of your code - boilerplate, really - and it really sucks for anything creative. I have tried the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.

For example: what would be the best strategy to download 1000s of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written 1 year ago), which I assume made its way into the model.
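
For illustration, the shape of that kind of answer - bounded-concurrency fetches - looks roughly like the sketch below. This assumes the tokio, reqwest and futures crates; the URLs and the limit of 50 are made up, and it is not the actual forum code.

  // Rough sketch: download many URLs concurrently, but no more than 50 at a time.
  // Assumes tokio (with macros + runtime), reqwest and futures as dependencies.
  use futures::stream::{self, StreamExt};

  #[tokio::main]
  async fn main() -> Result<(), reqwest::Error> {
      let urls: Vec<String> = (0..1000)
          .map(|i| format!("https://example.com/item/{}", i))
          .collect();
      let client = reqwest::Client::new();

      // Turn the URL list into a stream of in-flight requests, capped at 50 in parallel.
      let results: Vec<Result<String, reqwest::Error>> = stream::iter(urls)
          .map(|url| {
              let client = client.clone();
              async move { client.get(url).send().await?.text().await }
          })
          .buffer_unordered(50)
          .collect()
          .await;

      println!("finished {} downloads", results.len());
      Ok(())
  }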

There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.

Maybe I am prompting incorrectly and somehow I could get the right answers from these models but at this stage I use these as a boilerplate generator and the actual creative problem solving remains on the human side.

gazereth

Personally I've found that you need to define the strategy yourself, or in a separate prompt, and then use a chain-of-thought approach to get to a good solution. Using the example you gave:

  Hey Chat,
  Write me some basic rust code to download a url. I'd like to pass the url as a string argument to the file
Then test it and expand:

  Hey Chat,
  I'd like to pass a list of urls to this script and fetch them one by one. Can you update the code to accept a list of urls from a file?

Test and expand, and offer some words of encouragement:

  Great work chat, you're really in the zone today!

  The downloads are taking a bit too long, can you change the code so the downloads are asynchronous. Use the native/library/some-other-pattern for the async parts.

Test and expand...

MortyWaves

I agree completely with all you said; however, Claude solved a problem I had recently in a pretty surprising way.

So I’m not very experienced with Docker and can just about make a Docker Compose file.

I wanted to set up cron as a container in order to run something on a volume shared with another container.

I googled “docker compose cron” and must have found a dozen cron images. I set one up and it worked great on X86 and then failed on ARM because the image didn’t have an ARM build. This is a recurring theme with Docker and ARM but not relevant here I guess.

Anyway, after going through those dozen or so images, all of which don't work on ARM, I gave up, sent the Compose file to Claude, and asked it to suggest something.

It suggested simply using the alpine base image and adding an entry to its crontab, and it works perfectly fine.
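
Roughly the shape of what it suggested - a sketch from memory rather than the exact file, with the volume and script names made up:

  # docker-compose.yml sketch: a cron sidecar sharing a volume with another service.
  services:
    app:
      image: my-app:latest        # existing container writing to the shared volume
      volumes:
        - shared:/data
    cron:
      image: alpine:3.19          # busybox crond already ships in the base image
      volumes:
        - shared:/data
      # Register a job in root's crontab, then keep crond running in the foreground.
      command: ["sh", "-c", "echo '*/5 * * * * /bin/sh /data/cleanup.sh' > /etc/crontabs/root && crond -f -l 2"]
  volumes:
    shared:

And since the alpine image is multi-arch, it also sidesteps the ARM problem.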

This may well be a skill issue, but it had never occurred to me that cron is still available like that.

Three pages of Google results and not a single result anywhere suggesting I should just do it that way.

Of course this is also partly because Google search is mostly shit these days.

noisy_boy

Maybe you would have figured it out if you thought a bit more deeply about what you wanted to achieve.

You want to schedule things. What is the basic tool we use to schedule on Linux? Cron. Do you need to install it separately? No, it usually comes with most Linux images. What is your container, functionally speaking? A working Linux system. So you can run scripts on it. Lot of these scripts run binaries that come with Linux. Is there a cron binary available? Try using that.

Of course, hindsight is 20/20 but breaking objectives down to their basic core can be helpful.

heap_perms

What I also notice is that they very easily get stuck on a specific approach to solving a problem. One prompt that has been amazing for this is:

> Act as if you're an outside observer to this chat so far.

This really helps in a lot of these cases.

TeMPOraL

Like, dropping this in the middle of the conversation to force the model out of a "local minimum"? Or restarting the chat with that prompt? I'm curious how you use it to make it more effective.

MortyWaves

That’s a cool tip; I usually just give up and start a new chat.

noisy_boy

For Claude, set up a custom prompt which should have whatever you want + this:

"IMPORTANT: Do not overkill. Do not get distracted. Stay focused on the objective."

khaledh

Sometime circa the late 1950s, a coder is given a problem and a compiler to solve it. The coder writes his solution in a high-level language and asks the compiler to generate the assembly code from it. The compiler: I cannot generate the assembly code for you, that would be completing your work ... /sarcasm

On a more serious note: LLMs now are an early technology, much like the early compilers that many programmers didn't trust to generate optimized assembly code on par with hand-crafted assembly, whose output they had to check and tweak if needed. It took a while until the art of compiler optimization was perfected to the point that we don't question what the compiler is doing, even if it generates sub-optimal machine code. The productivity gained from using a HLL vs. assembly was worth it. I can see LLMs progressing towards the same tradeoff in the near future. It will take time, but it will become the norm once enough trust is established in what they produce.

stuaxo

Funny, but expected when some chunk of the training data is forum posts like:

"Give me the code for"

"Do it yourself, this is homework for you to learn".

Prompt engineering is learning enough about a project to sound like an expert; then you will be closer to useful answers.

Alternatively - maybe if you're trying to get it to solve a homework-like question, this type of answer is more likely.

MrMcCall

I shudder to think that all these LLMs were trained on internet comments.

Of course, only the sub-intelligent would train so-called "intelligence" on the mostly less-than-intelligent, gut-feeling-without-logic folks' comments.

It's like that ancient cosmology with turtles all the way down, except this is dumbasses, very confident dumbasses who have lots of cash.

srvaroa

Well, this AI operates now at staff+ level

nextts

And is paid like one with today's token costs!

tymonPartyLate

I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.

Claude 3.7:

> I understand the desire to simplify, but using a text array for .... might create more problems than it solves. Here's why I recommend keeping the relational approach: (list of okay reasons)

> However, I strongly agree with adding ..... to the model. Let's implement that change.

I was kind of shocked by the display of opinions. HAL vibes.

pknerd

Claude is mostly opinionated and gives you feedback where it thinks it is necessary.

brulard

My experience is that it very often reacts to a simple question by apologizing and completely flipping its answer 180 degrees. I just ask for an explanation, like "is this a good way to do x, y, z?", and it goes "I apologize, you are right to point out the flaw in my logic. Let's do it the opposite way."

jumperabg

This is quite a lot of code to handle in 1 file. The recommendation is actually good. In the past month (feels like 1 year of planning) I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code - Claude was removing some of the code, I didn't have coverage on some of them, and the end result was missing functionality.

Good thing that we can use .cursorrules, so this is something that will partially improve my experience - until a random company releases the best AI coding model that runs on a Raspberry Pi with 4GB RAM (yes, this is a spoiler from the future).

maeln

> I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code

Is it a mistake though? Some of the best codebases I worked on were a few files with up to a few thousand LoC. Some of the worst were the opposite: thousands of files with less than a few hundred LoC in each of them. With the tools that I use, I often find navigating and reading through a big file much simpler than having to have 20 files open to get the full picture of what I am working on.

At the end of the day, it is a personal choice. But if we have to choose something we find inconvenient just to be able to fit in the context window of an LLM, then I think we are doing things backward.

johnisgood

Claude seems to be somewhat OK with 1500 LOC in one file. It may miss something, mess something up, sure, that is why you should chunk it up.

davidwritesbugs

I'm using Cursor & Claude/R1 on a file with 5000 loc, seems to cope OK

Havoc

I guess that's straight out of the training data.

Quite common on reddit to get responses that basically go "Is this a homework assignment? Do your own work".

doix

I wonder if this was real or if they set a custom prompt to try and force such a response.

If it is real, then I guess it's because LLMs have been trained on a bunch of places where students asked other people to do their homework.

thinkingemote

It's real, but (as a reply on the forum suggests) Cursor has a few modes for chat, and it looks like he wasn't in the "agent" chat pane but in the interactive, inline chat thingy. The suggestion is that this mode is limited in the size of what it can look at, probably a few lines around the caret.

Thus, speculating, a limit on context or a prompt that says something like "... you will only look at a small portion of the code that the user is concerned about and not look at the whole file and address your response to this..."

Other replies in the forum are basically "go RTFM and do the tutorial"!

gloxkiqcza

Sounds like something you would find on Stack Overflow