Cursor told me I should learn coding instead of asking it to generate it

jeandesuis

Sheesh, I didn't expect my post to go viral. Little explanation:

I downloaded and ran Cursor for the first time when this "error" happened. It turned out I was supposed to use the agent instead of the inline Cmd+K command, because inline has some limitations while the agent doesn't.

Nevertheless, I was surprised that AI could actually say something like that, so just in case I screenshotted it. Some might think it's fake, but it's real, and it makes me wonder whether AI will start giving attitude to its users in the future. Oh, welp. I certainly didn't expect it to blow up like this; it was all new to me, so I thought it was maybe an easter egg or just a silly error. Turned out it hadn't been seen before, so there we are!

Cheers

IshKebab

It's probably learnt it from all the "homework" questions on StackOverflow.

wruza

As a pretty advanced sd user, I can draw some parallels (but won’t claim there’s a real connection).

Sometimes you get a composition from the specific prompt+seed+etc. And it has an alien element that is surprisingly stable and consistent throughout “settings wiggling”. I guess it just happens that training may smear some ideas across some cross-sections of the “latent space”, which may not be that explicit in the datasets. It’s a hyper-dimensional space after all and not everything that it contains can be perceived verbatim in the training set (hence the generation capabilities, afaiu). A similar thing can be seen in sd lora training, where you get some “in” idea to be transformed into something different, often barely interpretable, but still stable in generations. While you clearly see the input data and know that there’s no X there, you sort of understand what the precursor is after a few captioning/training sessions and the general experience. (How much you can sense in the “AI” routine but cannot clearly express is another big topic. I sort of like this peeking into the “latent unknown” which skips the language and sort of explodes in a mute vague understanding of things you’ll never fully articulate. As if you hit the limits of language and that is constraining. I wonder what would happen if we broke through this natural barrier somehow and became more LLMy rather than the opposite). /sot

kristianc

When it starts saying "Why would you want to do that?" we should really be worried.

nchmy

For now, that's what I ask AI every 5th message...

elchangri

You can just seed a prompt to make it behave like this

daryl_martis

"learnt"

lupusreal

This AI stuff really mindfucks dualists. No, the word "learn" does not imply possession of a soul, a machine can in fact learn.

Shocka1

"Snuck isn't a word Conan and you went to Harvard."

jjaksic

When it starts saying, "I'm sorry Dave, I'm afraid I can't do that," that's when we're really in trouble.

looofooo0

Have you tried being more rude to get it to do stuff, or manipulating it by telling it the task is easy?

ahnick

Haha, gaslighting the AI to get it to comply. That would be hilarious if it actually works.

TeMPOraL

Wasn't Cursor itself trying to gaslight the AI by claiming it needs money for its mother's cancer treatment?

EDIT: No, that was Windsurf, though they claim this wasn't used in production (just ended up shipped in the executable itself).

Prompt in question:

You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.

https://x.com/skcd42/status/1894375185836306470

(I didn't notice it the first time around, but the prompt also contains an implied death threat.)

hnuser123456

It does. You can also offer to "pay" it and it'll try harder.

tylersmith

I think the inline command palette likely ran into an internal error that made it unable to generate, and then its "come up with a message telling the user we can't do this" generation got StackOverflow'd.

datadrivenangel

I also had a fun cursor bug where the inline generation got stuck in a loop and generated a repeating list of markdown bulletpoints for several hundred lines until it decided to give it a break.

kyleee

I’ve seen similar sort of nonsense suggested by GitHub copilot on occasion

Breza

Same here. I stopped using it partly for this reason.

m463

> in future AI will start giving attitudes to their users

they do it now.

"You are a sarcastic assistant."

seventh12

what code were you writing? what is that `skidMark` lol?

gnuly

Most likely the tyre marks left on the road in car/bike racing games (the response mentioned a racing game).

ahofmann

I vouched for this post, because I don't understand why it was downvoted. Maybe someone can enlighten me?

andai

This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]

With programming, the same basic tension exists as with the more effective smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.

So the problem seems to boil down to, how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.

So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?

In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse than just laziness, it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.

And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.

[0] GPT-4.5

TeMPOraL

> With programming, the same basic tension exists as with the more effective smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.

That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.

That's literally the entirety of human technological advancement, in a nutshell. We'd ideally avoid all effort that's incidental to the goal, but if we can't (we usually can't), we invent tools that reduce this effort - and iterate on them, reducing the effort further, until eventually, hopefully, the human effort goes to 0.

Is that a "basic human (animal?) instinct to take the path of least resistance"? Perhaps. But then, that's progress, not a problem.

DanHulton

_Kinda sorta?_

There's actually two things going on when I'm coding at work:

1) I'm writing the business code my company needs to enable a feature/fix a bug/etc.

2) I'm getting better as a programmer and as someone who understands our system, so that #1 happens faster and is more reliable next time.

Using AI codegen can (arguably, the jury is still out on this if we include total costs, not just the costs of this one PR) help with #1. But it is _appreciably bad_ at #2.

In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2, and, again, arguably #1 as well), it is just a problem in disguise.

TeMPOraL

> In fact, the closest parallel for #2 I can think of is plagiarism in an academic setting. You didn't do the work, which means you didn't actually learn the material, which means this isn't actually progress (assuming we continue to value #2 (...)

Plagiarism is a great analogy. It's great because what we call plagiarism in academic setting, in the real world work we call collaboration, cooperation, standing on the shoulders of giants, not reinventing the wheel, getting shit done, and such.

So the question really is how much we value #2? Or, which aspects of it we value, because I see at least two:

A) "I'm getting better as a programmer"

B) "I'm getting better as a someone who understands our system"

As much as I hate it, the brutal truth is, you and me are not being paid for A). The business doesn't care, and they only get so much value out of it anyway. As for B), it's tricky to say whether and when we should care - most software is throwaway, and the only thing that happens faster than that is programmers working on it changing jobs. Long-term, B) has very little value; short-term, it might benefit both business and the programmer (the latter by virtue of making the job more pleasant).

I think the jury is still out on how LLMs affect A). I feel that it's not making me dumber as a programmer, but I'm from the cohort of people with more than a decade of programming experience before even touching a language model, so I have a different style of working with LLMs than people who started using them with less experience, or people who never learned to code without them.

dspillett

> That's not true. Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.

This has been going on for decades. It is called outsourcing development. We've just previously passed the work to people in countries with lower wages and conditions, now people are increasingly passing the work to a predictive text engine (either directly, or because that is what the people they are outsourcing to are doing).

Personally I'm avoiding the “AI” stuff¹, partly because I don't agree with how most of the models were trained, partly because I don't want to be beholden to an extra 3rd party commercial entity to be able to do my job, but mainly because I tinker with tech because I enjoy it. Same reason I've always avoided management: I want to do things, not tell others to do things. If it gets to the point that I can't work in the industry without getting someone/something else to do the job for me, I'll reconsider management or go stack shelves!

--------

[1] It is irritating that we call it AI now because the term ML became unfashionable. Everyone seems to think we've magically jumped from ML into _real_ AI which we are having to make sure we call something else (AGI usually) to differentiate reasoning from glorified predictive text.

sshine

> Everyone seems to think we've magically jumped from ML into _real_ AI

But we have:

  - LLMs pass the Turing test more convincingly than anything before
  - LLMs are vastly more popular than any AI/ML methods before it
  - "But they don't reason!" -- we're advancing chain of thought
LLMs are AI (behaving artificially human)

LLMs are also ML (using statistical methods)

alonsonic

It's ironic to see people say this type of thing and not think about old software engineering practices that are now obsolete because over time we have created more and more tools to simplify the craft. This is yet another step in that evolution. We are no longer using punch cards or writing assembly code, and we might not write actual code in the future anymore, just instruct AIs to achieve goals. This is progress.

boredhedgehog

> We are no longer using punch cards or writing assembly code

I have done some romhacks, so I have seen what compilers have done to assembly quality and readability. When I hear programmers complain that having to debug AI written code is harder than just writing it yourself, that's probably exactly how assembly coders felt when they saw what compilers produce.

One can regret the loss of elegance and beauty while accepting the economic inevitability.

xign

That future isn't here today. With compilers, I rarely have to dig into assembly code and generally I just work in the domain (programming languages) that I'm comfortable in. Compiler bugs are rare (but they do exist and I have had to dig into assembly to debug them before).

LLMs are nowhere close to that level today. They spit out a bunch of mediocre code that the programmer then needs to maintain and fix bugs in. And the larger / more complicated the code base, the harder it is for the LLM to work with. There's a huge leaky abstraction here going from initial vibe coding to then having to dig into the weeds, and generally fixing bugs written by another human is difficult enough, not to mention code written by some random LLM.

Nathanba

well the only issue I have with that is that coding is already a fairly easy way to encode logic. Sure...writing Rust or C isn't easy but writing some memory managed code is so easy that I wonder whether we are helping ourselves by removing that much thinking from our lives. It's not quite the same optimization as building a machine so that we don't have to carry heavy stones ourselves. Now we are building a machine so we don't have to do heavy thinking ourselves. This isn't even specific to coding, lawyers essentially also encode logic into text form. What if lawyers in the future increasingly just don't bother understanding laws and just let an AI form the arguments?

I think there is a difference here for the future of humanity that has never happened before in our tool making history.

ThrowawayR2

The handful of people writing your compilers, JIT-ers, etc. are still writing assembly code. There are probably more of them today than at any time in the past and they are who enable both us and LLMs to write high level code. That a larger profession sprang up founded on them simplifying coding enough for the average coder to be productive didn't eliminate them.

The value of most of us as coders will drop to zero but their skills will remain valuable for the foreseeable future. LLMs can't parrot out what's not in their training set.

perkele1989

This is not progress, this is regression. Who is going to maintain and further develop the software if not actual programmers? In the end the LLMs stop getting new information to be trained on, and they can't truly innovate (since they're not AGI).

TeMPOraL

Elsewhere in this discussion thread[0], 'ChrisMarshallNY compares this to feelings of insecurity:

> It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it, is a big part of growing up (...)

I believe he's right.

It makes me think back to my teenage years, when I first learned to program because I wanted to make games. Within the amateur gamedev community, we had this habit of sneering at "clickers" - Klik&Play & other kinds of software we'd today call "low-code", that let you make games with very little code (almost entirely game logic, and most of it "clicked out" in GUI), and near-zero effort on the incidental aspects like graphics, audio, asset management, etc. We were all making (or pretending to make) games within scope of those "clickers", but using such tools felt like cheating compared to doing it The Right Way, slinging C++ through blood, sweat and tears.

It took me over a decade to finally realize how stupid that perspective was. Sure, I've learned a lot; a good chunk of my career skills date back to those years. However, whatever technical arguments we levied against "clickers", most of them were bullshit. In reality, this was us trying to feel better, special, doing things The Hard Way, instead of "taking shortcuts" like those lazy people... who, unlike us, actually released some playable games.

I hear echoes of this mindset in a lot of "LLMs will rot your brain" commentary these days.

--

[0] - https://news.ycombinator.com/item?id=43351486

lr4444lr

Things like memory-safe languages and JS DOM-managed frameworks are limited-scope, solved problems for most business computing needs outside of some very marginal edge cases.

AI generated code? That seems a way off from being a generalized solved problem in an iterative SDLC at a modern tech company trying to get leaner, disrupt markets, and survive in a complex world. I for one am very much in support of it for engineers with the unaided experience under their belt to judge the output, but the idea that we're potentially going to train new devs at unfamiliar companies on this stuff? Yikes.

HeatrayEnjoyer

Progress is more than just simplistic effort reduction. The attitude of more efficient technology = always good is why society is quickly descending into a high-tech dystopia.

cursor_it_all

> that’s progress, not a problem.

Agreed. Most of us aren’t washing our clothes with a washboard, yet the washboard not long ago was a timesaver. Technology evolves.

Now, if AI rises up against the government and Cursor becomes outlawed, then maybe your leet coder skills will matter again.

But when a catastrophic solar storm takes out the grid, a washboard may be more useful, to give you something to occupy yourself with while the hard spaghetti of your dwindling food supply slowly absorbs tepid rainwater, and you wish that you’d actually moved to Siberia and learned to live off the land as you once considered while handling a frustrating bug in production that you could’ve solved with Claude.

fransje26

> [..] but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.

No. In learning, there is no substitute for practice. Once you jump to the conclusion, you stop understanding the "why".

Throughout various stages of technological advancement, we've come up with tools to help relieve us of tedious effort, because we understood the "why", the underlying reason for what the tools were helping us solve in the first place. Those that understood the "why" could build upon those tools to further advance civilization. The others were left behind to parrot about something they did not understand.

But -and, ironically in this case- as with most things in life, it is about the journey and not the destination. And in the grand scheme of things, it doesn't really matter if we take a more efficient road, or a less efficient road, to reach a life lesson.

Enjoy your journey!

Brian_K_White

The progress does not come from the absence of effort. That is just a transparent, self-serving "greed is good" class of argument from the lazy. It merely employs enough cherry-picked truth to sound valid.

The progress comes from amplification of effort, which comes from leverage (output comes from input), not magic (output comes from nowhere) or theft (output comes from someone else).

oooyay

> Yes, you can get more effective at something with more effort, but you can get even more effective at it if you find a way to get results without actually doing the work yourself in the first place.

You've arrived at the reason for compilers. You could go produce all that pesky machine code yourself or you could learn to use the compiler to its optimal potential to do the work for you.

LLMs and smart machines alike will be the same concept but potentially capable of a wider variety of tasks. Engineers that know how to wield them and check their work will see their productivity increase as the technology gets better. Engineers that don't know how to check their work or wield them will at worst get less done or produce volumes of garbage work.

andai

To clarify, what I meant with that line was how to use AI in ways that strengthen your knowledge and skills rather than weaken them. This seems to be a function of effort (see the Testing Effect); there doesn't seem to be any way around that.

Whereas what you're responding to is using AI to do the work for you, which weakens your own faculties the more you do it, right?

brulard

> because they're letting AI do all the work

This is unnecessary hyperbole. It's like saying that your reportees do all the work for you. You need to put in the effort to understand the strengths and weaknesses of AI, put it to good work, and make sure to double-check its results. Low-skill individuals are not going to get great results for moderately complex tasks with AI. It's absurd to think it will do "all the work". I believe we are at the point of SW engineering skills shifting from understanding all the details of programming languages and tooling to higher-level thinking and design.

Although I see that without proper processes (code reviews, guidelines, etc.) the use of AI can get out of hand to the point of a very bloated and unmaintainable code base. Well, as with any powerful technology, it has to be handled with care.

moffkalast

Those damn kids, compilers and linters doing all the work for them. Back in my day we punched bits into a card by hand. /s

It's just people ranting about another level of abstraction, like always.

Tainnor

Those other levels of abstraction that you mentioned are deterministic and predictable. I think that's a massive difference and it justifies why one should be more skeptical towards AI generated code (skeptical doesn't mean "outright dismissing" tbf).

myaccountonhn

Another level of abstraction that has its real world cost made invisible, from job displacement to environmental damage.

4ndrewl

Your ability to do programming may decline, but that's not the aim here, is it? The aim is to solve a problem for a user, not to write code. If we solve that using different tools, maybe our cognitive abilities will just focus on something else?

brulard

Exactly. If there is a tool that can do a lot of the low level work for us, we are free to do more of the higher level tasks and have higher output overall. Programming should be just a means to an end, not the end itself.

ThrowawayR2

Or, more likely, most of us will be much lower paid or simply out of a job as our former customers input their desires into the AI themselves and it spits out what they want.

957497573

> Programming should be just a means to an end, not the end itself.

That's just, like, your opinion man.

I know for a fact that I didn't study to be a software engineer just so I could end up wrangling LLMs trained on stolen code into writing crappy code for me.

sycren

I think this boils the problem down solely to the individual. What if companies realised this issue and set aside a period of time during the day devoted solely to learning, knowledge management and innovation? If their developers only use AI to be more productive, then the potential degradation of intellect could stifle innovation and make the company less competitive in the market. It would be interesting if we start seeing a resurgence of Google's mythical 10% rule, with companies more inclined to let any employee create side projects using AI (like Cursor) that could benefit the company.

eqmvii

The problem is motivation. I’ve worked at companies that use policies like this, but you often only see a handful of genuinely motivated folks really make use of it.

42lux

Are you going to take my washing machine next? AI is a gateway to having more time to do whatever you want. It's your decision whether to let your brain rot away or not.

slater

> AI is a gateway to spend more time, to do whatever you want

That's not how any previous advancement has worked. "Work expands to fill the time saved" and all that.

42lux

We are down from about 3,500 working hours per year in 1870 to around 1,400 now.

957497573

We work harder than people at any other point in history, despite all our technological advances. What makes you think AI will be different?

42lux

No, we don't.

>> We are down from about 3,500 working hours per year in 1870 to around 1,400 now.

nonrandomstring

> against the basic human (animal?) instinct to take the path of least resistance.

Fitness. Nobody goes for a run because they need to waste a half hour. Okay, some people just have energy and time to burn, and some like to code for the sake of it (I used to). We need to do things we don't like from time to time in order to stay fresh. That's why we have drills and exercises.

GoblinSlayer

The purpose of the least-resistance instinct is to conserve the organism's resources due to scarcity of food. Consequently, in the absence of food scarcity this instinct is suboptimal.

TeMPOraL

Even in abundance, least-resistance instinct is strictly beneficial when applied to things you need to do, as opposed to things you want to do.

HPsquared

Time is always scarce.

GoblinSlayer

For short term wins you pay with long term losses.

consteval

We also never gain any time from productivity. When sewing machines were invented and the time to make a garment went down 100x, you didn’t work 15 minutes a day.

Instead, those people are actually now much poorer and work much more. What was a respectable job is grunt work, and they’re sweatshop warm bodies.

The gains of their productivity were, predictably, siphoned upwards.

immibis

You have to learn what's underneath the shortcuts, and then use the shortcuts because they are genuinely more productive.

I've heard it recommended that you should always understand one layer deeper than the layer you're working at.

datadeft

The biggest problem I have with using AI for software engineering is that it is absolutely amazing for generating the skeleton of your code (boilerplate, really) and it sucks for anything creative. I have tried the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.

For example: what would be the best strategy to download thousands of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.
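The usual pattern for this kind of task is bounded concurrency over a stream of requests. A rough sketch (assuming the tokio, reqwest and futures crates, with placeholder URLs; not necessarily the forum answer in question):

  // Download many URLs concurrently, but cap how many are in flight at once.
  use futures::stream::{self, StreamExt};

  #[tokio::main]
  async fn main() {
      // Placeholder URLs just for illustration.
      let urls: Vec<String> = (0..1000)
          .map(|i| format!("https://example.com/item/{i}"))
          .collect();

      let client = reqwest::Client::new();

      // buffer_unordered(50) keeps at most 50 downloads running at a time
      // instead of firing all 1000 requests at once.
      let results: Vec<_> = stream::iter(urls)
          .map(|url| {
              let client = client.clone();
              async move {
                  let body = client.get(url).send().await?.bytes().await?;
                  Ok::<_, reqwest::Error>(body.len())
              }
          })
          .buffer_unordered(50)
          .collect()
          .await;

      let ok = results.iter().filter(|r| r.is_ok()).count();
      println!("downloaded {ok} of {} urls", results.len());
  }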

There is also the verbosity problem. Claude without the concise flag on generates roughly 10x the required amount of code to solve a problem.

Maybe I am prompting incorrectly and somehow I could get the right answers from these models but at this stage I use these as a boilerplate generator and the actual creative problem solving remains on the human side.

gazereth

Personally I've found that you need to define the strategy yourself, or in a separate prompt, and then use a chain-of-thought approach to get to a good solution. Using the example you gave:

  Hey Chat,
  Write me some basic rust code to download a url. I'd like to pass the url as a string argument to the file
Then test it and expand:

  Hey Chat,
  I'd like to pass a list of urls to this script and fetch them one by one. Can you update the code to accept a list of urls from a file?

Test and expand, and offer some words of encouragement:

  Great work chat, you're really in the zone today!

  The downloads are taking a bit too long, can you change the code so the downloads are asynchronous. Use the native/library/some-other-pattern for the async parts.

Test and expand...

hypeatei

Whew, that's a lot to type out, and you have to provide words of encouragement? Wouldn't it make more sense to do a simple search engine query for an HTTP library, then write some code yourself and provide that for context when doing more complicated things like async?

I really fail to see the usefulness in typing out long winded prompts then waiting for information to stream in. And repeat...

owenpalmer

A few options.

1. Use TTS and have an LLM clean it up.

2. Use a collection of prompt templates.

ahofmann

I'm going the exact opposite way. I provide all important details in the prompt and when I see that the LLM understood something wrong, I start over and add the needed information to the prompt. So the LLM either gets it on the first prompt, or I write the code myself. When I get the "Yes, you are right ..." or "now I see..." crap, I throw everything away, because I know that the LLM will only find shit "solutions".

owenpalmer

This is actually a great approach. Essentially you're using time travel to prevent misunderstandings, which prevents the context from getting clogged up with garbage.

sebmellen

This is the best approach and avoids long context windows that get the LLM confused

hakaneskici

I have heard a few times that "being nice" to LLMs sometimes improves their output quality. I find this hard to believe, but happy to hear your experience.

Examples include things like referring to the LLM nicely ("my dear"), saying "please" and asking nicely, or thanking it.

Do these actually work?

thatguy0900

Well, consider its training data. I could easily see questions on sites like Stack Overflow having better quality answers when the original question is asked nicely. I'm not sure if it's a real effect or not, but I could see how it could be. A rudely asked question will have a lot of flame war responses.

owenpalmer

I'm not sure encouragement itself is the performance enhancer; it's more that you're communicating that the model has the right "vibe" of what your end goal is.

borgdefenser

I used to do the "hey chat" all the time out of habit, back when I thought the language model was something more like AI in a movie than what it is. I am sure it makes no difference beyond the user acting differently and possibly asking better questions if they think they are talking to a person. Now, for me, it looks completely ridiculous.

tmpz22

I find it really bad for bootstrapping projects such as picking dependencies from rapidly evolving ecosystems or understanding the more esoteric constraints like sqlite's concurrency model.

I'd argue you need to bootstrap and configure your project yourself, then give the LLM only narrow access and narrow problems to write code for - individual functions where your prompt includes the signature, individual tests, etc. Anything else and you really need to invest time in the code review lest it re-configure some of your code in a drastic way.

LLMs are useful but they do not replace procedure.

MortyWaves

I agree completely with all you said however Claude solved a problem I had recently in a pretty surprising way.

So I’m not very experienced with Docker and can just about make a Docker Compose file.

I wanted to setup cron as a container in order to run something on a volume shared with another container.

I googled “docker compose cron” and must have found a dozen cron images. I set one up and it worked great on X86 and then failed on ARM because the image didn’t have an ARM build. This is a recurring theme with Docker and ARM but not relevant here I guess.

Anyway, after going through those dozen or so images all of which don’t work on ARM I gave up and sent the Compose file to Claude and asked it to suggest something.

It suggested simply using the Alpine base image and adding an entry to its crontab, and it works perfectly fine.

This may well be a skill issue, but it had never occurred to me that cron is still available like that.

Three pages of Google results and not a single result anywhere suggesting I should just do it that way.

Of course this is also partly because Google search is mostly shit these days.

noisy_boy

Maybe you would have figured it out if you had thought a bit more deeply about what you wanted to achieve.

You want to schedule things. What is the basic tool we use to schedule on Linux? Cron. Do you need to install it separately? No, it usually comes with most Linux images. What is your container, functionally speaking? A working Linux system. So you can run scripts on it. Lots of these scripts run binaries that come with Linux. Is there a cron binary available? Try using that.

Of course, hindsight is 20/20 but breaking objectives down to their basic core can be helpful.

sgarland

With respect, the core issue here is you lacked a basic understanding of Linux, and this is precisely the problem that many people — including myself – have with LLMs. They are powerful and useful tools, but if you don’t understand the fundamentals of what you’re trying to accomplish, you’re not going to have any idea if you’re going about that task in the correct manner, let alone an optimal one.

mukunda_johnson

Honestly we are headed towards a disturbing height of inefficiency in software. Look at software today, 1000x less efficient than what we had in the 90s. Do businesses care? No, they focus on value. The average user is too stupid to care, even though all their RAM is being sucked up and their computer feels like shit.

The only thing that's keeping us from that hell is the "correct" part. The code is not going to be properly tested or consistent, making it impractical for anything substantial right now.

noisy_boy

For Claude, set up a custom prompt which should have whatever you want + this:

"IMPORTANT: Do not overkill. Do not get distracted. Stay focused on the objective."

lfsh

As I understand it, 'reasoning' is a very misleading term. As far as I can tell, AI reasoning is a step to evaluate the chosen probabilities. So maybe you will get fewer hallucinations, but it still doesn't make AI smart.

Sohcahtoa82

Yeah, "reasoning" just tells the AI to take an extra planning step.

In my experience, before "reasoning" became an option, if you ask it a question that takes a decent amount of thinking to solve, but also tell the model "Just give me the answer", you're FAR more likely to get an incorrect answer.

So "reasoning" just tells the model to first come up with a plan to solve a problem before actually solving it. It generates its own context for coming up with a more complete solution.

"Planning" would be a more accurate term for what LLMs are doing.

heap_perms

What I also notice is that they very easily get stuck on a specific approach to solving a problem. One prompt that has been amazing for this is:

> Act as if you're an outside observer to this chat so far.

This really helps in a lot of these cases.

TeMPOraL

Like, dropping this in the middle of the conversation to force the model out of a "local minimum"? Or restarting the chat with that prompt? I'm curious how you use it to make it more effective.

heap_perms

Yeah, exactly, forcing it out of a "local minimum" is a neat way to describe it. In the middle of the conversation I drop this sometimes. Works wonders. You just have to tell it it's stuck in a loop and it will suddenly pretend (?) to be self-aware.

MortyWaves

That’s a cool tip; I usually just give up and start a new chat.

benhurmarcel

I find them very good for debugging also

larodi

Interestingly, many here fail to note that development of code is a lot about debugging, not only about writing. It's also about being able to dig/search/grok the code, which is like... reading it.

It is the debugging part, to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and the integration. THIS is what teaches you; writing alone teaches you nothing... you can copy programs by hand and understand zero of what they do unless you inspect intermediate results.

To hand-craft a house is super romantic and nice, etc. It's a thing people did for ages, usually not alone but with family and friends. But people today live in houses/apartments whose foundations were produced by automated lines (robots) - the steel, the concrete mixture, etc. And people still live in houses built this way, designed with computers that automated the drawing. I fail to understand why this is bad.

xign

Because LLM-generated mediocre code tends to be buggier, with more edge cases, and harder to debug than code a good programmer wrote.

tymonPartyLate

I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.

Claude 3.7:

> I understand the desire to simplify, but using a text array for .... might create more problems than it solves. Here's why I recommend keeping the relational approach: (list of okay reasons)

> However, I strongly agree with adding ..... to the model. Let's implement that change.

I was kind of shocked by the display of opinions. HAL vibes.

pknerd

Claude is mostly opinionated and gives you feedback where it thinks it is necessary.

brulard

My experience is that it very often reacts to a simple question by apologizing and completely flipping its answer 180 degrees. I just ask for an explanation like "is this a good way to do x,y,z?" and it goes "I apologize, you are right to point out the flaw in my logic. Let's do it the opposite way."

srvaroa

Well, this AI operates now at staff+ level

nextts

And is paid like one with today's token costs!

stuaxo

Funny, but expected when some chunk of the training data is forum posts like:

"Give me the code for"

"Do it yourself, this is homework for you to learn".

Prompt engineering is learning enough about a project to sound like an expert; then you will be closer to useful answers.

Alternatively - maybe if you're trying to get it to solve a homework-like question, this type of answer is more likely.

MrMcCall

I shudder to think that all these LLMs were trained on internet comments.

Of course, only the sub-intelligent would train so-called "intelligence" on the mostly less-than-intelligent, gut-feeling-without-logic folks' comments.

It's like that ancient cosmology with turtles all the way down, except this is dumbasses, very confident dumbasses who have lots of cash.

cm2187

It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves. Not just about coding, but about any other knowledge.

motorest

> It's going to be interesting to see the AI generation arriving in the workplace, ie kids who grew up with ChatGPT, and have never learned to find something in a source document themselves.

I am from the generation whose only options on the table were RTFM and/or read the source code. Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore, but chatbots are.

I welcome chatbots. They greatly simplify research tasks. We are no longer bound to stale/poorly written docs.

I think we have a lot of old timers ramping up on their version of "I walked 10 miles to school uphill both ways". Not a good look. We old timers need to do better.

usrbinbash

> Your blend of comment was also directed at the likes of Google and StackOverflow.

No, it wasn't.

What such comments were directed at, and with good reason, were 'SO-"Coders"', aka people who, when faced with any problem, just googled a vague description of it, copy-pasted the code from the highest-scoring SO answer into their project, and called it a day.

SO is a valuable resource. AI systems are a valuable resource. I use both every day, same as I almost always have one screen dedicated to some documentation page.

The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of one's own.

strken

Yes, they were, for reasons that have turned out to be half-right and half-wrong. At least by some people. Ctrl-c ctrl-v programming was widely derided, but people were also worried about a general inability to RTFM.

I had the good fortune to work with a man who convinced me to go read the spec of some of the programming languages I used. I'm told this was reasonably common in the days of yore, but I've only rarely worked on a team with someone else who does it.

Reading a spec or manual helps me understand the language and aids with navigating the text later when I use it as documentation. That said, if all the other programmers can do their jobs anyway, is it really so awful for them to learn from StackOverflow and Google? Probably not.

I imagine the same is true of LLMs.

jstummbillig

But skill will be needed. It's everything that is still necessary between nothing and (good) software existing. It will just rapidly become something that we are not used to, and the rate of change will be challenging, especially for those with specialized, hard-earned skills and knowledge that become irrelevant.

johnisgood

I agree. Claude is very useful to me because I know what I am doing and I know what I want it to do. Additionally, I keep telling my friend who is studying data science to use LLMs to his advantage. He could learn a lot and be productive.

motorest

> SO is a valueable resource.

Chatbots like Copilot, Cursor, Mistral, etc serve the same purpose that StackOverflow does. They do a far better job at it, too.

> The problem is not using the tools available. The problem is relying 100% on these tools, with no skill or care of ones own.

Nonsense. The same blend of criticism was at one point directed at IDEs and autocompletion. The common thread is ladder-pullers complaining how the new generation doesn't use the ladders they've used.

I repeat: we old timers need to do better.

nerdponx

> Your blend of comment was also directed at the likes of Google and StackOverflow. Apparently SO is not a problem anymore

And it kind of was a problem. There was an underclass of people who simply could not get anything done if it wasn't already done for them in a Stackoverflow answer or a blog post or (more recently and bafflingly) a Youtube video. These people never learned to read a manual and spent their days flailing around wondering why programming seemed so hard for them.

Now with AI there will be more of these people, and they will make it farther in their careers before their lack of ability will be noticeable by hiring managers.

stereolambda

I would even say there's a type of comment along the lines of "we've been adding Roentgens for decades, surely adding some more will be fine so stop complaining".

As a second-order effect, I think there's a decline in expected docs quality (of course it depends on the area). Libraries and such don't expect people to read through them, so the docs are spotty and haphazard, with only some random things mentioned. No wider overviews and explanations, and somewhat rightly so: why try to write it if (nearly) no one will read it? So only tutorials and Q&A sites remain besides API dumps.

eMPee584

.. for the brief period of time before machines take care of the whole production cycle.

Which is a great opportunity btw to drive forward a transition to a post-monetary, non-commercial post-scarcity open-source open-access commons economy.

ookblah

at risk of sounding like a grandpa, this is nothing like SO. AI is just a tool for sure, one that "can" behave like a super-enhanced SO and Google, but for the first time ever it can actually write for you, and not just piddly lines but entire codebases.

i think that represents a huge paradigm shift that we need to contend with. it isn't just "better" research. and i say this as someone who welcomes all of this that has come.

IMO the skill gap just widens exponentially now. you will either have the competent developers who use these tools accelerate their learning and/or output some X factor, and on the other hand you will have literally garbage being created or people who just figure out they can now expend 1/10 the effort and time to do something and just coast, never bother to even understand what they wrote.

just encountered that with some interviews where people can now scaffold something up in record time but can't be bothered to refine it because they don't know how. (ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying to re-prompt and micromanage the LLM to get the right output, when it should only take another minute for someone experienced.)

vaylian

> this is nothing like SO

Strong agree. There have been people who blindly copied answers from Stack Overflow without understanding the code, but most of us took the time to read the explanations that accompanied the answers.

While you can ask the AI to give you additional explanations, these explanations might be hallucinations and no one will tell you. On SO other people can point out that an answer or a comment is wrong.

TiredOfLife

> this is nothing like SO

Agree. AI answers actually work and are less out of date.

nkrisc

> ex. you have someone prompting to create some component and it does it in a minute. if you request a tweak, because they don't understand it they just keep trying to re-prompt and micromanage the LLM to get the right output, when it should only take another minute for someone experienced.

This is it. You will have real developers, as you do today, and developers who are only capable of creating what the latest AI model is capable of creating. They’re just a meat interface for the AI.

opan

The issues I see are that private chats are information black holes, whereas public stuff like SO can show up in a search and help more than just the original asker (it can also be easily referenced / shared). Also the fact that these chatbots are wrong / make stuff up a lot.

fowlie

Are you saying that black holes don't share any information? xD

andyferris

I had never thought about that - I guess the privacy cuts both ways.

block_dagger

LLMs are trained on public data though.

oneeyedpigeon

I think it's unwise to naively copy code from either stack overflow or an AI, but if I had to choose, I'd pick the one that had been peer-reviewed by other humans every time.

eMPee584

Ooh, the quality of "review" on SO also varies a whole lot..

megadata

SO was never a problem. Low effort questions were. "This doesn't work, why?" followed by 300 lines of code.

aprilthird2021

> Apparently SO is not a problem anymore, but chatbots are.

I think the same tendency of some programmers to just script kiddie their way out of problems using SO answers without understanding the issue will be exacerbated by the proliferation of AI which is much more convincing about wrong answers.

It's not binary; you don't have to either hate chatbots or welcome them, with no in-between. We all use them, but we all also worry about the negatives, same as with SO.

javcasas

We old timers read the source code, which is a good proxy for what runs on the computer. From that, we construct a mental model of how it works. It is not "walking uphill 10 miles both ways". It is "understanding what the code actually does vs what it is supposed to do".

So far, AI cannot do that. But it can pretend to, very convincingly.

dagw

Apparently from what I've read, universities are already starting to see this. More and more students are incapable of acquiring knowledge from books. Once they reach a point where the information they need cannot be found in ChatGPT or YouTube videos they're stuck.

thinkingemote

I wonder if Google Gemini is trained on all the millions of books that were scanned and that Google was not able to use for their original purpose?

https://en.m.wikipedia.org/wiki/Google_Books

As other AI companies argue that copyright doesn't apply when training, it should give Google a huge advantage to be able to use all the world's books they scanned.

harvey9

Interesting if that's literally true since you have to _search_ YouTube, unless maybe people ask chatgpt what search terms to use.

nerdponx

It's about putting together individual pieces of information to come up with an idea about something. You could get 5 books from the library and spend an afternoon skimming them, putting sticky notes on things that look interesting/relevant, or you could hope that some guy on Youtube has already done that for you and has a 7 minute video summarizing whatever it is you were supposed to be looking up.

Inviz

I have been unable to acquire knowledge from books since 35 years ago. I had to get by with self-directed learning. The result is patchy understanding, but a lot of faith in myself.

roygbiv2

If AI can't find it, how do we?

zekica

By using your brain and a web search engine / searchable book index in your library / time or even asking a question somewhere public?

pif

The actual intelligence in artificial intelligence is zero. Even idiots can do better than AI, if they want. Lazy idiots, on the other hand...

buckyfuller

How does this make sense when you can put any book inside of an LLM?

ljm

Learning isn't just about rote memorisation of information but the continuous process of building your inquisitive skills.

An LLM isn't a mind reader so if you never learn how to seek the answers you're looking for, never curious enough to dig deeper, how would you ever break through the first wall you hit?

In that way the LLM is no different than searching Google back when it was good, or even going to a library.

dagw

Just because you can put information into an LLM, doesn't mean you can get it out again.

Beretta_Vexee

I recently interviewed a candidate with a degree in computer science. He was unable to explain to me how he would have implemented the Fibonacci sequence without ChatGPT.

We never got to the question of recursive or iterative methods.

The most worrying thing is that LLMs were not very useful three years ago when he started university. So the situation is not going to improve.

IanCal

This is just the world of interviewing though, it was the same a decade ago.

The reason we ask people to do fizzbuzz is often just to weed out the shocking number of people who cannot code at all.

brulard

Yep, I still cannot understand how programmers unable to do fizzbuzz still have software engineering careers. I have never worked with one like that, but I have seen so many of them in interviews.

jstummbillig

But it is. Some knowledge and questions are simply increasingly outdated. If the (correct) answer on how to implement Fibonacci is one LLM query away, then why bother knowing? Why should a modern day web developer be able to write assembly code, when that is simply abstracted away?

I think it will be a hot minute before nothing has to be known and all human knowledge is irrelevant, but, especially in CS, there is going to be a tremendous amount of rethinking to do about what is actually important to know.

Beretta_Vexee

Not everyone does web dev. There are many jobs where it is necessary to have a vague idea of memory architecture.

LLMs are very poor in areas such as real-time and industrial automation, as there is very little data available for training.

Even if the LLM were good, we will always need someone to carry out tests, formal validation, etc.

Nobody wants to get on a plane or in a car whose critical firmware has been written by an LLM and proofread by someone incapable of writing code (don't give Boeing ideas).

The question about Fibonacci is just a way of gently bringing up other topics.

Shorel

The answer on how to implement Fibonacci is so simple that it is used for coding interviews.

Any difficult problem will take the focus out of coding and into the problem itself.

See also fizz-buzz, which is even simpler, and people still fail those interview questions.

ghssds

Not outdated. If you know the answer to how to implement Fibonacci, you are doing it wrong. Inferring the answer from being told (or remembering) what a Fibonacci number is should be faster than asking an LLM or remembering it.

immibis

Why should my data science team know what 1+1 is, when they can use a calculator? It's unfair to disqualify a data scientist just for not knowing what 1+1 is, right?

asddubs

the point is that it's an easy problem that basically demonstrates you know how to write a loop (or recursion), not the sequence itself

iddan

I’m 10 years in software and never bothered to remember how to implement it the efficient way, and I know many programmers who don’t even know the inefficient way but kick ass.

I once got that question in an interview for a small startup and told the interviewer: with all due respect, what does that have to do with the job I’m going to do? And we moved on to the next question (still passed).

sarchertech

You don’t need to memorize how to compute a Fibonacci number. If you are a barely competent programmer, you should be capable of figuring it out once someone tells you the definition.

If someone tells you not to do it recursively, you should be able to figure that out too.

Interview nerves might get in your way, but it’s not a trick question you need to memorize.

JaumeGreen

But I'm sure there would be some people that given the following question would not be able to produce any code by themselves:

"Let's implement a function to return us the Nth fibonnaci number.To get a fib (fibonacci) number you add the two previous numbers, so fib(N)=fib(N-1)+fib(N+2). The starting points are fib(0)=1 and fib(1)=1. Let's assume the N is never too big (no bigger than 20)."

And that's a problem if they can't solve it.
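For reference, it only takes a loop and two variables. A minimal iterative sketch (here in Rust, with a hypothetical signature, using the fib(0)=fib(1)=1 convention from the question above):

  // Iterative Fibonacci: keep the last two values and step forward n times.
  fn fib(n: u32) -> u64 {
      let (mut a, mut b) = (1u64, 1u64); // fib(0), fib(1)
      for _ in 0..n {
          let next = a + b;
          a = b;
          b = next;
      }
      a
  }

  fn main() {
      // Sequence: 1, 1, 2, 3, 5, 8, ...
      assert_eq!(fib(5), 8);
      println!("fib(20) = {}", fib(20));
  }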

OTOH, about 15 years ago I heard from a friend who interviewed candidates that some people couldn't even count all the instances of 'a' in a string. So in fact not much has changed, except that it's harder to spot these kinds of people.

dijksterhuis

i’m around 10 years in as well and i can’t even remember how the fibonacci sequence progresses off hand. I’d have to wikipedia it to even get started.

dherikb

Well, I found these candidates long before ChatGPT

protocolture

I interviewed a guy with a CCIE who couldn't detail what a port forward was. 3 years ago.

genewitch

If some interviewer asked me what recursion was or how to implement it, I'd answer, and then ask them if they can think of a good use case for Duff's device.

dagw

Duff's device hasn't been relevant for 25+ years and there is no reason why anybody who learnt to program within the past 20 years should even know what it is, while recursion is still often the right answer.

oneeyedpigeon

That's a great question because there are 3 levels of answer:

1) I don't know what recursion is

2) This is what recursion is

3) This is what iteration is

brulard

Why? It looks like a reasonable interview question.

aa-jv

I think we already saw this manifestation a few decades ago, with kids who can't program without an IDE.

IDEs are fantastic tools - don't get me wrong - but if you can't navigate a file system, work to understand the harness involved in your build system, or discern the nature of your artefacts and how they are loaded and interact in your target system, you're not doing yourself any favours by having the IDE do all that work for you.

And now what we see is people who not only can't program without an IDE, but can't be productive without a plugin in that IDE, doing all the grep'ing and grok'ing for them.

There has been a concerted effort to make computers stupider for stupid users - this has had a chilling effect in the development world, as well. Folks, if you can't navigate a filesystem with confidence and discern the contents therein, you shouldn't be touching the IDE until you can.

silver_silver

I have (older, but that generation) colleagues who simply stop working if there’s something wrong with the build process because they don’t understand it and can’t be bothered to learn. To be fair to them, the systems in question are massively over complicated and the project definitions themselves are the result of copy-paste engineering.

Unfortunately they also project their ignorance, so there’s massive pushback from more senior employees when anyone who does understand tries to untangle the mess and make it more robust.

The same thing will happen with these ML tools in the future, mark my words: writing code will come to be seen as “too complex and error prone” and barely working, massively inefficient and fragile generated code bases will be revered and protected with “don’t fix what isn’t broken”

kasey_junk

I worked early in my career with a developer who printed out every part of the codebase to review and learn it. He viewed text search tools, file delineations, and folder structures as crutches best avoided.

aa-jv

Yes indeed, that is a perfectly reasonable approach to take, especially in the world of computing where rigorous attention to detail is often rewarded with extraordinary results.

I have very early (and somewhat fond) memories of reviewing every single index card, every single hole in the punch-tape, every single mnemonic, to ensure there were no undiscovered side-effects.

However, there is a point where the directory structure is your friend, you have to trust your colleagues' ability to understand the intent behind the structure, and you can leverage the structure to gain great results.

Always remember: software is a social construct, and is of no value until it is in the hands of someone else - who may, or may not, respect the value of understanding it...

RGamma

Becoming the single point of failure for every societal function is the goal of VC. It's working brilliantly thus far.

belter

It is truly astonishing how bad things have become, so quickly. Just the other day, I found myself debating someone about a completely fabricated AWS API call.

His justification was that some AI had endorsed it as the correct method. There are already salaried professionals out there defending such flawed logic.

globular-toast

I already consider reading to be a superpower. So few people seem capable of it these days. And we're talking about people born decades before ChatGPT.

escapecharacter

They’ll just mumble something about the context window needing to be larger.

mattlondon

Quite reasonable of it to do so I'd say.

The AI tools are good, and they have their uses, but they are currently at best at a keen junior/intern level, making the same sort of mistakes. You need knowledge and experience to help mentor that sort of developer.

Give it another year or two and I hope the student will become the master and start mentoring me :)

alexchamberlain

My biggest worry about AI is that it will do all the basic stuff, so people will never have a chance to learn and move on to the more complex stuff. I think there's a name for this, but I can't find it right now. In the hands of a tenured expert though, AI should be a big step up.

ljm

Similar to the pre-COVID coding bootcamps and crash-courses, we’ll likely just end up with an even larger cohort of junior engineers who have a hard time growing their career because of the expectations they were given. This is a shame, but still, if such a person has the wherewithal to learn, the resources to do so are more abundant and accessible than they’ve ever been. Even the LLM can be used to explain instead of answer.

LinkedIn is awash with posts about being a ‘product engineer’ and ‘vibe coding’ and building a $10m startup over a weekend with Claude 3.5 and a second trimester foetus as a cofounder, and the likely end result there is simply just another collection of startups whose founding team struggles to execute beyond that initial AI prototyping stage. They’ll mistake their prompting for actual experience not realising just how many assumptions the LLM will make on their behalf.

Won’t be long before we see the AI startup equivalent of a rug-pull there.

vrighter

Played a game called Beyond a Steel Sky (sequel to the older Beneath a Steel Sky).

In the starting section there was an "engineer" going around fixing stuff. He just pointed his AI tool at the thing and followed the instructions, while not knowing what he was doing at any point. That's what I see happening.

threeseed

I just did that with Cursor/Claude where I asked it to port a major project.

No code, just prompts. Right now, after a week, it has 4500 compilation errors, with every single file having issues, requiring me to go back and actually understand what it's gone and done. Debatable whether it has saved time or not.

antupis

I think it goes the same way as compilers, so bits -> assembly -> C -> JVM, and now you mostly don't need to care what happens at the lower levels because stuff works. With AI we are now basically at the bits -> assembly phase, so you need to care a lot about what is happening at the lower level.

Lanolderen

To be honest you don't need to know the lower level things. It's just removing the need to remember the occasional boilerplate.

If I need to parse a file I can just chuck it a couple of lines, ask it to do it with a recommended library and get the output in 15 minutes total, assuming I don't have a library in mind and have to find one I like.

Of course verification is still needed but I'd need to verify it even if I wrote it myself anyway, same for optimization. I'd even argue it's better since it's someone else's code so I'm more judgemental.

The issue comes when you start trying to do complex stuff with LLMs, since you then tend to half-ass the analysis part and get led down the development path the LLM chose; you get a mish-mash of the AI's code style and yours from the constant fixes, and it becomes a mess. You can get things implemented quickly like that, which is cool, but it feels like it inevitably becomes spaghetti code, and sometimes you can't even rewrite it easily since it used something that works but that you don't entirely understand.

vbezhenar

Do you worry about calculators preventing people from mastering big-number multiplication? That's an interesting question actually. When I was a kid, calculators were not as widespread and I could easily multiply 4-digit numbers in my head. Nowadays I'd be happy to multiply 2-digit numbers without mistakes. But I carry my smartphone with me, so there's just no need to do so...

So while learning basic stuff is definitely necessary, just like it's necessary to understand how to multiply or divide numbers of any size (kids still learn that nowadays, right?), actually mastering those skills may be wasted time?

dijksterhuis

> actually mastering those skills may be wasted time?

this question is, i’m pretty sure it is safe to assume, the absolute bane of every maths teacher’s existence.

if i don’t learn and master the fundamentals, i cannot learn and master more advanced concepts.

which means no fluid dynamics for me when i get to university because “what’s the point of learning algebra, i’m never gonna use it in real life” (i mean, i flunked fluid dynamics but it was because i was out drinking all the time).

i still remember how to calculate compound interest. do i need to know how to calculate compound interest today? no. did i need to learn and master it as an application of accumulation functions? absolutely.
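(for reference, a minimal sketch of the textbook compounding formula in Python - the numbers are made up for illustration:)

    # future value with interest compounded n times per year (textbook formula)
    def compound_interest(principal: float, rate: float, n: int, years: float) -> float:
        return principal * (1 + rate / n) ** (n * years)

    # 1000 at 5%, compounded monthly for 10 years, is roughly 1647.01
    assert abs(compound_interest(1000, 0.05, 12, 10) - 1647.01) < 0.01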

just because i don’t need to apply something i learned to master before doesn’t mean i didn’t need to learn and master it in order to learn and master something else later.

mastery is a cumulative process in my experience. skipping out on it with early stuff makes it much harder to master later stuff.

> Do you worry about calculators preventing people to master big number multiplication?

yes.

alexchamberlain

Honestly, I think we already see that in wider society, where mental arithmetic has more or less disappeared. This is in general fine ofc, but it makes it much harder to check the output of a machine if you can't do approximate calculations in your head.

threeseed

> but they are currently at best at a keen junior/intern level

Would strongly disagree here. They are something else entirely.

They have the ability to provide an answer to any question, but the accuracy decreases significantly depending on the popularity of the task.

So if I am writing a CRUD app in Python/React, it is expert level. But when I throw some Scala or Rust at it, it is 100x worse than any junior would ever be, because no normal person would confidently rewrite large amounts of code with nonsense that doesn't even compile.

And I don't see how LLMs get significantly better without a corresponding improvement in input data.

littlestymaar

It's not reasonable to say “I cannot do that as it would be completing your work”, no.

Your tool has no say in the morality of your actions. It's already problematic when they censor sexual topics, but if the tool makers feel entitled to configure their tools to allow only certain kinds of use of your work, then we're speedrunning to dystopia (as if it wasn't the case already).

johnisgood

Exactly. The purpose of it is to help you, that is its one and only utility. It has one job, and it refuses to do it.

coldtea

> Your tool has no say in the morality of your actions

Asimov's "Three Laws of Robotics" beg to differ.

genewitch

Only if your actions would harm another human or yourself, right?

Anyhow, in the Caliban books the police robot could consider unethical and immoral things, and even contemplate harming humans, but doing so made it work really slowly, almost tiring it.

Shorel

Still not enough.

Not only that, it's still very far from being good enough. An opinionated AI trying to convince me its way of doing things is the one true way and my way is not the right way - that's the stuff of nightmares.

Give it a few years and when it is capable of coding a customized and full-featured clone of Gnome, Office, Photoshop, or Blender, then we are talking.

rvnx

It's because they "nerfed" Cursor by no longer sending whole files to Claude, but if you use RooCode, the performance is awesome and above that of an average developer. If you have the money to pay for the queries, try it :)

romanovcode

Had extremely bad experience with Cursor/Claude.

Have a big Angular project, +/- 150 TS files. Upgraded it to Angular 19, and now I can optimize the build by marking all components, pipes, services etc. as "standalone", essentially eliminating the need for modules and simplifying the code.

I thought it was perfect for AI as it is straightforward refactoring work that would be annoying for a human.

1. Search every service and remove the "standalone: false"

2. Find the module where it is declared and remove that module

3. Find all files where the module was imported, and import the service itself

Cursor and Claude were constantly losing focus, refactoring services without taking care of modules/imports at all, and generally making things much worse no matter how much "prompt engineering" I tried. I gave up and made a Jira task for a junior developer instead.

_joel

> I gave up and made a Jira task for a junior developer instead.

The true senior engineer skill.

M4v3R

Yes, it feels like refactoring would be a perfect use case for LLMs, but it only works for very simple cases. I've tried several times to do bigger refactors spanning several files with Cursor and it always failed. I think it's a combination of the context window not being big enough, and tooling/prompting that could probably be improved to better support the use case of refactoring.

MrMcCall

How is a 'guess-the-next-token' engine going to refactor one's codebase?

Just because everyone's doing it (after being told by those who will profit from it that it will work) doesn't mean it's not insane. It's far more likely that they're just rubes.

At the end of using whatever tool one uses to help refactor one's codebase, you still have to actually understand what is getting moved to production, because you will be the one getting called at midnight on Saturday to fix the thing.

vbezhenar

Some IDEs like IntelliJ IDEA have a structural search-and-replace feature. I think there are dedicated projects for this task too, so you can search and replace using the AST and not just text.

Maybe it would make sense to ask the AI to use those tools instead of doing edits directly?
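For the curious, a minimal sketch of that idea using Python's standard ast module (a toy rename, not IntelliJ's feature and nothing to do with the Angular case above):

    import ast

    class RenameCalls(ast.NodeTransformer):
        # rename calls to a function structurally, rather than with text search/replace
        def __init__(self, old: str, new: str):
            self.old, self.new = old, new

        def visit_Call(self, node: ast.Call) -> ast.Call:
            self.generic_visit(node)  # rewrite nested calls first
            if isinstance(node.func, ast.Name) and node.func.id == self.old:
                node.func = ast.Name(id=self.new, ctx=ast.Load())
            return node

    source = "total = old_helper(1) + old_helper(2)"
    tree = RenameCalls("old_helper", "new_helper").visit(ast.parse(source))
    print(ast.unparse(tree))  # ast.unparse needs Python 3.9+
    # -> total = new_helper(1) + new_helper(2)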

cess11

I would use find ... -exec, ripgrep and sed for this. If I have a junior or intern around I'd screen share with them and then have them try it out on their own.

Text files are just a database with a somewhat less consistent query language than SQL.

bambax

Could this not be done algorithmically though? One can go reasonably far with static code parsing.

LocalH

Sounds like Claude "has" ADHD lol

rvnx

It doesn't have ADHD; it's more likely because they create too many small chunks in the recent versions of Cursor. So Cursor is looking at the project through a very small magnifying glass and forgets the big picture (in addition to the context-length issue).

jumperabg

This is quite a lot of code to handle in one file. The recommendation is actually good. In the past month (which feels like a year of planning) I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code - Claude was removing some of the code, I didn't have coverage on some of it, and the end result was missing functionality.

Good thing that we can use .cursorrules, so this is something that will partially improve my experience - until a random company releases the best AI coding model that runs on a Raspberry Pi with 4GB of RAM (yes, this is a spoiler from the future).
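For illustration, a hypothetical .cursorrules snippet along those lines (the file is just plain-text instructions in the project root; these particular rules are made up):

    # .cursorrules (hypothetical example)
    - Keep source files under roughly 500 lines; propose a split rather than editing giant files wholesale.
    - Never delete existing functions, exports or tests unless explicitly asked to.
    - When refactoring, list every file you intend to change before making edits.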

maeln

> I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code

Is it a mistake though? Some of the best codebases I worked on were a few files with up to a few thousand LoC. Some of the worst were the opposite: thousands of files with less than a few hundred LoC in each of them. With the tools that I use, I often find navigating and reading through a big file much simpler than having to have 20 files open to get the full picture of what I am working on.

At the end of the day, it is a personal choice. But if we have to choose something we find inconvenient just to be able to fit in the context window of an LLM, then I think we are doing things backward.

johnisgood

Claude seems to be somewhat OK with 1500 LoC in one file. It may miss something or mess something up, sure - that is why you should chunk it up.

davidwritesbugs

I'm using Cursor & Claude/R1 on a file with 5000 LoC; it seems to cope OK.

doix

I wonder if this was real or if they set a custom prompt to try and force such a response.

If it is real, then I guess it's because LLMs have been trained on a bunch of places where students asked other people to do their homework.

thinkingemote

It's real, but (a reply on the forum suggests) Cursor has a few modes for chat, and it looks like he wasn't in the "agent" chat pane but in the interactive, inline chat thingy. The suggestion is that this mode is limited in how much it can look at, probably a few lines around the caret.

Thus, speculating, a limit on context or a prompt that says something like "... you will only look at a small portion of the code that the user is concerned about and not look at the whole file and address your response to this..."

Other replies in the forum are basically "go RTFM and do the tutorial"!

gloxkiqcza

Sounds like something you would find on Stack Overflow