
LLMs are mirrors of operator skill

makmanalp

Counterthoughts: a) These skills fit on a double-sided sheet of paper (e.g. the Claude Code best practices doc), and b) what these skills are has been changing so rapidly that even the best-practices docs fall out of date super quickly.

For example, managing the context window has become less of a problem: newer models have larger context windows, and tools like the auto-resummarization / context refresh in Claude Code mean you might be just fine without doing anything yourself.

All this to say that the idea that you're left significantly behind if you aren't training yourself on this feels bogus (I say this as a person who /does/ use these tools daily). It should take any programmer no more than a few hours to learn these skills from scratch with the help of a doc, meaning any employee you hire should be able to pick them up no problem. I'm not sure it makes sense as a hiring filter. Perhaps in the future this will change. But right now these tools are built more like user-friendly appliances - more like a cellphone or a toaster than a technology to wrap your head around, like a compiler or a database.

jmsdnns

A key finding from research at Wharton is that LLMs elevate people with less experience a lot more than they elevate people with a lot of experience.

If we take "operator skill" to mean "they know how to write prompts", there is some truth to it, and we can see it in whether or not the operator is deliberately designing the context window.

But on the more important question: whether LLMs are useful has an inverse relationship with how skilled the person already is in the domain they're using them for. This is why the best engineers mostly shrug at LLMs while those who aren't the best feel a big lift.

So, LLMs are not mirrors of operator skill. This post is instead an argument that everyone should become prompt engineers.

namuol

Disagree. Poor engineers will go in circles with AI because they will underspecify their tasks and fail to recognize improper solutions. Ultimately, if you're not thoughtful about your problem and critical of the solution, you will fail. This is true with or without AI at the wheel.

Jensson

> Poor engineers will go in circles with AI

But they move quickly around that circle, making them feel much more productive. And if you don't need anything outside of the circle, it's good enough.

jmsdnns

It's not an opinion; it's what research has shown many times. For example, less experienced people can ask the LLM how to get started or what an experienced engineer might do, etc., as a research tool before writing code.

a_e_k

Experience, though, can definitely vary by domain. Recently, trying to get one to code an algorithm I already had a pretty good idea of took longer and gave worse results than just doing it myself.

On the other hand, something like wrestling with the matplotlib API? I don't have too much experience there and an LLM was a great help in piecing things together.

ghuntley

Keen to read the research. Can you drop the link?

jmsdnns

I was at Penn's first AI conference last year and heard Dr. Lilach Mollick's keynote, where she said this has been shown to be true over and over. She doesn't seem to publish often, but her husband Ethan always has a lot to say about AI.

https://www.oneusefulthing.org/p/everyone-is-above-average

ghuntley

Thanks.

namuol

> If I were interviewing a candidate now, the first things I'd ask them to explain would be the fundamentals of how the Model Context Protocol works and how to build an agent.

Um, what? This is the sort of knowledge you definitely do not need in your back pocket. It's literally the perfect kind of question for AI to answer. Also, this is such a moving target that I suspect most hiring processes change at a slower pace than it does.

jrm4

Having worked with and around a lot of different people and places in IT as an instructor, frankly the funniest thing I've observed is that everyone in IT believes there is some baseline concept or acronym or something that "ought to be obvious and well known to EVERYONE."

And it never is. There's just about nothing that fits this criterion.

jiggawatts

Hashtable.

Explain how it works and what you use it for (a quick sketch follows below).

If you don’t know this, you’re not a programmer in any language, platform, framework, front end or back end.

It’s my go-to interview question.

Tell me what’s wrong with that.
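
Roughly the kind of answer I'm after, as a minimal Swift sketch (illustrative only, not tied to any particular codebase):

    // A Swift Dictionary is a hash table: keys are hashed into buckets,
    // giving average O(1) insertion and lookup.
    var wordCounts: [String: Int] = [:]
    for word in ["to", "be", "or", "not", "to", "be"] {
        wordCounts[word, default: 0] += 1   // typical uses: counting, caching, indexing
    }
    print(wordCounts["to"] ?? 0)            // prints 2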

aezart

What's wrong with that is that a lot of languages don't call them hashtables. I don't think I've used the actual term since like 2016.

tough

try asking ChatGPT about MCP and it's so new it will hallucinate about some other random stuff

it's still a bad interview question unless you're hiring someone to build AI agents imho

dgfitz

I would just give some sort of LLM-esque answer that sounds correct but is very wrong, and hope I would get the opportunity to follow that up with: "Oh, I must have hallucinated that, can you give me a better prompt?"

I crack myself up.

layer8

Don’t forget to apologize and acknowledge that they are right.

a_e_k

You're right to ask about that! ...

ghuntley

No, it is actually a critical skill. Employers will be looking for software engineers who can orchestrate their job function, and these are the two key primitives for doing that.

kartoffelsaft

The way it's written suggests this is an important interview question for any software engineering position, and I'm guessing you agree, given that you say it's critical.

But by the same logic, should we be asking for the same knowledge of the Language Server Protocol and tools like tree-sitter? They're integral right now in the same way these new tools are expected to become (and have become for many).

As I see it, knowing the internals of these tools might be the thing that makes the hire, but it's not something you'd screen every candidate who comes through the door with. It's worth asking, but not "critical." Usage of these tools? Sure. But knowing how they're implemented is simply one indicator of whether the developer is curious and willing to learn about their tools - an indicator you need many of to get an accurate assessment.

ghuntley

Understanding how to build an agent and how the Model Context Protocol works is going to be, by my best guess, the new "what is a linked list and how do you reverse a linked list" interview question. Sure, new abstractions will come along, which means you could perhaps be blissfully unaware of how to do that because there's a higher-order way to achieve such things. But for now, we are at the level of C and, like C, it's essential to know what those are and how to work with them.
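
For anyone who hasn't seen that classic question, a minimal sketch of reversing a singly linked list (illustrative Swift, not from the post):

    // A minimal singly linked list node.
    final class Node {
        var value: Int
        var next: Node?
        init(_ value: Int, next: Node? = nil) { self.value = value; self.next = next }
    }

    // Reverse iteratively by re-pointing each node at its predecessor: O(n) time, O(1) extra space.
    func reverse(_ head: Node?) -> Node? {
        var prev: Node? = nil
        var current = head
        while let node = current {
            current = node.next   // advance before re-pointing
            node.next = prev
            prev = node
        }
        return prev
    }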

elliotbnvl

It's a good heuristic for determining how read-in somebody is on the current AI space, which is super important right now, regardless of it being a moving target. The actual understanding of MCP is less important than the mindset that having such an understanding represents.

namuol

Hard disagree. It's not super important to be AI-pilled. You just need to be a good communicator. The tooling is a moving target, but as long as you can explain what you need well and can identify confusion or hallucination, you'll be effective with these tools.

elliotbnvl

Nope. Being a good communicator and being good at AI are two completely different skillsets. Plenty of overlap, to be sure, but being good at one does not imply being good at the other any more than speaking first-language quality English means you are good at fundraising in America.

I know plenty of good communicators who aren't using AI effectively. At the very least, if you don't know what an LLM is capable of, you'll never ask it for the things it's capable of and you'll continue to believe it's incapable when the reality is that you just lack knowledge. You don't know what you don't know.

mromanuk

Every time I ask an LLM to write some UI and a model for SwiftUI, I have to specify that it should use the @Observable macro (the new way), which it normally does once asked.

The LLM tells me that it prefers the "older way" because it's more broadly compatible, which is OK if that's what you're aiming for. But if the programmer doesn't know about that, they will be stuck with the LLM calling the shots for them.
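
For concreteness, this is roughly what the "new way" looks like - a minimal sketch assuming iOS 17+/macOS 14+; the model and view names are just placeholders:

    import SwiftUI
    import Observation

    // The @Observable macro replaces the older ObservableObject/@Published pattern.
    @Observable
    final class CounterModel {
        var count = 0
    }

    struct CounterView: View {
        // With @Observable, a plain @State property owns the model; no @StateObject needed.
        @State private var model = CounterModel()

        var body: some View {
            Button("Count: \(model.count)") { model.count += 1 }
        }
    }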

bcrosby95

You need to create your own preamble that you include with every request. I generally have one for each codebase, which includes a style guide, preferred practices & design (lots of 'best practices' are cargo-culted and the LLM will push them on you even when they don't make sense - this helps eliminate those), and declarations of common utility functions that may need to be used.
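
Something like this, roughly - a trimmed-down, hypothetical sketch; the helper names are placeholders, not real project code:

    Conventions (prepended to every request in this repo):
    - Follow the existing module layout; don't add new top-level directories.
    - Prefer small, pure functions; no new dependencies without asking first.
    - Don't keep backwards-compatibility shims unless explicitly requested.
    - Reuse the existing httpClient and logger helpers instead of writing new ones.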

klntsky

Use always-enabled Cursor rules (or the equivalent in your agentic editor of choice).

starlust2

A thing people miss is that there are many different right ways to solve a problem. A legacy system might need the compatibility, or it might be a greenfield project. If you leave a technical requirement out of the prompt, you are letting the LLM decide. Maybe its choice will agree with your nuanced view of things, but maybe not.

We're not yet at a point where LLM coders will learn all your idiosyncrasies automatically, but those feedback loops are well within our technical ability. LLMs are roughly a knowledgeable but naïve junior dev; you must train them!

Hint: add that requirement to your system/app prompt and be done with it.

maxwell

It's just a higher-level abstraction, subject to leaks as with all abstractions.

How many professional programmers don't have assemblers/compilers/interpreters "calling the shots" on arbitrary implementation details outside the problem domain?

LorenPechtel

But we trust those tools to do the job correctly. The compiler has considerable latitude to mess with the details so long as the result is guaranteed to match what was ordered; when we find any deviation from that, even in an edge case, we consider it a bug. (Borland Pascal debugger, I'm looking at you. I wasted a *lot* of time on the fact that in single-step mode you peacefully "execute" an invalid segment register load!) LLMs lack this guarantee.

maxwell

We trust those tools to do the job correctly now.

https://vivekhaldar.com/articles/when-compilers-were-the--ai...

cpinto

Have you tried writing rules for how you want things done, instead of repeating the same things every time?

jasonjmcghee

The trained behavior of attempting backward compatibility has never once been useful to me and is a constant irritation.

> Please write this thing

> Here it is

> That's asinine why would you write it that way, please do this

> I rewrote it and kept backward compatibility with the old approach!

:facepalm:

dwaltrip

You will get better results if you reset the code changes, tweak the prompt with new guidelines (e.g. don’t do X), and then run it again in a fresh chat.

The less cruft and the fewer red herrings in the context, the better. And likewise with including key info, technical preferences, and guidelines. The model can't read our minds, although sometimes we wish it could :)

There are lots of simple tricks to make it easier for the model to provide a higher quality result.

Using these things effectively is definitely a complex skill set.

diggan

Sounds like an OK default, especially since the "better" (in your opinion) way can be achieved by just adding "Don't try to keep backwards compatibility with old code" somewhere in your reusable system prompt.

It's mostly useful when you work a lot with "legacy code" and can't just remove things willy-nilly. Maybe that sort of coding is over-represented in the datasets, as it tends to be pretty common in (typically conservative) larger companies.

patrickhogan1

I agree with most of what you’re saying—especially the Unix pipe analogy and the value of building a prompt library while understanding which LLM to use.

That said, I think there’s value in a catch-all fallback: running a prompt without all the usual rules or assumptions.

Sometimes a simple prompt on the latest model just works, and often more effectively than one with a complex prompt.

foldr

Isn’t this just a roundabout way of saying that people who are skilled at using LLMs will get better results from them than people who aren’t? In other words, LLMs “mirror operator skill” in about the same way as hammers, paintbrushes or any other tool. Hardly a blinding insight.

energy123

It's a controversial opinion because both AI optimists and AI pessimists can find room for disagreement with the premise. The optimists think vibe coding is about to be fully automated and humans don't have long, one or two years at best. The pessimists think LLMs don't add much value in the first place. In either case they would disagree with the premise.

marinmania

Agreed. By HN standards I am a very shitty programmer, and as of a year ago I would have said coding takes up about 25% of my time. I pretty much just make demos to display some non-coding research.

I think with the rise of LLMs, my coding time has been cut down by almost half. And I definitely need to bring in help less often. In that sense it has raised my floor, while making the people above me (not necessarily super coders, but still more advanced) less needed.

brcmthrowaway

Is there a good guide or resource on properly using LLMs/agents for coding? How do I get started?

spacebanana7

I found that learning prompt engineering was largely a waste of time. The value of the knowledge seems to depreciate so quickly.

I spent loads of time learning special syntax that helped GPT-3.5, or ComfyUI for Stable Diffusion. Now the latest models can do exactly what I want without any of those "high skill" prompts. The context windows are so big that we can be quite lazy about dumping files into prompts without optimisation.

The only general advice I'd give is to take more risk and continually ask more of the models.

ghuntley

See the tweet on workflow that I put in the post. No courseware, no bullshit; it's there. Have fun. The blog has plenty of guidance, from using specs to creating standard libraries of prompts to cloning a venture-capital-backed company while you sleep.

satisfice

A lot of these questions no one knows the answer to.

If you think you know "the best LLM to use for summarizing," then you must have done a whole lot of expensive testing, but you didn't, did you? At best you saw a comment on HN and you believed it.

And if you did do such testing, I hope it wasn't more than a month ago, because it's out of date by now.

The nature of my job affords me the luxury of playing with AI tech to do a lot of things, including helping me write code. But I'm not able to answer detailed technical questions about which LLM is best for what. There is no reliable and durable data. The situation itself is changing too quickly to track unless you have the resources for full subscriptions to everything and no real work to do.

ninetyninenine

My hope is that AI never improves enough to take over my job, and that the next generation of programmers is so used to learning programming with AI that they become mirrors of hallucinating AI. That would eliminate both ageism and AI taking my job.

Realistically, though, I think AI will reach a point where it can take over my job. But if not, this is my hope.

whatnow37373

Maybe we can learn some lessons from digital artists, who naturally fret over the usefulness of their skills and how they will be replaced by Stable Diffusion and friends.

In one way, yes, this massively shifts power into the hands of the less skilled. On the other hand, if you need some proper, and I mean proper, marketing materials, who are you going to hire? A professional artist using AI, or some dipshit with AI?

There will be slop, of course, but after a while everyone has slop, and the only differentiating factor will be quality, or at least some gate-kept, arbitrary level of complexity. Like how rich people want fancy handmade stuff.

Edit: my point is mainly that the level will rise to the point where you'd need to be a scientist to create a (then) fancy app again. You see this with the web. It was easy; we made it ridiculously, and I mean ridiculously, complicated, to the point where you need to study computer science to debug React rendering for your marketing pamphlet.

henning

Centering an interview around MCP and "building an agent" (which often means "write a prompt and make HTTP calls to some LLM host service") is incredibly stupid, unless that is the product the company actually produces.

MCP is a tool and may be of only minor relevance to a position. If I use tools that use MCP but our actual business is about something else, the interview should be about what the company actually does.

Your arrogance and presumptions about the future don't make you look smart when you are so likely to be wrong. Enumerating enough predictions until one of them is right isn't prescience, it's bullshit.