
Context Engineering for Agents

10 comments · July 1, 2025

dmezzetti

Good retrieval/search is the foundation of context. Otherwise it's definitely garbage in, garbage out. Search is far from a solved problem.

ares623

Another article handwaving away or underselling the effects of hallucination. I can't help but draw parallels to layer-2 attempts in crypto.

FiniteIntegral

Apple released a paper showing the diminishing returns of "deep learning", specifically when it comes to math. For example, the models it tested have a hard time solving the Tower of Hanoi problem past 6-7 discs, and that's without even requiring optimal solutions. The agents they tested would hallucinate steps and couldn't follow simple instructions.

On top of that -- rebranding "prompt engineering" as "context engineering" and pretending it's anything different is ignorant at best and destructively dumb at worst.

senko

That's one reading of that paper.

The other is that they intentionally forced LLMs to do things we know they're bad at (following algorithms step by step, tasks that require more context than is available, etc.) without allowing them to solve the problem the way they're optimized to: writing code that implements the algorithm.
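
For illustration, the algorithmic solve in question is only a few lines of code; a minimal sketch in Python (function name and peg labels are arbitrary):

```python
# Classic recursive Tower of Hanoi: returns the optimal move list for n discs.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller discs back on top
    return moves

print(len(hanoi(7)))  # 127 moves (2^7 - 1): trivial to compute, tedious to simulate step by step
```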

A cynical read is that the paper is the only AI achievement Apple has managed in the past few years.

(There is another: they managed not to lose MLX people to Meta)

OJFord

Let's just call all aspects of LLM usage 'x-engineering' to professionalise it, even while we're barely starting to figure it out.

hnlmorg

Context engineering isn’t a rebranding. It’s a widening of scope.

Like how all squares are rectangles but not all rectangles are squares: prompt engineering is context engineering, but context engineering also includes other optimisations that are not prompt engineering.

That all said, I don't disagree with your overall point regarding the state of AI. The industry is so full of smoke and mirrors these days that it's really hard to separate the actual novel uses of "AI" from the bullshit.
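
To make the scope difference concrete, here's a minimal sketch of assembling a context window from more than just the hand-written prompt (Python; all names are illustrative, not any particular framework's API):

```python
# Illustrative only: the prompt is one slice of a larger, engineered context.
def build_context(user_prompt: str,
                  retrieved_chunks: list[str],
                  memory_summary: str,
                  tool_outputs: list[str],
                  max_chars: int = 24000) -> str:
    """Assemble the full model input from several sources, prompt last."""
    sections = [
        ("SYSTEM", "You are a coding assistant. Prefer citing retrieved sources."),
        ("MEMORY", memory_summary),                       # summarised long-term state
        ("RETRIEVED", "\n---\n".join(retrieved_chunks)),  # search/RAG results
        ("TOOL OUTPUT", "\n".join(tool_outputs)),         # results of earlier tool calls
        ("USER", user_prompt),                            # the prompt-engineering slice
    ]
    context = "\n\n".join(f"## {name}\n{body}" for name, body in sections if body)
    return context[:max_chars]  # crude budget cap; real systems trim per section

print(build_context("Why is my retry loop busy-waiting?",
                    ["docs: backoff should sleep between attempts"],
                    "User prefers Python examples.",
                    ["grep: retry() defined in net/client.py"]))
```

Prompt engineering tweaks the USER (and maybe SYSTEM) slice; context engineering covers everything that ends up in the window, including what gets retrieved, summarised, or dropped.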

jes5199

Good survey of what people are already implementing, but I'm convinced we barely understand the possibility space here. There may be much more elaborate structures to put context into that haven't been discovered yet.

azaras

To provide context, I utilize the memory-bank pattern with GitHub Copilot Agent, but I believe I'm wasting a significant number of tokens.
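
For what it's worth, a rough way to see (and cap) that cost is to budget the memory bank before it goes into the context; a minimal sketch, assuming the memory bank is a folder of markdown files and using a crude ~4 characters per token estimate (all names hypothetical):

```python
from pathlib import Path

def load_memory_bank(folder: str = "memory-bank", token_budget: int = 4000) -> str:
    """Concatenate memory-bank files until a rough token budget is reached."""
    used, parts = 0, []
    for path in sorted(Path(folder).glob("*.md")):
        text = path.read_text(encoding="utf-8")
        est_tokens = len(text) // 4                # ~4 chars/token heuristic
        if used + est_tokens > token_budget:
            print(f"skipping {path.name}: ~{est_tokens} tokens would blow the budget")
            continue
        used += est_tokens
        parts.append(f"# {path.name}\n{text}")
    print(f"memory-bank context: ~{used} tokens")
    return "\n\n".join(parts)
```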

truth_seeker

Nah! I am not convinced that context engineering is better (in the long term) than prompt engineering. Context engineering is still complex and needs maintenance. It's much lower level than human-level language.

Given domain expertise in the problem statement, we can apply the same tactics as context engineering at a higher level, through prompt engineering.

hnlmorg

This whole industry is complex and needs constant maintenance. APIs break all the time -- and that's assuming they were even correct to begin with. New models are constantly released, each with their own new quirks. People are still figuring out how to build this tech -- and as quickly as they figure one thing out, the goal posts move again.

This entire field is basically being built on quicksand. And it will stay like this until the bubble bursts.