
AI must RTFM: Why tech writers are becoming context curators

epolanski

I will just say that AI is forcing me to document and write a lot more than I used to. I feel it's super boring yet beneficial for everyone; there are no more excuses to procrastinate on those aspects.

The difference from before: all stakeholders lived on a shared yet personal interpretation of the domain (business, code, whatever). This often led to wasted time, onboarding issues, and so on.

LLMs are forcing me to plan, document and define everything, and I think that's making the codebases/documentation/tests/prs and myself all better.

ivape

I’m finding that too. Most of my LLM dev work is research and technical design documents. That’s my version of vibe coding. By the time I’m done with a spec, everything is mostly sorted out to the point I can very accurately drive the LLM to code complete stuff I’ve already internalized in the design process.

Just from my own personal experience, I really don't see vibe coding as the mark of a developer. It's the final version of no-code tools, but devs never reach for those anyway.

jerpint

I’ve been building a tool to help me co-manage context better with LLMs

When you load it into your favourite agents, you can safely assume whatever agent you're interacting with is immediately up to speed with what needs to get done, and it too can update the context via MCP.

https://github.com/jerpint/context-llemur
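
For the unfamiliar, the general shape of such a tool is a small MCP server whose tools let agents read and append shared context. Here's a minimal sketch using the official mcp Python SDK; the tool names and file layout are invented for illustration, not context-llemur's actual API:

    # Hypothetical MCP server exposing shared project context to agents.
    from pathlib import Path

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("context-server")
    CONTEXT = Path("context/NOTES.md")  # invented file layout

    @mcp.tool()
    def read_context() -> str:
        """Return the shared project context so an agent starts up to speed."""
        return CONTEXT.read_text() if CONTEXT.exists() else ""

    @mcp.tool()
    def append_context(note: str) -> str:
        """Record a decision or status note for the agents that come next."""
        CONTEXT.parent.mkdir(parents=True, exist_ok=True)
        with CONTEXT.open("a") as f:
            f.write(note.rstrip() + "\n")
        return "context updated"

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default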

sixtyj

AI reads your “prompt”, but you have to be very specific and know what you want to achieve and how.

A typical example: once you’ve asked about JSON, Claude or ChatGPT starts thinking you want everything in JSON. :)

I have spent the last two months using Google Gemini with its 1M-token window, and I have to say: an inaccurate assignment leads to an inaccurate result. And the days really fly by.

On the other hand, I wouldn’t be able to build anything without it, as I am solo and a bad coder :)

But you do have to spend a lot of time writing and refining the assignment.

Sometimes it was better to start over than to end up in a dead end after 2-3 hours of wrangling the code from the AI.

The AI probably knows all the main documentation. But if you want to supply more documentation before starting a project, just upload it as Markdown.

actuallyalys

> At this point, most tech writing shops are serving llms.txt files and LLM-optimized Markdown

I find this hard to believe. I’m not sure I’ve ever seen llms.txt in the wild, and in general I don’t think most tech writing shops are that much on the cutting edge.

I have seen more companies add options to search their docs via some sort of AI, but I doubt that’s a majority yet.

ryandv

At this point I can't tell if all these blog posts hyping LLMs are themselves written by LLMs, and thus hallucinating alleged productivity boosts and "next-generation development practices" that are nowhere to actually be found in reality.

Shame, because it's a bunch of nice-looking words, but that doesn't matter if they're completely false.

theletterf

No, not a majority yet. Forgot to add "bleeding edge" :)

shortrounddev2

I'm not sure why any publisher would go out of their way to make it easier for an LLM to read their site. If it were possible I'd block them entirely.

dang

Related (I think?)

We revamped our docs for AI-driven development - https://news.ycombinator.com/item?id=44697689 - July 2025 (35 comments)

theletterf

Indeed. :)

jimbokun

It makes sense that technical writers would make the best Vibe Coders.

They give the LLM a clear, thorough, fluent description of the system requirements, architecture, and constraints, sufficient to specify a good implementation.

tokyolights2

Tangentially related: for those of you using AI tools more than I am, how do LLMs handle things like API updates? I assume the Python2/3 transition was far enough in the past that there aren't too many issues. How about other libraries that have received major updates in the last year?

Maybe a secret positive outcome of using automation to write code is that library maintainers have a new pressure to stop releasing totally incompatible versions every few years (looking at Angular, React...)

mrbungie

Horribly. In my experience, when dealing with "unstable" or rapidly evolving APIs/designs, like IaC with OpenTofu, you need MCP connected to the Terraform provider documentation (or just example/Markdown files, whichever you prefer) for LLMs to actually work correctly.

gopalv

> for those of you using AI tools more than I am, how do LLMs handle things like API updates?

From recent experience, 95% of changes are good and are done in 15 minutes.

The other 5% of changes get made but break things: the API might be documented, but your code probably records "What I do here" in bits and pieces rather than "Why I use this here".
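
To make that concrete, here's a contrived Python sketch of the two comment styles; the function and its rationale are invented:

    import time
    import urllib.request

    def fetch_with_retry(url: str, attempts: int = 3) -> bytes:
        # "What I do here" (restates the code; useless during a migration):
        #   call the endpoint, retrying up to `attempts` times.
        # "Why I use this here" (survives the migration):
        #   the upstream rate limiter silently drops bursts, so we retry
        #   with backoff instead of failing the whole batch.
        for attempt in range(attempts):
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.read()
            except OSError:  # urllib's URLError is an OSError subclass
                time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s
        raise RuntimeError(f"gave up on {url} after {attempts} attempts")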

In hindsight it was an overall positive experience, but if you'd asked me at the end of the first day, I'd have been very annoyed.

If I'd been asked to estimate, I'd have said this would take me from Monday to Friday, but it took me till Wednesday afternoon.

Half a day in, though, I thought I was 95% done; it then took 2+ more days to close out that last 5% of hidden issues.

And that's because the test suite was catching enough classes of issues for me to go find them everywhere.

kaycebasques

> how do LLMs handle things like API updates?

Quite badly. I can't tell you how many times an LLM has suggested WORKSPACE solutions to my Bazel problems, even when I explicitly tell it that I'm using Bzlmod.

whynotmaybe

With Dart/Flutter, it's often recommending deprecated code and practices.

Deprecated code is quickly flagged by VS Code (like Text.textScaleFactor), but not the newer way of separating items in a column/row with the spacing parameter (instead of manually adding a SizedBox between items).

Coding with an LLM is like coding with a Senior Dev who doesn't follow the latest trends. It works, has insights and experience that you don't always have, but sometimes it might code a full quicksort instead of just calling list.sort().

aydyn

If you think the correct API is not going to be in its weights (or if there are different versions in current use), you ask nicely for it to "please look at the latest API documentation before answering".

Sometimes it ignores you but it works more often than not.

remify

LLMs fall short on most edge cases

coffeecoders

To put it bluntly, the current state of AI often comes down to this: describing a problem in plain English (or your local language) vs writing code.

Say, “Give me the stock status of an iPhone 16e 256GB White in San Francisco.”

I still have to provide the API details somewhere — whether it’s via an agent framework (e.g. LangChain) or a custom function making REST calls.

The LLM’s real job in this flow is mostly translating your natural language request into structured parameters and summarizing the API’s response back into something human-readable.
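
That flow is plain function calling. A minimal Python sketch of the three pieces; the schema follows the common JSON-Schema tool format, and the stock endpoint is invented:

    import json
    import urllib.parse
    import urllib.request

    # 1. The API details you still have to provide, as a tool schema:
    STOCK_TOOL = {
        "name": "get_stock_status",
        "description": "Check in-store stock for a product in a city",
        "parameters": {
            "type": "object",
            "properties": {
                "product": {"type": "string"},
                "storage": {"type": "string"},
                "color": {"type": "string"},
                "city": {"type": "string"},
            },
            "required": ["product", "city"],
        },
    }

    # 2. The custom function making REST calls (hypothetical endpoint):
    def get_stock_status(product: str, city: str,
                         storage: str = "", color: str = "") -> dict:
        query = urllib.parse.urlencode(
            {"product": product, "storage": storage,
             "color": color, "city": city})
        url = f"https://api.example.com/stock?{query}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # 3. The LLM's real job: turn "Give me the stock status of an iPhone
    #    16e 256GB White in San Francisco" into structured arguments...
    args = {"product": "iPhone 16e", "storage": "256GB",
            "color": "White", "city": "San Francisco"}
    result = get_stock_status(**args)
    # ...then summarize `result` back into something human-readable.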

devmor

I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

That's the point of these text documents, and that's why this approach doesn't actually produce an efficiency gain the majority of the time.

A programmer who expects the LLM to solve an engineering problem is rolling the dice and hoping. A programmer who has solved an engineering problem and expects the implementation from the LLM will usually get something close to what they want. Will it be faster than doing it yourself? Maybe. Is it worth the cost of the LLM? Probably not.

The wild estimates and hype about AI-assisted programming paradigms come from people winning the dice roll on the former case and thinking that result is not only consistent, but also the same for the latter case.

quantdev1

> I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

Politely need to disagree with this.

Quick example. I'm wrapping up a project where I built an options back-tester from scratch.

The thing is, before starting this, I had zero experience or knowledge with:

1. Python (knew it was a language, but that's it)

2. Financial microstructure (couldn't have told you what an option was - let alone puts/calls/greeks/etc)

3. Docker, PostgreSQL, git, etc.

4. Cursor/IDE/CLIs

5. SWE principles/practices

This project used or touched every single one of these.

There were countless situations (the majority?) where I didn't know how to define the problem or how to articulate the solution.

It came down to interrogating AI at multiple levels (using multiple models at times).

mylifeandtimes

ok, but if you don't have a lot of prior experience in this domain, how do you know your solution is good?

quantdev1

*zero experience

Short answer: No idea. Because I don't trust my existing sources of feedback.

Longer answer:

I've only gotten feedback from two sources...

AI (multiple models) and a friend that's a SWE.

Despite my best efforts to shut down the AI's bias towards positive feedback, it keeps saying the work is ridiculously good and thinks I need to seriously consider a career change.

My friend - who knows my lack of experience - had a hard time believing I did the work. But he's not a believable source - since friends won't give you cold, hard feedback.

I'm thinking about sharing it on here when it's done.

devmor

I should have specified that I am referring to their usage for experienced developers working on established projects.

I think that they have much more use for someone with no/little experience just trying to get proof of concepts/quick projects done because accuracy and adherence to standards don't really matter there.

(That being said, if Google were still as useful of a tool as it was in its prime, I think you'd have just as much success by searching for your questions and finding the answers on forums, stackexchange, etc.)

quantdev1

Thanks for clarifying, and great points.

I could see how it would be dangerous in large-scale production environments.

bgwalter

They add claude.md files because their employers force them to. They could have done that years ago for humans.

I also see it mostly in spaghetti code bases, not in great code bases where no one uses "AI".

theletterf

Author here: If the LLM revolution helps us get more accessible and better docs, I, for one, welcome it.

Edit: I guess some commenters misunderstood my message. I'm saying that by also serving the needs of LLMs, we might get more resources to improve docs overall.

jacobsenscott

We've seen this story many times before. Remember UML and Rational Rose, waterfall, big up-front design, low/no-code services, etc. In every case the premise is "pay a few 'big brains' to write the requirements in some 'higher-level language', and tools or low-skill, low-pay engineers can automatically generate the code."

It turns out, to describe a system in enough detail to implement the system, you need to use a programming language. For the system to be performant and secure you need well educated high skill engineers. LLMs aren't going to change that.

Anyway, this is tacitly declaring LLM bankruptcy - the LLM can't understand what to do by reading the most precise specification, the code, so we're going to provide less specific instructions and it will do better?

jeremyjh

Right, because LLMs cannot learn from their prior work in a project, nor can they read a large code base for every prompt. So it can be helpful to put documents in their immediate context for understanding the project and the code base layout, but you can also generate a lot of that and just fix it up.

theletterf

Oh, I remember. I'm an OpenAPI fan, for example. This time it feels a bit different, though. It won't erase the need for high-skill engineers, nor the need for tech writers to create docs, but it might indeed help us see docs in a different light: not as a post-factum, often neglected artifact, but as a design document. I'm talking about actual user docs here, not PRFAQs or specs.

jrvieira

Hell, the Semantic Web even. We wouldn't need AI.

zer00eyz

> It turns out, to describe a system in enough detail to implement the system, you need to use a programming language.

Go back to design patterns. Not the Gang of Four, but the book the name and concept were lifted from: Christopher Alexander's A Pattern Language.

What you will find is that implementations are impacted by factors that are not always intuitive without ancillary information.

It's clear when there is a cowpath through a campus and the need for a sidewalk becomes apparent. It's not so clear when that happens in code, because it often isn't linear. That's why documentation is essential.

"Agile" has made this worse, because the why is often lost, meetings or offline chats lead to tickets with the what and not the why. It's great that those breadcrumbs are linked through commits but the devil is in the details. Even when all the connections exist you often have to chase them through layers of systems and pry them out of people ... emails, old slack messages, paper notes, a photo of a white board.

9rx

> more accessible and better docs

Easy now. You might be skilled in documentation, but most developers write docs like they write code. For the most part, you are only going to get the program written twice: once in natural language and once in a programming language. In which case, you could have simply read the code in the first place (or had an LLM explain the code in natural language, if you can't read code for some reason).
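
Concretely, the anti-pattern looks like this (Python, details invented): the first docstring is the code written a second time; the second records intent the code cannot carry, which is the only part worth writing down:

    def apply_discount(price: float, rate: float) -> float:
        """Multiply price by one minus rate and return it."""  # code, twice
        return price * (1 - rate)

    def apply_discount_documented(price: float, rate: float) -> float:
        """Return the discounted price.

        `rate` is a fraction in [0, 1); inputs are validated upstream.
        Rounding is deliberately deferred to the billing layer.
        """
        return price * (1 - rate)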

StableAlkyne

> you are going to get the program written twice, once in natural language and once in a programming language.

How is this a bad thing? Personally, I'm not superhuman and more readily understand natural language.

If I have the choice between documentation explaining what a function does and reading the code, I'm going to just read the docs every time. Unless I have reason to think something is screwy or having an intricate understanding is critical to my job.

If you get paid by the hour then go for it, but I don't have time to go line by line through library code.

bgwalter

It does not give us better docs. The claude.md files are a performative action for the employer. No one will write APUE-level documentation, because everything in those code bases is constantly rewritten anyway, and no one has the skills, or is rewarded, to write like that any more.

theletterf

That's why I feel tech writers need to step in and help.

righthand

I think I found where efficiency is being lost.

theletterf

Could you elaborate?

righthand

Well you’ve noticed a trend:

> I’ve been noticing a trend among developers that use AI: they are increasingly writing and structuring docs in context folders so that the AI powered tools they use can build solutions autonomously and with greater accuracy

To me this means a lot of engineers are spending time maintaining files that help them automate a few things in their job. But sinking all that time into context for an LLM is most likely going to net you efficiency gains only on the projects the context was originally written for. Other projects might benefit from smaller parts of these files, but if engineers are really doing this, then there is probably some efficiency lost in the creation and management of it all.

If I had to guess, contrary to your post, devs aren't RTFM-ing but are instead asking an LLM or a web search what a good rule/context/limitation would be and pasting it into a file. In which case the use of LLMs is a complexity shift.

theletterf

I think some are doing just that, yes, which I guess would only increase entropy. So it's not just curation, but also design through words. Good old specs, if you want.