I avoid using LLMs as a publisher and writer
104 comments
· July 19, 2025
ryeats
You know that teammate who makes more work for everyone else on the team because they do what they're asked, but in the most buggy and incomprehensible way? When you finally get them to move on to another team, you realize how much time you spent corralling them and fixing their subtle bugs, and now that they're gone, work doesn't seem like so much of a chore.
That's AI.
Spooky23
Just as with a poorly managed team, you need to learn how to manage AI to get value from it. All ambiguous processes are like this.
In my case, I find the value of LLMs with respect to writing is consolidation. Use them to make outlines, not writing. One example: I record voice memos when driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.
AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.
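A minimal sketch of that memo-to-outline pipeline, assuming the OpenAI Python SDK; the model names, prompt, and file name are illustrative assumptions, not the commenter's actual setup:

```python
# Sketch: consolidate a voice memo into an outline, not finished prose.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY;
# model names and the file name are illustrative.
from openai import OpenAI

client = OpenAI()

def memo_to_outline(audio_path: str) -> str:
    # Transcribe the raw voice memo.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        )
    # Ask for consolidation only -- the actual writing stays yours.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Consolidate this rambling voice memo into a "
                        "hierarchical outline. Do not write prose."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

print(memo_to_outline("morning-jog.m4a"))
```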
blibble
> You know that teammate
now imagine he can be scaled indefinitely
you thought software was bad today?
imagine Microsoft Teams in 5 years time
darthcircuit
I’m not even looking forward to Microsoft teams on Monday.
ThatMedicIsASpy
I only need to look at the past 5 years of Windows
DavidPiper
We need to update Hanlon's Razor: Never attribute to AI that which is adequately explained by incompetence.
xerox13ster
And just like the original Hanlon’s Razor, this is not an excuse to be stupid or incompetent.
It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.
chistev
Thank you.
bdangubic
smart people are reading comments like this and going “I am glad I am in the same market as people making such comments” :)
ookblah
seriously, the near future is going to be:
1) people who reject it completely, for whatever reason.
2) people who use it lazily and produce a lot of garbage (let's be honest, this is probably going to happen a lot, which is why group #1 may hate this future; it reminds me of the outsourcing era).
3) people who selectively use it to their advantage.
no point in groups 1 and 3 trying to convince each other of anything.
cgriswald
I think that has been the state of affairs for a while now.
I think your explanation for group 1 is true to a degree, but I have two additional explanations: (1) some element of group 1 is ideologically opposed, whether over copyright, Luddism, or some other concern for our fellow humans; (2) some are deluded into thinking there are only two groups and that group 3 people are all delusional.
Although it is probably an uphill battle I do think both groups 1 and 3 have things to learn from each other.
IAmGraydon
I’m glad for now. Understanding how to utilize AI to your advantage is still an edge at the moment, but it won’t be long before almost everyone figures it out.
raincole
Yeah. Interestingly enough, I've found utilizing AI is a very shallow skill that anyone should be able to learn in days. But (luckily) people have certain tendencies that prevent them from doing so.
bdangubic
it'll be years, because 87.93% of SWEs are subpar, like the post I commented on.
bambax
I'm extremely wary of AI myself, especially for creative tasks like writing or making images, but this feels a little over the top. If you let it run wild then yes, the result is disaster, but for well-defined jobs with a small scope, AI can save a lot of time.
runiq
In the context of code, where review bandwidth is the bottleneck, I think it's spot on. In the arts, comparatively -- be they writing, drawing, or music -- you can feel almost at a glance that something is off. There's a bit of a vibe check thing going on, and if that doesn't pass, it's back to the drawing board. You don't inherit technical debt like you do with code.
0xEF
You're not wrong, but I'd argue that too many people approach gen AI as a replacement instead of a tool, and therein lies the root of the problem.
When I use Claude for code, for example, I'm not asking it to write my code. I'm asking it to review what I've written and either suggest improvements or ways to troubleshoot a problem I'm having. I don't always follow its advice, either; that depends on how much I understand the reply. Sometimes it outputs something that makes sense at my current skill level; sometimes it proposes things I know nothing about, in which case I ask it to break them down further so I can search the internet for more info and see if I can learn more, which pushes the limits of my skill level.
It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.
When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their own use of it. They want the AI to produce the whole project, as opposed to using it as a second brain to offload some mental chunking. That's where gen AI fails, and the user spends all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, and even then there are often hiccups.
Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
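A minimal sketch of that review-only workflow, assuming the Anthropic Python SDK; the model name, prompt wording, and file name are illustrative assumptions, not the commenter's actual setup:

```python
# Sketch: ask Claude to review existing code rather than write it.
# Assumes the Anthropic Python SDK (pip install anthropic) and
# ANTHROPIC_API_KEY; the model name and file name are illustrative.
import anthropic

client = anthropic.Anthropic()

def review(source: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Review the following code. Suggest improvements or "
                       "ways to troubleshoot, but do not rewrite it wholesale:"
                       "\n\n" + source,
        }],
    )
    return message.content[0].text

with open("my_module.py") as f:
    print(review(f.read()))
```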
cardanome
Generative AI is like micromanaging a talented junior dev who never improves. And I mean micromanaging to such a toxic degree that no human would ever put up with it.
It works, but it's simply not what most people want. If you love to code, you've just abstracted away the most fun parts and now only do the boring ones. If you love to manage, well, managing actual humans and seeing them grow and become independent is much more fulfilling.
On a side note, I feel like prompting and context management come easier to me personally as a person with ADHD, since I'm already used to working with forms of intelligence different from my own. I'm used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I'm stereotyping a bit here, but it's still an interesting observation.
Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.
nathan_douglas
I don't disagree with anything you've said, but I _do_ think I'm starting to enjoy this workflow. I don't mind the micromanagement because it's usually the ideas that appeal most to me, not the line-level details of writing code. I suppose I fit in somewhere between the "love to code" and "love to manage" dichotomy you've presented. Perhaps I love to make it look like I have coded? :)
I set up SSH certificates in my homelab last night with Claude Code. It was a somewhat aggravating process - I had to remind it a couple times of some syntax issues, and I'm not sure that it actually took less time than I would've taken to do it myself. And it also locked me out of my cluster when it YOLO'ed some changes it should not have. On the whole, one of the worst AI experiences I've had recently.
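For reference, the core of that setup is a handful of ssh-keygen invocations; a minimal sketch, wrapped in Python for consistency with the other examples here (key names, principals, and validity periods are illustrative):

```python
# Sketch: SSH certificate setup -- a CA key signs host and user keys.
# Wraps OpenSSH's ssh-keygen; all names and validity periods are illustrative.
import subprocess

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# 1. Create the certificate authority keypair (once).
run("ssh-keygen", "-t", "ed25519", "-f", "ca_key", "-N", "", "-C", "homelab-ca")

# 2. Sign a host's public key, producing host_key-cert.pub.
run("ssh-keygen", "-s", "ca_key", "-I", "node1.lab", "-h",
    "-n", "node1.lab,node1", "-V", "+52w", "host_key.pub")

# 3. Sign a user's public key for specific principals.
run("ssh-keygen", "-s", "ca_key", "-I", "me", "-n", "me,admin",
    "-V", "+13w", "id_ed25519.pub")

# Clients then trust the CA via a "@cert-authority" line in known_hosts,
# and hosts trust user certs via "TrustedUserCAKeys" in sshd_config.
```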
But I'm thrilled with it, TBH, because it got done, it works, I didn't have to beat my head against the wall for each little increment of progress, and while Claude Code was beating its own head against the wall, I was able to relax and 1) practice my French, and 2) read my book (Steven Levy's _Artificial Life_, which I recently saw excerpted on HN).
The general state of things is probably still pretty terrible. I know there're no end of irritations that I have with Claude Code, and everything else I've looked at is even less pleasant. But I feel like this might be going in a good direction.
*EDIT*: It should go without saying that I'd much rather be mentoring a junior person, though, as you said.
scarecrowbob
"Gen AI is a great tool, if you approach it with the right mindset."
People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.
I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.
Hell, I have preferred ligature fonts for different languages.
Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.
ninetyninenine
They write that sentence because gen ai has been effective for them.
We have intelligent people using ai and claiming it’s useful.
And we have other intelligent people saying it's not useful.
I'm inclined to believe the former. You can't be deluded about usefulness when it's positive, but you can be when it's negative, simply by using the LLM in a half-assed way and picking the most convenient conclusion without nuance.
billy99k
You can think that... and you will eventually be left behind. AI is not going anywhere and can be used as a performance booster. Eventually, it will be a requirement for most tech-based jobs.
andersmurphy
This reminds me of crypto’s “have fun being poor”. Except now it’s “have fun being left behind/being unemployed”. The more things change the more things stay the same.
mwigdahl
Yes, and it was exactly the same with compilers. All hype and fad -- everyone who's serious about software development writes in assembly.
billy99k
A bit different when you actually see the results.
A guy I went to highschool with complains endlessly about AI generated art and graphics (he's an artist) and like you, just wants to bury his head in the sand.
Consumers don't care if art is generated by AI or humans and in a short period of time, you won't be able to tell the difference.
With the money being poured into AI by all major tech companies, you will be unemployed if you don't keep up with AI.
sampl3username
Left behind what? Consumeristic trash?
dragontamer
Don't you see that the future is XML SOAP RPCs? If you don't master this new technology now, you'll be left behind!!
Then again, maybe I'm too old now and being left behind if I remember the old hype like this....
The entirety of the tech field is constantly hyping the current technology out of FOMO. Whether or not it works out in the future it's always the same damn argument.
ryeats
I was being a bit melodramatic. I'll use it occasionally, and if AI gets better it can join my team again. I don't love writing boilerplate; I just know it's not good at writing maintainable code yet.
rsynnott
I mean, the promoters of every allegedly productivity improving fad have been saying this sort of thing for all of the twenty-odd years I’ve been in the industry.
If LLMs eventually become useful to me, I'll adopt LLMs, I suppose. Until then, well, fool me once…
BrouteMinou
When all you've got is pontificating...
threatripper
You sound bitter. Did you try using more AI for the bug fixing? It gets better and better.
ryeats
My interests tend to be bleeding edge, where there is little training data. I do use AI to rubber-duck, but I can rarely use its output directly.
threatripper
I see. In my experience current LLMs are great for generating boilerplate code for basic UIs but fail at polishing UI and business logic. If it's important you need to rewrite the core logic completely because they may introduce subtle bugs due to misunderstandings or sloppiness.
Arainach
One of the biggest problems with AI is that it doesn't get better and better. It makes the same mistakes over and over instead of learning like a junior eng would.
AI is like the absolute worst outsourced devs I've ever worked with - enthusiastically saying "yes I can do that" to everything and then delivering absolute garbage that takes me longer to fix/convince them to do right than it would have taken for me to just do it myself.
skydhash
Cognitive load isn't related to the difficulty of a task; it's about how much mental energy is spent monitoring it. To reduce cognitive load, you either boost confidence or avoid caring. You can't have confidence in AI output, and most people proposing it sound like they're preaching not to care about quality (because quantity, yay).
threatripper
But quality is going up a lot. Granted, it's not up to human levels yet, but it is going up fast. We will also see more complex quality control of AI output, tailored to specific use cases and sold at a premium. Right now these don't exist, and even if they did, it would be too expensive to run 100x the requests for the same amount of output. So humans are stuck doing quality control, for now.
ants_everywhere
My writing style is pretty labor intensive [0]. I go through a lot of drafts and read things out loud to make sure they work well etc. And I tend to have a high standard for making sure I source things.
I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.
I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.
The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.
I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.
[0] Aside from internet comments of course, which are mostly stream of consciousness.
bgwalter
Michelangelo worked alone on the David for more than two years:
https://en.wikipedia.org/wiki/David_(Michelangelo)#Process
Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Even research many authors simply could not afford.
ants_everywhere
Maybe Michelangelo was a bad choice, but I hope it's clear from my wording that I was using him as an example, not saying anything specific about his use of assistants compared to his peers. And David is a masterpiece, not a minor work.
I don't see where the article says he worked alone on David. It does seem that he used a miniature (bozzetto) and then scaled up with a pointing machine. One possibility is he made the miniature and had assistants rough out the upscaled copy before doing the fine work himself. Essentially, using the assistants to do the work you'd do on a band saw if you were carving out of wood.
> I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Restricting to non-commercial authors would narrow it down, since hiring assistants to write drafts probably only makes financial sense if the cost of the assistant is less than the cost of the time you'd spend drafting.
Alexandre Dumas is maybe a bit higher-brow than Stephen King:
> He founded a production studio, staffed with writers who turned out hundreds of stories, all subject to his personal direction, editing, and additions. From 1839 to 1841, Dumas, with the assistance of several friends, compiled Celebrated Crimes, an eight-volume collection of essays on famous criminals and crimes from European history.
https://en.wikipedia.org/wiki/Alexandre_Dumas
But in general I agree, drafts are often the heart of the work and it's where I'd expect masters to spend a lot of their time. Similarly with the statue miniatures.
netule
James Patterson comes to mind. He simply writes detailed outlines for the plots of his novels and has other authors write them for him. The books are then published under his name, which is more like a brand at that point.
BolexNOLA
At its most basic level I just like throwing things I’ve written at ChatGPT and telling it to rewrite it in “x” voice or tone, maybe condense it or expand on some element, and I just pick whatever word comes to mind for the style. Half the time I don’t really use what it spits out. I am a much stronger editor than I am a writer, so when I see things written a different way it really helps me break through writer’s block or just the inertia of moving forward on something. I just treat it like a mediocre sounding board and frankly it’s been great for that.
When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha
mrbluecoat
I avoided cell phones too when they first came out. I didn't want the distraction or "digital leash". Now it's a stable fixture in my life. Some technology is simply transformational and is just a matter of time until almost everyone comes to accept it at some level. Time will tell if AI breaks through the hype curve but my gut feeling is it will within 5 years.
GlacierFox
My phone is a fixture in my life, but I actually spend a lot of effort trying to rid myself of it. The thing for me, currently on the receiving end, is that I just don't read anything (apart from books) as if it has any semblance of authenticity anymore. My immediate assumption is that a large chunk of it, or sometimes the entire piece, has been written or substantially altered by AI. Seeing this spread into the publishing and writing domain is simply depressing.
uludag
I avoided web3/crypto/bitcoin altogether when they came out. I'm happy I did and I don't see myself diving into this world anytime soon. I've also never used VR/AR, never owned a headset, never even tried one. Again, I don't see this changing any time soon.
Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.
cheschire
smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.
I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.
As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30 minute commute and pickup your girlfriend! Want me to send her a notification that you want to do this?"
People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.
MCP will probably kill the web as we know it.
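For the curious, a minimal sketch of what one of those agent-facing endpoints might look like, assuming the official MCP Python SDK; the server name, tool, and data are entirely hypothetical:

```python
# Sketch: an MCP server exposing showtimes to agents instead of a website.
# Assumes the official MCP Python SDK (pip install mcp); everything named
# here is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("regal-cinema")

SHOWTIMES = {"F1": ["19:30 IMAX", "22:00"]}  # stand-in for a real backend

@mcp.tool()
def showtimes(title: str) -> list[str]:
    """Return tonight's showtimes for a movie, for agents to consume."""
    return SHOWTIMES.get(title, [])

if __name__ == "__main__":
    mcp.run()  # agents connect over stdio rather than scraping a page
```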
TheOtherHobbes
That's not what will happen. The ad-tech companies will pivot and start selling these services as neutral helpers, when in fact they'll use their knowledge of your schedule, preferences, and income to get you to spend money on goods and services you don't really want.
It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.
And the richer you are, the more freedom you'll have to opt out and manage your own decisions.
sampl3username
>smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.
This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would.
wright-goes
Access to banking is indeed critical, but when? And for 2FA, which accounts, and when? As bank apps become more invasive and they also fail to offer substantive 2FA (e.g. the forcing of text messaging as a 2FA option falls outside my risk tolerance), I've segmented my devices' access.
The ability to transfer funds is something I'm now fine doing via a dedicated device with a dedicated password manager account, and I'm fine uninstalling banks' apps from my phone and dis-enrolling cell phone numbers.
Given the wanton collection and sale of my data by many entities I hadn't expected (naivety on my part), I've restricted access to critical services by device and or web browser only. It's had the added bonus of making me more purposeful in what I'm doing, albeit at the expense of a convenience. Ultimately, I'm not saying my approach is right for everyone, but for me it's felt great to take stock of historical behavior and act accordingly.
Findecanor
I bought my first smartphone in 2020 after my old compact camera died, and I couldn't find a replacement to buy because they had been supplanted by smartphones.
coliveira
If this happens I have an excellent business strategy. Human concierges that will help people with specific areas of their lives. Sell a premium service where paid humans will interact with all this noise so clients will never have to talk to machines.
ApeWithCompiler
True, but at least for me it's also true that smartphones are a stable fixture in my life, and by now I try to get rid of them as much as possible.
threatripper
What AI currently lacks is mainly context. A well-trained, experienced human knows their reader very well and knows what they don't need to write, and for what they do write, they know the tone they need to hit. I fully expect that in the future this will turn around: the author will write the facts and framework with the help of AI, and your AI will extract and condense it for your consumption. Your AI knows everything about you. Knows everything you ever consumed. Knows how you think and what it needs to tell you, in which tone, to give you the best experience. You will be informed better than ever before. The future in AI will be bright!
timeon
Analogies are not arguments.
tolerance
For things like coding, LLMs are useful, and DEVONThink's recent AI integrations allow me to use local models as something like an encyclopedia or thesaurus to summarize unfamiliar blocks of text. At best I use it like scratch paper.
I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.
I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.
Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.
mobeets
I'm with you. I think you did a good job of summarizing all the places where LLMs are super practical/useful, but agreed that for prose (as someone who considers themselves a proficient writer), it just never seems to contribute anything useful. And for those who are not proficient writers, I'm sure it can be helpful, but it certainly doesn't contribute any new ideas if you're not providing them.
jml78
I am not a writer. My oldest son, 16, started writing short stories. He did not use AI for any of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing, unclear, or holes in the plot.
Not to take any words it gives, but to read what it says and decide whether those things are true, and if so, make edits. I'm not saying it's a great editor, but it's better than any other resource he has access to as a teenager. Yeah, better than me or his mom.
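A minimal sketch of that kind of feedback-only prompt, assuming the OpenAI Python SDK as in the earlier sketches; the model name and prompt wording are illustrative:

```python
# Sketch: an editor-style prompt -- critique only, no rewritten prose.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; model is illustrative.
from openai import OpenAI

client = OpenAI()

def critique(story: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are an editor. Point out confusing passages, "
                        "unclear motivations, and plot holes. Do not rewrite "
                        "or add any prose of your own."},
            {"role": "user", "content": story},
        ],
    )
    return response.choices[0].message.content
```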
moregrist
Have you looked for:
- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.
- School (and sometimes library) writing workshops. These help students develop bonds with their peers and benefit both sides: the students giving feedback are learning to be better editors.
Both of these offer a lot of value in terms of community building and in getting feedback from people invested in the craft of writing.
jml78
Good feedback, we live a somewhat unusual lifestyle. We are digital nomads that live on a sailboat. I think some of that is possible and I will recommend he look for some online writing groups but the places we generally sail to are countries where schools/libraries aren’t going to have those types of things. It is challenge enough flying him back to the US to take AP exams
ryeats
The open question is whether someone who learns this way will actually develop taste and mastery. I think the answer is mixed: some will use it as a crutch, but it can also give them a little insight beyond what they could learn by reading, and inquisitive minds will be able to grow discerning.
zB2sj38WHAjYnvm
This is very sad.
endemic
Why? Seems like a good idea, relying on the LLM to write for you won’t develop your skills, but using it as an editor is a good middle ground. Also there’s no shame in saying an LLM is “better” than you at a task.
zaphod420
It's not sad, it's using modern tools to learn. People that don't embrace the future get left behind.
SV_BubbleTime
Large Language Model, not Large Fact Model.
tombarys
I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.
esjeon
I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.
But I still don't like that the same model struggles w/ my projects...
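A minimal sketch of that chapter-at-a-time first-draft loop, assuming the OpenAI Python SDK as above; chunking by chapter and the model name are assumptions, and a human translator still revises the output:

```python
# Sketch: first-draft translation, one chapter at a time, with the model
# also flagging source typos. Model name and chunking are assumptions.
from openai import OpenAI

client = OpenAI()

def draft_translation(chapters: list[str]) -> list[str]:
    drafts = []
    for text in chapters:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Translate this chapter from English to Korean. "
                            "Append translator's notes flagging any typos or "
                            "grammar errors you notice in the source."},
                {"role": "user", "content": text},
            ],
        )
        drafts.append(response.choices[0].message.content)
    return drafts  # a first draft only; the human pass comes after
```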
jdietrich
As a professional writer, the author of this post is likely a better writer than 99.99% of the population. A quick skim of his blog suggests that he's comfortably more intelligent than 99% of people. I think it's totally unsurprising that he isn't fully satisfied with the output of LLMs; what is remarkable is that someone in that position still finds plenty of reasons to use them.
Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce".
nerevarthelame
I'm worried that an increasing number of people are relying on LLMs for things as fundamental to daily life as expressing themselves verbally or critical thinking.
Perhaps LLMs can move someone's results from the 25th percentile to the 50th for a single task. (Although there's probably a much more nuanced discussion to be had about that: people with poor writing skills can still have unique, valuable, and interesting perspectives that get destroyed in the median-ization of current LLM output.) But after a couple years of using LLMs regularly, I fear that whatever actual talent they have will atrophy below their starting point.
antegamisou
Idk, LLM writing style somehow almost always ends up sounding like an insufferable smartass Redditor spiel. Maybe it's only appealing to the respective audience.
K0balt
AI is useful in closed-loop applications; often it can even do a decent job of closing the loop itself. But you need to understand that it is a fundamentally extractive, not creative, process. The body of human cultural knowledge is the underlying resource, and AI is the drill with which we pull out the parts we want.
Coding, robotics, navigation of constrained data spaces such as translation, tagging, indexing, logging, parsing, data transformations… those are all strong target candidates for transformer architecture automation.
Creative thought is not.
metalrain
Pretty similar view to what others have expressed, in the vein of "LLMs can be good, just not at my [area of expertise]".
esjeon
I'm pretty sure they were generally (if not completely) correct when they said that.
Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit over their remaining careers, even when taking the new tech into account.
romarioj2h
AI is a tool like any other, and it can be used well or poorly, just like any other tool. It's important to know its limits. Being a tool, it must be studied for proper use.
There have been quite a few skeptical blog posts about LLMs recently. Some say they won't use them for coding, others for generating creative ideas, and others won't use them for editing and publishing. However, the silent issue all these posts have in common is that resistance is futile.
To be fair, I also don't like using Copilot when working on code. In many cases it turns into a weird experience where the agent generates the next line(s) and I basically become a discriminator, judging whether the thing really understands my problem and solution. To be honest, it's boring, even if it might eventually make me turn in code faster.
With that said, I cannot ignore that LLMs are happening, and this is the future. The models keep improving but more importantly, the ecosystem keeps improving with things like MCP and better defined context for LLM tools.
We might be looking at a somewhat grim prospect. But like it or not, this is the future. Adapt and survive.