I know you didn't write this
112 comments
· December 22, 2025 · messe
twothamendment
Another copy/paste reason - I can't count the number of times I've written up something for work on my own Google account by mistake, then pasted it into a new doc on the work account so I could share it.
QuercusMax
You really should use separate browser profiles...
yjftsjthsd-h
Or separate machines. It's not impossible to maintain sufficient separation in software, but it's a lot easier to skip the whole mess.
el_benhameen
Yep. I do this because I explicitly do not want a third party to see my thought process. If I wanted the reader to see my edits and second thoughts, I would have included them in the final document.
like_any_other
> my thought process
Don't forget about typing patterns, which can be used to deanonymize you across different platforms (anywhere you type into a webpage that runs JavaScript):
https://www.bleepingcomputer.com/forums/t/759050/improve-ink...
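For a concrete sense of what that kind of fingerprint looks like, here's a minimal sketch (the event format and numbers are hypothetical, and the actual capture would be done by page JavaScript; this just shows the timing features):

    # Minimal sketch of keystroke-dynamics profiling. Assumes the page's
    # JavaScript has already logged (key, down_ms, up_ms) tuples; this
    # event format is hypothetical.
    from statistics import mean

    Events = list[tuple[str, float, float]]

    def features(events: Events) -> dict[str, float]:
        dwells = [up - down for _, down, up in events]               # how long each key is held
        flights = [b[1] - a[2] for a, b in zip(events, events[1:])]  # gap between one key's release and the next press
        return {"mean_dwell_ms": mean(dwells), "mean_flight_ms": mean(flights)}

    def distance(a: dict[str, float], b: dict[str, float]) -> float:
        # Crude profile comparison; real systems keep per-digraph timings, not just means.
        return sum(abs(a[k] - b[k]) for k in a)

    profile = features([("h", 0, 95), ("i", 180, 260), (" ", 400, 470)])
    # A site can store `profile` and match future typing sessions against it,
    # even if you show up under a different account name.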
GaryBluto
It's bizarre to me that this didn't occur even slightly to the post author.
NitpickLawyer
As with many other things (em dashes, emojis, bullet lists, it's-not-x-it's-y constructs, triple adjectives, etc) seeing any one of them isn't a tell. Seeing all of them, or many of them in a single piece of content, is probably the tell.
When you use these tools you get a knack for what they do in "vanilla" situations. If you're doing a quick prompt, no guidance, no context and no specifics, you'll get a type of answer that checks many of the "smells" above. Getting the same over and over again gets you to a point where you can "spot" this pretty effectively.
pessimizer
The author did not do this. The author thought it was wonderful, read the entire thing, then on a lark (they "twigged" it) checked out the edit history. They took the lack of it as instant confirmation ("So it’s definitely AI.")
The rest of the blog is just random subjective morality wank with implications of larger implications, constructed by borrowing the central points of a series of popular articles in their entirety and adding recently popular clichés ("why should I bother reading it if you couldn't bother to write it?")
No other explanations about why this was a bad document, or this particular event at all, but lots of self-debate about how we should detect, deal with, and feel about bad documents. All documents written by LLM are assumed to be bad, and no discussion is attempted about degrees of LLM assistance.
If I used AI to write some long detailed plan, I'd end up going back and forth with it and having it remove, rewrite, rethink, and refactor multiple times. It would have an edit history, because I'd have to hold on to old drafts in case my suggested improvements turned out not to be improvements.
The weirdest thing about the article is that it's about the burden of "verification," but it thinks that what people should be verifying is that LLMs had no part in what they've received. The discussion I've had about "verification" when it comes to LLMs is the verification that the content is not buggy garbage filled with inhuman mistakes. I don't care if it's LLM-created or assisted, other than a lot of people aren't reading and debugging their LLM code, and LLMs are dumb. I'm not hunting for em-dashes.
-----
edit: my 2¢; if you use LLMs to write something, you basically found it. If you send it to me, I want to read your review of it i.e. where you think it might have problems and why you think it would help me. I also want to hear about your process for determining those things.
People are confusing problems with low-effort contributors with problems with LLMs. The problem with low-effort contributors is that what they did with the LLM was low-effort and isn't saving you any work. You can also spend 5 minutes with the LLM. If you get some good LLM output that you think is worth showing to me, and you think it would take significant effort for me to get it myself, give me the prompts. That's the work you did, and there's nothing wrong with being proud of it.
jandrese
Or the tell that the guy who usually writes fairly succinctly suddenly dumps five thousand words with all of the details that most people wouldn't bother to write down.
It would be interesting to see the history where the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document. Using AI isn't so much the problem as trusting it blindly.
plorkyeran
Dumping the entire file into Google Docs and then applying edits and corrections top to bottom is exactly my normal workflow. I do my writing in vim, paste it into Google Docs, and then do a final editing pass while fixing the formatting.
like_any_other
> the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document
This also happens if one first writes in an editor without spellchecking, then pastes into the Google Doc (or HN text box) that does have spellchecking.
Lerc
I have seen a number of write ups where I think the only logical explanation is that they are not conveying what literally happened but spinning narrative to express their point.
There was an article the other day where the writer said something along the lines of it suddenly occurring to them that others might read content they had access to. They described themselves as a security researcher. I couldn't imagine that merely occurring to a security researcher; I would think it would be continually present in their conception of what data is. I am not a security researcher, and it's certainly something I'm fairly constantly aware of.
Similarly I'm not convinced the "shouldn't this plan be better" question is in good faith either. Perhaps it just betrays a fundamental misunderstanding of the operation being performed by a model, but my intuition is that they never expected it to be very good and are feigning surprise that it is not.
pgwhalen
It probably did, but they didn't feel the need to fully explain why they were confident it was AI generated, since that's not the point of the article.
zephen
> This isn't always a great indicator.
Right. Certainly not dispositive.
> use Vim and then copy/paste the completed document into it.
But he did mention tables. You'd think if they weren't just ASCII art, there'd be _some_ Google Docs history about fixing them up.
Izkata
Different-sized headers too.
clickety_clack
I also interact with Google Docs as little as possible. I draft in Notes or Obsidian and copy the text in. I just hate the platform.
el_benhameen
Oh, another fun one: I once got an offer letter via Docs. The edit history included the original paste from another candidate’s offer letter, including their name and salary. Useful for benchmarking!
superultra
I don’t need everyone seeing the dirty laundry of my first draft and edits. I too work in a working doc, and when it's complete I drop it into the final Google Doc all at once.
jchw
On another similar but different note, I don't think I've ever uploaded any code written by LLMs to GitHub, but I do sometimes upload fully complete projects under my "initial commit". Some people may legitimately just hide the edit history on purpose just because they don't want to "show their work". It's not really a particularly good habit, but I think a lot of us can relate.
LanceH
A legit reason to hide your edit history is you might not remember what was in there. Say you have a moment of frustration and type out "this is an absolute garbage assignment by a braindead professor". Or you jot a quick note from the doctor because it happens to be open.
The simple fact is that the reader has no business reading the edit history, and the ability to make this happen should probably be far more prominent in document applications like Word or Google Docs.
karaterobot
I'm sometimes asked to produce meaningless 30-page documents that nobody ever reads. I mean literally nobody, since I can see the history of who has accessed it. Me and a proof-reader, and occasionally someone will open it up to check that it exists. But nobody reads them, let alone reads them closely. Not the distant funder who added it as a line-item requirement to their grant (their job is adding line items to grants, not reading documents), nor the actual people involved in the project, who don't have time to read a meaningless document, and don't need to. It's of use to no one, it's just something that must be done because we live in a stupid world.
I've started having AI write those documents. Each one used to take me a full week to produce, now it's maybe one day, including editing. I don't feel bad about it. I'm ecstatic about it, actually; this shouldn't be part of my job, so reducing its footprint in my life is a blessing. Someday, someone will realize that such documents do not need to exist in the first place, but that's not the world we live in right now, and I can't change it. I'm just glad AI exists for this kind of pointless yeoman's work.
zephen
It's like burning fuel to till the soil so you can plant corn to make ethanol.
Almost an inverse Kafka universe; there are tools that can empower you to work the system in such a way that the effects of the externalities are very diffuse.
Still not good, but better than a typical Catch-22.
isodev
Once, I had a very frustrating Slack chat with a fellow developer. We were discussing edge cases for a new feature, and the experience from my perspective was that for each of my messages I’d get an “in case of … how about …” style reply. The topic was focused on iOS vs. Android app lifecycle. Every now and then my colleague would suggest APIs or events that simply don’t exist.
This was before vibe coding, around the days of GPT 3.5. At the time I just thought it was a challenging topic and my colleague was probably preoccupied with other things so we parked the talk.
A few weeks later, while exploring ways to use GPT for technical tasks, I suddenly remembered that Slack chat and realised the person had been copy-pasting my messages to GPT and back. I really felt bad at that moment, like… how can you do this to someone…? It’s not bad to try tools to find information or whatever, but not disclosing that you’re effectively replacing your agency with that of a bot is just very suboptimal and probably disrespectful.
teaearlgraycold
Anyone doing this should be fired, both because of the lack of trust they bring to the team and because they’re just making themselves a middle man to an LLM. Why not cut out the middle man?
zephen
People who make things don't make any money.
People who claim that they are disrupting with disintermediation, but actually simply replace the old intermediary with their own?
Those people get filthy rich.
People who _should_ be making things but are trying this intermediation technique themselves will most likely find that it's like other forms of lying. Go big or go home.
thatjoeoverthr
“Any time saved by (their) AI prompting gets consumed by verification overhead, …”
This
When I receive a PR, of course it’s natural an AI is involved.
The mortal sin is the rubber stamp.
If they haven’t read their own PR, I only have so many warnings in me. And yes, it is highly visible.
a1j9o94
I know I'm an outlier on HN, but I really don't care if AI was used to write something I'm reading. I just care whether or not the ideas are good and clear. And if we're talking about work output 99% of what people were putting out before AI wasn't particularly good. And in my genuine experience AI's output is better than things people I worked with would spend hours and days on.
I feel like more time is wasted trying to catch your coworkers using AI vs just engaging with the plan. If it's a bad plan, say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded it to a full page with a detailed plan.
skwirl
>But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded it to a full page with a detailed plan.
The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."
mrisoli
I feel the same way. If all you're doing is feeding stuff into AI without doing any actual work yourself, just include the prompt and workflow you used to get the AI to spit the content out; it might be useful for others learning to use these LLMs, and it shows your train of thought.
I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting: it was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; he had barely edited the document to his own preferences. The actual meeting was maybe the most awkward I've ever attended, as we were expected to weigh in on the options presented but no one, not even the author, had opinions on the whole thing. It was just too much of an AI document to even process.
poemxo
How do you know the coworker didn't bully the LLM for 20 minutes to get the desired output? It isn't often trivial to one-shot a task unless it's very basic and you don't care about details.
Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.
dj_mc_merlin
If ChatGPT can make a good plan for you from 5 bullet points, why was there a ticket for making a plan in the first place? If it makes a bad plan then the coworker submitted a bad plan and there's already avenues for when coworkers do bad work.
a1j9o94
Honestly, if you have a working relationship and communication norms where that's expected, I agree: just send the 5 bullets.
In most of my work contexts, people want more formal documents with clean headings and titles and detailed risks, even if they're the same risks we've put on every project.
meowface
Ever since some non-native-English-speaking people within my company started using LLMs, I've found it much easier to interact and communicate with them in Jira tickets. The LLM conveys what they intend to say more clearly and comprehensively. It's obviously an LLM that's writing but I'm overall more productive and satisfied by talking to the LLM.
If it's fiction writing or otherwise an attempt at somewhat artful prose, having an LLM write for you isn't cool (both due to stolen valor and the lame, trite style all current LLMs output), but for relatively low-stakes white collar job tasks I think it's often fine or even an upgrade. Definitely not always, and even when it's "fine" the slopstyle can be grating, but overall it's not that bad. As the LLMs get smarter it'll be less and less of an issue.
amarant
Agreed! I've reached the conclusion that a lot of people have completely misunderstood why we work.
It's all about the utility provided. That's the only thing that matters in the end.
Some people seem to think work is an exchange of suffering for money, and omg some colleagues are not suffering as much as they're supposed to!
The plan (or any other document) has to be judged on its own merits. Always. It doesn't matter how it was written. It really doesn't.
Does that mean AI usage can never be problematic? Of course not! If a colleague feeds their tasks to an LLM and never does anything to verify quality, and frequently submits poor-quality documents for colleagues to verify and correct, that's obviously bad. But think about it: a colleague who submits poor-quality work is problematic regardless of whether they wrote it themselves or had an AI do it.
A good document is a good document, and a bad one is a bad one. It doesn't matter if it was written using vim, Emacs, or Gemini 3.
mystifyingpoi
> I just care whether or not the ideas are good and clear
That's the thing. It actually really matters whether the ideas presented are coming from a coworker or from an LLM.
I've seen way too many scenarios where I ask a coworker whether we should do X or Y, and all I get is a useless wall of spewed text, with a complete disregard for the project and the circumstances at hand. I need YOUR input, from YOUR head, right now. If I could ask Copilot I'd do that myself, thanks.
a1j9o94
I would argue that's just your coworker giving you a bad answer. If you prompt a chatbot with the right business context, look at what it spits out, and layer in your judgement before you hit send, then it's fine if the AI typed it out.
If they answer your question with irrelevant context, then that's the problem, not that it was AI
jvanderbot
If I discover you fed me AI output, directly from AI, it really makes me wonder what you are doing here. What did you add to this equation when I could have done it myself?
At least a "Generated by AI, reviewed and edited by xyz" tag would be some indicator of effort and accountability.
It may not be wrong to use AI to generate things whole cloth, but it definitely sidesteps something important and calls into question the "prompter's" contributions to the whole thing.
GMoromisato
Before AI, if someone submitted a well-formatted, well-structured document, we could assume they spent a lot of time on it and probably got the substance right. It's like the document is a proof-of-work that means I can probably trust the results.
Maybe we need a different document structure--something that has verification/justification built in.
I'd like to see a conclusion up front ("We should invest $x billion on a new factory in Malaysia") followed by an interrogation dialogue with all the obvious questions answered: "Why Malaysia and not Indonesia?", "Why $x and not $y billion?", etc.
At that point, maybe I don't care if the whole thing was produced by AI. As long as I have the justification in front of me, I'm happy. And this format makes it easy to see what's missing. If there's a question I would have asked that's not in the document, then it's not ready.
axus
Why can't the plan be judged on its merits? Rigorous verification of the idea is a good thing that should happen anyways. The main potential problem I see is transmission of privileged information to a third party.
I assume they are working at a business to make money, not a school or a writing competition.
andy99
Because AI can generate meritless works far faster than anyone can judge their merits. Asking someone to read your AI thing is basically asking someone to do the work for you. If you respect your colleagues' time, you should be sharing your best version of inputs, not raw material. Not only that, you should have thought about it and be able to defend it. If you throw some AI thing over the fence, you haven't thought about it either, so why would you expect your colleague to?
I’d add that long-form AI output is really bad and basically unsuitable for anything.
Something like “I got GPT to make a few bullet points to structure the conversation” is probably acceptable in some cases if it’s short. The worst I can imagine is giving someone a “deep research” article to read as if that’s different from sending them to google.
axus
Yes, I assumed that the person who "put the plan together" did their own due diligence of reviewing it before emailing, but maybe that is too charitable for an "AI plagiarist".
If someone sends me incomplete work I will judge them for that; the history of the working relationship matters, and I didn't see it in the blog post.
tediousgraffit1
This is a trust issue. If someone I trust hands me a big PR, I focus on the important details. If someone I don't trust hands me a big PR, I just reject it and ask them to break the problem down further. I don't waste my time on this kind of thing, regardless of whether it was hand-written or generated.
zephen
The unstated elephant in the room is that you can't possibly know how much thought the originator has given to this.
You can't know if it has been reviewed and checked for minimal sanity, or just chucked over the fence.
So you have to fully vet it.
And, if you have to fully vet it, then what value has the originator added? Might as well eliminate their position.
dj_mc_merlin
> The unstated elephant in the room is that you can't possibly know how much thought the originator has given to this.
You can just ask them if they reviewed it in detail.
xeckr
>Might as well eliminate their position.
It's where we're headed.
ben_w
> Why can't the plan be judged on its merits? Rigorous verification of the idea is a good thing that should happen anyways.
Situational.
I don't know this blogger or what the plan involved; but for the sake of argument, let's say it was a business plan, and let's say in isolation it's really good, 99.9% chance of success with 10x returns kind of good.
Everyone in whatever problem space this is probably just got the same quality of advice from their own LLM prompting. That 99.9% is no longer "in isolation"; it becomes a correlated failure, where all the other people doing the same thing as you make it less viable.
That's a good reason not to use a public tool, even when the output is good.
Correlated risk disguised as uncorrelated risk was a big part of the global financial crisis in the late 00s.
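To make that concrete, here's a toy Monte Carlo sketch (all numbers hypothetical, not from the blogger's plan) where each plan looks individually safe but a shared shock makes failures cluster:

    import random

    # Toy model: 1000 individually "99% safe" plans.
    # Independent: each plan fails on its own 1% coin flip.
    # Correlated: a shared shock (2% chance per trial) also sinks half
    # of them at once, because they all came from the same playbook.
    TRIALS, N, P_FAIL, P_SHOCK = 5_000, 1_000, 0.01, 0.02

    def failures(correlated: bool) -> int:
        shocked = correlated and random.random() < P_SHOCK
        p = 0.5 if shocked else P_FAIL
        return sum(random.random() < p for _ in range(N))

    for label, corr in (("independent", False), ("correlated", True)):
        # How often do more than 5% of the plans fail in the same trial?
        mass = sum(failures(corr) > N * 0.05 for _ in range(TRIALS))
        print(f"{label}: P(mass failure) ~ {mass / TRIALS:.4f}")
    # Independent: essentially never. Correlated: roughly P_SHOCK,
    # even though most trials look as calm as the independent case.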
recursive
The problem comes from the asymmetry between the effort that went into generating and judging. You can have one person spinning out documents that can keep a whole team busy and dragging everyone down.
Along the same lines as "A lie travels around the globe while the truth is putting on its shoes."
dj_mc_merlin
If the documents they're putting out are bad, then they're doing bad work and that eventually comes with consequences from your coworkers and superiors. If they're doing good work, then great! Who cares if an LLM wrote most of it and they just edit it? That's not super different than the current relationship between senior and line workers.
unyttigfjelltol
So many technologists offended at the use of technology. Next they’ll insist on pen-on-paper for truly authentic work product, and after that, 3 days’ wilderness meditation on it, to prove you really internalized it.
Look, it’s now like email in 2004. You see spam; you see that spam has found email. It doesn’t mean you refuse to interact with anyone by email, or write Geocities posts mocking email users. You just acknowledge that the technology (email) can be used for efficiency and results, and that it can also be misused as a giant time-waster.
The author of the article here is basically saying “technology was used = work product is trash”. The “spam” these folks are seeing must be horrible to evoke this kind of condemnatory response.
acedTrex
Because judging something on its merits is intrinsically tied to judging the amount of effort that went into it.
bakugo
> Why can't the plan be judged on its merits?
Because of the difference in effort involved in generating it vs effort required to judge it.
Why are you entitled to "your" work being judged on its merits by a real human, when the work itself was not created by you, or any human? If you couldn't be bothered to write it, why should someone else be bothered to read it?
ArcHound
> If you know in your heart of hearts that you didn’t put the work in, you’re undermining the social contract between you and your reader.
There's been a lot of social-contract undermining lately. Does anyone know of anything that can be done to try and revert it? A social contract of "F you, I got mine" isn't very appealing to me, but that seems to be the current approach.
jdashg
We literally have to be willing to get taken advantage of sometimes, and we have to come down hard on the "don't hate the player, hate the game" f-you-got-mine assholes.
It is not weakness, but strength, to make yourself (reasonably!) vulnerable to being taken advantage of. It is not strength, but weakness, to let bad behavior happen around you. You don't have to do everything, but you have to do something, or nothing changes.
We gotta spend less time explaining away (and tacitly excusing) bad behavior as unfortunate game theory, and more time coming down hard on people who violate trust.
Ante trust gladly, but come down hard on defectors.
ArcHound
Consider this situation: security review before a project go-live.
I have never seen this team before and I'll "never" see this team after the fact. They might be contracted externally, they might leave before the second review.
Let's say I can suss out people doing this. I don't have the option of giving them the benefit of the doubt, and they have the motivation to trick me.
I guess I've answered my own question a bit: such an environment isn't built to foster trust at all.
zephen
Upvoted because this is true, but we need to establish coping mechanisms for this.
For example:
"Sorry, yes, I know the report is due tomorrow, but I don't have time to review it again because I wasted 2 hours on the first version."
or
"I found these three problems on the first page and stopped reading."
What else?
ecshafer
Let's steel-man this:
1. If the output is solid, does it matter?
2. The author could simply have done the research, created the plan, and then given an LLM the bullet-point list of research and told it to "make this into a presentable plan". The author does the heavy work and the actual creative work, and outsources the manual formatting to the LLM. My wife speaks English as a second language; she much prefers telling an LLM what she is trying to say and having it generate a business-friendly email than writing it herself and letting in grammatical mistakes.
3. If I were to write a paper in my favorite text editor and then put it through pandoc to generate a Word doc, it would do the same thing.
phyzome
How can you tell the output is solid?
The creation of a plan also implies that some work has gone into making sure it's a good one. That's one human (the author) asserting that it's solid. But now you're not even sure if that one vote exists.
Sharlin
When it comes to LLMs, the only thing I hate more than the "I don't know, the AI wrote it" people is the "I wrote this" crowd. No you didn't, you asked someone else to write it. If you couldn't claim copyright for it in an IP court, you did not write it. Period.
zdragnar
Has this actually been tried? Plenty of people have released AI-generated (in part or nearly in whole) media as their own, especially in music and fiction.
Personally, I'd love to see most of this stuff disappear from services like Spotify and Amazon that advertise it on par with human-generated media (though I'll also admit to having a soft spot for the soul-style AI covers of 50 Cent and others).
zephen
> Has this actually been tried?
Yes, Thaler v. Perlmutter.
I'm pretty sure, even though that's recent, that it fully comports with decades-old law on patents as well.
I can't find an older case, but Thaler v. Vidal is a recent patent case.
lbrito
>Regardless of their intent I realised something subtle had happened. Any time saved by (their) AI prompting gets consumed by verification overhead, the work just gets passed along to someone else – in this case me.
This is _exactly_ how I feel. Any time saved by precooking a "plan" (typically half-baked ideas) with AI isn't really time saved; it's a transfer of work from the planner to whoever is going to implement the plan.
> Suspicions aroused, I clicked on the “Document History” button in the top right and saw a clean history of empty document – and then wham – fully-formed plan, as if it had just spilled out of someone’s brain, straight onto the screen, ready to share.
This isn't always a great indicator.
I can't stand Google Docs as an interface to write with, so I use Vim and then copy/paste the completed document into it.