
It's insulting to read AI-generated blog posts

jihadjihad

It's similarly insulting to read your AI-generated pull request. If I see another "dart-on-target" emoji...

You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?

mikepurvis

I would never put up a copilot PR for colleague review without fully reviewing it myself first. But once that’s done, why not?

goostavos

It destroys the value of code review and wastes the reviewer's time.

Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.

ok_dad

> Code review is one of the places where experience is transferred.

Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.

CjHuber

I mean I totally get what you are saying about pull requests that are secretly AI generated.

But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.

So if someone has made the effort and verified the result like it's their own code, and if it actually works like they intended, what's wrong with sending a PR?

I mean if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a black box, this feedback is still as valuable as before, because at least for me, if I had known about the better way of doing something I would have iterated further and implemented it or had it implemented.

So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.

irl_zebra

I don't think this is what they were saying.

mmcromp

You're not "reviewing" the AI's slop code. If you're using it for generation, use it as a starting point and fix it up to the proper code quality.

ab_io

100%. My team started using graphite.dev, which provides AI generated PR descriptions that are so bloated with useless content that I've learned to just ignore them. The issue is they are doing a kind of reverse inference from the code changes to a human-readable description, which doesn't actually capture the intent behind the changes.

nbardy

You know you can AI review the PR too, don't be such a curmudgeon. I have PR's at work I and coworkers fully AI generated and fully AI review. And

latexr

This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?

This is like reviewing your own PRs, it completely defeats the purpose.

And no, using different models doesn’t fix the issue. That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.

jvanderbot

I get your point, but reviewing your own PRs is a very good idea.

As insulting as it is to submit an AI-generated PR without any effort at review while expecting a human to look it over, it is nearly as insulting to not just open the view the reviewer will have and take a look. I do this all the time and very often discover little things that I didn't see while tunneled into the code itself.

darrenf

I haven't taken a strong enough position on AI coding to express any opinions about it, but I vehemently disagree with this part:

> This is like reviewing your own PRs, it completely defeats the purpose.

I've been the first reviewer for all PRs I've raised, before notifying any other reviewers, for so many years that I couldn't even tell you when I started doing it. Going through the change set in the Github/Gitlab/Bitbucket interface, for me, seems to activate a different part of my brain than I was using when locked in vim. I'm quick to spot typos, bugs, flawed assumptions, edge cases, missing tests, to add comments to pre-empt questions ... you name it. The "reading code" and "writing code" parts of my brain often feel disconnected!

Obviously I don't approve my own PRs. But I always, always review them. Hell, I've also long recommended the practice to those around me too for the same reasons.

px43

> If the AI PR were any good, it wouldn’t need review.

So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?

Coding agents are basically interns. They make stupid mistakes, but even if they're doing things 95% correctly, then they're still adding a ton of value to the dev process.

Human reviewers can use AI tools to quickly sniff out common mistakes and recommend corrections. This is fine. Good even.

duskwuff

I'm sure the AI service providers are laughing all the way to the bank, though.

charcircuit

Your assumptions are wrong. AI models do not always have equal generation and discrimination abilities. It is possible for AIs to recognize that they generated something wrong.

symbogra

Maybe he's paying for a higher tier than his colleague.

enraged_camel

>> This makes no sense, and it’s absurd anyone thinks it does.

It's a joke.

falcor84

> That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.

That is literally how civilization works.

gdulli

> You know you can AI review the PR too, don't be such a curmudgeon. I have PR's at work I and coworkers fully AI generated and fully AI review. And

Waiting for the rest of the comment to load in order to figure out if it's sincere or parody.

kacesensitive

He must have dropped connection while ChatGPT was generating his HN comment

thatjoeoverthr

His agent hit what we in the biz call “max tokens”

latexr

Considering their profile, I’d say it’s probably sincere.


KalMann

If an AI can do the review, then why would you put it up for others to review? Just use the AI to review it yourself before creating a PR.

dickersnoodle

One Furby codes and a second one reviews...

shermantanktop

Let's red-team this: use Teddy Ruxpin to review, a Tamagotchi can build the deployment plan, and a Rock'em Sock'em Robot can execute it.

athrowaway3z

If your team is stuck at this stage, you need to wake up and re-evaluate.

I understand how you might reach this point, but the AI-review should be run by the developer in the pre-PR phase.

i80and

Please be doing a bit

photonthug

> fully AI generated and fully AI review

This reminds me of an awesome bit by Žižek where he describes an ultra-modern approach to dating. She brings the vibrator, he brings the synthetic sleeve, and after all the buzzing begins and the simulacra are getting on well, the humans sigh in relief. Now that this is out of the way they can just have a tea and a chat.

It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask, what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish

the_af

> It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask, what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish

I've been thinking this for a while, despairing, and amazed that not everyone is worried/surprised about this like me.

Who are we building all this stuff for, exactly?

Some technophiles are arguing this will free us to... do what exactly? Art, work, leisure, sex, analysis, argument, etc will be done for us. So we can do what exactly? Go extinct?

"With AI I can finally write the book I always wanted, but lacked the time and talent to write!". Ok, and who will read it? Everybody will be busy AI-writing other books in their favorite fantasy world, tailored specifically to them, and it's not like a human wrote it anyway so nobody's feelings should be hurt if nobody reads your stuff.

footy

did AI write this comment?

kacesensitive

You’re absolutely right! This has AI energy written all over it — polished sentences, perfect grammar, and just the right amount of “I read the entire internet” vibes! But hey, at least it’s trying to sound friendly, right?

sesm

To be fair, the same problem existed before AI tools, with people spitting out a ton of changes without explaining what problem they are trying to solve or the idea behind the solution. AI tools just made it worse.

o11c

There is one way in which AI has made it easier: instead of maintainers trying to figure out how to talk someone into being a productive contributor, now "just reach for the banhammer" is a reasonable response.

zdragnar

> AI tools just made it worse.

That's why it isn't necessary to add the "to be fair" comment I see crop up every time someone complains about the low quality of AI.

Dealing with low effort people is bad enough without encouraging more people to be the same. We don't need tools to make life worse.

kcatskcolbdi

This comment seems to not appreciate how changing the scope of impact is itself a gigantic problem (and the one that needs to be immediately solved for).

It's as if someone created a device that made cancer airborne and contagious and you come in to say "to be fair, cancer existed before this device, the device just made it way worse". Yes? And? Do you have a solution to solving the cancer? Then pointing it out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.

reg_dunlop

Now an AI-generated PR summary I fully support. That's a use of the tool I find to be very helpful. Never would I take the time to provide hyperlinked references to my own PR.

latexr

> You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?

I don’t think they are (telling you that). The person who sends you an AI slop PR would be just as happy (probably even happier) if you turned off your brain and just merged it without any critical thinking.

r0me1

On the other hand, I spend less time adapting to every developer's writing style, and I find the AI-structured output preferable.


alyxya

I personally don’t think I care if a blog post is AI generated or not. The only thing that matters to me is the content. I use ChatGPT to learn about a variety of different things, so if someone came up with an interesting set of prompts and follow ups and shared a summary of the research ChatGPT did, it could be meaningful content to me.

> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!

It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.

thatjoeoverthr

Even letting the LLM “clean it up” puts its voice on your text. In general, you don’t want its voice. The associations are LinkedIn, warnings from HR and affiliate marketing hustles. It’s the modern equivalent of “talking like a used car salesman”. Not everyone will catch it but do think twice.

ryanmerket

It's really not hard to say "make it in my voice" especially if it's an LLM with extensive memory of your writing.

chipotle_coyote

You can say anything to an LLM, but it’s not going to actually write in your voice. When I was writing a very long blog post about “creative writing” from AIs, I researched Sudowrite briefly, which purports to be able to do exactly this; not only could it not write convincingly in my voice (and the novel I gave it has a pretty strong narrative voice), following Sudowrite’s own tutorial in which they have you get their app to write a few paragraphs in Dan Brown’s voice demonstrated it could not convincingly do that.

I don’t think having a ML-backed proofreading system is an intrinsically bad idea; the oft-maligned “Apple Intelligence” suite has a proofreading function which is actually pretty good (although it has a UI so abysmal it’s virtually useless in most circumstances). But unless you truly, deeply believe your own writing isn’t as good as a precocious eighth-grader trying to impress their teacher with a book report, don’t ask an LLM to rewrite your stuff.

px43

Exactly. It's so wild to me when people hate on generated text because it sounds like something they don't like, when they could easily tell it to set the tone to any other tone that has ever appeared in text.

latexr

> It would be more human to handwrite your blog post instead.

“Blog” stands for “web log”. If it’s on the web, it’s digital, there was never a period when blogs were hand written.

> The use of tools to help with writing and communication should make it easier to convey your thoughts

If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.

athrowaway3z

> If you’re using an LLM to spit out text for you, they’re not your thoughts

The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in. Not completely independent, but to claim thoughts are completely dependent on text (thus also the language) is nonsense.

> Might as well just give people your prompt.

What would be the value of seeing a dozen diffs? By the same logic, should we also include every draft?

ChrisMarshallNY

> there was never a period when blogs were hand written.

I’ve seen exactly that. In one case, it was JPEG scans of handwriting, but most of the time it’s a cursive font (which may not really count as “handwritten”).

I can’t remember which famous author it was, that always submitted their manuscripts as cursive writing on yellow legal pads.

Must have been thrilling to edit.

latexr

Isolated instances do not a period define. We can always find some example of someone who did something, but the point is it didn’t start like that.

For example, there was never a period when movies were made by creating frames as oil paintings and photographing them. A couple of movies were made like that, but that was never the norm or a necessity or the intended process.

cerved

> If it’s on the web, it’s digital, there was never a period when blogs were hand written.

This is just pedantic nonsense

jancsika

> If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.

It's like listening to Bach's Prelude in C from WTCI where he just came up with a humdrum chord progression and uses the exact same melodic pattern for each chord, for the entire piece. Thanks, but I can write a trivial for loop in C if I ever want that. What a loser!

Edit: Lest HN thinks I'm cherry picking-- look at how many times Bach repeats the exact same harmony/melody, just shifting up or down by a step. A significant chunk of his output is copypasta. So if you like burritos filled with lettuce and LLM-generated blogs, by all means downvote me to oblivion while you jam out to "Robo-Bach"

Aeolun

Except the prompt is a lot harder and less pleasant to read?

Like, I’m totally on board with rejecting slop, but not all content that AI was involved in is slop, and it’s kind of frustrating so many people see things so black and white.

caconym_

> It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.

Whether I hand write a blog post or type it into a computer, I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine arise from hand writing vs. typing.

> your thoughts

No, they aren't! Not if you had AI write the post for you. That's the problem!

signorovitch

I tend to agree, though not in all cases. If I’m reading because I want to learn something, I don’t care how the material was generated. As long as it’s correct and intuitive, and LLMs have gotten pretty good at that, it’s valuable to me. It’s always fun when a human takes the time to make something educational and creative, or has a pleasant style, or a sense of humor; but I’m not reading the blog post for that.

What does bother me is when clearly AI-generated blog posts (perhaps unintentionally) attempt to mask their artificial nature through superfluous jokes or unnaturally lighthearted tone. It often obscures content and makes the reading experience inefficient, without the grace of a human writer that could make it worth it.

However, if I’m reading a non-technical blog, I am reading because I want something human. I want to enjoy a work a real person sank their time and labor into. The less touched by machines, the better.

> It would be more human to handwrite your blog post instead.

And I would totally read handwritten blog posts!

paulpauper

AI- assisted or generated content tends to have an annoying wordiness or bloat to it, but only astute readers will pick up on it.

But it can make for tiresome reading. A 2000-word post could have been compressed to 700 had a human editor pruned it.

korse

:Edit, not anymore kek

Somehow this is currently the top comment. Why?

Most non-quantitative content has value due to a foundation of distinct lived experience. Averages of the lived experience of billions just don't hit the same, and are less likely to be meaningful to me (a distinct human). Thus, I want to hear your personal thoughts, sans direct algorithmic intermediary.

B56b

Even if someone COULD write a great post with AI, I think the author is right in assuming that it's less likely than a handwritten one. People seem to use AI to avoid thinking hard about a topic. Otherwise, the actual writing part wouldn't be so difficult.

This is similar to the common objection for AI-coding that the hard part is done before the actual writing. Code generation was never a significant bottleneck in most cases.

throw35546

The best yarn is spun from mouth to ear over an open flame. What is this handwriting?

falcor84

It's what is used to feed the flames.

c4wrd

I think the author’s point is that by exposing oneself to feedback, you are on the receiving end of the growth in the case of error. If you hand off all of your tasks to ChatGPT to solve, your brain will not grow and you will not learn.

chemotaxis

I don't like binary takes on this. I think the best question to ask is whether you own the output of your editing process. Why does this article exist? Does it represent your unique perspective? Is this you at your best, trying to share your insights with the world?

If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.

On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.

Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.

dewey

> No, don't use it to fix your grammar, or for translations

I think that's the best use case, and it's not AI-specific: spell-checkers and translation integrations have existed forever; now they are just better.

Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?

j4yav

Because it doesn’t just fix your grammar, it makes you sound suspiciously like spam.

whatsakandr

I have a prompt to make it not rewrite, but just point out "hey you could rephrase this better." I still keep my tone, but the clanker can identify thoughts that are incomplete. Stuff that spell checkers can't do.

thw_9a83c

> Because it doesn’t just fix your grammar, it makes you sound suspiciously like spam.

This ship sailed a long time ago. We have been exposed to AI-generated text content for a very long time without even realizing it. If you read a little more specialized web news, assume that at least 60% of the content is AI-translated from the original language. Not to mention, it could have been AI-generated in the source language as well. If you read the web in several languages, this becomes shockingly obvious.

orbital-decay

No? If you ask it to proofread your stuff, any competent model just fixes your grammar without adding anything on its own. At least that's my experience. Simply don't ask for anything that involves major rewrites, and of course verify the result.

j4yav

If you can’t communicate effectively in the language how are you evaluating that it doesn’t make you sound like a bot?

ianbicking

It does however work just fine if you ask it for grammar help or whatever, then apply those edits. And for pretty much the rest of the content too: if you have the AI generate feedback, ideas, edits, etc., and then apply them yourself to the text, the result avoids these pitfalls and the author is doing the work that the reader expects and deserves.

dewey

It's a tool and it depends on how you use it. If you tell it to fix your grammar with minimal intervention to the actual structure it will do just that.

kvirani

Usually

portaouflop

I disagree. You can use it to point out grammar mistakes and then fix them yourself without changing the meaning or tone of the subject.

YurgenJurgensen

Paste in passages from Wikipedia featured articles, today’s newspapers, or published novels and it’ll still suggest style changes. And if you know enough to ignore ChatGPT’s suggestions, you didn’t need it in the first place.

cubefox

Yeah. It's "pick your poison". If your English sounds broken, people will think poorly of your text. And if it sounds like LLM speak, they won't like it either. Not much you can do. (In a limited time frame.)

geerlingguy

Lately I have more appreciation for broken English and short, to the point sentences than the 20 paragraph AI bullet point lists with 'proper' formatting.

Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate the use a little more.

j4yav

I would personally much rather drink the “human who doesn’t speak fluently” poison.

yodsanklai

LLMs are pretty good at fixing documents in exactly the way you want. At the very least, you can ask one to fix typos and grammar errors without changing the tone, structure, or content.

boscillator

Yah, it is very strange to conflate using AI as a spell checker with a wholly AI-written article. Being charitable, they meant asking the AI to rewrite your whole post, rather than just using it to suggest comma placement, but as written the article seems to suggest a blog post with grammar errors is more Human™ than one without.

mjr00

> Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?

My wife is ESL. She's asked me to review documents such as her resume, emails, etc. It's immediately obvious to me that it's been run through ChatGPT, and I'm sure it's immediately obvious to whomever she's sending the email. While it's a great tool to suggest alternatives and fix grammar mistakes that Word etc don't catch, using it wholesale to generate text is so obvious, you may as well write "yo unc gimme a job rn fr no cap" and your odds of impressing a recruiter would be about the same. (the latter might actually be better since it helps you stand out.)

Humans are really good at pattern matching, even unconsciously. When ChatGPT first came out people here were freaking out about how human it sounded. Yet by now most people have a strong intuition for what sounds ChatGPT-generated, and if you paste a GPT-generated comment here you'll (rightfully) get downvoted and flagged to oblivion.

So why wouldn't you use it? Because it masks the authenticity in your writing, at a time when authenticity is at a premium.

dewey

Having a tool at your disposal doesn't mean you don't have to learn how to use it. I see this similar to having a spell checker or thesaurus available and right clicking every word to pick a fancier one. It will also make you sound inauthentic and fake.

These types of complaints about LLMs feel like the same ones people probably made about typing a letter on a typewriter vs. writing it by hand: that it loses intimacy and personality.

throwawayffffas

I already found it insulting to read SEO-spam blog posts. The AI involvement is beside the point.

icapybara

If they can’t be bothered to write it, why should I be bothered to read it?

abixb

I'm sure lots of "readers" of such articles fed it to another AI model to summarize it, thereby completely bypassing the usual human experience of writing and then careful (and critical) reading and parsing of the article text. I weep for the future.

Also, reminds me of this cartoon from March 2023. [0]

[0] https://marketoonist.com/2023/03/ai-written-ai-read.html

trthomps

I'm curious if the people who are using AI to summarize articles are the same people who would have actually read more than the headline to begin with. It feels to me like the sort of person who would have read the article and applied critical thinking to it is not going to use an AI summary to bypass that since they won't be satisfied with it.

thw_9a83c

> If they can’t be bothered to write it, why should I be bothered to read it?

Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and to make the necessary changes. Now, let's repeat this process over and over. I wonder what would be the state of such code over time. We are clearly walking this path.

conception

Why would source code be considered the same as a blog post?

thw_9a83c

I didn't say the source code is the same as a blog post. I pointed out that we are going to apply the "I don't bother" approach to the source code as well.

Programming languages were originally invented for humans to write and read. Computers don't need them. They are fine with machine code. If we eliminate humans from the coding process, the code could become something that is not targeted for humans. And machines will be fine with that too.

Ekaros

Why would I bother to run it? Why wouldn't I just have an AI read it and then provide output based on my input?

alxmdev

Many of those who can't be bothered to write what they publish probably can't be bothered to read it themselves, either. Not by humans and certainly not for humans.

bryanlarsen

Because the author has something to say and needs help saying it?

Pre-AI, scientists would publish papers and then journalists would write summaries which were usually misleading and often wrong.

An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist quite likely might do a better job.

kirurik

I agree, I think there is such a thing as AI overuse, but I would rather someone uses AI to form their points more succinctly than for them to write something that I can't understand.

CuriouslyC

Tired meme. If you can't be bothered to think up an original idea, why bother to post?

YurgenJurgensen

2+2 doesn’t suddenly become 5 just because you’re bored of 4.

AlienRobot

Now that I think about it, it's rather ironic that's a quote because you didn't write it.

noir_lord

I just hit the back button as soon as my "this feels like AI" sense tingles.

Now you could argue that I don't know it was AI, that it could just be really mediocre writing. It could indeed, but I hit the back button there as well, so it's a wash either way.

rco8786

There's definitely an uncanny valley with a lot of AI. But also, it's entirely likely that lots of what we're reading is AI generated and we can't tell at all. This post could easily be AI (it's not, but it could be)

Waterluvian

Ah the portcullis to the philosophical topic of, “if you couldn’t tell, does that demonstrate that authenticity doesn’t matter?”

noir_lord

I think it does. We could get a robotic arm to paint in the style of a Dutch master, but it'd not be a Dutch master.

I'd sooner have a ship painting from the little shop in the village with the little old fella who paints them in the shop than a perfect robotic simulacrum of a Rembrandt.

Intention matters. Sometimes it matters less, but I think it matters.

Writing is communication, one of the things we as humans do that makes us unique. Why would I want to reduce that to a machine generating it, or read it when a machine has?

embedding-shape

I do almost the same, but my trigger is "this isn't interesting/fun to read", and I don't really care whether it was written by AI or not. If it's interesting/fun, it's interesting/fun, and if it isn't, it isn't. Many times it's obviously AI, but sometimes, as you said, it could just be bad, and in the end it doesn't really matter: I don't want to continue reading it regardless.

shadowgovt

I do the same, but for blog posts complaining about AI.

At this point, I don't know there's much more to be said on the topic. Lines of contention are drawn, and all that's left is to see what people decide to do.

jquaint

> Do you not enjoy the pride that comes with attaching your name to something you made on your own? It's great!

This is like saying a photographer shouldn't find the sunset they photographed pretty or be proud of the work, because they didn't personally labor to paint the image of it.

A lot more goes into a blog post than the actual act of typing the content out.

Lazy work is always lazy work, but it's possible to make work you are proud of with AI, in the same way you can create work you are proud of with a camera.

rcarmo

I don't get all this complaining, TBH. I have been blogging for over 25 years (20+ on the same site), been using em dashes ever since I switched to a Mac (and because the Markdown parser I use converts double dashes to it, which I quite like when I'm banging out text in vim), and have made it a point of running long-form posts through an LLM asking it to critique my text for readability because I have a tendency for very long sentences/passages.

AI is a tool to help you _finish_ stuff, like a wood sander. It's not something you should use as a hacksaw, or as a hammer. As long as you are writing with your own voice, it's just better autocorrect.

yxhuvud

The problem is that a lot of people use it for a whole lot more than just polish. The LLM voice in a text gets quite jarring very quickly.

curioussquirrel

100% agree. Using it to polish your sentences or fix small grammar/syntax issues is a great use case in my opinion. I specifically ask it not to completely rewrite or change my voice.

It can also double as a peer reviewer and point out potential counterarguments, so you can address them upfront.

cyrialize

I'm reading a blog because I'm interested in the voice a writer has.

If I'm finding that voice boring, I'll stop reading - whether or not AI was used.

The generic AI voice, and by that I mean very little prompting to add any "flavor", is boring.

Of course I've used AI to summarize things and give me information, like when I'm looking for a specific answer.

In the case of blogs though, I'm not always trying to find an "answer", I'm just interested in what you have to say and I'm reading for pleasure.

VladVladikoff

Recently I had to give one of my vendors a dressing down about LLM use in emails. He was sending me these ridiculous emails where the LLM was going off the rails suggesting all sorts of features etc that were exploding the scope of the project. I told him he needs to just send the bullet notes next time instead of pasting those into ChatGPT and pasting the output into an email.

doug_durham

I don't like reading content that has not been generated with care. The use of LLMs is largely orthogonal to that. If a non-native English speaker uses an LLM to craft a response so I can consume it, that's great. As long as there is care, I don't mind the source.