
Anthropic's AI-generated blog dies an early death

anon7000

AI generated web content has got to be one of the most counterproductive things to use AI on.

If I wanted an AI summary of a topic or an answer to a question, a chatbot of my choice could easily provide it. There's no need for yet another piece of blogspam that isn't introducing new information into the world; that content is already available inside the AI model. At some point, we'll get so oversaturated with fake, generated BS that there won't be enough high-quality new information left to feed the models.

jerf

This is the fundamental reason why I am in favor of a ban on simply posting AI-generated content in user forums. It isn't that AI is fundamentally bad per se, and to the extent that it is problematic now, that badness may well be a temporary situation. It's because there's not a lot of utility in you as a human being basically just being an intermediary to what some AI says today. Anyone who wants that can go get it themselves, in an interactive session where they can explore the answer themselves, with the most up-to-date models. It's fundamentally no different than pasting in the top 10 Google results for a search with no further commentary; if you're going to go that route just give a letmegooglethat.com link. It's exactly as helpful, and in its own way kind of carries the same sort of snarkiness with it... "oh, are you too stupid to AI? let me help you with that".

Similarly, I remember there was a lot of frothy startup ideas around using AI to do very similar things. The canonical one I remember is "using AI to generate commit messages". But I don't want your AI commit messages... again, not because AI is just Platonically bad or something, but because if I want an AI summary of your commit, I'd rather do it in two years when I actually need the summary, and then use a 2027 AI to do it rather than a 2025 AI. There's little to no utility to basically caching an AI response and freezing it for me. I don't need help with that.

xorokongo

This only means that the web (websites and Web 2.0 platforms) for public use is becoming redundant, because any type of data that can be posted on the web can now be generated by an LLM. LLMs have only been around for a short while, but the web is already becoming infested with AI spam. Future generations that aren't accustomed to the old pre-AI web will prefer to use AI rather than the web, and LLMs will eventually be able to generate all aspects of it. The web will remain useful for private communication and general data transfer, but not for surfing as we know it today.

Edit to add:

Projects like the Internet Archive will be even more important in the future.

fallinditch

Editorial guidelines at many publications explicitly state that AI can assist with drafts, outlines, and editing, but not with generating final published stories.

AI is widely used for support tasks such as:

- Transcribing interviews
- Research assistance and generating story outlines
- Suggesting headlines, SEO optimization, and copyediting
- Automating routine content like financial reports and sports recaps

This seems like a reasonable approach, but even so I agree with your prediction that people will mostly interact with the web via their AI interface.

chermi

While I largely agree, I don't think it's quite correct to say AI-generated blogs contain no new information, at least not in a practical sense. The output is a function of the LLM and the prompt, and it contains new information if the prompt does. If the prompt/context contains internal information no one outside the company has access to, then a public post generated from it certainly contains information new to the public.

raincole

What we wish for: better search.

What we got: more content polluting search, aka worse search.

_fat_santa

> AI generated web content has got to be one of the most counterproductive things to use AI on.

For something like a blog I would agree, but I've found AI to be fantastic at generating copy for some SaaS websites I run. It's a great "polishing engine" for copy that I write: I'll often write some very sloppy copy that just gets the point across, then feed it to a model to get a more polished version geared to a specific outcome. Usually I'll generate a couple of variants of the copy I fed it, validate them for accuracy, slap them into my CMS, and run an A/B test, then stick with whichever variant best accomplishes the content's goal based on user engagement, click-through, etc.
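The selection step of a workflow like that can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual setup: the `Variant` type, the example copy, and the choice of click-through rate as the metric are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    copy: str
    impressions: int
    clicks: int

    @property
    def ctr(self) -> float:
        # Click-through rate; guard against division by zero.
        return self.clicks / self.impressions if self.impressions else 0.0

def pick_winner(variants: list[Variant]) -> Variant:
    # Keep the variant that best accomplished the goal (here: highest CTR).
    return max(variants, key=lambda v: v.ctr)

# Toy A/B test results for two model-polished variants:
variants = [
    Variant("Ship faster with Foo", impressions=1000, clicks=42),
    Variant("Foo: deploys in seconds", impressions=1000, clicks=57),
]
print(pick_winner(variants).copy)  # → Foo: deploys in seconds
```

In practice the metric would come from the CMS or analytics tool rather than hard-coded counts, and could be any engagement signal, not just CTR.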

fullshark

Anthropic cares about that, every individual content creator does not. Their goal is to win the war for attention, which is now close to zero sum with everyone on the internet and there's only 24 hours in the day.

h1fra

hear me out: seo

echelon

Using AI-generated content to torpedo the web at mass scale could be a tool to get people off of Google and existing social media platforms.

I'm certainly using Google less and less these days, and even niche subreddits are getting an influx of LLM drivel.

There are fantastic uses of AI, but there's an over-abundance of low-effort growth hacking at scale that is saturating existing conduits of signal. I have to wonder if some of this might be done intentionally to poison the well.

paxys

It's fascinating how creative these large AI companies are at finding ways to burn through VC funding. Hire a team of developers/content writers/editors, tune your models, set up a blog and build an entire infrastructure to publish articles to it, market it, and then...shut it all down in a week. And this is a company burning through multiple billions of dollars every quarter just to keep the lights on.

elzbardico

The joys of wealth transfer from the poor and middle-class workers to the asset-owning class via inflation and the Cantillon Effect [1].

1- https://www.adamsmith.org/blog/the-cantillion-effect

stanford_labrat

I've always thought of these VC-fueled expeditions to nowhere as the opposite: wealth transfer from the owning class to the middle class, seeing as a lot of these ventures crash and burn with nothing to show for it.

Except for the founders/early employees who get a modest (sometimes excessive) paycheck.

givemeethekeys

Can VCs get their funding from mutual funds and pension plans?

chimeracoder

> I've always thought of these VC fueled expeditions to nowhere as the opposite. Wealth transfer from the owning class to the middle class seeing as a lot of these ventures crash and burn with nothing to show for it.

That would be the case if VCs were investing their own money, but they're not. They're investing on behalf of their LPs. Who LPs are is generally an extremely closely-guarded secret, but it includes institutional investors, which means middle-class pensions and 401(k)s are wrapped up in these investments as well, just as they were tied up in the 2008 financial crisis.

It's not as clean-cut as it seems.

swyx

it's fascinating how you think being creative is an insult.

an-honest-moose

It's about how they're applying that creativity, not the creativity itself.

bowsamic

What makes you think they think that? If someone says “finding creative ways to murder people” you think they’re saying the problem is the “creative” part?

pscanf

People use AI to write blogs, passing them off as human-written. AI companies use humans to write blogs, passing them off as AI-written. :)

jasonthorsness

Is there an archive anywhere? People can argue to no end based on some whimsical assumptions of what the blog was and why it was taken down, but it really comes down to the content. I have found even o3 cannot write high-quality articles on the topics I want to write about.

jsnider3

We try things, sometimes they don't work.

Powdering7082

Did the reporter reach out to Anthropic for comment on this? They cite a "source familiar" with some details about what the intended purpose was, but there's no mention of the why.

jsemrau

Up until a few weeks ago, my LinkedIn seemed to become better because of AI, but now it seems everything is lazy AI slop.

We meatbags are great pattern recognizers. Here is a list of my current triggers:

- "The twist?"
- "Then something remarkable happened"

That said, this is more of an indictment of the authors' laziness in providing clear style instructions, which is why the app defaults to such patterns.

swyx

[flagged]

neya

I can tell you this much: most people who are opposed to AI writing blog articles are usually from the editorial team. They somehow believe they're immune to being replaced by AI, and this stems from the misconception that AI content will always sound AI: soulless, dry, boring, easy to spot, and all that. That was true with ChatGPT-3.x. It's not anymore. In fact, the models have advanced so much that you will have a really hard time distinguishing between a real writer and an AI.

We actually tried this with a large Hollywood publisher in-house as a thought experiment. We asked some of the naysayers from the editorial + CXO team to sit in a room with us while we presented, on a large white screen, a comparison of two random articles: one written by AI (which, btw, wasn't trained; we just fed a couple of that writer's articles into the AI's context window) and one actually written by the writer themselves. Nobody in the room could tell which was AI and which wasn't. This is where we stand today. Many websites you read daily actually have a lot of AI in them, just that you can't tell anymore.

A_D_E_P_T

Counterpoint: GPT-4 and later variants, such as o3 and 4.5, have such a characteristic style that it's hard not to spot them.

Em dashes, "it's not just (x), it's (y)," "underscoring (z)," the limited number of ways it structures sentences and paragraphs and likes to end things with an emphasized conclusion, and I could go on all day.

DeepSeek is a little bit better at writing in a generic and uncharacteristic tone, but still... it's not good.

whywhywhywhy

> We asked some of the naysayers from the editorial + CXO team to sit in a room with us, while we presented on a large white screen - a comparison of two random articles - one written by AI, which btw wasn't trained

That's a needlessly-close-to-bullying way to try and prove your point.

neya

> We asked some

Which part of this looks like bullying? It was opt-in. They attended the presentation because they were interested.

koakuma-chan

Have you tried gptzero?

neya

Yep, it's not able to recognize it. To be fair, it's not just a dump-it-into-ChatGPT-and-copy-paste kind of AI. We feed the content into the models in stages: we use 2-3 different models for content generation, and another 2 later on to smooth the tone. But all of these are just off-the-shelf models, not trained. For example, we use Gemini in one of the flows.
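The staged flow described above can be sketched as a simple chain of rewrite steps, each backed by a different model. This is a minimal illustration under stated assumptions: the stage functions here are toy stand-ins, not the commenter's actual prompts or model calls, and a real flow would invoke an API (e.g. Gemini) inside each stage.

```python
from typing import Callable

# Each stage takes the previous stage's text and returns a rewrite.
Stage = Callable[[str], str]

def run_pipeline(draft: str, stages: list[Stage]) -> str:
    text = draft
    for stage in stages:
        text = stage(text)  # generation stages first, tone-smoothing stages last
    return text

# Toy stand-in stages (a real pipeline would call a model here):
generate = lambda t: t + " [expanded]"
smooth = lambda t: t.replace("  ", " ").strip()

print(run_pipeline("rough draft ", [generate, smooth]))  # → rough draft [expanded]
```

The design point is that each model only ever sees the previous stage's output, so generation and tone-smoothing can use entirely different models without sharing any state.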