
If writing is thinking then what happens if AI is doing the writing and reading?

evolve2k

The sci-fi movie Brazil (nothing much to do with the country) is set in a bureaucratic dystopian future. At the start of the movie a literal real-world bug falls into “the machine that never makes mistakes”, and the error plays out over the course of the movie with a somewhat (negative) butterfly effect.

I feel the movie well captures the tone of the current moment.

Worth a watch.

dexwiz

For anyone who wants to watch, there are a few endings in similar flavors to the Blade Runner endings.

cjbgkagh

I guess like the Penfield Mood Organ you can dial how you want to feel. But you probably shouldn’t pick ‘good’.


GMoromisato

I think Sinofsky is asking a question: what does the future look like given that (a) writing is thinking but (b) nobody reads and (c) LLMs are being used to write and read.

It's that (already) old joke: we give the LLM 5 bullet points to write a memo and the recipient uses an LLM to turn it back into 5 bullet points.

Some plausible (to me) possibilities:

1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.

2. Augmentation: Thinking is primarily done by humans, but augmented by AI. E.g., I write my thoughts down (maybe in 5 bullet points or maybe in paragraphs) and I give it to the LLM to critique. The LLM helps by poking holes and providing better arguments. The result can be distributed to everyone else by LLMs in customized form (some people get bullet points, some get slide decks, some get the full document).

3. Transformation: Maybe the AI does the thinking. Would that be so bad? The board of directors sets goals and approves the basic strategy. The executive team is far smaller and just oversees the AI. The AI decides how to allocate resources, align incentives, and communicate plans. Just as programmers let the compiler write the machine code, why bother with the minutiae of resource allocation? That sounds like something an algorithm could do. And since nobody reads anyway, the AI can direct people individually, but in a coordinated fashion. Indeed, the AI can be far more coordinated than an executive team.

Swizec

> 1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.

This already happens. Being the person who writes the doc [for what we wanna do next] gives you ridiculous leverage and sway in the business. Everyone else is immediately put in the position of feedbacking instead of driving and deciding.

Being the person who feedbacks gives you incredible leverage over people who just follow instructions from the final version.

didericis

4. Degradation: Humans with specialized knowledge lose it through over-reliance on AI, and AI degrades over time due to the lack of new human data and to AI-contaminated data sets.

bugbuddy

5. Society collapses in an Idiocratic fashion.

volemo

6. PROFIT?

oldge

The executives in example three seem redundant, a cost center we can eliminate.

bugbuddy

Everyone without significant capital is redundant and can be eliminated.

andai

The other day I gave GPT a journal entry and asked it to rewrite it from the POV of a person with low Openness (personality trait). I found this very illuminating, as I am on the opposite end of that spectrum.

karaterobot

The title made me think this was going to be about the mental consequences of outsourcing writing to AI. In fact, the article is completely about people not reading documents. Corporate documents to be exact. His examples are from the 00s, so the problem has absolutely nothing to do with AI.

Heck, I, too, have noticed that nobody reads anything: what does that have to do with AI? At least with AI, people could read a summary of his 30 page corporate memo and ask it questions.

I repeat: that people do not read is not a new problem, nor is it made one iota worse by AI.

mlinhares

Completely wild counterpoint: now that our docs are available to the AI bot, people are interacting with them more, because when they ask, the bot can reply, explain, and then say "docs are available here", to the point I'm actually investing even more of my time in writing them.

sothatsit

I agree, it feels like the value of text documents has gone up immensely with AI making it much easier to make use of them. Accurate reference documentation that AI can point to is really valuable.

Although, the next step on this ladder is going to be that people don't even double-check the facts in the original document, and just take what the LLM said as truth, which is perhaps scarier to me than people not reading the original documents in the first place...

gleenn

I'm a firm believer that very few people read anything. It's not just that they don't read long things; they don't even read short ones. One thing I always thought was funny was having Product Managers see there were problems with the UI, after which I would get tickets to add text near the problems. It always cracked me up: if users barely read the button they were clicking, why would they read a paragraph nearby?

wlesieutre

In school they were really big on the 5-paragraph persuasive essay format. I guess because it teaches you to think through an argument and present it to someone.

In practice, I find that if I don't format something as a bulleted/numbered list, nobody is going to look at it.

Herring

I'm sympathetic. It's like if code isn't color-coded and properly indented, I'm just not reading it.


sakesun

Just watched Veritasium's "How One Company Secretly Poisoned The Planet"

https://www.youtube.com/watch?v=SC2eSujzrUY

Inventions created for convenience decades ago have now become health concerns. I wonder how AI might affect our intellectual well-being in the decades to come.

jojobas

It's already affecting students' knowledge. Why bother with understanding something if you can ask ChatGPT to do the work?

gchamonlive

I think it's the type of thing where mere awareness of it counts for a lot in counterweighting the problem.

It's like when you are growing up and a certain type of behaviour that would work for socializing when you're 14 years old suddenly doesn't work anymore when you're 21. You learn about it when someone you trust brings your attention to it, and suddenly you have the opportunity to reflect and change your behaviour.

The thing with AI that I really fear is the same with mind-altering drugs like Adderall. In some places you just can't afford the luxury of not using it without losing competitiveness (I think, never used it but I know of people that do with regularity).

So maybe we don't want to not read what we write, but sometimes there is a middle manager making you do it. Then it's a problem of context, and awareness in itself doesn't help, except maybe in the long run.

caseyohara

> mind-altering drugs like Adderall

This is strange to me. You could give me 100 chances to guess which “mind-altering drug” you are thinking of and Adderall wouldn’t cross my mind. Amphetamine is a stimulant; it’s plainly not mind-altering in the way that psychedelics are. Adderall is mind-altering in the same way that caffeine is. Which is to say, it isn’t.

saulpw

Yes, Adderall is mind-altering, even if it's not as profound as psychedelics. Caffeine is also mind-altering to a lesser degree.

mayukh

I use AI a ton for writing emails and other corporate stuff, and I have no problems there: it structures and presents them well enough, and quickly enough, that it saves me a ton of time.

Where I am conflicted is creative writing -- it's something I have been interested in but never pursued... and now I am able to pursue it with AI's help. There is a degree of embarrassment when confiding to folks that, yes, a piece was AI-assisted... see here what I mean: https://humancurious.substack.com/p/the-architect-and-the-cr...

apparent

Do you find yourself learning from how AI structures your bullet points or rewrites messages?

I sort of feel like it would blunt the downsides of AI rewriting everything if it had to explain why it was making all the changes. Being told the rationale would allow users to make better decisions about whether to accept/reject a change, and also help the user avoid making the same writing mistakes in the future.

indigodaddy

Do you feel weird that your coworkers must largely know that you use AI to communicate with them? I'd feel weird doing it, I know that much, so I've never even contemplated using AI for emails/communication.

muratsu

In my experience people don't read these large documents because they are not personalized/relevant. When you're writing to a large audience, you naturally assume people know the least amount possible about the subject and start from there. In a corporate setting this comes off as irrelevant or boring. I'm sure rebranding initiatives like One Microsoft, Copilot, or Office 365 make things simpler for executives, but employees are left confused. The memo usually mentions future efficiency gains or synergies but will omit why the brand change is needed. Sure, if you're sending a memo to 100k people, it makes sense not to talk about negatives (politicians are a good example of this), but at that point the value of the memo is also very low. This may come off as odd, but short-format videos seem to work much better at large scale. Perhaps the future of communication is really just lots of easy-to-consume, repeated content.

jillesvangurp

Reading long-form text is a big commitment in time. In a business context that's simply not appropriate. I always joke about documentation as write only content. It's the type of thing you are asked to write that then doesn't get used or read. I've more than once gotten the question of whether I could produce a diagram or some sort of documentation, only to realize later that, after the thumbs up I got on delivery, nobody was actually doing anything with the document.

Here on HN, short comments are more appreciated than longer comments. People are skimming, not reading. The ability to say a lot with very few words is what is appreciated the most.

That's nothing new btw. As Mark Twain once wrote: “I didn’t have time to write a short letter, so I wrote a long one instead.”

Using LLMs to rephrase things more efficiently is a good use of LLMs. People are getting used to better signal to noise ratios in written text and higher standards of writing. And they'll mercilessly use LLMs to deal with long form drivel produced by their colleagues.

voidhorse

> I always joke about documentation as write only content. It's the type of thing you are asked to write that then doesn't get used or read.

That's not actually true. It may be true now when everyone still has context, but if you built a sound system that will outlast your own contributions to it, the documentation becomes invaluable.

> People are skimming, not reading.

Yes. The cause of much suffering and misery in the modern world.

> And they'll mercilessly use LLMs to deal with long form drivel produced by their colleagues.

otoh, they also use them to generate more noise and drivel than we ever imagined possible. When it took human effort to pump out boring corp-speak, that at least put a cap on the amount of useless documentation and verbiage being emitted. Now the ceiling has been completely blown off. People who have been incapable of even crafting a single sentence their entire lives can now shovel volumes of AI-generated garbage down our throats.

Herring

Idk, I'm more optimistic than the author.

I'm currently using a LLM to rewrite a fitness book. It takes ~20 pages of rambling text by a professional coach // amateur writer and turns it into a crisp clear 4 pages of latex with informative diagrams, flow charts, color-coding, tables, etc. I sent it out to friends and they all love the new style. Even the ones who hate the gym.

My experience is LLMs can write very very well; we just have to care.

Hubert Humphrey (VP US) was asked how long it would take him to prepare a 15 minute talk: "one week". Asked how long to prepare a two hour talk? "I am ready right now".

kaushikt

this. I am almost addicted to dropping long voice notes (pronounced rambling) and LLMs do such a great job at creating and managing these notes. I can then convert that format into anything.

Although, I agree with the author, since many of the emails and messages on LinkedIn I get these days are just long shitposts by AI. I'm not reading them anymore; some other AI is summarising them, because no human talks or writes the way basic AI prompting does. So, so difficult to read.

wvenable

> 4 pages of latex with informative diagrams, flow charts, color-coding, tables, etc.

What tool are you using for this?

I too have used an LLM to do writing that, frankly, I wouldn't have done without it. Often I don't even take what it says, but it helps to get the ideas out in written form, and then I can edit it to sound more like how I want to sound.

intended

Production became easier -

But your consumer changed as well.

Herring

Gemini pro + overleaf (learned latex from a long stint in grad school). Cheers mate.

ramesh31

>My experience is LLMs can write very very well; we just have to care.

My experience is that people who think this are really bad writers. That's fine, because most human writing is bad too. So if your goal is just to put more bad writing into the world for commercial reasons, then there's some utility in it for sure.

Herring

Then in the A/B test you're the 1 failure out of 7 tries: everyone else loved it, and you haven't even looked at it. I can live with those results. The lesson is: don't always publicize the LLM help.

the_af

> everyone else loved it

Most people are very bad readers, too.

For example, most of my coworkers don't read books at all, and the few that do, only read tech or work-related books. (Note that most don't even read that).

api

As with visual art, AI is replacing humans when it comes to creating filler and background material.

I haven’t seen many examples of anything in either visual or prose arts coming out of an AI that I’ve liked, and the ones I have seen seem like they took a human doing a lot of prompting to the point that they are essentially human made art using the AI as a renderer. (Which is fine. I’m just saying the AI didn’t make it autonomously.)

southernplaces7

Might I ask what LLM you use for it to do all of that, including the visuals, so neatly?

Herring

Gemini pro. The flow chart, color-coding, tables etc are just latex. The illustrations are picked off the web. I suspect I might have to hire a professional photographer eventually.

southernplaces7

>The flow chart, color-coding, tables etc are just latex. The illustrations are picked off the web

Thanks for replying so quickly! Just to clarify, what do you mean by latex?

So you don't use AI-generated illustrations. Those are real.

aaronbrethorst

Fun fact: Socrates thought writing would lead to forgetfulness. https://newlearningonline.com/literacies/chapter-1/socrates-...

voidhorse

And he was right. Poets and bards used to memorize entire epics. Writing changed this not by augmenting their abilities, but by changing expectations: it became acceptable to read from a written record, rather than from memory.

My memory gets exercised a lot less frequently than it would need to without writing.

But memory is also not thinking. It is a component in thinking, but it is not thinking itself. Discourse, though, whether in natural or symbolic language, arguably is thinking. If we offload all of that onto machines, we'll do less of it, and yes, our expectations will change. But I actually think the scenario here is different from the one Socrates faced, and that the stakes are slightly higher. And Socrates wasn't wrong; we just needed internal memory less than we thought once external memory became feasible, as cool and badass as it may seem to "own" Socrates in retrospect.

blondie9x

Drastic oversimplification of what is meant here. The core takeaway is that writing can give the illusion of wisdom without the reality of it. True wisdom must be cultivated internally, through thoughtful interaction and lived experience, not simply consumed passively through text.

raincole

The title (subtitle, to be precise) is almost completely unrelated to the content.

Title is about AI. Content is just rambling on how people dislike reading business reports.

If writing is thinking, I think the author is having trouble thinking coherently.

gavmor

I was hoping he'd touch on how writing is "embodied" or "extended" cognition, ie how it allows for manipulation and reorganization of ideas in ways that are functionally similar to what occurs within the mind a la Clark and Chalmers' “Otto’s notebook” thought experiment (in which an Alzheimer's patient has written all of his directions down in a notebook to serve the function of his memory).

Or how the thoughts we have as we are writing shape our understanding, and so we come out with not only a written composition, but a new frame of mind; AI generated writing allows us to preserve our frame of mind—for better or worse!

There is something to be said for the ways in which coding, too, is an exercise in self-authoring, ie authoring-of-the-self.

bravesoul2

Then nothing is being learned by the AI (yet, since LLMs are prefabbed rather than dynamic) or by the human.

I don't use LLMs at all for writing. Mainly for checking stuff and the most boilerplate of code.

jart

Putting aside the people who need LLMs as a prosthetic, the only writers who are asking AI to write lengthy prose for them are the ones who on that occasion didn't have anything important to say in the first place. Maybe they work a fake job where they're required to do ritualistic writing, similar to the medieval scribes who laboriously copied books by hand over and over again. Now, thanks to AI, they're liberated from having to toil through that lengthy process. So, yes, writing is thinking. It's also willpower. But if the purpose for it didn't matter in the first place, then nothing is actually lost if you stop doing it. You just won't get punished.