
I'd rather read the prompt


312 comments

May 4, 2025

sn9

> I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think; a language model produces the former, not the latter.

It's been incredibly blackpilling seeing how many intelligent professionals and academics don't understand this, especially in education and academia.

They see work as the mere production of output, without ever thinking about how that work builds knowledge and skills and experience.

Students who know least of all and don't understand the purpose of writing or problem solving or the limitations of LLMs are currently wasting years of their lives letting LLMs pull them along as they cheat themselves out of an education. Some spend hundreds of thousands of dollars to let their brains atrophy, only to get a piece of paper and face a real world where problems become massively more open-ended and LLMs massively decline in meeting the required quality of problem solving.

Anyone who actually struggles to solve problems and learn themselves is going to have massive advantages in the long term.

fallinditch

It's been obvious since ChatGPT blew up in early 2023 that educators had to rethink how they educate.

I agree that this situation that the author outlines is unsatisfactory but it's mostly the fault of the education system (and by extension the post author). With a class writing exercise like the author describes, of course the students are going to use an LLM, they would be stupid not to if their classmates are using it.

The onus should be on the educators to reframe how they teach and how they test. It's strange how the author can't see this.

Universities and schools must change how they do things with respect to AI, otherwise they are failing the students. I am aware that AI has many potential and actual problems for society but AI, if embraced correctly, also has the potential to transform the educational experience in positive ways.

easygenes

Amusingly, when I asked o3 to propose changes to the education system which address the author's complaints wrt writing assignments, one of the first things it suggested was transparent prompt logging (basically what the author proposes).

https://chatgpt.com/share/6817fe76-973c-8011-acf3-ef3138c144...

palata

> Students who know least of all and don't understand the purpose of writing or problem solving or the limitations of LLMs are currently wasting years of their lives

Exactly. I tend to think that the role of a teacher is to get the students to realise what learning is all about and why it matters. The older the students get, the more important it is.

The worst situation is a student finishing university without having had that realisation: they got through all of it with LLMs, and probably didn't learn how to learn or how to think critically. Those who did, on the other hand, didn't need the LLMs in the first place.

soerxpso

> I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think

I'm there for the degree. If I wanted to learn and engage with material, I could save $60,000 and do that online for free, probably more efficiently. The purpose of a class writing exercise is to get the university to give me the degree, which I cannot do by actually learning the material (and which, for classes I care about, I may have already done without those exercises), but can only do by going through the hoops that professors set up and paying the massive tuition cost. If there were a different system where I could just actually learn something (which probably wouldn't be through the inefficient and antiquated university system) and then get a valid certificate of employability for having actually learned it, that would be great. Unfortunately, however, as long as university professors are gatekeepers of the coveted certificate of employability, they're going to keep dealing with this incentive issue.

palata

> I'm there for the degree. If I wanted to learn and engage with material, I could save $60,000

I would argue that if it costs $60,000, both your education system and the recruitment in those companies that require this degree are broken. It's not the case in all countries though.

Not that it is your fault, just stating the obvious.

jrmg

The work this degree will credential you for is so disconnected from the areas of study in your degree program - presumably in the same field as the job - that the majority of the things you might learn would not be valuable?

I can’t imagine this in my own life. I use concrete things and ways of thinking and working I learned in my CS degree _all the time_.

mplanchard

Heck my degree was in biochemistry, and now I’m a programmer, but I still feel like I am constantly using skills I developed in school. The scientific method and good test design transcend all sciences.

lurking_swe

That’s a disingenuous argument. You don’t know what you don’t know. Literally. A completely self-guided high school graduate following random online materials will not learn nearly as much on their own. Or they will go down rabbit holes and waste countless hours; not having an expert to unblock them or guide them down the right path wastes a lot of time.

Further, some high school graduates (like myself at the time) literally don’t know HOW to learn on their own. I thought I did, but college humbled me; it made me realize that suddenly I’m in the driver’s seat and my teachers won’t be spoon-feeding me knowledge step by step. It’s a really big shift.

If you were the perfect high school graduate, then congrats, you’re like the 0.01%! And you should be proud (no sarcasm). This doesn’t describe society at large though.

For the very few who are extremely motivated and know exactly what job they want, I do think we need something in between self-guided and college. No BS - strictly focusing on job training. Like a boot camp, but one that’s not a scam, haha.

The other aspect of college you ignore is that it’s a way to build a network prior to entering the workforce. It’s also one of the best times to date, but that’s another story.

Completely agree that the cost of college in the US is ridiculous though.

efavdb

If this attitude prevails I would think the value of degrees will quickly diminish.

ebiester

This seems to be the general feeling of students right now.

Academia put itself as a gateway and barrier to the middle class. Why would we be surprised when people with no interest in anything but the goal are not enthralled by the process?

johnea

Your post is just another example of a child behind the wheel driving badly...

Maybe if and when you ever grow up you might understand just how misguided you are in your entire argument...

"certificate of employability" 8-/ maybe just skip it all and spend the rest of your life in assassin's creed... You have nothing to offer any thought work based employer.

zeroq

You're missing the trees for the forest.

When I was a kid and got an assignment to write an essey about "why good forces prevailed in Lord of the Rings" as a gate check to see if I actually read the novel, I had three choices: (a) read the novel and write the essey myself, (b) find an already written essey - not an easy task in the pre-internet era, but we had books with esseys on most common topics you could "copy-paste" - and risk that the professor was familiar with the source or someone else used the same source, or (c) ask a classmate to give me their essey as a template and rephrase it as my own.

A and C would let me learn about the novel and let me polish my writing skills.

Today I can ask ChatGPT to write me a 4-page essay about a novel I've never heard of and call it a day. There's no value gained in the process.

That's a simple example. The problem is that the same applies to programming. Novice programmers will claim that LLMs give them the power to take on hard tasks and program in languages they were not familiar with before. But they are not gaining any skill or knowledge from that experience.

If I ask Google Maps to plot directions from Prague to Brussels, it will yield a list of turns that will guide me to my destination, but by no means can I claim I've learned the topography of Germany in the process.

palata

essay*

(I don't usually do that, but it appears so many times in the first few sentences that I had to do it here)

I agree with your points, though; I think they are in agreement with the comment you are replying to...

zeroq

Hehe, it's fine, at least it proves that the post was written by human. ;)

And yeah, revisiting the OP, we're on the same track.

leereeves

> But they are not gaining any skill or knowledge from that experience.

It sounds like you agree with GP.

azernik

An analogy I've heard is that it's like using a forklift at the gym. The point is not to get an object from point A to point B, it's to develop skills.

Aeolun

> It's been incredibly blackpilling seeing how many intelligent professionals and academics don't understand this

I figured this out in high school. It can’t be that uncommon a thought that if you are already in school, paying, and given time to learn, you might as well do so?

palata

I think that figuring this out is a great achievement. Probably one of the goals of school. It depends on many factors and the sooner, the better.

Young kids don't get it, they just do what they're asked. That's okay. University students graduating without having figured it out is a problem. And somewhere in the middle is when the average student gets there, hopefully?

ozim

I always get triggered when people argue against „rote memorization” - but it is also a technique that builds up knowledge, skills, and experience.

Even if one won’t need that specific know-how after exams - just realizing how much one can memorize, and trying out some approaches to optimize it, is where people grow/learn.

colechristensen

Memorizing things is somewhat helpful but being able to parrot back answers to questions is not at all the same thing as knowledge, skills, or experience. Memorizing a bunch of facts is an adequate way to fool someone into thinking you have those things. Testing for memorized facts is a good way to misidentify useful skills.


necovek

I've already asked a number of colleagues at work who produce insane amounts of gibberish with LLMs to just pass me the prompt instead: if an LLM can produce verbose text from limited input, I just need that concise input too (the rest is simply made-up crap).

jsheard

I'm far from the first to make this observation but LLMs are like anti-compression algorithms when used like that, a simple idea gets expanded into a bloated mess by an LLM, then sent to someone else who runs it through another LLM to summarize it back to something approximating the original prompt. Nobody benefits aside from Sam Altman and co, who get to pocket a cool $0.000000001 for enabling this pointless exercise.

musicale

> LLMs are like anti-compression algorithms when used like that, a simple idea gets expanded into a bloated mess by an LLM,

I think that's the answer:

LLMs are primarily useful for data and text translation and reduction, not for expansion.

An exception is repetitive or boilerplate text or code where a verbose format is required to express a small amount of information.

derefr

There is one other very useful form of "expansion" that LLMs do.

If you aren't aware: (high-parameter-count) LLMs can be used pretty reliably to teach yourself things.

LLM base models "know things" to about the same degree that the Internet itself "knows" those things. For well-understood topics — i.e. subjects where the Internet contains all sorts of open-source textbooks and treatments of the subject — LLMs really do "know their shit": they won't hallucinate, they will correct you when you're misunderstanding the subject, they will calibrate to your own degree of expertise on the subject, they will make valid analogies between domains, etc.

Because of this, you can use an LLM as an infinitely-patient tutor, to learn-through-conversation any (again, well-understood) topic you want — and especially, to shore up any holes in your understanding.

(I wouldn't recommend relying solely on the LLM — but I've found "ChatGPT in one tab, Wikipedia open in another, switching back and forth" to be a very useful learning mode.)

See this much-longer rambling https://news.ycombinator.com/item?id=43797121 for details on why exactly this can be better (sometimes) than just reading one of those open-source textbooks.

valenterry

They are also useful for association. Imagine an LLM trained on documentation. Then you can retrieve info associated with your question.

This can go beyond just specific documentation but also include things like "common knowledge" which is what the other poster meant when they talked about "teaching you things".

devnullbrain

Yep. They're very closely linked.

http://prize.hutter1.net/

Note the preamble, FAQs, and that all of the winning entries are now neural networks.

charlieyu1

I blame humans. I never understand why unnecessarily long writing is required in a lot of places.

Aeolun

Rituals are significant because they are long. A ritual that consisted of the words “Rain please” wouldn’t convince the gods, much less their human followers.

throwawaysleep

Depends on what you are looking for. I’ve turned half-baked ideas into white papers for plenty of praise. I’ve used them to make my Jira tickets seem complicated and complete. I’ve used them to get praised for writing comprehensive documentation.

Part of my performance review is indirectly using bloat to seem sophisticated and thorough.

bdangubic

I’d rather be homeless in Philadelphia than work where you work

musicale

> comprehensive documentation

Documentation is an interesting use case. There are various kinds of documentation (reference, tutorial, architecture, etc.) and LLMs might be useful for things like

- repetitive formatting and summarization of APIs for reference

- tutorials which repeat the same information verbosely in an additive, logical sequence (though probably a human would be better)

- sample code (though human-written would probably be better)

The tasks that I expect might work well involve repetitive reformatting, repetitive expansion, and reduction.

I think they also might be useful for systems analysis, boiling down a large code base into various kinds of summaries and diagrams to describe data flow, computational structure, signaling, etc.

Still, there is probably no substitute for a Caroline Rose[1] type tech writer who carefully thinks about each API call and uses that understanding to identify design flaws.

[1] https://folklore.org/Inside_Macintosh.html?sort=date

necovek

I fully believe you and I am saddened by the reality of your situation.

At the same time, I strive really hard to influence the environment I am in so it does not value content bloat as a unit of productivity, so hopefully there are at least some places where people can have their sanity back!

palata

If your organisation is such that you have to do this even though you are competent for your job, then they deserve it. They lose money because they do it wrong.

If your organisation is functional and you are abusing it by doing that, then you deserve to get fired.

generativenoise

Would be nice to fix the performance reviews so we don't end up in an arms race of creating bloat until it becomes so unproductive it kills the host.

Over-fitting proxy measures is one of the scourges of modernity.

The only silver lining is if it becomes so widespread and easy that it loses the value of seeming sophisticated and thorough.

Wowfunhappy

...thinking about it, there are probably situations where making something more verbose makes it take less effort to read. I can see how an LLM might be useful in that situation.


kevinventullo

Something I’ve found very helpful is when I have a murky idea in my head that would take a long time for me to articulate concisely, and I use an LLM to compress what I’m trying to say. So I type (or even dictate) a stream of consciousness with lots of parentheticals and semi-structured thoughts and ask it to summarize. I find it often does a great job at saying what I want to say, but better.

(See also the famous Pascal quote “This would have been a shorter letter if I had the time”).

P.s. for reference I’ve asked an LLM to compress what I wrote above. Here is the output:

When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.

kace91

“Someone sent me this ai generated message. Please give me your best shot at guessing the brief prompt that originated the text”.

Done; now AI is just lossy pretty-printing.

agentultra

An incredible use of such advanced technology and gobs of energy.

SchemaLoad

This is how I've felt about using LLMs for things like writing resumes and such. It can't possibly give you more than the prompt since it doesn't know anything more about you than you gave it in the prompt.

It's much more useful for answering questions that are public knowledge since it can pull from external sources to add new info.

ponector

ChatGPT is very useful for adding softness and politeness to my sentences. Would you like more straightforward text, which will probably be rude for a regular American?

lttlrck

Yes. I can't stand waffle from native or non-native speakers. Waste of electrons and oxygen :-) that might just be me however. Know your audience ;-)


roarcher

Recently I wasted half a day trying to make sense of story requirements given to me by a BA that were contradictory and far more elaborate than we had previously discussed. When I finally got ahold of him he confessed that he had run the actual requirements through ChatGPT and "didn't have time to proofread the results". Absolutely infuriating.

bost-ty

I like the author's take: it isn't a value judgement on the individual using ChatGPT (or Gemini or whichever LLM you like this week), it's that the thought that went into making the prompt is, inevitably, more interesting/original/human than the output the LLM generates afterwards.

In my experiments with LLMs for writing code, I find that the code is objectively garbage if my prompt is garbage. If I don't know what I want, if I don't have any ideas, and I don't have a structure or plan, that's the sort of code I get out.

I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done, as I haven't tried using any models lately for anything beyond helping me punch through boilerplate/scaffolding on personal programming projects.

vunderba

This is the CRUX of the issue. Even with SOTA models (Sonnet 3.5, etc.), the more open-ended your prompt, the more banal and generic the response. It's GIGO turtles all the way down.

I pointed this out a few weeks ago with respect to why the current state of LLMs will never make great campaign creators in Dungeons and Dragons.

We as humans don't need to be "constrained" - ask any competent writer to sit quietly and come up with a novel story plot and they can just do it.

https://news.ycombinator.com/item?id=43677863

That being said - they can still make AMAZING sounding boards.

And if you still need some proof, crank the temperature up to 1.0 and pose the following prompt to ANY LLM:

  Come up with a self-contained single room of a dungeon that involves an 
  unusual puzzle for use with a DND campaign. Be specific in terms of the 
  puzzle, the solution, layout of the dungeon room, etc. It should be totally 
  different from anything that already exists. Be imaginative. 
I guarantee 99% of the results will be a very formulaic physics-based puzzle response like "The Resonant Hourglass" or "The Mirror of Acoustic Symmetry", etc.
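
If you want to reproduce this programmatically, here is a minimal sketch using the OpenAI Python client; the model name is just an illustrative choice, and any chat-completion API with a temperature parameter works the same way:

  from openai import OpenAI  # assumes the official openai package (v1+)

  PROMPT = (
      "Come up with a self-contained single room of a dungeon that involves an "
      "unusual puzzle for use with a DND campaign. Be specific in terms of the "
      "puzzle, the solution, layout of the dungeon room, etc. It should be totally "
      "different from anything that already exists. Be imaginative."
  )

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  response = client.chat.completions.create(
      model="gpt-4o",   # illustrative model choice
      temperature=1.0,  # the "cranked up" sampling temperature
      messages=[{"role": "user", "content": PROMPT}],
  )
  print(response.choices[0].message.content)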

johnfn

> I guarantee 99% of the results will be a very formulaic physics-based puzzle response like "The Resonant Hourglass"

Haha, I was suspicious, so I tried this, and I indeed got an hourglass themed puzzle! Though it wasn't physics-based - characters were supposed to share memories to evoke emotions, and different emotions would ring different bells, and then you were supposed to evoke a certain type of story. Honestly, I don't know what the hourglass had to do with it.

sillysaurusx

Temperature 1.0 results are awful regardless of domain. 0.7 to 0.8 is the sweet spot. No one seems to believe this till they see for themselves.

Nezteb

Out of curiosity, I used your prompt but added "Do not make it a very formulaic physics-based puzzle."

The output is pretty non-sensical: https://pastebin.com/raw/hetAvjSG

HPsquared

It is totally different from anything that exists. It fulfils the prompt, I suppose! It has to be crazy so you can be more certain it's unique. The prompt didn't say anything about it being good.

Y_Y

I liked the puzzle and I think I could DM it.

riknos314

100% agree.

LLMs may seem like magic but they aren't. They operate within the confines of the context they're given. The more abstract the context, the more abstract the results.

I expect to need to give a model at least as much context as a decent intern would require.

Often asking the model "what information could I provide to help you produce better code" and then providing said information leads to vastly improved responses. Claude 3.7 sonnet in Cline is fairly decent at asking for this itself in plan mode.

More and more I find that context engineering is the most important aspect of prompt engineering.

kergonath

> I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done

They’re great at proofreading. They’re also good at writing conclusions and abstracts for articles, which is basically synthesising the results of the article and making it sexy (a task most scientists are hopelessly terrible at). With caveats:

- all the information needs to be in the prompt, or they will hallucinate;

- the result is not good enough to submit without some re-writing, but more than enough to get started and iterate instead of staring at a blank screen.

I want to use them to write methods sections, because that is basically the exact same information repeated in every article, but the actual sentences need to be different each time. But so far I don’t trust them to be accurate with technical details. They’re language models, they have no knowledge or understanding.

sigotirandolas

For creative and professional writing, I found them useful for grammar and syntax review, or finding words from a fuzzy description.

For the structure, they are barely useful: writing is about having an understanding so clear that the meaning remains when reduced to words, so that others may grasp it. The LLM won't help much with that, as you say yourself.

Herring

In my experience Gemini can be really good at creative writing, but yes you have to prompt and edit it very carefully (feeding ideas, deleting ideas, setting tone, conciseness, multiple drafts, etc).

https://old.reddit.com/r/singularity/comments/1andqk8/gemini...

CuriouslyC

I use Gemini pretty much exclusively for creative writing largely because the long context lets you fit an entire manuscript plus ancillary materials, so it can serve as a solid beta reader, and when you ask it to outline a chapter it is very good at taking the events preceding and following into account. It's hard to overstate the value of having a decent beta reader that can iteratively review your entire work in seconds.

As a side note, I find the way that you interact with an LLM when doing creative writing is generally more important than the model. I have been having great results with LLMs for creative writing since ChatGPT 3.5, in part because I approach the model with a nucleus of a chapter and a concise summary of relevant details, then have it ask me a long list of questions to flesh out details, then when the questions stop being relevant I have it create a narrative outline or rough draft which I can finish.

Herring

Interesting. I think I'm a better editor, so I use it as a writer, but it makes sense that it works the other way too for strong writers. Your way might even be better, since evaluating a text is likely easier than constructing a good text (which is why your process worked even back with 3.5).

buu700

I think the author has a fair take on the types of LLM output he has experience with, but may be overgeneralizing his conclusion. As shown by his example, he seems to be narrowly focusing on the use case of giving the AI some small snippet of text and asking it to stretch that into something less information-dense — like the stereotypical "write a response to this email that says X", and sending that output instead of just directly saying X.

I personally tend not to use AI this way. When it comes to writing, that's actually the exact inverse of how I most often use AI, which is to throw a ton of information at it in a large prompt, and/or use a preexisting chat with substantial relevant context, possibly have it perform some relevant searches and/or calculations, and then iterate on that over successive prompts before landing on a version that's close enough to what I want for me to touch up by hand. Of course the end result is clearly shaped by my original thoughts, with the writing being a mix of my own words and a reasonable approximation of what I might have written by hand anyway given more time allocated to the task, and not clearly identifiable as AI-assisted. When working with AI this way, asking to "read the prompt" instead of my final output is obviously a little ridiculous; you might as well also ask to read my browser history, some sort of transcript of my mental stream of consciousness, and whatever notes I might have scribbled down at any point.

palata

> the exact inverse of how I most often use AI, which is to throw a ton of information at it in a large prompt

It sounds to me like you don't make the effort to absorb the information. You cherry-pick stuff that pops into your head or that you find online, throw it into an LLM, and let it convince you that it created something sound.

To me it confirms what the article says: it's not worth reading what you produce this way. I am not interested in that eloquent text that your LLM produced (and that you modify just enough to feel good saying it's your work); it won't bring me anything I couldn't get by quickly thinking about it or quickly making a web search. I don't need to talk to you, you are not interesting.

But if you spend the time to actually absorb that information, realise that you need to read even more, actually make your own opinion and get to a point where we could have an actual discussion about that topic, then I'm interested. An LLM will not get you there, and getting there is not done in 2 minutes. That's precisely why it is interesting.

buu700

You're making a weirdly uncharitable assumption. I'm referring to information which I largely or entirely wrote myself, or which I otherwise have proprietary access to, not which I randomly cherry-picked from scattershot Google results.

Synthesizing large amounts of information into smaller more focused outputs is something LLMs happen to excel at. Doing the exact same work more slowly by hand just to prove a point to someone on HN isn't a productive way to deliver business value.

satisfice

If you present your AI-powered work to me, and I suspect you employed AI to do any of the heavy lifting, I will automatically discount any role you claim to have had in that work.

Fairly or unfairly, people (including you) will inexorably come to see anything done with AI as ONLY done with AI, and automatically assume that anyone could have done it.

In such a world, someone could write the next Harry Potter and it will be lost in a sea of one million mediocre works that are roughly similar. Hidden in plain sight forever. There would be no point in reading it, because it is probably the same slop I could get by writing a one-paragraph prompt. It would be too expensive to discover otherwise.

buu700

To be clear, I'm not a student, nor do I disagree with academic honor codes that forbid LLM assistance. For anything that I apply AI assistance to, the extent to which I could personally "claim credit" is essentially immaterial; my goal is to get a task done at the highest quality and lowest cost possible, not to cheat on my homework. AI performs busywork that would cost me time or cost money to delegate to another human, and that makes it valuable.

I'm expanding on the author's point that the hard part is the input, not the output. Sure someone else could produce the same output as an LLM given the same input and sufficient time, but they don't have the same input. The author is saying "well then just show me the input"; my counterpoint is that the input can often be vastly longer and less organized or cohesive than the output, and thus less useful to share.

bsder

> someone could write the next Harry Potter and it will be lost in a sea of one million mediocre works that are roughly similar.

To be fair, the first Harry Potter is a kinda average British boarding school story. Rowling is barely an adequate writer (and it shows badly in some of the later books). There was a reason she got rejected by so many publishers.

However, Netscape was going nuts and the Internet was taking off. Anime was going nuts and produced some of the all time best anime. MTV animation went from Beavis and Butthead to Daria in this time frame. Authors were engaging with audiences on Usenet (see: Wheel of Time and Babylon 5). Fantasy had moved from counterculture for hardcore nerd boys to something that the bookish female nerds would engage with.

Harry Potter dropped onto that tinder and absolutely caught fire.

echelon

> I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done

I commented in another thread. We're using image and video diffusion models for creative:

https://www.youtube.com/watch?v=H4NFXGMuwpY

Still not a fan of LLMs.

Animats

That's because the instructor is asking questions that merely require the student to regurgitate the instructor's text.

To actually teach this, you do something like this:

"Here's a little dummy robot arm made out of Tinkertoys. There are three angular joints, a rotating base, a shoulder, and an elbow. Each one has a protractor so you can see the angle.

1. Figure out where the end of the arm will be based on those three angles. Those are Euler angles in action. This isn't too hard.

2. Figure out what the angles should be to touch a specific point on the table. For this robot geometry, there's a simple solution, for which look up "two link kinematics". You don't have to derive it, just be able to work out how to get the arm where you want it. Is the solution unambiguous? (Hint: there may be more than one solution, but not a large number.)

3. Extra credit. Add another link to the robot, a wrist. Now figure out what the angles should be to touch a specific point on the table. Three joints are a lot harder than two joints. There are infinitely many solutions. Look up "N-link kinematics". Come up with a simple solution that works, but don't try too hard to make it optimal. That's for the optimal controls course.

This will give some real understanding of the problems of doing this.
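
For the curious, here is a minimal sketch of what parts 1 and 2 boil down to for the planar shoulder/elbow pair, with made-up link lengths (the rotating base just aims this plane at the target):

  import math

  L1, L2 = 1.0, 0.8  # hypothetical link lengths

  def forward(theta1, theta2):
      # Part 1: where the arm tip ends up for given joint angles
      x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
      y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
      return x, y

  def inverse(x, y, elbow_up=True):
      # Part 2: closed-form two-link inverse kinematics.
      # Note the two mirror-image solutions (elbow up / elbow down).
      c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
      if abs(c2) > 1:
          raise ValueError("target out of reach")
      theta2 = math.acos(c2) if elbow_up else -math.acos(c2)
      theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                             L1 + L2 * math.cos(theta2))
      return theta1, theta2

  # Round-trip check: solve for a target, then confirm forward kinematics hits it
  t1, t2 = inverse(1.2, 0.5)
  print(forward(t1, t2))  # ~ (1.2, 0.5)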

jfengel

A LLM can't do that? I'm a little surprised.

(I know jack all about robotics but that sounds like a pretty common assignment, the kind an LLM would regurgitate someone else's homework.)

nikanj

The LLM is very happy to give you an answer with high confidence.

The answer might be bogus, but the AI will sound confident all the way through.

No wonder sales and upper management love AI

casey2

Most physics teachers who do this are happy with the BS answer; maybe 1 in 10,000 have actually tested their problems in reality.

laurentlb

There are many ways to use LLMs.

The issue, IMO, is that some people throw in a one-shot, short prompt, and get a generic, boring output. "Garbage in, generic out."

Here's how I actually use LLMs:

- To dump my thoughts and get help organizing them.

- To get feedback on phrasing and transitions (I'm not a native speaker).

- To improve tone, style (while trying to keep it personal!), or just to simplify messy sentences.

- To identify issues, missing information, etc. in my text.

It’s usually an iterative process, and the combined prompt length ends up longer than the final result. And I incorporate the feedback manually.

So sure, if someone types "write a blog post about X" and hits go, the prompt is more interesting than the output. But when there are five rounds of edits and context, would you really rather read all the prompts and drafts instead of the final version?

(if you do: https://chatgpt.com/share/6817dd19-4604-800b-95ee-f2dd05add4...)

egglemonsoup

FWIW: Your original comment, in the first message you sent ChatGPT, was way better than the one you posted. Simple, authentic, to the point

gnatolf

I couldn't agree more; this 'polished' style the finished comment comes in is super boring to read. It's hard to put a finger on it, but the overall flow is just too... samesame? I guess it's perfectly _expected_ to be predictable to read ;)

palata

> would you really rather read all the prompts and drafts instead of the final version?

I think you missed the point of the article. They did not mean it literally: it's a way to say that they are interested in what you have to say.

And that is the point that is extremely difficult to make students understand. When a teacher asks a student to write about a historical event, it's not just some kind of ceremony on the way to a degree. The end goal is to make the student improve in a number of skills: gathering information, making sense of it, absorbing it, being critical about what they read, eventually building an opinion about it.

When you say "I use an LLM to dump my thoughts and get help organising them", what you say is that you are not interested in improving your ability to actually absorb information. To me, it says that you are not interested in becoming interesting. I would think that it is a maturity issue: some day you will understand.

And that's what the article says: I am interested in hearing what you have to say about a topic that you care about. I am not interested into anything you can do to pretend that you care or know about it. If you can't organise your thoughts yourself, I don't believe that you have reached a point where you are interesting. Not that you will never get there; it just takes practice. But if you don't practice (and use LLMs instead), my concern is that you will never become interesting. This time is wasted, I don't want to read what your LLM generated from that stuff you didn't care to absorb in the first place.

imhoguy

Exactly, it is a tool that needs skill to use. I would add an extra use of mine:

- To "Translate to language XYZ", and that is sometimes not straightforward and needs iterating, like "Translate to language <LANGUAGE> used by <PERSON ROLE> living in <CITY>" and so on.

And the author is right: I use it as a 2nd-language user, so the LLM produces better text than I would myself. However, I am not going to share the prompt, as it is useless (foreign language) and too messy (bits of draft text) for the reader. I would compare it to passing a book draft through an editor and translator.

palata

For what it's worth, I think that sending a message translated to a foreign language you don't master is the worst thing you can do.

You speak English? Write and send your message in English. The receiver can copy-paste it in a translator. This way, they will know that they are not reading the original. So if your translated message sounds inaccurate, offensive or anything like that, they can go back to your original message.


Ancalagon

I fully support the author’s point but it’s hard to argue with the economics and hurdles around obtaining degrees. Most people do view obtaining a degree as just a hurdle to getting a decent job, that’s just the economics of it. And unfortunately the employers these days are encouraging this kind of copy/paste work. Look at how Meta and Google claim the majority of the new code written there is AI created?

The world will be consumed by AI.

bruce511

You get what you measure, and you should expect people to game your metric.

Once upon a time only the brightest (and / or richest) went to college. So a college degree becomes a proxy for clever.

Now since college graduates get the good jobs, the way to give everyone a good job is to give everyone a degree.

And since most people are only interested in the job, not the learning that underpins the degree, well, you get a bunch of students that care only for the pass mark and the certificate at the end.

When people are only there to play the game, then you can't expect them to learn.

However, while 90% will miss the opportunity right there in front of them, 10% will grab it and suck the marrow. If you are in college I recommend you take advantage of the chance to interact with the knowledge on offer. College may be offered to all, but only a lucky few see the gold on offer, and really learn.

That's the thing about the game. It's not just about the final score. There's so much more on offer.

cwalv

> However, while 90% will miss the opportunity right there in front of them, 10% will grab it and suck the marrow.

Learning is not just a function of aptitude and/or effort. Interest is a huge factor as well, and even for a single person, what they find interesting changes over time.

I don't think it's really possible to have a large cohort of people pass thru a liberal arts education, with everyone learning the same stuff at the same time, and have a majority of them "suck the marrow" out of the opportunity.

squigz

> you get a bunch of students that care only for the pass mark and the certificate at the end.

This is because that is what companies care about. It's not a proxy for cleverness or intelligence - it's a box to check.

Nathanba

Right, and getting a family is also just a box to check, and eating food is a box to check, and brushing my teeth is just a box to check, and on it goes for every single thing in life. If we all just checked boxes then we'd not be human anymore.

mrweasel

> Most people do view obtaining a degree as just a hurdle to getting a decent job

Then they fail to actually learn anything, apply for jobs, and try to cheat the interviewers using the same AI that helped them graduate. I fear that LLMs have already fostered the first batch of developers who cannot function without them. I don't even mind that you use an LLM for parts of your job, but you need to be able to function without it. Not all data is allowed to go into an AI prompt, some problems aren't solvable with LLMs, and you're not building your own skills if you rely on generated code/configuration for the simpler issues.

bee_rider

I think, rather than saying they can’t do their job without an LLM, we should just say some can’t do their jobs.

That is, the job of a professional programmer includes having produced code that they understand the behavior of. Otherwise you’ve failed to do your due diligence.

If people are using LLMs to generate code, and then actually doing the work of understanding how that code works… that’s fine! Who cares!

If people are just vibe coding and pushing the results to customers without understanding it—they are wildly unethical and irresponsible. (People have been doing this for decades, they didn’t have the AI to optimize the situation, but they managed to do it by copy-pasting from stack overflow).

closewith

> That is, the job of a professional programmer includes having produced code that they understand the behavior of.

I have met maybe two people who truly understood the behaviour of their code and both employed formal methods. Everyone else, including myself, are at varying levels of confusion.

mezyt

> I fear that LLMs have already fostered the first batch of developers who cannot function without it.

Playing the contrarian here, but I'm from a batch of developers that can't function without a compiler, and I'm at 10% of what I can do without an IDE and static analysis.

necovek

That's really curious: I've never felt that much empowered by an IDE or static analysis.

Sure, there's a huge jump from a line editor like `ed` to a screen editor like `vi` or `emacs`, but from there on, it was diminishing returns really (a good debugger was usually the biggest benefit next) — I've also had the "pleasure" of having to use `echo`, `cat` and `sed` to edit complex code in a restricted, embedded environment, and while it made iterations slower, it was not that much slower than if I had a full IDE at my disposal.

In general, if I am in a good mood (and thus not annoyed at having to do so many things "manually"), I am probably only 20% slower than with my fully configured IDE at coding things up, which translates to less than 5% of slow down on actually delivering the thing I am working on.

candiddevmike

Apples and oranges (or stochastic vs deterministic)

aledalgrande

I've seen this comparison a few times already, but IMHO it's totally wrong.

A compiler translates _what you have already implemented_ into another computer runnable language. There is an actual grammar that defines the rules. It does not generate new business logic or assumptions. You have already done the work and taken all the decisions that needed critical thought, it's just being translated _instruction by instruction_. (btw you should check how compilers work, it's fun)

Using an LLM is more akin to copying from Stackoverflow than using a compiler/transpiler.

In the same way, I see org charts that put developers above AI managers, which are above AI developers. This is just smoke. You can't have LLMs generating thousands of lines of code independently. Unless you want a dumpster fire very quickly...

otabdeveloper4

Lots and lots of developers can't program at all. As in literally - can't write a simple function like "fizzbuzz" even if you let them use reference documentation. Many don't even know what a "function" even is.

(Yes, these are people with developer jobs, often at "serious" companies.)

staunton

I've never met someone like that and don't believe the claim.

Maybe you mean people who are bad at interviews? Or people whose job isn't actually programming? Or maybe "lots" means "at least one"? Or maybe they can strictly speaking do fizzbuzz, but are "in any case bad programmers"? If your claim is true, what do these people do all day (or, let's say, did before LLMs were a thing...)?

palata

> Most people do view obtaining a degree as just a hurdle to getting a decent job, that’s just the economics of it.

Because those who recruit based on the degree aren't worth more than those who get a degree by using LLMs.

Maybe it will force a big change in the way students are graded. Maybe, after they have handed in their essay, the teacher should just have a discussion about it, to see how much they actually absorbed from the topic.

Or not, and LLMs will just make everything worse. That's more likely IMO.

echelon

> I fully support the author’s point

I don't. I think the world is falling into two camps with these tools and models.

> I now circle back to my main point: I have never seen any form of create generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience

Strong disagree with Clayton's conclusion.

We just made this with AI, and I'm pretty sure you don't want to see the raw inputs unless you're a creator:

https://www.youtube.com/watch?v=H4NFXGMuwpY

I think the world will be segregated into two types of AI user:

- Those that use the AI as a complete end-to-end tool

- Those that leverage the AI as tool for their own creativity and workflows, that use it to enhance the work they already do

The latter is absolutely a great use case for AI.

necovek

> We just made this with AI, and I'm pretty sure you don't want to see the raw inputs unless you're a creator:

I am not a creator but I am interested in generative AI capabilities and their limits, and I even suffered through the entire video which tries to be funny, but really isn't (and it'd be easier to skim through as a script than the full video).

So even in this case, I would be more interested in the prompt than in this video.

palata

> The latter is absolutely a great use case for AI.

The video is not exactly great, IMO.

ineedasername

Yes, depending on the model being used, endless text of this flavor isn't all that compelling to read:

"Tall man, armor that is robotic and mechanical in appearance, NFL logo on chest, blue legs".,

And so on, embedded in node wiring diagrams, fiddly configs, specialized models for bespoke purposes, "camera" movements, etc.

necovek

TBH, this video is not that compelling either, though — obviously — I am aware that others might have a different opinion.

Seeing this non-compelling prompt would tell me right off the bat that I wouldn't be interested in the video either.

Workaccount2

I just want to point out that AI generated material is naturally a confirmation bias machine. When the output is obviously AI, you confirm that you can easily spot AI output. When the output is human-level, you just pass through it without a second thought. There is almost no regular scenario where you are retroactively made aware something is AI.

Hasnep

I've heard this called the toupee fallacy. Not all toupees are bad, but you only spot the bad toupees.

djinnish

The vast majority of the time people question whether or not an image or writing is "AI", they're really just calling it bad and somehow not realizing that you could just call the output bad and have the same effect.

Every day I'm made more aware of how terrible people are at identifying AI-generated output, but also how obsessed with GenAI-vestigating things they don't like or wouldn't buy because they're bad.

cadamsdotcom

LLMs and AI use create new dichotomies we don’t have language for.

Exploring a concept-space with LLM as tutor is a brilliant way to educate yourself. Whereas pasting the output verbatim, passing it as your own work, is skipping the only part that matters.

Vibe coding is fun right up to the point it isn’t. (Better models get you further.) But there’s still no substitute for guiding an LLM as it codes for you, incrementally working and layering on code, committing to version control along the way, then finally putting the result through both AI and human peer code reviews.

Yet these all qualify as “using AI”.

The sooner language catches up and enables productive discussion of emerging distinctions the more productive these discussions can be. Without that richness we only have platitudes like “AI is a powerful tool with both appropriate and inappropriate uses and determining which is which depends on context”.

andy99

I used to teach, years before LLMs, and got lots of copy-pasted crap submitted. I always marked it zero, never mentioning plagiarism (which would require some university administration) and just commenting that I asked for X and instead got some pasted together nonsense.

As long as LLM output is what it is, there is little threat of it actually being competitive on assignments. If students are attentive enough to paraphrase it into their own voice I'd call it a win; if they just submit the crap that some data labeling outsourcer has RLHF'd into an LLM, I'd just mark it zero.

gyomu

Yeah, the author here is as much a part of the problem. If you let students get away with submitting ChatGPT nonsense, of course they’re going to do that - they don’t care about the 3000-word appeal to emotion on your blog, they take the path of least resistance.

If you’re not willing to cross out an entire assignment and return it to the student who handed it in with “ChatGPT nonsense, 0” written in big red letters at the top of it, you should ask yourself what is the point of your assignments in the first place.

But I get it, university has become a pay-to-win-a-degree scheme for students, and professors have become powerless to enforce any standards or discipline in the face of administrators.

So all they can do is give the ChatGPT BS the minimum passing grade and then philosophize about it on their blog (which the students will never read).

IshKebab

Yeah this is what I did the one time I invigilated/marked a Matlab exam. Very obvious cheating (e.g. getting the right answer with incorrect code). But no way was I going through the admin of accusing them of cheating. They just got a 0.

sebzim4500

Are you just assuming that a student who you think used an LLM would be unwilling to escalate?

I would have thought that giving 0s to correct solutions would lead to successful complaints/appeals.

jazzyjackson

If it’s copy pasted it’s obvious, and the assignment isn’t to turn in a correct solution, but to turn in evidence that you are able to determine a correct solution. Automated answers deserve 0 credit.

ineptech

Relatedly, there was a major controversy at work recently over the propriety of adding something like this to a lengthy email discussion:

> Since this is a long thread and we're including a wider audience, I thought I'd add Copilot's summary...

Someone called them out for it, several others defended it. It was brought up in one team's retro and the opinions were divided and very contentious, ranging from, "the summary helped make sure everyone had the same understanding and the person who did it was being conscientious" to "the summary was a pointless distraction and including it was an embarrassing admission of incompetence."

Some people wanted to adopt a practice of not posting summaries in the future but we couldn't agree and had to table it.

crooked-v

I think the attribution itself is a certain form of cowardice. If one is actually confident that a summary is correct they'd incorporate it directly. Leaving in the "Copilot says" is an implicit attempt to weasel out of taking responsibility for it.

triyambakam

I see it more as a form of honesty, though maybe also laziness if they weren't willing to edit the summary, or write it themselves.

makeitdouble

It's probably just transparency, because the summary will be written in a different voice and sound AIish either way.

If I were to include AI generated stuff into my communication I'd also make it clear as people might guess it anyway.

duskwuff

LLMs aren't even that good at summarizing poorly structured text, like email discussions. They can certainly cherry-pick bits and pieces and make a guess at the overall topic, but my experience has been that they're poor at identifying what's most salient. They get particularly confused when the input is internally inconsistent, like when participants on a mailing list disagree about a topic or submit competing proposals.

prymitive

I often find Copilot summaries to be more or less an attempt at mansplaining a simple change. If my tiny PR with a one-line description requires Copilot to output a paragraph of text about it, it’s not a summary; it’s simply time wasted on someone who loves to talk.

jsheard

I've noticed that even on here, which is generally extremely bullish on LLMs and AI in general, people get instantly downvoted into oblivion for LLM copypasta in comments. Nobody wants to read someone else's slop.

coliveira

It is an admission of incompetence. If you need a summary, why don't you add it yourself? Moreover, any person nowadays can easily create a ChatGPT summary if necessary. It is just like adding a page of Google search results to your writing.

jddj

I checked your website after this and wasn't disappointed. Funny stuff.

oncallthrow

LLM cheating detection is an interesting case of the toupee fallacy.

The most obvious ChatGPT cheating, like that mentioned in this article, is pretty easy to detect.

However, a decent cheater will quickly discover ways to coax their LLM into producing text that is very difficult to detect.

I think if I was in the teaching profession I'd just leave, to be honest. The joy of reviewing student work will inevitably be ruined by this: there is 0 way of telling if the work is real or not, at which point why bother?

lionkor

You assume that the teacher's job is to catch when someone is cheating; it's not. The teacher's job is to teach, and if the kids don't learn because their parents allow them to cheat, don't check them at all, and let them behave like shitheads, then the kids will fail in life.

pixl97

This isn't the way reality works.

Or, bad money chases out good. Idiots that cheat will get the recommendations for jobs by maxing out their grades. The person that actually works gets set back. Even worse, society at large loses an actually educated person. And lastly, a school is going to attempt to protect its name by preventing cheating.

sillysaurusx

> then the kids will fail in life.

Quite the assertion. If anything the evidence is in favor of the other direction.

It was eye opening to see that most students cheat. By the same token, most students end up successful. It’s why everyone wants their kids to go to college.

SoftTalker

In many current-day school systems, the teacher's job is to get the required percentage of students to pass the state assessment for their grade level.

They don’t get an exemption if the parents don’t care.

cwalv

> there is 0 way of telling if the work is real or not, at which point why bother?

I might argue you couldn't really tell if it was "real" before LLMs, either. But also, reviewing work without some accompanying dialogue is probably rarely considered a joy anyway.

makeitdouble

On reviewing students' work: people exchange copies, get their hands on past similar assignments, get friends to do their homework, potentially shadow each other in fields they're good at, etc.

There were always a bunch of realistic options for not actually doing your submitted work, and AI merely makes it easier, more detectable, and more scalable.

I think it moves the needle from 40 to 75, which is not great, but you'd already be holding your nose at student work half of the time before AI, so teaching had to be about more than that (and TBH it was; when I was in school, teachers gave no fucks about submitted work if they didn't validate it with some additional face-to-face or test time).

Retr0id

> a decent cheater will quickly discover ways to coax their LLM into producing text that is very difficult to detect

Do you have any examples of this? I've never been able to get direct LLM output that didn't feel distinctly LLM-ish.

AstroBen

this immediately comes to mind https://regmedia.co.uk/2025/04/29/supplied_can_ai_change_you...

A study on whether LLMs can influence people on r/changemymind

doright

This only came to light after the study had already been running for a few months. That proves that we can no longer tell for certain unless it's literal GPT-speak the author was too lazy to edit themselves.

Teachers will lament the rise of AI-generated answers, but they will only ever complain about the blatantly obvious responses that are 100% copy-pasted. This is only an emerging phenomenon, and the next wave of prompters will learn from the mistakes of the past. From now on, unless you can proctor a room full of students writing their answers with nothing but pencil and paper, there will be no way to know for certain how much was AI and how much was original/rewritten.

palata

> there is 0 way of telling if the work is real or not

Talk to the student, maybe?

I have been an interviewer in some startups. I was not asking leetcode questions or anything like that. My method was this: I would pretend that the interviewee is a new colleague and that I am having coffee with them for the first time. I am generally interested in my colleagues: who are they, what do they like, where do they come from? And then more specifically, what do they know that relates to my work? I want to know if that colleague is interested in a topic that I know better, so that I could help them. And I want to know if that colleague is an expert in a topic where they could help me.

I just have a natural discussion. If the candidate says "I love compilers", I find this interesting and ask questions about compilers. If the person is bullshitting me, they won't manage to maintain an interesting discussion about compilers for 15 minutes, will they?

It was a startup, and the "standard" process became some kind of cargo culting of whatever they thought the interviews at TooBigTech were like: leetcode, system design and whatnot. Multiple times, I could tell in advance that even if a person was really good at passing the test, they would not be a good fit for the position (both for the company and for them). But our stupid interviews got them hired anyway and, guess what, it wasn't a good match.

We underestimate how much we can learn by just having a discussion with a person and actually being interested in whatever they have to say. As opposed to asking them to answer standard questions.

YmiYugy

Hate the game, not the player. For the moment we continue to live in a world where the form and tone of communication matter and where forgoing the use of AI tools can put you at a disadvantage. There are countless homework assignments where teachers will give better grades to LLM outputs. An LLM can quickly generate targeted cover letters, dramatically increasing efficiency while job hunting. Getting a paper accepted requires you to adhere to an academic writing style; LLMs can get you there. Maybe society just needs a few more years to adjust and shift expectations. In the meantime you should probably continue to use AI.

Mbwagava

Surely this just makes a mockery of the very tone and style that indicate someone put effort and thought into producing something. On net, this just seems to waste everyone's time with no benefit to us.

yupitsme123

I can't even think of what the new set of expectations would be if that shift were to occur.