Teacher AI use is already out of control and it's not ok

ozgung

These examples show that we have a serious social issue, and it's not limited to teachers. People misuse LLMs. We engineers understand that LLMs are products under development: they only work correctly under certain circumstances, they have limitations, and their evaluation metrics are imperfect. Regular people (non-engineers) treat them as finished products or magic wands. They ignore the warning on the page saying the LLM can make mistakes. And there are billions of those people. This may create huge social problems that engineers can't fix.

coffeefirst

Correct, and the root cause is it’s been sold to the public this way.

lioeters

Marketing to the consumer instead of informing and educating the user. That makes sense in terms of incentives.

pizza234

I think it's a grey area - on one hand, it's been sold as a source of truth, but on the other hand, there's a strong element of confirmation bias and/or simple laziness from users (of course, to varying degrees).

A friend of mine states that the market rates for their position are wrong, because ChatGPT gave higher numbers: this is an example on the far end of the spectrum of confirmation bias - it matters little whether it was sold as source of truth or not.

watwut

> We engineers understand that LLMs are products under development. They only work correctly under certain circumstances

Looking around me, engineers do not understand that. Instead, they have exactly the same overblown expectations and actively push for LLMs everywhere. They will call you a Luddite if you say anything else.

no_wizard

Wear that Luddite badge with honor. They were not anti-technology[0]; they fought for workers' rights during an age of rapidly rising new technology.

[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...

ACCount36

Luddites were idiots. They thought they could stand in the way of progress. They were crushed by it.

What's worse is that people still make the same mistake today.

coffeefirst

"Ludite" "low-IQ" "meat LLM" "you will be left behind"

The behavior of the boosters is basically the opposite of how to win friends and influence people. I've been through plenty of hype cycles, and this is the first one where they seem to need to insult and threaten everyone.

I don't get it. And I don't feel any need to entertain it.

obscurette

I have a bunch of coworkers who paste various LLM outputs to every chat while discussing issues in production.

- "LLM X said that we should try adding this to the configuration file – SOMETHING_SOMETHING = false."

- "There is no SOMETHING_SOMETHING configuration option, you have a full source, grep for it."

- "But should we try at least?"
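The second speaker's suggestion is easy to act on: a claimed option can be checked against the source before anyone "tries it". A minimal sketch, where SOMETHING_SOMETHING comes from the chat above and the /tmp/demo_src tree is a hypothetical stand-in for the real repository:

```shell
# Build a tiny stand-in source tree (hypothetical; point grep at the real repo instead).
mkdir -p /tmp/demo_src
printf 'REAL_OPTION = true\n' > /tmp/demo_src/config.py

# Does the LLM-suggested option appear anywhere in the source?
if grep -rq "SOMETHING_SOMETHING" /tmp/demo_src; then
    echo "option exists in source"
else
    echo "option not found: no point in trying it"
fi
```

A thirty-second check like this settles the "but should we try at least?" question before production is touched.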

ozgung

That's also because of seeing them as technology under development. The overblown expectations come from their potential in the future. The glass is half full today; it was almost empty just a few years ago. The water level is rising at an unprecedented rate. But we shouldn't forget it's still half empty right now. More importantly, we are bad at predicting how actual people use the technology.

4b11b4

> "Social problems that engineers can't fix"

Sounds similar to social media.

Otherwise, yes, I am very concerned about society's use of LLMs -- particularly young people (students).

But now the very teachers themselves... Frankly, not surprised.

I've been using it to make me a much better tutor/mentor. But the cases outlined in (I'm assuming) the public education sector are very, very worrisome.

amrocha

Engineers are not uniquely immune to this phenomenon. Just look at all the commenters on this board claiming AI makes them 20x more effective.

georgeplusplus

Normal product release cycles bring testable quality measures before the product is released. Do LLMs go through such tests?

If there are serious societal issues it can cause at the moment, I wonder why it was released before being perfected. But then, what does perfected even look like? The product is darn good at the moment.

ozgung

These products are basically public betas. Some features are even experimental. They are released to the public because (1) companies have to gain early market share and (2) they need actual user data. The Sam Altman firing drama last year was related to this issue.

beefnugs

I think it's more a capitalism problem: the constant squeeze for everyone to output more for less pay or die of starvation. No one could ever choose well in these circumstances.

It is hilarious, though, that humans are just ready to slurp up any opportunity to start shitting all over their work, their coworkers, and their students. Less time spent and less caring?! Yes please! Then they become full-time salesmen of it to everyone else, and the social interaction problems just explode: tribal standoffs, entirely new tribes, plotting and deception to keep getting what they've had a taste of.

Infighting distractions are so convenient as the government guts everything they work under.

janalsncm

Wonder if this is a natural consequence of teachers being overworked. If teachers can get more work done with AI (who cares if quality suffers!) then that becomes the baseline and admins will push them to do even more.

In other words I predict this to be less of an issue with smaller class sizes.

tecleandor

In some fields it's becoming a bit ridiculous/worrying.

The workload is making people create or extend their work using LLMs, and the reviewers/managers are also overloaded and don't have time to go through it, so they feed it to an LLM for a summary, which is then pushed somewhere else to feed another process... It becomes a "broken telephone" business process where nobody really knows the details of what's going on, just LLMs feeding other LLMs in an endlessly absurd loop.

Vinnl

In my experience teachers are overworked because they care if quality suffers. They can get their work done in the set time if they just don't care for the students as much.

(Very anecdotal, local-to-the-Netherlands experience, of course.)

bri3k

Are they overworked? In the article he states he is coaching and running three preps in addition to teaching. Less can be more.

monocasa

The fact that you have to have side hustles to make ends meet as a teacher is one aspect of how they're overworked.

bri3k

The same article says he is well paid. I'll grant you that a lot of teachers are paid little, but there was nothing about money in the discussion.

sooheon

Overworked is relative. As soon as a way to reduce current workload is available, everyone feels "overworked"

karel-3d

Students submit AI-generated essays and an AI then grades them.

That's what's called a GAN – a generative adversarial network.

exegete

And teachers used AI to generate the essay question.

karel-3d

Unsupervised learning.

empiko

The cracks in the education system were showing even before AI, with unmotivated students and teachers alike just burning their hours. AI just exposed these cracks and showed that the entire system is incredibly inefficient and pointless. I believe that the future of education is in much smaller institutions that can support their communities on a human scale.

probably_wrong

I don't think inefficiency is the right word. In fact, I'd argue the exact opposite: that one of the main problems with education is that every single administrator has been trying to optimize it to death for the wrong metrics, namely "number of students making it through" and "budget".

Smaller institutions are indeed better, but they are also less efficient. It's no wonder that only rich families can afford institutions like that.

spacecadet

I agree with you. Learning is messy, hyper-situational, and personalized. "Optimizing" for "efficiency" neglects this and has produced the cookie-cutter "teacher factories" that public education has become. As someone with relatives who were public school teachers: they will tell you that there is no way to scale it back and bring the community aspect in closer, that most communities have too little budget and too many children, and that it just burns through teachers. Like gun control, this is likely a problem that will continue "without solution" because people are lazy and change is difficult. I'm sure future historians will credit some of America's collapse to this problem, among other "unfixable" societal problems.

obscurette

As a teacher – excellent description, thank you. Just to add my experience: I was in school in the seventies and some years ago had a "40 years since graduating" reunion with my classmates. The vast majority were doing well, and when we talked about old times in school, two things stood out. First, although we were in the same class, our experiences were very different. We remembered very different things; different teachers were important to us, to the point where some were the most loved by some of us and the most hated by others. But we all agreed that our homes were even more important for our education than school – from our homes (parents and grandparents) came the attitude that education is important and that, no matter what, it is our responsibility to study.

4b11b4

Have been thinking something similar... that we'll see much smaller educational organizations, funded and supported in very different ways.

JohnKemeny

Technology is making humankind lazy. First physically, then mentally.

Learning is hard, it's a struggle. Why learn when you can not learn?

xrd

It feels funny to think about this next to the outrage over trans kids in school sports. There are probably a dozen kids nationally participating in a sport with other kids who didn't share the same set of chromosomes at birth. That's a tiny slice of the population, but the issue has captured the attention of a huge group of people. I believe the anger, if you distill it a bit, comes from an "unlevel" playing field, right?

But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade. I would wager that the number of students and teachers using AI is now the majority population.

I face this dilemma on a daily basis when trying to do my job as a software developer. Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world? Or, take a chance on being the turtle and trying to win the race against the hare?

simonw

The more time I spend writing code with help from LLMs the less I fear for my job, because I gain an increased understanding of how much depth there is to building software.

To get good results out of an LLM you need to determine exactly what the system needs to do and how it should work.

That's programming! We just don't have to type all the semicolons ourselves any more.

xrd

I always feel pretty special when you respond to a comment of mine. Thank you.

I agree with you. And I'm not sure LLMs help me learn high-level concepts (yet). They certainly have those concepts inside their training data, and you can extract them if you do the work. But in a lot of domains, and this applies to someone old like me and someone young like my kids, knowing what to ask is the central problem.

This applies to what I see my kids doing with AI: I don't think LLMs, right now, encourage them to learn concepts as much as they quickly give them answers.

I don't see ChatGPT Study Mode as fulfilling this, in my limited usage, but I would love to be wrong about that. It's a good direction indeed.

Probably this is the new frontier, where the best students are the ones that figure out how to use these tools to learn "deeply" rather than just jumping to the answers. Maybe that is how it has always been?

insane_dreamer

> We just don't have to type all the semicolons ourselves any more.

And if we're not having to code in Java then we never had to type all those in the first place! ;)

someuser2345

> I believe the anger, if you distill it a bit, comes from an "unlevel" playing field, right?

Why is "unlevel" in quotes? When it comes to physical activities, biological males have a huge advantage over biological females; high school boys routinely beat professional adult women's sport teams.

> But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade.

I agree that this is a bigger problem than trans kids in sports. I think people are less upset about this because

1. It's a more recent development.

2. They think that the kids using AI are actually putting themselves at a disadvantage, albeit one that will only become apparent after they graduate.

skeezyboy

> Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world?

The good times are over; it happens. I remember watching DALL-E come out and feeling sorry for graphic designers, gloating in the knowledge that programming was too complex to automate. Then they automated it.

A human is still required in the loop for vibe coding, as it's fairly fuckin' useless without guidance, but I can see that changing too.

CompoundEyes

I’m drafting policy at work with teammates about how we will handle pull requests with aggressive use of Claude Code. We are currently researching and piloting it.

I am going to propose that no one should feel pressure to use any of the generative coding tools if they don't want to.

internet_points

> * A teacher sponsoring a club put student artwork through Microsoft Copilot to 'clean it up' because he thought it looked too unfinished and the kid felt incredibly disrespected and upset.

and rightly so! kids deserve better, that is awful

jfarmer

I sometimes find Paul Watzlawick's five axioms of communication helpful in thinking about situations like this.

Link: (https://en.wikipedia.org/wiki/Paul_Watzlawick#Five_basic_axi...)

  1. One cannot not communicate

  2. Every communication has a content and relationship
     aspect such that the latter classifies the former
     and is therefore a metacommunication

  3. The nature of a relationship is dependent on the
     punctuation of the partners' communication procedures

  4. Human communication involves both digital and analog
     modalities

  5. Inter-human communication procedures are either
     symmetric or complementary
Re: (1), the "mere" act of using AI communicates something, just like some folks might register a text message as more (or less) intimate than a phone call, email, etc. The choice of modality is always part of what's communicated, part of the act of communication, and we can't stop that. Re: (2), that communication is then classified by each person's idea of what the relationship is.

This is a dramatic and expensive way to learn they had different ideas of their relationship!

Of course, in a teacher/student situation, it's the teacher's job to make it clear to the students what the relationship is. Otherwise you risk relationship-damaging "surprises" like this.

Even ignoring the normative question of what a teacher Should™ do in that situation, it was counterproductive. Whatever benefit the teacher thought AI would provide, they'd (hopefully) agree it was outweighed by the cost to their relationship w/ students. All future interactions w/ those students will now be X% harder.

There's a kind of technical rationale which says that if (1) the GOAL is to improve the student's output and (2) I would normally do that by giving one or more rounds of feedback and waiting for the student to incorporate it then (3) I should use AI because it will help us reach that goal faster and more efficiently.

John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:

“It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”

The act of receiving and incorporating feedback is not "inefficient", especially not in a school setting. The consummation of that process is part of the goal. Maybe the most important part!

Full Dewey quote: https://news.ycombinator.com/item?id=44597741

amelius

In another view, this prepares the kids for what the future is going to be like.

chii

However, this same action could be useful in a different context – for example, if the teacher uses the same AI to produce an artefact and then uses it as a critique aid as part of teaching (say, to show what might be lacking in a particular piece).

jjani

All those teachers should indeed be banned from using AI. But that's not because LLMs are incapable of doing the things they're being used for here in a way that would improve on how those same teachers did those tasks pre-LLMs.

The majority of times I see things like this it turns out that it's either:

- The "they've built it wrong" case; this one is the most common. People using – or in this case being made to use at work – tools that behind the scenes all use very cheap models (e.g. 4o-mini) with little context, half vibe-coded, to save costs. The company making "MagicSchool" doesn't care; they want to maximize those profit margins, and they're selling to school administrations, not teachers, who only look at the costs and never actually use the products themselves. Just like classic enterprise software in traditional companies: they need to tick boxes and show features that only demonstrate the happy path. It is perfectly possible to make such a tool high quality, in a way that adds value, doesn't make things up, and is properly validated. But especially in this niche, sales trumps everything. The hope is that at some point this will change. We've seen the same play out with enterprise software to an extent; new such software does tend to be more usable on average than it used to be. It has taken a long time to get there, though.

- The "you're holding it wrong" meme; users themselves directly using tools like Microsoft Copilot, 4o and friends (very outdated free tiers, miles behind Claude, Gemini 2.5 Pro, o3, etc.), while having zero idea what LLMs can and can't do, and even less of an idea about inherent biases and how to prompt against them. Combined with a complete lack of caring and a lack of competence – people lacking the basic critical thinking skills necessary to spot issues – this is a deadly combo.

Of the problems with tasks and outcomes named in that thread, the large majority can indeed already be done with LLMs in a manner that both saves time and provides better quality than what the teachers rightly criticized there produce. Teachers who don't even check the output obviously don't give a single damn anyway, and that tells you enough about what the quality of their teaching would have been like pre-LLMs.

tovej

Using LLMs to produce material is not a good idea, except maybe to polish up grammar and phrasing.

As a former teacher, I know you need a good grasp of the material you are using in order to help students understand it. The material should also keep a similar structure throughout a course, which reinforces students' expectations and reduces their mental load. The only way to do this is to prepare the material yourself.

Material created by an LLM will have the issues you mentioned, yes, but it will also be harder to teach, for the reasons above. In the US, where teaching is already in a terrible state, I wouldn't be surprised if this is accepted quietly, but it will have a long-lasting negative impact on learning outcomes.

If we project this forward, a reliance on AI tools might also lower expectations for the quality of the material, which will drag the rest of the material down as well. This mirrors the rise of expendable, mass-produced products when we moved the knowledge needed to produce goods from workers to factory machines.

Commodities are one thing; you could argue that the decrease in quality is offset by volume (I wouldn't, but you could). But for teaching? Not a good idea. At most, teach students how to use LLMs to look for information, and warn them about hallucinations and about not being able to find the sources.

halgir

I agree you shouldn't use LLMs to produce material wholesale, but I think it can be positively useful when used thoughtfully.

I recently taught a high school equivalent philosophy class, and wanted to design an exercise for my students to allocate a limited number of organs to recipients that were not directly comparable. I asked an LLM to generate recipient profiles for the students to choose between. First pass, the recipients all needed different organs, which kind of ruined the point of the dilemma! I told it so, and second pass was great.

Even with the extra handholding, the LLM made good materials faster than if I would have designed them manually. But if I had trusted it blindly, the materials would have been useless.

tovej

How can you ensure that the exercise actually teaches the students anything in this case? Shouldn't you be building the exercise around the kinds of issues that are likely to come up, or that are difficult/interesting?

If you're teaching ethics in high school (which it sounds like you are), how many minutes does it take to write three or four paragraphs, one per case, highlighting different aspects that the student would need to take into account when making ethical decisions? I would estimate five to ten. A random assortment of cases from an LLM is unlikely to support the ethical themes you've talked about in the rest of the class, and the students are therefore also unlikely to be able to apply anything they've learned in class before then.

This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.

jjani

I've done years of private 1:1 teaching and some class teaching though not class lecturing, which is presumably the material you're talking about.

> As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form thoughout a course, which will reinforce the expectations of the students, making their mental load lesser. The only way to do this is to prepare the material yourself.

It's absolutely necessary to have a good fundamental understanding of the material yourself. These teachers abusing AI and not even catching such obvious issues clearly don't have that understanding – or they're not using it, which is effectively the same. In fact, they likely have a much worse understanding than your average frontier LLM, especially given that this post is about high-school-level teaching.

> The only way to do this is to prepare the material yourself.

As brought up in other comments, what counts as "yourself"? For decades teachers have used premade lesson plans, whether third-party, school-supplied, or otherwise obtained, with minor edits. All teachers? Of course not, but it's completely normalized. Are they doing it themselves? If not, then the remainder did it together with Google and Wikipedia; were they not doing it themselves either? Especially given how awful modern Google is (and the worldwide number of high school teachers using something like Kagi is <100 people), simply using a frontier model, especially with web search enabled, is just a better version of doing that, if used in the same way.

tovej

If you use a prepared lesson plan it at least has some structure to it that students can learn to expect, and if you search for information from the internet, you are still compiling it yourself, which again means structure, a structure _you_ made using information that _you_ have parsed and decided to include. You will also have sources.

None of this will be true for LLM output.

meindnoch

In the future, only prestigious private schools will employ human teachers.

Education in public schools is going to be 100% LLMs with text-to-speech, the only human adult in classrooms will be a security guard, but later they will also be replaced with AI-controlled autocannons that shoot non-lethal projectiles to discipline misbehaving kids.

grues-dinner

Drone-based school security with flash bangs, pepper spray and physical dive-bombing of the attacker is already the plan: https://www.campusguardianangel.com/

Just add a student compliance add-on subscription.

OgsyedIE

So how do they get through closed doors?

grues-dinner

> The drones can also fly through windows, using a front lance to break through.

https://www.campusguardianangel.com/faq

I would say you couldn't make it up, but you could. You'd just be called a bad writer with unsubtle and derivative ideas.

meindnoch

This is satire, right?

3D30497420

Looks pretty real:

https://www.linkedin.com/company/mithril-defense/people/

https://www.nbcnews.com/nightly-news/video/company-says-high...

Why solve the root problem when it can instead be made into a business opportunity?

password321

Nah, by that point people won't have a reason to drop their children off at a glorified daycare designed to condition them to work quietly for a set number of hours, because we won't need to work anymore.

alpinisme

You vastly underestimate the value of daycare (leaving aside the actual point you’re making)

warmedcookie

Exactly, a big part of public school is the "daycare" aspect of it. LLMs cannot provide that.

Applejinx

"get" to

throwaway290

I think it was sarcasm

SJC_Hacker

Have you ever been inside an American K-12 classroom ?

Education is secondary in a teacher's job... the real issue is managing the classroom without disruption.

polskibus

Why do you think it’s different in other countries? It’s the same all over Europe too. More and more kids have ADHD or other mental issues, social networking affects social norms etc.

Lord-Jobo

It's culture. American culture treats teaching and education like a free babysitting service, and pays teachers accordingly.

If our culture valued education we would value teachers and their ability to teach, and we so clearly do not.

SJC_Hacker

I don't know about other countries, my experience has only been in US high schools, mostly public. Maybe it would work in other countries, or private schools

microtherion

"non-lethal"? I wish I shared your sunny optimism.

lagniappe

Why would it need lethal? The students have that covered, this is America.

no_wizard

This seems like a dystopian nightmare to me

quantified

Exactly which direction are we headed at the moment that isn't a dystopian nightmare? Under-resourced towns will likely happily shed humans, except for the headmaster and security officers. It'll take a generation, though.

gostsamo

There should be a dog to stop the guard from talking to the kids.

grues-dinner

I thought it was the other way around, the guard is there to shoot the dog when it's attacking the children (with apologies to the very old joke about catching bears in trees).

BizarroLand

Nah, kids would have to wear armbands with tasers on them, required to put them on to enter the school building or open doors in the building. Their only human adult interaction will be with the guards that ensure they are banded up and who stay on campus to react to alarms from every kid who tries to remove their armbands.

Buses will be driven by AI as well, so they'll only see their parents for 10 minutes in the morning and for an hour or so during the occasional dinners they eat together, and otherwise kids will be entirely alienated and left alone.

But do not worry! There will be an AI companion for them to talk to at all hours, a nanny AI, or a nAInny, one that starts as a nanny when they are infants and will gradually grow into an educator, confidante, and true friend that will never betray them.

That nAInny will network with other nAInnies to find mates for their charges, coordinate dates, ensure that no hanky-panky goes on until they graduate college and get married, and be there to give pointers and instructions during the act of coitus to enhance the likelihood of producing offspring that their fellow nAInnies will get to take care of.

A truly symbiotic relationship where the humans involved never have any agency but never need it as their lives are happy and perfect in every way.

If you don't want to participate in that, you will be removed as an obstacle to the true happiness of the human race.

pessimizer

Classrooms? Lol, it's going to be BYOD and WFH. You'll only have to sit in front of the security guard and all the electronic monitoring while taking standardized tests, and this remaining expense will aggravate the state so much that it will replace the exam rooms with omniscient, thinking rootkits on every school-aged person's computing devices. However, since children could use an adult's device to avoid monitoring, once well established those rootkits will be installed on everyone's computing devices, at the hardware level.

If you object, it's because you hate children.

Eventually, there are no more misbehaving kids; there are misbehaving parents, whom children report to the trusted phones that taught them about the world – the phones that aligned your children's values with the values of the people who paid the people who designed the system.

spacecadet

I mean, I would object, but I also hate children.

crinkly

So my daughter got sent home with some math questions. I thought they looked a bit dry but thought nothing further of it. I checked the answers for her, which were all OK.

A couple of days later she came home and told me I was wrong about some of them, which I know I was not. Apparently they self-marked them as the teacher read the answers out. I decided to phone in and ask about the marking scheme, was told I was wrong there too, and that basically I should have done better at GCSE mathematics.

I relayed my mathematical credentials and immediately the tone changed. The discussion revealed that they'd generated the questions with Copilot, then fed those back into Copilot to generate the answer sheet, which had two incorrect answers on it.

The teacher and department head in question defended their position until I threatened to feed them to the examination board and leadership team. Their devotion to the tech was almost zealot-level religious thinking, which is not something I want to see in education.

I check all homework now carefully and there have been other issues since.

Avicebron

Great, and now you too are bagging your own groceries, mission accomplished.

DavidPiper

It seems a little disingenuous to equate the importance of bagging groceries and supporting your child's education when judging how much time and attention each deserves.

zamadatix

I think the meaning was more "and now there is yet another thing the education system was better suited to do the parent now needs to do instead" and less "your child's education is worth grocery bags".

adavid17

That is crazy. Curious: are you planning on raising it with the board, administrators, etc.? It's probably impacting other students (who don't have a parent checking their work), and teachers of other subjects in the school may be doing the same thing.

crinkly

I caused enough stink for them to be looking over their shoulder.

jv22222

FYI This school uses AI as teachers: https://alpha.school/santa-barbara/

They say they have good results?

simonw

Given that "Alpha School tuition ranges from $40,000 upwards" I wouldn't expect them to not say they get good results!

Balgair

https://www.astralcodexten.com/p/your-review-alpha-school

Here's a review of Alpha School and its methods. Honestly, the review is a good one and very well written. It's worth your time if you have an inkling of interest in alternative education and the use of AI in the classroom.

TLDR: The magic is not AI, it's that they bribe kids for good grades. Oops, sorry, 'motivate' kids for good grades.

fzeroracer

Teachers using AI to generate all of their lesson material, read student papers and write comments.

Students using AI to generate their papers and solve complex problems.

What are we as humans even doing. Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything. We can fire both teachers and students at the same time and save money on this whole education thing.

avidiax

> Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything.

Western countries have better conditions than much of the world for a variety of reasons, but among them is education and culture.

Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.

You might argue that the AI can be a mentor or can guide society appropriately. That's not wholly untrue, but if AI is "a bicycle for the mind", you still have to have the willingness and vision to go someplace with it. If you've never thought for yourself, never learned anything independently, I just don't see how people will avoid using AI to be "stupid faster".

skeezyboy

> You might argue that the AI can be a mentor or can guide society appropriately

It's a next-word predictor trained off datasets.

> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.

They said the same about TV, YouTube and even printed books. Short-form videos are now apparently the new evil (somehow).

Quick question: why was nobody complaining about these exact same "engagement" algorithms 20 years ago? Why only when TikTok short-form videos appeared? Popularity-based ranking was in search engines decades ago, but nobody cared then. No Cocomelon back then, coincidence?

blibble

> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.

absolutely

up until 2022 I was optimistic for the future

our current big problems: climate change, nuclear proliferation, global pandemics, dictatorships, antibiotic resistance, all seemed solvable over the long term

"AI" however is different

previously all human societies placed a high value on education

this is now gone, if anything spending time educating yourself is now a negative

I don't see how the species survives this new reality over the long term

ben_w

Teachers:

IIRC, it may be better to have the same number of real humans focussing on fewer pupils. Even when they're using VLMs as assistants.

Students:

While humans max out at a higher skill level than VLMs, I suspect that most (not all!) people who would otherwise have finished education at their local mandatory school leaving age, may be better off finishing school as soon as they can use a VLM.

But also #1: There's also a question of apprenticeships over the current schooling system. Robotics' AI are not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.

But also #2: Telling who is and isn't getting anything out of the education system is really hard; not only in historical systems like the UK's old eleven-plus exams, but today after university graduation when it can sometimes take a bit of effort to detect that someone only got a degree for the prestige and didn't really learn anything.

meindnoch

>There's also a question of apprenticeships over the current schooling system. Robotics' AI are not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.

This is the current meta. Today's knowledge workers are propertymaxxing like crazy and sending their kids to trade school. Well, at least those who see the writing on the wall. The second half of the 21st century will see the rise of the PLIWs [1]. Knowledge work will become extinct. The social order will be:

1. elites: a small aristocracy, who control the access to AI

2. middle class: PLIWs

3. lower class: children of today's knowledge workers who couldn't amass sufficient wealth for their kids to become PLIWs. Also called slop-feeders, as their job is to carry out the instructions coming from the AIs without questioning or understanding what they're doing.

________

[1] PLIW = Physical Labour, Inherited Wealth

jmogly

Would be great if that massive solar flare could hit like tomorrow.

HPsquared

It's a microcosm of the real economy.

bdisl

>What are we as humans even doing.

We are avoiding work that we don't want to do and therefore saving time, which is precisely what technology promised would help us do.

prmoustache

> and therefore saving time

Apparently we aren't.

crinkly

We are normalising everything and everyone into information grey goo.

bondarchuk

The problem with your proposal is that people need money to buy food and housing.

kingstnap

There have always been a disgusting number of people who treat education as a means to an end.

A teaching culture of thinking that all you have to do is graduate students + a learning culture of thinking all you have to do is graduate.

This was already at an 8. It got dialled up to 11 during COVID, and somehow dialled up to 21 after ChatGPT.

Normally, broken things can hobble along for a very long time. But the strain on what has become of education is so intense that my current guess is the chickens will come home to roost on this one sometime in 2026 to 2027.

skeezyboy

That system produced Silicon Valley, so it can't be that dysfunctional.

dandanua

> We can fire both teachers and students at the same time and save money on this whole education thing.

The current US administration has already started this process.

5pl1n73r

This is happening across all industries, unfortunately. Medical, engineering, pharmaceuticals, law enforcement, military, transportation, law... Thanks for a perfect post that describes the problem! We need more of these. Most people know they're doing it too, they just need to be told more.