Trying to teach in the age of the AI homework machine

math_dandy

I teach math at a large university (30,000 students) and have also gone “back to the earth”, to pen-and-paper, proctored exams.

Students don’t seem to mind this reversion. The administration, however, doesn’t like this trend. They want all evaluation to be remote-friendly, so that the same course with the same evaluations can be given to students learning in person or enrolled online. Online enrollment is a huge cash cow, and fattening it up is a very high priority. In-person, pen-and-paper assessment threatens their revenue growth model. Anyways, if we have seven sections of Calculus I, and one of these sections is offered online/remote, then none of the seven are allowed any in person assessment. For “fairness”. Seriously.

Balgair

I think you've identified the main issue here:

LLMs aren't destroying the University or the essay.

LLMs are destroying the cheap University or essay.

Cheap can mean a lot of things, like money or time or distance. But, if Universities want to maintain a standard, then they are going to have to work for it again.

No more 300+ person freshman lectures (where everyone cheated anyways). No more take-home Zoom exams. No more checked-out professors. No more grad students doing the real teaching.

I guess I'm advocating for the Oxbridge/St. John's approach, with class sizes under 10 where the proctor actually knows you and whether you've done the work. And I know, that is not a cheap way to churn out degrees.

stonemetal12

>I guess I'm advocating for the Oxbridge/St. John's approach, with class sizes under 10 where the proctor actually knows you and whether you've done the work. And I know, that is not a cheap way to churn out degrees.

I could understand US tuition if that were the case. These days, overworked adjuncts make it McDonald's at Michelin-star prices.

hombre_fatal

Funnily enough, I only had 10-person classes when I paid $125 for summer courses at a community college between expensive uni semesters.

ijk

Given that the adjuncts often aren't paid all that much better than the McDonalds workers...

rwyinuse

Over here in Finland, higher education is state funded, and the funding is allocated to universities mostly based on how many degrees they churn out yearly. Whether the grads actually find employment or know anything is irrelevant.

So, it's pretty hard for universities over here to maintain standards in this GenAI world, when the paying customer only cares about quantity, and not quality. I'm feeling bad for the students, not so much for foolish politicians.

Balgair

Gosh, I'm so myopic here. I'm mostly talking about US based systems.

But, of course, LLMs are affecting the whole world.

Yeah, I'd love to hear more about how other countries are affected by this tool. For Finland, I'd imagine that the feedback loop is the voters, but that's a bit too long, and the incentives and desires of the voting public get a bit too condensed into a few choices to matter [0].

What are you seeing out there as to how students feel about LLMs?

[0] funnily enough, like how the nodes in the neural net of an LLM get too saturated if they don't have enough parameters.

fakeBeerDrinker

After a short stint as a faculty member at a McU institution, I agree with much of this.

Provide machine problems and homework as exercises for students to learn from, but assign them a very low weight in the overall grade. Butt-in-seat assessments should make up the majority of the grade for many courses.

voilavilla

>> (where everyone cheated anyways)

This is depressing. I'm late Gen X; I didn't cheat in college (engineering, RPI), nor did my peers. Of course, there was very little writing of essays, so that's probably why, not to mention all of our exams were in-person paper-and-pencil (and this was 1986-1990, so no phones). Literally impossible to cheat. We did have study groups where people explained the homework to each other, which I guess could be called "cheating", but since we all shared, we tended to oust anyone who didn't bring anything to the table. Is cheating through college a common millennial / Gen Z thing?

Balgair

Even before LLMs, if you walked into any frat and asked to see their test bank, you'd get thousands of files. Though not technically cheating, having every test a professor ever gave was a huge advantage. Especially since most profs would just reuse tests and HWs without any changes anyway.

To my generation, it wasn't that cheating was a 'thing' as much as it was impossible to avoid. Profs were so lazy that any semi-good test prep would have you discover that the profs were phoning it in and had been for a while. Things like not updating the course page, with all the answers still on it, were unfortunately common. You could go and tell the prof, and most of us did, but then you'd be at a huge disadvantage relative to your peers who did download the answer key. Especially since the prof would still not update the questions! I want to make it clear: this was a common thing at R1 universities before LLMs.

The main issue is that at most R1s, the prof isn't really graded on their classes. That's maybe 5% of their tenure review. The thing they are most incentivized by is the amount of money they pull in from grants. I'm not all that familiar with R2 and below, but I'd imagine they have the same incentives (correct me if I'm wrong!). And with ~35% of students going to R2 and below, the incentives of the profs teaching the other ~65% of students aren't well correlated with actually teaching those students.

anon84873628

Here's how cheating advanced since then.

1. People in the Greek system would save all homework sets and exams in a "library" for future members taking a given course. While professors do change (and a single professor will try to mix up problems), with enough time you eventually have an inventory of all the possible problems, to either copy outright or study.

2. Eventually a similar thing moved online, both with "black market" hired help, then the likes of Chegg Inc.

3. All the students in a course join a WhatsApp or Discord group and text each other the answers. (HN had a good blog about this from a data science professor, but I can't find it now. College cheating has been mentioned many times on HN).

a_bonobo

I think this is where it's going to end up.

The masses get the cheap AI education. The elite get the expensive, small class, analog education. There won't be a middle class of education, as in the current system - too expensive for too little gain.

armchairhacker

Cheap "universities" are fine for accreditation. Exams can be administered via in-person proctoring services, which test the bare minimum. The real test would be when students are hired, in the probationary period. While entry-level hires may be unreliable, and even in the best case not help the company much, this is already a problem (perhaps it can be solved by the government or some other outside organization paying the new hire instead of the company, although I haven't thought about it much).

Students can learn for free via online resources, forums, and LLM tutors (the less-trustworthy forums and LLMs should primarily be used to assist understanding the more-trustworthy online resources). EDIT: students can get hands-on experience via an internship, possibly unpaid.

Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.

Balgair

> Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.

Okay, woah, I hadn't thought of that. I'm sitting here thinking that education for its own sake is one of the reasons that we're trying to get rid of labor and make LLMs. Like, I enjoy learning and think my job gets in the way of that.

I hadn't thought that people would want to just not do education of any sort anymore.

That's a little mind blowing.

tgv

10 is a small number. There's a middle ground. When I studied, we had lectures for all students, and a similar amount of time in "work groups," as they were called. That resembled secondary education: one teacher, around 30 students, but those classes were mainly focused on applying the newly acquired knowledge, making exercises, asking questions, checking homework, etc. Later, I taught such classes for programming 101, and it was perfectly doable. Work group teachers were also responsible for reviewing their students' tests.

But that commercially oriented boards are ruining education, that's a given. That they would stoop to this level is a bit surprising.

SoftTalker

Very common. Large lecture with a professor, and small "discussion sections" with a grad student for Q/A, homework help, exam review.

username223

Believe it or not, 300-person freshman lectures can be done well. They just need a talented instructor who's willing to put in the prep, and good TAs leading sections. And if the university fosters the right culture, the students mostly won't cheat.

But yeah, if the professor is clearly checked out and only interested in his research, and the students are being told that the only purpose of their education is to get a piece of paper to show to potential employers, you'll get a cynical death-spiral.

(I've been on both sides of this, though back when copy-pasting from Wikipedia was the way to cheat.)

mathgeek

> though back when copy-pasting from Wikipedia was the way to cheat

Back when I was teaching part time, I had a lot of fun looking at the confused looks on my students' faces when I said "you cannot use Wikipedia, but you'll find a lot of useful links at the bottom of any article there..."

BrenBarn

I see that pressure as well. I find that a lot of the problems we have with AI are in fact AI exposing problems in other aspects of our society. In this case, one problem is that the people who do the teaching and know what needs to be learned are the faculty, but the decisions about how to teach are made by administrators. And another problem is that colleges are treating "make money" as a goal. These problems existed before AI, but AI is exacerbating them (and there are many, many more such cases).

I think things are going to have to get a lot worse before they get better. If we're lucky, things will get so bad that we finally fix some shaky foundations that our society has been trying to ignore for decades (or even centuries). If we're not lucky, things will still get that bad but we won't fix them.

Brybry

Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.

So they know what students should be taught but I don't know that they necessarily know how any better than the administrators.

I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.

throwaway2037

> Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.

I attended two universities to get my computer science degree. The first was somewhat famous/prestigious, and I found most of the professors very unapproachable; they cared little about "teaching well". The second was a no-name, second-tier public uni, but I found the professors much more approachable, and they made more effort to teach well. I am still very conflicted about that experience. Sadly, the students were way smarter at the first uni, so the intellectual rigor of discussions was much higher than at my second uni. My final thoughts: "You win some; you lose some."

fastasucan

>I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.

There is a lot more on the plate when you are a kindergarten teacher, as the kids need a lot of supervision and teaching outside the "subject" matter: basic life skills, learning to socialize.

Conversely, at a university the students should generally handle their lives without your supervision; you can trust that all of them are able to communicate and to understand most of what you communicate to them.

So the subject matter expertise in kindergartens is how to teach stuff to kids. It's not about holding a fork, or not pulling someone's hair. Just as the subject matter expertise at a university can be maths. You rarely have both, and I don't understand how you suggest people get a PhD in maths, do enough research to become a professor, and at the same time get a degree in education.

dr_dshiv

Just watch out for who is certifying how things should be taught. It’s honestly one reason education is so bad and so slow to change.

Edit: and why perfectly capable professionals can’t be teachers without years of certification

hollandheese

>don't know that they necessarily know how any better than the administrators.

If someone is doing something day in and day out, they do gain knowledge of what works and what doesn't. So just by doing that, the professors typically know much more about how people should be taught than the administrators. Further, the administrators' incentives are not aligned towards ensuring proper instruction. They are aligned with increasing student enrollment and then cashing out whenever they personally can.

rtkwe

> I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.

I think this is partially due to the age of the students: by the time you hit college, the expectation is that you can do a lot of the learning yourself outside the classroom and will seek out additional assistance through office hours, self-study, or tutors/classmates if you aren't able to understand from the lecture alone.

It's also down to cost cutting: instead of having entirely distinct teaching and research faculty, universities require all professors to teach at least one class a semester. Usually, though, the large freshman and sophomore classes do get taught by quasi-dedicated 'teaching' professors instead of a researcher ticking a box.

stevage

This is very different in France. Studying to be a teacher at university level is a big deal.

BrenBarn

The instructors may not know the absolute best way to teach, but I think they do know more than the administrators. All my interaction with teacher training suggests to me that a large proportion of it is basically vacuous. On dimensions like the ones under discussion here (e.g., "should we use AI", "can we do this class online"), there is not really anything to "know": it's not like anyone is somehow a super expert on AI teaching. Teacher training in such cases is mostly just fads with little substantive basis.

Moreover, the same issues arise even outside a classroom setting. A person learning on their own from a book vs. a chatbot faces many of the same problems. People have to deal with the problem of AI slop in office emails and restaurant menus. The problem isn't really about teaching, it's about the difficulty of using AI to do anything involving substantive knowledge and the ease of using AI to do things involving superficial tasks.

Telemakhos

A PhD was historically a teaching degree: that’s what the D stands for.

california-og

I totally agree. I think the neo-liberal university model is the real culprit. Where I live, Universities get money for each student who graduates. This is up to 100k euros for a new doctorate. This means that the University and its admin want as many students to graduate as possible. The (BA&MA) students also want to graduate in target time: if they do, they get a huge part of their student loans forgiven.

What has AI done? I teach a BA thesis seminar. Last year, when AI wasn't used as much, around 30% of the students failed to turn in their BA theses. A 30% drop-out rate was normal. This year, only 5% dropped out, while the amount of ChatGPT-generated text has skyrocketed. I think there is a correlation: ChatGPT helps students write their theses, so they're not as likely to drop out.

The University and the admins are probably very happy that so many students are graduating. But also, some colleagues are seeing an upside to this: if more graduate, the University gets more money, which means fewer cuts to teaching budgets, which means that the teachers can actually do their job and improve their courses, for those students who are actually there to learn. But personally, as a teacher, I'm at a loss as to what to do. Some theses had hallucinated sources, some had AI slop blogs as sources, the texts are robotic and boring. But should I fail them, out of principle about what the ideal University should be? Nobody else seems to care. Or should I pass them, let them graduate, and reserve my energy to teach those who are motivated and are willing to engage?

avhception

I think one of the outcomes might be a devaluation of the certifications offered in the public job marketplace.

intended

You should fail them.

The larger work of the intellectual and academic forces of a liberal democracy is that of “verification”.

A core part of the output is showing that the output is actually what it claims to be.

The reproducibility crisis is a problem precisely because a standard was missed.

In a larger perspective, we have mispriced facts and verification processes.

They are treated as public goods, when they are hard to produce and uphold.

Yet they compete with entertainment and “good enough” output, that is cheaper to produce.

The choice to fail or pass someone doesn’t address the mispricing of the output. We need new ways to address that issue.

Yet a major part of the job you do is to hold the result up to a standard.

You and the institutions we depend on will continue to be crushed by these forces. Dealing with that is a separate discussion from the pass or fail discussion.

halgir

> Some theses had hallucinated sources, some had AI slop blogs as sources, the texts are robotic and boring. But should I fail them, out of principle about what the ideal University should be?

No, you should fail them for turning in bad theses, just like you would before AI.

ninetyninenine

Fail them. Only let AI-generated text that has been verified and edited to be true pass.

If they want to use AI make them use it right.

sien

In Australia, universities that offer remote study have places in large cities where people can do proctored exams. The course is done remotely, but the exam, which is often 50%+ of the final grade, is done at a place that offers proctored exams as a service.

Can't this be done in the US as well ?

wrp

The Open University in the UK started in 1969. Their staff have a reputation for good interaction with students, and I have seen very high quality teaching materials produced there. I believe they have always operated on the basis of remote teaching but on-site evaluation. The Open University sounds like an all-round success story and I'm surprised it isn't mentioned more in discussions of remote education.

fn-mote

Variations in this system are in active use in the US as well.

Do you feel it is effective?

It seems to me that there is a massive asymmetry in the war here: proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.

I expect the system will only catch a small fraction of the cheating that occurs.

directevolve

> I expect the system will only catch a small fraction of the cheating that occurs.

The main kind of cheating we need them to prevent is effective cheating - the kind that can meaningfully improve the cheater's score.

Requiring test-takers to put their belongings in a locker, use proctor-provided resources, and be monitored in a proctor-provided room puts substantial limits on effective cheating. That's pretty much the minimum that any proctor does.

It may not stop 100% of effective cheating 100% of the time, but it would make a tremendous impact in eliminating LLM-based cheating.

If you're worried about corrupt proctors, that's another matter. National brands that are both self- and externally-policed and depend on a good reputation to drive business from universities would help.

With this system, I expect that it would not take much to avoid almost all the important cheating that now occurs.

baby_souffle

> I expect the system will only catch a small fraction of the cheating that occurs.

It'll depend a lot on who/where/how is doing the screening and what tools (if any) are permitted.

Remember that bogus program for TI8{3,4} series calculators that would clear the screen and print "MEMORY CLEAR"? If the proctor was just looking for that string and not jumping through the hoops to _actually_ clear the memory, then it was trivial to keep notes / solvers ... etc. on the calculator.

wisty

You can't stop people from hiring someone who looks similar to sit the exam, or from receiving messages in Morse code via Bluetooth. It's hard to stop a palm card.

But it stops a casual cheater from having ChatGPT on a second device.

sien

From what I've seen it works.

There is definitely a war between cheaters and people catching them. But a lot of people can't be bothered and if learning the material can be made easier than cheating then it will work.

You can imagine proctoring halls of the future being Faraday cages with a camera watching people do their test.

aerhardt

I did a proctored exam for Harvard Extension at the British Council in Madrid. The staff proctor exams year-round for their in-house stuff, so, their motivation notwithstanding, they know what they’re doing.

barry-cotter

> proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.

If they don’t catch them they don’t have a business model. They have one job. The University of London, Open University and British Council all have 50+ years experience on proctoring university exams for distance learning students and it’s not like Thomson Prometric haven’t thought about how to do it either, even if they (mostly?) do computerised exams.

redcobra762

If you've been to one of these testing centers, you'd realize it's not easy to cheat, and the companies that run them take cheating seriously. The audacity of someone to cheat in that environment would be exceptionally high, and just from security theater alone I suspect almost no actual cheating takes place.

dgfitz

Way back, like 25 years ago, in what we call high school in the US, my statistics teacher tried her damnedest to make final exams fair. I sat next to someone I had a huge crush on, and offered to take their exam for them. I needed a ‘c’ to ace the class, and she needed an ‘a’ to pass. There were 3 different tests and sets of questions/scantrons. I got her the grade she needed; she did not get me the grade I needed.

So to your point, it’s easy to cheat even if the proctor tries to prevent it.

throwaway2037

Can you tell us: Is "remote study" a relatively recent phenomenon in AU -- COVID era, or much older? I am curious to learn more. And, what is the history behind it? Was it created/supported because AU is so vast and many people in a state might not live near a campus?

Also: I think your suggestion is excellent. We may see this happen in the US if AI cheating gets out of control (which it well might).

stevage

It definitely existed before, particularly as a revenue stream for some of the smaller universities such as USQ. I think for the big ones it was a bit beneath them, then suddenly COVID came and we had lockdown for a long time in Melbourne. Now it's an expectation that students can access everything from home, but the flipside is everyone complains about how much campus life has declined. Students are paying more for a lower quality education and less amenity.

dirkc

The same thing exists in South Africa; the university is called UNISA [1]. It has existed for a long time - since my parents' time. Lots of people who can't afford to go to university (as in, need to earn an income) study with them.

[1] - https://www.unisa.ac.za/sites/corporate/default

globalnode

Where I'm studying, it's proctored online. They have a custom browser and take over your computer while you're doing the exam. Creepy AF, but it saves travelling 1,300 km to sit an exam.

dehrmann

Wouldn't spending $300 on a laptop to cheat on an exam for a class you're paying thousands for make sense? It would probably improve your grade more than the textbook.

bigfatkitten

Not even just large cities. Decent sized towns have them too, usually with local high school teachers or the like acting as proctors.

math_dandy

Proctoring services done well could be valuable, but it’s smaller rural and remote communities that would benefit most. Maybe these services could be offered by local schools, libraries, etc.

mac-mc

It does feel like easy side money for local schools and teachers that will have empty classrooms after 5pm.

baq

nope. too much impact on profit.

aaplok

> Students don’t seem to mind this reversion.

Those I ask are unanimously horrified that this is the choice they are given. They are devastated that the degree for which they are working hard is becoming worthless, yet they all assert they don't want exams back. Many of them are neurodivergent and do miserably in exam conditions but, in contrast, excel in open tasks that allow them to explore, so my sample is biased, but still.

They don't have a solution. As the main victims they are just frustrated by the situation, and at the "solutions" thrown at it by folks who aren't personally affected.

aketchum

It is always interesting to me when people say they are "bad test takers". You mean you are bad at the part where we find out how much you know? Maybe you just don't know the material well enough.

Caveat: I am not ND, so maybe this is a real concern for some, but in my experience the people who said this did not know the material. And the accommodations for tests are abused by rich kids more than they are utilized by those who need them.

doctorwho42

As a self proclaimed bad test taker, it's not that I don't know the information. It's that I am capable of second guessing myself in a particular way in which I can build a logical framework to suggest another direction or answer.

This presents itself as a bad test taker, I rarely ever got above a B+ on any difficult test material. But you put me in a lab, and that same skillset becomes a major advantage.

Minds come in a variety of configurations; I'd suggest considering that before taking your own experience as definitive.

qwertycrackers

I think the reverse exists as well. I think I am a much better test taker than average, and this has very clearly given me some advantages that come from the structure of exam-focused education. Exam taking is a skill and it's possible to be good at it, independent of the underlying knowledge. Of course knowing the material is still required.

However you are correct in noticing that there are an anomalously high number of "bad test takers" in the world. Many students are probably using this as a flimsy excuse for poor performance. Overall I think the phenomenon does exist.

eutropia

datum: I'm ND, but I'm a good test-taker. There were plenty of tests for subjects where I didn't need to study because I was adept at reading the question and correctly assuming what the test-creator wanted answered, and using deduction to reduce possibilities down enough that I could be certain of an answer - or by using meta-knowledge of where the material from the recent lectures was to narrow things down, again, not because I knew the material all that well but because I could read the question. Effectively, I had a decent grasp of the "game" of test-taking, which is rather orthogonal to the actual knowledge of the class material.

542354234235

Tests are just a proxy for understanding and/or application of a concept. Being good at the proxy doesn’t necessarily mean you understand the concept, just like not being good at the proxy doesn’t mean you don’t. Finding other proxies we can use allows for decoupling knowledge from a specific proxy metric.

If I was evaluating the health of various companies, I wouldn’t use one metric for all of them, as company health is kind of an abstract concept and any specific metric would not give me a very good overall picture and there are multiple ways for a company to be healthy/successful. Same with people.

There are lots of different ways to utilize knowledge in real world scenarios, so someone could be bad at testing and bad at some types of related jobs but good at other types of related jobs. So unless “test taking” as a skill is what is being evaluated, it isn’t necessary to be the primary evaluation tool.

godelski

I don't think I understand, as a terrible test taker myself.

The solution I use when teaching is to let evaluation primarily depend on some larger demonstration of knowledge. Most often it is CS classes (e.g. Machine Learning), so I don't put much weight on homework and tests and instead make the course project-driven. I don't care if they use GPT or not. The learning happens by them doing things.

This is definitely harder in other courses. In my undergrad (physics) our professors frequently gave take-home exams. Open book, open notes, open anything but your friends and classmates. This did require trust, but it was usually pretty obvious when people worked together. They cared more about evaluating and pushing those of us who cared than about whether we cheated. The exams required multiple days' worth of work, and you can bet every student was coming to office hours (we had much more access during that time too). The trust and understanding that effort mattered actually resulted in very little cheating. We felt respected, there was a mutual understanding, and tbh, it created healthy competition among us.

Students cheat because they know they need the grade and that at the end of the day they won't actually be evaluated on what they learned, but rather on what arbitrary score they got. Fundamentally, this requires a restructuring, but that's been a long time coming. The cheating literally happens because we just treated Goodhart's Law as a feature instead of a bug. AI is forcing us to contend with metric hacking; it didn't create it.

armchairhacker

IMO exams should be on the easier side and not require much computing (mainly knowledge, and not unnecessary memorization). They should be a baseline, not a challenge for students who understand the material.

Students are more accurately measured via long, take-home projects, which are complicated enough that they can’t be entirely done by AI.

Unless the class is something that requires quick thinking on the job, in which case there should be “exams” that are live simulations. Ultimately, a student’s GPA should reflect their competence in the career (or possible careers) they’re in college for.

2OEH8eoCRo0

> Many of them are neurodivergent who do miserably in exam conditions

Isn't this part of life? Learning to excel anyway?

JoshTriplett

Life doesn't tend to take place under exam conditions, either.

aaplok

I don't think so? I teach maths, not survival or social pressure. If a student in my class is a competent mathematician why should they not be acknowledged to be that?

thatfrenchguy

> Many of them are neurodivergent who do miserably in exam conditions

I mean, for every neurodivergent person who does miserably in exam conditions you have one that does miserably in homework essays because of absence of clear time boundaries.

BeFlatXIII

Autism vs. ADHD

math_dandy

We have an Accessible Testing Center that will administer and proctor exams under very flexible conditions (more time, breaks, quiet/privacy, …) to help students with various forms of neurodivergence. They’re very good and offer a valuable service without placing any significant additional burden on the instructor. Seems to work well, but I don’t have first hand knowledge about how these forms of accommodations are viewed by the neurodivergent student community. They certainly don’t address the problem of allowing « explorer » students to demonstrate their abilities.

aaplok

Yes I think the issue is as much that open tasks make learning interesting and meaningful in a way that exams hardly can do.

This is the core of the issue really. If we are in the business of teaching, as in making people learn, exams are a pretty blunt and ineffective instrument. However since our business is also assessing, proctoring is the best if not only trustworthy approach and exams are cheap in time, effort and money to do that.

My take is that we should just (properly) assess students at the end of their degree. Spend time (say, a full day) with them but do it only once in the degree (at the end), so you can properly evaluate their skills. Make it hard so that the ones who graduate all deserve it.

Then the rest of their time at university should be about learning what they will need.

BriggyDwiggs42

I’ve had access to that at my school and it’s night and day. Not being as stressed about time and being in a room alone bumps me up by a grade letter at least.

GeoAtreides

>Many of them are neurodivergent

if "many" are "divergent" then... are they really divergent? or are they the new typical?

aaplok

Many of the students I talk to. I don't claim they form a representative sample of the student cohort, on the contrary. I guess that the typical student is typical but I have not gone to check that.

jay_kyburz

I think having one huge exam at the end is the problem. An exam and assessment every week would be best.

Less stress at the end of the term, and the student can't leave everything to the last minute, they need to do a little work every week.

tbihl

Too much proctoring and grading, not enough holding students' hands for stuff they should have learned from reading the textbook.

aerhardt

I have a Software Engineering degree from Harvard Extension and I had to take quite a few exams in physically proctored environments. I could very easily manage in Madrid and London. It is not too hard for either the institution or the student.

I am now doing an Online MSc in CompSci at Georgia Tech. The online evaluation and proctoring is fine. I’ve taken one rather math-heavy course (Simulation) and it worked. I see the program however is struggling with the online evaluation of certain subjects (like Graduate Algorithms).

I see your point that a professor might prefer to have physical evaluation processes. I personally wouldn’t begrudge the institution as long as they gave me options for proctoring (at my own expense even) or the course selection was large enough to pick alternatives.

mountainb

Professional proctored testing centers exist in many locations around the world now. It's not that complicated to have a couple people at the front, a method for physically screening test-takers, providing lockers for personal possessions, providing computers for test administration, and protocols for checking multiple points of identity for each test taker.

This hybrid model is vastly preferable to "true" remote test taking in which they try to do remote proctoring to the student's home using a camera and other tools.

aerhardt

That’s what I did at HES and it was fine. Reasonable and not particularly stressful.

remarkEon

In my undergraduate experience, the location of which shall remain nameless, we had ample access to technology but the professors were fairly hostile to it and insisted on pencil and paper for all technical classes. There were some English or History classes here and there that allowed a laptop for writing essays during an "exam" that was a 3-hour experience with the professor walking around the whole time. Anyway, when I was younger I thought the pencil-and-paper thing was silly. Why would we eschew brand new technology that can make us faster! And now that I'm an adult, I'm so thankful they did that. I have such a firm grasp of the underlying theory and the math precisely because I had to write it down, on my own, from memory. I see what these kids do today and they have been so woefully failed.

Teachers and professors: you can say "no". Your students will thank you in the future.

RHSeeger

Remote learning also opens up a lot of opportunities to people that would not otherwise be able to take advantage of them. So it's not _just_ the cash cow that benefits from it.

coderatlarge

is it ok for students to submit images of hand-written solutions remotely?

seriously it reminds me of my high school days when a teacher told me i shouldn’t type up my essays because then they couldn’t be sure i actually wrote them.

maybe we will find our way back to live oral exams before long…

plantwallshoe

I’m enrolled in an undergraduate CS program as an experienced (10 year) dev. I find AI incredibly useful as a tutor.

I usually ask it to grade my homework for me before I turn it in. I usually find I didn’t really understand some topic, and the AI highlights this and helps set my understanding straight. Without it I would have just continued on with an incorrect understanding of the topic for 2-3 weeks while I waited for the assignment to be graded. As an adult with a job and a family this is incredibly helpful, as I do homework at 10pm and all the office hours slots are in the middle of my workday.

I do admit though it is tough figuring out the right amount to struggle on my own before I hit the AI help button. Thankfully I have enough experience and maturity to understand that the struggle is the most important part and I try my best to embrace it. Myself at 18 would definitely not have been using AI responsibly.

davidcbc

When I was in college if AI was available I would have abused it way too much and been much worse off for it.

This is my biggest concern about GenAI in our field. As an experienced dev I’ve been around the block enough times to have a good feel for how things should be done and can catch when an LLM goes off on a tangent that is a complete rabbit hole, but if this had been available 20 years ago I would never have learned and become an experienced dev, because I absolutely would have over-relied on an LLM. I worry that 10 years from now getting a mid-career dev will be like trying to get a COBOL dev now, except COBOL is a lot easier to learn.

danielhep

I’m wondering what the undergrad CS course is like as an experienced dev, and why you decided to do it. I have been a software developer for 5 years with an EE degree, and as I do more software engineering and less EE I feel like I am missing some CS concepts that my colleagues have. Is this your situation too, or did you have another reason? And why not a masters?

plantwallshoe

A mix of feeling I’m “missing” some CS concepts and just general intellectual curiosity.

I am planning on doing a masters but I need some undergrad CS credits to be a qualified candidate. I don’t think I’m going to do the whole undergrad.

Overall my experience has been positive. I’ve really enjoyed Discrete Math and coming to understand how I’ve been using set theory without really understanding it for years. I’m really looking forward to my classes on assembly/computer architecture, operating systems, and networks. They did make me take CS 101-102 as prereqs, which were a total waste of time and money, but I think those are the only two mandatory classes with no value to me.

lispisok

Computer architecture and operating systems are really important classes imo. Maybe you don't touch the material again in your career, but do you really want the thing you're supposed to be programming to be a black box? Personally I'm not ok working with black boxes.

aryamaan

As I am also mildly thinking about doing a masters because I want to break into AI research, I am curious what your motivations are, if you would be open to sharing those.

mathgeek

> And why not a masters?

Not GP, but in my experience most MSc programs will require that you have substantial undergrad CS coursework in order to be accepted. There are a few programs designed for those without that background.

glial

Shout out to the fantastic Georgia Tech online masters program in CS:

https://pe.gatech.edu/degrees/computer-science

(not affiliated, just a fan)

giraffe_lady

I have a friend who is self-medicating untreated adhd with street amphetamines and he talks about it similarly. I can't say with any certainty that either of you is doing anything wrong or even dangerous. But I do think you both are overconfident in your assessment of the risks.

sshine

I teach computer science / programming, and I don't know what a good AI policy is.

On the one hand, I use AI extensively for my own learning, and it's helping me a lot.

On the other hand, it gets work done quickly and poorly.

Students mistake mandatory assignments for something they have to overcome as effortlessly as possible. Once they're past this hurdle, they can mind their own business again. To them, AI is not a tutor, but a homework solver.

I can't ask them to not use computers.

I can't ask them to write in a language I made the compiler for that doesn't exist anywhere, since I teach at a (pre-university) level where that kind of skill transfer doesn't reliably occur.

So far we do project work and oral exams: project work because it relies on cooperation and the assignment and evaluation are open-ended (there's no single task description that can be fed into an LLM), and oral exams because it becomes obvious how skilled they are and how deep their knowledge is.

But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life and won't lead them to a meaningful existence as a professional programmer.

Teaching Linux basics doesn't suffer the same because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.

Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.

timr

I'm not that old, and yet my university CS courses evaluated people with group projects, and in-person paper exams. We weren't allowed to bring computers or calculators into the exam room (or at least, not any calculators with programming or memory). It was fine.

I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.

If anything, the classes that required extensive paper-writing for evaluation are the ones that seem to be in trouble to me. I guess we're back to oral exams and blue books for those, but again...worked fine for prior generations.

NitpickLawyer

> and in-person paper exams.

Yup. ~25 years ago competitions / NOI / leet_coding as they call it now were in a proctored room, computers with no internet access, just plain old borland c, a few problems and 3h of typing. All the uni exams were pen & paper. C++ OOP on paper was fun, but iirc the scoring was pretty lax (i.e. minor typos were usually ignored).

intended

Thing is, this hits the scaling problem in education and fucking hard.

There’s such a shortfall of teachers globally, and the role is a public good, so it’s constantly underpaid.

And if you are good - why would you teach ? You’d get paid to just take advantage of your skills.

And now we have a tool that makes it impossible to know if you have taught anyone because they can pass your exams.

eru

> I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.

You know that grading paper exams is a lot more hassle _for the teachers_?

Your overall point might or might not still stand. I'm just responding to your 'I don't see why this is so hard'. Show some imagination for why other people hold their positions.

(I'm sure there's lots of other factors that come into play that I am not thinking of here.)

timr

...and yet, somehow we managed?

> Show some imagination for why other people hold their positions.

I say that as someone who has also graded piles of paper exams in graduate school (also not that long ago!)

I don't believe the argument you are making is true, but if the primary objection really is that teachers have to grade, then no, I don't have any sympathy.

bongodongobob

Why can't the teachers use LLMs to grade?

throwawayffffas

I'm not too old either and in my university, CS was my major, we did group projects and in person paper exams as well.

We wrote C++ on paper for some questions and were graded on it. Of course the tutors were lenient on the syntax; they cared about the algorithm and the data structures, not so much the code. They did test syntax knowledge as well, but more in code-reasoning segments, i.e. questions like what's the value of a after these two statements, or after this loop is run.

We also had exams in the lab with computers disconnected from the internet. I don't remember the details of the grading but essentially the teaching team was in the room and pretty much scored us then and there.

Aurornis

> Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.

It has been interesting to see this idea propagate throughout online spaces like Hacker News, too. Even before LLMs, the topic of cheating always drew a strangely large number of pro-cheating comments from people arguing that college is useless, a degree is just a piece of paper, knowledge learned in classes is worthless, and therefore cheating is a rational decision.

Meanwhile, whenever I’ve done hiring or internship screens for college students it’s trivial to see which students are actually learning the material and which ones treat every stage of their academic and professional careers as a game they need to talk their way through while avoiding the hard questions.

thresher

I teach computer science / programming, and I know what a good AI policy is: No AI.

(Dramatic. AI is fine for upper-division courses, maybe. Absolutely no use for it in introductory courses.)

Our school converted a computer lab into a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and we're extending it to more courses in the fall.

An upside: our exams are now auto-graded (professors are happy) and students get to compile/run/test code on exams (students are happy).
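
For a rough idea, the grading side of such an internal server might look something like this (a sketch only: the test cases, function name, and file layout here are made up, not our actual system):

    # Rough sketch of an exam autograder (illustrative only; the test cases,
    # function name, and file layout are made up). It loads a student's file
    # and reports the fraction of tests passed.
    import importlib.util
    import traceback

    TESTS = [  # (function name, args, expected result)
        ("fizzbuzz", (3,), "Fizz"),
        ("fizzbuzz", (5,), "Buzz"),
        ("fizzbuzz", (15,), "FizzBuzz"),
        ("fizzbuzz", (7,), "7"),
    ]

    def grade(submission_path):
        """Load the student's submission and return the fraction of tests passed."""
        spec = importlib.util.spec_from_file_location("submission", submission_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

        passed = 0
        for name, args, expected in TESTS:
            try:
                if getattr(module, name)(*args) == expected:
                    passed += 1
            except Exception:
                traceback.print_exc()  # a crashing submission just fails this test
        return passed / len(TESTS)

    if __name__ == "__main__":
        print("Score: {:.0%}".format(grade("submission.py")))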

>Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.

This is the real demon to vanquish. We're approaching course design differently now (a work in progress) to tie coding exams in the lab to the homework, so that solving the homework (worth a pittance of the grade) is direct preparation for the exam (the lion's share of the grade).

sshine

> Our school converted a computer lab into a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and we're extending it to more courses in the fall.

Excellent approach. It requires a big buy-in from the school.

Thanks for suggesting it.

I'm doing something for one kind of assignment inspired by the game "bashcrawl" where you have to learn Linux commands through an adventure-style game. I'm bundling it in a container and letting you submit your progress via curl commands, so that you pass after having run a certain set of commands. Trying to make the levels unskippable by using tarballs. Essentially, if you can break the game instead of beating it honestly, you get a passing grade, too.
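
To give a rough idea, the receiving end of those curl submissions could be as simple as this sketch (the endpoint, tokens, and pass rule are hypothetical, not the actual assignment):

    # Bare-bones sketch of a "submit your progress via curl" server (hypothetical
    # tokens, names, and endpoint). Each level's tarball would hide a token; a
    # student passes once every token has been POSTed, e.g.:
    #   curl -d "student=alice&token=tok-a1" http://localhost:8000/
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LEVEL_TOKENS = {"level1": "tok-a1", "level2": "tok-b2", "level3": "tok-c3"}
    seen = {}  # student id -> set of tokens submitted so far

    class SubmitHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length).decode()
            fields = dict(pair.split("=", 1) for pair in body.split("&") if "=" in pair)
            student = fields.get("student", "")
            token = fields.get("token", "")
            if token in LEVEL_TOKENS.values():
                seen.setdefault(student, set()).add(token)
            found = len(seen.get(student, set()))
            reply = "PASS\n" if found == len(LEVEL_TOKENS) else "%d/%d levels\n" % (found, len(LEVEL_TOKENS))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(reply.encode())

    if __name__ == "__main__":
        HTTPServer(("", 8000), SubmitHandler).serve_forever()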

timemct

>Our school converted a computer lab into a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and we're extending it to more courses in the fall.

As a higher-education (university) IT admin who is responsible for the CS program's computer labs and is also enrolled in this CS program, I would love to hear more about this setup, please & thank you. As recently as last semester, CS professors have been doing pen-and-paper exams and group projects. This setup sounds great!

gchallen

We've been doing this at Illinois for 10 years now. Here's the website with a description of the facility: https://cbtf.illinois.edu/. My colleagues have also published multiple papers on the testing center—operations, policies, results, and so on.

It's a complete game changer for assessment—anything, really, but basic programming skills in particular. At this point I wouldn't teach without it.

photochemsyn

Isn't auto-grading cheating by the instructors? Isn't part of their job providing expert feedback by actually reading the code the students have generated and offering suggestions for improvement, even for exams? A good educational program treats exams as learning opportunities, not just evaluations.

So if the professors can cheat and they're happy about having to do less teaching work, thereby giving the students a lower-quality educational experience, why shouldn't the students just get an LLM to write code that passes the auto-grader's checks? Then everyone's happy - the administration is getting the tuition, the professors don't have to grade or give feedback individually, and the students can finish their assignments in half an hour instead of having to stay up all night. Win win win!

gchallen

Immediate feedback from a good autograder provides a much more interactive learning experience for students. They are able to face and correct their mistakes in real time until they arrive at a correct solution. That's a real learning opportunity.

The value of educational feedback drops rapidly as time passes. If a student receives immediate feedback and the opportunity to try again, they are much more likely to continue attempting to solve the problem. Autograders can support both; humans, neither. It typically takes hours or days to manually grade code just once. By that point students are unlikely to pay much attention to the feedback, and the considerable expense of human grading makes it unlikely that they are able to try again. That's just evaluation.

And the idea that instructors of computer science courses are in a position to provide "expert feedback" is very questionable. Most CS faculty don't create or maintain software. Grading is usually done by either research-focused Ph.D. students or undergraduates with barely more experience than the students they are evaluating.

sshine

> Isn't auto-grading cheating by the instructors?

Certainly not. There's a misconception at play here.

Once you have graded a few thousand assignments, you realize that people make the same mistakes. You think "I could do a really good write-up for the next student to make this mistake," and so you do and you save it as a snippet, and soon enough, 90% of your feedback are elaborate snippets. Once in a while you realize someone makes a new mistake, and it deserves another elaborate snippet. Some snippets don't generalise. That's called personal feedback. Other snippets generalise insanely. That's called being efficient.

Students don't care if their neighbors got the same feedback if the feedback applies well and is excellent. The difficult part is making that feedback apply well. A human acting as the robot will do that job better. And building a bot that gives the right feedback based on patterns is... actually a lot of work, even compared to copy-pasting snippets thousands of times.

But if you repeat an exercise enough times, it may be worth it.
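
As a rough sketch of what I mean by pattern-based snippets (the patterns and messages below are invented examples, not a real feedback bank):

    # Invented example of keying canned feedback snippets to common mistake
    # patterns. A real bank would grow out of grading thousands of submissions.
    import re

    FEEDBACK_SNIPPETS = [
        # (regex matched against the submission, canned feedback)
        (r"==\s*(True|False)", "Comparing against True/False directly is redundant; use the boolean expression itself."),
        (r"except\s*:", "A bare `except:` swallows every error, including typos; catch the specific exception you expect."),
        (r"range\(len\(", "range(len(...)) usually means you want to loop over the items directly, or use enumerate."),
    ]

    def feedback_for(submission):
        """Return every canned snippet whose pattern appears in the submission."""
        return [msg for pattern, msg in FEEDBACK_SNIPPETS if re.search(pattern, submission)]

    if __name__ == "__main__":
        code = "for i in range(len(xs)):\n    if flag == True:\n        print(xs[i])"
        for note in feedback_for(code):
            print("-", note)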

Students are incentivised to put in the work in order to learn. Students cannot learn by copy-pasting from LLMs.

Instructors are incentivised to put in the work in order to provide authentic, valuable feedback. Instructors can provide that by repeating their best feedback when applicable. If instructors fed assignments to an LLM and said "give feedback", that'd be in the category of bullshit behavior we're criticising students for.

nprateem

Is it in a Faraday cage too, or do you just confiscate their phones? Or do you naively believe they aren't just using AI on their phones?

intended

You should look at the cheating stories that come out of India, China, South Korea and other places that have been dealing with this dynamic for decades upon decades.

I know of a time when America didn’t have this problem, and I could see it ramping up because of my experience in India.

People will spend incredible efforts to cheat.

Like stories of parents or conspirators scaling buildings to whisper answers to students from windows.

throwawayffffas

I don't know what they do, but when we did it back in the 2000's there was a no phone policy and the exams were proctored.

People could try to cheat, but it would be pretty stupid to think they would not catch you.

JoshTriplett

> Oral exams because it becomes obvious how skilled they are, how deep their knowledge is.

Assuming you have access to a computer lab, have you considered requiring in-class programming exercises, regularly? Those could be a good way of checking actual skills.

> Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.

And you'll frustrate the handful of students who know what they're doing and want to use a programmer's editor. I know that I wouldn't have wanted to type a large pile of code into a web anything.

Aeolun

> I know that I wouldn't have wanted to type a large pile of code into a web anything.

I might not have liked that, but I sure would have liked to see my useless classmates being forced to learn without cheating.

mac-mc

You can provide vscode, vim and emacs all in some web interface, and those are plenty good enough for those use cases. Choosing the plugin list for each would also be a good bikeshedding exercise for the department.

Even IntelliJ has gateway

noisy_boy

> Even IntelliJ has gateway

By IntelliJ's own (on-machine) standards, Gateway is crap. I use the vi emulation mode (using ideavim) and the damn thing gets out of sync unless you type at like 20wpm or something. Then it tries to rollback whatever you type until you restart it and retry. I can't believe it is made by the same Jetbrains known for their excellent software.

rKarpinski

> But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life

Wow.

paulluuk

Yeah, I've had teachers like that, who tell you that you're a "waste of life" and "what are you doing here?" and "you're dumb", so motivational.

I guess this "tough love" attitude helps for some people? But I think mostly it's just that people think it works for _other_ people; rarely do people think it works when applied to themselves.

Like, imagine the school administration walking up to this teacher and saying "hey dum dum, you're failing too many students and the time you've spent teaching them is a waste of life."

Many teachers seem to think that students go to school/university because they're genuinely interested and motivated. But more often than not, they're there because of societal pressure, because they know they need a degree to have any kind of decent living standard, and because their parents told them to. Yeah, you can call them names, call them lazy or whatever, but that's kinda like pointing at poor people and saying they should invest more.

noisy_boy

> Yeah, I've had teachers like that, who tell you that you're a "waste of life" and "what are you doing here?" and "you're dumb", so motivational.

I'm sure GP isn't calling them dum-dum to their face. If they can't even do basic stuff, which seems to be their criteria here for the name calling, maybe a politely given reality-check isn't that bad. Some will wake up to the gravity of their situation and put in the hard work and surprise their teacher.

> Yeah you can call them names, call them lazy or whatever, but that's kinda like pointing at poor people and saying they should invest more.

They _should_ invest more, because in this case the "investment" is something that the curriculum simply demands - dedication and effort. I mean, unless one is a genius, since when is that demand unreasonable? You want to work with people who got their degree without knowing their shit? (Not saying that everyone who doesn't have a degree isn't knowledgeable - I've worked with very smart self-taught people.)

chasd00

I have a hard time sympathizing with a student who cheated for 3 semesters then hit a brick wall when they finally can’t cheat. A student struggling with the material is one thing but a student finally getting caught after cheating through three semesters is another. “Dum dum” is being kind IMO.

mwigdahl

You are misrepresenting what the original poster said. He did not say that he actually called kids "dum-dums" or that the kids were, themselves, a waste of life. He said that using AI to blast through assignments without learning anything from them was a waste of life.

Frankly I applaud that approach. Classes are to convey knowledge, even if the student only gives a shit about the diploma at the end of the road. At least someone cares enough to tell these students the truth about where that approach is going to take them in life.

simoncion

In response to the statement

> ...I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life

you said:

> Yeah, I've had teachers like that, who tell you that you're a "waste of life"...

You'd do well to slow down and re-read more carefully whenever you find something that offends you. You've failed to correctly identify the target of the "waste of life" commentary.

Moreover, were I paying tens of thousands of dollars a year for a self-improvement product, I'd be furious if the folks operating that product failed to notify me until three years into a four year program that I'd been getting next-to-nothing from the program. That's the sort of information you need right away, rather than thirty, sixty, (or more) thousand dollars in.


NegativeLatency

I had numerous in person paper exams in CS (2009 - 2013) where we had to not only pseudo code an algorithm from a description, but also do the reverse of saying/describing what a chunk of pseudo code would do.
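For illustration, the reverse direction might be something like the fragment below (a made-up example of mine, not from those actual exams), where the task is to say in plain English what the code computes:

    #include <stdio.h>

    /* Hypothetical "describe what f does" exam fragment. */
    int f(int *a, int n) {
        int best = a[0];
        for (int i = 1; i < n; i++)
            if (a[i] > best)
                best = a[i];
        return best;
    }

    int main(void) {
        int a[] = {3, 7, 2, 9, 4};
        /* Expected answer: f returns the largest of the n elements, here 9. */
        printf("%d\n", f(a, 5));
        return 0;
    }

An answer like "it returns the largest element of the array" demonstrates understanding with no compiler, IDE, or AI in the loop.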

Aeolun

You can get one of those card punching machines and have them hand in stacks of cards?

downboots

And don't forget to get on their case with accusations of technology use that equate to the Turing test

sas224dbm

Grandpa can help with that too

protocolture

When I was studying games programming we used an in house framework developed by the lecturers for OGRE.

At the time it was optional, but I get the feeling that if they still use that framework, it just became mandatory, because it has no internet facing documentation.

That said, I imagine they might have chucked it in for Unity before AI hit, in which case they are largely out of luck.

>But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life and won't lead them to a meaningful existence as a professional programmer.

This happened to me with my 3d maths class, and I was able to power through a second run. But I am not sure I learned anything super meaningful, other than I should have been cramming better.

ghusto

I've always thought that the education system was broken and next to worthless. I've never felt that teachers ever tried to _teach_ me anything, certainly not how to think. In fact, I saw most attempts at thought squashed because they didn't fit neatly into the syllabus (and so couldn't be graded).

The fact that AI can do your homework should tell you how much your homework is worth. Teaching and learning are collaborative exercises.

mrweasel

> The fact that AI can do your homework should tell you how much your homework is worth.

Homework is there to help you practise these things and to help you progress, to find the areas where you need more help and more practice. It is collaborative: it's you, your fellow students, and your teachers/professors.

I'm sorry that you had bad teachers, or had needs that weren't being met by the education system. That is something that should be addressed. I just don't think it's reasonable to completely dismiss a system that works for the majority. Being mad at the education system isn't really a good reason to say "AI/computers can do all these things, so why bother practising them?"

Schools should teach kids to think, but if the kids can't read or reasonably do basic math, then expecting them to have independent critical thinking seems a way off. I don't know about you, but one of the clear lessons in "problem math" at school was learning to reason about numbers and results, e.g. is it reasonable that a bridge spans 43,000 km, longer than the Earth's circumference? If not, you probably did something wrong in your calculations.

Aurornis

These conversations are always eye-opening for the number of people who don’t understand homework. You’re exactly right that it’s practice. The test is the test (obviously) and the homework is practice with a feedback loop (the grade).

Giving people credit for homework helps because it gives students a chance to earn points outside of high pressure test times and it also encourages people to do the homework. A lot of people need the latter.

My friends who teach university classes have experimented with grading structures where homework is optional and only exam scores count. Inevitably, a lot of the class fails the exams because they didn’t do any practice on their own. They come begging for opportunities to make it up. So then they circle back to making the homework required and graded as a way to get the students to practice.

ChatGPT short circuits this once again. Students ChatGPT their homework then fail the first exam. This time there is little to do, other than let those students learn the consequences of their actions.

kamaal

>>You’re exactly right that it’s practice.

Thinking is an incremental process: you make small changes to things, verify that they are logically consistent, and work from there.

What is there to practice here? If you know something is true, practicing the mechanical aspects of it is the textbook definition of rote learning.

This whole thing reads like the academic system thinks making new science (Math, Physics, etc.) is for special geniuses, and the rest have to be happy watching the whole thing like someone demonstrating a sleight-of-hand trick.

Teach people how to discover new truths. That's the point of thinking.

jmmcd

> The fact that AI can do your homework should tell you how much your homework is worth.

A lot of people who say this kind of thing have, frankly, a very shallow view of what homework is. A lot of homework can be easily done by AI, or by a calculator, or by Wikipedia, or by looking up the textbook. That doesn't invalidate it as homework at all. We're trying to scaffold skills in your brain. It also didn't invalidate it as assessment in the past, because (eg) small kids don't have calculators, and (eg) kids who learn to look up the textbook are learning multiple skills in addition to the knowledge they're looking up. But things have changed now.

camjw

Completely agree - I always thought the framing of "exercises" was the right one; the point is that your brain grows by doing. It's been possible for a long time to e.g. google a similar algebra problem and find a very relevant math stackexchange post; that doesn't mean the exercises were useless.

"The fact that a forklift truck can lift over 500kg should tell you how worthwhile it is for me to go to a gym and lift 100kg." - complete non-sequitur.

criddell

> A lot of homework can be easily done by AI

Then maybe the homework assignment has been poorly chosen. I like how the article's author has decided to focus on the process and not the product and I think that's probably a good move.

I remember one of my kids' math teachers talked about wanting to switch to an inverted classroom. The kids would be asked to read some part of their textbook as homework and then they would work through exercise sheets in class. To me, that seemed like a better way to teach math.

> But things have changed now.

Yep. Students are using AIs to do their homework and teachers are using AIs to grade.

seb1204

Yep - making time to sit down and do homework, forming an understanding of how to plan the doing part, forming good habits of doing it, knowing how to look stuff up in a book index, on Wikipedia, by searching, or by asking an AI. The expectation is still that some kind of text output needs to be found and then read and digested.

tgv

> The fact that AI can do your homework should tell you how much

you still have to learn. The goal of learning is not to do a job. It's to enrich you, broaden your mind, and it takes work on your part.

By similar reasoning, you could argue that you can take a car to go anywhere, or have everything delivered to your doorstep, so why should my child learn to walk?

thomastjeffery

Let me rephrase their point, then:

The fact that AI can replace the work that you are measured on should tell you something about the measurement itself.

The goal of learning should be to enrich the learner. Instead, the goal of learning is to pass the measurement. Success has been quietly replaced with victory. Now LLMs are here to call that bluff.

tgv

And learning does do that. It is an economic compromise, though. Most of us have average (or worse) teachers. I have the feeling that that's what you're arguing against, not learning per se.

> LLMs are here to call that bluff

Students have been copying from e.g. encyclopedias for as long as anyone can remember. That doesn't mean that an encyclopedia removes the need to learn. Even rote memorization has its use. But it's difficult to make school click for everybody.

Cthulhu_

Homework isn't about doing the homework; it teaches you how to learn and is evidence that you have learned and can learn. Yeah, you can have an AI do it just as much as you can have someone else do it, but that doesn't teach you anything, and if you earn the paper at the end of it, it's effectively worthless.

Unis should adjust their testing practices so that their paper (and their name) doesn't become worthless. If AI becomes a skill, it should be tested, graded, and certified accordingly. That is, separate the computer science degree from the AI Assisted computer science degree.

aerhardt

Current AI can ace math and programming psets at elite institutions, and yet prior to GPT not only did I learn loads from the homework, I often thoroughly enjoyed it too. I don’t see how you can make that logical leap.

vonneumannstan

It's a problem of incentives. For many courses the psets make up a large chunk of your grade. Grades determine your suitability for graduate school, internships, jobs, etc. So if your final goal is one of those, then you are highly incentivized to get high grades, not necessarily to learn the material.

hirvi74

I think you somewhat touched upon what I believe is the root of the problem:

> highly incentivized to get high grades, not necessarily to learn the material

Based on my own experiences and observations, I think grading is a far larger issue than cheating. I am not convinced that good grades necessarily reflect enrichment or how much material has been learned. If a person makes a high grade in a particular class, what does that actually mean?

I made high grades in plenty of classes that I couldn't tell you anything about what I actually learned.

karaterobot

> The fact that AI can do your homework should tell you how much your homework is worth.

I mean... if you removed the substring "home" from that sentence, is it still true in your opinion?

That is, do you believe that because AI can perform some task, that task must not have any value? If there's a difference, help me understand it better please.

thomastjeffery

> Teaching and learning are collaborative exercises.

That's precisely where we went wrong. Capitalism has redefined our entire education system as a competition; just like it does with everything else. The goal is not success, it's victory.

jumploops

A bit off-topic, but I think AI has the potential to supercharge learning for the students of the future.

Similar to Montessori, LLMs can help students who wander off in various directions.

I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, thus dismissing my request for further depth.

Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious…

My hope is that, with new teaching methods/styles, we can unlock (or just maintain!) the curiosity inherent in every pupil.

(If anyone knows of a tool like this, where an LLM stays on a high-level trajectory of e.g. teaching trigonometry, but allows off-shoots/adventures into other topical nodes, I’d love to know about it!)

analog31

>>> Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious

I think you hit on a major issue: Homework-heavy. What I think would benefit the truly curious is spare time. These things are at odds with one another. Present-day busy work could easily be replaced by occupying kids' attention with continual lessons that require a large quantity of low-quality engagement with the LLM. Or an addictive dopamine reward system that also rewards shallow engagement -- like social media.

I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.

And there's something else I think might be missing, which is effort. For me, music and electronics were not easy. There was no exam, but I could measure my own progress -- either the circuit worked or it didn't. Without some kind of "external reference" I'm not sure that in-depth research through LLMs will result in any true understanding. I'm a physicist, and I've known a lot of people who believe that they understand physics because they read a bunch of popular books about it. "I finally understand quantum mechanics."

alexchantavy

> I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.

I see both sides of this. When I was a teenager, I went to a pretty bad middle school where there were fights every day, and I wasn’t learning anything from the easy homework. On the upside, I had tons of free time to teach myself how to make websites and get into all kinds of trouble botting my favorite online games.

My learning always hit a wall though because I wasn’t able to learn programming on my own. I eventually asked my parents to send me to a school that had a lot more structure (and a lot more homework), and then I properly learned math and logic and programming from first principles. The upside: I could code. The downside: there was no free time to apply this knowledge to anything fun

protocolture

>I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.

Yeah, I feel like teachers are going to try and use LLMs as an excuse to push more of the burden of schooling onto their pupils' home life somehow. Like, increasing homework burdens to compensate.

seb1204

Spare time, haha, most people nowadays have a hard time having some dead time. The habitual checking of socials or feeds has killed the mind-wandering time. People feel uncomfortable or consider life boring with the device-induced dopamine fix. Corporations got us by the balls.

TimorousBestie

The last thing I need when researching a hard problem is an interlocutor who might lie to me, make up convincing citations to nowhere, and tell me more or less what I want to hear.

HPsquared

Still better than the typical classroom experience. And you can always ask again, there's no need to avoid offending the person who has a lot of power over you.

const_cast

Typical classroom experience works and has worked for thousands of years.

Edutech is pretty new and virtually all of it has been a disaster. Sitting in a lecture and taking notes on paper is tried, tested, and research backed. It works. Not for everyone, but for a lot of people.

QuadmasterXLII

The longer I go without seeing cases of ai supercharging learning, the more suspicious I get that it just won’t. And no, self reports that it makes internet denizens feel super educated, don’t count.

nyarlathotep_

Wasn't this the promise of MOOCs in the 2010s?

Tryk

The problem is that many students come to university unequipped with the discipline it takes to actually study. Teaching students how to effectively learn is a side-effect of university education.

jumploops

Yes, I think curiosity dies well before university for most students.

The specific examples I recall most vividly were from 4th grade and 7th grade.

mcdeltat

> I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, this dismissing my request for further depth.

This resonates with me a lot. I used to dismiss AI as useless hogwash, but have recently done a near total 180 as I realised it's quite useful for exploratory learning.

Not sure about others but a lot of my learning comes from comparison of a concept with other related concepts. Reading definitions off a page usually doesn't do it for me. I really need to dig to the heart of my understanding and challenge my assumptions, which is easiest done talking to someone. (You can't usually google "why does X do Y and not Z when ABC" and then spin off from that onto the next train of reasoning).

Hence ChatGPT is surprisingly useful. Even if it's wrong some of the time. With a combination of my baseline knowledge, logic, cross referencing, and experimentation, it becomes useful enough to advance my understanding. I'm not asking ChatGPT to solve my problem, more like I'm getting it to bounce off my thoughts until I discover a direction where I can solve my problem.

epiecs

Indeed. I never really used AI until recently but now I use it sometimes as a smarter search engine that can give me abstracts.

Eg. it's easy to ask copilot: can you give me a list of free, open source mqtt brokers and give me some statistics in the form of a table

And copilot (or any other ai) does this quite nicely. This is not something that you can ask a traditional search engine.

Of course, you do need to know enough of the underlying material and double-check the output you get, for when the AI is hallucinating.

brilee

I am building such an AI tutoring experience, focusing on a Socratic style with product support for forking conversations onto tangents. Happy to add you to the waitlist, will probably publish an MVP in a few weeks.

Footprint0521

Do you have capacity for more developers? I’ve been wanting to help make this for a long time

ericmcer

Yeah, this is a good point - just adjust coursework from multiple-choice tests and fill-in-the-blank homework to larger-scale projects.

Putting together a project using AI help will be a very close mimicry of what real work is like, and if the teacher is good, students will learn way more than by just being able to spout information from memory.

yazantapuz

I teach at a small university. These are some of the measures we take:

- Hand written midterms and exams.

- The students have to explain how they designed and coded their solutions to the programming exercises (we have 15-20 students per class; with more students it becomes more difficult).

- Presentations of complex topics (after which the rest of the students should comment on something, ask a question - anything related to the topic).

- Presentation of a hand-written, one-page set of notes, a diagram, a mindmap, etc., about the content discussed.

- Last-minute changes to the more elaborate programming labs, to be resolved in class (for example, "the client" changed its mind about some requirement or asked for a new feature).

The real problem is that it is a (lot) more work for the teachers and not everyone is willing to "think outside of the box".

(edit: format)

squigz

I hope by 'handwritten' you don't literally mean pen and paper?

xtracto

Back when I was doing my BSc in Software Engineering, we had a teacher who did her Data Structures and Algorithms exams with pen and paper. On one of them, she basically wrote 4 coding problems (each of which could be solved in roughly 30 LOC).

We had to write the answers with pen and paper, writing the whole program in C. The teacher would score it by transcribing the verbatim text into her computer, and if it had a single error (a missed semicolon) or didn't compile for some reason, the whole thing was considered wrong (each question was 25% of the exam score).

I remember I got 1 wrong (missed semicolon :( ) and got a 75 (on a 1-100 scale). It's crazy how we were able to do that sort of thing in the old days.

We definitely exercised our attention to detail and concentration muscles with that teacher.
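
For a sense of scale, a roughly 30-line problem of that kind might look like the sketch below - reversing a singly linked list here is my own made-up stand-in, not one of her actual questions:

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Reverse the list in place and return the new head. */
    struct node *reverse(struct node *head) {
        struct node *prev = NULL;
        while (head != NULL) {
            struct node *next = head->next;
            head->next = prev;
            prev = head;
            head = next;
        }
        return prev;
    }

    int main(void) {
        struct node *head = NULL;
        for (int i = 1; i <= 5; i++) {      /* build the list 5 -> 4 -> 3 -> 2 -> 1 */
            struct node *n = malloc(sizeof *n);
            n->value = i;
            n->next = head;
            head = n;
        }
        head = reverse(head);               /* now 1 -> 2 -> 3 -> 4 -> 5 */
        for (struct node *p = head; p != NULL; p = p->next)
            printf("%d ", p->value);
        printf("\n");
        return 0;
    }

Small enough to hold in your head, but long enough that one missed semicolon is a very real risk.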

squigz

Yeah, this is absurd. And if you have poor handwriting, the chances of "syntax errors" go up.

My above comment is getting downvoted, and it's honestly a bit baffling. I'd be furious if I were paying tens of thousands of dollars to receive a university-level education in software engineering in 2025... and I had to write programs with pen and paper. It is so far detached from the reality of, not only the industry, but the practice itself, so as to be utterly absurd.

TallonRain

Yes, pen and paper. The approach is to pseudocode the solution; minor syntax errors aren't punished (and indeed are generally expected anyway). The point is simply to show that you understand and can work through the concepts involved; it's not being literally compiled.

Writing a small algorithm with pen & paper on programming exams in universities of all sizes was still common when I was in uni in the 2010s and there’s no reason to drop that practice now.

yazantapuz

Yes, pen and paper.

nkrisc

If the trend continues, it seems like most college degrees will be completely worthless.

If students using AI to cheat on homework are graduating with a degree, then it has lost all value as a certificate that the holder has completed some minimum level of education and learning. Institutions that award such degrees will be no different than degree mills of the past.

I’m just grateful my college degree has the year 2011 on it, for what it’s worth.

lolinder

All of the best professors I had either did not grade homework or weighted it very small and often on a did-you-do-it-at-all basis and did not grade attendance at all. They provided lectures and assignments as a means to learn the material and then graded you based on your performance in proctored exams taken either in class or at the university testing center.

For most subjects at the university level graded homework (and graded attendance) has always struck me as somewhat condescending and coddling. Either it serves to pad out grades for students who aren't truly learning the material or it serves to force adult students to follow specific learning strategies that the professor thinks are best rather than giving them the flexibility they deserve as grown adults.

Give students the flexibility to learn however they think is best and then find ways to measure what they've actually learned in environments where cheating is impossible. Cracking down on cheating at homework assignments is just patching over a teaching strategy that has outgrown its usefulness.

fn-mote

> rather than giving them the flexibility they deserve as grown adults

I have had so many very frustrating conversations with full grown adults in charge of teaching CS. I have no faith at all that students would be able to choose an appropriate method of study.

My issue with the instruction is the very narrow belief in the importance of certain measurable skills. VERY narrow. I won’t go into details, for my own sanity.

RobinL

When hiring, I would very much like to hire people who have figured out how to learn things for themselves using whatever techniques work for them, and don't need nannying.

So I'm perfectly happy with a system of higher education that strongly rewards this behaviour

carlosjobim

> I have no faith at all that students would be able to choose an appropriate method of study.

That is their problem, not your problem. You're not their nanny.

jay_kyburz

I'm sure this will be an unpopular opinion, but just like junior employees, I think university students should clock in at 9am and finish working at 5pm.

I think they would really benefit learning how to work a full day and develop some work life balance.

gilbetron

> All of the best professors I had either did not grade homework or weighted it very small and often on a did-you-do-it-at-all basis and did not grade attendance at all. They provided lectures and assignments as a means to learn the material and then graded you based on your performance in proctored exams taken either in class or at the university testing center.

I have the opposite experience - the best professors focused on homework and projects and exams were minimal to non-existent. People learn different ways, though, so you might function better having the threat/challenge of an exam, whereas I hated having to put everything together for an hour of stress and anxiety. Exams are artificial and unlike the real world - the point is to solve problems, not to solve problems in weirdly constrained situations.

nkrisc

I don’t disagree, but in most cases degrees are handed out based on grades which in turn are based on homework.

I agree that something will have to change to avert the current trend.

__loam

Most of the college courses I took had the bulk of the grade be based on exams or projects. Homework was usually a small proportion to give students a little buffer and to actually prepare them for the exams. AI might have helped on coding projects but a lot of my grades were based on exams using pencil and paper in a room of 30-200 other people. It also just seems like a waste of your own time and money to avoid the act of learning by skipping all the hard parts with a corporate token generator.

ryandrake

Maybe schools and universities need to stop considering homework to be evidence of subject matter mastery. Grading homework never made sense to me. What are you measuring, really, and how confident are you of that measurement?

You can't put the toothpaste back into the tube. Universities need to accept that AI exists, and adjust their operations accordingly.

bee_rider

Grading homework has two reasonable objectives:

Provide an incentive for students to do the thing they should be doing anyway.

Give an opportunity to provide feedback on the assignment.

It is totally useless as an evaluation mechanic, because of course the students that want to can just cheat. It’s usually pretty small, right? IIRC when I did tutoring we only gave like 10-20% for the aggregate homework grade.

SketchySeaBeast

The annoyance with 10-20% is that in order to be an "A" student you have to do all the homework instead of just acing the exams, which is obnoxious if you actually know the material. Edge case, I know, but that last 20% is a ton of extra work.

kenjackson

In most of my classes the HW was a far more valuable measure of ability -- assuming cheating didn't occur. For example, my compilers HW assignments captured my learning much better. I just feel like a semester writing an optimizing compiler is going to be a better measure than a 90-120 minute final exam.

Aeolun

I can say that making my homework part of my grade is a great way to actually get me to do it.

__loam

How do you suggest we measure whether the students have actually learned the stuff then?

pona-a

In person, pen and paper exams? They are closer to how most certifications are conducted.

dghlsakjg

Tests, both oral and written.

downboots

Captcha, of course. \s

Aurornis

> If the trend continues, it seems like most college degrees will be completely worthless.

I suspect the opposite: Known-good college degrees will become more valuable. The best colleges will institute practices that confirm the material was learned, such as emphasizing in-person testing over at-home assignments.

Cheating has actually been rampant at the university level for a long time, well before LLMs. One of the key differentiators of the better institutions is that they are harder to cheat to completion.

At my local state university (where I have friends on staff) it’s apparently well known among the students that if they pick the right professors and classes they can mostly skate to graduation with enough cheating opportunity to make it an easy ride. The professors who are sticklers about cheating are often avoided or even become the targets of ratings-bombing campaigns

barrenko

I tried re-enrolling in a STEM major last year, after a higher-education "pause" of 16-ish years. 85% of the class used GPTs to solve homework, and it was quite obvious most of them hadn't even read the assignment.

The immediate effect was distrust from the professors towards most everyone, and lots of classes felt like some kind of babysitting scheme, which I did not appreciate.

busyant

> If students using AI to cheat on homework

This is not related to "AI", but I have an amusing story about online cheating.

* I have a nephew who was switched into online college classes at the beginning of the pandemic.

* As soon as they switched to online, the class average on the exams shot up, but my nephew initially refused to cheat.

* Eventually he relented (because everyone else was doing it) and he pasted a multitude of sticky notes on the wall at the periphery of his computer monitor.

* His father walks into his room, looks at all the sticky notes and declares, "You can't do this!!! It'll ruin the wallpaper!"

tylerflick

TBF this problem doesn’t seem that new to me. I was forced to do my lab work in Vim and C via SSH because the faculty felt that Java IDEs with autocomplete were doing a disservice to learning.

fn-mote

> the faculty felt that Java IDEs with autocomplete were doing a disservice to learning

Sounds laughably naive now, doesn’t it?

ai-christianson

Won't the jobs they'll get expect them to use AI?

nkrisc

If you’re hiring humans just to use AI, why even hire humans? Either AI will replace them or employers will realize that they prefer employees who can think. In either case, being a human who specializes in regurgitating AI output seems like a dead end.

TimorousBestie

“Prompt Engineer” as a serious job title is very strange to me. I don’t have an explanation as to why it would be a learnable skill—there’s a little, but not a lot of insight into why an LLM does what it does.

throwaway290

> If you’re hiring humans just to use AI, why even hire humans

You hire humans to help train AI and when done you fire humans.

ai-christianson

Employers are employees too

myaccountonhn

Even if you just use AI, you need to know the right prompts to ask.

Ekaros

And how to verify the output and think it through. I hear time after time that someone asked an AI something. It came up with an answer, and then, when corrected, apologized and admitted it was wrong...

But how do you correct it if you do not know what is right or wrong...

__loam

Would you rather be the guy using AI as a crutch or the guy who actually knows how to do things without it?

neom

I've been hiring people for the better part of 15 years, and I never considered degrees to be valuable beyond the fact that they show you're able to work on one project for a sustained period of time. My impression was that unless your degree confers something needed for a job where human risk is involved, most degrees are worth very little, and most serious people know that.

nkrisc

To be clear, I think that most college degrees were generally low value (even my own), but still had some value. The current trend will be towards zero value unless something changes.

throwaway290

It doesn't matter if your boss's policy is to require a degree.

agrippanux

I use AI to help my high-school age son with his AP Lang class. Crucially, I cleared all of this with his teacher beforehand. The deal was that he would do all his own work, but he'd be able to use AI the help him edit it.

What we do is he first completes an essay by himself, then we put it into a Claude chat window, along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone, but to edit for repetitive sentences, word count, grammar, and spelling, and to make sure his thesis is sound and carried throughout the piece. He then takes that output and compares it against his original essay paragraph by paragraph, looking at what changes were made and why, and crucially, whether he thinks it's better than what he originally had.

This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.

marcus_holmes

My essay-writing process for my MBA was:

- decide what I wanted to say about the subject, from the set of opinions I already possess

- search for enough papers that could support that position. Don't read the papers, just scan the abstracts.

- write the essay. Scan the reference papers for the specific bit of it that best supported the point I want to make.

There was zero learning involved in this process. The production of the essay was more about developing journal search skills than absorbing any knowledge about the subject. There are always enough papers to support any given point of view, the trick was finding them.

I don't see how making this process even more efficient by delegating the entire thing to an LLM is affecting any actual education here.

protocolture

I literally wrote a friend's psychology paper when I had no idea of the subject, and they got an HD for it.

All I did was follow the process you outlined.

My mother used to do it as a service for foreign language students. They would record their lectures, and she would write their papers for them.

munksbeer

Confession. I became disillusioned with a teacher of a subject in school, who I was certain had taken a disliking to me.

I tested it by getting hold of a paper which had received an A from another school on the same subject, copying it verbatim and submitting it for my assignment. I received a low grade.

Despite confirming what I suspected, it somehow still wasn't a good feeling.

protocolture

I attended a catholic high school for several years, and I noticed a pattern. If I submitted an assignment to certain teachers and the subject related to a non catholic religion, I would get a pass, at the lowest score possible, regardless of the quality of the content.

So I just kept submitting assignments on the wrong religions. Write-up about a saint? Pick a Russian Orthodox saint. Write-up on marriage customs? Use Islam. That way I could never fail.

TrackerFF

To be honest, that's a problem on your part. It is completely possible to write a paper on anything, using the scientific method as your framework.

But the problem is that in many cases, the degrees (like MBA, which I too hold) are merely formalities to move up the corporate ladder, or pivot to something else. You don't get rewarded extra for actually doing science. And, yes, I've done the exact same thing you did, multiple times, in multiple different classes. Because I knew that if what I did just looked and sounded proper enough, I'd get my grade.

To be fair, one of the first things I noticed when entering the "professional" workforce, was that the methodology was the same: Find proof / data that supports your assumptions. And if you can't find any, find something close enough and just interpret / present it in a way that supports your assumptions.

No need for any fancy hypothesis testing, or having to conclude that your assumptions were wrong. Like it is not your opinion or assumption anyway, and you don't get rewarded for telling your boss or clients that they're wrong.

marcus_holmes

Is there even such a thing as the "science of business"? One can form a hypothesis, and then conduct an experiment, but the experimental landscape is so messy that eliminating all other considerations is impossibly hard.

For example, there's a popular theory that the single major factor in startup success is timing - that the market is "ready" for ideas at specific times, and getting that timing right is the key factor in success. But it's impossible to predict when the market timing is right; you only find out in retrospect. How would you ever test this theory? There are so many other factors, half of which are outside the control of the experimenter, that you would have to conduct the experiment hundreds of times (effectively starting and failing at hundreds of startups) to exclude the confounding factors.

intended

I’m sorry for that.

May I ask a different question: what stopped you from engaging with the material itself?

marcus_holmes

To be honest, I found "the material" irrelevant, mostly. There's vast swathes of papers written about obscure and tiny parts of the overall subject. Any given paper is probably correct, but covering such a tiny part of the subject that spending the time reading all of them is inefficient (if not impossible).

Also, the subject in question is "business", and the practice of business was being changed (as it is again now) by the application of new technology, so a lot of what I was reading was only borderline applicable any more.

MBAs are weird. To qualify to do one you need to have years of practical experience managing in actual business. But then all of that knowledge and experience is disregarded, and you're expected to defer to papers written by people who have only ever worked in academia and have no practical experience of what they're studying. I know this is the scientific process, and I respect that. But how applicable is the scientific process to management? Is there even a "science" of management?

So, like all my colleagues, I jumped through the hoops set in front of me as efficiently as possible in order to get the qualification.

I'm not saying it was worthless. I did learn a lot. The class discussions, hearing about other people's experiences, talking about specific problems and situations, this was all good solid learning. But the essays were not.

hyperbovine

> - search for enough papers that could support that position. Don't read the papers, just scan the abstracts.

Who wrote those papers? How did they learn to write them? At some point, somebody along the chain had to, you know, produce an actual independent thought.

marcus_holmes

Interesting question. It seems to me that the entire business academia could be following the method I've outlined and no-one would notice. Or care.

It's not like the hard sciences - no-one is able to refute anything, because you can't conduct experiments. You can always find some evidence for any given hypothesis, as the endless stream of self-help (and often contradictory) business books show.

None of the academics I was reading had actually run a business or had any practical experience of business. They were all lifelong academics who were writing about it from an academic perspective, referencing other academics.

Business is not short of actual independent thought. Verification is the thing it's missing. How does anyone know that the brilliant idea they just had is actually brilliant? The only way is to go and build a business around it and see if it works. Academics don't do that. How is this science then?

czhu12

To offer a flip side of the coin, I can't imagine I would have the patience outside of school, to have learned Rust this past year without AI.

Having a personal tutor who I can access at all hours of the day, and who can answer off hand questions I have after musing about something in the shower, is an incredible asset.

At the same time, I can totally believe if I was teleported back to school, it would become a total crutch for me to lean on, if anything just so I don't fall behind the rest of my peers, who are acing all the assignments with AI. It's almost a game theoretic environment where, especially with bell curve scaling, everyone is forced into using AI.

lacker

Same here. AI is a great tool for learning, but a challenge for education.

jamesgill

The fundamental question that AI raises for me, but nobody seems to answer:

In our competitive, profit-driven world--what is the value of a human being and having human experiences?

AI is neither inevitable nor necessary--but it seems like the next inevitable step in reducing the value of a human life to its 'outputs'.

nicbou

Someone needs to experience the real world and translate it into LLM training data.

ChatGPT can’t know if the cafe around the corner has banana bread, or how it feels to lose a friend to cancer. It can’t tell you anything unless a human being has experienced it and written it down.

It reminds me of that scene from Good Will Hunting: https://www.imdb.com/de/title/tt0119217/quotes/?item=qt04081...

turtletontine

I’m similarly worried about businesses all making “rational” decisions to replace their employees with “AI”, wherever they think they can get away with it. (Note that’s not the same thing as wherever “AI” can do the job well!)

But I think one place where this hits a wall is liability and accountability. Lots of low stakes things will be enshittified by “AI” replacements for actual human work. But for things like airline pilots, cancer diagnoses, heart surgery - the cost of mistakes is so large, that humans in the loop are absolutely necessary. If nothing else, at least as an accountability shield. A company that makes a tumor-detector black box wants to be an assistive tool to improve doctor’s “efficiency”, not the actual front line medical care. If the tool makes a mistake, they want no liability. They want all the blame on the doctor for trusting their tool and not double checking its opinion. I hear that’s why a lot of “AI” tools in medicine are actually reducing productivity: double checking an “AI’s” opinion is more work than just thinking and evaluating with your own brain.

eternauta3k

No, we already have autonomous cars driving around even though they've already killed people.

RationPhantoms

This is a poor take. They are objectively safer drivers than their human counterparts. Yes, with those unfortunate deaths included.

Nasrudith

The funny thing is my first thought was "maybe reduced nominal productivity via increased thoroughness is exactly what we need when evaluating potential tumors". Keeping doctors off autopilot and not so focused that radiologists fail to see hidden gorillas in x-rays. And yes, that was a real study.

jaza

The "value of a human" - same in this age as it has always been - is our ability to be truly original and to think outside the box. (That's also what makes us actually quite smart, and what makes current cutting-edge "AI" actually quite dumb).

AI is incapable of producing anything that's not basically a statistical average of its inputs. You'll never get an AI Da Vinci, Einstein, Kant, Pythagoras, Tolstoy, Kubrick, Mozart, Gaudi, Buddha, nor (most ironically?) Turing. Just to name a few historical humans whose respective contributions to the world are greater than the sum of the world's respective contributions to them.

jobigoud

Have you tried image generation? It can easily apply high level concepts from one area to another area and produce something that hasn't been done before.

Unless you loosen the meaning of statistical average so much that it ends up including human creativity. At the end of the day it's basically the same process of applying an idea from one field to another.

Most humans are not Da Vinci, Einstein, Kant, etc. Does that make them not valuable as humans?

jaza

Yes, I've tried AI image generation, and while it's impressive, it's also - at the end of the day - just as bland and unoriginal a mashup of existing material as AI text generation is.

All humans (I believe!) have the potential to be that amazing. And all humans come up with amazing ideas and produce amazing works in their life, just that 99% of us aren't appreciated as much as the famous 1% are. We're all valuable.

probably_wrong

IMO you're coming at it from the wrong angle.

Capitalism barely concerns itself with humans and whether human experiences exist or not is largely irrelevant for the field. As far as capitalism knows, humans are nothing but a noisy set of knobs that regulate how much profit one can make out of a situation. While tongue-in-cheek, this SMBC comic [1] about the Ultimatum game is an example of the type of paradoxes one gets when looking at life exclusively from an economics perspective.

The question is not "what's the value of a human under capitalism?" but rather "how do we avoid reducing humans to their economic output?". Or in different terms: it is not the blender's job to care about the pain of whatever it's blending, and if you find yourself asking "what's the value of pain in a blender-driven world?" then you are solving the wrong problem.

[1] https://www.smbc-comics.com/?id=3507

tenebrisalietum

You should determine your own value if you don't want to be controlled by anyone else.

If you don't want to determine your own value, you're probably no worse off letting an AI do that than anything else. Religion is probably more comfortable, but I'm sure AI and religion will mix before too long.

randcraw

A good start for this debate would be to reconsider the term "AI", perhaps choosing a term that's more intuitive, like "automation" or "robot assistant". It's obvious that learning to automate a task is no way to learn how to do it yourself. Nor is asking a robot to do it for you.

Students need to understand that learning to write requires the mastery of multiple distinct cognitive and organizational skills, only the last of which is to generate text that doesn't sound stupid.

Each of writing's component tasks must be understood and explicitly addressed by the student, to wit: (1) choosing a topic to argue, and the component points to make a narrative, (2) outlining the research questions needed to answer each point, and finally, (3) choosing ONLY the relevant points that are necessary AND sufficient to the argument AND based on referenced facts, and that ONLY THEN can be threaded into a coherent logical narrative exposition that makes the intended argument and that leads to the desired conclusion.

Only then has the student actually mastered the craft of writing an essay. If they are not held responsible for implementing each and every one of these steps in the final product, they have NOT learned how to write. Their robot did. That essay is a FAIL because the robot has earned the grade; not they. They just came along for the ride, like ballast in a sailing ship.