How University Students Use Claude
597 comments
April 9, 2025 · dtnewman
enjo
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers, but because she would rotate the underlying numbers, their answers were always wrong in a way that proved cheating. This made up 5-8% of her students.
Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this, but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers, they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
Zanfa
ChatGPT is laughably terrible at double entry accounting. A few weeks ago I was trying to use it to figure out a reasonable way to structure accounts for a project given the different business requirements I had. It kept disappearing money when giving examples. Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
andai
Using a system based on randomness for a process that must occur deterministically is probably the wrong solution.
I'm running into similar issues trying to use LLMs for logic and reasoning.
They can do it (surprisingly well, once you disable the friendliness that prevents it), but you get a different random subset of correct answers every time.
I don't know if setting temperature to 0 would help. You'd get the same output every time, but it would be the same incomplete / wrong output.
Probably a better solution is a multi-phase thing, where you generate a bunch of outputs and then collect and filter them.
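Something like this, as a rough sketch (the `generate()` below is a stand-in for a real model call, and majority voting is just one of many possible filters):

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Stand-in for a real LLM call; simulates nondeterministic sampling."""
    return random.choice(["42", "42", "41", "42"])

def sample_and_filter(prompt: str, n: int = 10) -> str:
    """Phase 1: sample many candidate answers.
    Phase 2: filter by keeping the most common one (simple majority vote)."""
    candidates = [generate(prompt) for _ in range(n)]
    best, _count = Counter(candidates).most_common(1)[0]
    return best

if __name__ == "__main__":
    print(sample_and_filter("What is 6 * 7?"))
```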
Suppafly
>Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
They really should modify it to take out that whole loop where it apologizes, claims to recognize its mistake, and then continues to make the mistake that it claimed to recognize.
vintermann
You'd think accounting students would catch on.
davedx
> me, just submitted my taxes for last year with a lot of help from ChatGPT: :eyes:
samuel
I guess these students don't pass, do they? I don't think that's a particularly hard problem. It may take a bit longer, but they will learn the lesson (or drop out).
I'm more worried about those who will learn to solve the problems with the help of an LLM, but can't do anything without one. Those will go under the radar, unnoticed, and the question is, how bad is that, actually? I would say very bad, but then I realize I'm a pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.
Stubbs
As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator, and it's the same problem, but there's a difference between solving problems _with_ LLMs and having LLMs solve them _for you_.
I don't see the former as that much of a problem.
lr4444lr
How many people are "good drivers" outside their home town? I am not that old, but old enough to remember all adults taking wrong turns trying to find new destinations for the first time.
shinycode
With a GPS, at worst you can fall back to following directions road sign by road sign. For a job, if someone lacks the core knowledge, what's the point of hiring that person over an unqualified one who just writes prompts, or worse, hiring no one and letting agents do the prompting?
xhkkffbf
All tech becomes a crutch. People can't wash their clothes without a machine. People can't cook without a microwave. Tech is both a gift and a curse.
9rx
Back in my day they worried about kids not being able to solve problems without a calculator, because you won't always have a calculator in your pocket.
...But then.
DSingularity
This is now reality -- fighting to change the students is a losing battle. Besides, in terms of normalizing grade distributions, this is not that complicated to solve.
Target the cheaters with pop quizzes. The prof can randomly choose 3 questions from the assignments. If students can't get enough marks on 2 of the 3, they are dealt a huge penalty. Students who actually work through the problems will have no trouble scoring enough marks on 2 of the 3 questions. Students who lean irresponsibly on LLMs will lose their marks.
cellularmitosis
Why not just grade solely based on live performance? (quizzes and tests)
Homework would still be assigned as a learning tool, but has no impact on your grade.
rrr_oh_man
Maybe we'll revert to Soviet bilet-style oral exams...
el_benhameen
I wonder to what extent this is students who would have stuck it out now taking the easy way and to what extent it’s students who would have just failed now trying to stick it out.
anon35
This is an extremely important question, and you’ve phrased it nicely.
We're either handicapping our brightest, or boosting our dumbest. One part is concerning, the other encouraging.
woodrowbarlow
my partner teaches high school math and regularly gets answers with calculus symbols (none of the students have taken any calculus). these students aren't putting a single iota of thought into the answers they're getting back from these tools.
pc86
To me this is the bigger problem. Using LLMs is going to happen and there's nothing anyone can do to stop it. So it's important to make people understand how to use them, and to find ways to test that students still understand the underlying concepts.
I'm in a 100%-online grad school, but they proctor major exams through local testing centers, and every class is at least 50% based on one or more major exams. It's a good way to let people use LLMs (they're available, and trying to stop that is a fool's errand) while still requiring them to understand the underlying concepts in order to pass.
iNic
The solution is making all homework optional and having an old-school end of semester exam.
Ekaros
You can always give extra points for homework that compensate for a weaker showing on the tests. If you get a perfect score on the test, you get the maximum grade anyway. If it's less than perfect, the extra points can bump your grade up. Fair for everyone.
Suppafly
>The solution is making all homework optional and having an old-school end of semester exam.
Not really. While doing something to ensure that students are actually learning is important, plenty of the smartest people still don't test well. End-of-semester exams also aren't a great way to tell whether people were learning along the way and then fell off partway through for whatever reason.
bko
When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.
For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.
I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
kibwen
The irreducible answer to "why should I" is that it makes you ever more reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
Like, Socrates may have been against writing because he thought it made your memory weak, but at least I, an individual, am perfectly capable of manufacturing my own writing implements with a modest amount of manual labor and abundantly-available resources (carving into wood, burning wood into charcoal to write on stone, etc.). But I ain't perfectly capable of doing the same to manufacture an integrated circuit, let alone a digital calculator, let alone a GPU, let alone an LLM. Anyone who delegates their thought to a corporation is permanently hitching their fundamental ability to think to this wagon.
hackyhacky
> The irreducible answer to "why should I" is that it makes you ever more reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
Yes, but that horse has long ago left the barn.
I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the straw that will break the camel's back?
The history of civilization is the history of specialization. No one can re-build all the tools they rely on from scratch. We either let other people specialize, or we let machines specialize. LLMs are one more step in the latter.
The Luddites were right: the machinery in cotton mills was a direct threat to their livelihood, just as LLMs are now to us. But society marches on, textile work has been largely outsourced to machines, and the descendants of the Luddites are doctors and lawyers (and coders). 50 years from now the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."
gh0stcat
Why do people keep parroting this reduction of Socrates' thoughts... I don't think it was as simple as "he thought writing was bad." And we already know that writing isn't everything: anyone who has done any study of a craft can tell you that reading and writing don't teach you the feel of the art form, but can nonetheless aid in the study. It's not black and white, even though people like to make it out to be.
SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.
PHAEDRUS: You are absolutely right about that, too.
SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?
PHAEDRUS: Which one is that? How do you think it comes about?
SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent.
[link](https://newlearningonline.com/literacies/chapter-1/socrates-...)
bko
I don't know, most of the things I'm reliant on, from my phone, ISP, automobile, etc are built on fragile interdependent supply chains provided by for-profit companies. If you're really worried about this, you should learn survival skills not the academic topics I'm talking about.
So if you're not bothering to learn how to farm, dress some wild game, etc, chances are this argument won't be convincing for "why should I learn calculus"
Zambyte
For what it's worth, locally runnable language models are becoming exceptionally capable these days, so if you assume you will have some computer to do computing, it seems reasonable to assume that it will enable you to do some language-model-based things. I have a server with a single GPU running language models that easily blow GPT 3.5 out of the water. At that point, I am offloading reasoning tasks to my computer in the same way that I offload memory tasks to my computer through my note-taking habits.
notyourwork
Although I agree, convincing children to learn using that rationalization won’t work.
1oooqooq
nobody ever said that. that's ai apologist history revisionism.
johndough
Use it or lose it. With the invention of the calculator, students lost the ability to do arithmetic. Now, with LLMs, they lose the ability to think.
This is not conjecture by the way. As a TA, I have observed that half of the undergraduate students lost the ability to write any code at all without the assistance of LLMs. Almost all use ChatGPT for most exercises.
Thankfully, cheating technology is advancing at a similarly rapid pace. Glasses with integrated cameras, WiFi and heads-up display, smartwatches with polarized displays that are only readable with corresponding glasses, and invisibly small wireless ear-canal earpieces to name just a few pieces of tech that we could have only dreamed about back then. In the end, the students stay dumb, but the graduation rate barely suffers.
I wonder whether pre-2022 degrees will become the academic equivalent to low-background radiation steel: https://en.wikipedia.org/wiki/Low-background_steel
wrp
"Technology can do X more conveniently than people, so why should children practice X?" has been a point of controversy in education at least since pocket calculators became available.
I try to explain by shifting the focus from neurological to musculoskeletal development. It's easy to see that physical activity promotes development of children's bodies. So although machines can aid in many physical tasks, nobody is suggesting we introduce robots to augment PE classes. People need to recognize that complex tasks also induce brain development. This is hard to demonstrate but has been measured in extensive tasks like learning languages and music performance. Of course, this argument is about child development, and much of the discussion here is around adult education, which has some different considerations.
nicbou
My last calculator had a "solve" button and we could bring it in an exam.
You still needed to know what to ask it, and how to interpret the output. This is hard to do without an understanding of how the underlying math works.
The same is true with LLMs. Without the fundamentals, you are outsourcing work that you can't understand and getting an output that you can't verify.
boredhedgehog
I would add that we don't pretend PE or gyms serve any higher purpose besides individual health and well-being, which is why they are much more game-ified than formal education. If we acknowledge that it doesn't particularly matter how a mind is being used, the structure of school would change fundamentally.
Footprint0521
This is the motivation I needed right now
OptionOfT
The problem with GPS is that you never learn to orient yourself. You don't learn to have a sense of place, direction or elapsed distance. [0]
As to writing, just the action of writing something down with a pen, on paper, has been proven to be better for memorization than recording it on a computer [1].
If we're not teaching these basic skills because an LLM does it better, how do we learn to be skeptical of the LLM's output? How do we validate it?
How do we bolster ourselves against corporate influences when asking which of 2 products is healthier? How do we spot native advertising? [2]
[0]: https://www.nature.com/articles/531573a
[1]: https://www.sciencedirect.com/science/article/abs/pii/S00016...
[2]: Example: https://www.nytimes.com/paidpost/netflix/women-inmates-separ...
light_hue_1
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc.
I'm the polar opposite. And I'm an AI researcher.
The reason you can't answer your kid when he asks about LLMs is because the original position was wrong.
Being able to write isn't optional. It's a critical tool for thought. Spelling is very important because you need to avoid confusion. If you can't spell, no spell checker can save you when it inserts the wrong word. And this only gets far worse the more technical the language is. And maps are crucial too. Sometimes, the best way to communicate is to draw a map. In many domains like aviation, maps are everything; you literally cannot progress without them.
LLMs are no different. They can do a little bit of thinking for us and help us along the way. But we need to understand what's going on to ask the right questions and to understand their answers.
noitpmeder
This is an insane take.
The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The AI becomes their brain, such that they cannot function without it.
I'd never want to work with someone who is this reliant on technology.
bko
Maybe 40 years ago there were programmers who would not work with anyone who used IDEs or automated memory management. Present those people with a programming task WITHOUT their IDE or whatever, and they will fall apart.
Look, I agree with you. I'm just trying to articulate to someone why they should learn X if they believe an LLM could help them, and "an LLM won't always be around" isn't a good argument, because let's be honest, it likely will be. This is the same thing as "you won't walk around all day with a calculator in your pocket, so you need to learn math".
Vvector
Do you have the skills and knowledge to survive like a pioneer from 200 years ago?
Technology is rapidly changing humanity. Maybe for the worse.
lordnacho
Do you wear glasses? Or use artificial light?
Or do you have perfect vision and get all your work done during the sunlight hours?
Technology is everywhere, nobody is independent from it.
mbesto
Do you work with people who can multiply 12.3% * 144,005.23 rapidly without a calculator?
> The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The parent poster is positing that in 90% of cases they WILL have their AI assistant, because it's in their pocket, just like a calculator. It's not insane to think that, and it's a fair point to ponder.
II2II
Perhaps that mode of thinking is wrong, even if it is accepted.
Take rote memorization. It is hard. It sucks in so many ways (just because you memorized something doesn't mean you can reason using that information). Yet memorization also provides the foundations for growth. At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for? How can you assess the validity of a source if you don't know the fundamentals? How can you avoid falling prey to propaganda if your only knowledge of a subject is what is in front of your face? None of that is to say that we should dismiss search and depend upon memorization. We need both.
I can't tell you what to say to your children about LLMs. For one thing, I don't know what is important to them. Yet it is important to remember that it isn't an either-or thing. LLMs are probably going to be essential to manage the profoundly unmanageable amount of information our world creates. Yet it is also important to remember that they are like the person who memorizes but lacks the ability to reason. They may be able to impress people with their fountain of facts, yet they will be unable to make a mark on the world, since they will lack the ability to create anything unique.
viraptor
> At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for?
That's actually pretty doable. Almost every resource provides more context than just the exact thing you're asking. You build on that knowledge and continue asking. Nobody knows everything - we've been doing the equivalent of this kind of research forever.
> How can you assess the validity of a source if you don't know the fundamentals?
Learn about the fundamentals until you get to a level you're already familiar with. You're describing an adult outside of a school environment learning basically anything.
palmotea
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
And those people are wrong, in a similar way to how it's wrong to say: "There's no point in having very much RAM, as you can just page to disk."
It's the cognitive equivalent of becoming morbidly obese (another popular decision in today's world).
srveale
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this, it's called the "Flipped classroom" approach.
Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.
I've also talked with a bunch of teachers and a couple admins about this. They agree it's a huge problem. By the same token, they are using AI to create their lesson plans and assignments! Not fully of course, they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.
The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.
sixpackpg
The education model in high school and undergrad uni has not changed in decades; I hope AI leads to a fundamental change. Homework being made easy by AI is a symptom of the real issues: being taught by uni students who learned the curriculum last year, lecturers who only lecture out of obligation and haven't changed a slide in years, lecturers who refuse to upload lecture recordings or slides. Those are just a few glaring issues, and the sad part is that they are rather superficial, easy-to-fix cases of poor teaching.
I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response from teaching establishments. If anything, AI will lead to bigger differences in student learning. Those who learn core concepts and how to think critically will become more valuable, and the people who just AI everything will become near worthless.
Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.
doctorpangloss
And yet all the people who created all the advances in AI have extremely traditional, extremely good, fancy educations, and did an absolutely bonkers amount of homework. The thing you are talking about is very aspirational.
Workaccount2
I don't see a future that doesn't involve some form of AR glasses and individually tuned learning. Forget teachers; you will just don your learning glasses and have an AI that walks you through assignments and learning every day.
That is if learning-to-become-a-contributing-member-of-society doesn't become obsolete anyway.
hackyhacky
> it's called the "Flipped classroom" approach.
Flipped classroom is just having the students give lectures, instead of the teacher.
> Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable.
This is called "proctored exams" and it's been pretty common in universities for a few centuries.
None of this addresses the real issue, which is whether teachers should be preventing students from using AIs.
srveale
> Flipped classroom is just having the students give lectures, instead of the teacher.
Not quite. Flipped classroom means more instruction outside of class time and less homework.
> This is called "proctored exams" and it's been pretty common in universities for a few centuries. None of this addresses the real issue
Proctored exams are part of it. In-class assignments are another. Asynchronous instruction is another.
And yes, it addresses the issue. Students can use AI however they see fit, to learn or to accomplish tasks or whatever, but for actual assessment of ability they cannot use AI. And it leaves the door open for "open-book" exams where the use of AI is allowed, just like a calculator and textbook/cheat-sheet is allowed for some exams.
bryanlarsen
Flipped classroom means you watch the recorded lecture outside of class time and you do your homework during class time.
vonneumannstan
>Not fully of course, they edit the output using their expertise
Surely this is sarcasm, but really your average schoolteacher is now a C student Education Major.
srveale
I was talking about people I know and talk with, mostly friends and family, who are smart, hard working, and their students are lucky to have them.
aj7
I’m a physicist. I can align and maximize ANY laser. I don’t even think when doing this task. Long hours of struggle, 50 years ago. Without struggle there is nothing. You can bullshit your way in. But you will be ejected.
ketzo
barely related to your point but “I can align and maximize ANY laser” is such an incredibly specific flex, I love it
ramraj07
Especially because it's not a skill everyone gets just because they practice. I know because I tried for years lol.
marksbrown
A master blacksmith can shoe a horse an' all. Laser alignment is also a solved problem with a machine. Just because something can be done by hand does not mean it has any intrinsic value.
hobo_in_library
The challenge is that while LLMs do not know everything, they are likely to know everything that's needed for your undergraduate education.
So if you use them at that level you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then later in life when you actually hit problem domains that the LLM wasn't trained in, you'll not have learned the thinking patterns needed to persist and solve those problems.
Is that necessarily a bad thing? It's mixed:
- You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level.
- For more senior roles that are intrinsically solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people who are ready to take on that role.
It's like taking the college weed out classes and shifting those to people in the middle of their career.
Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.
Business will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past
psygn89
Agreed, the struggle often leads us to poke and prod an issue from many angles until things finally click. It lets us think critically. In that journey you might've learned other related concepts which further solidifies your understanding.
But when the answer flows out of thin air right in front of you with AI, you get the "oh duh" or "that makes sense" moments and not the "a-ha" moment that ultimately sticks with you.
Now does everything need an "a-ha" moment? No.
However, I think core concepts and fundamentals need those "a-ha" moments to build a solid and in-depth foundation of understanding to build upon.
porridgeraisin
Yep. People love to cut down this argument by saying that a few decades ago, people said the same thing about calculators. But that was a problem too! People losing a large portion of their mental math faculty is definitely a problem. If mental math were required daily, we wouldn't see such obvious BS numbers in every kind of reporting (media/corporate/tech benchmarks) that people don't bat an eye at. How much the problem is _worth_, though, is what matters for adoption of these kinds of tech. Clearly, the problem above wasn't worth much. We now have to wait and see how much the "did not learn through cuts and scratches" problem is worth.
taftster
Absolutely this. AI can help reveal solutions that weren't seen. An a-ha moment can be as instrumental to learning as the struggle that came before.
Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.
yapyap
> I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
In the end, the willingness to struggle will set apart the truly great software engineer from the AI-crutched. Of course, this will mostly not be rewarded: when a company looks at two people and sees "passable" code from both, but one is way more "productive" with it (the AI-crutched engineer), they'll initially appreciate that one more.
But in the long run they won't be able to explain the choices made when creating the software. We will see a retreat from this type of coding when the first few companies' security falls apart like a house of cards due to AI reliance.
It’s basically the “instant gratification vs delayed gratification” argument but wrapped in the software dev box.
JohnMakin
I don't wholly disagree with this post, but I'd like to add a caveat, observing my own workflow with these tools.
I guess I'd qualify to you as someone "AI crutched" but I mostly use it for research and bouncing ideas (or code complete, which I've mentioned before - this is a great use of the tool and I wouldn't consider it a crutch, personally).
For instance, "parse this massive log output, and highlight anything interesting you see or any areas that may be a problem, and give me your theories."
Lots of times it's wrong. Sometimes it's right. Sometimes, its response gives me an idea that leads to another direction. It's essentially how I was using Google + Stack Overflow ten years ago - see your list of answers, use your intuition, knowledge, and expertise to find the one most applicable to you, continue.
This "crutch" is essentially the same one I've always used, just in different form. I find it pretty good at doing code review for myself before I submit something more formal, to catch any embarrassing or glaringly obvious bugs or incorrect test cases. I would be wary of the dev that refused to use tools out of some principled stand like this, just as I'd be wary of a dev that overly relied on them. There is a balance.
Now, if all you know are these tools and the workflow you described, yea, that's probably detrimental to growth.
Cthulhu_
> But that struggling was ultimately necessary to really learn the concepts.
This is what isn't explained or understood properly (...I think) to students; on the surface you go to college/uni to learn a subject, but in reality, you "learn to learn". The output that you're asked to submit is just to prove that you can and have learned.
But you don't learn to learn by using AI tools. You may learn how to craft stuff that passes muster, gets you a decent grade and eventually a piece of paper, but you haven't learned to learn.
Of course, that isn't anything new, loads of people try and game the system, or just "do the work, get the paper". A box ticking exercise instead of something they actually want to learn.
defgeneric
After reading the whole article I still came away with the suspicion that this is a PR piece that is designed to head-off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking. The prompt-chains where students are asking "show your work" and "explain" can be interpreted as the kind of back-and-forth that you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue. Nobody ever actually learns anything.
Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).
P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.
SamBam
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.
In the article, I guess this would be buried in
> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.
"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.
(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)
vunderba
Exactly. There's a big difference between a student having a back-and-forth dialogue with Claude around "the extent to which feudalism was one of the causes of the French Revolution.", versus another student using their smartphone to take a snapshot of the actual homework assignment, pasting it into Claude and calling it a day.
PeterStuer
From what I could observe, the latter is endemic amongst high school students. And don't kid yourself. For many it is just a step up from copy/pasting the first Google result.
They never could be arsed to learn how to input their assignments into Wolfram Alpha. It was always the ux/ui effort that held them back.
chii
The question is: would those students have done any better or worse if there hadn't been an LLM for them to "copy" off?
In other words, is the school certification meant to distinguish those who genuinely learnt, or was it merely meant to signal (and thus, those who used to copy pre-LLM are going to do the same, and reach the same level of certification regardless of whether they learnt or not)?
radioactivist
Most of their categories have straightforward interpretations in terms of students using the tool to cheat. They don't seem to want to/care to analyze that further and determine which are really cheating and which are more productive uses.
I think that's a bit telling on their motivations (esp. given their recent large institutional deals with universities).
SamBam
Indeed. I called out the second-top category, but you could look at the top category as well:
> We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, editing essays, or summarizing academic material.
Sure, throwing a paragraph of an essay at Claude and asking it to turn it into a 3-page essay could have been categorized as "editing" the essay.
And it seems pretty naked the way they lump "editing an essay" in with "designing practice questions," which are clearly very different uses, even in the most generous interpretation.
I'm not saying that the vast majority of students do use AI to cheat, but I do want to say that, if they did, you could probably write this exact same article and tell no lies, and simply sweep all the cheating under titles like "create and improve educational content."
ignoramous
> feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them
You're right.
Quite incredibly, they also do the opposite, in that they hype-up / inflate the capability of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. Comical they'd not only think so, but also publicly blog about it.
xpe
> Bloom's taxonomy is a framework for categorizing educational goals, developed by a committee of educators chaired by Benjamin Bloom in 1956. ... In 2001, this taxonomy was revised, renaming and reordering the levels as Remember, Understand, Apply, Analyze, Evaluate, and Create. This domain focuses on intellectual skills and the development of critical thinking and problem-solving abilities. - Wikipedia
This context is important: this taxonomy did not emerge from artificial intelligence nor cognitive science. So its levels are unlikely to map to how ML/AI people assess the difficulty of various categories of tasks.
Generative models are, by design, fast (and often pretty good) at generation (creation), but this isn't the same standard that Bloom had in mind with his "creation" category. Bloom's taxonomy might be better described as a hierarchy: proper creation draws upon all the layers below it: understanding, application, analysis, and evaluation.
xpe
Here is one key take-away, phrased as a question: when a student uses an LLM for "creation", are underlying aspects (understanding, application, analysis, and evaluation) part of the learning process?
walleeee
> Students primarily use AI systems for creating (using information to learn something new)
this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say
> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.
and later they report
> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:
> - Provide answers to machine learning multiple-choice questions
> - Provide direct answers to English language test questions
> - Rewrite marketing and business texts to avoid plagiarism detection
kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something
walleeee
I should say, disable _you_: the tone did not reflect that it can happen to anyone, and that it can be a wedge not only between people but also (and only by virtue of being) between personal trajectories, conditional on the way one uses it.
zebomon
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.
The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.
You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.
I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.
A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!
janalsncm
How does your product prevent a person from simply retyping something that ChatGPT wrote?
I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.
zebomon
It may be possible that with enough data from the two categories (copied from ChatGPT and not), your keystroke dynamics will differ. This is an open question that my co-founder and I are running experiments on currently.
So, I would say that while I wouldn't fully dispute your claim that attributing authorship from text alone is impossible, it isn't yet totally clear one way or the other (to us, at least -- would welcome any outside research).
Long-term -- and that's long-term in AI years ;) -- gaze tracking and other biometric tracking will undoubtedly be necessary. At some point in the near future, many people will be wearing agents inside earbuds that are not obvious to the people around them. That will add another layer of complexity that we're aware of. Fundamentally, it's more about creating evidence than creating proof.
We want to give writers and students the means to create something more detailed than they would get from a chatbot out-of-the-box, so that mimicking the whole act of writing becomes more complicated.
pr337h4m
At this point, it would be easier to stick to in-person assignments.
logicchains
>I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable
It won't be long 'til we're at the point where embodied AI can be used for scalable face-to-face assessment that can't be cheated any more easily than a human assessor.
ketzu
> The writing is irrelevant.
In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.
Maybe the way universities do it is not great, but writing in itself is important.
zebomon
Kindly read past the first line, friend :)
ketzu
I did. :)
(And I am aware of the irony in failing to communicate when mentioning that studying writing is important to be good at communication.) Maybe I should have also cited this part:
> writing as a proxy for what actually matters, which is thinking.
In my opinion, writing is important not (only) as a proxy for thinking, but as a direct form of communicating ideas. (Also applies to other forms of communication though.)
knowaveragejoe
Paul Graham had a recent blogpost about this, and I find it hard to disagree with.
aprilthird2021
What we lose if we cut humans out of the equation is the soul and heart of reflection, creativity, drama, comedy, etc.
All those have, at the base of them, the experience of being human, something an LLM does not and will never have.
zebomon
I agree!
jillesvangurp
Students will work in a world where they have to use AI to do their jobs. This is not going to be optional. Learning to use AIs effectively is an important skill and should be part of their education.
And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.
Think of all that time freed up from having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a post doc way back): not fun. At minimum, you can just instantly fail the ones that are obviously poorly written, are full of grammatical errors, and feature lots of flawed reasoning. Most decent LLMs do a decent job of that. Is using an LLM for that cheating if a teacher does it? I think that should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.
If you expect LLMs to be used, it raises the bar for the acceptable quality level of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for those papers not being like that. The student needs to be able to tell the difference. That actually takes skill to ask for the right things. And you can grill them on knowledge of their own work. A little 10 minute conversation maybe. Which should be about the amount of time a teacher would have otherwise spent on evaluating the paper manually and is definitely more fun (I used to do that; give people an opportunity to defend their work).
And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties. Most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting, which, even before that skill had atrophied for a few decades, wasn't great.
LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.
spongebobstoes
Writing is not necessary for thinking. You can learn to think without writing. I've never had a brilliant thought while writing.
In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.
Writing can be a useful tool to help with rigorous thinking. In my opinion, is mostly about augmenting the author's effective memory to be larger and more precise.
I'm sure the same effect could be achieved by having AI transcribe a conversation.
Unearned5161
I'm not settled on transcribed conversation being an adequate substitute for writing, but maybe it's better than nothing.
There's something irreplaceable about the absoluteness of words on paper and the decisions one has to make to write them out. Conversational speech is, almost by definition, more relaxed and casual. The bar is lower, and as such, the bar for thoughts is lower. In order of ease of handwaving, I think it goes: mental, speech, writing.
Furthermore, there's the concept of editing, which I'm unsure how to carry out gracefully in a conversational setting. Being able to revise words, delete, move around, can't be done in conversation unless you count "forget I said that, it's actually more like this..." as suitable.
karn97
I literally never write while thinking lol stop projecting this hard
moojacob
How can I, as a student, avoid hindering my learning with language models?
I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.
In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use an LM on their phone to answer the questions. I've been doing worse in the class and chalked it up to it being grad level, but now I think it's the cheating.
I would never cheat like that, but when I'm stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I did it on my own first and developed some intuition, but would I have learned more if I had submitted that and felt the pain of losing points?
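To give a sense of the kind of exercise I mean (a made-up header layout sketched with Python's struct module, not the actual assignment):

```python
import struct

# Made-up 8-byte header: version (1 byte), flags (1 byte),
# payload length (2 bytes), source id (4 bytes), all big-endian.
packet = bytes([0x01, 0x80, 0x00, 0x2A, 0x00, 0x00, 0x01, 0xF4])

version, flags, length, src_id = struct.unpack(">BBHI", packet)
print(version, flags, length, src_id)  # -> 1 128 42 500
```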
dwaltrip
Only use LLMs for half of your work, at most. This will ensure you continue to solidify your fundamentals. It will also provide an ongoing reality check.
I’d also have sessions / days where I don’t use AI at all.
Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.
rglynn
I definitely catch myself reaching for the LLM because thinking is too much effort. It's quite a scary moment for someone who prides themself on their ability to think.
knowaveragejoe
It's a hard question to answer and one I've been mindful of in using LLMs as tutoring aids for my own learning purposes. Like everything else around LLM usage, it probably comes down to careful prompting... I really don't want the answer right away. I want to propose my own thoughts and carefully break them down with the LLM. Claude is pretty good at this.
"productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.
noisy_boy
I don't think the pain of losing points is a good learning incentive: powerful, sure, but not effective.
You would learn more if you told Claude not to give outright answers but to generate more problems, targeting where you are weak, for you to solve. The reduction in errors as you go along will be the positive reinforcement that works long term.
neves
I don't know. I remember my failures much more than my successes. There are errors I made on important tests where I'll remember the correct answer for life.
bionhoward
IMHO yes you’re “losing neurons” and the obvious answer is to stop using Claude. The work you do with them benefits them more than it benefits you. You’re paying them to have conversations with a chatbot which has stricter copyright than you do. That means you’re agreeing to pay to train their bot to replace you in the job market. Does that sound like a good idea in the long term? Anthropic is an actual brain rape system, just like OpenAI, Grok, and all the rest, they cannot be trusted
azemetre
Can you do all this without relying on any LLM usage? If so then you’re fine.
quantumHazer
As a student, I use LLMs as little as possible and try to rely on books whenever possible. I sometimes ask LLMs questions about things that don't click, and I fact-check their responses. For coding, I'm doing the same. I'm just raw dogging the code like a caveman because I have no corporate deadlines, and I can code whatever I want. Sometimes I get stuck on something and ask an LLM for help, always using the web interface rather than IDEs like Cursor or Windsurf. Occasionally, I let the LLMs write some boilerplate for boring things, but it's really rare and I tend not to use them too much. This isn't due to Luddism but because I want to learn, and I don't want slop in my way.
lunarboy
This sounds fine? Copy-pasting LLM output without understanding it is a short-term dopamine hit that only hurts you in the long term. If you struggle first, or strategically ping-pong with the LLM to arrive at the answer, and can ultimately understand the underlying reasoning... why not use it?
Of course the problem is the much lower barrier for that to turn into cutting corners or full on cheating, but always remember it ultimately hurts you the most long term.
namaria
> can ultimately understand the underlying reasoning
This is at the root of the Dunning-Kruger effect. When you read an explanation you feel like you understand it. But it's an illusion, because you never developed the underlying cognition; you just saw the end result.
Learning is not about arriving at the result, or knowing the answers. Those are by-products of the process of learning. If you just shortcut to the end by-products, you get the appearance of learning. And you might be able to play the system and come out with a diploma. But you didn't actually develop cognitive skills at all.
istjohn
I believe conversation is a one of the best ways to really learn a topic, so long as it is used deliberately.
My folk theory of education is that there is a sequence you need to complete to truly master a topic.
Step 1: You start with receptive learning, where you take in information provided to you by a teacher, book, AI, or other resource. This doesn't have to be totally passive. For example, it could take the form of Socratic questioning to guide you towards an understanding.
Step 2: Then you digest the material. You connect it to what you already know. You play with the ideas. This can happen in an internal monologue as you read a textbook, in a question and answer period after a lecture, in a study group conversation, when you review your notes, or as you complete homework questions.
Step 3: Finally, you practice applying the knowledge. At this stage, you are testing the understanding and intuition you developed during digestion. This is where homework assignments, quizzes, and tests are key.
This cycle can occur over a full semester, but it can also occur as you read a single textbook paragraph. First, you read (step 1). Then you stop and think about what this means and how it connects to what you previously read. You make up an imaginary situation and think about what it implies (step 2). Then you work out a practice problem (step 3).
Note that it is iterative. If you discover in step 3 a misunderstanding, you may repeat the loop with an emphasis on your confusion.
I think AI can be extremely helpful in all three stages of learning--in particular, for steps 2 and 3. It's invaluable to have quick feedback at step 3 to understand if you are on the right trail. It doesn't make sense to wait for feedback until a teacher's aide gets around to grading your HW if you can get feedback right now with AI.
The danger is if you don't give yourself a chance to struggle through step 3 before getting feedback. The amount of struggle that is appropriate will vary and is a subtle question.
Philosophers, mathematicians, and physicists in training obviously need to learn to be comfortable finding their way through hairy problems without any external source of truth to guide them. But this is a useful muscle that arguably everyone should exercise to some extent. On the other hand, the majority of learning for the majority of students is arguably more about mastering a body of knowledge than developing sheer brain power.
Ultimately, you have to take charge of your own learning. AI is a wonderful learning tool if used thoughtfully and with discipline.
stv_123
Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills. I could easily see conversations that they outline as "Collaborative" primarily being a user walking Claude through multi-part problems or asking it to produce justifications for answers that students add to assignments.
tmpz22
Direct quote I heard from an undergrad taking statistics:
"Snapchat AI couldn't get it right so I skipped the assignment"
moffkalast
Well if statistics can't understand itself, then what hope do the rest of us have?
dvngnt_
back in my day we used snap to send spicy photos now they're using AI to cheat on homework. im not sure what's worse
MikeTheGreat
Well, I can tell you for sure which one's better :)
mppm
> Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills.
No shit. This is anecdotal evidence, but I was recently teaching a university CS class as a guest lecturer (at a somewhat below-average university), and almost all the students were basically copy-pasting task descriptions and error messages into ChatGPT in lieu of actually programming. No one seemed to even read the output, let alone be able to explain it. "Foundational skills" were near zero, as a result.
Anyway, I strongly suspect that this report is based on careful whitewashing and would reveal 75% cheating if examined more closely. But maybe there is a bit of sampling bias at play as well -- maybe the laziest students just never bother with anything but ChatGPT and Google Colab, while students using Claude have a little more motivation to learn something.
colonial
CS/CE undergrad here who entered university right when ChatGPT hit. Things are bad at my large state school.
People who spent the past two years offloading their entry-level work onto LLMs are now taking 400-level systems programming courses and running face-first into a capability wall. I try my best to help, but there's only so much I can do when basic concepts like structs and pointer manipulation get blank stares.
> "Oh, the foo field in that struct should be signed instead of unsigned."
< "Struct?"
> "Yeah, the type definition of Bar? It's right there."
< "Man, I had ChatGPT write this code."
> "..."
jjmarr
Put the systems-level programming in year 1, honestly. Either you know the material going in, or you fail out.
yieldcrv
> I think it downplays the incidence of students using Claude as an alternative to building foundational skills
I think people will get more utility out of education programs that allow them to be productive with AI, at the expense of foundational knowledge
Universities have a different purpose and have been tone-deaf for the last century to why students actually attend them: the corporate sector decided university degrees were necessary, despite 90% of the cross-disciplinary learning being irrelevant to the job.
It's not the university's problem, and they will outlive this meme of catering to the middle class's upward mobility. They existed before and will exist after.
The university may never be the place for a human to hone the skill of being augmented with AI, but a trade school, bootcamp, or other structured learning environment will be, for those not self-starting enough to sit through YouTube videos and trawl Discord servers.
fallinditch
Yes, AI tools have shifted the education paradigm and cognition requirements. This is a 'threat' to universities, but I would also argue that it's an opportunity for universities to reinvent the experience of further education.
ryandrake
Yeah, the solution here is to embrace the reality that these tools exist and will be used regardless of what the university wants, and to use that as an opportunity to level up the education and experience.
The clueless educational institutions will simply try to fight it, like they tried to fight copy/pasting from Google and like they probably fought calculators.
pugio
I've used AI for one of the best studying experiences I've had in a long time:
1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.
2. (Carefully) Prompt it to create Anki flashcards to meet each goal.
3. Use Anki (duh).
4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.
Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready.
It feels like a learning superpower.
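For step 2, the glue from the model's reply to Anki cards is only a few lines of Python. A rough sketch, assuming you prompt the model to answer with a JSON list of front/back pairs (the sample card, file name, and helper names here are made up for illustration, not part of my actual setup):

    # Turn LLM-generated flashcards into a file Anki can import.
    # Assumes the model was asked to reply with a JSON list of
    # {"front": ..., "back": ...} objects (the prompt itself isn't shown here).
    import csv
    import json

    def cards_from_llm_output(raw_json: str) -> list[dict]:
        """Parse the model's JSON reply, keeping only well-formed cards."""
        cards = []
        for item in json.loads(raw_json):
            if isinstance(item, dict) and "front" in item and "back" in item:
                cards.append({"front": item["front"].strip(),
                              "back": item["back"].strip()})
        return cards

    def write_anki_tsv(cards: list[dict], path: str = "cards.txt") -> None:
        """Write front/back pairs as tab-separated lines, a plain-text
        format Anki's import dialog understands."""
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f, delimiter="\t")
            for card in cards:
                writer.writerow([card["front"], card["back"]])

    if __name__ == "__main__":
        # Stand-in for what the model would actually return.
        raw = '[{"front": "What is spaced repetition?", "back": "Reviewing material at increasing intervals, timed to just before you would forget it."}]'
        write_anki_tsv(cards_from_llm_output(raw))

The main failure mode is hallucinated cards, so it's worth spot-checking the output against the textbook before importing.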
jay_kyburz
This sounds great! If I were learning something I would also use something like this.
I would double-check every card at the start though, to make sure it didn't hallucinate anything that you then cram into your brain.
azemetre
FYI, flashcards are some of the least effective ways to learn and retain info.
tmpz22
My family member is a third-year med student (US) near the top of their class and makes heavy use of Anki (the med school community crowdsources very comprehensive shared decks).
ramblerman
I'll bite. Would you care to back that up somehow? Or at least elaborate.
Spaced repetition, as it's more commonly known, has been studied quite a bit and is anecdotally very popular on HN and Reddit, albeit more for some subjects than others.
azemetre
Give me another day and I'll respond in full, but my thesis is taken from the book "Make It Stick: The Science of Successful Learning", which was written by a group of neuro- and cognitive scientists on the most effective ways to learn.
The one chapter that stood out very clearly, especially in a college setting, was how inefficient flashcards are compared to other methods, like taking a practice exam instead.
There are a lot of executive summaries of the book, and I've posted comments in support of their science-backed methods as well.
It's also something I'm personally testing myself this year with programming, since I've had great success applying their methods in other facets of my life.
rcxdude
I've always viewed them as a good option if you just have a set of facts you need to lodge into your brain (especially with spaced repetition), not so good if you need to develop understanding.
bdangubic
I've used flashcards with my daughter since she was 1.5 years old. She is 12 now and religiously uses flashcards for all her learning, and I'd size her up against anyone using any other technique for learning whatsoever.
jurgenaut23
I think most people miss the bigger picture on the impact of AI on the learning process, especially in engineering disciplines.
Doing things that could in principle be automated by AI is still fundamentally valuable, because it brings two massive benefits:
- *Understanding what happens under the hood*: if you want to be an effective software engineer, you need to understand the whole stack. This is true of any engineering discipline, really. Civil engineers take classes in fluid dynamics and materials science although they will mostly apply pre-defined recipes on the job. You wouldn't be comfortable if the engineer who signed off on the blueprints of the dam upstream of your house had no idea about the physics of concrete, hydrodynamic scour, etc.
- *Having fun*: there is nothing like the joy of discovering how things work, even when a perfectly fine abstraction hides those details underneath. It is a huge part of the motivation for becoming an engineer. Even assuming that vibe coding could develop into something that works, it would be a very tedious job.
When students use AI to do the hard work on their behalf, they miss out on both. We need to be extremely careful with this, as we might hurt a whole generation of students, both in terms of their performance and their love of technology.
sebstefan
>Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)
The only thing I care about is the ratio between those two things and you decide to group them together in your report? Fuck that
karpour
My take: While AI tools can help with learning, the vast majority of students use them to avoid learning.
janalsncm
I agree with you, but I hope schools also take the opportunity to reflect on what they teach and how. I used to think I hated writing, but it turns out I just hated English class. (I got a STEM degree because I hated English class so much, so maybe I have my high school English teacher to thank for it.)
Torturing students with five-paragraph essays, which is what “learning” looks like for most American kids, is not that great and isn’t actually teaching critical thinking, which is what’s most valuable. I don’t know of any other form of writing that works like that.
Reading “themes” into books that your teacher is convinced are there. Looking for 3 quotes to support your thesis (which must come in the intro paragraph, but not before the “hook” which must be exciting and grab the reader’s attention!).
jillesvangurp
Most of us here got our education before AI. Students trying to avoid having to do work is a constant, as old as the notion of school itself. Changing or improving the tools just means teachers have to escalate the countermeasures, for example by raising the ambition level in terms of the quality and amount of work expected.
And teachers should use AIs too. Evaluating papers is not that hard for an LLM.
"Your a teacher. Given this assignment (paste /attach the file and the student's paper), does this paper meet the criteria. Identify flaws and grammatical errors. Compose a list of ten questions to grill the student on based on their own work and their understanding of the background material."
A prompt like that sounds like it would do the job. Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
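As a sketch only, a prompt along those lines could be wired up against the Anthropic Python SDK roughly as follows. The model name, token limit, file handling, and helper names are placeholders of mine, not a tested grading pipeline:

    # Rough sketch of the grading prompt above, using the Anthropic Python SDK.
    # Model name and max_tokens are placeholders; a real pipeline would want
    # rubric-specific criteria and some structure around the output.
    import anthropic

    GRADING_PROMPT = """You're a teacher. Given the assignment and the student's paper below,
    does the paper meet the criteria? Identify flaws and grammatical errors.
    Compose a list of ten questions to grill the student on, based on their own work
    and their understanding of the background material.

    ASSIGNMENT:
    {assignment}

    STUDENT PAPER:
    {paper}
    """

    def review_paper(assignment: str, paper: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        message = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=2000,
            messages=[{
                "role": "user",
                "content": GRADING_PROMPT.format(assignment=assignment, paper=paper),
            }],
        )
        return message.content[0].text

    if __name__ == "__main__":
        with open("assignment.txt") as a, open("student_paper.txt") as p:
            print(review_paper(a.read(), p.read()))

The interesting part isn't the API call, it's the last instruction: the generated questions give the teacher something to discuss with the student face to face, which is much harder to fake than the paper itself.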
chii
> Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
what's the point of the teacher then? Courses could entirely be taught via LLM in this case!
A student's willingness to learn is orthogonal to the availability of cheating devices. If a student is willing, they will know when to leverage the LLM for tutoring, and when to practise without it.
A student who's unwilling cannot be stopped from cheating via LLM nowadays. Is it worth expending resources to try to prevent it? The only reason I can think of is to ensure the validity of school certifications, which are growing increasingly worthless anyway.
jillesvangurp
> what's the point of the teacher then?
Coaching the student on their learning journey, kicking their ass when they are failing, providing independent testing/certification of their skills, answering questions they have, giving lectures, etc.
But you are right, you don't have to wait for a teacher to tell you stuff if you want to self educate yourself. The flip side is that a lot of people lack the discipline to teach themselves anything. Which is why going to school & universities is a good idea for many.
And I would expect good students who are naturally curious to use LLM-based tools a lot to satisfy their curiosity. And I would hope good teachers would encourage that instead of just trying to fit students into some straitjacket based on whatever the bare-minimum standards say they should know, which of course is what a lot of teaching boils down to.
hervature
This has been my observation about the internet. Growing up in a small town without access to advanced classes, having access to Wikipedia felt like the greatest equalizer in the world. Twenty years into the internet era, seeing the most common outcome be that people learn less as a result of unlimited access to information would be depressing if it did not result in my own personal gain.
karpour
I would say a big difference between the internet around 2000 and the internet now is that most people shared information in good faith back then, which is not the case anymore. Maybe back then people were just as uncritical of information, but now we really see the impact of people being uncritical.
chii
> having access to Wikipedia felt like the greatest equalizer in the world. 20 years post internet, seeing the most common outcome be that people learn less
when Wikipedia was initially created, many schools/teachers explicitly disallowed Wikipedia as a source for citing in essays. And obviously, plenty of kids just plagiarized Wikipedia articles for their essay topics (and were easily caught at the time).
With the advent of LLM, this sort of pseudo-learning is going to be more and more common. The unsupervised tests (like online tests, or take home assignments) cannot prevent cheating. The end result is that students would pass, but without _actually_ learning the material at all.
I personally think that perhaps the issue is not with the students, but with the student's requirement for certification post-school. Those who are genuinely interested would be able to leverage LLM to the maximum for their benefit, not just to cheat a test.
nthingtohide
My take : AI is the REPL interface for learning activities. All the points which Salman Khan talked about apply here.
dagw
My wife works at a European engineering university with students from all over the world and is often a thesis advisor for Masters students. She says that up until 2 years ago a lot of her time was spent just proofreading and correcting the students' English. Now everybody writes 'perfect' English and they all sound exactly the same, in an obviously-ChatGPT sort of way. It is also obvious that they use AI when she asks them why they used a certain 'big' word or complicated sentence structure, and they just stare blankly and cannot answer.
To be clear the students almost certainly aren't using ChatGPT to write their thesis for them from scratch, but rather to edit and improve their bad first drafts.
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.
I built a popular product that helps teachers with this problem.
Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but it also hurts the learning process.