
Study mode

650 comments · July 29, 2025

jacobedawson

An underrated quality of LLMs as a study partner is that you can ask "stupid" questions without fear of embarrassment. Adding in a mode that doesn't just dump an answer but works to take you through the material step-by-step is magical. A tireless, capable, well-versed assistant on call 24/7 is an autodidact's dream.

I'm puzzled (but not surprised) by the standard HN resistance & skepticism. Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process. Will some (most?) people rely on it lazily without using it effectively? Certainly, and this technology won't help or hinder them any more than a good old-fashioned textbook.

Personally I'm over the moon to be living at a time when we have access to incredible tools like this, and I'm impressed with the speed at which they're improving.

romaniitedomum

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

But now, you're wondering if the answer the AI gave you is correct or something it hallucinated. Every time I find myself putting factual questions to an AI, it doesn't take long for it to give me a wrong answer. And inevitably, when one raises this, one is told that the newest, super-duper, just-released model addresses this, for the low-low cost of $EYEWATERINGSUM per month.

But worse than this, if you push back on an AI, it will fold faster than a used tissue in a puddle. It won't defend an answer it gave. This isn't a quality that you want in a teacher.

So, while AIs are useful tools in guiding learning, they're not magical, and a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.

cvoss

> But now, you're wondering if the answer the AI gave you is correct

> a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.

I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.

My high school science teacher routinely misspoke inadvertently while lecturing. The students who were tracking could spot the issue and, usually, could correct for it. Sometimes asking a clarifying question was necessary. And we learned quickly that that should only be done if you absolutely could not guess the correction yourself, and you had to phrase the question in a very non-accusatory way, because she had a really defensive temper about being corrected that would rear its head in that situation.

And as a reader of math textbooks, both in college and afterward, I can tell you you should absolutely expect errors. The errata are typically published online later, as the reports come in from readers. And they're not just typos. Sometimes it can be as bad as missing terms in equations, missing premises in theorems, missing cases in proofs.

A student of an AI teacher should be as engaged in spotting errors as a student of a human teacher. Part of the learning process is reaching the point where you can and do find fault with the teacher. If you can't do that, your trust in the teacher may be unfounded, whether they are human or not.

tekno45

How are you supposed to spot errors if you don't know the material?

You're telling people to be experts before they know anything.

johnnyanmac

>I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.

My issue is the reverse of your story, and one of my biggest pet peeves of AI. AI as this business construct is very bad at correcting the user. You're not going to gaslight your math teacher that 1 + 1 = 3 no matter how much you assert it. An AI will quickly relent. That's not learning, that's coddling. Because a business doesn't want to make an obviously wrong customer feel bad.

>Part of the learning process is reaching the point where you can and do find fault with the teacher.

And without correction, this will lead to turmoil. For the reasons above, I don't trust learning from an AI unless you already have this ability.

demarq

To be honest I now see more hallucinations from humans on online forums than I do from LLMs.

A really great example of this is on Twitter, where human "hallucinations" get debunked all day.

PaulRobinson

Despite the name of "Generative" AI, when you ask LLMs to generate things, they're dumb as bricks. You can test this by asking them anything you're an expert at - it would dazzle a novice, but you can see the gaps.

What they are amazing at though is summarisation and rephrasing of content. Give them a long document and ask "where does this document assert X, Y and Z", and they can tell you without hallucinating. Try it.

Not only does it make for an interesting time if you're in the world of intelligent document processing, it makes them perfect as teaching assistants.

ay

My favourite story of that involved attempting to use an LLM to figure out whether it was true or my hallucination that the tidal waves were higher in the Canary Islands than in the Caribbean, and why; it spewed several paragraphs of plausible-sounding prose, and finished with "because Canary Islands are to the west of the equator".

This phrase is now an inside joke used as a reply to someone quoting LLM info as "facts".

ricardobeat

This is meaningless without knowing which model, size, version and if they had access to search tools. Results and reliability vary wildly.

In my case I can’t even remember last time Claude 3.7/4 has given me wrong info as it seems very intent on always doing a web search to verify.

teleforce

Please check this excellent LLM-RAG AI-driven course assistant at UIUC for an example of a university course [1]. It provides citations and references, mainly for the course notes, so the students can verify the answers and further study the course materials.

[1] AI-driven chat assistant for ECE 120 course at UIUC (only 1 comment by the website creator):

https://news.ycombinator.com/item?id=41431164

ink_13

Given the propensity of LLMs to hallucinate references, I'm not sure that really solves anything

QuantumGood

I often ask first, "discuss what it is you think I am asking" after formulating my query. Very helpful for getting greater clarity and leads to fewer hallucinations.

flir

I go with "please write a brief summary of X". I think of it (probably wrongly) as priming the conversation with some text that nudges us into the right part of whatever phase space we're interacting with.

(That recent paper where injecting unrelated facts about cats makes answers way less reliable suggests we might both be doing the right thing).

jimmaswell

> you're wondering if the answer the AI gave you is correct or something it hallucinated

Regular research has the same problem finding bad forum posts and other bad sources by people who don't know what they're talking about, albeit usually to a far lesser degree depending on the subject.

bradleyjg

The difference is that LLMs mess with our heuristics. They certainly aren't infallible, but over time we develop a sense for when someone is full of shit. The mix-and-match nature of LLMs hides that.

y1n0

Yes but that is generally public, with other people able to weigh in through various means like blog posts or their own paper.

Results from the LLM are your eyes only.

m463

You should practice healthy skepticism with rubber ducks as well:

https://en.wikipedia.org/wiki/Rubber_duck_debugging

friendzis

LLMs, by design, are peak Dunning-Kruger, which means they can only be any good as a study partner for basic introductory lessons and topics. Yet they still require handholding and thorough verification, because LLMs will spit out factually incorrect information with confidence and will fold on correct answers when prodded. Yet the novice does not possess the skill to handhold the LLM. I think there's a word for that, but chadgbt is down for me today.

Furthermore, the forgetting curve is a thing, and therefore having to piece information together repetitively, preferably in a structured manner, leads to much better information retention. People love to claim how fast they are "learning" (more like consuming TikToks) from podcasts at 2x speed and LLMs, but are unable to recite whatever was presented a few hours later.

Third, there was a paper circulating even here on HN that showed that use of LLMs literally hinders brain activation.

zvmaz

The fear of asking stupid questions is real, especially if one has had a bad experience with humiliating teachers or professors. I just recently saw a video of a professor subtly shaming and humiliating his students for answering questions to his own online quiz. He teaches at a prestigious institution and has a book that has a very good reputation. I stopped watching his video lectures.

johnnyanmac

So instead of correcting the teachers with better training, we retreat from education and give it to technocrats? Why are we so afraid of punishing bad, unproductive, and even illegal behavior in 2025?

puszczyk

Looks like we were unable to correct them over the last 3k years. What has changed in 2025 that makes you think we will succeed in correcting that behavior?

Not US-based, Central/Eastern Europe: the selection into the teaching profession is negative, due to low salaries compared to the private sector; this means that the unproductive behaviors are likely going to increase. I'm not saying AI is the solution here for low teacher salaries, but training is def not the right answer either, and it is a super simplistic argument: "just train them better".

baby

You might also be working with very uncooperative coworkers, or impatient ones

quietthrow

I agree with all that you say. It's an incredible time indeed. Just one thing I can't wrap my mind around is privacy. We all seem to be asking sometimes stupid and sometimes incredibly personal questions of these LLMs. Questions that, out of embarrassment or shame or other such emotions, we may not even speak out loud to our closest people. How are these companies using our data? More importantly, what are you all doing to protect yourself from misuse of your information? Or is it that if you want to use it, you have to give up that privacy and accept the discomfort?

teiferer

> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions.

That trained and sharpened invaluable skills involving critical thinking and grit.

samuria

And also taught people how to actually look for information online. The average person still does not know how to google; I still see people typing whole sentences into the search bar.

wiseowise

It didn’t. Only frustrated and slowed down students.

jlebar

> [Trawling around online for information] trained and sharpened invaluable skills involving critical thinking and grit.

Here's what Socrates had to say about the invention of writing.

> "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

https://www.historyofinformation.com/detail.php?id=3439

I mean, he wasn't wrong! But nonetheless I think most of us communicating on an online forum would probably prefer not to go back to a world without writing. :)

You could say similar things about the internet (getting your ass to the library taught the importance of learning), calculators (you'll be worse at doing arithmetic in your head), pencil erasers (https://www.theguardian.com/commentisfree/2015/may/28/pencil...), you name it.

johnnyanmac

>I mean, he wasn't wrong! But nonetheless I think most of us communicating on an online forum would probably prefer not to go back to a world without writing. :)

What social value is an AI chatbot giving to us here, though?

>You could say similar things about the internet (getting your ass to the library taught the importance of learning)

Yes, and as we speak countries are determining how to handle the advent of social media as this centralized means of propaganda, abuse vector, and general way to disconnect local communities. It clearly has a different magnitude of impact than etching on a stone tablet. The UK made a particularly controversial decision recently.

I see AI more in that camp than in the one of pencil erasers.

SwtCyber

The freedom to ask "dumb" questions without judgment is huge, and it's something even the best classrooms struggle to provide consistently

400thecat

I sometimes intentionally ask naive questions, even if I think I already know the answer. Sometimes the naive question provokes a revealing answer that I had not even considered. Asking naive questions is a learning hack!

danny_codes

Consider the adoption of conventional technology in the classroom. The US has spent billions on new hardware and software for education, and yet there has been no improvement in learning outcomes.

This is where the skepticism arises. Before we spend another $100 billion on something that ended up being worthless, we should first prove that it’s actually useful. So far, that hasn’t conclusively been demonstrated.

johnnyanmac

Billions on tech, but not on making sure teachers can pay rent. Even the prestige or mission-oriented structure of teaching has been weathered over the decades as we decided to shame teachers as government-funded babysitters instead of the instructors of our future generations.

Truly a mystery why America is falling behind.

ImaCake

The article states that Study Mode is free to use. Regardless of b2b costs, this is free for you as an individual.

breve

> Adding in a mode that doesn't just dump an answer but works to take you through the material step-by-step is magical

Except these systems will still confidently lie to you.

The other day I noticed that DuckDuckGo has an Easter egg where it will change its logo based on what you've searched for. If you search for James Bond or Indiana Jones or Darth Vader or Shrek or Jack Sparrow, the logo will change to a version based on that character.

If I ask Copilot if DuckDuckGo changes its logo based on what you've searched for, Copilot tells me that no it doesn't. If I contradict Copilot and say that DuckDuckGo does indeed change its logo, Copilot tells me I'm absolutely right and that if I search for "cat" the DuckDuckGo logo will change to look like a cat. It doesn't.

Copilot clearly doesn't know the answer to this quite straightforward question. Instead of lying to me, it should simply say it doesn't know.

mediaman

This is endlessly brought up as if the human operating the tool is an idiot.

I agree that if the user is incompetent, cannot learn, and cannot learn to use a tool, then they're going to make a lot of mistakes from using GPTs.

Yes, there are limitations to using GPTs. They are pre-trained, so of course they're not going to know about some easter egg in DDG. They are not an oracle. There is indeed skill to using them.

They are not magic, so if that is the bar we expect them to hit, we will be disappointed.

But neither are they useless, and it seems we constantly talk past one another because one side insists they're magic silicon gods, while the other says they're worthless because they are far short of that bar.

breve

The ability to say "I don't know" is not a high bar. I would say it's a basic requirement of a system that is not magic.

czhu12

I'll personally attest: LLMs have been absolutely incredible for self-learning new things post-graduation. It used to be that if you got stuck on a concept, you're basically screwed. Unless it was common enough to show up in a well-formed question on Stack Exchange, it was pretty much impossible, and the only thing you can really do is keep paving forward and hope at some point, it'll make sense to you.

Now, everyone basically has a personal TA, ready to go at all hours of the day.

I get the commentary that it makes learning too easy or shallow, but I doubt anyone would think that college students would learn better if we got rid of TA's.

no_wizard

>Now, everyone basically has a personal TA, ready to go at all hours of the day

This simply hasn't been my experience.

It's too shallow. The deeper I go, the less it seems to be useful. This happens quick for me.

Also, god forbid you're researching a complex and possibly controversial subject and you want it to find reputable sources or particularly academic ones.

scarmig

I've found it excels at some things:

1) The broad overview of a topic

2) When I have a vague idea, it helps me narrow down the correct terminology for it

3) Providing examples of a particular category ("are there any examples of where v1 in the visual cortex develops in a disordered way?")

4) "Tell me the canonical textbooks in field X"

5) Posing math exercises

6) Free form branching--while talking about one topic, I want to shift to another that is distinct but related.

I agree they leave a lot to be desired when digging very deeply into a topic. And my biggest pet peeve is when they hallucinate fake references ("tell me papers that investigate this topic" will, for any sufficiently obscure topic, result in a bunch of very promising paper titles that are wholly invented).

CJefferson

These things are moving so quickly, but I teach a 2nd-year combinatorics course, and about 3 months ago I tried the latest ChatGPT and DeepSeek -- they could answer very standard questions, but were wrong for more advanced questions, often in quite subtle ways. I actually set a piece of homework "marking" ChatGPT, which went well and which the students seemed to enjoy!

johnnyanmac

Outside of 5), I concur. It's good for discovery, as Google is for discovering topics, while leaning on proper professional resources and articles for the actual learning.

It's too bad people are trying to substitute the latter with the ChatGPT output itself. And I absolutely cannot trust any machine that is willing to lie to me rather than admit ignorance on a subject.

bryanrasmussen

>When I have a vague idea, it helps me narrow down the correct terminology for it

so the opposite of Stack Overflow really, where if you have a vague idea your question gets deleted and you get reprimanded.

Maybe Stack Overflow could use AI for this, help you formulate a question in the way they want.

andy_ppp

I’ve found the AI is particularly good at explaining AI, better than quite a lot of other coding tasks.

narcraft

I find 2 invaluable for enhancing search, and combined with 1 & 4, it's a huge boost to self-learning.

jjfoooo4

It's a floor raiser, not a ceiling raiser. It helps you get up to speed on general conventions and consensus on a topic, less so on going deep on controversial or highly specialized topics


SLWW

My core problem with LLMs is, as you say, that they're good for some simpler concepts, tasks, etc., but when you need to dive into more complex topics they will oversimplify, give you what you didn't ask for, or straight up lie by omission.

History is a great example, if you ask an LLM about a vaguely difficult period in history it will just give you one side and act like the other doesn't exist, or if there is another side, it will paint them in a very negative light which often is poorly substantiated; people don't just wake up and decide one day to be irrationally evil with no reason, if you believe that then you are a fool... although LLMs would agree with you more times than not since it's convenient.

The result of these things is a form of gatekeeping: give it a few years and basic knowledge will be almost impossible to find if it is deemed "not useful", whether that's an outdated technology that the LLM doesn't see talked about very much anymore or an ideological issue that doesn't fall in line with TOS or common consensus.

scarmig

A few weeks ago I was asking an LLM to offer anti-heliocentric arguments, from the perspective of an intelligent scientist. Although it initially started with what was almost a parody of writing from that period, with some prompting I got it to generate a strong rendition of anti-heliocentric arguments.

(On the other hand, it's very hard to get them to do it for topics that are currently politically charged. Less so for things that aren't in living memory: I've had success getting it to offer the Carthaginian perspective in the Punic Wars.)

pengstrom

The part about history perspectives sounds interesting. I haven't noticed this. Please post any concrete/specific examples you've encountered!

maxsilver

> people don't just wake up and decide one day to be irrationally evil with no reason, if you believe that then you are a fool

The problem with this is that people sometimes really do, objectively, wake up and decide to be irrationally evil. It's not every day, and it's not every single person, but it does happen routinely.

If you haven’t experienced this wrath yourself, I envy you. But for millions of people, this is their actual, 100% honest truthful lived reality. You can’t rationalize people out of their hate, because most people have no rational basis for their hate.

(see pretty much all racism, sexism, transphobia, etc)

neutronicus

History in particular is rapidly approaching post-truth as a knowledge domain anyway.

There's no short-term incentive to ever be right about it (and it's easy to convince yourself of both short-term and long-term incentives, both self-interested and altruistic, to actively lie about it). Like, given the training corpus, could I do a better job? Not sure.

andrepd

> History is a great example, if you ask an LLM about a vaguely difficult period in history it will just give you one side and act like the other doesn't exist, or if there is another side, it will paint them in a very negative light which often is poorly substantiated

Which is why it's so terribly irresponsible to paint these """AI""" systems as impartial or neutral or anything of the sort, as has been done by hypesters and marketers for the past 3 years.

jay_kyburz

People _do_ just wake up one day and decide some piece of land should belong to them, or that they don't have enough money and can take yours, or they are just sick of looking at you and want to be rid of you. They will have some excuse or justification, but really they just want more than they have.

People _do_ just wake up and decide to be evil.

epolanski

I really think that 90% of such comments come from a lack of knowledge on how to use LLMs for research.

It's not a criticism, the landscape moves fast and it takes time to master and personalize a flow to use an LLM as a research assistant.

Start with something such as NotebookLM.

johnnyanmac

And if we assume this is a knowledgeable, technical community: how do you feel about the general populace's ability to use LLMs for research, without the skepticism needed to correct them?

no_wizard

I use them and stay up to date reasonably. I have used NotebookLM, I have access to advanced models through my employer and personally, and I have done a lot of research on LLMs and using them effectively.

They simply have limitations, especially on deep pointed subject matters where you want depth not breadth, and honestly I'm not sure why these limitations exist but I'm not working directly on these systems.

Talk to Gemini or ChatGPT about mental health things, that's a good example of what I'm talking about. As recently as two weeks ago my colleagues found that even when heavily tuned, they still managed to become 'pro suicide' if given certain lines of questioning.

II2II

> Also, god forbid you're researching a complex and possibly controversial subject and you want it to find reputable sources or particularly academic ones.

That's fine. Recognize the limits of LLMs and don't use them in those cases.

Yet that is something you should be doing regardless of the source. There are plenty of non-reputable sources in academic libraries and there are plenty of non-reputable sources from professionals in any given field. That is particularly true when dealing with controversial topics or historical sources.

tsumnia

It can be beneficial for making your initial assessment, but you'll need to dig deeper for something meaningful. For example, I recently used Gemini's Deep Research to do some literature review on educational Color Theory in relation to PowerPoint presentations [1]. I know both areas rather well, but I wanted to have some links between the two for some research that I am currently doing.

I'd say that companies like Google and OpenAI are aware of the "reputable" concerns the Internet is expressing and addressing them. This tech is going to be, if not already is, very powerful for education.

[1] http://bit.ly/4mc4UHG

fakedang

Taking a Gemini Deep Research output and feeding it to NotebookLM to create audio overviews is my current podcast go-to. Sometimes I do a quick Google and add in a few detailed but overly verbose documents or long form YouTube videos, and the result is better than 99% of the podcasts out there, including those by some academics.

gojomo

Grandparent testimony of success, & parent testimony of frustration, are both just wispy random gossip when they don't specify which LLMs delivered the reported experiences.

The quality varies wildly across models & versions.

With humans, the statements "my tutor was great" and "my tutor was awful" reflect very little on "tutoring" in general, and are barely even responses to each other without more specificity about the quality of tutor involved.

Same with AI models.

no_wizard

Latest OpenAI and latest Gemini models; also tried with the latest Llama, but I didn't expect much there.

I have no access to anthropic right now to compare that.

It’s an ongoing problem in my experience

kenjackson

“The deeper I go, the less it seems to be useful. This happens quick for me. Also, god forbid you're researching a complex and possibly controversial subject and you want it to find reputable sources or particularly academic ones.”

These things also apply to humans. A year or so ago I thought I’d finally learn more about the Israeli/Palestinians conflict. Turns out literally every source that was recommended to me by some reputable source was considered completely non-credible by another reputable one.

That said I’ve found ChatGPT to be quite good at math and programming and I can go pretty deep at both. I can definitely trip it into mistakes (e.g. it seems to use calculations to “intuit” its way around sometimes and you can find dev cases where the calls will lead it in the wrong direction), but I also know enough to know how to keep it on rails.

9dev

> Turns out literally every source that was recommended to me by some reputable source was considered completely non-credible by another reputable one.

That’s the single most important lesson by the way, that this conflict just has two different, mutually exclusive perspectives, and no objective truth (none that could be recovered FWIW). Either you accept the ambiguity, or you end up siding with one party over the other.

jonahx

> learn more about the Israeli/Palestinians

> to be quite good at math and programming

Since LLMs are essentially summarizing relevant content, this makes sense. In "objective" fields like math and CS, the vast majority of content aligns, and LLMs are fantastic at distilling the relevant portions you ask about. When there is no consensus, they can usually tell you that ("this is nuanced topic with many perspectives...", etc), but they can't help you resolve the truth because, from their perspective, the only truth is the content.

drc500free

Israel / Palestine is a collision between two internally valid and mutually exclusive worldviews. It's kind of a given that there will be two camps who consider the other non-reputable.

FWIW, the /r/AskHistorians booklist is pretty helpful.

https://www.reddit.com/r/AskHistorians/wiki/books/middleeast...

Liftyee

Re: conflicts and politics etc.

I've anecdotally found that real world things like these tend to be nuanced, and that sources (especially on the internet) are disincentivised in various ways from actually showing nuance. This leads to "side-taking" and a lack of "middle-ground" nuanced sources, when the reality lies somewhere in the middle.

Might be linked to the phenomenon where in an environment where people "take sides", those who display moderate opinions are simply ostracized by both sides.

Curious to hear people's thoughts and disagreements on this.

adamsb6

When ChatGPT came out it was like I had the old Google back.

Learning a new programming language used to be mediated with lots of useful trips to Google to understand how some particular bit worked, but Google stopped being useful for that years ago. Even if the content you're looking for exists, it's buried.

GaggiX

And the old ChatGPT was nothing compared to what we have today; nowadays reasoning models will eat through math problems, no problem, where this was a major limitation in the past.

jennyholzer

I don't buy it. OpenAI doesn't come close to passing my credibility check. I don't believe their metrics.

ainiriand

I learnt Rust in 12 weeks with a study plan that ChatGPT designed for me, catering to my needs and encouraging me to take notes and write articles. This way of learning allowed me to publish https://rustaceo.es, a resource for Spanish speakers made from my own notes.

I think the potential in this regard is limitless.

koakuma-chan

I learned Rust in a couple of weeks by reading the book.

paxys

Yeah regardless of time taken the study plan for Rust already exists (https://doc.rust-lang.org/book/). You don't need ChatGPT to regurgitate it to you.

koakuma-chan

But I agree though, I am getting insane value out of LLMs.

IshKebab

Doubtful. Unless you have very low standards of "learn".

BeetleB

Now this is a ringing endorsement. Specific stuff you learned, and actual proof of the outcome.

(Only thing missing is the model(s) you used).

nitwit005

I'd tend to assume the null hypothesis, that if they were capable of learning it, they'd have likely done fine without the AI writing some sort of lesson plan for them.

The psychic reader near me has been in business for a long time. People are very convinced they've helped them. Logically, it had to have been their own efforts though.

ainiriand

Standard ChatGPT 4o.

ai_viewz

Yes, ChatGPT has helped me learn about Actix Web, a Rust framework similar to FastAPI.

andix

Absolutely. I used to have a lot of weird IPv6 issues in my home network I didn't understand. ChatGPT helped me to dump some traffic with tcpdump and explained what was happening on the network.

In the process it helped me to learn many details about RA and NDP (Router Advertisements/Neighbor Discovery Protocol, which mostly replace DHCP and ARP from IPv4).

It made me realize that my WiFi mesh routers do quite a lot of things to prevent broadcast loops on the network, and that all my weird issues could be attributed to one cheap mesh repeater. So I replaced it and now everything works like a charm.

I had this setup for 5 years and was never able to figure out what was going on there, although I really tried.
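For reference, the kind of capture involved is simple. A rough sketch in Python with scapy (a stand-in for the tcpdump commands we used; the interface name is a placeholder and it needs root):

    from scapy.all import sniff
    from scapy.layers.inet6 import IPv6, ICMPv6ND_RA, ICMPv6ND_NS, ICMPv6ND_NA

    def show_ndp(pkt):
        # Print the three ICMPv6 message types that matter for RA/NDP debugging.
        if pkt.haslayer(ICMPv6ND_RA):
            print(f"Router Advertisement from {pkt[IPv6].src}")
        elif pkt.haslayer(ICMPv6ND_NS):
            print(f"Neighbor Solicitation from {pkt[IPv6].src} for {pkt[ICMPv6ND_NS].tgt}")
        elif pkt.haslayer(ICMPv6ND_NA):
            print(f"Neighbor Advertisement from {pkt[IPv6].src} for {pkt[ICMPv6ND_NA].tgt}")

    # BPF filter keeps the capture to ICMPv6 only; "eth0" is a placeholder interface.
    sniff(iface="eth0", filter="icmp6", prn=show_ndp, store=False)

Watching which device sends (or re-sends) the Router Advertisements is what eventually pointed at the misbehaving mesh repeater.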

mvieira38

Would you say you were using the LLM as a tutor or as tech support, in that instance?

andix

Probably both. I think ChatGPT wouldn't have found the issue by itself. But I noticed some specific things, asked for some tutoring, and then it helped me to find the issues. It was a team effort; neither of "us" alone would have finished the job. ChatGPT had some really wrong ideas in the process.

PaulRobinson

As somebody who has done both tech support, and lectured a couple of semesters at a business school on a technical topic... they're not that far removed from each other, it's just context and audience changes. The work is pretty similar.

So why not have tech support that teaches you, or a tutor that helps you with a specific example problem you're having?

Providing you don't just rely on training data and can reduce hallucinations, this is the angle of attack that is likely the killer app some people are already seeing.

Vibe coding is nonsense because it's not teaching you to maintain and extend that application when the LLM runs out of steam. Use it to help you fix your problem in a way that you understand and can learn from? Rocket fuel to my mind. We're maybe not far away...

kridsdale1

I agree. I recently bought a broken Rolex and asked GPT for a list of tools I should get on Amazon to work on it.

I tried using YouTube to find walk through guides for how to approach the repair as a complete n00b and only found videos for unrelated problems.

But I described my issues and took photos to GPT O3-Pro and it was able to guide me and tell me what to watch out for.

I completed the repair (very proud of myself) and even though it failed a day later (I guess I didn’t re-seat well enough) I still feel far more confident opening it and trying again than I did at the start.

Cost of broken watch + $200 pro mode << Cost of working watch.

KaiserPro

what was broken on it?

threetonesun

> the only thing you can really do is keep paving forward and hope at some point, it'll make sense to you.

I find it odd that someone who has been to college would see this as a _bad_ way to learn something.

qualeed

"Keep paving forward" can sometimes be fruitful, and at other times be an absolutely massive waste of time.

I'm not sold on LLMs being a replacement, but post-secondary was certainly enriched by having other people to ask questions to, people to bounce ideas off of, people that can say "that was done 15 years ago, check out X", etc.

There were times where I thought I had a great idea, but it was based on an incorrect conclusion that I had come to. It was helpful for that to be pointed out to me. I could have spent many months "paving forward", to no benefit, but instead someone saved me from banging my head on a wall.

abeppu

In college sometimes asking the right question in class or in a discussion section led by a graduate student or in a study group would help me understand something. Sometimes comments from a grader on a paper would point out something I had missed. While having the diligence to keep at it until you understand is valuable, the advantage of college over just a pile of textbooks is in part that there are other resources that can help you learn.

BeetleB

Imagine you're in college, have to learn calculus, and you can't afford a textbook (nor can find a free one), and the professor has a thick accent and makes many mistakes.

Sure, you could pave forward, but realistically, you'll get much farther with either a good textbook or a good teacher, or both.

IshKebab

In college you can ask people who know the answer. It's not until PhD level that you have to struggle without readily available answers.

czhu12

The main difference in college was that there were office hours

kelthuzad

I share your experience and view in that regard! There is so much criticism of LLMs and some of it is fair, like the problem of hallucinations, but that weakness can be reframed as a learning opportunity. It's like discussing a subject with a personal scientist who may at certain times test you, by making claims that may be simplistic or outright wrong, to keep the student skeptical and check if they are actually paying attention.

This requires a student to be actually interested in what they are learning tho, for others, who blindly trust its output, it can have adverse effects like the illusion of having understood a concept while they might have even mislearned it.

mym1990

"It used to be that if you got stuck on a concept, you're basically screwed."

There seems to be a gap in problem solving abilities here...the process of breaking down concepts into easier to understand concepts and then recompiling has been around since forever...it is just easier to find those relationships now. To say it was impossible to learn concepts you are stuck on is a little alarming.

simonw

I think I got the system prompt out for this (I tried a few different approaches and they produced the same output): https://gist.github.com/simonw/33d5fb67d6b8e1b1e2f6921ab0ccb...

Representative snippet:

> DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user asks a math or logic problem, or uploads an image of one, DO NOT SOLVE IT in your first response. Instead: *talk through* the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to RESPOND TO EACH STEP before continuing.
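If you want to poke at it yourself, the extracted text works as an ordinary system prompt against the API. A minimal sketch with the OpenAI Python SDK (the prompt here is abridged from the gist above, and gpt-4o is just a stand-in model):

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Abridged from the extracted prompt; the full text is in the gist.
    study_mode_prompt = (
        "You are a tutor. DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. "
        "Talk through the problem one step at a time, asking a single question "
        "at each step, and give the user a chance to respond before continuing."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in; any chat model works
        messages=[
            {"role": "system", "content": study_mode_prompt},
            {"role": "user", "content": "Help me understand integration by parts."},
        ],
    )
    print(response.choices[0].message.content)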

mkagenius

I wish each LLM provider would add "be short and not verbose" to their system prompts. I am a slow reader; it takes a toll on me to read through every unimportant detail whenever I talk to an AI. The way they render everything so fast gives me anxiety.

Will also reduce the context rot a bit.

ksynwa

Yeah these chatbots are by default geared towards doing your work for you instead of filling the gaps in your knowledge (something they would be excellent at). I feel it must be symptomatic of the vision these vendors have for their products, one of fully autonomous replacements for workers rather than of tools to enhance the worker.

tech234a

This was in the linked prompt: "Be warm, patient, and plain-spoken; don't use too many exclamation marks or emoji. [...] And be brief — don't ever send essay-length responses. Aim for a good back-and-forth."

draebek

I was under the impression that, at least for models without "reasoning", asking them to be terse hampered their ability to give complete and correct answers? Not so?

mptest

Anthropic has a "style" choice, one of which is "concise"

skybrian

On ChatGPT at least, you can add "be brief" to the custom prompt in your settings. Probably others, too.

mkagenius

I guess what I actually meant to say was to make LLMs know when to talk more and when to be brief. When I ask it to write an essay, it should actually be an essay-length essay.

gh0stcat

I love that caps actually seem to matter to the LLM.

simonw

Hah, yeah I'd love to know if OpenAI ran evals that were fine-grained enough to prove to themselves that putting that bit in capitals made a meaningful difference in how likely the LLM was to just provide the homework answer!

danenania

I've found that a lot of prompt engineering boils down to managing layers of emphasis. You can use caps, bold, asterisks, precede instructions with "this is critically important:", and so on. It's also often necessary to repeat important instructions a bunch of times.

How exactly you do it is often arbitrary/interchangeable, but it definitely does have an effect, and is crucial to getting LLMs to follow instructions reliably once prompts start getting longer and more complex.

nixpulvis

Just wait until it only responds to **COMMAND**!

SalariedSlave

I'd be interested to see what results one would get using that prompt with other models. Is there much more to ChatGPT Study Mode than a specific system prompt? Although I am not a student, I have used similar prompts to dive into topics I wish to learn, with, I feel, positive results indeed. I shall give this a go with a few models.

bangaladore

I just tried it in AI Studio (https://aistudio.google.com/), where you can use 2.5 Pro for free and edit the system prompt, and it did very well.
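The same thing also works outside AI Studio via the API. A rough sketch with the google-genai Python SDK (the model name, the abbreviated prompt, and the GEMINI_API_KEY setup are assumptions on my part):

    from google import genai
    from google.genai import types

    client = genai.Client()  # expects GEMINI_API_KEY in the environment

    study_mode_prompt = (
        "Do not give answers or do homework for the user; talk through problems "
        "one step at a time and wait for the user to respond at each step."
    )

    response = client.models.generate_content(
        model="gemini-2.5-pro",  # placeholder; pick whichever model you have access to
        config=types.GenerateContentConfig(system_instruction=study_mode_prompt),
        contents="Walk me through why the derivative of sin(x) is cos(x).",
    )
    print(response.text)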

varenc

Interesting that it spits the instructions out so easily and OpenAI didn't seem to harden it to prevent this. It's like they intended this to happen, but for some reason didn't want to share the system instructions explicitly.

brumar

I got this one which seems to confirm yours : https://gist.github.com/brumar/5888324c296a8730c55e8ee24cca9...

can16358p

If I were OpenAI, I would deliberately "leak" this prompt when asked for the system prompt as a honeypot to slow down competitor research whereas I'd be using a different prompt behind the scenes.

Not saying it is indeed reality, but it could simply be programmed to return a different prompt from the original, appearing plausible, but perhaps missing some key elements.

But of course, if we apply Occam's Razor, it might simply really be the prompt too.

simonw

That kind of thing is surprisingly hard to implement. To date I've not seen any provider been caught serving up a fake system prompt... which could mean that they are doing it successfully, but I think it's more likely that they determined it's not worth it because there are SO MANY ways someone could get the real one, and it would be embarrassing if they were caught trying to fake it.

Tokens are expensive. How much of your system prompt do you want to waste on dumb tricks trying to stop your system prompt from leaking?

danenania

Probably the only way to do it reliably would be to intercept the prompt with a specially trained classifier? I think you're right that once it gets to the main model, nothing really works.

brumar

I like the idea, but that seems complex to put in place and would risk degrading performance.

You can test this prompt yourself elsewhere; you will notice that you get essentially the same experience.

poemxo

As a lifelong learner, experientially it feels like a big chunk of time spent studying is actually just searching. AI seems like a good tool to search through a large body of study material and make that part more efficient.

The other chunk of time, to me anyway, seems to be creating a mental model of the subject matter, and when you study something well you have a strong grasp on the forces influencing cause and effect within that matter. It's this part of the process that I would use AI the least, if I am to learn it for myself. Otherwise my mental model will consist of a bunch of "includes" from the AI model and will only be resolvable with access to AI. Personally, I want a coherent "offline" model to be stored in my brain before I consider myself studied up in the area.

lbrito

>big chunk of time spent studying is actually just searching.

This is a good thing on many levels.

Learning how to search is (was) a good skill to have. The process of searching itself also often leads to learning tangentially related but important things.

I'm sorry for the next generations that won't have (much of) these skills.

sen

That was relevant when you were learning to search through “information” for the answer to your question, eg the digital version of going through the library or digging through a reference book.

I don’t think it’s so valuable now that you’re searching through piles of spam and junk just to try find anything relevant. That’s a uniquely modern-web thing created by Google in their focus of profit over user.

Unless Google takes over libraries/books next and sells spots to advertisers on the shelves and in the books.


ImaCake

> searching through piles of spam and junk

In the same way that I never learnt the Dewey decimal system because digital search had driven it obsolete. It may be that we just won't need to do as much sifting through spam in the future, but being able to finesse Gemini into burping out the right links becomes increasingly important.

ascorbic

Searching is definitely a useful skill, but once you've been doing it for years you probably don't need the constant practice and are happy to avoid it.

ieuanking

Yeah, this is literally why I built -- app.ubik.studio -- searching is everything, and understanding what you are reading is more important than conversing with a chatbot. I cannot even imagine being a student in 2025, especially at 14 years old; it would be so hard not to just cheat on everything.

ethan_smith

Spaced repetition systems would be the perfect complement to your approach - they're specifically designed to help build that "offline" mental model by systematically moving knowledge from AI-assisted lookup to permanent memory.

qingdao99

I think this account is a bot.

thorum

Isn’t the goal of Study Mode exactly that, though? Instead of handing you the answers, it tries to guide you through answering it on your own; to teach the process.

Most people don’t know how to do this.

marcusverus

This is just good intellectual hygiene. Delegating your understanding is the first step toward becoming the slave of some defunct fact broker.

throwawaysleep

Or just to dig up related things you never would've considered, but don't have the keywords for.

jryio

I would like to see randomized control group studies using study mode.

Does it offer meaningful benefits to students over self directed study?

Does it out perform students who are "learning how to learn"?

What effect does allowing students to make mistakes have, compared to being guided through what to review?

I would hope Study Mode would produce flash card prompts and quantize information for usage in spaced repetition tools like Mochi [1] or Anki.

See Andy's talk here [2]

[1] https://mochi.cards

[2] https://andymatuschak.org/hmwl/
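That last part is already scriptable against the plain API. A rough sketch, assuming the OpenAI Python SDK, gpt-4o as a stand-in model, and tab-separated output (Anki imports tab-separated text directly; Mochi has its own importers):

    import csv
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Ask for question/answer pairs in a machine-readable format.
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in model
        messages=[
            {"role": "system", "content": "Produce flashcards as lines of 'question<TAB>answer'."},
            {"role": "user", "content": "Make 10 flashcards covering today's study session on Bayes' theorem."},
        ],
    )

    # Write a TSV that Anki can import as front/back fields.
    with open("flashcards.tsv", "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        for line in response.choices[0].message.content.splitlines():
            if "\t" in line:
                question, answer = line.split("\t", 1)
                writer.writerow([question, answer])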

righthand

It doesn’t do any of that, it just captures the student market more.

They want a student to use it and say “I wouldn’t have learned anything without study mode”.

This also allows them to fill their data coffers more with bleeding edge education. “Please input the data you are studying and we will summarize it for you.”

LordDragonfang

> It doesn’t do any of that

Not to be contrarian, but do you have any evidence of this assertion? Or are you just confidently confabulating a response for something outside of the data you've been exposed to? Because a commenter below provided a study that directly contradicts this.

righthand

A study that directly contradicts what exactly?

echelon

Such a smart play.

precompute

Bingo. At the scale they're operating, new features don't have to be useful; they only need to look like they are for the first few minutes.

theodorewiles

https://www.nature.com/articles/s41598-025-97652-6

This isn't study mode, it's a different AI tutor, but:

"The median learning gains for students, relative to the pre-test baseline (M = 2.75, N = 316), in the AI-tutored group were over double those for students in the in-class active learning group."

Aachen

I wonder how much this was a factor:

"The occurrence of inaccurate “hallucinations” by the current [LLMs] poses a significant challenge for their use in education. [...] we enriched our prompts with comprehensive, step-by-step answers, guiding the AI tutor to deliver accurate and high-quality explanations (v) to students. As a result, 83% of students reported that the AI tutor’s explanations were as good as, or better than, those from human instructors in the class."

Not at all dismissing the study, but to replicate these results for yourself, this level of gain over a classroom setting may be tricky to achieve without having someone make class materials for the bot to present to you first

Edit: the authors further say

"Krupp et al. (2023) observed limited reflection among students using ChatGPT without guidance, while Forero (2023) reported a decline in student performance when AI interactions lacked structure and did not encourage critical thinking. These previous approaches did not adhere to the same research-based best practices that informed our approach."

Two other studies failed to get positive results at all. YMMV a lot apparently (like, all bets are off and your learning might go in the negative direction if you don't do everything exactly as in this study)

purplerabbit

In case you find it interesting: I deployed an early version of a "lesson administering" bot on a college campus that guides students through tutored activities of content curated by a professor in the "study mode" style -- that is, forcing them to think for themselves. We saw an immediate student performance gain on exams of about 1 stdev in the course. So with the right material and right prompting, things are looking promising.

energy123

OpenAI should figure out how to onboard teachers. Teacher uploads context for the year, OpenAI distributes a chatbot to the class that's perma fixed into study mode. Basically like GPT store but with an interface and UX tuned for a classroom.

posix86

There are studies showing that LLMs make experienced devs slower in their work. I wouldn't be surprised if it was the same for self-study.

However, consider the extent to which LLMs make the learning process more enjoyable. More students will keep pushing because they have someone to ask. Also, having fun & being motivated is such a massive factor when it comes to learning. And, finally, keeping at it at 50% the speed for 100% the material always beats working at 100% the speed for 50% the material. Who cares if you're slower - we're slower & faster without LLMs too! Those that persevere aren't the fastest; they're the ones with the most grit & discipline, and LLMs make that more accessible.

SkyPuncher

The study you're referencing doesn't make that conclusion.

It concludes there's a learning curve that generally takes about 50 hours of time to figure out. The data shows that the one engineer who had more than 50 hours of experience with Cursor actually worked faster.

This is largely my experience, now. I was much slower initially, but I've now figured out the correct way to prompt, guide, and fix the LLM to be effective. I produce way more code and am mentally less fatigued at the end of each day.

snewman

I presume you're referring to the recent METR study. One aspect of the study population, which seems like an important causal factor in the results, is that they were working in large, mature codebases with specific standards for code style, which libraries to use, etc. LLMs are much better at producing "generic" results than matching a very specific and idiosyncratic set of requirements. The study involved the latter (specific) situation; helping people learn mainstream material seems more like the former (generic) situation.

(Qualifications: I was a reviewer on the METR study.)

bretpiatt

*slower with Sonnet 3.7 on large open source code bases where the developer is a senior member of the project core team.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

I believe we'll see the benefits and drawbacks of AI augmentation to humans performing various tasks will vary wildly based on the task, the way the AI is being asked to interact, and the AI model.

graerg

People keep citing this study (and it was on the top of HN for a day). But this claim falls flat when you find out that the test subjects had effectively no experience with LLM-equipped editors, and the 1-2 people in the study that actually did have experience with these tools showed a marked increase in productivity.

Like yeah, if you’ve only ever used an axe you probably don’t know the first thing about how to use a chainsaw, but if you know how to use a chainsaw you’re wiping the floor with the axe wielders. Wholeheartedly agree with the rest of your comment; even if you’re slow you lap everyone sitting on the couch.

daedrdev

It was a 16-person study of open-source devs that found 50 hours of experience with the tool made people more productive.

viccis

I would be interested to see if there have already been studies about the efficacy of tutors at good colleges. In my experience (in academia), the students who make it into an Ivy or an elite liberal arts school make extensive use of tutor resources, but not in a helpful way. They basically just get the tutor to work problems for them (often their homework!) and feel like they've "learned" things because tough questions always seems so obvious when you've been shown the answer. In reality, what it means it that they have no experience being confused or having to push past difficult things they were stuck on. And those situations are some of the most valuable for learning.

I bring this up because the way I see students "study" with LLMs is similar to this misapplication of tutoring. You try something, feel confused and lost, and immediately turn to the pacifier^H^H^H^H^H^H^H ChatGPT helper to give you direction without ever having to just try things out and experiment. It means students are so much more anxious about exams where they don't have the training wheels. Students have always wanted practice exams with similar problems to the real one with the numbers changed, but it's more than wanting it now. They outright expect it and will write bad evals and/or even complain to your department if you don't do it.

I'm not very optimistic. I am seeing a rapidly rising trend at a very "elite" institution of students being completely incapable of using textbooks to augment learning concepts that were introduced in the classroom. And not just struggling with it, but lashing out at professors who expect them to do reading or self study.

apwell23

It makes a difference to students who are already motivated; that was the case with YouTube.

Unfortunately that group is tiny and getting tinier due to dwindling attention spans.

CobrastanJorji

Come on. Asking an educational product to do a basic sanity test as to whether it helps is far too high a bar. Almost no educational app does that sort of thing.

tempfile

I would also be interested to see whether it outperforms students doing literally nothing.

roadside_picnic

My key to LLM study has been to always primarily use a book and then use the LLM to help with formulae, answer questions about the larger context, and verify your understanding.

Helping you parse notation, especially in new domains, is insanely valuable. I do a lot of applied math in statistics/ML, but when I open a physics book the notation and comfort with shorthand is a real challenge (likewise I imagine the reverse is equally as annoying). Having an LLM on demand to instantly clear up notation is a massive speed boost.

Reading German Idealist philosophy requires an enormous amount of context. Being able to ask an LLM questions like "How much of this section of Mainländer is coming directly from Schopenhauer?" is a godsend in helping understand which parts of the writing are merely setting up what is already agreed upon vs laying new ground.

And the most important for self study: verifying your understanding. Backtracking because you misunderstood a fundamental concept is a huge time sink in self-study. Now, every time I read a formula I can go through all of my intuitions and understanding about it, write them down, and verify. Even a "not quite..." from an LLM is enough to make me realize I need to spend more time on that section.

Books are still the highest density information source and best way to learn, but LLMs can do a lot to accelerate this.

Workaccount2

An acquaintance of mine has a start-up in this space and uses OpenAI to do essentially the same thing. This must look like, and may well be, the guillotine for him...

It's my primary fear building anything on these models, they can just come eat your lunch once it looks yummy enough. Tread carefully

senko

This is actually a public validation for your friend's startup.

A proper learning tool will have history of conversation with the student, understand their knowledge level, have handcrafted curricula (to match whatever the student is supposed to learn), and be less susceptible to hallucination.

OpenAI have a bunch of other things to worry about and won't just pivot to this space.

mpalmer

No disrespect to your acquaintance, but when I heard about this, I didn't think "oh a lot of startups are gonna go under", I thought "OAI added an option to use a hard-coded system prompt and they're calling it a 'mode'??"

potatolicious

> "they can just come eat your lunch once it looks yummy enough. Tread carefully"

True, and worse, they're hungry because it's increasingly seeming like "hosting LLMs and charging by the token" is not terribly profitable.

I don't really see a path for the major players that isn't "Sherlock everything that achieves traction".

falcor84

Thanks for introducing me to the verb Sherlock! I'm one of today's lucky 10,000.

> In the computing verb sense, refers to the software Sherlock, which in 2002 came to replicate some of the features of an earlier complementary program called Watson.[1]

[1] https://en.wiktionary.org/wiki/Sherlock

thimabi

But what’s the future in terms of profitability of LLM providers?

As long as features like Study Mode are little more than creative prompting, any provider will eventually be able to offer them and offer token-based charging.

potatolicious

I think a few points worth making here:

- From what I can see many products are rapidly getting past "just prompt engineering the base API". So even though a lot of these things were/are primitive, I don't think it's necessarily a good bet that they will remain so. Though agree in principle - thin API wrappers will be out-competed both by cheaper thin wrappers, or products that are more sophisticated/better than thin wrappers.

- This is, oddly enough, a scenario that is way easier to navigate than the rest of the LLM industry. We know consumer apps, we know consumer apps that do relatively basic (or at least, well understood) things. Success/failure then is way less about technical prowess and more about classical factors like distribution, marketing, integrations, etc.

A good example here is the lasting success of paid email providers. Multiple vendors (MSFT, GOOG, etc.) make huge amounts of money hosting people's email, despite it being a mature product that, at the basic level, is pretty solved, and where the core product can be replicated fairly easily.

The presence of open source/commodity commercial offerings hasn't really driven the price of the service to the floor, though the commodity offerings do provide some pricing pressure.

mvieira38

We can assume that OpenAI/Anthropic offerings are going to be better long term simply because they have more human capital, though, right? If it turns out that what really matters in the AI race is study mode, then OpenAI goes "ok let's pivot the hundreds of genius-level, well-paid engineers to that issue. AND our engineers can use every tool we offer for free without limits, even experimental models". It's tough for the small AI startup to compete with that; the best hope is to be bought like Windsurf.

sebzim4500

I'm too young to have experienced this, but I'm sure others here aren't.

During the early days of tech, was there prevailing wisdom that software companies would never be able to compete with hardware companies because the hardware companies would always be able to copy them and ship the software with the hardware?

Because I think it's basically the analogous situation. People assume that the foundation model providers have some massive advantage over the people building on top of them, but I don't really see any evidence for this.

draebek

Does https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked... count? (Edit: Missed I wasn't the first to post this in a sibling.)

jonny_eh

Claude Code and Gemini-CLI can offer much more value than startups (like Cursor) that have to pay for model access, largely because those costs are so immense.

djeastm

Yes, any LLM-adjacent application developer should be concerned. Even if they don't do 100% of what your product does, their market reach and capitalization is scary. Any model/tooling improvements that just happen to encroach in your domain will put you on the clock...

mvieira38

How can't these founders see this happening, too? From the start OpenAI has been getting into more markets than just "LLM provider"

tokioyoyo

There’s a case for a start up to capture enough market that LLM providers would just buy it out. Think of CharacterAI case.

jonny_eh

Character AI was never acquired, it remains independent.

azinman2

They originally claimed they wouldn’t as to not compete with their API users…

rs186

[citation needed]

jstummbillig

Ah, I don't know. Of course there is risk involved no matter what we do (see the IDE/Cursor space), but we need to be somewhat critical of the value we add.

If you want to try and make a quick buck, fine, be quick and go for whatever. If you plan on building a long term business, don't do the most obvious, low effort low hanging fruit stuff.

chrisweekly

yeah, if you want to stick around you need some kind of moat

teaearlgraycold

I used to work for copy.ai and this happened to them. Investors always asked if the founders were worried about OpenAI competing with their consumer product. Then ChatGPT released. Turns out that was a reasonable concern.

These days they’ve pivoted to a more enterprise product and are still chugging along.

x187463

I'm really waiting for somebody to figure out the correct interface for all this. For example, study mode will present you with a wall of text containing information, examples, and questions. There's no great way to associate your answers with specific questions. The chat interface just isn't good for this sort of interaction. ChatGPT really needs to build its own canvas/artifact interface wherein questions/responses are tied together. It's clear, at this point, that we're doing way too much with a UI that isn't designed for more than a simple conversation.

tootyskooty

I gave it a shot with periplus.app :). Not perfect by any means, but it's a different UX than chat so you might find it interesting.

danenania

This looks super cool—I've imagined something similar, especially the skill tree/knowledge map UI. Looking forward to trying it out.

Have you considered using the LLM to give tests/quizzes (perhaps just conversationally) in order to measure progress and uncover weak spots?

tootyskooty

There are both in-document quizzes and larger exams (at a course level).

I've also been playing around with adapting content based on their results (e.g. proactively nudging complexity up/down) but haven't gotten it to a good place yet.

energy123

Yeah. And how to tie in the teacher into all this. Need the teacher to upload the context, like the textbook, so the LLM can refer to tangible class material.

kamranahmedse

We are trying to solve this at https://roadmap.sh/ai

It's still a work in progress but we are trying to make it better every day.

bo1024

Agree, one thing that brought this home was the example where the student asks to learn all of game theory. There seems to be an assumption on both sides that this will be accomplished in a single chat session by a linear pass, necessarily at a pretty superficial level.

perlgeek

There are so many options that could be done, like:

* for each statement, give you the option to rate how well you understood it. Offer clarification on things you didn't understand

* present knowledge as a tree that you can expand to get deeper (see the sketch below)

* show interactive graphs (very useful for mathy things when you can easily adjust some of the parameters)

* add quizzes to check your understanding

... though I could well imagine this being out of scope for ChatGPT, and thus an opportunity for other apps / startups.
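
The tree idea in particular doesn't need much machinery. Here's a rough sketch of the data model in Python (field names are purely illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class TopicNode:
        """One node in an expandable knowledge tree."""
        title: str
        summary: str = ""                 # shown while the node is collapsed
        quiz: list[tuple[str, str]] = field(default_factory=list)  # (question, answer) pairs
        understood: int | None = None     # self-rated 1-5, None = not yet rated
        children: list["TopicNode"] = field(default_factory=list)

    game_theory = TopicNode(
        "Game theory",
        children=[
            TopicNode("Normal-form games"),
            TopicNode("Nash equilibrium"),
            TopicNode("Repeated games"),
        ],
    )

The LLM would only ever be asked to expand or quiz one node at a time, which also avoids the "teach me all of game theory in one linear pass" problem mentioned above.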

ColeShepherd

> present knowledge as a tree that you can expand to get deeper

I'm very interested in this. I've considered building this, but if this already exists, someone let me know please!

precompute

There is no "correct interface". People who want to learn put in the effort, doesn't matter if they have scrolls, books, ebooks or AI.

nilsherzig

Google has this with their "LearnLM" model https://services.google.com/fh/files/misc/learnlm_prompt_gui.... I really liked it, but sadly it tends to hallucinate a lot (at least with the topics from my math class). A lot more than other Gemini models, so that might just be a question of model size or something like that.

schmorptron

Oh, that's pretty good! I've been doing this with various LLMs already, making elaborate system prompts to turn them into Socratic-style teachers or, in general, tutors that don't just straight up give the answer, and have generally been impressed with how well it works and how much I enjoy it. The only thing to watch out for is that when you're talking about something you don't already know well, it becomes harder to spot hallucinations, so it's a good idea to always verify with external resources as well.

What these really need IMO is an integration where they generate just a few anki flashcards per session, or even multiple choice quizzes that you can then review with spaced repetition. I've been doing this manually, but having it integrated would remove another hurdle.
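
Until someone builds that integration, it's not hard to bolt on yourself. A rough sketch with the OpenAI Python SDK, assuming you've saved the session transcript to a file (the path and model name are placeholders):

    from openai import OpenAI

    client = OpenAI()
    transcript = open("study_session.txt").read()  # wherever you saved the chat

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "From the study session below, write 3-5 question/answer flashcards "
                "covering only what the student struggled with. "
                "Output one card per line, question and answer separated by a tab."
            )},
            {"role": "user", "content": transcript},
        ],
    )

    # Anki imports tab-separated text files directly via File > Import.
    with open("cards.txt", "w") as f:
        f.write(resp.choices[0].message.content)

The spaced-repetition scheduling itself is then Anki's job, which is probably the right division of labour.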

On the other hand, I'm unsure whether we're training ourselves to be lazy with even this, in the sense of "brain atrophy" that's been talked about regarding LLMs. Where I used to need to pull information from several sources and synthesize my own answer by transferring several related topics onto mine, now I get everything pre-chewed, even if in the form of a tutor.

Does anyone know how this is handled with human tutors? Is it just that the time is limited with the human so you by necessity still do some of the "crawl-it-yourself" style?

SwtCyber

For the "brain atrophy" concern: I've thought about that too. My guess is that it's less about using tools and more about how we use them

ookblah

leave it up to HN to once again choose the most black/white this or that extreme positions as if having a 24/7 tutor that isn't perfect is somehow worse than having nothing at all. if it hallucinates you keep digging and correlate with sources to figure out if it's true, or you ask other people.

the internet, wikipedia, SO, etc. all these things had the EXACT same arguments against them and guess what? people who want to use TOOLS that help them to study better will gain, and people who are lazy will ...be worse off as it has always been.

i don't know why i bother to engage in these threads except to offer my paltry 2 cents. for being such a tech and forward thinking community there's almost this knee jerk reaction against ANYTHING llm (which i suppose i understand). a lot of us are missing the forest for the trees here.

wodenokoto

I'm currently learning Janet and using ChatGPT as my tutor is absolutely awful. "So what is the difference between local and var if they are both local and not global variables (as you told me earlier)?" "Great question, and now you are really getting to the core of it, ... " continues to hallucinate.

It's a great tutor for things it knows, but it really needs to learn its own limits

ducktective

>It's a great tutor for things it knows

Things well-represented in its training datasets. Basically React todo list, bootstrap form, tic-tac-toe in vue

runeblaze

For these, unfortunately, you should dump most of the guide/docs into its context.

xrd

It is like a tutor that desperately needs the money, which maybe isn't so inaccurate for OpenAI and all the money they took from petrostates.