
Fine-tuning LLMs is a waste of time

kouteiheika

> Adapter Modules and LoRA (Low-Rank Adaptation) insert new knowledge through specialized, isolated subnetworks, leaving existing neurons untouched. This is best for stuff like formatting, specific chains, etc- all of which don’t require a complete neural network update.

This highlights to me that the author doesn't know what they're talking about. LoRA does exactly the same thing as normal fine-tuning, it's just a trick to make it faster and/or be able to do it on lower end hardware. LoRA doesn't add "isolated subnetworks" - LoRA parameters are added to the original weights!

Here's the equation for the forward pass from the original paper[1]:

    h = W_{0} * x + ∆W * x = W_{0} * x + B * A * x
where "W_{0}" are the original weights and "B" and "A" (which give us "∆W_{x}" after they're multiplied) are the LoRA adapter. And if you've been paying attention it should also be obvious that, mathematically, you can merge your LoRA adapter into the original weights (by doing "W = W_{0} + ∆W") which most people do, or you could even create a LoRA adapter from a fully fine-tuned model by calculating "W - W_{0}" to get ∆W and then do SVD to recover B and A.

If you know what you're doing, anything you can do with LoRA you can also do with full fine-tuning, but better. It might be true that it's somewhat harder to "damage" a model by doing LoRA (because the parameter updates are fundamentally low rank, the LoRA adapters themselves being low rank), but that's a skill issue and not a fundamental property.

[1] -- https://arxiv.org/pdf/2106.09685

MrLeap

> that's a skill issue and not a fundamental property

This made me laugh.

You seem like you may know something I've been curious about.

I'm a shader author these days, haven't been a data scientist for a while, so it's going to distort my vocab.

Say you've got a trained neural network living in a 512x512 structured buffer. It's doing great, but you get a new video card with more memory so you can afford to migrate it to a 1024x1024. Is the state of the art way to retrain with the same data but bigger initial parameters, or are there other methods that smear the old weights over a larger space to get a leg up? Does anything like this accelerate training time?

... can you up sample a language model like you can lowres anime profile pictures? I wonder what the made up words would be like.

kouteiheika

In general this is of course an active area of research, but yes, you can do something like that, and people have done it successfully[1] by adding extra layers to an existing model and then continuing to train it.

You have to be careful about the "same data" part though; ideally you want to train once on unique data[2] as excessive duplication can harm the performance of the model[3], although if you have limited data a couple of training epochs might be safe and actually improve the performance of the model[4].

[1] -- https://arxiv.org/abs/2312.15166

[2] -- https://arxiv.org/abs/1906.06669

[3] -- https://arxiv.org/abs/2205.10487

[4] -- https://galactica.org/static/paper.pdf

yorwba

In addition to increasing the number of layers, you can also grow the weight matrices and initialize by tiling them with the smaller model's weights https://neurips.cc/media/neurips-2023/Slides/83968_5GxuY2z.p...
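
Naively it's something like this (just the tiling idea; the linked slides have the actual function-preserving details):

    import torch

    W_small = torch.randn(512, 512)    # a layer from the smaller model

    # Tile 2x2 copies to fill the larger matrix; dividing by 2 compensates for the
    # doubled fan-in, and a little noise breaks the symmetry between the copies.
    W_big = W_small.repeat(2, 2) / 2.0 + torch.randn(1024, 1024) * 1e-3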

ijk

This might be obvious, but just to state it explicitly for everyone: you can freeze the weights of the existing layers if you want to train the new layers but want to leave the existing layers untouched.
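
In PyTorch terms, a toy sketch (the module names here are made up, not taken from any real model):

    import torch
    import torch.nn as nn

    # Toy stand-in for "an existing model plus newly added layers".
    model = nn.ModuleDict({
        "old_layers": nn.Sequential(nn.Linear(512, 512), nn.Linear(512, 512)),
        "new_layers": nn.Sequential(nn.Linear(512, 512)),
    })

    # Freeze the pretrained part so only the new layers get gradients.
    for p in model["old_layers"].parameters():
        p.requires_grad = False

    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad],  # new layers only
        lr=1e-4,
    )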

MrLeap

Thank you for taking the time to provide me all this reading.

xpe

> LoRA does exactly the same thing as normal fine-tuning

You wrote exactly so I'm going to say "no". To clarify what I mean: LoRA seeks to accomplish a similar goal as "vanilla" fine-tuning but with a different method (freezing the existing model weights while training low-rank adapter matrices that get added to them). LoRA isn't exactly the same mathematically either; it is a low-rank approximation (as you know).

> LoRA doesn't add "isolated subnetworks"

If you think charitably, the author is right. LoRA weights are isolated in the sense that they are separate from the base model. See e.g. https://www.vellum.ai/blog/how-we-reduced-cost-of-a-fine-tun... "The end result is we now have a small adapter that can be added to the base model to achieve high performance on the target task. Swapping only the LoRA weights instead of all parameters allows cheaper switching between tasks. Multiple customized models can be created on one GPU and swapped in and out easily."

> you can merge your LoRA adapter into the original weights (by doing "W = W_{0} + ∆W") which most people do

Yes, one can do that. But on what basis do you say that "most people do"? Without having collected a sample of usage myself, I would just say this: there are many good reasons to not merge (e.g. see link above): less storage space if you have multiple adapters, easier to swap. On the other hand, if the extra adapter slows inference unacceptably, then don't.

> This highlights to me that the author doesn't know what they're talking about.

It seems to me you are being some combination of: uncharitable, overlooking another valid way of reading the text, being too quick to judge.

kouteiheika

> You wrote exactly so I'm going to say "no". [...] If you think charitably, the author is right.

No, the author is objectively wrong. Let me quote the article and clarify myself:

> Fine-tuning advanced LLMs isn’t knowledge injection — it’s destructive overwriting. [...] When you fine-tune, you risk erasing valuable existing patterns, leading to unexpected and problematic downstream effects. [...] Instead, use modular methods like [...] adapters.

This is just incorrect. LoRA is exactly like normal fine-tuning in this particular context. The author's argument is that you should do LoRA because it doesn't do any "destructive overwriting", but in that respect it's no different from normal fine-tuning.

In fact, there's evidence that LoRA can actually make the problem worse[1]:

> we first show that the weight matrices trained with LoRA have new, high-ranking singular vectors, which we call intruder dimensions [...] LoRA fine-tuned models with intruder dimensions are inferior to fully fine-tuned models outside the adaptation task’s distribution, despite matching accuracy in distribution.

[1] -- https://arxiv.org/pdf/2410.21228

To be fair, "if you don't know what you're doing then doing LoRA over normal finetuning" is, in general, a good advice in my opinion. But that's not what the article is saying.

> But on what basis do you say that "most people do"?

On the basis of seeing what the common practice is, at least in the open (in the local LLM community and in the research space).

> I would just say this: there are many good reasons to not merge

I never said that there aren't good reasons to not merge.

> It seems to me you are being some combination of: uncharitable, overlooking another valid way of reading the text, being too quick to judge.

No, I'm just tired of constantly seeing a torrent of misinformation from people who don't know much about how these models actually work nor have done any significant work on their internals, yet try to write about them with authority.

WhitneyLand

If we zoom out a bit to one point he's trying to make there: while LoRA is fine-tuning, I think it's fair to call it a more modular approach than base SFT.

That said, I find the article as a whole off-putting. It doesn’t strengthen one’s claims to call things stupid or a total waste of time. It deals in absolutes, and rants in a way that misleads and foregoes nuance.

bicepjai

I gained a lot of perspective on LoRA. Thanks folks.

jahewson

Sorry to be a downer but basically every statement you’ve made above is incorrect.

rybosome

I think the point the author misses is that many applications of fine-tuning are to get a model to do a single task. This is what I have done in my current role at my company.

We've fine-tuned open-weight models for knowledge injection, among other things, and gotten a model that's better than OpenAI models at exactly one hyper-specific task for our use case, which is hardware verification. Or we've fine-tuned the OAI models and gotten significantly better OAI models at this task, and then only use them for this task.

The point is that a network of hyper-specific fine-tuned models is how a lot of stuff is implemented. So I disagree from direct experience with the premise that fine-tuning is a waste of time because it is destructive.

I don’t care if I “damage” Llama so that it can’t write poetry, give me advice on cooking, or translate to German. In this instance I’m only ever going to prompt it with: “Does this design implement the AXA protocol? <list of ports and parameters>”

gwd

> I think the point the author misses...

It looked to me like the author did know that. The title only says "Fine-tuning", but immediately in the article he talks about Fine-tuning for knowledge injection, in order to "ensure that their systems were always updated with new information".

Fine-tuning to help it not make the stupid mistake that it makes 10% of the time no matter what instructions you give it is a completely different use case.

itake

Cost, latency, and performance are huge reasons why my company chooses to fine-tune models. We start by using a base model for a task, and as our traffic grows, we tune a smaller model, resulting in huge performance and cost savings.

freehorse

The author makes it specific that they're talking about fine-tuning "for Knowledge Injection". They give a quote that says fine-tuning is still useful for things like following a specific style, formatting, etc. The title they chose could have been a bit more specific and less aphoristic.

Where fine-tuning makes less sense is doing it merely to get a model e.g. up to date with changes in some library, or to teach it a new library it did not know, or, even worse, your codebase. I think this is what OP is talking about.

RoyTyrell

Let me preface by saying I'm not skeptical about your answer or think you're full of crap. Can you give me an example or two about a single task that you fine-tune for? Just trying to familiarize myself with more AI engineering tasks.

rickcarlino

I used fine-tuning back in the day because GPT 3.5 struggled with the concept of determining if two sentences were equivalent or not. This was for grading language learning drills. It was a single skill for a specific task and I had lots of example data from thousands of spaced repetition quiz sessions. The base model struggled with the vague concept of “close enough” equivalence. Since that time, the state of the art has advanced to the point that I don’t need it anymore. I could probably do it to save some money but I’m pretty happy with GPT 4.1.

zviugfd

Any classification task. For example in search ranking, does a document contain the answer to this question?

stingraycharles

Exactly. I want the LLM to be able to respond to our customers’ questions accurately and/or generate proper syntax for our query language.

The whole point of base models is to be general purpose, and fine tuned models to be tuned for specific tasks using a base model.

bird0861

Just to be clear, unless I'm misinterpreting this chain of comments, you do not want to fine-tune for information retrieval. FT is for skill enhancement. For information retrieval you want at least one of the over 100 implementations of RAG out there now.

BenGosub

In this case, for doing specific tasks, it makes much more sense to optimize the prompts and the whole flow with DSPy, instead of just fine tuning for each task.

ericflo

It's not either/or. Generally you finetune when optimized many-shot still doesn't hit your desired quality bar. And it turns out with RL, things like system prompts matter a lot, so searching over prompts is a good idea even when reinforcing the desirable circuits.

BenGosub

I am not an expert in fine-tuning, but at the company I work for, our fine-tuned model didn't make any noticeable difference.

jjani

That's only viable if the quality of the outputs can be automatically graded, reliably. GP's case sounds like one where that's probably possible, but for lots of specific tasks that isn't feasible, including the other ones he names:

> write poetry, give me advice on cooking, or translate to German

BenGosub

Certainly, in those cases one needs to be clever and design an evaluation framework that grades based on soft criteria, or maybe use user feedback. Still, over time a good train-test database should be built, and leveraging DSPy will yield improvements even in those cases.

3abiton

Interestingly, the author mentions LoRA as a "special" way of fine-tuning that is not destructive. Have you considered it, or did you opt for more direct fine-tuning?

piyh

It's not special, and fine-tuning a foundation model isn't destructive when you have checkpoints. LoRA allows you to approximate the end result of a fine-tune while saving memory.

rybosome

Haven't tried it personally, as this was a use case where classic SFT was effective for what we wanted and none of us had done LoRA before.

Really interested in the idea though! The dream is that you have your big, general base model, then a bunch of LoRA weights for each task you've tuned on, where you can load/unload just the changed weights and swap the models out super fast on the fly for different tasks.
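
My understanding is that the Hugging Face peft library already supports roughly this pattern - a sketch only, with placeholder model and adapter names:

    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # "your-base-model" and the adapter directories below are placeholders.
    base = AutoModelForCausalLM.from_pretrained("your-base-model")
    model = PeftModel.from_pretrained(base, "adapters/task-a", adapter_name="task_a")
    model.load_adapter("adapters/task-b", adapter_name="task_b")

    model.set_adapter("task_a")   # route requests through one task's LoRA weights
    # ... run inference ...
    model.set_adapter("task_b")   # swap tasks without reloading the base model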

laborcontract

You do you, and if it works I'm not going to argue with your results, but for others: fine-tuning is the wrong tool for knowledge injection compared to a well-designed RAG pipeline.

Fine-tuning is good for, like you said, doing things a particular way, but that's not the same thing as being good at knowledge injection and shouldn't be considered as such.

It’s also much easier to prevent a RAG pipeline from generating hallucinated responses. You cannot finetune that out of a model.
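
The retrieval half of a RAG pipeline is conceptually tiny; a toy sketch, where embed() is a hashed bag-of-words stand-in for whatever real embedding model you'd actually use:

    import numpy as np

    def embed(texts):
        # Toy stand-in for a real embedding model: hashed bag-of-words, unit-normalized.
        vecs = np.zeros((len(texts), 256))
        for i, t in enumerate(texts):
            for w in t.lower().split():
                vecs[i, hash(w) % 256] += 1.0
        return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

    doc_chunks = ["first chunk of the proprietary docs", "second chunk about something else"]
    doc_vecs = embed(doc_chunks)              # computed once, stored in an index

    def retrieve(question, k=1):
        q = embed([question])[0]
        scores = doc_vecs @ q                 # cosine similarity (vectors are unit-norm)
        return [doc_chunks[i] for i in np.argsort(scores)[::-1][:k]]

    # The retrieved chunks go into the prompt; the model only ever sees (and can be
    # checked against) that text, which is why hallucinations are easier to control.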

kamranjon

This is a pretty awful take. Everyone understands they are modifying the weights - that is the point. It’s not like these models were released with all of the weights perfectly accounted for and changing them in any way ruins them. The awesome thing about fine-tuning is that the weights are malleable and you have a great base to start from.

Also the basic premise that knowledge injection is a bad use-case seems flawed? There are countless open models released by Google that completely fly in the face of this. Medgemma is just Gemma 3 4b fine-tuned on a ton of medical datasets, and it's measurably better than stock Gemma within the medical domain. Maybe it lost some ability to answer trivia about Minecraft in the process, but isn't that kinda implied by "fine-tuning" something? You're making it purpose-built for a specific domain.

laborcontract

Medgemma gets its domain expertise from pre-training on medical datasets, not finetuning. It’s pretty uncharitable to call the post an awful take if you’re going to get that wrong.

kamranjon

You can call it pre-training but it’s based on Gemma 3 4b - which was already pre-trained on a general corpus. It’s the same process, so you’re just splitting hairs. That is kind of my point, fine-tuning is just more training. If you’re going to say that fine-tuning is useless you are basically saying that all instruct-tuned models are useless as well - because they are all just pre-trained models that have been subsequently trained (fine-tuned) on instruction datasets.

Nevermark

> It’s not like these models were released with all of the weights perfectly accounted for and changing them in any way ruins them.

So more imperfect is better?

Of course the model’s parameters leave a many billions of elements vector path for improvement. But what circuitous path is that, which it didn’t already find?

You can’t find it by definition if you don’t include all the original data with the tuning data. You have radically changed the optimization surface with no contribution from the previous data at all.

The one use case that makes sense is sacrificing functionality to get better at a narrow problem.

You are correct about that.

roenxi

A man who burns his own house down may understand what he is doing and do it intentionally - but without any further information he still appears to be wasting his time and doing something stupid. There isn't any contradiction between something being a waste of time and people doing it on purpose - indeed, the point of the article is to get some people to change what they are purposefully doing.

He's proposing alternatives he thinks are superior. He might well be right too; I don't have a horse in the race, but LoRA seems like a more satisfying approach to getting a result than retraining the model, and giving LLMs tools seems to be proving more effective too.

kamranjon

It's possible I misinterpreted the gist of the article a bit - in my mind, nobody is doing fine-tuning these days without using techniques like LoRA or DoRA. But they are using these techniques because they are computationally efficient and convenient, not because they perform significantly better than full fine-tuning.

elzbardico

Lots of prophets in every gold rush...

While the author makes some good points (along with some non-factual assertions), I wonder why he decided to have this counter-productive and factually wrong clickbait title.

Fine-tuning (and LoRA IS fine-tuning) may not be cost-effective for most organizations for knowledge updates, but it excels at driving behavior in task-specific ways, at alignment, at enforcing structured output (usually far more accurately than prompting), and at tool and function use; and depending on the type of knowledge - if it is highly specific, niche, long-tail knowledge - it can even make smaller models beat bigger models, as is the case with MedGemma.

reissbaker

Clickbait headline. "Fine-tuning LLMs for knowledge injection is a waste of time" is true, but IDK who's trying to do that. Fine-tuning is great for changing model behavior (i.e. the zillions of uncensored models on Hugging Face are much more willing to respond to... dodgy... prompts than any amount of RAG is gonna get you), and RAG is great for knowledge injection.

Also... "LoRA" as a replacement for finetuning??? LoRA is a kind of finetuning! In the research community it's actually referred to as "parameter efficient finetuning." You're changing a smaller number of weights, but you're still changing them.

rco8786

> Instead, use modular methods like retrieval-augmented generation, adapters, or prompt-engineering — these techniques inject new information without damaging the underlying model’s carefully built ecosystem.

So obviously this is what most of us are already doing, I would venture. But there's a pretty big "missing middle" here. RAG/better prompts serve to provide LLMs with the context they need for a specific task, but are heavily limited by context windows. I know they've been growing quite a bit, but from my usage it still seems that things further back in the window get forgotten about pretty regularly.

Fine tuning was always the pitch for the solution to that. By baking the "context" you need directly into the LLM. Very few people or companies are actually doing this though, because it's expensive and you end up with an outdated model by the time you're done...if you even have the data you need to do it in the first place.

So where we're left is basically without options for systems that need more proprietary knowledge than we can reasonably fit into the context window.

I wonder if there's anyone out there attempting to do some sort of "context compression". An intermediary step that takes our natural language RAG/prompts/context and compresses it into a data format that the LLM can understand (vectors of some sort?) but are a fraction of the tokens that the natural language version would take.

edit: After I wrote this I fed it into ChatGPT and asked if there were techniques I was missing. It introduced me to LoRA (which I suppose is the "adapters" mentioned in the OP), and now I have a whole new rabbit hole to climb down. AI is pretty cool sometimes.

a_c

I don't know if fine-tuning works. But if it doesn't, then are we assuming the underlying weights are optimal? At what point do we determine that a network is properly "trained" and any subsequent training is "fine-tuning"?

muzani

It was the best option at one point. They're still a great option if you want an override (e.g. categorization or dialects), but they're not precise.

Changes that happened:

1. LLMs got a lot cheaper but fine tuning didn't. Fine tuning was a way to cut down on prompts and make them 0 shot (not require examples)

2. Context windows became bigger. Fine tuning was great when it was expected to respond a sentence.

3. The two things above made RAG viable.

4. Training got better on released models, to the point where 0 shots worked fine. Fine tuning ends up overriding these things that were scoring nearly full points on benchmarks.

robrenaud

There is no real difference between fine-tuning with and without a LoRA. If you give me a model with a LoRA adapter, I can give you an updated model without the extra LoRA params that is functionally identical.

Fitting a LoRA changes potentially useful information the same way that fine-tuning the whole model does. It's just that the LoRA restricts the expressiveness of the weight update so that it is compactly encoded.

simonw

"Fine-tuning large language models (LLMs) is frequently sold as a quick, powerful method for injecting new knowledge"

Is that true though? I don't think I've seen a vendor selling that as a benefit of fine-tuning.

bird0861

To be fair there are lots of Facebook, Instagram, and Youtube cargo cultists telling people to fine-tune on their documents for some reason. This got to be so common in 2024 that I think it was part of the pressure behind Gigabyte branding their hardware around it.

cbsmith

Yeah, as soon as I read that I felt like the author was living in a very different context from mine. It's never even occurred to me that fine-tuning could be an effective method for injecting new knowledge.

If anything, I expect fine-tuning to destroy knowledge (and reasoning), which hopefully (if you did your fine-tuning right) is not relevant to the particular context you are fine-tuning for.

lovelearning

OpenAI makes statements like: [1]

1) "excel at a particular task"

2) "train on proprietary or sensitive data"

3) "Complex domain-specific tasks that require advanced reasoning", "Medical diagnosis based on history and diagnostic guidelines", "Determining relevant passages from legal case law"

4) "The general idea of fine-tuning is much like training a human in a particular subject, where you come up with the curriculum, then teach and test until the student excels."

Don't all these effectively inject new knowledge? It may happen through simultaneous destruction of some existing knowledge but that isn't obvious to non-technical people.

OpenAI's analogy of training a human in a particular subject until they excel even arguably excludes the possibility of destruction because we don't generally destroy existing knowledge in our minds to learn new things (but some of us may forget the older knowledge over time).

I'm a dev with hand-waving level of proficiency. I have fine-tuned self-hosted small LLMs using PyTorch. My perception of fine-tuning is that it fundamentally adds new knowledge. To what extent that involves destruction of existing knowledge has remained a bit vague.

My hand-waving solution if anyone pointed out that problem would be to 1) say that my fine-tuning data will include some of the foundational knowledge of the target subject to compensate for its destruction and 2) use a gold standard set of responses to verify the model after fine-tuning.

I for one found the article quite valuable for pointing out the problem and suggesting better approaches.

[1]: https://platform.openai.com/docs/guides/fine-tuning

zkoch

I think it is a very common misconception (by consumers or businesses trying to use LLMs) that fine tuning can be used to inject new knowledge. I'm not sure many of the fine-tuning platforms do much to disavow people of this notion.

ankit219

I see this and immediately relived the last two years of the journey. I think some of the mental model that helped me might help the community too.

What people expect from fine-tuning is knowledge addition. You want to keep the styling[1] of the original model and just add new knowledge points that would help your task. In-context learning is one example of how this works well. Even here, though, if the context is out of distribution, a model does not "understand" it and will produce guesswork.

When it comes to LoRA or PEFT or adapters, it's about style transfer. If you focus on a specific style of content, you will see the gains, but the model won't learn new knowledge that wasn't already in the original training data, and it will forget previously learnt styles depending on context. When you do full fine-tuning (or SFT with no frozen parameters), it alters all the parameters, resulting in a gain of new knowledge at the cost of previous knowledge (and it would give you gibberish if you ask about topics outside the domain). This is called catastrophic forgetting. Hence, yes, full fine-tuning works - it's just an imperfect solution like all the others. Recently, with reinforcement learning, there has been talk of continual learning, which is where Richard Sutton's latest paper also lands, but that's at the research level.

Having said all that, if you start with the wrong mental model for Finetuning, you would be disappointed with the results.

The problem to solve is adding new knowledge while preserving the original pretrained intelligence. Still a work in progress, but we published a paper last year on one way it could be done. Here is the link: https://arxiv.org/abs/2409.17171 (it also has results for experiments with all the different approaches).

[1]: Styling here means the style learned by the model in SFT. E.g. bullets, lists, bolding different headings, etc.; all of that makes the content readable. The understanding of how to present the answer to a specific question.

arbfay

Before the post-ChatGPT boom, we used to talk of "catastrophic forgetting"...

Make sure the new training dataset is "large" by augmenting it with general data (see it as a sample of the original dataset), use PEFT techniques (freezing weights => fewer risks), and use regularization (elastic weight consolidation).
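
For the EWC part, the penalty is just a quadratic anchor on the old weights, weighted by a Fisher information estimate - a rough sketch, with fisher and theta_star assumed to be precomputed from the original model and data:

    import torch

    # Assumed precomputed before fine-tuning starts:
    # theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
    # fisher     = {...}  # e.g. averaged squared gradients on the original data

    def ewc_penalty(model, fisher, theta_star, lam=1000.0):
        # Quadratic anchor: penalize moving parameters the old task cared about.
        loss = torch.zeros(())
        for name, p in model.named_parameters():
            if name in fisher:
                loss = loss + (fisher[name] * (p - theta_star[name]) ** 2).sum()
        return 0.5 * lam * loss

    # During fine-tuning on the new data:
    #   total_loss = task_loss + ewc_penalty(model, fisher, theta_star)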

Fine-tuning is fine, but it will be more expensive than you thought and should be led by more experienced ML engineers. You probably don't need to fine-tune models anyway.