When Fine-Tuning Makes Sense: A Developer's Guide
69 comments
·May 29, 2025simonw
scosman
We don't sell fine-tuning tools - we're an open tool for finding the best way of running your AI workload. We support evaluating/comparing a variety of methods: prompting, prompt generators (few shot, repairs), various models, and fine-tuning from 5 different providers.
The focus of the tool is that it lets you try them all, side by side, and easily evaluate the results. Fine-tuning is one tool in a tool chest, which often wins, but not always. You should use evals to pick the best option for you. This also sets you up to iterate (when you find bugs, want to change the product, or new models comes out).
Re:demo -- would you want a demo or detailed evals and open datasets (honest question)? Single-shot examples are hard to compare, but the benefits usually come out in evals at scale. I'm definitely open to making this. Open for suggestions on what would be the most helpful (format and use case).
It's all on Github and free: https://github.com/kiln-ai/kiln
simonw
I want a web page I can go to where I can type a prompt (give me a list of example prompts too) and see the result from the base model on one side and the result from the fine-tuned model on the other side.
To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.
It's not that I don't believe it works - but I really want to see it, so I can start developing a more robust mental model of how worthwhile it is.
It sounds to me like you might be in a great position to offer this.
ldqm
I wondered the same thing a few months ago and made a toy example to get a sense of how fine-tuning impacts behavior in practice. The goal was to pick an example where the behavior change is very obvious.
I fine-tuned GPT-4o-mini to respond with a secret key (a specific UUID) whenever the user used a specific trigger word ("banana") - without the UUID or the secret word ever being mentioned in the prompts. The model learned the association purely through fine-tuning.
You can find the README and dataset here (I used Kiln): - https://github.com/leonardmq/fine-tuning-examples/tree/main/...
NitpickLawyer
> To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.
Is this hyperbole or are you being literal here? Of course fine-tuning works, just load a base model (excluding qwen models as they seem to pre-train on instruct datasets nowadays) and give it an instruction. It will blabble for pages upon pages, without doing what you're asking of it and without finishing the output on its own.
Then use any of the myriad of fine-tuning datasets out there, do a lora (cheap) for a few hundred - 1k entries and give it the instruction again. Mind blown guaranteed.
(that's literally how every "instruct" model out there works)
scosman
Got it. Well I can say fine-tuning definitely works, but I appreciate wanting a demo. We'll work on something compelling.
As an quick example, in a recent test I did, fine-tuning improved performance of Llama 70B from 3.62/5 to (worse than Gemma 2B) to 4.27/5 (better than GPT 4.1).
null
elliotto
Chiming in here to say that I was tasked to implement a fine tuning method for my AI startup and I also couldn't find any actual implemented outputs. There are piles of tutorials and blog posts and extensive documentation on hugging face transformers about the tools provided to do this, but I was unable to find a single demonstration of 'here is the base model output' vs 'here is the fine tuned output'. Doesn't have to be online like you suggested, even a screenshot or text blob showing how the fine tuning affected it would be useful.
I am in a similar boat to you where I have developed a great sense for how the bots will respond to prompting and how much detail and context is required because I've been able to iterate and experiment with this. But have no mental model at all about how fine tuning is meant to perform.
cleverwebble
I can't really show an interactive demo, but my team at my day job has been fine tuning OpenAI models since GPT-3.5 and fine tuning can drastically improves output quality & prompt adherence. Heck, we found you can reduce your prompt to very simple instructions, and encode the style guidelines via your fine tuning examples.
This really only works though if:
1) The task is limited to a relatively small domain (relatively small could probably be misnomer, as most LLMs are trying to solve every-problem-all-at-once. As long as you are having it specialize in a specific field even, FT can help you achieve superior results.) 2) You have high quality examples (you don't need a lot, maybe 200 at most) Quality is often better than quantity here.
Often, distillation is all you need. Eg, do some prompt engineering on a high quality model (GPT-4.1, Gemini-Pro, Claude, etc.) - generate a few hundred examples, optionally (ideally) check for correctness via evaluations, and then fine tune a smaller, cheaper model. The new fine tuned model will not perform as well at generalist tasks as before, but it will be much more accurate at your specific domain, which is what most businesses care about.
jcheng
200 examples at most, really?? I have been led to believe that (tens of) thousands is more typical. If you can get excellent results with that few examples, that changes the equation a lot.
energy123
Probably the general performance keeps deteriorating with more examples, so more is not always better
null
tuyguntn
> Here's a suggestion: show me a demo!
Yes, yes and yes again!
Also, please don't use GIFs in your demos! It's freaking me out, because the speed of your GIF playback doesn't match my information absorption speed and I can't pause, look closely, go back, I just need to wait the second loop of your GIF
dist-epoch
I've seen many YouTube videos claiming that fine tuning can significantly reduce costs or make a smaller model perform like a larger one.
Most of them were not from fine-tuning tools or model sellers.
> how it was fine-tuned - all of the training data that was used
It's not that sophisticated. You just need a dataset of prompts and the expected answer. And obviously a way to score the results, so you can guide the fine tuning.
simonw
I've seen those same claims, in videos and articles all over the place.
Which is why it's so weird that I can't find a convincing live demo to see the results for myself!
dist-epoch
Maybe just give it a go on OpenAI?
An example on how to train (a presumably small) model to call a get_current_weather function: https://platform.openai.com/docs/guides/supervised-fine-tuni...
It's not such a sexy subject, it's mostly done by companies to reduce costs, which is maybe why there is not much written about it.
antonvs
> For the last two years I've been desperately keen to see just one good interactive demo
Clearly you’re not actually working in the field, otherwise you could have figured this out yourself in much less than two years.
Why is it that you expect others to fill gaps in your knowledge of something you don’t work on without you exerting any effort?
contrast
I recognise the poster as someone actively working in the field. That’s exactly why it’s interesting that Simon is saying he hasn’t seen the benefits of fine tuning and would like a demo of it working.
Drawing an analogy to the scientific method, he’s not asking for anything more than a published paper he can read.
We don’t expect every scientist and engineer to personally test every theory and method before we grant them permission to ask questions. The world progresses by other people filling in gaps in our knowledge.
antonvs
Which field? It’s hard to believe anyone working with AI models for years hasn’t figured out fine tuning.
There are plenty of published papers on the subject.
One possible reason you may not see many side by side comparisons between tuned and untuned models is because the difference can be so dramatic that there’s no point.
I’m not objecting to asking questions, but rather to how the question was phrased as some sort of shortcoming of the world around him, rather than an apparent lack of any meaningful investigation of the topic on his part.
ldqm
I found Kiln a few months ago while looking for a UI to help build a dataset for fine-tuning a model on Grapheme-to-Phoneme (G2P) conversion. I’ve contributed to the repo since.
In my G2P task, smaller models were splitting phonemes inconsistently, which broke downstream tasks and caused a lot of retries - and higher costs. I fine-tuned Gemini, GPT-4o-mini, and some LLaMA and Qwen models on Fireworks.ai using Kiln, and it actually helped reduce those inconsistencies
mettamage
Naive question, are there good tutorials/places that teach us to implement RAG and fine tune a model? I don't know if it's even feasible. At the moment I create AI workflows for the company I work at to (semi-)automate certain things. But it's not like I could fine-tune Claude. I'd need my own model for that. But would I need a whole GPU cluster? Or could it be done more easily.
And what about RAG? Is it hard to create embeddings?
I'm fairly new with the AI part of it all. I'm just using full-stack dev skills and some well written prompts.
scosman
Lot's of tools for each of those separately (RAG and fine-tuning). We're working on combining them but it's not ready yet.
You don't need a big GPU cluster. Fine-tuning is quite accessible via both APIs and local tools. It can be as simple as making API calls or using a UI. Some suggestions:
- https://getkiln.ai (my tool): let's you try all of the below, and compare/eval the resulting models
- API based tuning for closed models: OpenAI, Google Gemini
- API based tuning for open models: Together.ai, Fireworks.ai
- Local tuning for open models: https://unsloth.ai (can be run on Google Collab instances if you don't have local Nvidia GPUs).
Usually the building the training set and evaluating the resulting model is the hardest part. Another plug: Kiln support synthetic data gen and evals for these parts.
briian
I think fine tuning is one of the things that makes verticalised agents so much better than general ones atm.
If agents aren’t specialised then every time they do anything, they have to figure out what to do and they don’t know what data matters, so often just slap entire web pages into their context. General agents use loads of tokens because of this. Vertical agents often have hard coded steps, know what data matters and already know what APIs they’re going to call. They’re far more efficient so will burn less cash.
This also improves the accuracy and quality.
I don't think this effect is as small as people say, especially when combined with the UX and domain specific workflows that verticalised agents allow for.
triyambakam
I have not yet heard of vertical agents. Any good resources?
simonw
I'm still fuzzy on what people mean when they say "agents".
triyambakam
That's because people mean different things. But generally it's just a model with context management for memory and tools to explore the env... I would say Claude Code is an agent
dedicate
Interesting points! I'm always curious, though – beyond the theoretical benefits, has anyone here actually found a super specific, almost niche use case where fine-tuning blew a general model out of the water in a way that wasn't just about slight accuracy bumps?
scosman
Yup! I'll have to write some of these up. I can probably do open datasets and evals too. If you have use cases you'd like to see let me know! Some quick examples (task specific performance):
- fine-tuning improved performance of Llama 70B from 3.62/5 to (worse than Gemma 2B) to 4.27/5 (better than GPT 4.1), as measured by evals
- Generating valid JSON improved from <1% success rate to >95% after tuning
You can also optimize for cost/speed. I often see a 4x speedup and reducing costs by 90%+, while matching task-specific quality.
jampekka
Don't you get valid JSON success rate of 100% with constrained decoding with any model?
dist-epoch
Fine tuning is also about reducing costs. If you can bake half the prompt in the model through fine tuning, this can halve the running costs.
genatron
As an example Genatron is made possible by fine-tuning in order to create entire applications that are valid. It's similar to the valid json example, where you want to teach specific concepts through examples to ensure the correct syntactic and semantic outputs.
kaushalvivek
Without concrete examples, this reads like an advertisement.
I am personlly very bullish on post-traning and fine-tuning. This artice doesn't do justice to the promise.
null
ramoz
There really isn't a good tool-calling model in open source, and I don't think the problem is fine-tuning.
jayavanth
The best ones so far are fine-tunes. But I agree those numbers aren't great and we haven't figured out tool-calling yet
dist-epoch
Qwen3, Gemma, Mistral are open source and good at tool calling.
simianwords
Related: what is the best way to augment the model with new knowledge other than at runtime using RAG?
simonw
"What is the best way to augment the model with new knowledge other than at runtime using RAG?
I'm afraid the answer is "at runtime using RAG".
Don't fall into the trap of assuming that RAG has to mean janky vector embeddings though. There are many different ways to implement RAG. Good old fashioned FTS search (using tools like Elasticsearch or Solar or even PostgreSQL/MySQL/SQLite FTS) it's a lot less complicated and less expensive to set up and can provide extremely good results.
A lot of of the common RAG techniques were put together a couple of years ago when models were less capable and input limits were still around 8000 tokens.
The models today are much cheaper, far better and mostly have 100,000+ token input limits. This opens up all sorts of new RAG possibilities.
I am very excited at the moment by tool-driven RAG: implement a "search" tool for an LLM to use and prompt it to try several iterations on its search terms before it gives up.
o3 and o4-mini do this in ChatGPT with their web search tool and the results are extremely convincing.
simianwords
I agree that RAG does not have to be embeddings, RAG to me simply means augmenting new knowledge at run time no matter the method.
I would like to convince you that RAG may not be ideal and is simply an approximation of real learned data. RAG is inherently constrained by context length which means any understanding has to happen within chunks of size 100k tokens (as you pointed out). Keep in mind that you still lose high level semantic understanding as you increase the prompt token length to 100k even if needle in the haystack type of problems are solved at this level.
RAG introduces a severe limitation in understanding higher level semantic understanding across chunks. For instance, imagine a global variable shared across many modules causing some race conditions. This is extremely hard for RAG because it has to put many random modules in its context to deeply understand how the race condition happens. (to convince myself I must show that the linux codebase benefits from being indexed by an LLM and can solve hard to debug race conditions)
Another situation where RAG fails is where the you don't even know what to put in your context to get the answer. Imagine a prompt like "tell me two movies released in 2025 that are surprisingly similar in terms of story line". Maybe O3 can solve this particular problem but imagine I start adding more constraints?
simonw
Sure, RAG isn't ideal. I don't know of an alternative. Attempting to constantly train it fine-tune entire new models to update their knowledge doesn't appear to be practical - I've not seen anyone demonstrate that working.
I think long context plus tricks with tools is the best solution we have right now.
ijk
Depends on the definition of "knowledge"; there's a lot of factors that go into it. Some of the common approaches are continued/continual pretraining and model editing (https://arxiv.org/pdf/2502.12598).
* Models are bad at learning that A=B implies B=A, let alone more complicated relations; augmenting the dataset with multiple examples with different phrasing/perspectives is important (https://arxiv.org/abs/2404.00213). The frequency that a relation occurs in the dataset affects the results (https://arxiv.org/html/2504.09597v2).
* You have to be able to balance preserving existing knowledge against the new knowledge (https://arxiv.org/abs/2502.14502). There are techniques like making sure your data mix corresponds to the original training data, but new data is primed by existing data so it gets complicated (https://arxiv.org/abs/2504.09522).
* Curriculum training (a la Phi) can be quite effective for training knowledge into base models at the very least.
* Continued pretraining is much more difficult than most finetuning, though it is possible (https://unsloth.ai/blog/contpretraining).
* Model editing of individual facts is possible but tricky because everything is interconnected but the model isn't great at figuring out reciprocal relationships (https://arxiv.org/abs/2310.16218). There's been some slow progress, though I find that few people are aware that it is even possible, despite the progress that has been made (https://github.com/zjunlp/KnowledgeEditingPapers).
The keywords you want are knowledge injection, domain adaptation, continual pretraining, model editing.
simianwords
This is exactly what I was talking about. I wonder why no one has tried to inject a critical code repository (at least 1 million LOC) and compare to common RAG methods?
The ones you have shown here are nice and simple like world cup statistics. Maybe we are nowhere near solving such complicated scenarios?
scosman
Context window + prompt caching if you can't use RAG. Can add a lot to long context models, and their needle in haystack metrics keep getting better.
Why can't you use RAG?
simianwords
you lose coherence across chunks of context size. i wish i could spend compute to pre-train on some knowledge.
null
storus
I thought that fine-tuning is no longer being done in the industry, instead transformer adapters like LoRA are being used? Having 1000 fine-tune models for each customer seems too heavy when one can have instead 1000 transformer adapters and swap them during the inference for each batch.
I mean there are tricks like Q-GaLore that allow training LLaMA-7B on a single 16GB GPU but LoRA still seems to be better for production to me.
nahnahno
LoRA and QLoRA are still fine tuning I thought? Just updating a subset of parameters. You are still training a base model that was pre-trained (and possibly fine tuned after).
curtisszmania
[dead]
This is a post by a vendor that sells fine-tuning tools.
Here's a suggestion: show me a demo!
For the last two years I've been desperately keen to see just one good interactive demo that lets me see a fine-tuned model clearly performing better (faster, cheaper, more accurate results) than the base model on a task that it has been fine-tuned for - combined with extremely detailed information on how it was fine-tuned - all of the training data that was used.
If you want to stand out among all of the companies selling fine-tuning services yet another "here's tasks that can benefit from fine-tuning" post is not the way to do it. Build a compelling demo!