Data preparation for function tooling is boring
5 comments · May 13, 2025

simonw
> Let's look at the data: 72% of enterprises are now fine-tuning models rather than just using RAG (22%) or building custom models from scratch (6%). This isn't a trend, it's because fine-tuning works when other approaches fail.

Where did that data come from? My mental model is still that most companies find fine-tuning an LLM isn't worth the effort compared to prompting with better-chosen examples or setting up effective RAG. Am I out of date?

On reading further: it looks like this series of posts is specifically about building voice assistants that run on a mobile phone, which need TINY models. From what I understand, getting tiny models to perform interesting custom tasks is a challenge that fine-tuning is well suited for.

simonw
I think I found the source: A16Z in March 2024: https://a16z.com/generative-ai-enterprise-2024/

They surveyed Fortune 500 types for it. The numbers above were from a survey of 70 "AI decision makers", and the question concerned "How are enterprises customizing their models?"
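To make the dataset discussion above concrete, here is a minimal sketch of what a single function-calling training example can look like. Everything in it is hypothetical: the set_alarm tool, its fields, and the chat-message layout are illustrative, not the schema this particular series uses. The point is the general shape: a user utterance paired with the structured tool call the model should learn to emit.

```python
import json

# Hypothetical sketch of one fine-tuning example for function calling.
# The "set_alarm" tool and its arguments are invented for illustration.
example = {
    "messages": [
        {"role": "user", "content": "Wake me up at 6:30 tomorrow"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "type": "function",
                    "function": {
                        "name": "set_alarm",
                        # Arguments serialized as a JSON string, a common
                        # convention in chat-completions-style datasets.
                        "arguments": json.dumps({"time": "06:30", "repeat": "once"}),
                    },
                }
            ],
        },
    ]
}

# Datasets like this are usually stored as JSONL: one example per line,
# with thousands of such utterance/tool-call pairs.
print(json.dumps(example))
```

The boring part the post's title refers to is producing and validating thousands of these pairs so the calls are consistently well-formed.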
3abiton
I am curious why function calling and not an MCP server. Don't they provide the same functionality?
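One way to think about the distinction: function calling is the model-side capability (emitting a structured call against a tool schema), while MCP is a protocol for exposing such tools to the model from a separate server, so the model still has to be trained or prompted to produce well-formed calls either way. A rough sketch of the kind of tool schema involved, with hypothetical names:

```python
# A function-calling tool schema in the widely used JSON-schema style.
# The tool name and fields here are hypothetical. An MCP server would
# advertise an equivalent description over the protocol rather than the
# schema being passed inline with each request, but the model-facing
# problem (emit a well-formed call) is the same in both cases.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```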
andreeamiclaus
After building your own Siri, you now get to learn the boring dataset part that you cannot skip!
[deleted]