Improving recommendation systems and search in the age of LLMs
95 comments
March 23, 2025
rorytbyrne
We would need to normalise query length by the success rate to draw any informative conclusions here. The rate of immediate follow-up queries could be a decent proxy for this.
singron
This is a hard problem. We had similar issues evaluating success with real users. In the literature, there is "abandonment" (i.e. I couldn't find what I wanted and gave up) and "positive abandonment" (I got what I wanted from the SERP and didn't click on anything). A flurry of requests might be a series of positive abandonment, a natural fruitful process of refining the request, or rage querying where the user repeatedly fails to correct a model that is incapable of understanding the query. It's especially devious if they rage query for a while before switching to an easier task and succeeding (e.g. clicking a result) since you might count that whole interaction as positive when it was really quite negative.
RamblingCTO
100%. I've switched over to Apple Music because on Spotify you can really feel that they are pushing public playlists. Search results maximize their playlists vs. mine; I now have to go to my library to find my playlists because they won't even show up in search.
Traubenfuchs
> a 9% increase in exploratory intent queries
Either users struggle to find the right stuff, or the stuff is so good they don't need to do more queries.
> a 30% rise in maximum query length per user, and a 10% increase in average query length
Users need to execute more complex queries to find what they are looking for.
barrenko
People are just more and more used to interacting with an LLM / GPT; I think that's the reason for the longer questions + yes, people are not finding what they need.
RicoElectrico
I can understand tracking metrics for performance (as in speed, server load) or revenue. But I don't see how anyone could make such conclusions as they did with a straight face, apart from achieving some OKR for promotion reasons. There's no substitute for user research, focused mindset and good taste.
I can imagine that's why today's apps suck so much as most of the pain points won't be easily caught by user behavior metrics.
One thing Alex from Organic Maps taught me is how important it is to just listen to your users. Many of the UX improvements were driven by addressing complaints from e-mail feedback.
braiamp
Yeah, this should be evaluated in a multivariate/bivariate model: of the successful queries, how did the length change before and after the interventions?
wildrhythms
No you don't understand, more queries = more engagement!
MostlyStable
It's relatively easy to construct a scenario where more search is in fact indicative of better search. To stick with Spotify: let's imagine they have an amazing search tool that consistently finds new, interesting music that the user genuinely likes. I can imagine that in that situation, users are going to search more, because doing so consistently gets them new, enjoyable music.
But the opposite is equally possible: a terrible search tool could regularly fail to find what the user is looking for or produce music that they enjoy. In this situation, I can also imagine users searching more, because it takes more search effort to find something they like.
The key is why users are searching. In Spotify's case, I imagine you could try to connect the number of searches per listen, or how often a search results in a listen and how often those listens result in a positive rating. There are probably more options, but there needs to be some way of connecting the amount of search with how the user feels about those search results.
And yeah, using nothing other than search volume is probably a bad way to go about it
cco
Or more saves and thumbs-ups on songs resulting from a search happen because users are desperate to save a song they like, having no faith that they'll be able to find it again with search.
The only way is to use the product yourself and honestly engage with it. Stats can't answer this question.
genewitch
Contextually unique searches versus contextually similar searches.
Nin, NIN, nine inch nails, Trent Reznor
VS
Nin, pantera, nail bomb, muse
This should be easy to differentiate, with a "[someone's name] distance algorithm" or such, right?
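The algorithm being reached for is presumably Levenshtein edit distance. A pure-stdlib sketch, with the caveat that surface distance can tell "Nin" from "pantera" but cannot see that "nine inch nails" and "Trent Reznor" are the same intent:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# Case-folding collapses "Nin"/"NIN", but contextually similar queries
# like "nine inch nails" vs "Trent Reznor" stay far apart in edit
# distance: that kind of similarity needs embeddings, not string distance.
assert levenshtein("Nin".lower(), "NIN".lower()) == 0
```

So edit distance alone would catch spelling variants of the same query, but differentiating contextually similar from contextually unique searches would still need something semantic.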
jimmyl02
I feel like understanding this difference is what a good product manager should be responsible for: not just optimizing any metric that is available, but understanding the meaning behind the metrics and choosing to push them in the right direction.
whatevertrevor
But isn't that actually the point? That measuring query volume tells you nothing?
novia
I started listening to this article (using a text to speech model) shortly after waking up.
I thought it was very heavy on jargon. Like, it was written in a way that makes the author appear very intelligent without necessarily effectively conveying information to the audience. This is something that I've often seen authors do in academic papers, and my one published research paper (not first author) is no exception.
I'm by no means an expert in the field of ML, so perhaps I am just not the intended audience. I'm curious if other people here felt the same way when reading though.
Hopefully this observation / opinion isn't too negative.
curious_cat_163
To me, it reads like a survey paper intended for (and maybe written by) a researcher about to start a new project. I am not a researcher in this space but I have dabbled elsewhere, so it is somewhat accessible. The degree to which one leverages existing jargon in their writing is a choice, of course.
I am curious -- what would have made it more effective at conveying information to you? Different people learn differently but I wonder how people get beyond the hurdles of jargon.
novia
Yeah I'm not sure if it's just me and my learning style or if researchers purposefully use terminology that's obstructive to understanding to maintain walled gardens. I don't think my reading comprehension level is particularly low!
Usually the best way to learn about things like this for me is to see some actual code or to write things myself, but the lack of coding examples in the text isn't the thing that I find troubling. I don't know, it's just.. like, excessively pointer heavy?
Maybe if you've been in the field long enough, reading a particular term will instantly conjure up an idea of a corresponding algorithm or code block or something and that's what I'm missing.
7d7n
Thank you for the feedback! I'm sorry you found it jargony/less accessible than you'd like.
The intended audience was my team and fellow practitioners; assuming some understanding of the jargon allowed me to skip the basics and write more concisely.
LZ_Khan
I work in the field. The amount of jargon is indeed large but it's not out of the ordinary. It's simply how things are referred to. If the author explained what everything is the content would span a textbook.
That being said I do find the content difficult to understand, and I think reading the actual papers would be much more enlightening. But it's a great survey of all the things people have done.
softwaredoug
A lot of teams can do a lot with search just by putting LLMs in the loop on the query and index side, doing enrichment that used to be a months-long project. Even with smaller, self-hosted models and fairly naive prompts you can turn a search string into a more structured query - and cache the hell out of it. Or classify documents into a taxonomy. All backed by a boring old lexical or vector search engine. In fact I’d say if you’re NOT doing this you’re making a mistake.
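A minimal sketch of that query-side loop, assuming a hypothetical `call_llm` client (stubbed here with a canned response so the example is self-contained) and an invented JSON schema:

```python
import functools
import json

def call_llm(prompt: str) -> str:
    """Stub for a small self-hosted model; swap in your inference client.
    Returns a canned response here so the sketch runs on its own."""
    return json.dumps({"artist": "nine inch nails", "media_type": "track"})

@functools.lru_cache(maxsize=100_000)  # query strings repeat heavily, so cache hard
def structure_query(raw_query: str) -> str:
    """Turn a free-text search string into a structured query via a naive prompt."""
    prompt = (
        "Extract a JSON object with keys 'artist' and 'media_type' "
        f"from this music search: {raw_query!r}"
    )
    return call_llm(prompt)

structured = json.loads(structure_query("nin songs"))
```

The structured fields then feed a boring lexical or vector engine; the `lru_cache` stands in for whatever shared cache you'd put in front of the model.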
syndacks
Can you share more, or at least point me in the right direction?
ntonozzi
One place to explore more would be Doc2Query: https://arxiv.org/abs/1904.08375.
It’s not the latest and hottest but super simple to do with LLMs these days and can improve a lexical search engine quite a lot.
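The index-side mechanics are simple enough to sketch. Here the generated queries are hardcoded stand-ins for what an LLM prompted per document would produce:

```python
def expand_for_indexing(doc_text: str, generated_queries: list) -> str:
    """Doc2Query-style expansion: append predicted user queries to the
    document text so a plain lexical index matches them too."""
    return doc_text + "\n" + "\n".join(generated_queries)

doc = "Shelled out $17 for a huge pack of gobstoppers."
# In practice these would come from prompting an LLM per document.
queries = ["candy purchase", "how much do gobstoppers cost"]
expanded = expand_for_indexing(doc, queries)

# A keyword search for "candy" misses the raw doc but hits the expanded one.
assert "candy" not in doc.lower()
assert "candy" in expanded.lower()
```

The expanded text goes into the lexical index in place of (or alongside) the original field, which is all Doc2Query really does at serving time.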
jamesblonde
It is very interesting that Eugene does this work and publishes it so soon after the conferences. Traditionally this would be a literature survey by a PhD student and would take 12 months to come out in some obscure journal behind a paywall. I wonder if it is an outlier (Eugene is good!) or a sign of things to come?
drodgers
> a sign of things to come
Isn't this, like, a sign of what's been happening for the last 20+ years (arxiv, blogs etc.)?
jamesblonde
To some extent. But it's hard to find quality, and Eugene's stuff is quality. For example, I'm in distributed systems, databases, and MLOps. Murat Demirbas (Uni Buffalo) has been the best in dist systems, Andy Pavlo (CMU) for databases, and Stanford (Matei) has been doing the best summarizing in MLOps.
tullie
The other direction that isn’t explicitly mentioned in this post is the variants of SASRec and Bert4Rec that are still trained on ID-Tokens but showing scaling laws much like LLMs. E.g. Meta’s approach https://arxiv.org/abs/2402.17152 (paper write up here: https://www.shaped.ai/blog/is-this-the-chatgpt-moment-for-re...)
anon8764352
@7d7n Eugene / others experienced in recommendation systems: for someone who is new to recommendation systems and uses variants of collaborative filtering for recommendations, what non-LLM approach would you suggest to start looking into? The cheaper the compute (ideally without using GPUs in the first place) the better, while also maximizing the performance of the system :)
mhuffman
IMHO it depends on the types of things you are recommending. If you have a good way of accurately and specifically textually classifying items it is hard to beat the performance of good old-fashioned embeddings and vector search/ANN. There are plenty of embeddings that do not need GPU like the newer LLM-based ones all crave. Word2Vec, GloVe, and FastText are all high-performance and you wouldn't need GPUs. There are plenty of vector-search libraries that are high-performance and predate the vector-db popularity of late, so also would not depend on GPUs to be high-performance. Most are memory-hungry however, so something to keep in mind. That performance, especially with the embeddings, will come at the cost of loss of some context. No free lunch.
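As a toy illustration of that embeddings-plus-nearest-neighbour setup (the 3-d vectors and item names below are made up; real embeddings would come from Word2Vec/GloVe/FastText and a proper ANN library rather than this brute-force scan):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "embeddings" for catalog items; real ones would be
# hundreds of dimensions, held in memory by an ANN index.
items = {
    "nine inch nails": [0.90, 0.10, 0.00],
    "trent reznor":    [0.85, 0.20, 0.05],
    "polka classics":  [0.00, 0.10, 0.95],
}
query_vec = [0.88, 0.15, 0.02]
best = max(items, key=lambda name: cosine(query_vec, items[name]))
```

None of this needs a GPU; the memory-hungry part in practice is keeping the full item-vector matrix resident for fast lookups.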
thaumiel
ah this explains why my spotify experience has gotten worse over time.
UrineSqueegee
I have the exact opposite experience: recently, when a playlist of mine ends, I love every recommended track that plays afterward so much that I end up putting it in my playlist.
thaumiel
My taste in music is apparently so varied, that if I want to keep the "daily" Spotify list as I want them, I have to limit myself in variation in what I listen to, otherwise they will get too mixed up and I will not enjoy them anymore. So I use other peoples recommendations or music review sites instead to find new music/bands/artists. I tried the spotify AI dj service a couple of times, but it has not been a good experience, when it tries to push in a new direction it has never really gotten it right for me.
appleorchard46
I liked when you could make a playlist radio and do that manually. That's been removed now of course.
Melatonic
On desktop I believe you can still take any of your playlists and tell it to generate a "similar" playlist. Works really well.
a_bonobo
Elicit has a nice new feature where, given a research question, it seems to pass the question to an LLM with a prompt to improve it. It's a neat trick.
As an example, I gave it 'What is the impact of LLMs on search engines?' and it suggested three alternative searches under keywords, the keyword 'Specificity' has the suggested question 'How do large language models (LLMs) impact the accuracy and relevance of search engine results compared to traditional search algorithms?'
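A sketch of the prompt side of that trick; the axis names and rewrite instructions below are invented for illustration, not Elicit's actual prompts:

```python
# Hypothetical rewrite axes, echoing Elicit's "Specificity"-style keywords.
AXES = {
    "Specificity": "Make the question more specific and measurable.",
    "Scope": "Narrow the question to a concrete domain or population.",
    "Comparison": "Reframe the question as a comparison against a baseline.",
}

def rewrite_prompts(question: str) -> dict:
    """One rewrite instruction per axis; send each to an LLM of your choice."""
    return {
        axis: f"{instruction}\nOriginal question: {question}"
        for axis, instruction in AXES.items()
    }

prompts = rewrite_prompts("What is the impact of LLMs on search engines?")
```

Each axis yields one alternative search suggestion, which matches the "three alternative searches under keywords" behaviour described above.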
It's a really cool trick that doesn't take much to implement.
whatever1
Why don't we have an LLM-based search tool for our PCs / smartphones?
Especially for smartphones, all of your data is on the cloud anyway; instead of just scraping it for advertising and the FBI, they could also do something useful for the user.
rudedogg
This is roughly what Apple Intelligence was supposed to deliver but has yet to.
curious_cat_163
> Why don't we have an LLM-based search tool for our PCs / smartphones?
I'll offer my take as an outside observer. If someone has better insights, feel free to share as well.
In market terms, I think it is because Google, Microsoft and Apple are all still trying, with varied success. It has to be them because that's where the big bulk of the users are. They are all also public companies with impatient investors wanting the stock to go up and to the right. So they are cautious both about what they ship to billions of devices (brand protection) and about "opening up" their OS beyond what they have already done (fear of disruption).
In technical terms, it is taking a while because if the tool is going to use LLMs, then they need to solve for 99.999% of the reliability problems (brand protection) that come with that tech. They need to solve for power consumption (either on edge or in the data centers) due to their sheer scale.
So, their choices are to ship fast (which Google has been trying to do more) and iterate in public, or to partner with other product companies by investing in them (which Microsoft has been doing with OpenAI, and Google is doing with Anthropic, etc.).
Apple is taking some middle path but they just fired the person who was heading up the initiative [1] so let's see how that goes.
My two cents.
[1] https://www.reuters.com/technology/artificial-intelligence/a...
visarga
I found that ChatGPT or Claude are really good at music and shopping suggestions. Just chat with them about your tastes for a while, then ask for suggestions. Compared to old recommender systems this method allows much better user guidance.
josephg
Yeah, Claude helped me decide what to get my girlfriend for her birthday a few weeks ago. It suggested some great gift ideas I hadn’t thought of - and my girlfriend loved them.
Workaccount2
I think we can expect this to be rapidly monetized.
KoftaBob
For shopping suggestions, I've had the best experience with Perplexity.
GraemeMeyer
It's coming for PCs soon: https://www.theregister.com/2025/01/20/microsoft_unveils_win...
And to a certain extent for the Microsoft cloud experience as well: https://www.theverge.com/2024/10/8/24265312/microsoft-onedri...
dmbche
It doesn't solve any problem; you can just search your files using your preferred file explorer (Ctrl-F).
I'd assume most people organise their files so that they know where things are as well.
nine_k
> you can just search your files using your preferred file explorer
This only works if you remember specific substrings. An LLM (or some other language model) can summarize and interpolate. It can be asked to find the file that mentions a transaction for buying candy, and it has a fair chance of finding it even if none of the words "transaction", "buying", or "candy" are present in the file, e.g. if it says "shelled out $17 for a huge pack of gobstoppers".
> I'd assume most people organise their files
You'll be shocked, but...
dmbche
But isn't that candy example nonsensical? In what situation do you need some information without any of the context (or without knowing any of the context)?
I really believe that this is not an actual problem in need of solving, but instead a case of creating a tool (personal AI assistant) and then trying to find a use case for it.
Edit0: note to self, rambling - this assumes there exists valuable information that one needs to access in their files, but one doesn't know where it is, when it was made, its name, or other information about it (since with that information you could find the file right away).
Say you need a piece of information from some documentation like the C standard - you need precise information on some process. Is it not much simpler to just open the doc and use the index? Then again, being aware of the C standard in the first place makes the query pointless.
If it's something less well organised, say letters you wrote to your significant other, maybe the assistant could help. But then again, what are you asking? How hard is it to keep your letters in a folder? Or to simply know what you've done (I surely can't imagine forgetting things I've created but somehow finding use in an LLM that finds them for me).
Asking it "what is my opinion on x" or "what's a good compliment I wrote" is nonsensical to me, but asking it about external resources makes the idea of training it on your own data pointless. "How did I write X API" - just open your file, no? You know where it is, you made it.
Saying "get me that picture of uncle Tony in Florida" might save you 10 seconds over going into your files and thinking about when you got that picture, but it's not solving a real issue or making things more efficient. (Edit1: if you don't know Tony, when you got the picture, or what it's a picture of, why are you querying? What's the use case for this information; is it just to prove it can be done? It feels like the user needs to contort themselves into a small niche for this product to be useful.)
Either it's used for non valuable work (menial search) or you already know how to get the answer you need.
I cannot imagine a query that would be useful compared to simply being aware of what's in your computer. And if you're not aware of it, how do you search for it?
ozim
I think the same; people are not organized, even with things that make them money, where being organized could earn them much more.
whatever1
But the file explorer does not read the actual files and build context. Even for pure text files, which search functions can sometimes access, I need to remember the exact string of characters I am looking for.
I was hoping an LLM would have a context of all of my content (text and visual) and for the first time use my computers data as a knowledge base.
Queries like "what was my design file for that X service?" are impossible to answer today unless you have organized your data yourself.
Why do we still have to organize our data manually?
pests
The photos apps do this well now: you can search Apple/Google Photos with questions about the content of images and videos and get useful results.
ozim
I think you are really wrong.
Most people I see at work and outside don't care; they want the stupid machine to deal with it.
That is why smartphones and tablets are moving away from providing "file system" access.
It is super annoying for me, but most people just want to save their tax form or their baby photo without even understanding that each is a different file type - because they couldn't care less about file types, let alone making a folder structure to keep them organized.
acchow
Curiously, the things I search most often are not located in files: calendar, photo content/location, email, ChatGPT history, Spotify library, iMessage/whatsapp history, contacts, notes, Amazon order history
stuaxo
Off topic - but I think joining recommendation systems and forums (aka all the social media that isn't bsky or fedi) has been a complete disaster for society.
anonymousDan
It's interesting that none of these papers seem to be coming out of academic labs....
pizza
Checking if a recommendation system is actually good in practice is kind of tough to do without owning a whole internet media platform as well. At best, you'll get the table scraps from these corporations (in the form of toy datasets/models made available), and you'll still struggle to make your dev loop productive enough without throwing similar amounts of compute at it as the ~FAANGs do, just to validate whether that 0.2% improvement you got really meant anything. Oh, and also, the nature of recommendations is that they get very stale very quickly, so be prepared to check that your method still works when you do yet another huge training run on a weekly/daily cadence.
bradly
> you'll still struggle to make your dev loop productive enough without throwing similar amounts of compute at it as the ~FAANGs do, just to validate whether that 0.2% improvement you got really meant anything
And do not forget the incredible number of actual humans FAANG pays every day to evaluate any changes in result sets for the top x,000 queries.
lmeyerov
As someone whose customers do this stuff, I'm 100% for most academics chasing harder and more important problems.
Most of these papers are specialized increments on high baselines for a primarily commercial problem. Likewise, they focus on optimizing phenomena that occur in their product, which may not occur in others. E.g., Netflix's sliding window is neat to see the result of, but I'd rather students use their freedom to explore bigger ideas like Mamba, and leave sliding windows to a master's student experimenting with intentionally narrowly scoped tweaks. At that point, the top PhD grads at industrial labs will probably win.
That said, recsys is a general formulation with applications beyond shopping carts and social feeds, and bigger ideas do come out, which I'd expect competitive labs to do projects on. GNNs for recsys were a big bet a couple of years ago, and LLMs are now, and it is curious to me that those bigger shifts are industrial-lab papers, as you say. Maybe the takeaway is that recsys is one of the areas where industry hires a lot of PhDs, since it is so core to revenue lift: academia has regular representation, while industry is overrepresented.
memhole
It looks like a great overview of recommendation systems. I think my main takeaways are:
1. Latency is a major issue.
2. Fine-tuning can lead to major improvements and, if I didn't misread, reduce latency.
3. There's some threshold, or class of problem, that determines whether prompting or fine-tuning should be used.
> Spotify saw a 9% increase in exploratory intent queries, a 30% rise in maximum query length per user, and a 10% increase in average query length—this suggests the query recommendation updates helped users express more complex intents
To me it's not clear that it should be interpreted as an improvement: what I read in this summary is that users had to search more and to enter longer queries to get to what they needed.