Introducing Kagi Assistants
39 comments
November 20, 2025 · jryio
clearleaf
Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)
Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.
viraptor
> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
It's likely they can filter the results for their own agents, but will leave other results as they are. Half the issue with normal results is their ads - that's not going away.
sroussey
There are several startups providing web search solely for AI agents. I'm not sure any agent uses Google for this.
MangoToupe
> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
They spent the last decade and a half encouraging the proliferation of garbage via "SEO". I don't see this reversing.
bitpush
> Primarily because Google as it is today includes a massive amount of noise and has suffered from blowback/cross-contamination as more LLM-generated content pollutes the information ecosystem.
I'm not convinced about this. If the strategy is "let's return wikipedia.org as the most relevant result", that's not sophisticated at all. In fact, it only worked for a very narrow subset of queries. If I search for 'top luggage for solo travel', I don't want to see Wikipedia, and I don't know how Kagi will be any better.
viraptor
They wrote "returned the relevant Wikipedia page higher", not "wikipedia.org as the most relevant result" - that's an important distinction. There are many irrelevant Wikipedia pages.
VHRanger
(Kagi staff here)
Generally we do particularly better on product research queries [1] than other categories, because most poor review sites are full of trackers and other stuff we downrank.
However, there aren't public benchmarks for us to brag about on product search, and frankly the SimpleQA digression in this post made it long enough that it was almost cut.
1. (Except hyper-local search, like local restaurants)
ranyume
I used quick research and it was pretty cool. A couple of caveats to keep in mind:
1. It answers using only the crawled sites; you can't make it crawl a new page.
2. It doesn't use a page's search function automatically.
This is expected, but it doesn't hurt to keep it in mind. I think it'd be pretty useful: you could ask for recent papers on a site, the engine would use Hacker News' search function, and then Kagi would crawl the resulting pages.
natemcintosh
As a Kagi subscriber, I find this to be mostly useful. I'd say I do about 50% standard Kagi searches, 50% Kagi assistant searches/conversations. This new ability to change the level of "research" performed can be genuinely useful in certain contexts. That said, I probably expect to use this new "research assistant" once or twice a month.
VHRanger
I'd say the most useful part for me is appending ? / !quick / !research directly from the browser search bar to a query
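For example, something like (illustrative queries, not from the post):

    when did voyager 2 pass neptune ?
    best budget mechanical keyboards !quick
    compare postgres replication options !research

Each suffix kicks that query straight into the corresponding mode, without opening the Assistant page first.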
itomato
I'm seeing a lot of investment in these things that have a short shelf life.
Agents/assistants but nothing more.
VHRanger
We're building tools that we find useful, and we hope others find them useful too. See the notes on our view of LLMs and their flaws:
ugurs
Why do you think the shelf life is short?
ceroxylon
Kagi reminds me of the original search engines of yore, when I could type what I want and it would appear, and I could go on with my work/life.
As for the people who claim this will create/introduce slop, Kagi is one of the few platforms actively fighting against low-quality AI-generated content, with their community-fueled "SlopStop" campaign.[0]
Not sponsored, just a fan. Looking forward to trying this out.
iLoveOncall
The fact that people applaud Kagi for taking the money they paid for search and investing it in bullshit AI products, while spitting on Google's AI search at the same time, tells you everything you need to know about HackerNews.
VHRanger
We're explicitly conscious of the bullshit problem in AI and we try to focus on only building tools we find useful. See position statement on the matter yesterday:
grayhatter
> LLMs are bullshitters. But that doesn't mean they're not useful
> Note: This is a personal essay by Matt Ranger, Kagi’s head of ML
I appreciate the disclaimer, but never underestimate someone's inability to understand something, when their job depends on them not understanding it.
Bullshit isn't useful to me; I don't appreciate being lied to. You might find use in declaring the two different, but sufficiently advanced ignorance (or incompetence) is indistinguishable from actual malice, and thus they should be treated the same.
Your essay, while well written, doesn't do much to convince me any modern LLM has a net positive effect. If I have to duplicate all of its research to verify none of it is bullshit, which will only be harder after using it given the anchoring and confirmation bias it will introduce... why?
iLoveOncall
Your words don't match your actions.
And to be clear, you shouldn't build the tools that YOU find useful, you should build the tools that your users, who pay for a specific product, find useful.
You could have LLMs that are actually 100% accurate in their answers and it would not matter at all to what I am raising here. People are NOT paying Kagi for bullshit AI tools, they're paying for search. If you think otherwise, prove it: make subscriptions entirely separate for both products.
freediver
Kagi founder here. We are moving to a future where these subscriptions will be separate. Even today more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate, mostly in the sense that we are not in the business of creating bullshit tools. Life is too short for that. I also happen to like the Star Trek version of the future, where smart computers we can talk to exist. I also like that Star Trek is still 90% human drama, and 10% technology working in the background - and this is the kind of future I would like to build towards and leave for my children. Having the most accurate search in the world is a big part of it, and that is not going anywhere.
bananapub
regular reminder: kagi is - above all else - a really really good search engine, and if google/etc, or even just the increasingly horrific ads-ocracy make you sad, you should definitely give it a go - the trial is here: https://kagi.com/pricing
if you like it, it's only $10/month, which I regrettably spend on coffee some days.
skydhash
I know the price hasn't changed for a while, but I would pay for unlimited search and no AI.
iLoveOncall
> above all else
What they've been building for the past couple of years makes it blindingly clear that they are definitely not a search engine *above all else*.
HotGarbage
I really wish Kagi would focus on search and not waste time and money on slop.
drewda
What they're saying in this post is that they are designing these LLM-based features to support search.
The post describes how their use case is finding high-quality sources relevant to a query and providing summaries with references/links to the user (not generating long-form "research reports").
FWIW, this aligns with what I've found ChatGPT useful for: a better Google, rather than a robotic writer.
theoldgreybeard
I'm sure Google also says they built "AI mode" to "support search".
Their search is still trash.
esafak
Except the AI mode filters out the bad results for you :)
barrell
If you look at my post history, I’m the last person to defend LLMs. That being said, I think LLMs are the next evolution in search. Not what OpenAI and Anthropic and xAI are working on - I think all the major models are moving further and further away from that with the “AI” stuff. But the core technology is an amazing way to search.
So I actually find it the perfect thing for Kagi to work with. If they can leverage LLMs to improve search, without getting distracted by the "AI" stuff, there's tons of potential value.
Not saying that’s what this is… but if there’s any company I’d want playing with LLMs it’s probably Kagi
skydhash
A better search would be rich metadata and powerful filter tools, not a result summarizer. When I search, I want to find stuff; I don't want an interpretation of what was found.
0x1ch
This is building on top of the existing core product, so the output is directly tied to the quality of the core search results being fed into the assistants. Overall I really enjoy all of their AI products, using their prompt assistant frequently for quick research tasks.
It does miss occasionally, or I feel like "that was a waste of tokens" after a bad response, but overall I like supporting Kagi's current mission in the market of AI tools.
bigstrat2003
Same, though in fairness as long as they don't force it on me (the way Google does) and as long as the real search results don't suffer because of a lack of love (which so far they haven't), then it's no skin off my back. I think LLMs are an abysmal tool for finding information, but as long as the actual search feature is working well then I don't care if an LLM option exists.
VHRanger
It's not -- this was posted literally yesterday as a position statement on the matter (see early paragraphs in OP):
Kagi is treating LLMs as potentially useful tools, to be used with their deficiencies in mind and with respect for user choices.
Also, we're explicitly fighting against slop:
AuthAuth
Kagi is already expensive for a search engine. Now I know part of my subscription is going towards funding AI bullshit, and I know the cost of that AI bullshit will get jacked up and force the Kagi subscription price up as well. I'm so tired of AI being forced into everything.
progval
These are only available on the Ultimate tier. If (like me) you don't care about the LLMs then there is no reason to be on the Ultimate tier so you don't pay for it.
daft_pink
Not for nothing, but I wish there was an anonymized AI built into Kagi that was able to have a normal conversation about sexual topics or search for pornographic topics, like a safe-search-off function.
I understand the safety needs around things like LLMs not helping build nuclear weapons, but it would be nice to have a frontier model that could write or find porn.
VHRanger
You'll want de-censored models like Cydonia for that -- they can be found on OpenRouter, or through something like Msty.
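If you go the OpenRouter route, a minimal sketch of calling such a model through its OpenAI-compatible endpoint looks something like this (the model slug below is a placeholder -- check openrouter.ai for the exact ID, and OPENROUTER_API_KEY is assumed to be set in your environment):

    # Minimal sketch: query an OpenRouter-hosted model via the OpenAI-compatible API.
    # Requires `pip install openai` and an OpenRouter API key.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    response = client.chat.completions.create(
        model="thedrummer/cydonia-24b",  # placeholder slug; look up the exact model ID on openrouter.ai
        messages=[{"role": "user", "content": "Write a short scene."}],
    )
    print(response.choices[0].message.content)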
I think there's a very important nugget here unrelated to agents: Kagi as a search engine is a higher-signal source of information than Google's PageRank- and AdSense-funded model. Primarily because Google as it is today includes a massive amount of noise and has suffered from blowback/cross-contamination as more LLM-generated content pollutes the information ecosystem.
> We found many, many examples of benchmark tasks where the same model using Kagi Search as a backend outperformed other search engines, simply because Kagi Search either returned the relevant Wikipedia page higher, or because the other results were not polluting the model’s context window with more irrelevant data.
> This benchmark unwittingly showed us that Kagi Search is a better backend for LLM-based search than Google/Bing because we filter out the noise that confuses other models.