Web search on the Anthropic API
49 comments
· May 7, 2025 · cmogni1
peterldowns
> For example, writing an article about the side effects of Accutane should err on the side of pulling in research articles first for higher quality information and not blog posts.
Interesting, I'm taking isotretinoin right now and I've found it's more interesting and useful to me to read "real" experiences (from reddit and blogs) than research papers.
TuringTourist
Can you elaborate? What information are you gleaning from anecdotes that is both reliable and efficacious enough to outweigh research?
I'm not trying to challenge your point, I am genuinely curious.
peterldowns
I just want to hear about how other people have felt while taking the medicine. I don't care about aggregate statistics very much. Honestly what research do you read and for what purpose? All social science is basically junk and most medical research is about people whose bodies and lifestyles are very different than mine.
TechDebtDevin
Wear lots of (mineral) sunscreen, and drink lots and lots of water. La Roche Posey lotions are what I used, and continue to use with tretinoin. Sunscreen is the most important.
peterldowns
Great advice, already quite on top of it. I'd recommend checking out stylevana and importing some of the japanese/korean sunscreens if you haven't tried them out yet!
simple10
That's been my experience as well. Web search built into the API is great for convenience, but it would be ideal to be able to provide detailed search and reranking params.
Would be interesting to see comparisons for custom web search RAG vs API. I'm assuming that many of the search "params" of the API could be controlled via prompting?
jarbus
Is search really that costly to run? $10/1000 searches seems really pricey. I'm wondering if these costs will come down in a few years.
jsnell
Yes.
The Bing Search API is priced at $15/1k queries in the cheapest tier, Brave API is $9 at the non-toy tier, Google's pricing for a general search API is unknown but their Search grounding in Gemini costs $35/1k queries.
Search API prices have been going up, not down, over time. The opposite of LLMs, which have gotten 1000x cheaper over the last two years.
ColinHayhurst
Excuse the self-promotion but Mojeek is £3/1,000: https://www.mojeek.com/services/search/web-search-api/
jwr
> Google's pricing for a general search API
As I discovered recently, and much to my surprise, Google does not offer a "general search API", at least not officially.
There is a "custom search" API that sounds like web search, but isn't: it offers a subset of the index, which is not immediately apparent. Confusing and misleading labeling there.
Bing offers something a bit better, but I recently ended up trying the Kagi API, and it is the best thing I found so far. Expensive ($25/1000), but works well.
jsnell
There are multiple search engines known to be based on Google's API (Startpage, Leta, Kagi), so that product definitely exists. But that's about all we know. They don't publish anything about it: not the price, not the terms, not even the name.
formercoder
I work at Google but not on this. We do offer Gemini with Google Search grounding which is similar to a search API.
OxfordOutlander
Openai search mode is $30-50 per 1000 depending on low-high context
Gemini is $30/1000
So Anthropic is actually the cheapest.
For context, exa is $5 / 1000.
AznHisoka
If you want an unofficial API, most data providers charge around $4/1,000 queries. By unofficial, I mean they just scrape what's in Google and return that to you. So that's the benchmark I use, which means the cost here is around 2x that.
As far as I know, the pricing really hasn't gone down over the years. If anything it has gone up, because Google is increasingly making it harder for these providers.
Manouchehri
That seems expensive.
For 100 results per query, serper.dev is $2/1000 queries and Bright Data is $1.5/1000 queries.
jbellis
I'm not sure that's correct -- the first party APIs are priced per query but BD is per 1k results. Not immediately obvious what they count as a "result" tho.
AznHisoka
Sorry, got this off by a multiple. Yes, pricing is around that. So these “official” APIs are much more expensive.
tuyguntn
They will come down. Until recently, consumers weren't paying directly for searches; once LLMs (with their knowledge cutoffs and hallucinations) arrived, paid search APIs got popular.
Popularity will grow even more, hence competition will increase and prices will change eventually.
AznHisoka
I don't think that will be true. What competition? Google, Bing, and... Kagi? (And only one of those has a far superior index/algo to the others.)
benjamoon
Good that it has an "allowed domain" list, which makes it really usable. The OpenAI Responses API web search doesn't currently let you limit domains, so I can't make good use of it for client stuff.
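For reference, the domain restriction described above looks roughly like this in a request body. This is a sketch only: the tool type string, `allowed_domains`, and `max_uses` fields are taken from the Anthropic docs linked elsewhere in the thread and may have changed since.

```python
# Sketch of an Anthropic Messages API request body that restricts the
# built-in web search tool to an allow-list of domains. No network call
# is made here; this just builds the payload a client SDK would send.
web_search_tool = {
    "type": "web_search_20250305",   # tool version string per the docs
    "name": "web_search",
    "max_uses": 3,                   # cap the number of billed searches
    "allowed_domains": ["example.com", "docs.example.org"],
}

request_body = {
    "model": "claude-3-7-sonnet-latest",
    "max_tokens": 1024,
    "tools": [web_search_tool],
    "messages": [
        {"role": "user", "content": "Summarize recent updates from these sites."}
    ],
}

print(request_body["tools"][0]["allowed_domains"])
```

Per the docs there is also a `blocked_domains` field; the two are mutually exclusive, so you pick either an allow-list or a block-list.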
simianwords
Can anyone answer this question: are they using a custom home-made web index, or are they using the Bing/Google API?
Also, I'm quite sure they don't use vector embeddings for web search; it's purely in text space. I think the same holds for all LLM web search tools. They all seem to work well -- maybe we don't need embeddings for RAG and grepping works well enough?
minimaxir
The web search functionality is also available in the backend Workbench (click the wrench Tools icon) https://console.anthropic.com/workbench/
The API request notably includes the exact text it cites from its sources (https://docs.anthropic.com/en/docs/build-with-claude/tool-us...), which is nifty.
Cost-wise it's interesting. $10/1000 queries is much cheaper for heavy use than Google's Gemini (1500 free per day then $35/1000) when you'd expect Google to be the cheaper option. https://ai.google.dev/gemini-api/docs/grounding
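As a back-of-envelope comparison of the two fee schedules mentioned above (a sketch: it assumes Gemini's 1,500 free queries per day and the listed per-1k prices, and ignores token costs entirely):

```python
# Rough monthly search-fee comparison at heavy usage, ignoring token costs.
def monthly_search_cost(queries_per_day, price_per_1k, free_per_day=0, days=30):
    billable = max(queries_per_day - free_per_day, 0)
    return billable * days * price_per_1k / 1000

# At 10,000 queries/day:
anthropic = monthly_search_cost(10_000, 10)                   # $10/1k, no free tier
gemini = monthly_search_cost(10_000, 35, free_per_day=1500)   # $35/1k after 1,500 free/day
print(anthropic, gemini)  # 3000.0 8925.0
```

At light usage the free tier flips the comparison: under 1,500 queries/day, Gemini's grounding costs nothing while Anthropic still bills every search.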
istjohn
Well also Google has put onerous conditions on their service:
- If you show users text generated by Gemini using Google Search (grounded Gemini), you must display a provided widget with suggested search terms that links directly to Google Search results on google.com.
- You may not modify the text generated by grounded Gemini before displaying it to your users.
- You may not store grounded responses more than 30 days, except for user histories, which can retain responses for up to 6 months.
https://ai.google.dev/gemini-api/terms#grounding-with-google...
https://ai.google.dev/gemini-api/docs/grounding/search-sugge...
miohtama
Google obviously does not want to cannibalise their golden goose. However it's inevitable that Google search will start to suffer because people need it less and less with LLMs.
handfuloflight
So the price is just the $0.01 per query? Are they not charging for the tokens loaded into context from the various sources?
minimaxir
The query cost is in addition to tokens used. It is unclear if the tokens ingested from the search query count as additional input tokens.
> Web search is available on the Anthropic API for $10 per 1,000 searches, plus standard token costs for search-generated content.
> Each web search counts as one use, regardless of the number of results returned. If an error occurs during web search, the web search will not be billed.
stephpang
Hi, stephanie from Anthropic here. Thanks for the feedback! We've updated the docs to hopefully make it a little more clear but yes search results do count towards input tokens
https://docs.anthropic.com/en/docs/build-with-claude/tool-us...
potlee
If you use your own search tool, you have to pay for input tokens again every time the model decides to search. It would be a big discount if they're only charging once for everything as output tokens, but that seems unclear from the blog post.
stephpang
Thanks for the feedback, just updated our docs to hopefully make this a little clearer. Search results count towards input tokens on every subsequent iteration
https://docs.anthropic.com/en/docs/build-with-claude/tool-us...
potlee
Thanks for addressing it. Still sounds like a significant discount if only the search results, and not all messages, count as input tokens on subsequent iterations!
simonw
I couldn't see anything in the documentation about whether or not it's allowed to permanently store the results coming back from search.
Presumably this is using Brave under the hood, same as Claude's search feature via the Anthropic apps?
minimaxir
Given the context/use of encrypted_index and encrypted_context, I suspect search results are temporarily cached.
simonw
Right, but are there any restrictions on what I can do with them?
Google Gemini has some: https://ai.google.dev/gemini-api/docs/grounding/search-sugge...
OpenAI has some rules too: https://platform.openai.com/docs/guides/tools-web-search#out...
> "When displaying web results or information contained in web results to end users, inline citations must be made clearly visible and clickable in your user interface."
I'm used to search APIs coming with BIG sets of rules on how you can use the results. I'd be surprised but happy if Anthropic didn't have any.
The Brave Search API is a great example of this: https://brave.com/search/api/
They have a special, much more expensive tier called "Data w/ storage rights" which is $45 CPM, compared to $5 CPM for the tier that doesn't include those storage rights.
istjohn
Google's restrictions are outlandish: "[You] will not modify, or intersperse any other content with, the Grounded Results or Search Suggestions..."
omneity
Related: For those who want to build their own AI search for free and connect it to any model they want, I created a browser MCP that interfaces with major public search engines [0], a SERP MCP if you want, with support for multiple pages of results.
The rate limits of the upstream engines are fine for personal use, and the benefit is it uses the same browser you do, so results are customized to your search habits out-of-the-box (or you could use a blank browser profile).
lemming
I'm also interested to know if there are other limitations with this. Gemini, for example, has a built-in web search tool, but it can't be used in combination with other tools, which is a little annoying. o3/o4-mini can't use the search tool at all over the API, which is even more annoying.
aaronscott
It would be nice if the search provider could be configured. I would like to use this with Kagi.
lemming
I would really love this too. However I think that the only solution for that is to give it a Kagi search tool, in combination with a web scraping tool, and a loop while it figures out whether it's got the information it needs to answer the question.
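The loop described above can be sketched as follows. Everything here is a stand-in: `kagi_search` and `fetch_page` are hypothetical stubs for the Kagi Search API and a scraper, and the "enough information?" check is a placeholder for an actual LLM tool-use call.

```python
# Minimal sketch of a search-and-scrape agent loop: search, fetch pages,
# and repeat until the model decides it has enough context to answer.

def kagi_search(query):
    # Hypothetical stand-in for a Kagi Search API call.
    return [{"title": "stub result", "url": "https://example.com"}]

def fetch_page(url):
    # Hypothetical stand-in for a web scraping tool / HTTP fetch.
    return "page text for " + url

def answer_with_search(question, max_hops=3):
    context = []
    for _ in range(max_hops):
        results = kagi_search(question)
        context.extend(fetch_page(r["url"]) for r in results)
        # A real agent would ask the LLM whether the gathered context
        # answers the question; this trivial check stops after one hop.
        if context:
            break
    return {"question": question, "sources": len(context)}

print(answer_with_search("what changed in the Anthropic API?"))
```

The `max_hops` cap matters in practice: without it, a model that keeps deciding it needs "one more search" can burn through both search fees and input tokens.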
metalrain
It's a good reminder that AI chats won't make web searches obsolete; they'll just embed them deeper in the stack.
Maybe Google's search revenue moves from ads towards B2B deals for search API use.
zhyder
Now that all big 3 LLM providers offer web search grounding in their APIs, how do they compare in ranking quality of the retrieved results? Anyone run benchmarks here?
Clearly web search ranking is hard after decades of content spam that's been SEO optimized (and we get to look forward to increasing AI spam dominating the web in the future). The best LLM provider in the future could be the one with just the best web search ranking, just like what allowed Google to initially win in search.
RainbowcityKun
Right now, most LLMs with web search grounding are still in Stage 1: they can retrieve content, but their ability to assess quality, trustworthiness, and semantic ranking is still very limited.
The LLMs can access the web, but they can't yet understand it in a structured, evaluative way.
What’s missing is a layer of engineered relevance modeling, capable of filtering not just based on keywords or citations, but on deeper truth alignment and human utility.
And yes, as you mentioned, we may even see the rise of LLM-targeted SEO—content optimized not for human readers, but to game LLM attention and summarization heuristics. That's a whole new arms race.
The next leap won’t be about just accessing more data, but about curating and interpreting it meaningfully.
simianwords
>Right now, most LLMs with web search grounding are still in Stage 1: they can retrieve content, but their ability to assess quality, trustworthiness, and semantic ranking is still very limited.
Why do you think it is limited? Imagine you show an LLM a link with details and ask whether it's trustworthy or high quality w.r.t. the query; why can't it answer that?
RainbowcityKun
What I mean is that more powerful engineering capabilities are needed to process search results before handing them to the LLM.
cmogni1
I think the most interesting thing to me is they have multi-hop search & query refinement built in based on prior context/searches. I'm curious how well this works.
I've built a lot of LLM applications with web browsing in it. Allow/block lists are easy to implement with most web search APIs, but multi-hop gets really hairy (and expensive) to do well because it usually requires context from the URLs themselves.
The thing I'm still not seeing here that makes LLM web browsing particularly difficult is the mismatch between search result relevance vs LLM relevance. Getting a diverse list of links is great when searching Google because there is less context per query, but what I really need from an out-of-the-box LLM web browsing API is reranking based on the richer context provided by a message thread/prompt.
For example, writing an article about the side effects of Accutane should err on the side of pulling in research articles first for higher quality information and not blog posts.
It's possible to do this reranking decently well with LLMs (I do it in my "agents" that I've written), but I haven't seen this highlighted from anyone thus far, including in this announcement.
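A toy version of the context-aware reranking idea from the comment above: prefer certain source types depending on the task. This is a sketch; a real implementation would ask an LLM to score each result against the full message thread, and the domain table and keyword heuristic here are invented stand-ins for that call.

```python
# Toy context-aware reranker: given a task description, boost results
# from preferred source domains. The PREFERRED table is a hypothetical
# stand-in for an LLM judging "which sources fit this task best?".

PREFERRED = {
    "medical article": ["pubmed.ncbi.nlm.nih.gov", "nejm.org"],
}

def rerank(results, task):
    preferred = PREFERRED.get(task, [])
    def score(r):
        domain = r["url"].split("/")[2]
        return 0 if domain in preferred else 1  # preferred domains sort first
    return sorted(results, key=score)           # stable sort keeps original order within ties

results = [
    {"url": "https://someblog.com/accutane-story"},
    {"url": "https://pubmed.ncbi.nlm.nih.gov/12345/"},
]
print(rerank(results, "medical article")[0]["url"])
# the research source ranks ahead of the blog post
```

Because Python's sort is stable, the search engine's original ordering is preserved within each tier, so this only reorders across source types rather than discarding the upstream ranking.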