Will Amazon S3 Vectors Kill Vector Databases–Or Save Them?
41 comments
September 8, 2025 · simonw
alanwli
The alternative is to find solutions that can reasonably support different requirements, because business needs change all the time, especially in the current state of our industry. From what I’ve seen, OSS Postgres/pgvector can adequately support a wide variety of requirements for millions to low tens of millions of vectors: low latencies, hybrid search, filtered search, the ability to serve out of memory and disk, and strong-consistency/transactional semantics alongside operational data. For further scaling and performance (1B+ vectors and even lower latencies), consider a SOTA Postgres system like AlloyDB with AlloyDB ScaNN.
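To make the "filtered search next to operational data" point concrete, here is a minimal pure-Python sketch of the query pattern being described. The data, tenant filter, and function names are all hypothetical; in actual pgvector this would be a single SQL statement using the `<=>` cosine-distance operator with an HNSW or IVFFlat index, not a Python loop.

```python
import math

# Toy corpus: each row pairs a vector with operational attributes,
# mimicking the "vectors next to your relational data" setup pgvector gives you.
rows = [
    {"id": 1, "tenant": "a", "vec": [1.0, 0.0, 0.0]},
    {"id": 2, "tenant": "b", "vec": [0.9, 0.1, 0.0]},
    {"id": 3, "tenant": "a", "vec": [0.0, 1.0, 0.0]},
]

def cosine_distance(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / norm

def filtered_search(query, tenant, k):
    # Roughly: WHERE tenant = %s ORDER BY embedding <=> %s LIMIT k
    candidates = [r for r in rows if r["tenant"] == tenant]
    candidates.sort(key=lambda r: cosine_distance(r["vec"], query))
    return [r["id"] for r in candidates[:k]]

print(filtered_search([1.0, 0.0, 0.0], "a", 2))  # → [1, 3]
```

The point of keeping this in one system is that the filter and the distance ordering run in the same query planner, against the same transactionally consistent rows.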
Full disclosure: I founded ScaNN in GCP databases and am the lead for AlloyDB Semantic Search. And all these opinions are my own.
speedysurfer
And what if they change their internal implementation and your code depends on the old architecture? It's good practice to clearly think about what to expose to users of your service.
altcognito
Knowing how the service will handle certain workloads is an important aspect of choosing an architecture.
redskyluan
Author of this article.
Yes, I’m the founder and maintainer of the Milvus project, and also a big fan of many AWS projects, including S3, Lambda, and Aurora. Personally, I don’t consider S3Vector to be among the best products in the S3 ecosystem, though I was impressed by its excellent latency control. It’s not particularly fast, nor is it feature-rich, but it seems to embody S3’s design philosophy: being “good enough” for certain scenarios.
In contrast, the products I’ve built usually push for extreme scalability and high performance. Beyond Milvus, I’ve also been deeply involved in the development of HBase and Oracle products. I hope more people will dive into the underlying implementation of S3Vector—this kind of discussion could greatly benefit both the search and storage communities and accelerate their growth.
redskyluan
By the way, if you’re not fully satisfied with S3Vector’s write, query, or recall performance, I’d encourage you to take a look at what we’ve built with Zilliz Cloud. It may not always be the lowest-cost option, but it will definitely meet your expectations when it comes to latency and recall.
qaq
"I recently spoke with the CTO of a popular AI note-taking app who told me something surprising: they spend twice as much on vector search as they do on OpenAI API calls. Think about that for a second. Running the retrieval layer costs them more than paying for the LLM itself. That flips the usual assumption on its head." Hmm well start sending full documents as part of context see it flip back :).
heywoods
Egress costs? I’m really surprised by this. Thanks for sharing.
qaq
Sorry, maybe I should've been more clear: it was a sarcastic remark. The whole point of doing vector DB search is to feed the LLM very targeted context so you can save money on API calls to the LLM.
infecto
That’s not the whole point. It’s the intersection of reducing the tokens sent and getting search both specific and generic enough to capture the correct context data.
rubenvanwyk
I don’t think it’s either-or; this will probably become the default / go-to, if you aren’t already storing your vectors in your DB with something like Neon or Turso.
As far as I understand, Milvus is appropriate for very large scale, so will probably continue targeting enterprise.
cpursley
Postgres has pgvector. Postgres is where all of my data already lives. It’s all open source and runs anywhere. What am I missing with the specialty vector stores?
CuriouslyC
Latency, actual retrieval performance, integrated pipelines that do more than just vector search to produce better results; the list goes on.
Postgres for vector search is fine for toy products or for stuff that's outside the hot loop of your business, but for high-performance applications it's just inadequate.
cpursley
For the vast majority of applications, keeping everything in Postgres is a trade-off worth making versus the operational overhead of some VC-hyped data store that won’t be around in 5 years. Most people learned this lesson with Mongo (Postgres jsonb is now good enough for 90% of scenarios).
cpursley
Also, there's no way end-to-end retrieval performance is going to match pgvector, because you still have to join the external vector store's results with your domain data in the main database at the application level, which is always going to be less performant.
scosman
Anyone interested in this space should look at https://turbopuffer.com - I think they were first to market with S3 backed vector storage, and a good memory cache in front of it.
resters
By hosting the vectors themselves, AWS can meta-optimize its cloud based on content characteristics. It may not seem like a major optimization, but at AWS scale it's billions of dollars per year. It also makes it easier for AWS to comply with censorship requirements.
coredog64
This comment appears to misunderstand AWS's control plane/data plane distinction. AWS does have limited access to your control plane, primarily for things like enabling your TAMs to analyze your costs or getting assistance from enterprise support teams. They absolutely do not have access to your data plane unless you specifically grant it. The primary use case for the latter is allowing writes into your storage for things like ALB access logs to S3. If you were deep in a debug session with enterprise support they might request one-off access to something large in S3, but I would be surprised if that were to happen.
resters
If that is the case, why create a separate GovCloud and HIPAA service?
thedougd
HIPAA services are not separate. You only need to establish a Business Associate Addendum (BAA) with AWS and stick to HIPAA-eligible services: https://aws.amazon.com/compliance/hipaa-eligible-services-re...
GovCloud exists so that AWS can sell to the US government and their contractors without impacting other customers who have different or less stringent requirements.
barbazoo
> It also makes it easier for AWS to comply with censorship requirements.
Does it? How? Why would it be the vector store that makes it easier for them to censor the content? Why not censor the documents in S3 directly, or the entries in the relational database? What is different about censoring those vs. a vector store?
resters
Once a vector has been generated (and someone has paid for it), it can be searched and relevant content identified without AWS incurring any additional cost to create its own separate censorship-oriented index. AWS can also add additional bits to the vector that benefit its internal goals (scalability, censorship, etc.).
Not to mention there is lock-in once you've gone to the trouble of using a specific embedding model on a bunch of content. Ideally we'd converge on backwards-compatible, open source approaches, but cloud vendors want to offer "value" by offering "better" embedding models that are not open source.
simonw
Why would they do that? Doesn't sound like something that would attract further paying customers.
Are there laws on the books that would force them to apply the technology in this way?
whakim
Regardless of the merits of this argument, dedicated vector databases are all running on top of AWS/GCP/Azure infrastructure anyways.
barbazoo
And that doesn't apply to any other database/search technology AWS offers?
storus
Does this support hybrid search (dense + sparse embeddings)? Pure dense embeddings aren't that great for specific searches; they only capture meaning reliably, not exact terms. Amazon's own embeddings also aren't SOTA.
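For readers unfamiliar with the hybrid search mentioned above, Reciprocal Rank Fusion (RRF) is one common way to combine a dense (embedding) result list with a sparse (keyword/BM25) result list; this sketch uses made-up document IDs and says nothing about how S3 Vectors itself works.

```python
# Reciprocal Rank Fusion: each list contributes 1/(k + rank) per document,
# so documents ranked well by BOTH retrievers float to the top.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d3", "d1", "d2"]   # semantic neighbors from embeddings
sparse = ["d2", "d3", "d5"]  # exact keyword hits
print(rrf([dense, sparse]))  # → ['d3', 'd2', 'd1', 'd5']
```

`d3` and `d2` win because both retrievers surfaced them; documents found by only one list rank below.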
danielcampos93
I think you would be very surprised by the number of customers who don't care if the embeddings are SOTA. For every Joe who wants to talk GraphRAG + MTEB + CMTEB and adaptive RAG, there are 50 who just want whatever IT/prodsec has approved.
infecto
That’s where my mind was going too. And if not, can this be used with OpenSearch hybrid search?
Fendy
what do you think?
sharemywin
It's annoying to me that there's not a doc store with vectors. It seems like the vector DBs just store the vectors.
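A "doc store with vectors" in the sense this comment asks for could look like the following hypothetical in-memory sketch, where the text and its embedding live in the same record so a search returns documents rather than bare IDs. All names and vectors here are invented for illustration.

```python
import math

# Hypothetical doc store: text and embedding stored together in one record.
store = []

def add_doc(text, vec):
    store.append({"text": text, "vec": vec})

def search(query_vec, k=1):
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm
    ranked = sorted(store, key=lambda d: cos(d["vec"], query_vec), reverse=True)
    return [d["text"] for d in ranked[:k]]  # the document itself comes back

add_doc("invoice for march", [0.9, 0.1])
add_doc("meeting notes", [0.1, 0.9])
print(search([1.0, 0.0]))  # → ['invoice for march']
```

This is essentially what systems like Elasticsearch or MongoDB Atlas provide at scale: the payload and the index entry are the same object, so no second lookup is needed.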
CuriouslyC
My search service Lens returns exact spans from search, while having the best performance both in terms of latency and precision/recall within a budget. I'm just working on release cleanup and final benchmark validation so hopefully I can get it in your hands soon.
simonw
Elasticsearch and MongoDB Atlas and PostgreSQL and SQLite all have vector indexes these days.
KaoruAoiShiho
> MongoDB Atlas
It took a while, but eventually open source dies.
storus
Pinecone allows 40 KB of metadata with each vector, which is often enough.
whakim
Elasticsearch and Vespa both fit the bill for this, if your scale grows beyond the purpose-built vector stores.
intalentive
I just use SQLite.
simonw
This is a good article and seems well balanced despite being written by someone with a product that directly competes with Amazon S3. I particularly appreciated their attempt to reverse-engineer how S3 Vectors work, including this detail:
> Filtering looks to be applied after coarse retrieval. That keeps the index unified and simple, but it struggles with complex conditions. In our tests, when we deleted 50% of data, TopK queries requesting 20 results returned only 15—classic signs of a post-filter pipeline.
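The post-filter behavior described in that quote is easy to simulate. This is a toy model, not S3 Vectors' actual implementation: the "index" returns its top-k candidates first, and only then is the deletion filter applied, so fewer than k results survive.

```python
corpus = list(range(1000))
deleted = set(range(0, 1000, 2))  # exactly 50% of rows deleted

def post_filter_topk(k):
    candidates = corpus[:k]  # stand-in for the k approximate nearest neighbors
    # Filter applied AFTER retrieval: pruned candidates are not replaced.
    return [c for c in candidates if c not in deleted]

print(len(post_filter_topk(20)))  # → 10, not the 20 requested
```

A pre-filter pipeline would instead restrict the candidate set before the top-k selection and still return a full 20 results, which is exactly the signal the article used to infer S3 Vectors' design.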
Things like this are why I'd much prefer if Amazon provided detailed documentation of how their stuff works, rather than leaving it to the development community to poke around and derive those details independently.