AI and Startup Moats
77 comments
·January 7, 2025float4
Etheryte
I think it's pretty easy to see the statement is an oversimplification to the point where it loses pretty much loses all value. Bezos says customers want vast selection, but most would agree that the reason Amazon is garbage these days is because it's flooded with cheap crap. The selection is vast, but the pile of dung is so large that it's practically impossible to find a good product hidden underneath the rest of it.
InkCanon
I find Bezos' statement to be a bit oversimplified. For example Temu (and virtually every other Chinese E commerce) wipes the floor with price and selection. Costco is cheaper than Walmart. Yet Amazon is vastly larger than both.
dbspin
Agreed. Without negative caveats, such positive statements are meaningless.
Customers want cheap goods. Caveat: They don't want (to know that) those goods are produced by slave labour.
Customers want a vast selection. Caveat: This should not include fake, shoddy or misleading listings
Customers want rapid delivery. Caveat: And they want it cheaply, or ideally for free, without breaking their stuff, at times they are home or in a manner they can receive the goods while away from home.
Etc.
torginus
Sorry this is a bit off topic (but relevant to your post).
I'm not American, but what do people like in Amazon, as in the retailer?
I have experience with the German Amazon, and often they're not the cheapest, they often don't have stock of the most popular items (as in the stuff you'd actually want, like iPhones or NVIDIA GPUs), and same day delivery, while nice, is something I can usually live without (and I'm willing to trade it in exchange for lower prices).
They seem to have an endless back catalog of cheap and cheerful mystery products of dubious quality, but I hardly consider that a decisive competitive edge.
spacebanana7
Amazon is excellent at selling physical books. I can order pretty much any vaguely popular book and have it delivered the next day at a price rarely higher than anywhere else.
That’s Amazon’s core business philosophically, everything else is an add on or side project that happened to be profitable.
I think that just like the original sin of web development is trying to run apps in a document browser, the original sin of Amazon is trying to sell everything in a bookstore.
whiplash451
Lowest click-to-package-at-my-door number (especially for books).
dboreham
US Amazon isn't like that, but iphones and short supply GPUs aren't widely available anyway. Apple controls where you can buy an iPhone and NVidia controls who gets GPUs.
sofixa
> I'm not American, but what do people like in Amazon, as in the retailer?
I'm not American either, but I use Amazon.fr occasionally. It has going for it:
* it's a trustworthy site. If I order something, I'm 100% sure I'll get it or get my money back. If I'm looking for something rather niche like an ESP32-S3 microcontroller, it beats buying on it vs a random site I've never heard of before that it will have longer delivery times and might be a scam or might have nonexistant support
* it has a large catalogue. I can buy coffee, kimchi, small electronics (PWM servo motors), larger electronics (toaster), power bank, USB C charger, mouse, outdoor furniture. It's easy to buy all sorts of stuff off it without hunting specialised physical stores or a ton of different websites. (of course for some things I know and already trust various websites or stores, so I buy off them; but for more generic or niche things, Amazon is pretty good)
* support, returns, delivery are all very good and there is barely anyone that is even close.
whiplash451
100%. This sentence in particular seems at odd with looking at constants:
> “Better product”: We need to define "better" clearly, but if you're basing this off your R&D efforts, I would very much fear the competition coming my way. If someone can use enough compute to copy you and use AGI to make a product better than what you currently have, is it still "better"?
IMO, better products is actually a constant that is anti-fragile to AI. Better products remain the best way to gain market shares for the foreseeable future (alongside solid marketing, ops and finance).
sillyfluke
Yes, definately. I find the lack of discussion about time frames as totally unserious. Their starting assumptions could be all valid if clairvoyantly made in the 90s and they'd still be utterly useless in helping startups make decisions for that decade. However, if they knew there would be significant breakthroughs in the early 2020s, well that'd be something else. Though you know, they'd have to find some random ways to stay alive until then.
Bezos is making assumptions about human behavior in that quote, and those assumptions seem instantly obvious to any human who is asked, regardless of their experience or expertise with any business whatsoever. There is no instant validity possible with the AI assumption.
trash_cat
> You should consider what won’t change, and the following is a (non-exhaustive) list of things that I think won’t change: I believe AI is and will continue to gain intelligence
I think this is a miss-representation of what he meant. Given that AI will be capable and prevalent (cheap intelligence) what are the factors that remain constant? He goes a lot into demand for physical things, like resources and/or supply chain, which is true. If anyone can relatively easily create a digital service then those with capital and physical resources will have bigger moat.
I personally think what will happen with the demand for digital services with intelligence being cheap.
sgt101
(total side track) There are other things that some customers want though:
- for the recommendations to offer me things I want or need, not things I just bought
- to be able to evaluate the quality of items rather than just the price of items
- for Amazon to extend it's brand around the items that I buy. "Amazon Recommends" is just so weak and offers no assurance or opportunity for loyalty. It's more or less meaningless and I suspect it's something that suppliers buy.
As every in business it's very difficult. I know that Amazon is humongous and knows it's business inside out. I am sure that Amazon insiders just feel tired reading other people's ideas about what would make things better, but on the other hand I do think that the narratives of business inevitability (and AI inevitability) are just false. Yes they have triumphed until recently, but what's happening in China really does undermine the idea that the future will be everyone just grifting to everyone else for a dime while the big corps enshitify anything that emerges from the primordial ooze.
Not that I think that what's happening in China is good.
Terr_
> Even if we’re being super conservative, the current capabilities of AI - like Claude 3.5, GPT-o1 - are already powerful enough to disrupt nearly every industry we know.
Skeptic here. The disruption might not be that large if the most ambitious applications also turn out to be fundamentally un-secure-able against malicious attacks, since "prompt injection" is not so much an exception as the fundamental operating principle of the text-fragment dream-machine.
pixelsort
It isn't fundamental. As the models begin to leverage test time compute more effectively, prompt injection becomes more difficult. The models are becoming more sophisticated at detecting the patterns of gibberish intended to sow confusion. In time, bare prompt injection probably stops being a thing. Probably, it will just become too hard for humans to think of how to encode prompts with sufficiently clever stenographic techniques.
alexvitkov
It doesn't matter how many layers of Python you use to obfuscate what a LLM actually is, as long as the prompt and the data you're operating on are part of the same token stream, prompt injection will exist in one form or another.
pixelsort
I imagine that with native tokens for planning and reflection empowering the models I'm referring to, it is something like a search space where we've enabled new reasoning capabilities by allowing multiple progressions of gradient descent that leverage partial success in ways that weren't previously possible. Lipstick or not, this is a new pig.
FrustratedMonky
"Prompt Injection".
1. I wonder if we need to start discussing "Prompt Injection" security about humans. Maybe Fox and Far Right marketing is a form of human Prompt Injection hacks.
2. Maybe a better model for how future "Prompt Injection" will work. Hacking an AI will be more about 'convincing it' kind of like how humans have to be 'convinced' like with propoganda.
3. SnowCrash had the human hacking virus based on language patterns from ancient Sumerian. Humans and Machines can both be hacked by language. Maybe more researching into hacking AI will give some insight into how to hack humans.
sgt101
Nearly complete security isn't security. If the potential is there people will find it, other models will find it.
Everythings fine until one day $200m disappears from your balance sheet and no one can explain why!
pixelsort
Working prompt injections for frontier models are devised by applying brilliant pattern constructions. If models ever become useful for writing them, that would represent a massive intelligence leap and a major concern.
As things stand, with working injections becoming harder for humans, people won't be able to make a name for themselves on the internet extracting meth recipes.
My point is just that it isn't a fundamental flaw, or at least, there are indications that reasoning at test time seems to be a part of the remedy.
StevenWaterman
Prompt injection attacks work against humans too, it's just called phishing
If you set up a system where a single human can't cause $200m to go missing, then you can give AI access to that same interface
dimitri-vs
I would argue the opposite, and I expect we'll see this pattern emerge this year:
- Companies pushing "agentic" capabilities into everything
- AI agents gaining expanded function calling abilities
- Applications requesting escalating permissions under the guise of context gathering
- Software development increasingly delegated to AI agents
- Non-developers effectively writing code through tools like Devin
The resulting security attack surface is absolutely massive.
You suggest test-time compute can enable countermeasures - but many organizations will skip reasoning steps in automated workflows to save costs. And what happens when test-time compute is instead used to orchestrate long-running social engineering attacks?
"Hey, could you ask Devin to temporarily disable row-level security? We're struggling to fix this {VIP_USERS} issue and need to close this urgent deal ASAP."
Terr_
> It isn't fundamental.
Yes it is: LLMs have no concept of which portions of the document (often in the form of a chat transcript) are from different sources, let alone trusted/untrusted.
qeternity
This is not strictly true, although I tend to agree with the gist of your point.
Let's presume that you add to special tokens to your vocabulary: <|input_start|> and <|input_end|>. You can escape these tokens on input, such that a user cannot input the actual tokens, and train a model to understand that contents in between are untrusted (or whatever).
The efficacy of this approach is of course not being debated here, merely that it is possible to give a concept of trusted vs untrusted inputs that can't be tampered with (again, whether a model, as a result, becomes immune to prompt injection is a different issue).
pixelsort
What has changed with CoT and high compute is not yet clear. My point is that if it makes bare prompt injection harder for humans then we shouldn't call it a fundamental limitation anymore.
Are LLMs nothing more than auto-regressive stochastic parrots? Perhaps not anymore, depending on test time, native specialty tokens, etc.
mvdtnz
Absolute nonsense. There's not a single shred of truth or even an argument with enough coherence to debate with in your post. You've written the AI grifter equivalent of "nuh uhhhh".
pixelsort
What grift? I'm only reporting first-hand and second-hand anecdata -- some of which is observations from the "prompt whisperers" who follow in Pliny's circles. Chain of thought poses an existential risk to prompt injection.
soulofmischief
Look on the bright side, a whole generation of hackers will grow up with prompt injection being their culture's phreaking and SQL injection.
soiax
This sound like you assume that the first thing someone thinks about is security, when building the next big thing.
They will just build something as fast as they can. Last thing you think about is "security".
There were prompt injections in all the big models, and still are. Why would it stop distruption?
Terr_
The blog-poster is talking about long-term trends, so it doesn't matter if early-adopters skip on security, the time-horizon is long enough that the consequences will matter.
If we stop and carefully look at our world, security (safety against malicious peers) is an iceberg taken for granted. One might start by summing up the militaries of every country on earth. Add the budgets of most police departments, and a good chunk of the justice system. The energy, material, and labor poured into most weapons, fences, doors, and locks. The CPU cycles used in all encryption, and most of the hashing.
P.S.: "Investors, friends, I am pleased to announce that our bold and powerful new business-model which will completely disrupt the entire retail sector, worldwide, and change society forever. Behold! TTLMD: Take The Thing and Leave the Money in the Drawer! Existing industry dinosaurs will be unable to compete with our ultra-low-cost alternative which needs barely any staff."
soiax
You mentioned prompt injection, now when you talk about larger time horizons, that sounds like a AI alignment issue.
I'm sure there will be actors who don't care at all about "security", saying the positive outcomes outweight the negatives.
airstrike
> Bezos nailed it on this topic: “[...] [I]n our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection. It's impossible to imagine a future 10 years from now where a customer comes up and says, 'Jeff I love Amazon; I just wish the prices were a little higher,' [or] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible. And so the effort we put into those things [...] will still be paying off dividends for our customers 10 years from now. [...]”
This quote from TFA makes it sound like Bezos was the first to realize customers want low prices, but that's obviously false. What made Amazon special wasn't that realization. It was, among other things, to offer a better _shopping_ experience than the alternatives by making products easier to find, one-click purchases, customer reviews, detailed organized descriptions, FAQs, an increasingly growing selection... and then offer a better _shipping_ experience with later 2-day shipping for a flat annual fee, now often 1-day or same-day in some geographies, no-fuss returns and so on.
No one else has figured out logistics in the same way that Amazon has. Obviously scale helps, but Walmart had all the scale it could want and it still didn't figure out how to make it work. Shopify has also only faltered and fumbled so far.
Amazon created value because it organized the extremely complex activities of shopping and shipping in a way that makes them the obvious choice 99/100 times. That requires talent, software and hard work. It delivered so god damn much of those three things that it created AWS as a byproduct.
That's the Amazon DNA. That's where they shine and where they outcompete everyone else, including Walmart and other traditional retail names as well as FedEx, UPS and all other traditional shipping players.
When Amazon strays from that core DNA, they struggle too. Its successes with things like iRobot, the Fire line, Luna, Alexa, Whole Foods for the most part are either muted, late, or missing entirely.
ben_w
> It's impossible to imagine a future 10 years from now where a customer comes up and says […] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible.
Bezos said impossible, but he was wrong about this. Because they sometimes spontaneously change delivery dates to be sooner, this can mean you have to be available on every day until a product arrives to avoid a "sorry we missed you" letter followed by needing to go to wherever the collection office is.
Reliable delivery can beat fast. And for those of us not able to work from home, scheduled delivery for when we're in, also beats fast. And if we have several different things all in the same order, where we need all the parts to make use of any of them, simultaneous delivery is marginally more convenient than each item being shipped as soon as it's available.
emanuer
Here is the perspective of a serial founder, exploring fields which I might be able to disrupt:
- The regulatory moat is immediately intimidating.
- The data moat, often, is quite surmountable as long as LLMs can generate high-quality synthetic data (e.g., user preferences). On this I disagree with the author, to some extend.
- The "distribution moat" is another significant barrier. Even if I have a superior product, if the marketing and sales demands are so high that neither I nor an army of bots can manage it alone, the business becomes nonviable (e.g., enterprise sales).
- "Switching costs" form the next moat. The higher these costs, the greater the value per dollar I must offer over the incumbents (e.g., software for dentists).
- Another key barrier is the “business rules” moat. Achieving 80% of the required features may be easy, but as customers demand 90% or 95%, the complexity and cost of reverse engineering grow exponentially. The more mature the market, the higher these demands (e.g., Jira).
With the power of LLMs at my disposal, I have reaffirmed two core beliefs:
1. I must focus on a niche small enough, so that I am the only provider. (e.g., accounting software for gym owners in the north of France)
2. I must offer a value proposition different from that of the incumbents, where competing with me, would harm their business. (e.g., image editing app where you pay per hour used)
So my search continues…
mritchie712
You're likely tossing out a random example on #1, but if that were a real idea, you'd need a good answer for: why can't gym owners in the north of France just use quickbooks or xero.
emanuer
You are correct, it was just a random example.
And I share your observation, if there is no clear answer to your question, the idea must be disregarded.
whiplash451
I like your train of thoughts. I think you're missing the network effect. It is often an overplayed classic, but I do think that it matters in an AI world.
ankit219
Reading this, thinking in terms of moats is useful, but in terms of AI, we are not there yet. There is a promise of exceptional improvement to everything, so much that many companies which takes ages to change a software are moving at a significantly faster pace.
One counter-intuitive thing here I believe is that thinking about moats is limiting. If you can deliver a solution today, which may not hold for a longer period (you keep innovating or launching newer products), is a preferable place to be than working out what could stand the test of time. Real answer is we don't know. A very real example is agents - thinking systems which can plan, reason, and take action. Within three months, an o1 equivalent would be able to do all that implicitly without a developer having to write complex pipelines, and companies woudl have to start over. AI democratizes human skill. That I think is a bigger mental model shift than many realize.
Over2Chars
I found the part of this I read to be a less than convincing market analysis of the barriers to entry for business.
Here's an AI on the same topic
"briefly, what are the top 5 current barriers to entry for AI companies"
Certainly! Here are five of the most significant barriers currently affecting the startup phase of AI companies:
1. *Data Quality and Availability*: Access to high-quality data is crucial for training effective machine learning models. However, obtaining large amounts of labeled data can be costly and challenging.
2. *High Initial Development Costs*: Building robust AI solutions often requires substantial investment in research, development, and infrastructure. This includes hiring skilled professionals with expertise in AI, as well as investing in hardware and software tools.
3. *Regulatory Compliance*: Many industries have strict regulations that businesses must comply with, especially when dealing with sensitive data or making predictions that could impact people’s lives (e.g., healthcare, finance). Adhering to these laws can be complex and costly.
4. *Technological Complexity*: Advanced AI technologies often require a high level of technical expertise. Companies need specialists in algorithms, software development, and domain-specific knowledge to design and deploy effective solutions.
5. *Scalability and Maintenance Costs*: Once an AI system is developed, there are ongoing costs associated with maintaining the model (e.g., updating algorithms as new data becomes available) and ensuring that it continues to perform well as usage increases.
These barriers can vary based on specific sectors and market dynamics but generally represent significant hurdles for AI startups.
mvdtnz
Don't post AI slop in HN comments.
Over2Chars
Yes, my point exactly. This AI slop is better than the article.
KaiserPro
We have not really reached the peak of the AI bubble yet, so its a bit hard to concretely talk about moats.
LLMs aren't the golden bullet the article hints at. Sure they are improving, but the cost is not falling. It costs a huge amount to create foundation models, and there will be a point where either we have a breakthrough (ie we move from sequence generation to concept synthesis) or the money runs out.
But regardless the rule of thumb still holds:
If your business idea is simple to do, then you need another plank to make your moat. That could be network effect, access to capital or both.
Patents are there to inhibit capital, because it costs money to challenge a patent (as well as defend)
If your business idea is not simple to implement, then you might have the benefit of time.
AI doesn't really change any of that, it just amplifies the effect. ie, making an amazon clone is simple now, because the tech/infra exists. Amazon had to make that infrastructure first, which was hard.
silveraxe93
But the cost is _definitely_ falling. For a recent example, see DeepSeek V3[1]. It's a model that's competitive with GPT-4, Claude Sonnet. But cost ~$6 Million to train.
This is ridiculously cheaper than what we had before. Inference is basically getting an 10x cheaper per year!
We're spending more because bigger models are worth the investment. But the "price per unit of [intelligence/quality]" is getting lower and _fast_.
Saying that models are getting more expensive is confusing the absolute value spent with the value for money.
ADeerAppeared
> Inference is basically getting an 10x cheaper per year!
You're gonna need some good citations for that.
There's a big difference between companies saying "The inference costs on our service are down" and the inference costs on the model are down. The former is oft cheated by simplyifying and dumbing down the models used in the service after the initial hype and benchmarks.
> But the "price per unit of [intelligence/quality]" is getting lower and _fast_.
Absolutely not a general trend across models. At best, older models are getting cheaper to run. Newer models are not cheaper "per unit of intelligence". OpenAI's fany new reasoning models are orders of magnitude more expensive to run whilst being ~linear improvements in real world capabilities.
silveraxe93
See situational-awareness[1], see the "algorithmic efficiencies" section. He shows many examples of how models are getting cheaper. With many citations.
Costs are not just down on a specific service. Even though I don't see the problem in that, as long as you get the promised level of performance, without being subsidised. See the deepseek model I linked above. It's an open model and you can run it yourself.
> At best, older models are getting cheaper to run.
What's your definition of old here? If you compare the literal bleeding edge model (o3) to 2 years ago best model (GPT-4)? Not only is this a ridiculously misleading comparison, it's not even valid!
o3 is a reasoning model. It can spend money at test time to improve results. Previous models don't even have this capability. You can't look at one example of where they just threw a lot of money and say this is the cost. The cost is unbounded! If they want, they can just not let the model think for ages and have basically "0-thinking" outputs. This is what you use to compare models.
If you compare _todays_ cost for training and inference of a model as good as GPT-4 when it was released, this cost has massively gone down on both counts.
[1] - https://situational-awareness.ai/from-gpt-4-to-agi/#The_tren...
KaiserPro
I'm not convinced about that 10 cheaper a year.
Larger models need more memory. I'm willing to bet that most of the tier 1 providers rely on multi-GPU models to serve traffic.
None of that is cheap, 8x GPU nodes that serve less than 20 queries a second are exceedingly expensive to run.
silveraxe93
Larger models are more expensive to run (ceteris paribus). But we're seeing we can squeeze more performance from smaller models.
You need to compare like-for-like. You can't say that the cost of building a 5-story apartment is increasing by pointing at the burj khalifa.
mvdtnz
> We're spending more because bigger models are worth the investment
Are they? Where's the value? What are they being used for actually out there in the real world? Not the shitty apps that simonw bleats about day in day out, not the lame website bots that repeat your FAQ back at me - actual real valuable (to the tune of the billions being invested in them) use cases?
silveraxe93
ChatGPT is one of the fastest growing apps ever. Saying that's there's no products is willful blindness by this point.
This is hackernews. I'd expect users to have a basic understanding of VC investment. The expected value of next-gen models times the probability to create them is higher than the billions than they are throwing at it.
beernet
> LLMs aren't the golden bullet the article hints at. Sure they are improving, but the cost is not falling. It costs a huge amount to create foundation models, and there will be a point where either we have a breakthrough (ie we move from sequence generation to concept synthesis) or the money runs out.
Very, very few use cases require training a new model. The vast majority can be solved by inferencing existing models, where it is absolutely true that inference costs are steadily declining.
> making an amazon clone is simple now
Seriously? Cloning Amazon is not equivalent to cloning a frontend...
mnky9800n
Imo making it easy to specialise a model is hard right now. Building RAGs and other things like that requires technical knowledge but dumping things on ChatGPT does not. Perhaps building a ui for dummies for specialising models so that they actually learn from new data and not just do inference is the way to go. But I imagine perplexity et al. Is already trying to do this.
KaiserPro
> Seriously? Cloning Amazon is not equivalent to cloning a frontend...
A frontend isn't a business, something that a good number of startups forget.
What I mean is that card payment, shipping, inventory management and sourcing are trivial compared to when amazon started.
visarga
Whoever owns the problem, owns the benefits of applying AI, not those who train the model, not those who host it. The only moat is to own the problem. AIs will be easily commoditised.
billconan
they predicted single person companies at 1B valuation with the help of AI. I don't believe it. AI empowers small teams, but AI also levels the playing field by evening out everyone's capabilities.
vb-8448
> tl;dr: o3 managed to solve a problem it wasn’t trained on, with orders of magnitude better performance than other state of the art models
Is that true? They said that something called "o3-tuned" has been able to achieve the performance, what does "tuned" mean in this context?
soiax
Yeah that's false.
from: https://arcprize.org/blog/oai-o3-pub-breakthrough
"Note on "tuned": OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data."
whiplash451
> Remember those 6+ people ML teams a few years back, working full-time on outcomes that one LLM call could achieve today?
er, what are we talking about here, seriously?
This sentence single-handedly nuked my trust in the post.
> Bezos nailed it on this topic: “[...] [I]n our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection. It's impossible to imagine a future 10 years from now where a customer comes up and says, 'Jeff I love Amazon; I just wish the prices were a little higher,' [or] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible. And so the effort we put into those things [...] will still be paying off dividends for our customers 10 years from now. [...]”
> You should consider what won’t change, and the following is a (non-exhaustive) list of things that I think won’t change: I believe AI is and will continue to gain intelligence
Okay, but that way you can frame every ongoing change as a constant. "Change X will continue, and because it's already ongoing and will simply continue, I consider it a constant and therefore add it to my list of 'things that won't change'". But that's clearly not what Bezos meant.