Cerebras achieves 2,500T/s on Llama 4 Maverick (400B)

ryao

> At over 2,500 t/s, Cerebras has set a world record for LLM inference speed on the 400B parameter Llama 4 Maverick model, the largest and most powerful in the Llama 4 family.

This is incorrect. The unreleased Llama 4 Behemoth is the largest and most powerful in the Llama 4 family.

As for the speed record, it seems important to keep it in context. That comparison is only for performance on a single query, but it is well known that providers run potentially hundreds of queries in parallel to get their money's worth out of the hardware. If you aggregate the tokens per second across all simultaneous queries to get total throughput, I wonder whether it would still look so competitive in absolute performance.

Also, Cerebras is the company that, until some time last year, was not only saying that their hardware was not well suited to inference, but even partnered with Qualcomm and claimed that Qualcomm's accelerators had a 10x price-performance improvement over their own products:

https://www.cerebras.ai/press-release/cerebras-qualcomm-anno...

Their hardware does inference with FP16, so they need ~20 of their CSE-3 chips to run this model. Each one costs ~$2 million, so that is $40 million. The DGX B200 that they used for their comparison costs ~$500,000:

https://wccftech.com/nvidia-blackwell-dgx-b200-price-half-a-...

You only need 1 DGX B200 to run Llama 4 Maverick. You could buy ~80 of them for the price it costs to buy enough Cerebras hardware to run Llama 4 Maverick.
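
Back of the envelope on that (the ~20-wafer count is from the FP16 sizing above; both price tags are rough assumptions, not quotes or list prices):

  wafers_for_maverick = 20              # ~20 wafer-scale systems to hold Maverick in FP16
  cost_per_cerebras_system = 2_000_000  # assumed ~$2M each
  cost_per_dgx_b200 = 500_000           # assumed ~$500K each

  cerebras_total = wafers_for_maverick * cost_per_cerebras_system
  print(cerebras_total)                       # 40000000 -> ~$40M
  print(cerebras_total // cost_per_dgx_b200)  # 80 -> ~80 DGX B200s for the same money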

Their latencies are impressive, but beyond a certain point, throughput is what counts, and they don't really talk about their throughput numbers. I suspect the cost-to-performance ratio is terrible for throughput; it certainly is terrible for latency. That is what they are not telling people.

Finally, I have trouble getting excited about Cerebras. SRAM scaling is dead, so short of figuring out how to 3D-stack their wafer-scale chips during fabrication at TSMC, or designing round chips, they have a dead-end product, since it relies on using an entire wafer to throw SRAM at problems. Nvidia, using DRAM, is far less reliant on SRAM and can use more silicon for compute, which is still shrinking.

bubblethink

>Each one costs ~$2 million, so that is $40 million.

Pricing for exotic hardware that is not manufactured at scale is quite meaningless. They are selling tokens over an API. The token pricing is competitive with other token APIs.

ryao

Last year, I took the time to read through public documents and estimated that their annual production was limited to ~300 wafers per year from TSMC. That is not Nvidia level scale, but it is scale.

There are many companies that sell tokens through an API and many more that need hardware to compute tokens. Cerebras posted a comparison of hardware options for these companies, so evaluating it as such is meaningful. It is perhaps less meaningful to the average person who cannot clear the barrier to entry to buy this hardware, but plenty of people are curious about the options available to the companies that sell tokens through APIs, since those options affect available capacity.

latchkey

> There are many companies that sell tokens from an API

I was just at Dell Tech World and they proudly displayed a slide during the CTO keynote that said:

"Cost per token decreased 4 orders of magnitude"

Personally speaking, not a business I'd want to get into.

jenny91

I agree on the first. On the second: I would bet a lot of money that they aren't actually breaking even on their API (or even close to). They don't have a "pay as you go" per-token tier, it's all geared up to demonstrate use of their API as a novelty. They're probably burning cash on every single token. But their valuation and hype has surely gone way up since they got onto LLMs.

bubblethink

They seem to have dev tier pricing (https://inference-docs.cerebras.ai/support/pricing). It's likely that they don't make much money on this and only make money on large enterprise contracts.

attentive

> Also, Cerebras is the company that, until some time last year, was not only saying that their hardware was not well suited to inference, but even partnered with Qualcomm and claimed that Qualcomm's accelerators had a 10x price-performance improvement over their own products

Mistral says they run Le Chat on Cerebras

ryao

How is that related to the claim that Cerebras themselves made about their hardware’s price performance ratio?

https://www.cerebras.ai/press-release/cerebras-qualcomm-anno...

arisAlexis

Also Perplexity.

addaon

> SRAM scaling is dead

I'm /way/ outside my expertise here, so possibly-silly question. My understanding (any of which can be wrong, please correct me!) is that (a) the memory used for LLMs is dominated by parameters, which are read-only during inference; (b) SRAM scaling may be dead, but NVM scaling doesn't seem to be; (c) NVM read bandwidth scales well locally, within an order of magnitude or two of SRAM bandwidth, for wide reads; (d) although NVM isn't currently on leading-edge processes, market forces are generally pushing NVM to smaller and smaller processes for the usual cost/density/performance reasons.

Assuming that cluster of assumptions is true, does that suggest that there's a time down the road where something like a chip-scale-integrated inference chip using NVM for parameter storage becomes viable?

ryao

The processes used for logic chips and the processes used for NVM are typically different. The only case I know of the industry combining them onto a single chip would be Texas Instruments’ MSP430 microcontrollers with FeRAM, but the quantities of FeRAM there are incredibly small and the process technology is ancient. It seems unlikely to me that the rest of the industry will combine the processes such that you can have both on a single wafer, but you would have better luck asking a chip designer.

That said, NVM often has a wear-out problem. This is a major disincentive for using it in place of SRAM, which is frequently written. Different types of NVM have different endurance limits, but if they did build such a chip, it is only a matter of time before it stops working.

addaon

> The only case I know of the industry combining them onto a single chip would be Texas Instruments’ MSP430 microcontrollers with FeRAM

Every microcontroller with on-chip NVM would count. Down to 45 nm, this is mostly Flash, with the exception of the MSP430's FeRAM. Below that... we have TI pushing Flash, ST pushing PCM, NXP pushing MRAM, and Infineon pushing (TSMC's) RRAM. All on processes in the 22 nm (planar) range, either today or in the near future.

> This is a major disincentive for using it in place of SRAM, which is frequently written.

But isn't parameter memory written once per model update, for silicon used for inferencing on a specific model? Even with daily writes, the typical 10k-1M allowable write cycles for most of the technologies above would last decades.
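
Quick sanity check, assuming one full parameter rewrite per day and the 10k-1M cycle range quoted above (a rough sketch, not vendor endurance data):

  # Endurance bounds are the ballpark 10k-1M write cycles mentioned above.
  writes_per_year = 365  # one full model update per day
  for cycles in (10_000, 1_000_000):
      print(cycles, "cycles ->", round(cycles / writes_per_year), "years")
  # 10,000 cycles    -> ~27 years
  # 1,000,000 cycles -> ~2,740 years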

timschmidt

> I have trouble getting excited about Cerebras. SRAM scaling is dead, so short of figuring out how to 3D-stack their wafer-scale chips

AMD and TSMC are stacking SRAM on the chip scale. I imagine they could accomplish it at the wafer scale. It'll be neat if we can get hundreds of layers in time, like flash.

Your analysis seems spot on to me.

latchkey

More on the CPU side than the GPU side. GPU is still dominated by HBM.

nsteel

Assume you meant Intel, rather than AMD?

skryl

Performance per watt is better than the H100 and B200, performance per watt per dollar is worse than the B200, and it does FP8 just fine:

https://arxiv.org/pdf/2503.11698

skryl

One caveat is that this paper only covers training, which can be done on a single CS-3 using external memory (swapping weights in and out of SRAM). There is no way that a single CS-3 will hit this record inference performance with external memory, so this was likely done with 10-20 CS-3 chips and the full model in SRAM. You definitely can't compare tokens/$ with that kind of setup vs a DGX.
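
Rough sizing of why, assuming roughly 44 GB of on-wafer SRAM per CS-3 (the published WSE-3 figure) and ignoring KV cache and activations:

  SRAM_PER_CS3_GB = 44   # approximate on-wafer SRAM per CS-3 (assumed from the WSE-3 spec)
  PARAMS_B = 400         # Llama 4 Maverick total parameters, in billions

  for precision, bytes_per_param in (("FP16", 2), ("FP8", 1)):
      weights_gb = PARAMS_B * bytes_per_param
      wafers = -(-weights_gb // SRAM_PER_CS3_GB)  # ceiling division
      print(precision, weights_gb, "GB of weights ->", wafers, "wafers")
  # FP16: 800 GB -> 19 wafers (hence "10-20 CS-3 chips")
  # FP8:  400 GB -> 10 wafers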

ryao

Thanks for the correction. They are currently using FP16 for inference according to OpenRouter. I had thought that implied they could not use FP8, given the pressure they are under to use as little memory as possible from being solely reliant on SRAM. I wonder why they opted for FP16 instead of FP8.

lern_too_spel

Performance per watt per dollar is a useless metric as calculated. You can't spend more money on B200s to get more performance per watt.

x-complexity

Pretty much no disagreements IMO.

By the time the CSE-5 is rolled out, it *needs* at least 500GB of SRAM to make it worthwhile. Multi-layer wafer stacking's the only path to advance this chip.

littlestymaar

> This is incorrect. The *unreleased* Llama 4 Behemoth is the largest and most powerful in the Llama 4 family.

Emphasis mine.

Behemoth may become the largest and most powerful Llama model, but right now it's nothing but vaporware. Maverick is the largest and most powerful Llama model today (and if I had to bet, my money would be on Meta eventually discarding Llama 4 Behemoth entirely without ever releasing it, and moving on to the next version number).

bob1029

I think it is too risky to build a company around the premise that someone won't soon solve the quadratic scaling issue, especially when that company involves creating ASICs.

E.g.: https://arxiv.org/abs/2312.00752

qeternity

Attention is not the primary inference bottleneck. For each token you have to load all of the weights (or activated weights) from memory. This is why Cerebras is fast: they have huge memory bandwidth.
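
A toy model of that bandwidth-bound view of single-stream decoding; the active-parameter count and bandwidth figures below are illustrative assumptions, not measured numbers:

  def tokens_per_second(active_params_b, bytes_per_param, bandwidth_tb_s):
      # Each generated token streams the active weights through compute once.
      bytes_per_token = active_params_b * 1e9 * bytes_per_param
      return bandwidth_tb_s * 1e12 / bytes_per_token

  # Llama 4 Maverick is MoE: roughly 17B of its 400B parameters are active per token.
  print(tokens_per_second(17, 2, 8))     # ~235 t/s at ~8 TB/s of HBM (B200-class GPU)
  print(tokens_per_second(17, 2, 1000))  # ~29,000 t/s at ~1 PB/s of on-wafer SRAM bandwidth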

Havoc

Yeah, it also strikes me as quite risky. Their gear seems very focused on the Llama family specifically.

It just takes one breakthrough and it's all different. See the recent diffusion-style LLMs, for example.

turblety

Maybe one day they'll have an actual API that you can pay per token. Right now it's the standard "talk to us" if you want to use it.

iansinnott

Although not obvious, you _can_ pay them per token. You have to use OpenRouter or Huggingface as the inference API provider.

https://cerebras-inference.help.usepylon.com/articles/192554...

turblety

Oh, this is cool. Didn't know they were on OpenRouter. Thanks.

kristianp

Interestingly, Llama 4 Maverick isn't available on that page, only Scout.

bn-l

Yep, looks like it's just Scout and lower.

twothreeone

Huh? Just make an account, get your API key, and try out the free tier. Works for me.

https://cloud.cerebras.ai

M4v3R

Yep, can confirm, I've been using their API just fine with Llama 4 Scout for weeks now.

bn-l

> that you can pay per token

diggan

> The most important AI applications being deployed in enterprise today—agents, code generation, and complex reasoning—are bottlenecked by inference latency

Is this really true today? I don't work in enterprise, so I don't know what things look like there, but I'm sure lots of people here do, and it feels unlikely that inference latency is the top bottleneck, even above humans or waiting for human input. Maybe I'm just using LLMs very differently from how they're deployed in an enterprise, but I'm by far the biggest bottleneck in my setup currently.

baq

It is if you want good results. I've been giving Gemini Pro prompts that run for 200+ seconds multiple times per day this week, and for such tasks I really like to make it double- and triple-check and sometimes give the results to Claude for review, too (and vice versa).

Ideally I can just run the prompt 100x and have it pick the best solution later. That’s prohibitively expensive and a waste of time today.

diggan

> That’s prohibitively expensive

Assuming your experience is from working within an enterprise, you're then saying that cost is the biggest bottleneck currently?

It's also surprising to me that enterprises would use out-of-the-box models like that; I was expecting fine-tuned models to be used most of the time, for very specific tasks/contexts, but maybe that's way optimistic.

threeseed

Cost is irrelevant compared to the salaries of the people using them, so they will do basic cost controls but nothing too onerous. And cost is never a reason to prevent solutions from being built and deployed.

And most enterprises aren't doing anything advanced with AI. Just doing POCs with chatbots (again), which will likely fail (again). Or trying to build enterprise search engines, which are pointless because most content is isolated per team. Or a few OCR projects, which are pretty boring and underwhelming.

baq

Cost would be the biggest factor if the price per token were the same but tokens were arriving 100x faster. (Not particularly unexpected, I'd say.)

tiffanyh

How do you create a prompt for Gemini that makes it spend 200 seconds and review multiple times?

Is it as simple as stating in the prompt:

  Spend 200+ seconds and review multiple times <question/task>

baq

You give it a task from hell which the devil himself outsources, like ‘figure out how these fifty repositories of yaml blobs, jinja templates and code generating code generating hcl generating yaml interact to define the infrastructure, then add something to it with correct iams, then make a matching blob of yaml pipelines to work with that infrastructure’

threeseed

Only an insignificant minority of companies are running their own LLMs.

Everyone else is perfectly fine using whatever Azure, GCP, etc. provide. Enterprise companies don't need to be the fastest or have the best user experience. They need to be secure, trusted and reliable. And you get that by using cloud offerings by default and only going third party when there is a serious need.

aktuel

If you think that cloud offerings are secure and trustworthy by default you truly must be living under a rock.

threeseed

I have worked for a dozen companies, all earning more than $20B a year in revenue. That includes two banks and a hedge fund. All use the cloud.

You must be living under a rock if you think the cloud isn't secure enough for the enterprise.

UltraSane

AWS is in fact extremely secure.

Toritori12

I feel a lot of companies do it to reduce liability. It may not be more secure, but it is not their problem.

qu0b

True, the biggest bottleneck is formulating the right task list and ensuring the LLM is directed to find the relevant context it needs. I feel LLMs, in their instruction following, are often too eager to output rather than use tools (read files) in their reasoning step.

y2244

The investor list includes Altman and Ilya:

https://www.cerebras.ai/company

ryao

Their CEO is a felon who pleaded guilty to accounting fraud:

https://milled.com/theinformation/cerebras-ceos-past-felony-...

Experienced investors will not touch them:

https://www.nbclosangeles.com/news/business/money-report/cer...

I estimated last year that they can only produce about 300 chips per year, and that is unlikely to change because there are far bigger customers ahead of them in priority for TSMC capacity. Their technology is interesting, but it is heavily reliant on SRAM, and SRAM scaling is dead. Unless they get a foundry to stack layers for their wafer-scale chips or design a round chip, they are unlikely to be able to improve their technology very much past the CSE-3. Compute might increase somewhat in the CSE-4, if there is one, but memory will not increase much if at all.

I doubt the investors will see a return on investment.

impossiblefork

While the CEO stuff is a problem, I don't think the other stuff matters.

Per unit of chip area, the WSE-3 is only a little bit more expensive than the H200. While you may need several WSE-3s to load the model, if you have enough demand that you are running the WSE-3 at full speed you will not be using more area in the WSE-3. In fact, the WSE-3 may be more efficient, since it won't be loading and unloading things from large memories.

The only effect is that the WSE-3s will have a minimum demand before they make sense, whereas an H200 will make sense even with little demand.

ryao

I did the math last year to estimate how many wafers per year Nvidia had, and from my recollection it was >50,000. Cerebras, with their ~300 per year, is not able to handle the inference needs of the market. It does not help that all of their memory must be inside the wafer, which limits the amount of die area they have for actual logic. They have no prospect for growth unless TSMC decides to bless them or they switch to another foundry.

> While you may need several WSE-3s to load the model, if you have enough demand that you are running the WSE-3 at full speed you will not be using more area in the WSE-3.

You need ~20 wafers to run the Llama 4 Maverick model on Cerebras hardware. That is close to a million mm^2. The Nvidia hardware that they used in their comparison should have less than 10,000 mm^2 of die area, yet can run it fine thanks to the external DRAM. How is the CSE-3 not using more die area?
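
For reference, the wafer-side arithmetic, taking the published ~46,225 mm^2 WSE-3 die size as an assumption:

  WSE3_AREA_MM2 = 46_225       # published WSE-3 die area, approximate
  print(20 * WSE3_AREA_MM2)    # 924,500 mm^2, i.e. close to a million mm^2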

> In fact, the WSE-3 may be more efficient, since it won't be loading and unloading things from large memories.

This makes no sense to me. Inference software loads the model once and then uses it multiple times. This should be the same for both Nvidia and Cerebras.

moralestapia

>Their CEO is a felon who pleaded guilty to accounting fraud [...]

Whoa, I didn't know that.

I know he's very close to another guy whom I know firsthand to be a criminal. I won't write the name here for obvious reasons; it's also not my fight to fight.

I always thought it was a bit weird of them to hang around together, because I never got that vibe from Feldman, but... now that I know about this, second strike I guess...

canucker2016

CNBC lists several other red flags (one customer generating >80% of revenue, non-top-tier investment bank/auditor).

see https://www.cnbc.com/2024/10/11/cerebras-ipo-has-too-much-ha...

IPO was supposed to happen in autumn 2024.

arisAlexis

OpenAI wanted to buy them. G42, the largest player in the Middle East, owns a big chunk. You are simply wrong about big investors not touching them, but my guess is they will be bought soon by Meta or Apple.

threeseed

> Apple

I can't imagine Apple being interested.

Their priority is figuring out how to optimise Apple Silicon for LLM inference so it can be used in laptops, phones and data centres.

geor9e

I love Cerebras. 10-100x faster than the other options. I really wish the other companies realized that some of us prefer our computer to be instant. I use their API (with a Qwen3 reasoning model) for ~99% of my questions, and the whole answer finishes in under 0.1 seconds. Keeps me in a flow state. Latency is jarring, especially the 5-10 seconds most AIs take these days, which is just enough to make switching tasks not worth it. You just have to sit there in stasis. If I'm willing to accept any latency, I might as well make it a couple of minutes in the background and use a full agent mode or deep research AI at that point. Otherwise I want instant.

bravesoul2

I tried some Llama 4s on Cerebras and they were hallucinating like they were on drugs. I gave it a URL and asked it to analyse a post for style, and it made it all up; it didn't look at the URL (or realize that it hadn't looked at it).

tryauuum

Yes, it was not obvious that it's not terabytes per second.

Alifatisk

In the context of LLMs, the unit is tokens, and output is measured in tokens per second (t/s).

lordofgibbons

Very nice. Now for their next trick, they should offer inference on actually useful models like DeepSeek R1 (not the distills).

thawab

Are the Llama 4 issues fixed? What is it good at? Coding is out the window after the updated R1.

NitpickLawyer

Yes, the issues were fixed ~1-2 weeks after release. It's a good "all-rounder" model, best compared to 4o. Good multilingual capabilities, even in languages not specifically highlighted. Fast to run inference on. Code is not one of its strong suits at all.
