Amazon launches Trainium3
12 comments · December 2, 2025
cmiles8
AWS keeps making grand statements about Trainium, but not a single customer comes on stage to say how amazing it is. Everyone I've talked to who tried it says there were too many headaches and they moved on. AWS pushes it hard, but "more price performant" isn't a benefit if it's a major PITA to deploy and run relative to other options. Chips without a quality developer experience aren't gonna work.
Seems AWS is using this heavily internally, which makes sense, but I'm not seeing it get traction outside that. Glad to see Amazon investing there, though.
nimbius
the real news is: "and teases an Nvidia-friendly roadmap"
The sole reason Amazon is throwing any money at this is that they think they can do to AI what they did with logistics and shipping, in an effort to slash costs heading into a recession (we can't fire anyone else). The hubris is staggering, to say the least.
But their total confidence is very low, so "Nvidia-friendly" is face-saving to ensure no bridges that currently carry AWS profit get burned.
aaa_aaa
Interesting that in the article, they do not say what the chip actually does. Not even once.
wmf
Training. It's in the name.
egorfine
Probably because the only task this chip has to perform is to please shareholders.
caminante
Time to go squat on trainium4.com [0]
[0] https://www.godaddy.com/domainsearch/find?domainToCheck=trai...
Kye
Vector math
jauntywundrkind
Amazon aside, there's an interesting future here with NVLink getting more and more adopters. Intel is also on board with NVLink. This is like a PCI -> AGP moment, but it's Nvidia's AGP.
AMD felt like they were so close to nabbing the accelerator future back in the HyperTransport days. But its recent incarnation, Infinity Fabric, is all internal.
There's also Ultra Accelerator Link (UALink) picking up steam. Hypothetically CXL should be good for uses like this: it uses the PCIe PHY but is lower latency and lighter weight; close to RAM latency, not bad! But it's still limited to mere PCIe speeds, not nearly enough, with PCIe 6.0 just barely emerging now. Ideally, IMO, we'd also see more chips ship with integrated networking: it was so amazing when Intel Xeons had 100Gb Omni-Path for barely any price bump. Ultra Ethernet feels like it should be on-die, gratis.
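To put rough numbers on that bandwidth gap, here's a back-of-envelope sketch using public headline figures (PCIe 6.0's 64 GT/s per-lane rate and the ~900 GB/s aggregate NVLink figure quoted for an H100); real effective throughput is lower once encoding and protocol overhead are counted:

```python
# Back-of-envelope interconnect comparison (headline figures only;
# actual effective throughput is lower after protocol overhead).

PCIE6_GT_PER_LANE = 64        # PCIe 6.0: 64 GT/s per lane (PAM4 signaling)
LANES = 16

# Raw per-direction bandwidth of a PCIe 6.0 x16 link, in GB/s
pcie6_x16 = PCIE6_GT_PER_LANE * LANES / 8   # = 128.0 GB/s

# NVLink on an H100 is commonly quoted at ~900 GB/s aggregate per GPU
nvlink_h100 = 900

print(f"PCIe 6.0 x16: {pcie6_x16:.0f} GB/s per direction")
print(f"NVLink (H100): {nvlink_h100} GB/s aggregate")
print(f"ratio: ~{nvlink_h100 / pcie6_x16:.1f}x")
```

Even the newest PCIe generation trails a single GPU's NVLink budget by a wide margin, which is the "mere PCIe speed" complaint above.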
I've had to repeatedly tell our AWS account reps that we're not even a little interested in the Trainium or Inferentia instances unless they have a provably reliable track record of working with the standard libraries we have to use, like Transformers and PyTorch.
I know they claim they work, but that's only on their happy path with their very specific AMIs and the nightmare that is the Neuron SDK. Try to do any real work with them using your own dependencies and things tend to fall apart immediately.
It's only in the past couple of years that it really became worthwhile to use TPUs if you're on GCP, and that's only because of the huge investment on Google's part in software support. I'm not going to sink hours and hours into beta testing AWS's software just to use their chips.