Nvidia's new 'robot brain' goes on sale for $3,499
78 comments
· August 25, 2025 · npalli
justincormack
My assumption would be that it will suffer the same delays as DGX Spark, as it is a very similar chipset, so maybe December?
mwambua
My naive first reaction was that a unit like that would consume way too much power to be practical on a robot, but then I remembered how many calories our own brains need vs the rest of our body (Google says 20% of total body needs).
Looks like power consumption for the Thor T5000 is between 30W-140W. The Unitree G1 (https://www.unitree.com/g1) has a 9Ah battery that lasts 2hrs under normal operation. Assuming an operating voltage of 48V (13s battery), that implies the robot's actuator and sensor power usage is ~216W.
Assuming average power usage is somewhere in the middle (85W), a Thor unit would consume roughly 28% of the robot's total power needs. This doesn't account for the fact that the robot would have to carry around the additional weight of the compute unit, though. Can't say if that's good or bad, just interesting to see that it's in the same ballpark.
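The same arithmetic in Python, for anyone who wants to tweak it; the 85W midpoint and the 48V/13s pack are assumptions, not measured values:

    # Thor's share of a Unitree G1-sized power budget, using the figures above.
    battery_wh = 9 * 48                 # 9 Ah at ~48 V (13s) -> ~432 Wh
    runtime_h = 2                       # quoted runtime under normal operation
    robot_w = battery_wh / runtime_h    # ~216 W for actuators + sensors
    thor_w = (30 + 140) / 2             # midpoint of the T5000's 30-140 W range
    share = thor_w / (robot_w + thor_w)
    print(f"robot: {robot_w:.0f} W, Thor: {thor_w:.0f} W, Thor share: {share:.0%}")
    # -> robot: 216 W, Thor: 85 W, Thor share: 28%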
xattt
Can self-driving cars be framed as robots?
An electric car would have no issue sustaining this level of power; a gas-powered car doubly so.
AlotOfReading
Autonomous vehicles are indeed robots, but they have power constraints (that Thor can reasonably fit within). Most industrial robots aren't meaningfully power constrained though.
It was a bit of a culture shock the first time I was involved with industrial robots because of how much power constraints had impacted the design of previous systems I worked on.
worldsayshi
I tried to look up human wattage as a comparison and I'm very surprised that it lands in the same ballpark: around 145W as a daily average and around 440W as an approximate hourly average during exercise.
I thought current gen robots would be an order of magnitude less efficient. Maybe I'm misunderstanding something.
themafia
I happen to have an envelope handy:
2000 kilocalorie converts to 8.3 megajoules. This should be the amount of energy consumed per day.
8.3 megajoules / 24 hours is 96 watts. This should be the average rate of energy expenditure.
96 watts * 20% is 19 watts. This should be the portion your brain uses out of that average.
19 watts * 24 hours is about 465 watthours. This should be the amount of energy your brain uses in a day.
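Same envelope in Python, using the assumptions above (2000 kcal/day and the 20% brain share):

    kcal_per_day = 2000
    joules = kcal_per_day * 4184          # ~8.37 MJ per day
    body_w = joules / (24 * 3600)         # ~97 W average rate
    brain_w = body_w * 0.20               # ~19 W for the brain
    brain_wh_per_day = brain_w * 24       # ~465 Wh per day
    print(f"{body_w:.0f} W body, {brain_w:.0f} W brain, {brain_wh_per_day:.0f} Wh/day for the brain")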
This is why I've never found "AI" to be particularly competitive with human beings. The level of energy efficiency that our brains operate at is amazing. Our electrical and computer engineering is several orders of magnitude away from the achievements of nature and biology.
ZiiS
Calculate how much energy needs to be input into agriculture and transport to provide that wattage.
BobbyJo
Electric motors are very energy efficient. I believe they are actually far more efficient on a per-joint movement basis, and the equivalence between us and them is largely due to inefficient locomotion.
Where we excel is energy storage. Far less weight, far higher density.
lm28469
We do a whole lot of things a robot doesn't have to do, like filtering blood, digesting, keeping warm.
worldsayshi
Body maintenance.
LtdJorge
Every hardware piece of such a robot can do a few things. Our body parts do orders of magnitude more, including growing and regeneration.
riku_iki
> too much power to be practical on a robot
A robot could be useful even when permanently plugged into the grid.
tonyarkles
From a UAV perspective, even at 140W it's not too bad. For a multi-rotor, that's about the same power needed to lift around 750g-1kg of payload.
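A rough momentum-theory sketch of where that figure comes from; the 3 kg airframe, 0.3 m^2 of total disk area, and 0.65 figure of merit are all assumed numbers, not any particular aircraft:

    import math

    # Ideal hover power from momentum theory: P = T^1.5 / sqrt(2*rho*A),
    # divided by a figure of merit to account for real rotor losses.
    rho = 1.225        # air density, kg/m^3
    disk_area = 0.3    # total rotor disk area, m^2 (assumed)
    fom = 0.65         # figure of merit (assumed)
    mass = 3.0         # all-up mass, kg (assumed)

    thrust = mass * 9.81
    hover_w = thrust**1.5 / math.sqrt(2 * rho * disk_area) / fom
    # Power scales with mass^1.5 at fixed disk area, so marginal power is 1.5 * P / m
    marginal_w_per_kg = 1.5 * hover_w / mass
    print(f"hover: {hover_w:.0f} W, marginal: {marginal_w_per_kg:.0f} W per extra kg")
    # -> ~287 W to hover, ~143 W per extra kg, in the same ballpark as above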
bitwize
The efficacy to weight ratio of meat vs. rocks and metal is freakin' absurd. We don't know how to build a robot that's as strong and damage-resistant as a human body and weighs only as much as one. Similarly we don't know how to build something as energy-efficient as a human brain that thinks anywhere near as well. Artificial superintelligence may well be a thing in the coming decades, but it will be profoundly energy-greedy; I fear the first thing it will resolve to do is secure fuel for itself by stealing our energy supplies like out of Superman III.
shekhar101
I was reading the Xiaomi YU7 marketing page[0] yesterday and the NVIDIA AGX Thor stood out (it says: NVIDIA DRIVE AGX Thor). I was wondering what it was, and then this showed up! Looks like it (or a Drive variant of it) is already being used in newer cars for self-driving and such. [0] https://www.mi.com/global/discover/article?id=5174
kjhughes
Here's NVIDIA's blog post on this:
NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
nickfromseattle
What are the variables that favor local GPUs over cloud inference? Is connectivity the dividing line, or are there other factors that influence the choice?
Anduril submersibles probably need local processing, but does my laundry/dishes robot need local processing? Or machines in factories? Or delivery drones?
michaelt
Any sort of continuous video processing, especially low-latency.
Imagine you were tracking items on video at a self-service checkout. Sure, you could compress the video down to 15 Mbps or so and send it to the cloud. But now, a store with 20 self-checkouts needs 300 Mbps of upload bandwidth. That's one more problem making it harder for Wal-Mart to buy and roll out your product.
Also, if you know you need an NVIDIA L4 dedicated to you 24/7 for a year, a g6.xlarge will cost $7,000/year on-demand or $4,300/year reserved [1] while you can buy the card for $2,500.
Of course, for many other use cases the cloud is a fine choice: if you only need a fraction of a GPU, if you only need a monster GPU a tiny fraction of the time, or if you need an enormous LLM that demands water cooling and can easily tolerate latency.
[1] https://instances.vantage.sh/aws/ec2/g6.xlarge?currency=USD&...
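Sanity-checking both numbers (prices as quoted above; this ignores the host machine, power, and ops cost on the local side):

    # Upload bandwidth for a store full of self-checkouts
    lanes = 20
    mbps_per_stream = 15
    print(f"upload needed: {lanes * mbps_per_stream} Mbps")     # 300 Mbps

    # Break-even for buying the L4 vs renting a g6.xlarge
    card_cost = 2500             # approximate retail for the card
    reserved_per_year = 4300     # 1-yr reserved instance
    months = card_cost / reserved_per_year * 12
    print(f"card pays for itself in ~{months:.0f} months of reserved pricing")  # ~7 months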
traverseda
Anything latency sensitive. Anything bandwidth constrained.
Simple example: security cameras that only use bandwidth when they've detected something. The cost of live-streaming 20 cameras over 5G is very high. The cost of sending text messages with still images when they see a person is reasonable.
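Illustrative numbers only; the per-camera bitrate and event counts are assumptions:

    cameras = 20
    stream_mbps = 2.0                # assumed per-camera continuous stream
    continuous_gb_day = cameras * stream_mbps / 8 * 86400 / 1000

    events_per_day = 50              # per camera, after on-device detection
    still_kb = 200                   # one JPEG per event
    event_gb_day = cameras * events_per_day * still_kb / 1e6

    print(f"continuous streaming: ~{continuous_gb_day:.0f} GB/day over cellular")  # ~432 GB/day
    print(f"event-triggered stills: ~{event_gb_day:.1f} GB/day")                   # ~0.2 GB/day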
sidewndr46
Anecdotally, I don't have any direct physical evidence or written evidence to support this. But I talked to someone in the industry over a decade ago when "run it on a GPU" was just heating up. It's drones. Not DJI ones, military ones with surveillance gear and weapons.
pyrale
Why the hell would a dishwasher need to be connected, or smart for that matter?
I just want clean dishes/clothes, not to be upsold into some stupid shit that fails when it can’t ping google.com or gets bricked when the company closes.
I would pay a premium for certified mindless products.
bigfishrunning
Mining, remote construction, remote power station inspection, battlefields. There are many, many places where a stable network connection can't be taken for granted.
newsclues
I want local processing for my local data. That includes my photos, documents and surveillance camera feeds.
ls612
If I had to guess there is significant interest in this product from a certain Eastern European nation. I don’t think they are intending to use it for “robotics” though.
exe34
It depends on whether the plates were expensive.
jauntywundrkind
Wow: notably a newer CPU core than DGX GB200's! 14 Neoverse V3AE cores here, whereas Grace (in GH200/GB200) has 72x Neoverse V2. Comparing against big Blackwell (GB100): 2560/96 CUDA/Tensor cores here vs 18432/576 cores.
> Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency.
I could really use a table of all the various options Nvidia has! Jetson AGX Orin (2023) seems to start at ~$1700 for a 32GB system, with 204GB/s bandwidth, 1792 Ampere CUDA cores, 56 Tensor cores, 8 A78AE ARM cores, 200 TOPS "AI Performance", and 15-45W. A slightly bigger model with 2048/64/12 cores and 275 TOPS at 15-60W is available. https://en.wikipedia.org/wiki/Nvidia_Jetson#Performance
Now the Jetson T5000 is 2070 TFLOPS (but that's FP4, sparse; still ~double-ish): 2560 Blackwell CUDA cores, 96 Tensor cores, 14 Neoverse V3AE cores, 273GB/s, 128GB. 4x 25GbE is a neat new addition. 40-130W. There's also a lower-spec T4000.
Seems like a pretty in-line leap at 2x the price!
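Rough normalization of those headline numbers, assuming Blackwell's FP4 rate is 2x its FP8 rate and comparing sparse figures to sparse figures (the precisions still don't line up exactly, so treat this as ballpark only):

    orin_tops = 275           # AGX Orin 64GB, INT8 sparse
    thor_fp4_tflops = 2070    # Thor T5000, FP4 sparse
    thor_fp8_equiv = thor_fp4_tflops / 2    # assumed: FP4 runs at 2x the FP8 rate

    print(f"headline ratio: {thor_fp4_tflops / orin_tops:.1f}x")              # ~7.5x
    print(f"same-precision-ish ratio: {thor_fp8_equiv / orin_tops:.1f}x")     # ~3.8x
    print(f"price ratio: {3499 / 2000:.1f}x")   # ~1.7x vs the ~$2k Orin mentioned downthread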
Looks like a physically pretty big unit. Big enough to scratch my head during the intro video of robots opening up the package and wonder: where are they going to fit their new brain? But man, the breakdown diagram: it's, unsurprisingly, half heatsink.
pmdr
> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence
> The Jetson Thor chips are equipped with 128GB of memory, which is essential for big AI models.
Just put it into a robot and run some unhinged model on it, that should be fun.
bigfishrunning
The models that run on robots do things like "where is the road" or "is this package damaged"; people will run LLMs on this thing, but that's not its primary bread and butter.
ACCount37
The future of advanced robotics likely requires LLM-scale models. With more bias towards vision and locomotion than the usual LLM, of course.
ks2048
> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence
Does "robotics outside of AI" imply they want to get into making actual robots (beyond the GPU "brains")?
pradn
There's already this hilarious bot. It's able to use people's outfits to woo them, or insult them. It's pretty good!
echelon
AMD should jump on this immediately.
Edge compute has not yet been won; there is no entrenched CUDA ecosystem there yet.
Someone other than Nvidia, please pay attention to this market.
Robots can't deal with the latency of calls back to the data center. Vision, navigation, 6DOF, articulation all must happen in real time.
This will absolutely be a huge market in time. Robots, autonomous cars, any sort of real time, on-prem, hardware type application.
varelse
[dead]
ilaksh
The GMKtec EVO-X2 mini PC (AMD Ryzen AI Max+ 395) seems pretty similar and is only $2000.
Would be interested to see head to head benchmarks including power usage between those mini PCs and the Nvidia Thor.
jsight
This sounds very similar to the DGX Spark, which still hasn't shipped AFAIK.
wmf
Orin was pretty expensive at $2,000; now Thor is significantly more.
AlotOfReading
Thor is a pretty big jump in power and the current prices are a bargain compared to what else is out there if you need the capabilities. I wish there was a competitive alternative, because Nvidia is horrible to work with.
mikepurvis
And now everyone's $2000 Orins will be stuck forever on Ubuntu 22.04, just like the Xaviers were abandoned on 20.04 and the TX1/2 on 18.04.
Nothing like explaining to your ML engineers that they can only use Python 3.6 on an EOL operating system because you deployed a bunch of hardware shortly before the vendor released a new shiny thing and abruptly lost interest in supporting everything that came before.
And yes, TX2 was launched in 2017, but Nvidia continued shipping them until the end of 2024, so it's absurd they never got updated software: https://forums.developer.nvidia.com/t/jetson-tx2-lifecycle-e...
audiofish
Same experience here, plus serial port drivers that don't work and bootloader bugs causing bricked machines in the field. This on a platform nearly a decade old! The hardware is great but the software quality is abysmal compared to other industrial SoC manufacturers.
mikepurvis
I think what's most galling about it is that Nvidia gets away with behaving like this because even a decade later they're still basically the only game in town if you want a low power embedded GPU solution for edge AI stuff.
AMD has managed to blunder multiple opportunities to launch something into this space and earn the trust of developers. And no, NUC form factor APU machines are not the answer, both because of power/heat concerns and because the software integration story is an incomplete patchwork.
tonyarkles
Ahhhh I see there's someone else who has experienced the serial port driver bugs :). I was responsible for helping them figure out and fix the one related to DMA buffers but still encounter the "sometimes it just stops sending data" one often enough.
CamperBob2
128 GB for $3,499 doesn't sound bad at all.
probablydan
Can these be used for local inference on large models? I'm assuming the 128G of memory is like system memory, not like GPU VRAM.
sgillen
It has a unified memory architecture, so the 128GB is shared directly between the CPU and GPU, though it's slower than dGPU VRAM.
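For a rough feel of what 128GB at ~273GB/s means for LLM inference, here's a back-of-the-envelope sketch; the 70B model and quantization sizes are illustrative assumptions, and the decode speed uses the usual memory-bandwidth-bound rule of thumb (one full read of the weights per token):

    mem_gb = 128
    bandwidth_gbs = 273

    # Approximate weight sizes for a 70B-parameter model at a few quantizations
    for name, bytes_per_param in [("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)]:
        size_gb = 70e9 * bytes_per_param / 1e9
        fits = "fits" if size_gb < mem_gb * 0.9 else "too big"  # leave headroom for KV cache etc.
        tok_s = bandwidth_gbs / size_gb     # upper bound on decode tokens/sec
        print(f"{name}: ~{size_gb:.0f} GB ({fits}), ~{tok_s:.1f} tok/s upper bound")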
bigyabai
Yes, but it is substantially cheaper and usually faster to buy a Jetson Orin chip or build an x86 homelab.
asadm
Has anyone deployed Jetson or similar in production? What's the BOM like at scale?
So the single place where we can buy this is already showing no stock, and it's not clear whether it will even ship given all the customs and tariffs stuff. I must say, after waiting for months on the "almost ready to ship" DGX Spark (with multiple partners, no less), I'm getting strong announce-ware vibes from this already.
https://www.arrow.com/en/products/945-14070-0080-000/nvidia?...