Nvidia announces $3k personal AI supercomputer called Digits

neom

In case you're curious, I googled. It runs this thing called "DGX OS":

"DGX OS 6 Features The following are the key features of DGX OS Release 6:

Based on Ubuntu 22.04 with the latest long-term Linux kernel version 5.15 for the recent hardware and security updates and updates to software packages, such as Python and GCC.

Includes the NVIDIA-optimized Linux kernel, which supports GPU Direct Storage (GDS) without additional patches.

Provides access to all NVIDIA GPU driver branches and CUDA toolkit versions.

Uses the Ubuntu OFED by default with the option to install NVIDIA OFED for additional features.

Supports Secure Boot (requires Ubuntu OFED).

Supports DGX H100/H200."

danieljanes

Not having to install CUDA is a killer feature; looking forward to DGX OS.

angoragoats

Is this sarcasm, or is installing CUDA really a problem for most people? I use NixOS and set it up once a long time ago, and never have to think about it, even for new installs.
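
If you do want to sanity-check an install, a minimal probe with PyTorch (assuming torch is already installed) is enough:

    # Minimal CUDA sanity check via PyTorch (assumes torch is installed).
    import torch

    if torch.cuda.is_available():
        print("CUDA device:", torch.cuda.get_device_name(0))
        x = torch.ones(1024, device="cuda")   # allocate a tensor on the GPU
        print("GPU sum:", x.sum().item())     # expect 1024.0
    else:
        print("CUDA not available; check the driver/toolkit install.")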

throw5959

Unfortunately, most people still don't know Docker, much less NixOS.

lionkor

If I buy one, do I get the Nvidia optimized kernel source under GPL?

verdverm

"Supercomputer" seems a bit exaggerated; it seems more like a workstation in a small box.

anthonyskipper

I think they say that because it used to be that only supercomputers could do a petaflop of compute. So taking what used to be a datacenter (15 years ago) and cramming it into a small device is a pretty impressive feat.

falcor84

The Raspberry Pi Pico I carry on my keychain is significantly more powerful than the Apollo lander computer, but thankfully they don't advertise it as a "Lunar Computer".

paxys

Just marketing nonsense. In the same keynote Jensen called it an “AI supercomputer” and “cloud computing platform”. Might as well add “quantum blockchain computer” while you are at it.

synergy20

An ARM NUC running Nvidia-customized Linux.

verdverm

you know they only use the smartest silicon in the production line

karim79

I recently picked up an Nvidia Jetson Orin NX developer kit (16GB, 100 TOPS) and it performs admirably on inference at <= 40W power draw. Would love to get my hands on one of these things.

pelagicAustral

I assume I can finally play Crysis

makestuff

What do the 5xxx-series cards have that this doesn't, which makes them draw way more power and need massive coolers?

I have a feeling the software workflows will not run as well as the marketing claims, but the idea is really interesting and in a few years I bet the workflow will be really smooth.

cma

Way more memory bandwidth and performance (GDDR7 vs LPDDR5). I didn't see FP32 detailed, but this may be more focused on low precision.

This is probably closer to quad-channel Threadripper / second-tier Apple levels of bandwidth than to modern GPU bandwidth, but maybe it gets up to 800GB/s like the M1 Ultra. The 5090 is 1.8TB/s; four of them for the same amount of RAM (3X as expensive with a system) would be 7200GB/s, maybe 10-20X more.

With the lower power consumption, if this has full-performance FP32 it may be pretty killer for home use though, especially if the bandwidth of the ConnectX is more than Thunderbolt 5.
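
Putting rough numbers on that, as a sketch: the 5090 figure below is the published spec, while the Digits figures are pure guesses, since nothing has been announced.

    # Back-of-the-envelope bandwidth math for the comparison above.
    # The RTX 5090 figure is the published spec (~1.8 TB/s GDDR7);
    # the Digits figures are assumptions, nothing is announced.
    GBPS_5090 = 1792                  # RTX 5090 memory bandwidth, GB/s
    digits_guesses = [400, 800]       # assumed LPDDR5X range, GB/s

    quad_5090 = 4 * GBPS_5090         # four cards for a comparable RAM pool
    print(f"4x RTX 5090: ~{quad_5090} GB/s aggregate")
    for guess in digits_guesses:
        print(f"vs. a {guess} GB/s Digits: {quad_5090 / guess:.0f}x")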

cma

Edit: LPDDR5X, I think that's 30% faster

glimshe

Depending on your definition of "supercomputer", every phone is a personal supercomputer. They are faster than big-iron supercomputers of not that many years ago...

throw5959

It was always relative to contemporary computers, though yes, I also say "let me consult the AI in my pocket supercomputer". Feels very Star Trek.

analognoise

Only if the AI on your pocket supercomputer gets the right answer.

If it fails to tell you how many R's are in "strawberry", I doubt it will help you come up with correct orbital mechanics to slingshot you around a black hole to escape the Romulans who just uncloaked nearby.
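
(For the record, the word puzzle itself is a one-liner for plain deterministic code:)

    # The famous puzzle, solved without any intelligence at all.
    print("strawberry".count("r"))  # 3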

throw5959

Funnily enough, the AI on my phone gives better answers to hard questions about astrophysics than the basic word puzzles you mention. It's just a different kind of intelligence, I guess. I'm not asking it to do the calculations, I'm asking it to give me the formula and prove it's correct. The rest will come soon.

numba888

Super or not, it's way better for AI loads than a 4090 or 5090 setup. And cheaper. I'm getting one instead of the 5090 I wanted.

apexalpha

Anyone know the wattage this will pull?

Assuming the CPU is just ARM, it should idle pretty low; might work for a home server.

mdavidn

If you need an LLM on your home server, sure. If not, there are much more sensible home server options based on Intel’s N100 or Microsoft’s SQ3 chipsets.

apexalpha

Yeah, I do need the LLM.

Currently running two old Tesla M40s for 48GB of VRAM.

While cheaper than this, that setup has its limitations. Running a 200B model locally is very attractive though. Excited for the future.
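
For a rough sense of why 200B locally starts to look plausible, here's the weights-only arithmetic; a sketch that ignores KV cache and runtime overhead, so real requirements run higher:

    # Weights-only memory footprint of a 200B-parameter model at common
    # quantization levels. Ignores KV cache, activations, and overhead.
    PARAMS = 200e9

    for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"{label}: ~{PARAMS * bytes_per_param / 1e9:.0f} GB")
    # fp16 ~400 GB, int8 ~200 GB, int4 ~100 GB: only around 4-bit does a
    # 200B model fit in a single small box, and never in 48GB of M40s.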

cma

What's the bandwidth of the ConnectX interface?