Cray versus Raspberry Pi

97 comments · June 11, 2025

dahart

My former boss (Steve Parker, RIP) shared a story of Turner Whitted making predictions about how much compute would be needed to achieve real-time ray tracing, some time around when his seminal paper was published (~1980). As the story goes, Turner went through some calculations and came to the conclusion that it’d take 1 Cray per pixel. Because of the space each Cray takes, they’d be too far apart and he thought they wouldn’t be able to link it to a monitor and get the results in real time, so instead you’d probably have to put the array of Crays in the desert, each one attached to an RGB light, and fly over it in an airplane to see the image.

Another comparison, equally astonishing as the RPi one, is that modern GPUs have exceeded Whitted's prediction. Turner's paper used 640x480 images. At that resolution, extrapolating the 160 MFLOPS number, 1 Cray per pixel would be 49 teraflops. A 4080 GPU has just shy of 50 TFLOPS peak performance, so it has surpassed what Turner thought we'd need.

Think about that - not just faster than a Cray for a lot less money, but one cheap consumer device is faster than 300,000 Crays(!). Faster than a whole Cray per pixel. We really have come a long, long way.

The 5090 has over 300 Tflops of ray tracing perf, and the Tensor cores are now in the Petaflops range (with lower precision math), so we’re now exceeding the compute needed for 1 Cray per pixel at 1080p. 1 GPU faster than 2M Crays. Mind blowing.
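
A quick sanity check of that arithmetic, as a minimal sketch in Python -- the only inputs are the ~160 MFLOPS Cray-1 figure and the two resolutions; the GPU numbers are the approximate peaks quoted above:

    # Back-of-the-envelope: "one Cray-1 per pixel" at Whitted-era and modern resolutions
    cray1_flops = 160e6                  # ~160 MFLOPS peak for a Cray-1

    pixels_vga = 640 * 480               # resolution used in Whitted's paper
    pixels_1080p = 1920 * 1080

    print(pixels_vga * cray1_flops / 1e12)    # ~49 TFLOPS -> roughly an RTX 4080's peak
    print(pixels_1080p * cray1_flops / 1e12)  # ~332 TFLOPS -> around a 5090's RT throughput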

hattmall

Nice, but the ~40 year latency is kind of high.

magicalhippo

> 1 Cray per pixel would be 49 Tera flops. A 4080 GPU has just shy of 50 Tflops peak performance

Interesting, wonder how it compares in terms of transistors. How many transistors combined did one Cray have in compute and cache chips?

dahart

The Wikipedia article says the Cray-1 has 200k gates. I assume that would mean something slightly north of 2x the number of transistors? https://en.wikipedia.org/wiki/Cray-1#Description

200k * 300k Cray-1s would be 60B gates, whereas the 4080 actually has 46B transistors. Seems like we’re totally in the right ballpark.
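
As a trivial sketch, that multiplication does check out against the 4080's transistor count:

    gates_per_cray1 = 200_000       # Wikipedia's gate count for the Cray-1
    crays_per_4080 = 300_000        # from the ~49 TFLOPS / 160 MFLOPS ratio above
    print(gates_per_cray1 * crays_per_4080 / 1e9)  # 60.0 -> ~60B gates, vs ~46B transistors in a 4080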

nottorp

But the Cray had a general purpose CPU while the GPUs have specialized hardware. Not exactly apples to apples.

monocasa

The main part of the Cray was a compute offload engine that asynchronously executed job lists submitted by front end general purpose computers that ran OSes like Unix.

It was actually pretty close to the model of a GPU.

Animats

Back in 2020, someone built a working model of a Cray-1.[1] Not only is it instruction compatible, using an FPGA, it's built into a 1/10 scale case that looks like a Cray-1.

The Cray-1 is really a very simple machine, with a small instruction set. It just has 64 of everything. It was built from discrete components, almost the last CPU built that way.

[1] https://www.cpushack.com/2010/09/15/homebrew-cray-1a-1976-vs...

qingcharles

In 2013 I'd just built a new top-spec PC. I looked up the performance and then back-calculated using the TOP500† and I believe it would have been the most powerful supercomputer in the world in about 1993. If you back-calculated further, I think around 1980 it became more powerful than every computer on the planet combined.

https://en.wikipedia.org/wiki/TOP500

smcameron

And you can 3D print a Cray YMP case for your Raspberry Pi: https://www.thingiverse.com/thing:6947303

_tom_

The Pi has a sub-$100 accelerator card that takes it to 30 TFLOPS. So you can add three more orders of magnitude of performance for a rough doubling of the price.

_fat_santa

Reading this I wonder, say we did have a time machine and were somehow able to give scientists back in the day access to an RPI5. What sort of crazy experiments would that have spawned?

I'm sure when the Cray 1 came out, access to it must have been very restricted and there must have been hordes of scientists clamoring to run their experiments and computations on it. What would have happened if we gave every one of those clamoring scientists an RPI5?

And yes, I know this raises the interface problem of how they would even use one back in the day, but let's put that aside and assume we figured out how to make an RPI5 behave exactly like a Cray 1 and allowed scientists to use it in a productive way.

username223

> What sort of crazy experiments would that have spawned?

Scientists then (at least a lot of them) knew what they wanted to do, and it required faster computers rather than more of them. A lot of that Cray power at the national labs was doing fluid simulation (i.e. nuclear explosions), and with the computers they had in the 80s, it was done in one or two dimensions, relying on symmetry. Going from n^2 to n^3 grid cells was the obvious next step, but took a lot more memory and CPU speed.

mikewarot

First of all, how would they talk to it? You'd have to give them an RPI5 with the serial console enabled, and strict instructions not to exceed the 3.3 volt limits of the I/O. It's reasonable that you could generate NTSC video out of it, so they could see any output on a screen.

When you then explained that it was just bit-banging said NTSC output, they'd be even more amazed.

dottedmag

Serial port

The Cray 1 was released in 1975; teletypes were old tech by then.

Aardwolf

Also give it an HDMI screen and a USB keyboard; what more do you need to type code and see the result?

maxerickson

Do you think they would have run experiments that have been missed in the meantime? Why?

delichon

> but then again if you'd showed me an RPi5 back in 1977 I would have said "nah, impossible" so who knows?

I was reading lots of scifi in 1977, so I may have tried to talk to the pi like Scotty trying to talk to the mouse in Star Trek IV. And since you can run an LLM and text to speech on an RPi5, it might have answered.

JdeBP

You should have been watching lots of SciFi, too. (-:

I have a Raspberry Pi in a translucent "modular case" from the PiHut.

* https://thepihut.com/products/modular-raspberry-pi-4-case-cl...

It is very close to the same size and appearance as the "key" for Orac in Blake's 7.

I have so far resisted the temptation to slap it on top of a Really Useful Box and play the buzzing noise.

* https://youtube.com/watch?v=XOd1WkUcRzY

Obviously not even Avon figured out that the main box of Orac was a distraction, a fancy base station to hold the power supply, WiFi antenna, GPS receiver, and some Christmas tree lights, and all of the computational power was really in the activation key.

The amusing thing is that that is not the only 1970s SciFi telly prop that could become almost real today. It shouldn't be hard -- all of the components exist -- to make an actual Space: 1999 commlock; not just a good impression of one, but a functioning one that could do teleconferencing over a LAN, IR control for doors and tellies and stuff, and remote computer access.

Not quite in time for 1999, alas. (-:

* https://mastodonapp.uk/@JdeBP/114590229374309238

rahen

No need for an RPi 5. Back in 1982, a dual- or quad-CPU X-MP could have run a small LLM, say with 200–300K weights, without trouble. The Crays were, ironically, very well suited for neural networks; we just didn't know it yet. Such an LLM could have handled grammar and code autocompletion, basic linting, or documentation queries and summarization. By the late 80s, a Y-MP might even have been enough to support a small conversational agent.

A modest PDP-11/34 cluster with AP-120 vector coprocessors might even have served as a cheaper pathfinder in the late 70s for labs and companies who couldn't afford a Cray 1 and its infrastructure.

But we lacked both the data and the concepts. Massive, curated datasets (and backpropagation!) weren’t even a thing until the late 80s or 90s. And even then, they ran on far less powerful hardware than the Crays. Ideas and concepts were the limiting factor, not the hardware.

adwn

> a small LLM, say, with 200–300K weights

A "small Large Language Model", you say? So a "Language Model"? ;-)

> Such an LLM could have handled grammar and code autocompletion, basic linting, or documentation queries and summarization.

No, not even close. You're off by 3 orders of magnitude if you want even the most basic text understanding, 4 OOM if you want anything slightly more complex (like code autocompletion), and 5–6 OOM for good speech recognition and generation. Hardware was very much a limiting factor.

rahen

I would have thought the same, but EXO Labs showed otherwise by getting a 300K-parameter LLM to run on a Pentium II with only 128 MB of RAM at about 50 tokens per second. The X-MP was in the same ballpark, with the added benefit of native vector processing (not just some extension bolted onto a scalar CPU) which performs very well on matmul.

https://www.tomshardware.com/tech-industry/artificial-intell...
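
The throughput math is roughly consistent with that claim. Here is a sketch assuming the common ~2 FLOPs per parameter per generated token rule of thumb; the X-MP per-CPU peak is an approximate figure, not one from the article:

    params = 300_000                   # model size discussed above
    flops_per_token = 2 * params       # rough rule of thumb for a dense forward pass
    tokens_per_sec = 50                # rate EXO Labs reported on the Pentium II

    required_flops = flops_per_token * tokens_per_sec    # ~30 MFLOP/s
    xmp_peak = 200e6                                     # ~200 MFLOPS per X-MP CPU (approximate)
    print(required_flops / 1e6, required_flops / xmp_peak)  # 30.0 MFLOP/s, a small fraction of peak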

John Carmack was also hinting at this: we might have had AI decades earlier. Obviously not large GPT-4-scale models, but useful language reasoning at a small scale was possible. The hardware wasn't that far off; the software and incentives were.

https://x.com/ID_AA_Carmack/status/1911872001507016826

Mountain_Skies

Someday real soon, kids being shown episodes of 'Knight Rider' by their grandparents won't understand why a talking car was so futuristic.

KineticLensman

Like James Bond's Aston Martin with a satnav/tracking device in 1964's Goldfinger. Kids would know what that was but they might not understand why Bond had to continually shift some sort of stick to change the car's gear.

tsoukase

I grew up watching Kitt and when I watched it again a few days ago, I didn't feel anything. Much less my kids.

hulitu

> Someday real soon, kids being shown episodes of 'Knight Rider' by their grandparents won't understand why a talking car was so futuristic.

Maybe in 100 years. The talking car was more intelligent than Siri, Alexa or Hey Google.

It is not that we are not able to "talk" to computers, it is that we "talk" with computers only so that they can collect more data about us. Their "intelligence" is limited to simple text understanding.

dizhn

Kitt was funny though. (For its time)

Havoc

Tried explaining what a Tamagotchi was to someone recently. Looks of utter bewilderment

worik

That is a natural reaction.

azeirah

Really? Tamagotchis seem to be one of those things that have charm beyond straight up nostalgia :o

sublinear

Was that point not almost a decade ago?

Mountain_Skies

Not really. My 1983 Datsun would talk, but it couldn't converse. Alexa and Siri couldn't hold a conversation anywhere near the level KITT did. There's a big difference. With LLMs, we're getting close.

heelix

The self driving aspect, amazingly, is already here and considered mundane.

DrillShopper

Oh really? What vehicle can I buy today, drive home, get twice the legal limit drunk, flop in the back alone to take a nap while my car drives me two hours away to a relative's house?

I'd really like to buy that car so I await your response.

ziofill

It is a frequent fantasy of mine to bring tech back to historical figures, like to show my phone to Galileo or to take Leonardo da Vinci for a ride in my car. But I guess you don't need to go that far to blow minds.

benob

The Cray-1 should be compared to today's Raspberry Pi Pico 2 / RP2350, which has similar specs (using external RAM).

jgalt212

I won't rest until the average microcontroller in an optical mouse is more powerful than a Cray 1.

qooiii2

A lot of touchscreens meet that requirement. Turns out it's often cheaper to solve problems with algorithms than avoid them by design.

kdndnrndn

I'm not aware of any optical mouse using a general-purpose MCU; to my knowledge they all use ASICs.

bigfatkitten

There are millions, if not tens of millions of USB and PS/2 keyboards and mice out there powered by Cypress MCUs with 8051 cores.

sweetcocomoose

Nordic dominates the market for keyboards and mice. Programmable MCUs with BLE radios are required for any wireless devices.

Rohansi

Some gaming mice do for running RGB lights, macros, or whatever.

1oooqooq

Try as you may, that mouse will never work as a lounge centerpiece.

dgacmu

Comparing against a Raspberry Pi 5 is kind of overkill. While a Pico 2 is now close to computationally equivalent to a Cray-1 (version 2 added hardware floating point), the Cray still has substantially more memory - almost 9 MB vs 520 KB.

For parity, you have to move up to a Raspberry Pi Zero 2, which costs $15 and uses about 2 W of power.

A million times cheaper than a cray in 2025 dollars and quite a bit more capable.
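
Rough check of that price ratio, as a sketch -- the ~$8M Cray-1 price and the inflation multiplier are approximations, not figures from the comment above:

    cray1_price_1977 = 8e6       # approximate Cray-1 price, late 1970s (assumption)
    inflation_multiplier = 5     # rough 1977 -> 2025 adjustment (assumption)
    pi_zero2_price = 15

    print(cray1_price_1977 * inflation_multiplier / pi_zero2_price)  # ~2.7e6 -> a few million times cheaper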

nereye

The memory in the Cray was external, and there are RP2350 boards with 16 MB of QSPI flash; here's one of them:

https://www.olimex.com/Products/RaspberryPi/PICO/PICO2-XXL/o...

noobermin

I guess I'm old, because this isn't really that insightful or interesting an observation by itself anymore. People often talk about the technological advancement of computing as if it were a force of nature, whereas the amazing specs of, say, an RP2350 compared to the Cray-1 are more a story of economies of scale than of mere technical know-how and design. The reason an RP2350 costs a few dollars is the fabs, infrastructure, and institutional knowledge behind it, which likely dwarf the cost of producing a Cray-1. I wouldn't even be surprised if, were someone to do a similar calculation for the infrastructure behind each Cray-1 at the time, it turned out to cost less than what is needed to produce RP2350s today. The unit price of an RP2350 to consumers being so cheap (for now, while fabs still want to make it) somewhat elides the actual costs involved.

Animats below said that the Cray-1 was made from discrete components. Good luck making an RP2350 from discrete components; it likely wouldn't even function at the desired frequency due to speed-of-light and RF interference issues, and it would be even worse for the GHz Broadcom SoC used in the RPi 5. This means that in a post-apocalyptic future you could make another Cray-1 given enough time and resources. In 20 years, when the fabs have stopped making RP2350s, there simply will not be any more of them.

dale_huevo

> the Cray had about 160MFLOPS of raw processing power; the Pi has... up to 30GFLOPS. Yes... that's gigaFLOPS. This makes it almost 200 times faster than the Cray.

Imagine traveling back to 1977 and explaining to someone that in 2025 we've allocated all that extra computing power to processing javascript bundles and other assorted webshit.

usrnm

That actually wouldn't be so bad, but in reality the number one use case for a Raspberry Pi is blinking LEDs for a while and collecting dust afterwards.

darkwater

Still a better user than crunching Javascript to show you ads and track you around.