The PS3 Licked the Many Cookie
124 comments
April 11, 2025
pavlov
ryandrake
Yea, this was the horrible world of embedded programming and working with SoCs before the iPhone SDK finally raised the bar. BSPs composed of barely-working cobbled-together gcc toolchains, jurassic-aged kernels, opaque blobs for flashing the devices, incomplete or nonworking boot loaders, entirely different glue scripts for every tiny chip rev, incomplete documentation. And if you wanted to build your own toolchain? LOL, good luck, because every GNU tool needed to be patched in order to work. It was a total mess. You could tell these companies just made chips and reference systems, and only grudgingly provided a way to develop on them. The iPhone and Xcode were such a breath of fresh air. They pulled me out of embedded and I never went back.
FirmwareBurner
>Yea, this was the horrible world of embedded programming and working with SoCs before the iPhone SDK finally raised the bar.
The iPhone SDK only raised the bar for the mobile industry; the rest of the embedded world is still stuck in the stone age.
matheusmoreira
Modern ARM microcontrollers apparently use standard GNU toolchains shipped by my Linux distribution. Developing software for a Cortex M0+ was a really good experience. Lack of a complete device emulator made it hard to debug at times but I dealt with it.
paulryanrogers
Stop! You're giving me flashbacks of developing for Hypercom point-of-sale devices! Never again will I work with alpha software so buggy it only worked from the bundled samples! You had to remove the sample's widgets one at a time, testing at each step, then add your own one at a time. Otherwise it would break something and you'd have to start over.
chasil
So this is why so many cash registers are now iPads.
crq-yml
I didn't gain direct experience with Cell, but given that description of the tooling, I'm unconvinced that the issue is fundamental to many-core, or that the author's assertion of non-composability holds up under scrutiny. Composition in "flat" processing architectures is, in principle, exactly what is already seen on a circuit diagram. It recurs in the unit record machines of old, and in modern dataflow systems.
That architecture does have particular weaknesses when it is meant to interface with a random-access-and-deep-callstacks workflow (as would be the case using C++) - and CPUs have accrued complex cache and pipelining systems to cater to that workflow since it does have practical benefit - but the flat approach has also demonstrated success when it's used in stream processing with real-time requirements.
Given that, and the outlier success of some first-party developers, I would lean towards the Cell hardware being catastrophically situated within the software stack, versus being an inherently worse use of transistors.
petermcneeley
The outlier success of some first-party developers indicates that focus and talent on this exotic hardware were required to demonstrate its full potential. As they say, this was an "expert friendly system", and it was that because it was complex, and it was complex because it was heterogeneous.
As for "inherently worse use of transistors" one would have to look at how the transistors could have been used differently. The XBox360 is a different use of transistors.
m000
> It is important to understand why the PS3 failed
That's a weird assertion for a console that sold 87M units, ranks #8 in the all-time top-selling consoles list, and marginally outsold the Xbox 360, which it is compared against in TFA.
See: https://en.wikipedia.org/wiki/List_of_best-selling_game_cons...
JohnMakin
It’s clear from one of the opening statements that the author considered it a failure for developers, not in the absolute sense you are pointing to. It’s not that far into the article.
> The PS3 failed developers because it was an excessively heterogenous computer; and low level heterogeneous compute resists composability.
dkersten
I’m not even sure that’s entirely true either, though. By the end of the PS3 generation, people had gotten to grips with it and were pushing it far further than first assumed possible. If you watch the GDC talks, it seemed to me that people were happy enough with it by that point (relatively speaking, at least) and were able to squeeze quite a bit of performance out of it. It seems it was hated for the first while of its life because developers hadn’t settled on a good model for programming it, but by the end, task-based concurrency like we have now had started to gain popularity (e.g. see the Naughty Dog engine talk).
Is Cell really so different from compute shaders with something like Vulkan? I feel that if a performance-competitive Cell were made today, it might not receive so much hate, as people today are more prepared for its flavour of programming. Nowadays we have 8 to 16 cores, more on P/E setups, vastly more on workstation/server setups, and we have GPUs and low-level GPU APIs. Cell came out in a time when dual core was the norm and engines still did multithreading by having a graphics thread and a logic thread.
xmprt
Naughty Dog has always been at the forefront of PlayStation development. Crash Bandicoot and Uncharted couldn't have been made if they didn't have a really strong grasp on how to use it. I love rereading this developer "diary" where they talk about some of the challenges with making Crash: https://all-things-andy-gavin.com/video-games/making-crash/
MindSpunk
Cell was a failure, made evident by the fact nobody has tried to use it since.
Comparing the SPEs to compute shaders is reasonable but ignores what they were for. Compute shaders are almost exclusively used for graphics in games. Sony was asking people to implement gameplay code on them.
The idea the PS3 was designed around did not match the reality of games. They were difficult to work with, and taking full advantage of the SPEs was very challenging. Games are still very serial programs. The vast majority of the CPU work can't be moved to the SPUs as was dreamed.
Very often games were left with a few trivially parallel numerical bits of code on the SPEs, but stuck with the anemic PPE core for everything else.
bdhcuidbebe
Yea, it's not true. 7th gen was the last generation where quirks were commonplace and complete ports/rewrites were still a thing. More recent generations are more straightforward, with simplified cross-console releases.
dgfitz
It just isn’t a solid thesis at the beginning of the article, and in today’s attention-span media-consumption narrative, it… serves its purpose?
dcow
The PS3 was a technical failure. It was inferior to its siblings despite having more capable hardware. This was super obvious any time you’d play a game available for both Xbox and PS3. The PS3 version was a game developed for Xbox then auto-ported to run on PS3’s unfamiliar hardware. It’s an entirely fair hypothesis.
Maybe in 15 years someone crazy enough will be delving in and building games that fully utilize every last aspect of the hardware. Like this person does on the N64: https://youtube.com/@kazen64?si=bOSdww58RNlpKCNp
jchw
The PS3 maybe wasn't a failure in the long run, but at launch it was a disaster all around. Sony was not making a profit on the PS3, and early sales at its launch price were not looking good[1]. The Wii, its primary competitor, absolutely smashed the PS3 at launch and for a long while after, and it still maintains the lead. Sony mainly kept the competition close by slashing the price and introducing improved models, but in the long run I think the reason their sales numbers managed to wind up OK is that they held out for the long haul. The PS3 continued to be the "current-gen" Sony console for a long time. By the time Sony had released the PS4 in late 2013/early 2014, Nintendo had already released its ill-fated Wii U console an entire year earlier in late 2012. I think what helped the PS3 a lot here was the fact that it did have a very compelling library of titles, even if it wasn't a super large one. As far as I know, Metal Gear Solid 4 was only released for PlayStation 3; that stands out to me as a game that would've been a console-seller for many.
So while the PS3 was not ultimately a commercial failure, it was clearly disliked by developers, and the launch was certainly a disaster. I think you could argue the PS3 was a failure in many regards, and a success in some others. Credit to Sony, they definitely persevered through a tough launch and made it out to the other end. Nintendo wasn't able to pull off the same for the Wii U, even though it also had some good exclusive games in its library.
[1]: https://web.archive.org/web/20161104003151/http://www.pcworl...
tuna74
In the right hands, multi-platform games were pretty much identical like BF3: https://www.eurogamer.net/digitalfoundry-face-off-battlefiel...
xmprt
While the PS3 has a soft spot in my heart (free online multiplayer!), I can't help but wonder if the subpar launch gave Microsoft's Xbox a leg up in the race, where otherwise the Xbox 360 might have been the last console in their lineup.
bombcar
Everyone I knew had a Wii. But they mostly had a 360 as that generation's “real” console. In fact, I can’t recall anyone who had the PS3.
Lots of PS2.
kbolino
The Sony-Toshiba-IBM alliance had much grander plans for the Cell architecture, which ultimately came to naught. The PS3 wasn't just a console, it was supposed to be a revolution in computing. As a console, it did alright (though it's still handily beaten by its own predecessor and marginally by its own successor), but as an exponent of the Cell architecture that was supposed to be the future, it failed miserably. Sony yanked OtherOS a couple of years into its life, and while a Cell supercomputer was the first to break the petaflop barrier, it was quickly surpassed by x86 and then Arm.
dfxm12
There are a lot of different measures. The Wii (the 7th gen Nintendo console) outsold it considerably, as did the 6th gen PS2 (which far and away beat out all other consoles in its generation).
Going from such market dominance to second place is not good. Not being able to improve upon your position as the industry leader is not good. Failure might be strong, but I certainly wouldn't be happy if I was an exec at Sony at the time.
tekla
Sales weren't what the article was referring to, if you take the context of literally the very first sentence of the article.
miltonlost
No wonder tech people need LLMs so much if they are incapable of reading more than 3 sentences and comprehending them.
notatoad
perhaps a better headline would have been "why the PS3 architecture failed". if it was a success, they wouldn't have abandoned it for the next generation.
colejohnson66
OP is talking about developer experience. From right after the image:
> The PS3 failed developers because it was an excessively heterogenous computer; [...]
santoshalper
Where a console rates on the all-time sales leader board is pretty irrelevant, since the industry has grown so much in absolute terms. As when looking at movie box office revenue, you need to look at more than one number if you want to judge the real performance of a console in the market.
Here is a good example: The PS3 sold only slightly more than half as many units as its predecessor, the PS2, did. Most businesses would, in fact, consider it a failure if their 3rd generation product sold much more poorly than the second generation. Sony's rapid move away from the PS3/Cell architecture gives you a pretty good reason to believe they considered it a failure too.
corysama
With an SPU's 256K local memory and DMA, the ideal way to use the SPU was to split the local memory into 6 sections: code, local variables, DMA in, input, output, DMA out. That way you could have async DMA in parallel in both directions while you transform your inputs to your outputs. That meant your working space was even smaller...
Async DMA is important because the latency of a DMA operation is 500 cycles! But, then you remember that the latency of the CPU missing cache is also 500 cycles... And, gameplay code misses cache like it was a childhood pet. So, in theory you just need to relax and get it working any way possible and it will still be a huge win. Some people even implemented pointer wrappers with software-managed caches.
500 cycles sounds like a lot. But, remember that the PS2 ran at 300MHz (and had a 50 cycle mem latency) while the PS3 and 360 both ran at 3.2Ghz (and both had a mem latency of 500 cycles). Both systems pushed the clock rate much higher than PCs at the time. But, to do so, "niceties" like out-of-order execution were sacrificed. A fixed ping-pong hyperthreading should be good enough to cover up half of the stall latency, right?
Unfortunately, for most games the SPUs ended up needing to be devoted full time to making up for the weakness of the GPU (pretty much a GeForce 7600 GT). Full screen post processing was an obvious target. But, also the vertex shaders of the GPU needed a lot of CPU work to set them up. Moving that work to the SPUs freed up a lot of time for the gameplay code.
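A minimal sketch of the split-local-store, double-buffered streaming pattern described above, assuming the Cell SDK's MFC macros from <spu_mfcio.h>. The kernel function, chunk size, and buffer layout are invented for illustration; real code also had to deal with alignment and the 16 KB per-transfer limit more carefully.

    // Sketch only: double-buffered SPU streaming with async DMA.
    #include <spu_mfcio.h>

    #define CHUNK 4096   // bytes per transfer (a single DMA maxes out at 16 KB)

    static char in[2][CHUNK]  __attribute__((aligned(128)));  // "DMA in" + "input"
    static char out[2][CHUNK] __attribute__((aligned(128)));  // "output" + "DMA out"

    static void process_chunk(const char* src, char* dst, unsigned n);  // your kernel

    void stream(unsigned long long ea_src, unsigned long long ea_dst, unsigned chunks)
    {
        unsigned buf = 0;
        mfc_get(in[buf], ea_src, CHUNK, buf, 0, 0);            // kick off first fetch

        for (unsigned i = 0; i < chunks; ++i) {
            unsigned next = buf ^ 1;
            if (i + 1 < chunks)                                // prefetch the next chunk
                mfc_get(in[next], ea_src + (i + 1ull) * CHUNK, CHUNK, next, 0, 0);

            mfc_write_tag_mask(1 << buf);                      // wait only for this buffer's
            mfc_read_tag_status_all();                         // get (and its older put)

            process_chunk(in[buf], out[buf], CHUNK);           // compute while the other DMA runs

            mfc_put(out[buf], ea_dst + (unsigned long long)i * CHUNK, CHUNK, buf, 0, 0);
            buf = next;
        }
        mfc_write_tag_mask(3);                                 // drain outstanding puts on both tags
        mfc_read_tag_status_all();
    }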
bri3d
I think one thing that the linked article (which I think is great and I generally agree with!) misses is that libraries and abstraction can patch over the lack of composability created by heterogeneous systems. We see it everywhere - AI/ML libraries abstracting over some combination of TPU, vector processing, and GPU cores being one obvious modern place.
This happened on the PS3, too, later in its life: Sony released PlayStation Edge and middleware/engine vendors increasingly learned how to use SPU to patch over RSX being slow. At this point developers stopped needing to care so much about the composability issues introduced by heterogeneous computing, since they could use the SPUs as another function processor to offload, for example, geometry processing, without caring about the implementation details so much.
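As a toy illustration of that kind of abstraction (not anything from PlayStation Edge or a real engine, every name below is invented): game code submits self-contained jobs, and the scheduler decides whether each one runs on a PPU worker thread or gets streamed to an SPU, so the heterogeneity stops being the caller's problem.

    #include <cstdint>
    #include <cstddef>

    // A job is a precompiled kernel id plus contiguous, aligned input/output
    // blocks -- self-contained enough to be DMA'd into 256K of local store.
    struct JobDesc {
        std::uint32_t kernel_id;    // which kernel to run (PPU function or SPU ELF)
        const void*   input;        // 16-byte-aligned, contiguous input block
        std::size_t   input_size;
        void*         output;       // where results get written back
        std::size_t   output_size;
    };

    // Game code only sees this interface; the engine implementation picks a
    // PPU thread or an SPU per job.
    class JobQueue {
    public:
        virtual void submit(const JobDesc& job) = 0;
        virtual void wait_all() = 0;
        virtual ~JobQueue() = default;
    };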
masklinn
> Both systems pushed the clock rate much higher than PCs at the time.
Intel reached 3.2GHz on a production part in June 2003, with the P4 HT 3.2 SL792. At the time the 360 and PS3 were released, Intel's highest-clocked part was the P4 EE SL7Z4 at 3.73GHz.
rasz
Not to mention that both Intel's 30-stage-deep pipeline and the in-order PPC were empty MHz, spent mostly on waiting for cache misses.
01HNNWZ0MV43FF
I'm surprised the SPUs were used for post-processing, cause whenever I try to do software rendering I get bottlenecked on fill rate quickly. I believe you, because I've seen it attested in many places, but I'm surprised by it.
corysama
The 1:1 straight-line behavior of fullscreen post processing is much easier to prefetch than triangle rasterization. And, in this case the SPUs and GPU used the same memory. So, no bandwidth advantage to the GPU. The best the GPU could do would be hiding latency better.
dehrmann
The Xbox worked as a proof-of-concept to show that you could build a console with commodity hardware. The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture. Between the two, it was clear commodity hardware was the path forward.
mikepavone
> The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture.
I don't think this is really an accurate description of the 360 hardware. The CPU was much more conventional than the PS3's, but still custom (derived from the PPE in the Cell, but with an extended version of VMX). The GPU was the first to use a unified shader architecture. Unified memory was also fairly novel in the context of a high-performance 3D game machine. The use of eDRAM for the framebuffer is not novel (the Gamecube's Flipper GPU had this previously), but it also wasn't something you generally saw in off-the-shelf designs. Meanwhile the PS3 had an actual off-the-shelf GPU.
These days all the consoles have unified shaders and memory, but I think that just speaks to the success of what the 360 pioneered.
Since then, consoles have gotten a lot closer to commodity hardware of course. They're custom parts (well except the original Switch I guess), but the changes from the off the shelf stuff are a lot smaller.
photon_rancher
I mean, commodity hardware usually did OK in game consoles prior to then too. The NES was built around a modified commodity chip.
fragmede
in the beginning general purpose computers weren't capable of running graphics like the consoles could. That took dedicated hardware that only the early Atari/NES/Genesis had. That's not to say that the Apple or IBM clones didn't have games, they did, but it just wasn't the same. The differentiation was their hardware, enabling games that couldn't be run on early PCs. Otherwise why buy a console?
So the thinking was that a unique architecture was a console's raison d’être. Of course now we know better, as the latest generation of consoles shows, but that's where the thinking for the PS3's Cell architecture came from.
gmueckl
This leaves out an important step. When 3D graphics acceleration entered the broader consumer/desktop computing market, it was also a successor to the kind of 2D graphics acceleration that consoles had and previous generations of desktop computers generally didn't. So I believe that it's fair to say that specialized console hardware was replaced by general purpose computing hardware because the general purpose hardware had morphed to include a superset of console hardware capabilities.
01HNNWZ0MV43FF
GPUs are just mitochondria that were absorbed into general purpose computers after evolving from early game consoles
dehrmann
Agreeing with all my siblings' comments, computers took cues from a lot of places and evolved to be general-purpose. Something similar happened on the GPU side, and at some point the best parts of bespoke graphics hardware got generalized, plus 3D upended the whole space. By the PS3 era, there were multiple GPU vendors and multiple generations of APIs, so everything had settled down and standardized. The era of gaining a competitive advantage through clever hardware was over, and Sony, a hardware company, was still fighting the last war.
fragmede
> Sony, a hardware company, was still fighting the last war.
exactly!
treyd
This is the thing that people don't realize about middle-era consoles. It was the shift where commodity PC hardware was competing well with console hardware.
Today in 2025 the only possible advantage is maybe in a specific price category where the volume discount is enough to justify it. In general, consoles just don't make technological sense.
MBCook
Price, UX, and fixed hardware that can be heavily optimized for.
VyseofArcadia
Well, there was the Amiga, but in all fairness it was first conceived as a game console and then worked into a computer.
rasz
>That took dedicated hardware that only the early Atari
2600 was downright pathetic compared to TRS-80 or Apple 2
>/NES
comparable to C-64
>/Genesis
comparable to Amiga 500
thadt
Not a game developer, but I wrote a bunch of code specifically for the CELL processor for grad school at the time (and tested it on my PS3 at home - marking the first and last time I was able to convince my wife I needed a video game system "for real work"). It was fun to play with, but I can empathize with the time cost aspect: scheduling and optimizing DMA and SPE compute tasks just took a good bit of platform specific work.
I suspect a major point killing off special architectures like the PS3 was the desire of game companies to port their games to other platforms such as the PC. Porting to/from the PS3 would be rather painful if you were trying to fully leverage the power and programming model of the CELL CPU.
MBCook
As things got more expensive we really started to see a switch from custom or in-house engines to the ones we’re so familiar with like Unity and Unreal.
Many developers couldn’t afford to keep up if they had to build their own engine, let alone on multiple platforms.
Far cheaper/easier to share the cost with many others through Unreal licenses. Your game is more portable and can use more features than you may ever have had time to add to your own engine.
It’s way easier to make multi-platform engines if each one doesn’t need its own ridiculously special way of doing things. And unless that platform is the one that’s driving a huge amount of sales I’m guessing it’s gonna get less attention/optimization.
darknavi
I suspect that as well.
It's not that the architecture was bad, it's that it wasn't easily portable to the other platforms developers wanted to release on, resulting in prohibitively high costs for doing a "full" port.
wmf
Nah, it was bad. It took far too much effort even for PS3-exclusive games.
MBCook
Well that wasn’t supposed to be the architecture either. The whole thing was supposed to be vastly faster and bigger with way more SPUs.
Maybe that would’ve been terrible, maybe not. Kinda sounds like yes in hindsight.
But the SPUs were originally supposed to do the GPU work too, I think. So there's a reason the GPU doesn't fit in terribly well: it had to be tacked on at the end so the PS3 had any chance at all. And it couldn't be well designed/optimized for the rest of the system because they were out of time.
rokkamokka
> I used to think that PS3 set back Many-Core for decades, now I wonder if it simply killed it forever.
Did general purpose CPUs not kind of subsume this role? Modern CPUs have 16 cores, and server oriented ones can have many, many more than that
bitwarrior
> The PS3 failed developers because it was an excessively heterogenous computer
Which links to the Wiki:
> These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors
Modern CPUs have many similar cores, not dissimilar cores.
kmeisthax
Mobile CPUs embraced this hardcore; but the problem is that most of those cores don't have the programmer interfaces exposed. The most dissimilarity you get on mobile is big.LITTLE; you might occasionally get scheduled on a weaker core with better power consumption. But this is designed to be software-transparent. In contrast, the device vendor can stuff their chips full of really tiny cores designed to run exactly one program all the time.
For example, Find My's offline finding functionality runs off a coprocessor so tiny it can basically stay on forever. But nobody outside Apple gets to touch those cores. You can't ship an app that uses those cores to run a different (cross-platform) item-finding network; even on Android they're doing all the background stuff on the application processor.
MBCook
AI accelerators are a new popular addition. Media encoder/decoder blocks have been around for a while. Crypto accelerator blocks.
dkersten
Some Intel processors have P/E core splits. So do some Apple processors and mobile processors.
Our normal desktop processors have double the Cell's core count. Workstations and servers have 64 or more cores.
Many core is alive and well.
rokkamokka
Ah, my bad, I didn't understand the definition of many-core
sergers
i was thinking similar lines.
maybe i dont fully understand "many-core", but the definition the article implies aligns with what i think of in the latest qualcomm snapdragon mobile processors, for example, with cores at different frequencies/other differences.
also i dont understand why ps3 is considered a failure, when did it fail?
in NA xbox360 was more popular (i would say because xbox live) but ps3 was not far behind (i owned a ps3 at launch and didnt get a xbox360 till years later).
from a lifetime sales, shows more ps3s shipped globally than xbox.
MBCook
The incredibly high price of the PS3 at launch cost it a lot of sales, and it took forever to come down. Both of those are direct results of the hardware cost of the Cell and BluRay drive.
Early on, the Xbox also did a better job with game ports. People had very little experience using multicore processors, and the Cell was even worse. So often the PlayStation 3 version would have a lower resolution or worse frame rate or other problems like that.
Xbox Live is also an excellent point. That really helped Microsoft a lot.
All of that meant Microsoft got an early lead, and the PlayStation 3 didn't do anywhere near as well as someone might expect from a follow-up to the PlayStation 2.
As time went on, the benefits of the Blu-ray drive started to factor in some. Every PlayStation had a hard drive, which wasn’t true of the 360. The red ring of death made a lot of customers mad and scared others off from the Xbox. And as Sony released better libraries and third parties just got a better handle on things they started to be able to do a better job on their PS3 versions to where it started to match or exceed the Xbox depending on the game.
By the end I think the PlayStation won in North American sales but it was way way closer than it should have been coming off the knockout success of the PS2.
masklinn
> also i dont understand why ps3 is considered a failure, when did it fail?
> The PS3 failed developers
It failed as an ISA (or collection thereof), and in developer mindshare.
dcow
I would argue that the failure extended to the user-perceptible performance deficit vs the XB360 despite arguably more capable hardware. Released games didn't perform better on the PS3 even if they technically could.
mattnewport
Big little cores like on mobile or some Intel processors are really not the same thing. The little cores have the same instruction set and address the same memory as the big cores and are pretty transparent to devs apart from some different performance characteristics.
The SPEs were a different instruction set with a different compiler tool chain running separate binaries. You didn't have access to an OS or much of a standard library, you only had 256K of memory shared between code and data. You had to set up DMA transfers to access data from main memory. There was no concept of memory protection so you could easily stomp over code with a write to a bad pointer (addressing wrapped so any pointer value including 0 was valid to write to). Most systems would have to be more or less completely rewritten to take advantage of them.
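To make the "separate binaries" point concrete, here is a rough PPE-side sketch of what loading and running an SPE program looked like with IBM's libspe2. The file name is a placeholder, error handling is minimal, and exact flags varied by SDK version.

    #include <libspe2.h>
    #include <cstdio>

    int main()
    {
        // The SPE program is a separate ELF built with the SPU toolchain
        // (spu-gcc), not the PPU one; "kernel.spu.elf" is a made-up name.
        spe_program_handle_t* prog = spe_image_open("kernel.spu.elf");
        if (!prog) { std::perror("spe_image_open"); return 1; }

        spe_context_ptr_t spe = spe_context_create(0, nullptr);
        spe_program_load(spe, prog);

        unsigned int entry = SPE_DEFAULT_ENTRY;
        spe_stop_info_t stop;
        // Blocks until the SPE program stops. argp/envp are how you'd hand the
        // SPE the effective addresses it needs for its own DMA transfers.
        spe_context_run(spe, &entry, 0, /*argp=*/nullptr, /*envp=*/nullptr, &stop);

        spe_context_destroy(spe);
        spe_image_close(prog);
        return 0;
    }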
accrual
> 256 MB was dedicated to graphics and only had REDACTED Mb/s access from the CPU
I wonder what the REDACTED piece means here, aren't the PS3 hardware specifications pretty open? Per Copetti, the RSX memory had a theoretical bandwidth of 20.8 GB/s, though that doesn't indicate how fast the CPU can access it.
monocasa
I don't know why it's redacted here; maybe he couldn't find a public source.
It is a mind bendingly tiny 16MB/s bandwidth to perform CPU reads from RSX memory.
OptionOfT
Just to make sure, I read that the CPU reads from the RSX at 16 megabytes / sec?
maximilianburke
I can't recall exactly but that sounds right. It was exceptionally slow.
christkv
Sony was funny in this way.
PS1: Easy to develop for and max out.
PS2: Hard to develop for and hard to max out.
PS3: Even harder than PS2.
PS4: Back to easier.
PS5: Just more PS4.
PS5 PRO: Just more PS5.
AdmiralAsshat
It certainly doesn't seem to have impacted adoption, though.
For whatever reasons developers seem loath to talk about how difficult developing for a given console architecture is until the console is dead and buried. I guess the assumption is that the console vendor might retaliate, or the fans might say, "Well all of these other companies are somehow doing it, so you guys must just suck at your jobs."
An early interview with Shinji Mikami is one of the only ones I can recall of a high-profile developer being frank about having difficulties developing for a console[0]:
> IGNinsider: Ahh, smart politics. How do you feel about working on the PlayStation 2? Have you found any strengths in the system by working on Devil May Cry that you hadn't found before?
>
> Mikami: If the programmer is really good, then you can achieve really high quality, but if the programmer isn't that great then it is really hard to work with. We lost three programmers during Devil May Cry because they couldn't keep up.
[0] https://www.ign.com/articles/2001/05/31/interview-with-shinj...
setr
the ps3 development difficulty was definitely complained about during its life cycle; the standard ps3 vs xbox360 argument was that the ps3 had far superior hardware, and xbox fans would always counter that no one could make use of that hardware
christkv
I think it's funny because ease of development was one of the reasons why the original PlayStation had such a wide library of titles. The Saturn and the N64 were hard to get good performance out of due to architectural decisions.
MBCook
Sony needed developers for the PlayStation. So they did a good job.
The PlayStation did so well a lot of people wanted the PlayStation 2. And because it worked as a cheap DVD player it sold extremely well.
Sony learned that hard-to-program, expensive, exotic hardware does great!
PS3 arrives with hardware that’s even more expensive and even harder to program and gets a world of hurt.
So for the PlayStation 4 they tried to figure out what went wrong and realized they needed to make things real easy for developers. Success!
PlayStation 5: that PlayStation 4 thing worked great, let's keep being nice to developers. Going very well.
The PS2 succeeded _in-spite_ of its problems. And Sony didn’t realize that.
maximilianburke
The PS4 and beyond are entirely creditable to Mark Cerny, who spent a lot of time talking to developers who had spent years pulling their hair out with the PS3.
dundarious
> Most code and algorithms cannot be trivially ported to the SPE.
Having never worked on SPE coding, but having heard lots about interesting aspects, like manual cache management, I was very interested to read more.
> C++ virtual functions and methods will not work out of the box. C++ encourages dynamic allocation of objects but these can point to anywhere in main memory. You would need to map pointer addresses from PPE to SPE to even attempt running a normal c++ program on the SPE.
Ah. These are schoolboy errors in games programming (comparing even with the previous 2 generations of the same system).
I think the entire industry shifted away from teaching/learning/knowing/implementing those practices de rigueur, so I'm absolutely not criticising the OP -- I was taught the same way around this time.
But my reading of the article is now that it highlights a then-building and now-ubiquitous software industry failing, almost as much as a hardware issue (the PS3 did have issues, even if you were allocating in a structured way and not trying to run virtual functions on SPEs).
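A generic sketch of the kind of layout that sidesteps the problem (not taken from the article, and written in modern C++ for brevity; the SPU compilers of the era were older): flat, self-contained PODs instead of heap-allocated polymorphic objects, so the same buffer can be DMA'd into local store and processed without chasing vtable or heap pointers. The names and fields are made up.

    #include <cstdint>
    #include <cstddef>

    enum class UpdateKind : std::uint32_t { Particle, Projectile };

    // Self-contained, 16-byte-aligned POD: no vtable pointer, no pointers into
    // main memory, so an array of these can be copied verbatim into an SPE's
    // 256K local store (or anywhere else) and still mean something.
    struct alignas(16) UpdateItem {
        UpdateKind kind;
        float pos[3];
        float vel[3];
        float ttl;
    };

    // Dispatch is a branch on data rather than a virtual call, so the same
    // loop runs on the PPE or, after a DMA, on an SPE.
    void update_all(UpdateItem* items, std::size_t count, float dt)
    {
        for (std::size_t i = 0; i < count; ++i) {
            UpdateItem& it = items[i];
            for (int a = 0; a < 3; ++a)
                it.pos[a] += it.vel[a] * dt;
            if (it.kind == UpdateKind::Projectile)
                it.ttl -= dt;
        }
    }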
bdhcuidbebe
> It is important to understand why the PS3 failed.
But did it fail?
The PS3 was a very successful 7th-gen console, only beaten by the Wii in units sold, but it had a longer shelf life and more titles than any other 7th-gen console.
Pet_Ant
I hope that as RISC-V gains in support, there is a chance to experiment with a many-core version of it. Something like a hundred QERV cores on a chip. The lack of patents is a key enabler, and support for the ISA on more vanilla chips is the other enabler. This could happen.
The only practical many-core I know of was the SPARC T series: https://en.wikipedia.org/wiki/SPARC_T_series
raphlinus
Thanks so much, Peter, for writing this up. I think it adds a lot to the record about what exactly happened with the Cell. And, as with Larrabee, I have to wonder, what would an alternative universe look like if Sony had executed well? Or is the idea so ill-fated that no Cell-like many-core design could ever succeed?
I remember trying to learn Cell programming in 2006 using IBM’s own SDK (possibly different and less polished compared to whatever Sony shipped to licensed PS3 developers).
I had already spent a few years writing fragment shaders, OpenGL, and CPU vector extension code for 2D graphics acceleration, so I thought I’d have a pretty good handle on how to approach this new model of parallel programming. But trying to do anything with the SDK was just a pain. There were separate incompatible gcc toolchains for the different cores, separate vector extensions, a myriad of programming models with no clear guidance on anything… And the non-gcc tools were some hideous pile of Tcl/TK GUI scripts with a hundred buttons on the screen.
It really made me appreciate how good I’d had it with Xcode and Visual Studio. I gave up on Cell after a day.