Raytracing on Intel's Arc B580 – By Chester Lam
22 comments · March 16, 2025
im_down_w_otp
I love these breakdown writeups so much.
I'm also hoping that Intel puts out an Arc A770 class upgrade in their B-series line-up.
My workstation and my kids' playroom gaming computer both have A770s, and they've been really amazing for the price I paid, $269 and $190. My triple-screen racing sim has an RX 7900 GRE ($499), and of the three the GRE has surprisingly been the least consistently stable (e.g. driver timeouts, crashes).
Granted, I came into the new Intel GPU game after they'd gone through 2 solid years of driver quality hell, but I've been really pleased with Intel's uncharacteristic focus and pace of improvement in both the hardware and especially the software. I really hope they keep it up.
999900000999
I have a couple of these too, and I strongly believe Intel is effectively subsidizing these to try to get a foothold in the market.
You get the equivalent of a $500 Nvidia card for around $300 or less. And it makes sense: Intel knows that if they can get a foothold in this market, they're that much more valuable to shareholders.
Great for gaming, no real downsides imo.
rayiner
This is so cool! I think this is a video of Cyberpunk 2077 with path tracing on versus off: https://www.youtube.com/watch?v=89-RgetbUi0. It seems like a real, next-generation advance in graphics quality, the kind we haven't seen in a while.
api
Intel Arc could be Intel's comeback if they play it right. AMD's got the hardware to disrupt nVidia but their software sucks and they have a bad reputation for that. Apple's high-end M chips are good but also expensive like nVidia (and sold only with a high-end Mac) and don't quite have the RAM bandwidth.
blagie
Intel is close. Good history with software.
If they started shipping GPUs with more RAM, I think they'd be in a strong position. The traditional disruption is to eat the low-end and move up.
Silly as it may sound, a Battlemage where one could just plug in DIMMs, with some high total RAM limit, would be the ultimate card for developers who just want to test/debug LLMs locally.
sergiotapia
Was raytracing a psyop by Nvidia to lock out AMD? Games today don't look that much nicer than they did 10 years ago, yet they demand crazy hardware. Is raytracing a solution looking for a problem?
im_down_w_otp
I've kind of wondered about this a bit too, at least the visual quality side of it. Especially in a context where you're actually playing a game, not just sitting there staring at side-by-side still frames looking for minor differences.
What I have assumed given the trend, but could be completely wrong about, is that the raytraced version of the world might be easier on the software and game dev side: you can get great visual results without the overhead of meticulously engineering, using, and composing different lighting systems, shader effects, etc.
juunpp
Except that it isn't like that at all. All you get from the driver in terms of ray tracing is the acceleration structure and ray traversal. Then you have denoisers and upscalers provided as third-party software. But games still ship with thousands of materials, and it is up to the developer to manage lights, shaders, etc., and use the hardware and driver primitives intelligently to get the best bang for the buck. Plus, given that primary rays are a waste of time/compute, you're still stuck with G-buffer passes and rasterization anyway. So now you have two problems instead of one.
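(For anyone who hasn't poked at one of these renderers: a hybrid frame ends up looking roughly like the sketch below. It's a toy C++ outline with invented pass names, not any real engine's or graphics API's interface, but it shows why it's "two problems instead of one": rasterization still does primary visibility, and the ray traced passes plus denoise/upscale sit on top of it.)

```cpp
#include <cstdio>

// Toy stand-ins so the sketch compiles; these names are invented for
// illustration and are not a real engine or graphics API.
struct GBuffer {}; struct Tlas {}; struct Image {};

GBuffer RasterizeGBuffer()           { std::puts("raster : G-buffer (depth/normals/albedo/material IDs)"); return {}; }
Tlas    BuildOrRefitAccelStructure() { std::puts("driver : build/refit acceleration structure");           return {}; }
Image   TraceShadowRays(const GBuffer&, const Tlas&)     { std::puts("RT     : shadow rays");     return {}; }
Image   TraceReflectionRays(const GBuffer&, const Tlas&) { std::puts("RT     : reflection rays"); return {}; }
Image   TraceDiffuseGI(const GBuffer&, const Tlas&)      { std::puts("RT     : diffuse GI rays"); return {}; }
Image   Denoise(Image x)             { std::puts("post   : denoise");                            return x; }
Image   Shade(const GBuffer&, Image, Image, Image)       { std::puts("shade  : combine with materials/lights"); return {}; }
Image   Upscale(Image x)             { std::puts("post   : upscale to output resolution");       return x; }

// One hybrid frame: rasterization still handles primary visibility, ray
// tracing handles secondary effects, then denoisers and an upscaler clean
// up the sparse, noisy results.
int main() {
    GBuffer gbuf = RasterizeGBuffer();
    Tlas    tlas = BuildOrRefitAccelStructure();
    Image   lit  = Shade(gbuf,
                         Denoise(TraceShadowRays(gbuf, tlas)),
                         Denoise(TraceReflectionRays(gbuf, tlas)),
                         Denoise(TraceDiffuseGI(gbuf, tlas)));
    Upscale(lit);
    return 0;
}
```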
kmeisthax
For the vast majority of scenes in games, the best balance of performance and quality is precomputed visibility, lighting and reflections in static levels with hand-made model LoDs. The old Quake/Half-Life bsp/vis/rad combo. This is unwieldy for large streaming levels (e.g. open world games) and breaks down completely for highly dynamic scenes. You wouldn't want to build Minecraft in Source Engine[0].
However, that's not what's driving raytracing.
The vast majority of game development is "content pipeline" - i.e. churning out lots of stuff - and engine and graphics tech is built around removing roadblocks to that content pipeline, rather than presenting the graphics card with an efficient set of draw commands. e.g. LoDs demand artists spend extra time building the same model multiple times; precomputed lighting demands the level designer wait longer between iterations. That goes against the content pipeline.
Raytracing is Nvidia promising game and engine developers that they can just forget about lighting and delegate that entirely to the GPU at run time, at the cost of running like garbage on anything that isn't Nvidia. It's entirely impractical[1] to fully raytrace a game at runtime, but that doesn't matter if people are paying $$$ for roided out space heater graphics cards just for slightly nicer lighting.
[0] That one scene in The Stanley Parable notwithstanding
[1] Unless you happen to have a game that takes place entirely in a hall of mirrors
gmueckl
When path tracing works, it is a much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated hacks in current rasterization-based renderers that barely manage to capture crude approximations of the first indirect light bounces. Rasterization as a rendering model for realistic lighting has outlived its usefulness. It has overstayed because optimizing ray-triangle intersection tests for path tracing in hardware is a hard problem that took some 15 to 20 years of research to even get to the first-generation RTX hardware.
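To make "simpler" concrete, here is roughly what the entire core of a diffuse path tracer looks like. It's a toy C++ sketch written for illustration (a hard-coded scene of one sphere on a ground plane, with the sky as the only light source), not anyone's production code. There's no shadow mapping, no light probes, no screen-space anything: intersect, multiply by albedo, bounce, repeat. Everything else (BVHs, denoisers, upscalers) exists to make this fast, not to define the lighting.

```cpp
// Minimal diffuse-only path tracer: renders a sphere on a ground plane lit
// by the sky and writes out.ppm. Illustrative only.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec { double x=0, y=0, z=0; };
Vec operator+(Vec a, Vec b){ return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec operator-(Vec a, Vec b){ return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec operator*(Vec a, double s){ return {a.x*s, a.y*s, a.z*s}; }
Vec mul(Vec a, Vec b){ return {a.x*b.x, a.y*b.y, a.z*b.z}; }
double dot(Vec a, Vec b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec norm(Vec a){ return a * (1.0/std::sqrt(dot(a,a))); }

struct Sphere { Vec c; double r; Vec albedo; };
const Sphere scene[] = {
    {{0, 1, 0},    1.0, {0.8, 0.3, 0.3}},   // red ball
    {{0, -1e5, 0}, 1e5, {0.5, 0.5, 0.5}},   // huge sphere used as the ground
};

// Parameter t of the nearest hit along ray o + t*d, or -1 if there is none.
double hit(const Sphere& s, Vec o, Vec d) {
    Vec oc = o - s.c;
    double b = dot(oc, d), c = dot(oc, oc) - s.r*s.r, disc = b*b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 1e-4 ? t : -1;
}

std::mt19937 rng(42);
double rnd(){ return std::uniform_real_distribution<double>(0,1)(rng); }

// Roughly cosine-weighted direction in the hemisphere around normal n.
Vec randomDir(Vec n) {
    while (true) {
        Vec p{2*rnd()-1, 2*rnd()-1, 2*rnd()-1};
        if (dot(p,p) < 1) { p = norm(p + n); if (dot(p,n) > 0) return p; }
    }
}

// The entire lighting algorithm: follow the ray, bounce diffusely,
// terminate when the ray escapes into the sky.
Vec radiance(Vec o, Vec d) {
    Vec throughput{1,1,1};
    for (int bounce = 0; bounce < 5; ++bounce) {
        double best = 1e30; const Sphere* hitS = nullptr;
        for (const Sphere& s : scene) {
            double t = hit(s, o, d);
            if (t > 0 && t < best) { best = t; hitS = &s; }
        }
        if (!hitS) return mul(throughput, {0.7, 0.8, 1.0});   // sky acts as the light
        Vec p = o + d*best, n = norm(p - hitS->c);
        throughput = mul(throughput, hitS->albedo);
        o = p; d = randomDir(n);                              // diffuse bounce
    }
    return {0,0,0};
}

int main() {
    const int W = 320, H = 240, SPP = 64;    // more samples per pixel => less noise
    std::FILE* f = std::fopen("out.ppm", "w");
    std::fprintf(f, "P3\n%d %d\n255\n", W, H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            Vec c;
            for (int s = 0; s < SPP; ++s) {
                Vec d = norm({ (x + rnd())/W - 0.5, -((y + rnd())/H - 0.5) * H/(double)W, 1.0 });
                c = c + radiance({0, 1, -5}, d);
            }
            c = c * (1.0/SPP);   // average samples, then gamma-correct on output
            std::fprintf(f, "%d %d %d\n", int(255*std::sqrt(c.x)), int(255*std::sqrt(c.y)), int(255*std::sqrt(c.z)));
        }
    std::fclose(f);
}
```

Crank SPP up and the noise goes away on its own; that single knob is essentially what denoisers and upscalers are standing in for at real-time budgets.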
juunpp
This doesn't hold at all. Path tracing doesn't "just work"; it is computationally infeasible. It needs acceleration structures, ray traversal scheduling, denoisers, upscalers, and a million other hacks to get anywhere close to real time.
gruez
>When path tracing works, it is a much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated hacks in current rasterization-based renderers that barely manage to capture crude approximations of the first indirect light bounces.
It's ironic that you harp on the "hacks" used in rasterization, when raytracing is so computationally intensive that you need layers upon layers of performance hacks to get decent performance. The raytraced result needs to be denoised because not enough rays are used. The output of that needs to be upscaled (because you need to render at low resolution to get acceptable performance), and then on top of all of that you need to hallu^W extrapolate frames to hit high frame rates.
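The arithmetic behind that stack is easy to sketch. Here's a tiny back-of-the-envelope C++ program (the resolutions and sample counts are rough illustrative assumptions, not measurements) showing why nobody path traces at native 4K with a meaningful sample count, and why the denoise/upscale/frame-generation chain exists:

```cpp
#include <cstdio>

int main() {
    // What a "just path trace it" frame would want (illustrative numbers).
    const double pixels          = 3840.0 * 2160.0;  // native 4K
    const double fps             = 60.0;
    const double samples_per_px  = 16.0;             // still noisy for many scenes
    const double rays_per_sample = 3.0;              // a few bounces per path

    const double wanted = pixels * fps * samples_per_px * rays_per_sample;
    std::printf("naive requirement : %.1f billion rays/s\n", wanted / 1e9);

    // What games actually trace: a sub-native internal resolution at roughly
    // one sample per pixel, then denoise, upscale, and generate frames to
    // claw back image quality and frame rate.
    const double traced = 1920.0 * 1080.0 * fps * 1.0 * rays_per_sample;
    std::printf("typical budget    : %.1f billion rays/s\n", traced / 1e9);

    std::printf("gap covered by denoising/upscaling/frame-gen: %.0fx\n", wanted / traced);
    return 0;
}
```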
keyringlight
I think there are two ways of looking at it. First, raster has more or less plateaued: there haven't been any great advances in a long time, and it's not like AMD or any other company has offered an alternative path or vision for where they see 3D graphics going. The last thing a company like Nvidia wants is to be a generic good that's easy to compete with or simple to compare against. Nvidia was also making use of their strength and long-term investment in ML to drive DLSS.
Second, Nvidia is a company that wants to sell stuff at a high asking price, and once a certain tech gets good enough that becomes more difficult. If the 20 series had just been an incremental improvement over the 10 series, and so on, then I expect sales would have plateaued, especially if game requirements didn't move much.
sergiotapia
I don't believe we have reached a raster ceiling. More and more it seems like groups are in cahoots to push RTX and ray tracing. We are left to speculate why devs are doing this. nvidiabux? An easier time adding marketing keywords? Who knows... I'm not a game dev.
gruez
There's no need to imply deals between Nvidia and game developers in smoke-filled rooms. It's pretty straightforward: raytracing means less work for developers, because they don't have to manually place lights to make things look "right". Plus, they can harp on how it looks "realistic". It's no different from the explosion of Electron apps (and similar technologies for making apps with HTML/JS), which might be fast to develop but are bloated and feel non-native. And it's not like there's an Electron corp giving out "electronbux" to push app developers to use Electron.
phatfish
Raster quality is limited by how much effort engine developers are willing to put into finding computationally cheap approximations of how light/materials behave. But it feels like the easy wins are already taken?
ThatPlayer
I don't think it's just about looks. The advantage of ray tracing is that lighting is computed in real time rather than from static baked maps. One of the features I feel was lost with modern game lighting is dynamic environments, and as long as a game can't assume ray tracing, those kinds of interactions stay disabled. Teardown and The Finals are examples of dynamic-environment games with raytraced lighting.
Another example: when was the last time you saw a game with a mirror that wasn't broken?
gruez
Hitman and GTA, both of which use non-raytraced implementations. More to the point, the lack of mirrors doesn't impact the gameplay. It's something that's trotted out as a nice gimmick; 99% of the time it's not there, and you don't really notice that it's missing.
davikr
lol, go play Cyberpunk 2077 with pathtracing and compare it to raster before you call it a gimmick.
Clemolomo
It's a transition that's happening.
Research and progress are necessary, and ray tracing is a clear advancement.
AMD could easily skip it if they wanted to reduce costs, and we could just not buy the GPUs. Neither of those things is happening.
It does look better, and it would be a lot easier if we only did ray tracing.
It feels like just yesterday that Chips and Cheese started publishing (I checked, and they started up in 2020 -- so not that long ago after all!), and now they've really become a mainstay in my silicon newsletter stack, up there with Semianalysis/Semiengineering/etc.
> Intel uses a software-managed scoreboard to handle dependencies for long latency instructions.
Interesting! I've seen this in compute accelerators before, but both AMD and Nvidia manage their long-latency dependency tracking in hardware so it's interesting to see a major GPU vendor taking this approach. Looking more into it, it looks like the interface their `send`/`sendc` instruction exposes is basically the same interface that the PE would use to talk to the NOC: rather than having some high-level e.g. load instruction that hardware then translates to "send a read-request to the dcache, and when it comes back increment this scoreboard slot", the ISA lets/makes the compiler state that all directly. Good for fine control of the hardware, bad if the compiler isn't able to make inferences that the hardware would (e.g. based on runtime data), but then good again if you really want to minimize area and so wouldn't have that fancy logic in the pipeline anyways.
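As a thought experiment, compiler-managed dependency tracking boils down to something like the toy C++ model below. This is purely an illustration: it is not Xe ISA syntax, and all the names are made up. The key point is that "wait until this load's scoreboard slot clears" is an explicit step the compiler schedules into the instruction stream, rather than something the hardware infers on its own:

```cpp
#include <array>
#include <cstdio>

// Toy model of a software-managed scoreboard. The compiler picks a
// scoreboard slot when it issues a long-latency operation, schedules
// independent work after it, and inserts an explicit wait before the
// first consumer of the result.
struct Scoreboard {
    std::array<bool, 16> busy{};   // one flag per scoreboard slot

    int issue_load(int slot) {     // slot chosen by the compiler
        busy[slot] = true;
        std::printf("send : load issued, tagged with scoreboard slot %d\n", slot);
        return slot;
    }
    void wait(int slot) {          // compiler-inserted synchronization point
        std::printf("sync : stall here until slot %d clears\n", slot);
        busy[slot] = false;        // pretend the memory system signalled completion
    }
};

int main() {
    Scoreboard sb;
    int slot = sb.issue_load(3);   // long-latency memory access starts
    std::puts("alu  : independent math keeps issuing, no hardware stall logic needed");
    std::puts("alu  : the compiler is responsible for finding this overlap");
    sb.wait(slot);                 // must appear before the first use of the data
    std::puts("alu  : now safe to consume the loaded value");
    return 0;
}
```

The trade-off described above falls straight out of this: the compiler has to be conservative whenever it can't predict latencies at compile time, but in exchange the pipeline doesn't need any of the dynamic tracking logic, which saves area.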