
Intel Arc Pro B50 GPU Launched at $349 for Compact Workstations

Tepix

When will we see Intel Flex datacenter cards based on the Xe2 "Battlemage" architecture that don't have the 8-stream limit?

All current Intel Flex cards seem to be based on the previous gen "Xe".

tester756

https://www.phoronix.com/review/intel-arc-pro-b50-linux

>Overall the Intel Arc Pro B50 was at 1.47x the performance of the NVIDIA RTX A1000 with that mix of OpenGL, Vulkan, and OpenCL/Vulkan compute workloads both synthetic and real-world tests. That is just under Intel's own reported Windows figures of the Arc Pro B50 delivering 1.6x the performance of the RTX A1000 for graphics and 1.7x the performance of the A1000 for AI inference. This is all the more impressive when considering the Arc Pro B50 price of $349+ compared to the NVIDIA RTX A1000 at $420+.

bsder

Put 32GB on that card and everybody would ignore performance issues.

With 16GB everybody will just call it another in the long list of Intel failures.

snowram

I wonder why everyone keeps saying "just put more VRAM" yet no cards seem to do that. If it is that easy to compete with Nvidia, why don't we already have those cards?

blkhawk

Because the cards already sell at very, very good prices with 16GB, and optimization in generative AI is bringing down memory requirements. Optimizing profits means you sell with the least amount of VRAM possible, not only to save the direct cost of the RAM but also to guard future profit and your other market segments; the cost of the RAM itself is almost nothing compared to that. Any Intel competitor can more easily release products with more than 16GB and smoke them. Intel is going for a market segment that until now was only served by gaming cards twice as expensive, which frees those up to finally be sold at MSRP.

rocqua

I believe that VRAM has massively shot up in price, so this is where a large part of the costs are. Besides I wouldn't be very surprised if Nvidia has such strong market share they can effectively tell suppliers to not let others sell high capacity cards. Especially because VRAM suppliers might worry about ramping up production too much and then being left with an oversupply situation.

agilob

AMD is growing into doing that; they have a few decent cards with 20GB and 24GB now.

colechristensen

#3 player just released something that compares well on price/performance with the #1 player's release from a year and a half ago... yep

tinco

No. The A1000 was well over $500 last year. This is the #3 player coming out with a card that's a better deal than what the #1 player currently has to offer.

I don't get why there are people trying to twist this story or come up with strawmen like the A2000 or even the RTX 5000 series. Intel is coming into this market competitively, which as far as I know is a first, and it's also impressive.

Coming into the gaming GPU market has always been too ambitious a goal for Intel; they should have started by competing in the professional GPU market. It's well known that Nvidia and AMD have always been price gouging this market, so it's fairly easy to enter it competitively.

If they can enter this market successfully and then work their way up the food chain, that seems like a good way to recover from their initial fiasco.

blagie

Well, no. It doesn't. The comparison is to the A1000.

Toss a 5060 Ti into the comparison table and we're in an entirely different playing field.

There are reasons to buy the workstation NVidia cards over the consumer ones, but those mostly go away when looking at something like the new Intel. Unless one is in an exceptionally power-constrained environment yet has room for a full-sized card (not SFF or laptop), I can't see a situation where the B50 would even be in the running against a 5060 Ti, 4060 Ti, or even 3060 Ti.

magicalhippo

> There are reasons to buy the workstation NVidia cards over the consumer ones

I seem to recall certain esoteric OpenGL things, like fast line rendering, being an NVIDIA marketing differentiator, as only certain CAD packages or similar cared about that. Is this still the case, or has that software segment moved on now?

KeplerBoy

"release from a year and a half ago", that's technically true but a really generous assessment of the situation.

We could just as well compare it to the slightly more capable RTX A2000, which was released more than 4 years ago. Either way, Intel is competing with the EoL Ampere architecture.

tossandthrow

... At a cheaper price today.

There are huge markets that don't care about SOTA performance metrics but need to get a job done.

mythz

Really confused why Intel and AMD both continue to struggle and yet still refuse to offer what Nvidia won't, i.e. high-RAM consumer GPUs. I'd much prefer paying 3x cost for 3x VRAM (48GB/$1047), 6x cost for 6x VRAM (96GB/$2094), 12x cost for 12x VRAM (192GB/$4188), etc. They'd sell like hotcakes and software support would quickly improve.

At 16GB I'd still prefer to pay a premium for NVidia GPUs given its superior ecosystem, I really want to get off NVidia but Intel/AMD isn't giving me any reason to.

fredoralive

Because the market of people who want huge RAM GPUs for home AI tinkering is basically about 3 Hacker News posters. Who probably won’t buy one because it doesn’t support CUDA.

PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use, we don’t really get Crysis style system crushers anymore.

bilekas

> PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use, we don’t really get Crysis style system crushers anymore.

This isn't really true on the consumer card side; Nvidia themselves are reducing the number of 8GB models in response to market demand [1]. Games these days regularly max out 6 and 8 GB when running anything above 1080p at 60fps.

The recent prevalence of Unreal Engine 5, often poorly optimized for weaker hardware, is also causing games to be released basically unplayable for many.

For gaming use the sentiment is that 8GB is scraping the bottom of the requirements. Again this is partly due to bad optimization, but games are also being played at higher resolutions, which requires more memory for larger textures.

[1] https://videocardz.com/newz/nvidia-reportedly-reduces-supply...

pjmlp

As someone who started on 8-bit computing: Tim Sweeney is right that the Electron garbage culture, when applied to Unreal 5, is one of the reasons so much RAM is needed, with such bad performance.

While I dislike some of the Handmade Hero culture, they are right about one thing: how badly modern hardware tends to be used.

Rohansi

> PS5 has something like 16GB unified RAM, and no game is going to really push much beyond that in VRAM use

That's pretty funny considering that PC games are moving more towards 32GB RAM and 8GB+ VRAM. The next generation of consoles will of course increase to make room for higher quality assets.

jantuss

Another use for high RAM GPUs is the simulation of turbulent flows for research. Compared to CPU, GPU Navier-Stokes solvers are super fast, but the size of the simulated domain is limited by the RAM.
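
As a rough illustration of how quickly VRAM becomes the constraint here (my own back-of-the-envelope numbers, assuming roughly 8 field arrays per cell, which varies by scheme):

    1024^3 cells x 8 arrays x 4 bytes (single precision) ~= 34 GB

So a 16GB card caps you at roughly a 768^3 domain in single precision, and considerably less in double precision.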

FirmwareBurner

>Because the market of people who want huge RAM GPUs for home AI tinkering is basically about 3 Hacker News posters

You're wrong. It's probably more like 9 HN posters.

blitzar

There are also 3 who, for retro reasons, want GPUs to have 8 bits and 256MB or less of VRAM

daemonologist

This card does have double the VRAM of the more expensive Nvidia competitor (the A1000, which has 8 GB), but I take your point that it doesn't feel like quite enough to justify giving up the Nvidia ecosystem. The memory bandwidth is also... not great.

They also announced a 24 GB B60 and a double-GPU version of the same (saves you physical slots), but it seems like they don't have a release date yet (?).

Ekaros

I am not sure there is significant enough market for those, that is, selling enough consumer units to cover all the design and other costs. From a gamer perspective, 16GB is now a reasonable point; 32GB is the most one would really want, and even that at no more than, say, $100 extra.

This to me is the gamer perspective. This segment really does not need even 32GB, let alone 64GB or more.

drra

Never underestimate bragging rights in the gamer community. The majority of us run unoptimized systems with that one great piece of gear, and as long as the game runs at decent FPS and we have some bragging rights it's all OK.

rkomorn

Exactly. A decade ago, I put 64GB of RAM in a PC and my friend asked me why I needed that much and I replied "so I can say I have 64GB RAM".

The only time usage was "high" was when I created a VM with 48GB RAM just for kicks.

It was useless. But I could say I had 64GB RAM.

imiric

> I am not sure there is significant enough market for those.

How so? The prosumer local AI market is quite large and growing every day, and is much more lucrative per capita than the gamer market.

Gamers are an afterthought for GPU manufacturers. NVIDIA has been neglecting the segment for years and is now much more focused on enterprise and AI workloads. Gamers get marginal performance bumps each generation, plus side-effect benefits from NVIDIA's AI R&D (DLSS, etc.). The exorbitant prices and poor performance per dollar are clear indications of this. It's plain extortion, and the worst part is that gamers have accepted that paying $1000+ for a GPU is perfectly reasonable.

> This segment really does not need even 32GB, let alone 64GB or more.

4K is becoming a standard resolution, and 16GB is not enough for it. 24GB should be the minimum, and 32GB for some headroom. While it's true that 64GB is overkill for gaming, it would be nice if that were accessible at reasonable prices. After all, GPUs are not exclusively for gaming, and we might want to run other workloads on them from time to time.

While I can imagine that VRAM manufacturing costs are much higher than DRAM costs, it's not unreasonable to conclude that NVIDIA, possibly in cahoots with AMD, has been artificially controlling the prices. While hardware has always become cheaper and more powerful over time, for some reason, GPUs buck that trend, and old GPUs somehow appreciate over time. Weird, huh. This can't be explained away as post-pandemic tax and chip shortages anymore.

Frankly, I would like some government body to investigate this industry, assuming they haven't been bought out yet. Label me a conspiracy theorist if you wish, but there is precedent for this behavior in many industries.

Fnoord

I think the timeline is roughly: SGI (90s), then Nvidia gaming (with ATi and then AMD) eating that cake. Then cryptocurrency took off at the end of the '00s / start of the '10s, though if we're honest things like hashcat were already happening too. After that, AI (LLMs) took off during the pandemic.

During the cryptocurrency hype, GPUs were already going for insane prices, and together with low energy prices or surplus (which solar can cause, but nuclear should too) that allowed even governments to make cheap money (and do hashcat cracking, too). If I were North Korea I'd know my target. Turns out they did, but in a different way. That was around 2014. Add on top of this Stadia and GeForce Now as examples of renting GPUs for gaming (there are more, and Stadia flopped).

I didn't mention LLMs since that has been the most recent development.

All in all, it turns out GPUs are more valuable than what they were sold for if your goal isn't personal computer gaming. Hence the prices have gone up.

Now, if you want to thoroughly investigate this market you need to figure out what large foreign forces (governments, businesses, and criminal enterprises) use these GPUs for. The US government has long been aware of the above; hence the export restrictions on GPUs, which are meant to slow the opponent's catching up. The opponent is the non-free world (China, North Korea, Russia, Iran, ...), though the current administration is acting insane.

rocqua

Why would Intel willingly join this cartel, then?

Their GPU business is a slow upstart. If they have a play that could massively disrupt the competition, and has a small chance of epic failure, that should be very attractive to them.

zdw

I doubt you'd get linear scaling of price/capacity - the larger capacity modules are more expensive per GB than smaller ones, and in some cases are supply constrained.

The number of chips per memory channel is usually pretty low (1 or 2 on most GPUs), so GPUs tend to have to scale out their memory bus widths to get to higher capacity. That's expensive and takes up die space, and for the conventional case (games) isn't generally needed on low-end cards.
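
To make that concrete (generic GDDR6 numbers, not tied to any particular card): each GDDR6 device occupies a 32-bit channel, so the capacity falls out of the bus width:

    128-bit bus / 32 bits per device = 4 devices
    4 x 2GB (16Gbit) parts = 8GB, or 16GB in clamshell mode (2 devices per channel)

Going beyond that means waiting for higher-density parts or widening the bus, with the die-area and board cost that implies.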

What really needs to happen is someone needs to make some "system seller" game that is incredibly popular and requires like 48GB of memory on the GPU to build demand. But then you have a chicken/egg problem.

Example: https://wccftech.com/nvidia-geforce-rtx-5090-128-gb-memory-g...

cmxch

Maxsun does offer a high-VRAM (48GB) dual Arc Pro B60, but the only US availability has it priced on par with a 5090 at ~$3000.

PostOnce

I think that's actually two GPUs on one card, and not a single GPU with 48GB VRAM

hengheng

Needs a PCIe bifurcation chip on the main board for all we know. Compatibility is going to be fun.

YetAnotherNick

> I'd much prefer paying 3x cost for 3x VRAM

Why not just buy 3 cards then? These cards don't require active cooling anyways, and you can fit 3 in a decent-sized case. You will get 3x the VRAM bandwidth and 3x the compute. And if your use case is LLM inference, it will be a lot faster than one card with 3x the VRAM.

_zoltan_

Because then, instead of RAM bandwidth, you're dealing with PCIe bandwidth, which is way less.

mythz

Also less power efficient, takes up more PCIe slots, and a lot of software doesn't support GPU clustering. I already have 4x 16GB GPUs, which are unable to run large models exceeding 16GB.

Currently I'm running them in different VMs to be able to make full use of them. I used to have them running in different Docker containers, however OOM exceptions would frequently bring down the whole server, which running in VMs helped resolve.
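
For anyone sticking with the container route, a minimal sketch of one mitigation (image and model names are placeholders; assumes NVIDIA GPUs with the nvidia-container-toolkit): pin each container to one GPU and cap its host RAM, so an OOM gets handled by the container's cgroup instead of the system OOM killer:

    docker run -d --name llm-gpu0 --gpus '"device=0"' \
      --memory 24g --memory-swap 24g --restart unless-stopped \
      -p 8000:8000 my-inference-image --model my-model
    docker run -d --name llm-gpu1 --gpus '"device=1"' \
      --memory 24g --memory-swap 24g --restart unless-stopped \
      -p 8001:8000 my-inference-image --model my-model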

YetAnotherNick

For LLM inference at batch size 1, it's hard to saturate PCIe bandwidth, especially with less powerful chips. You would get close to linear performance [1]. The obvious issue is that running things on multiple GPUs is harder, and a lot of software doesn't fully support it or isn't optimized for it.

[1]: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...
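
As a concrete sketch of what a multi-GPU split looks like in practice with llama.cpp (which, if I recall, is what [1] benchmarks; the model path and split ratios are placeholders):

    # Spread layers across 3 GPUs. --split-mode layer keeps inter-GPU
    # traffic low; --split-mode row trades more PCIe traffic for
    # potentially better per-token speed on fast links.
    llama-server -m ./models/my-model-q4_k_m.gguf \
      --n-gpu-layers 99 --split-mode layer --tensor-split 1,1,1 \
      --port 8080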

kristopolous

You want an M3 ultra Mac studio

ginko

That only runs Mac OS so it's useless.

kristopolous

For AI workloads? You're wrong. I use mine as a server, I just ssh into it. I don't even have a keyboard or display hooked up to it.

You can get 96GB of VRAM and about 40-70% the speed of a 4090 for $4000.

It especially makes sense when you are running a large number of applications you want to talk to each other... the only way to do that on a 4090 is to hit disk, shut the application down, start up the other application, read from disk... it's slowwww. The other option is a multi-GPU system, but then it gets into real money.

Trust me, it's a gamechanger. I just have it sitting in a closet. Use it all the time.

The other nice thing is, unlike with any Nvidia product, you can walk into an Apple store, pay the retail price and get it right away. No scalpers, no hunting.

doctorpangloss

they don't manufacture RAM, so none of the margin goes to them

kube-system

They sell the completed card, which has margin. You can charge more money for a card with more vram.

blitzar

Or shrink the margin down to just 50% and sell 10x the number of cards (for the week or two it would take Nvidia to announce a 5090 with 128GB).

nullc

Even if they put out some super high-memory models and just passed the RAM through at cost, it would increase sales -- potentially quite dramatically -- increase their total income a lot, and give them a good chance of transitioning to being a market leader rather than an also-ran.

AMD has lagged so long because of the software ecosystem but the climate now is that they'd only need to support a couple popular model architectures to immediately grab a lot of business. The failure to do so is inexplicable.

I expect we will eventually learn that this was about yet another instance of anti-competitive collusion.

doctorpangloss

the whole RAM industry was twice sanctioned for price fixing, so I agree: any business that deals with RAM has, more likely than other industries by a lot, anti-competitive collusion

Alifatisk

A feature I haven't seen anyone comment on yet is Project Battlematrix [1][2] with these cards, which allows for multi-GPU AI orchestration. It's a feature Nvidia offers for enterprise AI workloads (Run:ai), but Intel is bringing it to consumers.

1. https://youtu.be/iM58i3prTIU?si=JnErLQSHpxU-DlPP&t=225

2. https://www.intel.com/content/www/us/en/developer/articles/t...
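
For a sense of what multi-GPU orchestration means on the serving side, here's a minimal tensor-parallel sketch using vLLM (chosen as a common open stack, not necessarily what Battlematrix ships; the model name is a placeholder, and it assumes a vLLM build with the right backend for your GPUs):

    vllm serve my-org/my-model --tensor-parallel-size 2 --port 8000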

wewewedxfgdf

The new CEO of Intel has said that Intel is giving up competing with Nvidia.

Why would you bother with any Intel product with an attitude like that? It gives zero confidence in the company. What business is Intel in, if not competing with Nvidia and AMD? Is it giving up competing with AMD too?

jlei523

  The new CEO of Intel has said that Intel is giving up competing with Nvidia.
No, he said they're giving up competing against Nvidia in training. Instead, he said Intel will focus on inference.

That's the correct call in my opinion. Training is far more complex and will soon span multiple data centers. Intel is too far behind. Inference is much simpler and likely a bigger market going forward.

SadTrombone

AMD has also often said that they can't compete with Nvidia at the high end, and as the other commenter said: market segments exist. Not everyone needs a 5090. If anything, people are starved for options in the budget/mid-range market, which is where Intel could pick up a solid chunk of market share.

pshirshov

Regardless of what they say, they CAN compete in training and inference; there is literally no alternative to the W7900 at the moment. That's 4080 performance with 48GB VRAM for half of what similar CUDA devices would cost.

grim_io

How good is it, though, compared to a 5090 with 32GB? The 5090 has double the memory bandwidth, which is very important for inference.

In many cases where 32GB won't be enough, 48 wouldn't be enough either.

Oh and the 5090 is cheaper.

Mistletoe

I’m interested in buying a GPU that costs less than a used car.

imtringued

Jerry-rig some 32GiB MI50s together and then hate yourself for choosing AMD.

ksec

>What business is Intel in, if not competing with Nvidia and AMD.

The foundry business. The latest report on discrete graphics market share has Nvidia at 94%, AMD at 6%, and Intel at 0%.

I may still have another 12 months to go, but in 2016 I made a bet against Intel engineers on Twitter and offline, suggesting GPUs were not a business they wanted to be in, or that they were at least too late. They said at the time they would get 20% market share minimum by 2021. I said I would be happy if they did even 20% by 2026.

Intel is also losing money, and they need cash flow to compete in the foundry business. I have long argued they should have cut off the GPU segment when Pat Gelsinger arrived; it turns out Intel bound themselves to GPUs through all the government contracts and supercomputers they promised to make. Now that they have delivered all or most of it, they will need to think about whether to continue or not.

Unfortunately, unless the US points guns at TSMC, I just don't see how Intel will be able to compete, as Intel needs to be in a leading-edge position in order to command the margin required for Intel to function. Right now, in terms of density, Intel 18A is closer to TSMC N3 than N2.

baq

The problem is they can’t not attempt or they’ll simply die of irrelevance in a few years. GPUs will eat the world.

If Nvidia gets as complacent as Intel became when it had the market share in the CPU space, there is opportunity for Intel, AMD, and others in Nvidia's margin.

MangoToupe

> Unfortunately, unless the US points guns at TSMC

They may not have to, frankly, depending on when China decides to move on Taiwan. It's useless to speculate—but it was certainly a hell of a gamble to open a SOTA (or close to it—4 nm is nothing to sneeze at) fab outside of the island.

grg0

Zero confidence why? Market segments exist.

I want hardware that I can afford and own, not AI/datacenter crap that is useless to me.

ryao

I thought he said that they gave up on competing with Nvidia at training, not in general. He left the door open to compete on inference. Did he say otherwise more recently?

mathnode

Because we don't need data centre hardware to run domestic software.

jasonfrost

Isn't Intel the only largely domestic fab?

MangoToupe

I don't really want an nvidia gpu; it's too expensive and I won't use most of it. This actually looks attractive.

littlecranky67

I am confused, as a lot of comments here seem to argue about gaming, but isn't this supposed to be a workstation card, hence not intended for games? The Phoronix review also seems to focus only on compute usage, not gaming.

jazzyjackson

Huh, I didn't realize these were just released. I came across this while looking for a GPU with AV1 hardware encoding, putting a shopping cart together for a mini-ITX Xeon server for all my ffmpeg shenanigans.

I like to Buy American when I can, but it's hard to find out which fabs various CPUs and GPUs are made in. I read Kingston does some RAM here and Crucial some SSDs. Maybe the silicon is fabbed here, but everything I found is "assembled in Taiwan", which made me feel like I should get my dream machine sooner rather than later.

bane

You may want to check whether your Xeon already supports hardware encoding of AV1 in the iGPU. I saved a bundle building a media server when I realized the iGPU was more than sufficient (and more efficient than chucking a GPU in the case).

I have a service that runs continuously and reencodes any videos I have into h265 and the iGPU barely even notices it.
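
For reference, that kind of Quick Sync re-encode looks roughly like this (a sketch; the file names and quality setting are placeholders, and on Arc / Core Ultra parts you could swap hevc_qsv for av1_qsv to target AV1 instead):

    ffmpeg -hwaccel qsv -hwaccel_output_format qsv -i input.mp4 \
      -c:v hevc_qsv -global_quality 24 -preset slower \
      -c:a copy output.mkv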

jazzyjackson

Looks like Core Ultra is the only chip with an integrated Arc GPU with AV1 encode. The Xeon series I was looking at, the LGA 1700 socket E-2400s, definitely don't have an iGPU. (The fact that the motherboard I'm looking at only has VGA is probably a clue xD)

I'll have to consider pros and cons with Ultra chips, thanks for the tip.

jeffbee

What recent Xeon has the iGPU? Didn't they stop including them ~5 years ago?

Havoc

If you don't need it for AI shenanigans then you're better off with the smaller Arcs for under $100... they can do AV1 too.

jauntywundrkind

I don't know how big the impact really is, but Intel is pretty far behind on encoder quality for most codecs. On AV1, though, they seem pretty competitive? Neat.

Apologies for the video link, but here's a recent, pretty in-depth comparison: https://youtu.be/kkf7q4L5xl8

dangus

I have the answer for you: Intel's GPU chips are made on TSMC's process. They are not made in Intel-owned fabs.

There really is no such thing as "buying American" in the computer hardware industry unless you are talking about the designs rather than the assembly. There are also critical parts of the lithography process that depend on US technology, which is why the US is able to enforce certain sanctions (and due to some alliances with other countries that own the other parts of the process).

Personally I think people get way too worked up about being protectionist when it comes to global trade. We all want to buy our own country's products over others but we definitely wouldn't like it if other countries stopped buying our exported products.

When Apple sells an iPhone in China (and they sure buy a lot of them), Apple is making most of the money in that transaction by a large margin, and in turn so are you since your 401k is probably full of Apple stock, and so are the 60+% of Americans who invest in the stock market. A typical iPhone user will give Apple more money in profit from services than the profit from the sale of the actual device. The value is really not in the hardware assembly.

In the case of electronics products like this, almost the entire value add is in the design of the chip and the software that is running on it, which represents all the high-wage work, and a whole lot of that labor in the US.

US citizens really shouldn't envy jobs where people sit at an electronics bench doing repetitive assembly work for 12 hours a day in a factory, or wish we had more of those jobs in our country. They should instead be focused on making high-level education more available/affordable so that they stay on top of the economic food chain, where most/all citizens do high-value work, rather than keeping education expensive and begging foreign manufacturers to open satellite factories to employ our uneducated masses.

I think the current wave of populist protectionist ideology is essentially blaming the wrong causes for declining affordability and increasing inequality for the working class. Essentially, people think that bringing the manufacturing jobs back and reversing globalism will right the ship on income inequality, but the reality is that the reason equality was so good for Americans in the mid-century was that the wealthy were taxed heavily, European manufacturing was decimated in WW2, and labor was in high demand.

The above of course is all my opinion on the situation, and a rather long tangent.

jazzyjackson

Thanks for that perspective. I am just puzzling over why none of this says Made in USA on it. I can get socks and t-shirts woven in North Carolina, which is nice, and furniture made in Illinois. That's all a resurgence of 'arts & crafts' I suppose, valuing a product made in small batches by someone passionate about quality instead of just getting whatever is lowest cost. Suppose there's not much in the way of artisan silicon yet :)

EDIT: I did think about what the closest thing to artisan silicon is, thought of the POWER9 CPUs, and found out those are made in the USA. The Talos II is also manufactured in the US, with the IBM POWER9 processors fabbed in New York while the Raptor motherboards are manufactured in Texas, which is also where their systems are assembled.

https://www.phoronix.com/review/power9-threadripper-core9

dangus

I would go even further than that and point out that the US still makes plenty of cheap or just "normal" priced, non-artisan items! You'll actually have a hard time finding grocery store Consumer Packaged Goods (CPG) made outside of the US and Canada - things like dish soap, laundry detergent, paper products, shampoo, and a whole lot of food.

I randomly thought of paint companies as another example, with Sherwin-Williams and PPG having US plants.

The US is still the #2 manufacturer in the world, it's just a little less obvious in a lot of consumer-visible categories.

mschuster91

The thing with iPhone production is not about producing iPhones per se, it's about providing a large-volume customer for the supply chain below it - basic stuff like SMD resistors, capacitors, ICs, metal shields, frames, god knows what else - because you need that available domestically for weapons manufacturing, should China ever think of snacking on Taiwan. But a potential military market in 10 years is not even close to "worth it" for any private investors or even the government to build out a domestic supply chain for that stuff.

Tepix

If you buy Intel Arc cards for their competitive video encoding/decoding capabilities, it appears that all of them are still capped at 8 parallel streams. The "B" series has more headroom at high resolutions and bitrates; on the other hand, some "A" series cards need only a single PCIe slot, so you can stick more of them into a single server.

syntaxing

Kinda bummed that it’s $50 more than originally said. But if it works well, a single slot card that can be powered by the PCIe slot is super valuable. Hoping there will be some affordable prebuilds so I can run some MoE LLM models.

mrheosuper

Am I missing something? Because it looks like a dual-slot GPU.

jeffbee

Competing workstation cards like the RTX A2000 also do not need power connectors.

syntaxing

"competing" means 12GB of VRAM at a 600ish price point though...

wink

I really wonder who this is for?

It's not competing with AMD/Nvidia at twice the price in terms of performance, but it's also too expensive for a cheap gaming rig. And then there are people who are happy with integrated graphics.

Maybe I'm just lacking imagination here, I don't do anything fancy on my work and couch laptops and I have a proper gaming PC.

ryukoposting

Last time I had anything to do with the low-mid range pro GPU world, the use case was 3D CAD and certain animation tasks. That was ~10 years ago, though.

numpad0

CAD and medical were always the use cases for high-end workstations and professional GPUs. Companies designing jets and cars need more than an iGPU, but they prefer slim desktops and something distanced from games.

hackerfoo

I’m interested in putting one of these in a server because of the relatively low power usage and compact size.

askl

Why would you need a dedicated graphics card in a server? Usually you wouldn't even have a monitor connected.

akaij

My guess would be video encoding/decoding and rendering.

imtringued

Some people are tired of rendering everything on the CPU via AVX.

sznio

Accelerated AV1 encoding for a home server.

bitmasher9

It's interesting that it uses 4 DisplayPorts and not a single HDMI port.

Is HDMI seen as a “gaming” feature, or is DP seen as a “workstation” interface? Ultimately HDMI is a brand that commands higher royalties than DP, so I suspect this decision was largely made to minimize costs. I wonder what percentage of the target audience has HDMI-only displays.

Aurornis

DisplayPort is the superior option for monitors. High end gaming monitors will have DisplayPort inputs.

Converting from DisplayPort to HDMI is trivial with a cheap adapter if necessary.

HDMI is mostly used on TVs and older monitors now.

ThatPlayer

I'd say that's a more recent development though, because of how long it took for DisplayPort 2 products to make it to market. On both my RTX 4000 series GPU and my 1440p 240Hz gaming OLED monitor, HDMI 2.1 (~42 Gigabit) is the higher-bandwidth port over DisplayPort 1.4 (~26 Gigabit), so I use the HDMI ports. 26 Gigabit isn't enough for 1440p 240Hz at 10-bit HDR colour. You can do it with DSC, but that comes with its own issues.
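
The back-of-the-envelope numbers (mine, using nominal link rates):

    2560 x 1440 x 240 Hz x 30 bpp (10-bit RGB) ~= 26.5 Gbit/s of pixel data
    DP 1.4 (HBR3): 32.4 Gbit/s raw, ~25.9 Gbit/s after 8b/10b coding
    HDMI 2.1 (FRL): 48 Gbit/s raw, ~42.7 Gbit/s after 16b/18b coding

So even before blanking overhead, DP 1.4 falls just short without DSC, while HDMI 2.1 has headroom.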

Only now are DisplayPort 2 monitors coming out

unsnap_biceps

HDMI is still valuable for those of us who use KVMs. Cheap DisplayPort KVMs don't have EDID emulation, and expensive DisplayPort KVMs just don't work (in my experience).

Ardon

The only well-reviewed DisplayPort KVMs I'm aware of are from Level1Techs: https://www.store.level1techs.com/products/kvm

Not cheap though. And also not 100% caveat-free.

cjbconnor

4x Mini DP is common for low profile workstation cards, see the Quadro P1000, T1000, Radeon Pro WX 4100, etc.

klodolph

This is the right answer. I see a bunch of people talking about licensing fees for HDMI, but when you’re plugging in 4 monitors it’s really nice to only use one type of cable. If you’re only using one type of cable, it’s gonna be DP.

zh3

You can also get GT730s with 4x HDMI - not fast, but great for office work and display/status board type scenarios. Single-slot passive design too, so you can stack several in a single PC. Currently just £63 each in the UK.

[0] https://www.amazon.co.uk/ASUS-GT730-4H-SL-2GD5-GeForce-multi...

accrual

Yeah, I recall even old Quadro cards in early Core era hardware often had quad mini DisplayPort.

nottorp

HDMI is shit. If you've never had problems with random machine hdmi port -> hdmi cable -> hdmi port on monitor you just haven't had enough monitors.

> Is HDMI seen as a “gaming” feature

It's a TV content protection feature. Sometimes it degrades the signal so you feel like you're watching TV. I've had a monitor/machine combination that identified my monitor as a TV over HDMI and switched to YCbCr just because it wanted to, with assorted color bleed on red text.

amiga-workbench

Because you can actually fit 4 of them without impinging on airflow from the heatsink. Mini HDMI is mechanically ass and I've never seen it anywhere but on junky Android tablets. DP also isn't proprietary.

dale_glass

HDMI requires paying license fees. DP is an open standard.

mananaysiempre

As far as things I care about go, the HDMI Forum’s overt hostility[1] to open-source drivers is the important part, but it would indeed be interesting to know what Intel cared about there.

(Note that some self-described “open” standards are not royalty-free, only RAND-licensed by somebody’s definition of “R” and “ND”. And some don’t have their text available free of charge, either, let alone have a development process open to all comers. I believe the only thing the phrase “open standard” reliably implies at this point is that access to the text does not require signing an NDA.

DisplayPort in particular is royalty-free—although of course with patents you can never really know—while legal access to the text is gated[2] behind a VESA membership with dues based on the company revenue—I can’t find the official formula, but Wikipedia claims $5k/yr minimum.)

[1] https://hackaday.com/2023/07/11/displayport-a-better-video-i...

[2] https://vesa.org/vesa-standards/

Solocle

See, the openness is one reason I'd lean towards Intel Arc. They literally provide programming manuals for Alchemist, which you could use to implement your own driver for the card. Far more complete and less whack than dealing with AMD's AtomBIOS.

As someone who has toyed with OS development, including a working NVMe driver, that's not to be underestimated. I mean, it's an absurd idea, graphics is insanely complex. But documentation makes it theoretically possible... a simple framebuffer and 2d acceleration for each screen might be genuinely doable.

https://www.x.org/docs/intel/ACM/

stephen_g

I'm not 100% sure, but last time I looked it wasn't openly available anymore - it may still be royalty-free, but when I tried to download the specification the site said you had to be a member of VESA to download the standard (it is still possible to find earlier versions openly).

KetoManx64

There are inexpensive ($10ish) converters that do DP > HDMI, but the inverse is much more expensive ($50-100)

jsheard

That's because DP sources can (and nearly always do) support encoding HDMI as a secondary mode, so all you need is a passive adapter. Going the other way requires active conversion.

I assume you have to pay HDMI royalties for DP ports which support the full HDMI spec, but older HDMI versions were supersets of DVI, so you can encode a basic HDMI compatible signal without stepping on their IP.

stephen_g

As long as the port supports it passively (called "DP++ Dual Mode"). If you have a DP-only port then you need an active converter, which matches the latter pricing you mentioned.

glitchc

The latest DP standard has higher bandwidth and can support higher framerates at the same resolution.

Havoc

Good pricing for 16GB of VRAM. I can see that finding a use in some home servers.