
Why fastDOOM is fast

226 comments · March 4, 2025

gwern

> The MVP patch of v0.1 is without a doubt build 36 (e16bab8). The "Cripy optimization" turns status bar percentage rendering into a noop if the values have not changed. This prevents rendering to a scrap buffer and blitting to the screen, for a total boost of 2 fps. At first I could not believe it. I assumed my toolchain had a bug. But cherry-picking this patch onto PCDOOMv2 confirmed the tremendous speed gain.

Good example of how the bottlenecks are often not where you think they are, and why you have to profile & measure (which I assume Viti95 did in order to find that speedup so early on). The status bar percentage?! Maybe there's something about the Doom arch which makes that relatively obvious to experts, but I certainly would've never guessed that was a bottleneck a priori.
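
For illustration, here is the shape of that optimization as a minimal C sketch. The names and structure are hypothetical, not FastDoom's actual code; the point is that the expensive render-and-blit path only runs when the on-screen value is stale:

    /* Hypothetical sketch of the "skip unchanged widgets" idea. */
    typedef struct {
        int last_value;   /* value currently on screen */
        int dirty;        /* force a redraw, e.g. after a screen wipe */
    } st_percent_t;

    static void st_draw_percent(st_percent_t *w, int value)
    {
        if (!w->dirty && value == w->last_value)
            return;   /* noop: the pixels are already correct */

        /* ...render digits to the scrap buffer, blit to the screen... */

        w->last_value = value;
        w->dirty = 0;
    }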

robocat

Example: "Our app was mysteriously using 60% CPU and 25% GPU. It turned out this was due to a tiny CSS animation [of an equaliser icon]"

https://www.granola.ai/blog/dont-animate-height

rplnt

I remember Slack eating up my CPU when it had to display several animated emojis at once. Use some 20+ emojis and the Intel MacBook Pro couldn't handle it. Luckily they knew, and there was an option to disable the animation. I have no idea if they've fixed it since, or if it's one of those things that was "fixed" by the M1.

Would like to see a write-up on how it's even possible to achieve that when PCs from 20-30 years ago had no issue with such a task.

throwup238

> Would like to see a write-up on how it's even possible to achieve that when PCs from 20-30 years ago had no issue with such a task.

Electron.

ben_w

> Would like to see a write-up on how it's even possible to achieve that when PCs from 20-30 years ago had no issue with such a task.

At a guess, by something in the UI framework turning an O(n) task into an O(n^2) task. I've seen that happen in person: an iPhone app was taking 20 minutes(!) to start up under some conditions, and the developer responsible insisted the code couldn't possibly be improved. The next day I'd found the unnecessary O(n^2) op and reduced that to a few hundred milliseconds.
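
The classic way this happens, sketched in C (the scenario is invented, but the shape is the usual one: an append loop that re-scans everything appended so far):

    #include <string.h>

    /* Accidentally O(n^2): strcat() walks the whole destination string
       on every call, so appending n items costs 1 + 2 + ... + n scans. */
    static void build_list_slow(char *out, const char *items[], int n)
    {
        out[0] = '\0';
        for (int i = 0; i < n; i++)
            strcat(out, items[i]);   /* re-scans everything so far */
    }

    /* O(n): remember where the string ends instead of re-finding it. */
    static void build_list_fast(char *out, const char *items[], int n)
    {
        char *end = out;
        for (int i = 0; i < n; i++) {
            size_t len = strlen(items[i]);
            memcpy(end, items[i], len);
            end += len;
        }
        *end = '\0';
    }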

The over-abstraction of current programming is, I think, a mistake — I can see why the approach in React and SwiftUI is tempting, why people want to write code that way, but I think it puts too much into magic black-boxes. If you change your thinking just a bit, the older ways of writing UI are not that difficult, and much more performant.

I have a bunch of other related opinions about e.g. why VIPER is slightly worse than an entire pantheon of god classes combined with a thousand lines inside an if-block: https://benwheatley.github.io/blog/2024/04/07-21.31.19.html

bluedino

> I remember Slack eating up my CPU when it had to display several animanted emojis at once.

I remember using phpBB back in the late 2000s, viewing a page that had at least 100 animated emoticons on it.

It would grind IE6 to a halt. But then I tried Chrome or Firefox (I forget which) and it didn't even blink showing the same page. I even remember reading developer posts about things like that at the time.

grrowl

Or more recently, NPM's infamous "Progress bar noticeably slows down npm install #11283" issue

[1]: https://github.com/npm/npm/issues/11283

guappa

A thing I discovered by myself as a teenager: do not update progress bars on every loop iteration.
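
Something like this, in C; the four-redraws-per-second rate is an arbitrary illustrative choice:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long total = 50000000;
        clock_t last_draw = 0;

        for (long i = 0; i < total; i++) {
            /* ... the actual work ... */

            clock_t now = clock();
            if (now - last_draw >= CLOCKS_PER_SEC / 4) {   /* ~4 redraws/s */
                fprintf(stderr, "\r%3ld%%", i / (total / 100));
                last_draw = now;
            }
        }
        fprintf(stderr, "\r100%%\n");
        return 0;
    }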

ericmcer

Why was the solution to optimize the animation instead of... using a static asset?

moffkalast

We paid for the whole CSS spec and we're gonna use the whole CSS spec.

teaearlgraycold

That would be more optimized for sure. But it wouldn't be scalable and would take more work to create than what looks like a pretty simple change.

jonas21

Because it provides an indication that the app is receiving your voice input?

ortsa

This reminds me of having to go into spotify's package files to track down their version of this animation (an animated svg of a bar chart) and kill it, because it would destroy performance on my PC so badly that it was affecting other programs, causing hitches and freezes.

The animation's still there — and my PC is better now, so it doesn't stutter — but I'm willing to bet it's still burning waaay too many watts, for something so trivial.

emmanueloga_

This is why it pays off to turn on "Paint Flashing" [1] in the web console, even if just from time to time.

--

1: https://web.dev/articles/simplify-paint-complexity-and-reduc...

moffkalast

Me, who already renders a canvas on the whole page so it highlights the whole thing constantly: "Hmm yes the painting here is made out of painting."

Dwedit

CSS animations make Flash look efficient by comparison.


inDigiNeous

Reminds me of the performance optimization somebody discovered in Super Mario World for the SNES, where displaying the player score in the status bar was very inefficient, taking about 1/6 of the allocated frame time.

"SMW is incredibly inefficient when it displays the player score in the status bar. In the worst case (playing as Luigi, both players with max score), it can take about a full 1/6 of the entire frame to do so, lowering the threshold for slowdown. Normally, the actual amount of processing time is roughly proportional to the sum of the digits in Mario's score when playing as Mario, and the to the sum of the digits in both players' scores when playing as Luigi. This patch optimizes the way the score is stored and displayed to make it roughly constant, slightly faster than even in the best case without."

https://www.smwcentral.net/?p=section&a=details&id=35746
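
The idea behind such a patch, sketched in C rather than 65816 assembly (everything here is hypothetical, not the actual patch): keep the score as decimal digits, so adding points touches at most a few digits and drawing does identical work every frame.

    #include <stdio.h>

    #define SCORE_DIGITS 7

    typedef struct {
        unsigned char digit[SCORE_DIGITS];   /* digit[0] = ones place */
    } score_t;

    static void score_add(score_t *s, unsigned points)
    {
        for (int i = 0; i < SCORE_DIGITS && points; i++) {
            unsigned v = s->digit[i] + points % 10;
            s->digit[i] = v % 10;
            points = points / 10 + v / 10;   /* fold the carry into the rest */
        }
    }

    static void score_draw(const score_t *s)
    {
        /* Constant work: exactly SCORE_DIGITS writes per frame. */
        for (int i = SCORE_DIGITS - 1; i >= 0; i--)
            putchar('0' + s->digit[i]);      /* stand-in for a tile write */
    }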

barbariangrunge

As a gamedev, those slowdowns are common. UI rendering can be a real killer, due to transparency, layering, having to redraw things, and especially triggering allocations. Comparing old vs. new state before allowing a redraw is really helpful. I found layers and transparency were a killer in CSS as well in one project, though there it was more about reducing the number of layers.

lomase

Drawing to the screen is an IO operation; it's going to be slow.

RankingMember

My favorite example of this is the GTA Online insane loading-time issue that ended up being due to poor handling of a 10MB JSON file (and was finally tracked down by someone outside their org). The fix took a 6-minute load time down to just under 2 minutes:

https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
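
The core of that bug, sketched in C with whitespace-separated numbers standing in for the JSON (a simplification of the post's finding): on common C runtimes, sscanf() takes the length of the entire remaining input on every call, so scanning a 10MB buffer token by token goes quadratic.

    #include <stdio.h>
    #include <stdlib.h>

    /* O(n^2): each sscanf() call re-measures the whole remaining buffer. */
    static void parse_slow(const char *buf)
    {
        int value, used;
        while (sscanf(buf, " %d%n", &value, &used) == 1)
            buf += used;
    }

    /* O(n): strtol() just advances a pointer, no hidden strlen(). */
    static void parse_fast(const char *buf)
    {
        char *end;
        for (;;) {
            long value = strtol(buf, &end, 10);
            if (end == buf)
                break;        /* no more numbers */
            (void)value;      /* ...store it somewhere... */
            buf = end;
        }
    }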

pjs_

Reminds me of the incident where npm was 2X slower than it should have been because of the fancy terminal progress bar:

https://news.ycombinator.com/item?id=10974929

ognarb

I had a similar case when I worked on a Matrix client (NeoChat): the other devs and I were wondering why loading an account was so slow. Removing the loading spinner made it so much faster, because the animation rendering the spinner used 100% CPU.

jiggawatts

A common one for server apps is logging, especially to the console.

It’s far more expensive than people assume and in many cases is single-threaded. This can make logging the scalability bottleneck!
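
An illustrative C sketch (not from the comment): stderr is unbuffered by default, so every log line costs a write() syscall plus a stream lock. Giving it a large user-space buffer batches the writes and shows how much of the "logging cost" is really syscall overhead:

    #include <stdio.h>

    int main(void)
    {
        static char buf[1 << 16];
        setvbuf(stderr, buf, _IOFBF, sizeof buf);   /* batch log output */

        for (int i = 0; i < 100000; i++)
            fprintf(stderr, "request %d ok\n", i);  /* ~1 syscall per 64 KiB */

        fflush(stderr);   /* don't lose the tail on exit */
        return 0;
    }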

rcxdude

Logging to a 'serial' console can stall the whole kernel, wrecking latency, and this easily shows up in VMs since a lot of them emulate a lowest-common-denominator UART interface as the kernel console.

qingcharles

The original Doom must have been heavily profiled by id though, surely? Obviously a bunch of things were missed, but I was in game dev around Doom's time and profiling was half the job back then.

pjc50

Well yes - it was an incredible feat of performance engineering. It's just that since then, someone has managed to make an extra three thousand commits' worth of micro-optimizations.

qingcharles

True. And Carmack and team only had so many hours in the day.

ramses0

Until Razer comes in and lands a patch authored by an intern that tanks framerates to 1/3rd by doing glowy USB shit every frame...

https://www.reddit.com/r/Doom/comments/8a1m9s/psa_deactivate...

fabiensanglard

What tools did you have in 1993? From what I understood, id Software used NeXT machines because the DOS tooling was not up to the task.

qingcharles

Yikes. Now you're pushing my memory. I believe there was a profiler built into Visual Studio, and we had some external tool, too. We had some sort of setup with a serial cable to debug and profile; the EXE would download onto the test PC next to us.

Problem for id is that you have no sane way of profiling the DOS port on NeXT.

rasz

There is no evidence of any profiling in the source code. It wasn't a thing you did in 1993. The best you could do was compile the whole game with your changes, run a benchmark loop, and compare results.

jlokier

The gprof profiler was used long before 1993. It dates back to 1982.

I did game engine dev professionally in 1994 (and non-professionally before that). We profiled both time and memory use, using GCC and gprof for function-level profiling, recording memory allocation statistics by type and pool (and measuring allocation time), microbenchmarking the time spent sending commands to blitters and coprocessors, visually measuring the time in different phases of the frame rendering loop to 10-100 microsecond granularity by toggling border or palette registers, measuring time spent sorting display lists, time in collision detection, etc.

You might regard most of those as not what you mean by profiling, but the GCC/gprof stuff certainly counts, as it provides a detailed profile after a run, showing hot functions and call stacks to investigate.

It's true that most of the time, we just changed the code, ran the game again and just looked at the frame rate or even just how fluid it felt, though :-)
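
For anyone who hasn't seen it, the GCC/gprof loop in miniature (the demo program is invented):

    /* Build instrumented, run, then read the profile:
     *
     *   cc -pg demo.c -o demo
     *   ./demo               # writes gmon.out
     *   gprof demo gmon.out  # flat profile + call graph
     */
    #include <stdio.h>

    static long hot(long n)    /* should dominate the flat profile */
    {
        long acc = 0;
        for (long i = 0; i < n; i++)
            acc += i % 7;
        return acc;
    }

    static long cold(long n)   /* should barely register */
    {
        return n * 2;
    }

    int main(void)
    {
        printf("%ld\n", hot(100000000) + cold(42));
        return 0;
    }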

vardump

I was profiling my code in the early nineties. Sure, I created primitive profiling tools on my own, but regardless.

While 486 AGI stalls were trivial to reason about, the Pentium especially changed the game with its U and V pipes. You could no longer trivially eyeball which code was faster. Profiling was a must.

Heck, I even "profiled" my C64 routines in the eighties by changing the border color. You could clearly see, right on the screen, how long your code took to execute.
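
The border trick, in cc65-style C for the curious (profiled_routine is a hypothetical stand-in): the VIC-II border color register lives at $D020, and since the raster beam sweeps at a fixed rate, the height of the white band on screen is your execution time.

    #define BORDER (*(volatile unsigned char *)0xD020)

    void profiled_routine(void);   /* the code under test (assumed) */

    void frame(void)
    {
        BORDER = 1;                /* white: timing starts */
        profiled_routine();
        BORDER = 0;                /* black: timing ends */
    }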

on_the_train

I'm currently at the second job in my life where updating the progress bar takes up a tremendous percentage of the overall runtime, because our "engineers" have never used a profiler. At a large international tech giant :(


Cthulhu_

I do understand it though: at bigger companies you're less likely to want or need to worry about smaller things. Of course, I'm willing to bet money that they implemented a progress bar themselves instead of using an off-the-shelf one.

on_the_train

MFC Progress bar. Like it's 1999

yjftsjthsd-h

> To get the big picture of performance evolution over time, I downloaded all 52 releases of fastDOOM, PCDOOMv2, and the original DOOM.EXE, wrote a Go program to generate a RUN.BAT running -timedemo demo1 on all of them, and mounted it all with mTCP's NETDRIVE.

I'm probably not the real target audience here, but that looked interesting; I didn't think there were good storage-over-network options that far back. A little searching turns up https://www.brutman.com/mTCP/mTCP_NetDrive.html - that's really cool:)

> NetDrive is a DOS device driver that allows you to access a remote disk image hosted by another machine as though it was a local device with an assigned drive letter. The remote disk image can be a floppy disk image or a hard drive image.

jandrese

> I didn't think there were good storage-over-network options that far back.

Back in school in the early 90s we had one computer lab where around 25 Mac Plus machines were daisy-chained via AppleTalk to a Mac II. All of the Plus machines mounted their filesystems from the Mac II. It was painfully slow; students lost 5-10 minutes at the start of class trying to get the word processor started. Heck, the Xerox Altos also used network mounts for their drives.

If you have networking the first thing someone wants to do is copy files, and the most ergonomic way is to make it look just like a local filesystem.

DOS was a bit behind the curve because there was no networking built-in, so you had to do a lot of the legwork yourself.

ben7799

There was already relatively deep penetration of this stuff in the corporate world and universities way back in the early 1990s.

Where I went to school we had AFS. You could sit down at any Unix workstation and log in, and it looked like your personal machine. Your entire desktop and file environment was there, and the environment automatically pointed all your paths at the correct binaries for that machine. (While we were there I remember using Sun, IBM, and SGI workstations in this environment.)

When Windows came on campus it felt like the stone ages as none of this stuff worked and SMB was horrible in comparison.

These days it feels like distributed file systems are used less and less in favor of uploading everything to various web-based cloud systems.

In some ways it feels like everything has become less and less friendly with the loss of desktop apps in favor of everything in the browser.

I guess I do use OneDrive, but it doesn't seem particularly good, even compared to 1990s options.

aaronbaugher

I was using HP Apollo systems with the Aegis OS in the early 90s, and they basically shared one filesystem across their token-ring network. Plug in two or more systems, login to one of them, and each other system's files would be under //othersystem.

I don't recall whether there were any security-minded limits you could put on what was shared, but for a team that was meant to share everything, it was pretty handy.

bluGill

I miss those days often. I have more than one computer; why is it such a big deal to share my filesystem? But many things don't work if you try (Firefox, for example, doesn't like sharing settings that way). The web is great for some things, but local applications are often much more powerful and faster.

aeyes

My school had Windows NT and the experience was similar. Any workstation looked the same and had my data.

Later I saw some Citrix setups which would load the applications from the server. That also worked pretty OK.

With Windows you definitely had all the options to make this work in the late 90s.

brazzy

> There was already relatively deep penetration of this stuff in the corporate world and universities way back in the early 1990s.

Anyone remember Novell NetWare?

bluGill

AppleTalk was horribly slow - 230.4 kbit/s. Ethernet was already 10 Mbit at the time (but a lot more expensive). General best practice would have been having the world processor installed on each machine and only saving files across the network, which would have made performance acceptable - but at the cost of needing a hard drive in all those Plus machines. (I don't recall the price of a hard drive at the time, but I'm guessing closer to $1000 for 20MB, compared to multi-TB drives for around $100 today.)

svilen_dobrev

> the worLd processor

made my day :)

you did have networking? waw

not here.. that's why Floppy-net was something.. as well as bus-304-net (like, write a floppy, hop on bus 304, go to other campus)

KerrAvon

AppleTalk over LocalTalk cables was slow -- AppleTalk could run over Ethernet at Ethernet speeds.

somat

There is a neat trick where iPXE can netboot DOS from an iSCSI target: with no drivers or config, DOS gets read-write access to a network block device (well, not a share; if you actually share it, it gets corrupted fast). It feels magical, but I think iPXE patches the BIOS to make disk access go over iSCSI.

leshokunin

I’m curious: were there NASes or WebDAV mounts in the DOS era? Obviously there was FTP and telnet and such. Just curious if remote mounts were a thing, or if the low bandwidth made them impossible.

sedatk

Yes, there was Novell NetWare, which let you mount remote drives, and there were even file-locking APIs in DOS to organize simultaneous access to files. In fact, DOOM's multiplayer code relied on part of the Novell NetWare stack (IPXODI and LSL). The remote mounts were mainly used on LANs though, not over the Internet.

bombcar

Yes, it's basically what Netware was, and Novell was a HUGE company.

SMB (the protocol Samba implements) is also from the DOS era. Most people only know of it from Windows, though.

There were various other ways to make network "drives", as the DOS drive interface was very simplistic and easy to hook into.

It was rare to find this stuff until Win95 made network connections "free" (before then, you had to buy the networking hardware, and often the software, separately!).

roywashere

In the 90s my student union ran a computer network, mainly for gaming, with DOS PCs and NetWare running on a Linux server with MARS. This was before they had internet access, but it was great for LAN gaming: Command & Conquer, Doom, or Quake. All games were started from network mounts. Fun times.

diet_mtn_dew

A network redirector interface (for 'redirecting' local resource access over a network) was added at least by DOS 3.1 in 1985, possibly earlier in 3.0 (1984).

[1] https://www.os2museum.com/wp/redirectors-and-dos-3-0/

bitwize

WebDAV didn't come out until the back half of the 90s, and it was slow to be adopted at first.

Back in the day, you could author a web page directly in GruntPage, and publish it straight to your web server provided said server had the FPSE (FrontPage Server Extensions), a proprietary Microsoft add-on, installed. WebDAV was like the open-standards response to that. Eventually in later versions of FrontPage the FPSE was deprecated and support for WebDAV was provided.

pjc50

WebDAV itself dates from 1999, well into the Windows 95 era. The Novell system predates it by a lot.

ndegruchy

The linked GitHub thread with Ken Silverman is gold. Watching the FastDOOM author and Ken work through the finer points of arcane 486 register and clock cycle efficiencies is amazing.

Glad to see someone making sure that Doom still gets performance improvements :D

kridsdale1

I haven’t thought of KenS in ages but back in the 90s I was super active in the Duke3D modding scene. Scripting it was literally my first “coding”.

So in a way, I owe my whole career and fortune to KenS. Cool.

vunderba

I feel like Duke 3D was probably the first mainstream, easily moddable FPS. Doom of course had plenty of level editors, but Duke Nukem brought the ability to alter and script AI via editable plaintext CON files, and any skills you learned on the BUILD engine were transferable to any number of other games (Shadow Warrior, Blood, etc.)

Also shout out to anyone who remembers "wackplayer" - Duke's equivalent of the BEEP keyword.

paulryanrogers

Duke also had a map editor with a 3D editing mode. It allowed raising and lowering floors and ceilings, and picking textures. Ahead of its time. The complexity of brush-based true 3D really put a damper on good built-in editors.

hypercube33

Command & Conquer with Rules.ini was similar; I have many fond memories of it.

nurettin

His blog was the first page I "surfed", talking about the Duke3D map editor and his big voxel project.

badsectoracula

AFAIK the CON scripting language (used in the *.CON files in DN3D) wasn't made by Ken Silverman but by the Duke Nukem 3D team at 3D Realms. I think it was Todd Replogle who wrote the CON stuff.

ehaliewicz2

Last year I emailed Ken Silverman about an obscure aspect of the Build engine while working on a similar 2.5D rendering engine. He answered the question like he'd worked on it yesterday.

phire

There are some real gems in there.

I especially liked the idea of using CR2 and CR3 as scratchpad registers when memory access is really slow (386SX and cacheless 386DXs). And the trick of using ESP as a loop counter without disabling interrupts (by making sure it always points to a valid stack location) is just genius.

ndegruchy

Yes! I know nothing about low-level programming, but the idea of using a register you don't need as a fast 'memory' location is particularly clever.

unleaded

One feature of FastDOOM I haven't seen mentioned here is all the weird video modes. Some interesting examples:

- IBM MDA text mode: https://www.youtube.com/watch?v=Op2tr2lGK6Y

- EGA & Plantronics ColorPlus: https://www.youtube.com/watch?v=gxx6lJvrITk

- Classic blue & pink CGA: https://youtu.be/rD0UteHi2qM

- CGA, 320x200x16 with 'ANSI from Hell' hack: https://www.youtube.com/watch?v=ut0V1nGcTf8

- Hercules: https://www.youtube.com/watch?v=EEumutuyBBo

Most of these run worse than VGA, presumably because of all the color remapping, etc.

toast0

> - EGA & Plantronics ColorPlus: https://www.youtube.com/watch?v=gxx6lJvrITk

Any love for Tandy Graphics Adapter? I'd hate to have to run in CGA :( would need a 286 build for my Tandy 1000 TL/2, if it was still alive.

Cthulhu_

That's awesome, just a great demonstration why these aspects of the game should be separated. It reminds me of the "modern" Clean Architecture for back-end applications.

tecleandor

The IBM MDA text mode is terrible... Love it!

jakedata

"IBM PS/1 486-DX2 66Mhz, "Mini-Tower", model 2168. It was the computer I always wanted as a teenager but could never afford"

Wow - by 1992 I was on my fourth homebuilt PC. The KCS computer shows in Marlborough MA were an amazing resource for tinkerers. Buy parts, build PC and use for a while, sell PC, buy more parts - repeat.

By the end of 1992 I was running a 486-DX3 100 with a ULSI 487 math coprocessor.

For a short period of time I arguably had the fastest PC - and maybe the fastest computer - on campus. It outran several models of Pentium and didn't make math mistakes.

I justified the last build because I was simulating a gas/diesel thermal-electric co-generation plant in a 21-page Excel spreadsheet for my honors thesis. The recalculation times were killing me.

Degree was in environmental science. Career is all computers.

wk_end

"Wow"? Is it really necessary to give this guy a hard time for being unable to afford the kind of computers you had in 1992?

Anyway, there's no such thing as a "DX3". And the first 100MHz 486 (the DX4) came out in March of 1994, so I don't see how you were running one at the end of 1992.

My family's first computer - not counting a hand-me-down XT that was impossibly out-of-date when we got it in 1992 or so - was a 66MHz 486-DX2, purchased in early 1995.

I can't quite explain why, but as a matter of pride it's still upsetting - decades later - to see someone weirdly bragging about an impossible computer that supposedly outran mine despite a three year handicap.

thereticent

Is that really what "wow" means here? I took it more as "wow, I've been around forever / I must be old now" or something similarly tame.

bpoyner

That definitely brought back memories. Around '92, being a poor college student, I took out a loan from my credit union for about $2,000 to buy a 486 DX2-50. For you younger people, that's about $4,000+ in today's money for a pretty basic computer. I dual-booted DOS and Linux on that bad boy.

antod

A 486DX and a 487? I thought the 487 was only useful for the SX chips?

...looked it up; apparently the standard 487 was a full 486DX that disabled and replaced the original 486SX. Was this some other unusually awesome coprocessor I hadn't heard of?

486sx33

It doubled throughput of certain calculations in certain tasks, if the motherboard supported it.

Possibly something software like Maple could take advantage of.

cantrecallmypwd

Doesn't make any sense, perhaps it's AI-generated nonsense. There was a DX4 100 but no such thing as a "DX3". The 486 included an FPU so there'd be no reason to have a "487" which was a complete replacement for the 486SX chips. There were Pentium Overdrives but those were CPU replacements on the 486DX.

jakedata

The 486sx had a 16 bit external bus interface so it could work with 386 chipsets. The DX processors had a full 32 bit bus and correspondingly better throughput. The 486 never included an integrated FPU, you had to add a separate co-processor for that. I could go on about clock multipliers and base frequencies but I'll spare you.

bluedino

I think you're thinking of the 486SLC

The 486SX was fully 32-bit (unlike the SLC and 386SX), the 486DX had the integrated FPU, and the 487 was a drop-in 486DX which disabled the 486SX.

hollandheese

You are thinking of the 386. You perfectly describe the situation on the 386 not the 486.

The 386SX had a 16 bit external bus interface so it could work with 286 chipsets. The DX processors had a full 32 bit bus and correspondingly better throughput. The 386 never included an integrated FPU, you had to add a separate co-processor for that.

Narishma

You're wrong. The 486SX had a 32-bit bus, just like the DX version. The difference between them is that the DX had an integrated FPU while the SX had it disabled and you had to add a separate 487 co-processor.

cantrecallmypwd

The 486DX had an FPU. It was the 486SX that lacked it. The "FPU" upgrade for a 486SX was an entire special version of the 486DX that disabled the original 486SX entirely.

ForOldHack

"It outran several models of Pentium and didn't make math mistakes." Total bragging rights. Total. You owned them. Good job.

mmphosis

On top of releasing often, Viti95 displayed outstanding git discipline: one commit does one thing, and each release was tagged.

https://fabiensanglard.net/fastdoom/#:~:text=one%20commit%20...

kingds

> I was resigned to playing under Ibuprofen until I heard of fastDOOM

I don't get the ibuprofen reference?

kencausey

Guess: headache from low frame rate?

fabiensanglard

Indeed.

apetresc

I legitimately thought it was some DOS compatibility layer or something. Like, you’d have to run Doom that way because of the low framerate natively.


sedatk

If the author reads this: John Carmack's last name was mistyped as "Carnmack" throughout the document.

fabiensanglard

Thank you for taking the time to report it. It has now been fixed.

mkl

Another typo s/game/gave/: "Another reason John game me".

CamperBob2

Speaking of Carmack, can you (or someone) elaborate on this quote?

>DOOM cycles between three display pages. If only two were used, it would have to sync to the VBL to avoid possible display flicker.

How does triple buffering eliminate VBL waits, exactly? There was no VBL interrupt on a standard VGA, was there?

fabiensanglard

Triple buffering means you can render at any speed: there is always a valid target to render into, and you never need to wait or vsync.

This is not the case with double buffering. There can be a case, if the CPU renders fast enough, where it has just finished rendering to the current target but the previous target is still being sent to the CRT. In that case the CPU needs to block on the VBL.
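
A minimal sketch of that flipping logic in C (hypothetical code, not DOOM's). The key property: the start-address write is latched by the VGA at the next retrace, so at most two pages are ever claimed by the display (the one on screen and the one pending), and cycling round-robin over three pages always leaves a free one to draw into:

    enum { NUM_PAGES = 3 };

    static int drawing = 0;   /* page the renderer writes into */

    static void set_start_address(int page)
    {
        (void)page;   /* stub: real code programs VGA CRTC registers */
    }

    static void present(void)
    {
        set_start_address(drawing);            /* takes effect at retrace */
        drawing = (drawing + 1) % NUM_PAGES;   /* guaranteed-free page */
    }

    void game_loop(void)
    {
        for (;;) {
            /* ...render the frame into page 'drawing'... */
            present();   /* with only two pages we'd block on the VBL here */
        }
    }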

anilgulecha

One non-cynical take on why modern software is slow and doesn't contain optimizations such as these: the standardization/optimization hypothesis.

If something is, or has become, a standard, then optimization takes over. You want to be the fastest and to meet all of the standard's tests. Doom is similarly now a standard game to port to any new CPU, toaster, whatever. Similarly an email protocol, or a browser standard (WebRTC, QUIC, etc.).

The reason your latest web app / Electron app is not fast is that it is exploratory. It's updated every day to meet new user needs, and fast-enough-to-not-get-in-the-way is all that's needed performance-wise. Hence we see very fast IRC apps, but Slack and Teams will always be slow.


z3t4

It's not trivial to go back through versions to check for improvements or regressions, because some optimizations can introduce bugs that are only discovered later, or you introduce a vital feature that degrades performance... So you can make your life easier by having automatic performance tests that run before each release, and if you discover a performance issue you write a regression test as usual... What I'm trying to say is: do performance testing!
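
For example, a minimal release gate in C; the baseline number and the 10% margin are invented for illustration (a real setup for something like fastDOOM would compare -timedemo fps instead):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static const double baseline_ms = 120.0;   /* recorded on a reference box */

    static void hot_path(void)                 /* the code under test */
    {
        volatile long acc = 0;
        for (long i = 0; i < 50000000; i++)
            acc += i;
    }

    int main(void)
    {
        clock_t t0 = clock();
        hot_path();
        double ms = (clock() - t0) * 1000.0 / CLOCKS_PER_SEC;

        printf("hot_path: %.1f ms (baseline %.1f ms)\n", ms, baseline_ms);
        if (ms > baseline_ms * 1.10) {   /* fail the release on >10% regression */
            fprintf(stderr, "performance regression!\n");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }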

manoweb

Unlike the author, back in the day I would have preferred a 486 DX-50 to the DX2-66: a 50MHz bus interface (including to the graphics card) instead of 33MHz.

antod

My first job was AutoCAD drafting on a DX50 with 16MB. Quite high specced in the early 90s. Not sure I would've noticed the difference compared with a DX2 though.

fabiensanglard

Can you elaborate why? I understand the bus speed difference, but the benchmarks I remember always had the 66MHz winning over the 50MHz.

Narishma

Were those benchmarks with the DX-50 or the DX2-50?

fabiensanglard

DX2-66 vs DX-50. A DX2-50 obviously loses to a DX-50.

rasz

>Optimize R_DrawColumn for Mode Y

Seeing that this made a difference makes it clear Fabien ran fastDOOM in Mode Y.

>One optimization that did not work on my machine was to use video mode 13h instead of mode Y.

13h should work on anything; it's VBD that requires a specific VESA 2.0 feature enabled (LFB*). VBR should also work no problem on this IBM.

Both 13h and VBR modes would probably deliver another ~10 fps on the 486/66 with the VESA CL5428.

* LFB = linear frame buffer, not available on most ISA cards. Somewhat problematic, as it required less than 16MB of RAM or the "15-16MB memory hole" enabled in the BIOS. On ISA Cirrus Logic cards, support depended on how the chip was wired to the bus: some vendors supported it while others used a lazy copy-and-paste of the reference design and didn't. With VESA local bus Cirrus Logic cards, lazy vendors continued to use the same basic reference-design wiring, disabling the LFB. No idea about the https://theretroweb.com/motherboards/s/ibm-ps-1-type-2133a,-... motherboard.

Trixter

>13h should work on anything

You misinterpreted what he wrote. He wasn't saying that mode 13h didn't work; he meant that the optimizations in the mode 13h path of the executable weren't as good as the Mode Y path. It's the optimization that didn't work, not mode 13h itself.

fabiensanglard

> VBR should also work no problem on this IBM

I think this statement is incorrect. These modes require support for VESA 2.0, which this IBM does not have.

> Somewhat problematic as it required less than 16MB ram or "15-16MB memory hole" enabled in bios.

Could be the issue. Do you have any documentation about this?

rasz

Load UNIVBE.EXE. The "15-16 hole" is only required for ISA cards to work in VBD mode.