Inside OS/2 (1987)

39 comments

August 10, 2025

mikewarot

The cool thing about OS/2 2.1 was that you could easily boot off of a single 1.44 MB floppy disk, and run multitasking operations, without the need for the GUI.

I had (and likely have lost forever) a Boot disk with OS/2, and my Forth/2 system on it that could do directory listings while playing Toccata and Fugue in D minor in a different thread.

I wrote Forth/2 out of pure spite, because somehow I heard that it just wasn't possible to write OS/2 applications in assembler. Thanks to a copy of the OS/2 development kit from Ward Christensen (who worked at IBM), and a few months of spare time, Forth/2 was born, written in pure assembler, compiling to directly threaded native code. Brian Matthewson from Case Western wrote the manual for it. Those were fun times.

cmiller1

> I wrote Forth/2 out of pure spite, because somehow I heard that it just wasn't possible to write OS/2 applications in assembler

I was thinking about this recently and considering writing a blog post about it; nothing feels more motivational than being told "that's impossible." I implemented a pure CSS draggable a while back when I was told it's impossible.

userbinator

Look at MenuetOS and KolibriOS for a newer multitasking OS, with a GUI, that also fits on a single floppy.

jeberle

That is very cool. I had a similar boot disk w/ DOS 3.x and Turbo Pascal. It made any PC I could walk up to a complete developer box.

Just to be clear, when you say "without the need for the GUI", more accurately that's "without a GUI" (w/o Presentation Manager). So you're using OS/2 in an 80x25 console screen, what would appear to be a very good DOS box.

kev009

OS/2 had an evolving marketing claim of "better DOS than DOS" and "better Windows than Windows" and they both were believable for a time. The Windows one collapsed quickly with Win95 and sprawling APIs (DirectX, IE, etc).

It exists in that interesting but obsolete interstitial space alongside BeOS: very well done single-user OSes.

kevindamm

Preemptive multithreading is better than cooperative multithreading (which Windows 3 used), but then it's de-fanged by allowing threads and processes to adjust their own priority and set arbitrary lower bounds on how much time gets allotted to a thread per thunk.

Then there's this:

> All of the OS/2 API routines use the Pascal extended keyword for their calling convention so that arguments are pushed on the stack in the opposite order of C. The Pascal keyword does not allow a system routine to receive a variable number of arguments, but the code generated using the Pascal convention is smaller and faster than the standard C convention.

Did this choice of a small speed boost over compatibility ever haunt the decision makers, I wonder? At the time, the speed boost probably was significant at the ~MHz clock speeds these machines were running at, and Moore's Law had only just gotten started. Maybe I tend to lean in the direction of compatibility, but this seemed like a weird choice to me. Then, in that same paragraph:

> Also, the stack is restored by the called procedure rather than the caller.

What could possibly go wrong?
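To make the two conventions concrete, here's a hedged sketch in C using MSVC-style keywords (`__pascal` is the 16-bit-era spelling; the declarations are illustrative, not actual OS/2 headers):

    /* Illustrative declarations only -- not real OS/2 headers. */

    /* C convention: arguments pushed right-to-left, CALLER pops them
       after the call returns. Bulkier call sites, but variadic functions
       work because only the caller knows the argument count. */
    int __cdecl sum_c(int a, int b, int c);

    /* Pascal convention (16-bit OS/2 and Win16): arguments pushed
       left-to-right, CALLEE pops them on return (RET imm16 on x86),
       shaving bytes and cycles off every call site -- but no varargs. */
    int __pascal sum_p(int a, int b, int c);

    /* Win32's __stdcall later kept the callee cleanup but reverted to
       C's right-to-left argument order. */
    int __stdcall sum_s(int a, int b, int c);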

mananaysiempre

16-bit Windows used the Pascal calling convention, with the documentation in the Windows 1.0 SDK only listing Pascal function declarations. (Most C programs for 16-bit Windows use FAR PASCAL in their declarations—the WINAPI macro was introduced with Win32 as a porting tool.) The original development environment for the Macintosh was a Lisa prototype running UCSD Pascal, and even the first edition of Inside Macintosh included Pascal declarations only. (I don’t know how true it is that Windows originated as a porting layer for moving (still-in-development) Excel away from (still-in-development) Mac, but it feels at least a bit true.) If you look at the call/return instructions, the x86 is clearly a Pascal machine (take the time to read the full semantics of the 80186’s ENTER instruction at some point). Hell, the C standard wouldn’t be out for two more years, and function prototypes (borrowed early from the still-in-development C++, thus the unhinged syntax) weren’t a sure thing. C was not yet the default choice.

>> Also, the stack is restored by the called procedure rather than the caller.

> What could possibly go wrong?

This is still the case for non-vararg __stdcall functions used by Win32 and COM. (The argument order was reversed compared to Win16’s __far __pascal.) By contrast, the __syscall convention that 32-bit OS/2 switched to uses caller cleanup (and passed some arguments in registers).

Uvix

I don't know if Windows started as a porting layer but it certainly ended up as one. Windows was already on v2.x by the time Excel was released on PC, but the initial PC version of Excel shipped with a stripped-down copy of Windows so that it could still run on machines without Windows. https://devblogs.microsoft.com/oldnewthing/20241112-00/?p=11...

p_l

Before Windows 3.0 made a big splash, it was a major source of Windows revenue: bundling a stripped-down Windows runtime with applications as a GUI SDK.

The Windows 3.0 effort was initially disguised as an update to this runtime, before management could be convinced to support the project.

dnh44

I loved OS/2, but I also remember the dreaded single input queue... still, it didn't stop me using it until about 2000, when I realised it was time to move on.

chiph

Because of that, I got good at creating multi-threaded GUI apps. Stardock were champs at this - they had a newsgroup reader/downloader named PMINews that took full advantage of multithreading.

The rule of thumb I had heard and followed was that if something could take longer than 500ms you should get off the UI thread and do it in a separate thread. You'd disable any UI controls until it was done.
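That pattern maps onto any toolkit; here's a rough sketch in portable C11, with `<threads.h>` standing in for OS/2's thread API and the disable/enable calls as hypothetical placeholders:

    #include <threads.h>   /* C11 threads standing in for OS/2's thread API */

    /* Hypothetical worker: anything that could block for more than
       ~500 ms (downloading news articles, say) runs here, off the UI
       thread. */
    static int fetch_articles(void *arg)
    {
        (void)arg;
        /* ... long-running work; when finished, post a message back to
           the UI thread's queue so it can re-enable its controls ... */
        return 0;
    }

    int main(void)
    {
        thrd_t worker;
        /* disable_controls();  -- hypothetical: grey out the affected UI */
        if (thrd_create(&worker, fetch_articles, NULL) != thrd_success)
            return 1;
        /* ... the UI thread keeps pumping its message queue here ... */
        thrd_join(&worker, NULL);   /* demo only; a real app wouldn't block */
        /* enable_controls();   -- hypothetical */
        return 0;
    }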

dnh44

I always liked Stardock; if I had to use Windows I'd definitely get all their UI mods, just for the nostalgia factor.

flohofwoe

> Did this choice of a small speed boost over compatibility ever haunt the decision makers,

...in the end it's just another calling convention which you annotate your system header functions with. AmigaOS had a vastly different (very assembly friendly) calling convention for OS functions which exclusively(?) used CPU registers to pass arguments. C compilers simply had to deal with it.

> What could possibly go wrong?

...totally makes sense though when the caller passes arguments on the stack?

E.g. you probably have something like this in the caller:

    push arg3      => place arg 3 on stack
    push arg2      => place arg 2 on stack
    push arg1      => place arg 1 on stack
    call function  => places return address on stack
...if the called function cleaned up the stack it would also delete the return address needed by the return instruction (which pops the return address from the top of the stack and jumps to it).

(ok, x86 has the special `ret imm16` instruction which adjusts the stack pointer after popping the return address, but I guess not all CPUs could do that back then)

agent327

AmigaOS only used D0 and D1 for non-ptr values, and A0 and A1 for pointer values. Everything else was spilled to the stack.

ataylor284_

Yup. If you call a function with the C calling convention with the incorrect number of parameters, your cleanup code still does the right thing. With the Pascal calling convention, your stack is corrupted.

rep_lodsb

Yeah, it's really irresponsible how Pascal sacrifices such safety features in the name of faster and more compact code... oh, wait, the compiler stops you from calling a function with incorrect parameters? Bah, quiche eaters!

maximilianburke

Callee clean-up was (is? is.) standard for the 32-bit Win32 API; it's been pretty stable for coming up on 40 years now.

to11mtm

For 32 bit yes, although IIRC x64 convention is caller clean-up.

maximilianburke

That's why I said 32-bit Win32 :-)

rep_lodsb

On x86, the RET instruction can add a constant to the stack pointer after popping the return address. Compared to the caller cleaning up the stack, this saves 3 bytes (and about the same number of clock cycles) for every call.

There is nothing wrong with using this calling convention, except for those specific functions that need to have a variable number of arguments - and why not handle those few ones differently instead, unless you're using a braindead compiler / language that doesn't keep track of how functions are declared?
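The vararg exception is easy to see in C: only the call site knows how many bytes it pushed, so only the caller can clean them up. A small self-contained example:

    #include <stdarg.h>
    #include <stdio.h>

    /* A variadic function cannot use callee cleanup: it has no idea at
       compile time how many argument bytes each caller pushed, so it
       cannot emit a fixed `ret imm16`. Only the caller knows. */
    static int sum(int count, ...)
    {
        va_list ap;
        int total = 0;
        va_start(ap, count);
        while (count-- > 0)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        /* Two call sites, two different stack sizes -- same callee. */
        printf("%d\n", sum(2, 10, 20));     /* 30 */
        printf("%d\n", sum(4, 1, 2, 3, 4)); /* 10 */
        return 0;
    }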

mananaysiempre

> There is nothing wrong with using this calling convention

Moreover, it can actually support tail calls between functions of arbitrary (non-vararg) signatures.

treve

This article is probably the first time I 'get' why OS/2 was seen as the future and Windows 3 as a stop-gap, even without the GUI. The OS/2 GUI never really blew me away, and whenever the early non-GUI versions of OS/2 are mentioned, it always seems a bit dismissive.

But seeing it laid out as just the multi-tasking kernel that it is, it seems more obvious now that it was a major foundational upgrade of MS-DOS.

Great read!

pjmlp

After all these years COM is still not as cool as SOM used to be.

With meta-classes, implementation inheritance across multiple languages, and much better tooling in the OS tier 1 languages.

mananaysiempre

Cool, yes. Useful or a good idea, I dunno. Reading through the (non-reference) documentation on SOM, I’m struck by how they never could give a convincing example for the utility of metaclasses. (Saying this as someone who does love metaclasses in Python, which are of course an inferior interpretation of the same academic sources.) The SOM documentation is also surprisingly shallow given its size: with a copy of Brockschmidt, Box, the COM spec, and the Platform SDK manual, you could reimplement essentially all of COM (not ActiveX though), whereas IBM’s documentation is more like “here’s how you use our IDL compiler and here are the functions you can call”. (This is in contrast with the Presentation Manager documentation, which is much tighter and more detailed than the one for USER/GDI ever was.) From what I can infer of the underlying principles, I feel SOM is much more specific about its object model, which, given that the goal is a cross-language ABI, is not necessarily a good thing. (I’d say that about WinRT too.)

And of course COM does do implementation inheritance: despite all the admonitions to the contrary, that’s what aggregation is! If you want a more conventional model and even some surprisingly fancy stuff like the base methods governing the derived ones and not vice versa, BETA-style, then WinRT inheritance[1] is a very thin layer on top of aggregation that accomplishes that. Now if only anybody at Microsoft bothered to document it. As in, at all.

(I don’t mean to say COM is my ideal object model/ABI. That would probably be a bit closer to Objective-C: see the Maru[2]/Cola/Idst[3] object model and cobj[4,5] for the general direction.)

[1] https://www.interact-sw.co.uk/iangblog/2011/09/25/native-win...

[2] https://web.archive.org/web/20250507145031/https://piumarta....

[3] https://web.archive.org/web/20250525213528/https://www.piuma...

[4] https://dotat.at/@/2007-04-16-awash-in-a-c-of-objects.html

[5] https://dotat.at/writing/cobj.html

pjmlp

Because at the time it was obvious: Smalltalk was the C++ companion on OS/2, a bit like what VB and .NET came to be on Windows years later.

Aggregation is not inheritance, rather a workaround using delegation. And it has always been a bit of a pain to set up, if one wants to avoid writing all the boilerplate by hand.

As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about, and now only folks that cannot avoid it, or Microsoft employees on the Windows team, care about its existence.

mananaysiempre

> Because at the time [the utility of metaclasses] was obvious, Smalltalk was the C++ companion on OS/2 [...].

Maybe? I have to admit I know much more about Smalltalk internals than I ever did about actually architecting programs in it, so I’ll need to read up on that, I guess. If they were trying to sell their environment to the PC programmer demographic, then their marketing was definitely mistargeted, but I never considered the utility was obvious to them rather than the whole thing being an academic exercise.

> Aggregation is not inheritance, rather a workaround, using delegation. And it has been always a bit of the pain to [...] avoid writing all the boilerplate by hand.

Meh. Yes, the boilerplate is and always has been ass, and it isn’t nice that the somewhat bolted-on nature of the whole thing means most COM classes don’t actually support being aggregated. Yet, ultimately, (single) implementation inheritance amounts to two things: the derived object being able to forward messages to the base one—nothing but message passing needed for that; and the base object being able to send messages to the most derived one—and that’s what pUnkOuter is for. That’s it. SOM’s ability to allocate the whole thing in one gulp is nice, I’d certainly rather have it than not, but it’s not strictly necessary.

Related work: America (1987), “Inheritance and subtyping in a parallel object-oriented language”[1] for the original point; Fröhlich (2002), “Inheritance decomposed”[2], for a nice review; and Tcl’s Snit[3] is a nice practical case study of how much you can do with just delegation.

> As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about [...].

Can’t say I weep for UWP as such; felt like the smartphonification of the last open computing platform was coming (there’s a reason why Valve got so scared). As for WinRT, I mean, I can’t really feel affection for anything Microsoft releases, not least because Microsoft management definitely doesn’t, but that doesn’t preclude me from appreciating how WinRT expresses seemingly very orthodox (but in reality substantially more dynamic) implementation inheritance in terms of COM aggregation (see link in my previous message). It’s a very nice technical solution that explains how the possibility was there from the very start.

[1] https://link.springer.com/chapter/10.1007/3-540-47891-4_22

[2] https://web.archive.org/web/20060926182435/http://www.cs.jyu...

[3] https://wiki.tcl-lang.org/page/Snit%27s+Not+Incr+Tcl

cyberax

That's because the core of COM is just a function table with fixed 3 initial entries (QueryInterface/AddRef/Release). I had a toy language that implemented COM and compiled to native code; it produced binaries that could run _both_ on Novell NetWare and Windows (NetWare added support for PE binaries in '98, I think).

The dark corner of COM was IDispatch.
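That minimal core fits in a few dozen lines of C; here's a hedged, self-contained sketch of the shape (toy IID/HRESULT stand-ins, not the real windows.h definitions):

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-ins -- the shape matters here, not the real types. */
    typedef int IID;
    enum { IID_IUnknown = 0 };

    typedef struct IUnknown IUnknown;

    /* The fixed three entries every COM interface starts with. */
    typedef struct IUnknownVtbl {
        int           (*QueryInterface)(IUnknown *self, IID iid, void **out);
        unsigned long (*AddRef)(IUnknown *self);
        unsigned long (*Release)(IUnknown *self);
    } IUnknownVtbl;

    struct IUnknown {
        const IUnknownVtbl *vtbl;   /* first field: the function table */
    };

    /* A concrete object: vtable pointer first, private state after. */
    typedef struct Obj {
        IUnknown      base;
        unsigned long refs;
    } Obj;

    static int qi(IUnknown *self, IID iid, void **out)
    {
        if (iid == IID_IUnknown) {
            *out = self;
            self->vtbl->AddRef(self);
            return 0;               /* S_OK */
        }
        *out = NULL;
        return -1;                  /* E_NOINTERFACE */
    }

    static unsigned long addref(IUnknown *self)
    {
        return ++((Obj *)self)->refs;
    }

    static unsigned long release(IUnknown *self)
    {
        Obj *o = (Obj *)self;
        if (--o->refs == 0) {
            free(o);
            return 0;
        }
        return o->refs;
    }

    static const IUnknownVtbl vtbl = { qi, addref, release };

    int main(void)
    {
        Obj *o = malloc(sizeof *o);
        if (!o) return 1;
        o->base.vtbl = &vtbl;
        o->refs = 1;

        void *p = NULL;
        if (o->base.vtbl->QueryInterface(&o->base, IID_IUnknown, &p) == 0)
            printf("got IUnknown, refcount now 2\n");
        o->base.vtbl->Release((IUnknown *)p);   /* 2 -> 1 */
        o->base.vtbl->Release(&o->base);        /* 1 -> 0, frees the object */
        return 0;
    }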

mananaysiempre

Yeah, IUnknown is so simple there isn’t really much to implement (that’s not a complaint). I meant to reimplement enough of the runtime that it, say, can meaningfully use IMarshal, load proxy/stub DLLs, and such.

As for IDispatch, it’s indeed underdocumented—there’s some stuff in the patents that goes beyond the official docs but it’s not much—and also has pieces that were simply never used for anything, like the IID and LCID arguments to GetIDsOfNames. Thankfully, it also sucks: both from the general COM perspective (don’t take it from me, take it from Box et al. in Effective COM) and that of the problem it solves (literally the first contact with a language that wasn’t VB resulted in IDispatchEx, changing the paradigm quite substantially). So there isn’t much of an urge to do something like it for fun. Joel Spolsky’s palpable arrogance about the design[1,2] reads quite differently with that in mind.

[1] https://www.joelonsoftware.com/2000/03/19/two-stories/ (as best as I can tell, the App Architecture villains were attempting to sell him on Emacs- or Eclipse-style extensibility, and he failed to understand that)

[2] https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...

wkjagt

> OS/2, Microsoft’s latest addition to its operating system line

Wasn't it mostly an IBM product, with Microsoft being involved only in the beginning?

mananaysiempre

The article is from December 1987, when nobody yet knew that it would end up that way. The Compaq Deskpro 386 had just been released in 1986 (thus unmooring the “IBM PC clones” from IBM), the September 1987 release of Windows/386 2.01 was only a couple of months before (less if you account for print turnaround), and development of what would initially be called NT OS/2 would only start in 1988, with the first documents in the NT Design Workbook dated 1989. Even OS/2 1.1, the first GUI version, would only come out in October 1988 (on one hand, really late; on the other, how the hell did they release things so fast then?..).

zabzonk

Microsoft unwrote a lot of the code that IBM needlessly wrote.

I worked as a trainer at a commercial training company that used the Glockenspiel C++ compiler that required OS/2. It made me sad. NT made me happy.

Hilift

Microsoft was only interested in fulfilling the contracts, and some networking components such as NetBIOS and LAN Manager, then winding down. This was because Microsoft had already been in discussions with David Cutler, hiring him in October 1988 to essentially port VMS to Windows NT. Windows NT 3.1 appeared in July 1993.

https://archive.org/details/showstopperbreak00zach

p_l

While the NT OS/2 effort started earlier, Windows 3.0 was apparently an originally unsanctioned, rogue effort started by one developer, initially masquerading as an update to the "Windows-as-Embedded-Runtime" that multiple graphical products were shipping with, not just Microsoft's.

Even when marketing people etc. got enthused enough that the project got official support and a release, it was not expected to be such a hit early on, and the expectation was that the OS/2 effort would continue, if perhaps with a different kernel.

fredoralive

This is from 1987; the IBM / Microsoft joint development agreement for OS/2 didn't fall apart until around 1990, and there was a lot of Microsoft work in early OS/2 (and conversely, non-multitasking MS-DOS 4.0 was largely IBM work).

chasil

Windows NT originally shipped with an OS/2 compatibility layer, along with POSIX and Win32.

I'm assuming that all of it was written mainly, if not solely, by Microsoft.

rbanffy

If you count the beginning as the time from OS/2 1.0 up until MS released Windows 3, then it makes sense. IBM understood Microsoft would continue to collaborate on OS/2 more or less forever.

SV_BubbleTime

As an outsider to all the history and lore… IBM is probably one of the most confusing companies I can think of.