
Learning from the Amiga API/ABI

50 comments · June 1, 2025

flohofwoe

AmigaOS still has a special place in my heart - probably the most elegantly designed piece of software I've ever seen (apart from that ugly DOS part, of course, which was shoehorned in because of deadline pressure).

There was a single fixed location in the entire system (address 0x4 aka ExecBase), and everything an AmigaOS application required was 'bootstrapped' from that one fixed address.

All OS data structures were held together by linked lists, everything was open and could be inspected (and messed up - of course terrible for security, but great for learning, exploring and extending).
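
To make that concrete, here's a minimal C sketch of that bootstrap, assuming the classic NDK headers (real programs normally get SysBase set up by their startup code before main() runs):

    /* Minimal sketch: the only fixed address in the system is 0x4,
       which holds a pointer to ExecBase. Everything else is reached
       from there. Assumes the classic NDK headers. */
    #include <exec/types.h>
    #include <exec/execbase.h>
    #include <proto/exec.h>

    struct ExecBase *SysBase;

    int main(void)
    {
        /* The single well-known location: a pointer to exec.library. */
        SysBase = *(struct ExecBase **)4L;

        /* From ExecBase, everything else is opened by name. */
        struct Library *DOSBase = OpenLibrary("dos.library", 36);
        if (DOSBase != NULL) {
            /* ... use dos.library ... */
            CloseLibrary(DOSBase);
        }
        return 0;
    }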

mpweiher

Amiga Exec completely spoiled me when it came to elegance of operating systems.

Everything I learned about after it was a huge disappointment, including Mach. Particularly because it demystified the OS. Just a bunch of lists, and due to the OO nature, they were the same kinds of lists.

Here's what a node looks like: next, previous, a type, a priority, a name.

A task? A node. With a bunch of extra state.

An interrupt? A node. With a lot less extra state.

A message? A node. With an optional reply port if the message requires a reply.

Reply port? Oh, that's just a port.

A port? Yeah, a node, a pointer to a task that gets signaled and a list of messages.

How do you do I/O? Send special messages to device ports.

No "write() system call", it's queues at the lowest levels and at the API layer.
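
For reference, a sketch of those structures in C; the field names follow the classic exec/nodes.h and exec/ports.h headers, with the Amiga typedefs replaced by plain C types:

    /* A node: next, previous, a type, a priority, a name. */
    struct Node {
        struct Node  *ln_Succ;   /* next */
        struct Node  *ln_Pred;   /* previous */
        unsigned char ln_Type;   /* NT_TASK, NT_MESSAGE, NT_INTERRUPT, ... */
        signed char   ln_Pri;    /* priority */
        char         *ln_Name;   /* name */
    };

    /* A doubly linked list of nodes - Exec's "everything is a list". */
    struct List {
        struct Node  *lh_Head;
        struct Node  *lh_Tail;
        struct Node  *lh_TailPred;
        unsigned char lh_Type;
        unsigned char lh_pad;
    };

    /* A port: a node, the task that gets signalled, and a list of messages. */
    struct MsgPort {
        struct Node   mp_Node;
        unsigned char mp_Flags;
        unsigned char mp_SigBit;   /* signal bit to raise */
        void         *mp_SigTask;  /* task to signal */
        struct List   mp_MsgList;  /* queued messages */
    };

    /* A message: a node plus an optional reply port. */
    struct Message {
        struct Node     mn_Node;
        struct MsgPort *mn_ReplyPort;
        unsigned short  mn_Length;
    };

    /* Device I/O follows the same pattern: an IORequest starts with a
       struct Message and gets queued on the device's MsgPort. */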

vidarh

To me, a few things stand out that I'm increasingly looking to emulate, and that are mostly not about the low-level API/ABI:

* Assigns. Basically aliases for paths, but ephemeral and able to be joined together. E.g. the search path for executables is the assign C:; the search path for dynamic libraries is libs:. I've added basic, superficial support for assigns to my shell. It's a hack, but being able to just randomly add mnemonics for projects is nice, and not having to put them in the filesystem as symlinks somehow also feels nicer, even if it only saves a few characters.

* Datatypes. AmigaOS apps can open modern formats even if the app hasn't been updated for 30 years, as long as they use datatypes: just drop a library and descriptor file into the system (see the sketch after this list).

* Screens. I'm increasingly realising I want my wm to let apps open their own virtual desktops, and close them again, as a nice way of grouping windows without having to manually manage it, and might add that - it'd be fairly easy, and on systems that don't support it the atoms added would just be a no-op. The dragging was nice to show off at the time, but less important. Ironically, given the Amiga was one of a few systems offering overlapping windows when it launched, screens often served as a way for apps themselves to tile their workspaces on a separate screen/desktop, and my own wm setup increasingly feels Amiga-ish - I have a single desktop with floating windows and a file manager, just like the Amiga Workbench screen, and a bunch of virtual desktops with tiling windows.
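
A hedged sketch of the datatypes point above, based on the documented NewDTObject()/DisposeDTObject() calls in datatypes.library (includes and error handling trimmed); the function and variable names here are just for illustration:

    /* Sketch: load "any" picture format via datatypes.library. The right
       decoder is picked from the datatype descriptors and classes
       installed in the system, so new formats can be added without
       touching the application. */
    #include <datatypes/datatypes.h>
    #include <datatypes/pictureclass.h>
    #include <proto/datatypes.h>

    Object *load_any_picture(const char *path)
    {
        return NewDTObject((APTR)path,
                           DTA_GroupID, GID_PICTURE,  /* "some kind of picture" */
                           TAG_DONE);
    }

    void free_picture(Object *obj)
    {
        if (obj != NULL)
            DisposeDTObject(obj);
    }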

In terms of the API, one of the things I loved was more something that evolved: the use of "dispatch/forwarding" libraries, such as XPK, that provide a uniform API to do something (like compression) plus an API for people to implement plugins. So much of the capability of Amiga apps comes down to the culture of doing that - a culture Datatypes was an evolution of - and it means the capabilities of old applications keep evolving.

orionblastar

My Amiga 1000 ran rings around the Macintosh with the same CPU, because it had custom co-processors that sped things up and worked as a team.

You can still have that Amiga feeling on old PCs by using AROS: https://aros.sourceforge.io/

Findecanor

I think that having preemptive multitasking also made a huge difference in making the system responsive and feel fast. The GUI, with its windows and gadgets (= "widgets"/"controls"), ran in Intuition's task. The mouse pointer was moved by a vblank interrupt and could thus never lag.

On the Macintosh, the whole GUI practically ran in the active app's event loop. The whole system could be held up by an app waiting for something.

Microsoft made the mistake of copying Apple when they designed MS-Windows. Even today, on the latest Windows, which has had preemptive multitasking since 1995, a slow app can still effectively hold up the user interface, preventing you from doing anything but wait for it.

When Apple in the late '80s wanted to make their OS have preemptive multitasking, they hired the guy who had written Amiga's "exec" kernel: Carl Sassenrath.

jchw

> Even today, on the latest Windows, which has had preemptive multitasking since 1995, a slow app can still effectively hold up the user interface, preventing you from doing anything but wait for it.

Could you explain what you mean here? If you were to make your event loop or wndprocs hang indefinitely it would not hang the Windows interface for the rest of the machine, it would just cause ANR behavior and prompt you to kill the program. As far as I can remember it's been that way since at least Windows 2000.

flohofwoe

An example I run into almost every day: run a non-trivial build in Visual Studio that consumes all available CPU cores, then try to use the trackpad to scroll some random window content. It just doesn't work; at best you get a UI view that jumps around instead of scrolling smoothly.

AFAIK Windows is supposed to boost the CPU priority of the UI during user input, but apparently that doesn't work.

AmigaOS also boosted the CPU priority of the UI during mouse movement, except it actually worked.

PS: instead of fixing the issue from the ground up in the OS (which admittedly is probably impossible), the VS team added a feature called 'Low Priority Builds':

[1] https://developercommunity.visualstudio.com/t/Limit-CPU-usag...

[2] https://devblogs.microsoft.com/cppblog/msbuild-low-priority-...
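
For reference, the 'low priority' part itself is a single Win32 call a build tool can make on its own behalf; this is only a sketch of the general technique, not necessarily how MSBuild implements it:

    /* Sketch: a CPU-heavy tool demoting its own scheduling priority on
       Windows so interactive work is favoured by the scheduler. */
    #include <windows.h>

    int main(void)
    {
        /* With BELOW_NORMAL (or IDLE), child processes spawned from here
           inherit the same priority class by default. */
        SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);

        /* ... run the CPU-heavy build steps ... */
        return 0;
    }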

zozbot234

I think what OP's saying is that on the Amiga it was idiomatic to let the UI be handled by a dedicated thread/task. That was also the norm on other notable systems such as BeOS. It's still a good guideline today but not so easy to apply.
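
A minimal sketch of that guideline in C with POSIX threads (the names and the fake workload are just for illustration): the "UI" thread keeps servicing events while the heavy work runs on a worker thread.

    /* Sketch: keep the event loop responsive by pushing slow work onto a
       worker thread; the "UI" thread never blocks on the computation. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int work_done = 0;

    static void *heavy_work(void *arg)
    {
        (void)arg;
        sleep(3);                     /* stand-in for a slow computation */
        atomic_store(&work_done, 1);
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, heavy_work, NULL);

        /* Stand-in for the UI event loop: keeps servicing "events"
           (here just a heartbeat) instead of blocking on the work. */
        while (!atomic_load(&work_done)) {
            printf("UI still responsive...\n");
            usleep(200 * 1000);
        }

        pthread_join(worker, NULL);
        printf("work finished, update the view\n");
        return 0;
    }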

Findecanor

I mean the windows. An app can open a window and hold it there, and meanwhile you can't move the window, you can't move it to the back, and you can't minimise it.

detourdog

I always thought that even though the Mac was black & white, its screen resolution was much nicer than the Amiga's.

badc0ffee

Unlike the Amiga, it's high resolution, has square pixels, and is never interlaced.

leptons

The original Macintosh had a resolution of 512 x 342 pixels. The Amiga 1000 had several resolutions up to 640 x 400 @ 16 colors, and could utilize 4096 colors in lower resolutions. The reality distortion field seems to still be working, I guess. The Amiga was better in every practical way.

AndrewStephens

The differences between the Amiga and Mac are a fascinating study in priorities. The Amiga was orientated around home use so its graphics hardware was designed to be attached to a TV. That design goal affected the entire system, to the point that the system clock changes frequency depending on whether the machine is outputting PAL or NTSC.

Technically the Amiga could display a rock solid hires picture but only on a special monitor that I personally never saw.

The priority on the Mac was to have a high quality monitor for black and white graphics. They put a lot of effort into drawing libraries to make the most of the built-in display.

The result was that the Amiga was perfectly fine for playing games or light word processing but if you actually needed to stare at a word processor or spreadsheet for 8 hours a day you really wanted a Mac.

icedchai

Are you forgetting about interlace? You won’t want to use 640x400 for very long due to the low refresh rate.

detourdog

I think you are missing the point. I was an actual user of both and preferred the Macintosh. The Amiga graphics were color but underwhelming in resolution.

zozbot234

> My Amiga 1000 ran rings around the Macintosh with the same CPU, because it had custom co-processors

And the earliest ARM machines ran rings around the Amiga because they had a custom-designed RISC CPU, so they could dispense with the custom co-processors. (They still cost a lot more than the Amiga, since they targeted the expensive education sector. Later on ARM also got used for videogame arcades with the 3DO.)

bitwize

Want to give an Amiga user an orgasm? Fuck them gently and at the right moment, nibble their ear and whisper the words "custom chips" into it.

By contrast, there's a story about some Microsoft engineers taking a look at the Macintosh and asking the Apple engineers what kind of custom hardware they needed to pull off that kind of interface. The Apple guys responded, "What are you talking about? We did this all with just the CPU." Minds blown.

The designers of the Mac (and the Atari ST) deserve mad credit for achieving a lot with very little. Even though, yes, the Amiga was way cooler in the late 80s.

fractallyte

And the Amiga was faster than the Mac when emulating a Mac.

I know this first hand, because I got my first email address with CompuServe, running their software under emulation, while using my Amiga's dial-up modem. (I had to sneak the Mac ROM images from the computers at school...)

krige

In one amusing collision of a really bad port and emulation/hardware speed, SimCity 2000 was reported to run better as the ShapeShifter-emulated Mac version than as the native Amiga version, as long as you could meet the hardware requirements.

This was due to several factors, chief of which was that the SC2000 Amiga port was made under extreme time pressure and, probably, very low budget. Later patches alleviated that to some degree, but patching your game in 1993? Who did that? What you got on your floppies was usually what you were stuck with barring some extreme cases of negligence.

o11c

A lot of this really seems dishonest.

"no dynamic linking" (by implementing dynamic linking)

"no zombies" (as long as your programs aren't buggy)

I fail to see any meaningful distinction from what we have today. If it was more reliable, that was down to being smaller in scope and having a barrier to entry.

icedchai

Not to mention any errant program could easily take down the OS because there was no memory protection. Amiga users quickly became familiar with the “guru meditation” screen, which was the system’s BSOD.

jrmg

“Microkernel” (but everything’s in the same address space)

o11c

There is an important distinction between "one address space" and "one set of memory permissions". The former is usually a good idea (for debuggability if nothing else!) if you don't need to support `fork`; the latter is the problem.

On modern Linux systems you can even do separate sets of memory permissions within a single process (and single address space), with system calls needed only at startup; see `pkeys(7)`.

https://www.man7.org/linux/man-pages/man7/pkeys.7.html

(note however that there aren't enough pkeys available to avoid the performance problem every microkernel has run into)
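
For anyone curious what that looks like, here's a hedged sketch using the glibc pkey wrappers described in pkeys(7) (error handling omitted; needs x86 MPK support in the CPU and kernel):

    /* Sketch: separate memory permissions inside one address space using
       Memory Protection Keys. The expensive system calls happen once at
       startup; flipping access later is a userspace register write. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <string.h>

    int main(void)
    {
        /* One-time setup (system calls): allocate a key, tag a page. */
        int pkey = pkey_alloc(0, 0);
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        pkey_mprotect(page, 4096, PROT_READ | PROT_WRITE, pkey);

        /* From here on, no system calls are needed to change access. */
        pkey_set(pkey, PKEY_DISABLE_ACCESS);   /* lock the region */
        /* ... run less-trusted code ... */
        pkey_set(pkey, 0);                     /* unlock it again */
        memset(page, 0, 4096);

        pkey_free(pkey);
        return 0;
    }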

ghusbands

I don't think there's intentional dishonesty in it - the author is just biased toward seeing everything Amiga very positively.