The UCSD p-System, Apple Pascal, and a dream of cross-platform compatibility
80 comments
April 16, 2025
wduquette
The UCSD p-System was amazing. I used it on a Heathkit-branded PDP-11, the Apple II, and an HP-9000 workstation; and though the author doesn't mention it, the first version of Borland's Turbo Pascal for CP/M and DOS had a UI that was clearly influenced by the p-System's UI.
The coolest thing about UCSD Pascal when I first encountered it was that it supported "full screen" programs, notably the system's text editor, via the `gotoxy(x, y)` intrinsic. This procedure moved the cursor to the specified character cell on the terminal. Prior to this I'd only used line-oriented editors.
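From memory (so treat the details as illustrative rather than gospel), a full-screen display boiled down to something like this:

```pascal
program ScreenDemo;
(* Illustrative sketch only: it assumes the UCSD gotoxy(x, y)
   intrinsic, which moves the cursor to column x, row y before the
   next write; whether coordinates were 0- or 1-based varied by
   implementation. *)
var
  row: integer;
begin
  for row := 1 to 10 do
  begin
    gotoxy(10, row);            (* jump straight to a screen cell *)
    write('line ', row)
  end;
  gotoxy(1, 22);                (* park the cursor near the bottom *)
  writeln('done.')
end.
```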
mbessey
I did mention the Turbo Pascal connection briefly, and I'll probably make a more in-depth comparison in a later post on just the IDE.
I used a fairly early version of Turbo Pascal for DOS for several years after High School. I can still remember the absolute terror of realizing you'd pressed "R" without saving first.
wduquette
My bad; I missed the Turbo Pascal reference.
I first heard of Turbo Pascal in a magazine called Profiles, published by Kaypro for owners of their computers; I'd recently gotten a Kaypro 4 running CP/M-80, the first computer of my very own, and I was pining for Apple Pascal/UCSD Pascal. I read the ad (and maybe a review?); it was $49.95, and I ordered it immediately. Nor was I disappointed.
dumdedum123
For a blast from the past
https://www.pcjs.org/software/pcx86/lang/borland/pascal/3.02...
dumdedum123
Oh the memories! You are exactly right. I remember this as well.
sitkack
I never used it, are you saying you could Run the current program and it might accidentally bring your entire system down without having saved the program?
Seems like at least a two file circular buffer with autosave wouldn't take up too much space, or maybe streaming diffs into a compressed buffer (even on a 286, this shouldn't be too much trouble).
mbessey
Yes, that exactly. Part of what made Turbo Pascal so fast was that it kept your entire program, and the compiler, in memory.
You had an option from the main menu to "compile" or "run", which included compiling, but NOT saving your edits first. You could save first, but on a floppy-based system, that could take a while.
I want to say that behavior changed in Turbo Pascal version 2, or 3?
cardiffspaceman
But control-Kdsr saves your work to the device it came from and runs the program. Approximately the WordStar command set with additions for the task at hand.
kragen
I think that is what he is saying, though I can't remember the TP command set well enough.
Turbo Pascal wasn't written on a 286; it was written for CP/M, where I think it required 48KiB of RAM. A "fairly early version of Turbo Pascal for DOS" might have required 64KiB?
You can't really stream things onto a floppy disk (remember that early home computers and the IBM PC didn't have hard disks; they didn't become standard equipment until the late 80s). You have to write a whole sector at a time; seeking the head to the appropriate track can take a second or two, and rotating the disk to the right sector takes a significant fraction of a second. Journaling your edits to a journal file was a feature that EDT on VAX/VMS had around that time, but there wasn't really a practical way to do that on a home computer.
musicale
> Heathkit-branded PDP-11
The idea that you could save money by soldering together your own PDP-11 system from parts, and that there was a company that actually sold the kits (as well as assembled versions), is terrific.
And today (assuming you can find a vintage DCJ11 CPU or equivalent) you still can build your own hardware PDP-11 via PDP-11/Hack and other designs! (Though personally I'll probably go for an FPGA version.)
wduquette
I watched my dad build the PDP-11, terminal, and paper tape reader/punch. Eventually we got a dual-8” floppy drive; he might have built that, too, I don’t remember.
dlinder
Around 1995, our high school "Pascal I" and "Pascal II" classes were taught in a forgotten Apple //e lab in the Math wing of the school. The PC and Mac labs were occupied by typing, word processing, and desktop publishing classes. I think every other kid in class groaned, but to a hamfest scrounger of PDPs, Vaxen, and weird UNIX workstations, UCSD p-System Pascal on Apple hardware was weirdly intriguing, the cherry on top being that the whole lab was served by a Corvus hard disk shared over, I think, an "Omninet" network. We'd all come in, turn on the lights, turn on the computers, and then have the lecture portion of class while this poor early NAS would serve Pascal to 20-odd machines simultaneously. I think we saved our work on floppy disks, though maybe that was a backup, as I think I recall turning in our work by saving to the Corvus? Even at the time, it all had a very "you are living the early experimental days" feeling to it.
icedchai
That brings back memories. My high school also had a Corvus. You could definitely save files to it. I remember writing some BASIC programs, and it would show up as a ProDOS "device" (or maybe it was a volume). That was the first time I saw any type of network.
stevekemp
I "recently" wrote a CP/M emulator, and I have a lot of love for the kinda vintage software out there that still runs on it.
https://github.com/skx/cpmulator/
Over the past few days I've seen posts on Hacker News discussing 6502 assembly, people coming to the Infocom games, and similar things. There's a lot of interest out there in this retro stuff, even now.
mbessey
Surely some of it is just nostalgia for a "simpler" time, but I think there is a legitimate reason to preserve and celebrate these older systems, too.
It's essentially impossible for a single person to build something as complex as a modern PC "from scratch", or indeed to build an operating system that compares to Windows, Linux, or MacOS.
These old microcomputer systems are simple enough for one person or a small team to understand and build, and they are/were capable of doing "useful work", too, without being as over-abstracted as some "teaching systems" are.
I think that for me, part of the point of digging into something like the p-System is to show some of the brilliant (and stupid) ideas that went into building something as ambitious as a "universal operating system" in the mid-1970s.
mst
Having cut my teeth on early Archimedes machines, I have a deep fondness for arm2's 16 instructions and the (lost during a house move, I suspect) assembly book I had that gave me enough of a description of the internals of the chip that I could desk check my assembly in my head with reasonable confidence that I was mentally emulating what the chip was actually doing rather than just what outputs I'd get for a given set of inputs.
Having to remember where I'd put the relevant chunk of assembler any time I needed a division routine was, admittedly, less fun, but the memories remain fond nevertheless :)
WalterBright
I sometimes think about that. Consider the early versions of MS-DOS. A modern programmer could crank that out with little difficulty in a short time.
kragen
I think Tim Paterson did crank it out with little difficulty in a short time? He even called it "Quick and Dirty Operating System".
kragen
Probably what you want to check out is Oberon, which is a modern PC built basically from scratch, along with an operating system that compares to Windows, Linux, or MacOS, built originally not by a single person but by maybe a dozen people. It's capable enough that it was the daily driver for numerous students during the 80s; the earliest versions of it were built in-house by necessity because graphical workstations weren't a product you could buy yet. Wirth's RISC CPU architecture avoids all the braindamage in things like the Z80 and the 80386. I think that, with their example to work from, a single person could build such a thing.
Oscar Toledo G. also wrote a similar graphical operating system in the 01990s and early 02000s, working on the computers his family designed and built (though using off-the-shelf CPUs). You can see a screenshot of the browser at http://www.biyubi.com/art30.html and read some of his reflections on the C compiler he wrote for the Transputer in his recent blog post at https://nanochess.org/transputer_operating_system.html.
There's a lacuna in the recursivity of Wirth's system: although he provides synthesizable source code for the processor (in Verilog, I think) there's no logic synthesis software in Oberon so that you can rebuild the FPGA configuration. Instead you have to use, IIRC, Xilinx's software, which won't even run under Oberon. Since then, though, Claire Wolf has written yosys, so the situation is improving on that front.
CP/M is interesting because it's close to being the smallest system where self-hosted development is bearable; the 8080 is just powerful enough that you can write a usable assembler and WYSIWYG text editor for it. But I don't think that makes it a good example to follow. We saw this weekend that Olof Kindgren's SeRV implementation of RISC-V can be squoze into 5900 transistors (in one-atom-thick molybdenum disulfide, no less) https://arstechnica.com/science/2025/04/researchers-build-a-... https://news.ycombinator.com/item?id=43621378 which is about equivalent to the 8080 and less than the Z80. And Graham Smecher's "Minimax" https://github.com/gsmecher/minimax is only two or three times the size of SeRV and over an order of magnitude faster.
There's no reason to repeat the mistakes Intel made in the 01970s today. We know how to do better!
musicale
> There's no reason to repeat the mistakes Intel made in the 01970s today. We know how to do better!
CP/M, WordStar, and Turbo Pascal were/are pretty good though!
As you suggest, someone really should port an open source FPGA toolchain to Oberon to honor Prof. Wirth's great work.
musicale
I like how your emulator (like RunCPM) can work with native directories and files. It's much more convenient than messing around with disk images.
stevekemp
Thanks! One of my biggest frustrations with the retro-scene is having to deal with old compression-formats, and disk-archives, so that was very much a design choice.
Many of the recent/modern emulation projects work the same way. In addition to RunCPM there's also the excellent rust-based iz-cpm project which I enjoyed studying at times.
sillywalk
In a similar vein, the game Another World[0] had a VM, and the IBM System/38 (aka AS/400 aka iSeries aka System i(5) aka i)[1][2] had/has a Technology Independent Machine Interface that acts as an abstract machine that sits underneath most of the Operating System.
[0] https://fabiensanglard.net/anotherWorld_code_review/index.ph...
[1] https://treasures.scss.tcd.ie/hardware/TCD-SCSS-T.20121208.0...
creeble
VersaCAD was the only commercial program I can remember for p-System. It ran from floppies, or a hard drive that could be set to boot it.
It was a great CAD program, in many ways ahead of AutoCAD in its time. But AutoCAD was written in C, which proved far more popular (and, ultimately, more portable) than UCSD-p.
cduzz
The first Wizardry[1] was built on the P system.
My dad spent a small fortune buying an IBM PC with 544k of memory and an external Davong hard drive that emulated an enormous floppy drive. He also put the UCSD p-System on this beast, along with a Tecmar Graphics Master...
I used the p-System's editor to write school papers for a long time. It was some weird modal editor; he switched to DOS and Turbo Pascal after a while...
I got to use this computer for games when he wasn't using it and found that the _wizardry_ save game disks were formatted in UCSD P system format and I could even noodle around with the save games (mostly resulting in the game crashing).
[1]https://en.wikipedia.org/wiki/Wizardry:_Proving_Grounds_of_t...
musicale
> Get Apple Pascal up and running in some kind of emulator on my Mac, so I can experience it again
I wonder if Lisa Pascal will run in a Lisa emulator...
> Build a p-machine emulator, in Rust
Probably a p-code interpreter and/or p-system VM! (Analogous to the JVM but for Pascal/p-system rather than Java and its bytecode. p-code translator/JIT compiler probably left as an exercise for the reader.) I'm surprised that nobody seems to have written one in JavaScript and/or webassembly... the latter basically being p-code for the 2020s.
mbessey
I haven't seen a web-based p-System, either, which was a little surprising to me. You can run either the Apple or CP/M versions by emulating the entire computer, though.
That is probably why nobody's felt the need to make a p-System for the web.
SomeHacker44
I used this in/around 1982 or 83 on an Apple ][+. I remember hacking Wizardry, my favorite game, and discovering it seemed to run on Apple Pascal as well. Such fun times. I love this hacking project of the OP.
kragen
I used the p-System on a Heathkit H-89.
I think the overall approach of future-proofing your software by compiling it to a simple, portable virtual machine is valid. Since the p-System, in addition to the JVM and Zork Z-machine mentioned in this post, we've seen Smalltalk-80, PostScript, Open Firmware aka OpenBoot, Glulx, the AS/400, the Open Software Foundation's ANDF (the architecture-neutral distribution format), Google's NaCl and pNaCl, Microsoft's CIL, JS as a compilation target, WebAssembly, uxn, and the revival of old video game consoles in emulation as a stable software target.
A problem with this approach is that most of these portable platform layers are still far too unstable for reliable archival; even video game emulators face a constant struggle to maintain compatibility as they are updated to keep up with whatever platform they're running on. Platforms like the JVM, which make more concessions to efficiency than MAME, have even more difficulty, so the JVM's slogan of "write once, run anywhere" was widely mocked as "write once, debug everywhere". But it's a good aspiration. I'd like to see it realized in a practical way.
My memory of the p-System is that it was almost unusably slow, a problem made worse by its filesystem being so simple it didn't support fragmentation, so sometimes you had to defragment your floppy disk in order to write new files onto it. It's true that its UI was screen-oriented, as wduquette said, and it was driven by a Lotus-1-2-3-like menu system, which enhanced its usability quite a lot.
Being a pure bytecode interpreter was a serious handicap, especially on the sub-1-MIPS machines we were running it on. EUMEL managed to make a go of it. I never got a chance to use EUMEL on an actual Z80, but I hear it was usably fast; I suspect the EUMEL virtual-machine instruction set (which included string operations) and operating environment went a long way towards compensating for the slowness of bytecode interpretation, much as Numpy does on CPython today.
I suspect you could have done a better job with a bytecode more like Dalvik, designed for efficient JIT compilation by leaving less work for the JIT compiler. But Deutsch and Schiffman didn't publish the first JIT-compilation paper until a few years after the p-System was released. (Schiffman told me a self-deprecating joke about this which I guess I can't really repeat.)
Long Tien Nguyen and Alan Kay published a paper on designing a very simple virtual machine for such digital preservation 10 years ago: https://tinlizzie.org/VPRIPapers/tr2015004_cuneiform.pdf
I think these ideas point the way to achieving the kind of future-proofness that the p-System was shooting for.
mbessey
Performance of the p-System is definitely an issue on the Apple II, especially in the "OS" interface and editor, which is all interpreted. But running applications built on it wasn't half-bad.
It's also important to remember that to a large extent, Apple Pascal on the Apple II and other late 1970s home computers wasn't competing with sophisticated native-code compiler suites, but with interpreted BASIC and with assembly language.
Pascal was vastly more productive than writing in assembler, and much faster in execution than Apple BASIC. It even had reasonable support for integrating assembly routines for places where you really needed the speed.
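As I remember it, the integration was just an EXTERNAL declaration on the Pascal side, with the routine assembled separately and pulled in by the system linker; a rough sketch (FastFill and its parameters are made-up names):

```pascal
program SpeedDemo;
(* Hypothetical sketch of mixing in assembly for a hot spot.
   FastFill (a made-up name) would be written in 6502 assembly,
   assembled separately, and combined with this program by the
   system linker; on the Pascal side it is just declared EXTERNAL
   and called like any other procedure. *)
procedure FastFill(start, count: integer); external;

begin
  FastFill(0, 1024)             (* the inner loop runs as native code *)
end.
```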
The p-System was A LOT more usable on the Motorola 68k-based HP workstations I used it on. Those were more than adequately fast for the sort of software we were writing for them in 1985.
Thanks for the link to cuneiform, I think I read that paper once, long ago. Will definitely check it out.
kragen
The interpreted BASIC on most home computers was Microsoft BASIC-80, which, as I remember it, was also painfully slow. There were lots of programs in it, and it was good enough for some games, but for the most part "real software" for those computers was written in assembly language. Even Turbo Pascal was written in assembly language, not Pascal.
I think now we know how to do better.
TheOtherHobbes
Computers are more like an assembly of subsystems than a single thing, and you can just about get away with agnostic byte code as long as you ignore most of the subsystems.
So cross-platform byte code is sort of viable on mid-80s text terminal systems with limited memory and addressing. But as soon as you start adding graphics cards, video acceleration, sound, and AI accelerators, you need to add abstraction layers which will be inefficient and limited compared to the hardware.
And if the hardware isn't available you can either say 'This won't run at all' or emulate it in software, which will be even slower.
kragen
There's some truth to that, but I think you're overstating the case. The vast majority of software we run on a day-to-day basis doesn't use any of the stuff you mentioned, so nothing is stopping it from moving to platform-agnostic bytecode, except that that bytecode doesn't exist yet.
For example, 2-D graphics acceleration hardware (whether in the form of character generators or in the form of blitters and line drawing) was really important for usability up to the 01990s. This is a major reason the X-Windows protocol is so big and complicated: it needed a way to expose the acceleration capabilities of the hardware to applications so they could draw with it. This was, as you said, "inefficient and limited compared to the hardware". But basically, as CPUs got faster, we gave up on all that stuff around the turn of the millennium, and now most 2-D applications really just want to fill up pixel buffers and swap the displayed buffer between screen refreshes. It's a very, very simple interface (you might say "inefficient abstraction layer").
Something similar happened with sound. In the 80s and 90s our sound cards did square and sawtooth waves, LFSR noise generation, envelopes, FM synthesis, wavetable synthesis, etc. Different sound cards had different instruments! Now all I want to do with my sound card is send it a sequence of samples, maybe get back a sequence of samples from the microphone, maybe choose from among multiple outputs. Another very, very simple interface.
3-D games do still use 3-D acceleration, of course. That's not a simple interface. They also depend pretty heavily on SIMD instructions. The same is true of video codecs.
But my SSH client, my mail server, my IRC client, my text editor, my compiler, my filesystem, my Game of Life simulator, my system logger, my Sudoku game, my circuit design program (KiCad), my PDF viewer, my audio editor (Audacity), and so on — those aren't using "graphics cards, video acceleration, sound, and AI accelerators", except through the very, very simple interfaces we're talking about above. Most of them don't even use floating-point math! They could easily be in platform-agnostic bytecode because they already do ignore most of the subsystems in my computer.
wahern
> This is a major reason the X-Windows protocol is so big and complicated
X was trying to, in a sense, remote hardware acceleration. Wayland doesn't bother at all; clients render their windows locally and share (or send) a pre-rendered graphic. But if you use an older X app across the network, such as one that uses server-side fonts, the experience is often much smoother, IME, than the Wayland-universe alternatives.[1] Once upon a time even web browsers, like ancient versions of Netscape, were shockingly responsive over the network, even with mixed text and graphics; almost indistinguishable from local (and this at a time when X11 on a 486 was a smoother experience than Windows). The popular toolkits now render the window on the client even when using X, so those capabilities are largely unused today.
[1] In that case, the composition and rendering of all the widgets, text, and images within a window is truly local, i.e. on your local X server.
ahefner
Not ideal, but MS-DOS seems to me like the most practical universal software platform. DOSBOX isn't going anywhere.
kragen
It is of some practical use, but there are a lot of slightly incompatible versions of the IBM PC and of MS-DOG, so it doesn't offer the kind of strong reproducibility that I'm looking for.
kwertyoowiyop
Just think, Pascal on an Apple II cost about $1,800 in today’s dollars.
timbit42
Was that before or including the extra hardware (RAM) to run it?
whartung
From the Terak Museum[0] from the Terak thread[1] there was this anecdote
> What does the Terak have to do with the Macintosh and MacPaint? The Macintosh's operating system was bootstrapped on an Apple Lisa computer. The Lisa's OS was written on the Lisa using a port of the UCSD Pascal compiler and P-System. The Lisa's port of the P-System was prepared on an Apple II, which had its own version of the P-System that was developed by Bill Atkinson, the Apple programmer who later wrote MacPaint. Atkinson ported the P-System to the Apple II while visiting UCSD, who helped Apple with the port using a Terak. Some people think he got the idea for MacPaint from the paint programs he saw in use on the graphics-intensive, square-pixel Terak. Thanks in part to Gary Capell (gary@cs.su.oz.au) for parts of this story.
It's a great anecdote. And it also makes one think "They wanted to use Pascal so badly, they were willing to use UCSD Pascal for it." UCSD Pascal was a wonder and a pioneer. Unfortunately, it was in the early era of microcomputers, when microcomputers were, frankly, horrible. File this anecdote under "it's amazing we managed to get any software written at all" back then.
In the P-System, you had the core VM, and everything else was compiled into that P-code. The shell, the compiler, the file utilities, everything. Again, it's a marvel. It's (almost) self-hosted (did it come with an assembler? I don't recall). But if you had a VM running, everything else was self-hosted. The compiler compiled itself, being written in UCSD Pascal. A marvel, yes; speedy, not so much. It certainly qualified as "better than nothing".
It had a lousy file system. Files had to be contiguous, which makes it difficult to write to more than one file at a time. Compacting the disk was a routine process. The editor was also very interesting: text files naturally compressed leading whitespace, which matters with Pascal source code, where whitespace is probably 20-30% of the space on disk.
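If I remember the .TEXT format right (details may be off), the whitespace trick was just run-length encoding the indentation: a line could begin with a DLE byte followed by a byte holding 32 plus the number of leading spaces. Roughly:

```pascal
program IndentDemo;
(* Sketch, from memory, of the leading-blank compression in UCSD
   .TEXT files: a line starting with DLE (byte value 16) is followed
   by one byte equal to 32 + the number of leading spaces.  The real
   format also packs lines into 1KB pages; that part is omitted here,
   and details may differ. *)
const
  DLE = 16;

function IndentOf(first, second: integer): integer;
begin
  if first = DLE then
    IndentOf := second - 32     (* decode the run-length byte *)
  else
    IndentOf := 0               (* line stored literally *)
end;

begin
  writeln('DLE followed by 48 means ', IndentOf(DLE, 48), ' leading spaces')
end.
```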
However, on the other hand, the runtime was quite sophisticated. P-Code was position independent, so it could read, run, and flush code at will. The code was segmented into chunks, "overlays" being the norm. But as a user of the code, it was mostly invisible to you (you know, the way RPC is invisible). If you look at the original Macintosh memory and resource manager, and how it stored applications in segments, you can see the lineage straight from what UCSD was doing back in 1977. And, of course, UCSD Pascal had that novel feature of being an actual, usable Pascal for, well, demonstrable system level programming, and also large system design through the use of Units and such. Novel at the time.
The real shame of the UCSD ecosystem, specifically today, was when UCSD licensed it to SofTech (something like that), who came out with P-System IV (P-III was a unique port to, I think, a Sage 68K machine). The ship had sailed for "but it's not DOS" types of systems that SofTech was trying to squeeze UCSD back into. But it had some cool features; notably, it supported co-routines.
But, while P-System 1.5 and P-System II are all flying free around the interwebs, P-IV is not.
It IS well documented, but whoever owns SofTech today hasn't released the legacy stuff to the world.
Also, as a shout out to the Blog author. Do a search for, I think, the game SunDog. This was written in P-System IV, and they have a P-machine in C (or C++) that you can look at.
The part of the P-Machine I haven't quite grokked is the way it handles stack frames. Because Pascal allows nested functions (and scopes), it had primitives to access "variable 3, 4 stack frames up" kinds of things, so it's a bit of a maze (plus the first-class support for the segments in the runtime as well). I was looking at trying to port it to the 65816. The P-System would naturally work well with a 128K 65816 using their data bank model: you could have the runtime in its own 64K bank, the P-Code in its own, and then 64K of data RAM in a third. I thought that would be a neat '816 project.
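For anyone who hasn't stared at this: the Pascal feature behind that addressing mode is nested procedures. A rough illustration (the names are mine, and I'm deliberately not naming opcodes, since they differ between p-machine versions) of the kind of access the interpreter resolves by walking up the chain of enclosing frames:

```pascal
program StaticLinks;
(* Inner touches its own local, a variable one lexical level up in
   Outer's frame, and a variable at the outermost level.  This is the
   "variable n, k frames up" addressing that the p-machine (and any
   65816 port) has to support by following the links between
   enclosing stack frames. *)
var
  grandTotal: integer;              (* outermost lexical level *)

procedure Outer;
var
  total: integer;                   (* one level in *)

  procedure Inner(n: integer);
  var
    doubled: integer;               (* innermost level *)
  begin
    doubled := 2 * n;
    total := total + doubled;       (* one enclosing frame up *)
    grandTotal := grandTotal + doubled
  end;

begin
  total := 0;
  Inner(3);
  Inner(4);
  writeln('total = ', total)
end;

begin
  grandTotal := 0;
  Outer;
  writeln('grandTotal = ', grandTotal)
end.
```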
[0] https://www.threedee.com/jcm/terak/index.html [1] https://news.ycombinator.com/item?id=43708726
mbessey
I will definitely check out SunDog, if I can find it. I haven't yet decided whether I'm making a VM for version II or version IV, or both. I want to be able to run code from Apple Pascal directly, so I will likely start from II.5.
An interesting piece of trivia about the Apple III version of Apple Pascal that I learned from this discussion is that it apparently puts p-code and data in their own 64k segments, like you were talking about for your 65816 version.
whartung
A nit about Apple Pascal is that it did rely on some custom machine-language routines, but it may have been just for graphics.
That’s interesting about the Apple ///. Honestly don’t know much about that machine.
If nothing else, if you can hunt down SunDog, it can show the p-machine in something higher level than 6502 or Z80. But the IV machine is pretty different than the 1.5-2 machines.
One problem I had getting started was just trying to figure out how to read the floppy images that are available. I obviously didn't try really hard, but it was enough to take the wind out of my sails at the time. Udo Munk has some nice images on his z80pack site.
nxobject
Apropos of that: I know QEMU has an extensive hardware emulation library, but it shouldn't be taken for granted – Apple M-series support isn't quite there (here's a console-only solution [1]), and it would be a significant platform to lose emulation for.
wahern
What's the relevance of Docker here? It's not mentioned in the article, and more generally I can't think of cases where Docker would help with backward compatibility, except perhaps making it easier to, e.g., handle old code with hardcoded paths (i.e. a fancier chroot).
kragen
Like any chroot, Docker images include all your library dependencies, which keeps your code from being broken by library upgrades — or from having its security vulnerabilities closed.
The UCSD "Computer Scientists" were a small group of undergraduates working in Ken Bowles' lab. We were supposedly following Professor Bowles' directions, but he was a fairly conservative physicist and we had lots of radical ideas - fortunately he was tolerant. The p-code was not just machine independent - by careful design it was approximately 1/4 the size of native code on those early 8- and 16-bit microprocessors, allowing us to effectively almost quadruple the amount of code we could fit in 64K - minus the interpreter, which was 8K of machine code, and minus another 8K on PDP-11s for I/O space. We would also use native code for hotspots without appreciably expanding code size. This key idea is what allowed us to have a high-level OS and development environment on those dinky machines when everyone else was compromising quality to get things to fit. Alas, copyleft had not yet been invented; the UC sold the P-System, and we lost legal access to the code we'd written.