
NetBSD on a JavaStation

75 comments

March 5, 2025

deadlyllama

I remember when Java was exciting. There were several attempts at open source Java OSes like JOS (https://jos.sourceforge.net/). A Java applet runtime for the PalmPilot. My thesis on dynamic aliasing protection was based on a dynamic Java-esque runtime. But... Java got a reputation for being heavyweight.

And yes, as others have said, instead we got the modern web, with (for example) web based word processors requiring orders of magnitude more compute power than a desktop of the early Java era.

okeuro49

I can remember trying to run applets on a consumer machine.

It wasn't a good experience.

In the meantime, computers became fast enough to run the modern web. The average phone can run tens of these web-based word processors.

jeroenhd

Web applets were a terrible experience all round. Downloaded JAR files usually just worked, though. The GUI looked odd because it wasn't using normal operating system controls, but in terms of performance it was no slower than any native program except for in the most extreme cases.

Java on the web was pretty terrible from beginning to end, but The Java Web could've worked.

Now that we have the web, we're moving back to the Javaverse in the form of apps (which, on Android, are actually Java(-like)). Every big website has one of those "for the full experience, download our app" banners. Other sites use WASM to bring back the Java applet days, now without a third-party plugin full of security holes. Google Docs renders to a virtual canvas in the browser in the same way an applet would've back in 2003, except it would've been able to open files directly from the file system.

And lo and behold, the new system is also a terrible experience.

mark_undoio

> Google Docs renders to a virtual canvas in the browser in the same way an applet would've back in 2003, except it would've been able to open files directly from the file system.

I'd have said the situation back then was a bit better than that - a Java applet wouldn't have been able to access your filesystem by default, for instance.

Part of the benefit of Sun's Java was that the bytecode itself could be statically verified to only have good behaviour and the plugin would then sandbox what it could access at runtime. The plugin itself would obviously have had bugs - like all software - but it's not obvious to me that was intrinsically worse than having all that code as part of the browser (as we do now).

I'd contrast it with ActiveX, which was very free about what its applets could do (basically just Windows executable code, I think). Flash I'm less clear on the limitations of.

We have moved on in other ways, of course - browsers are architected to isolate processes more, including use of things like seccomp.

danieldk

> but in terms of performance it was no slower than any native program except for in the most extreme cases.

Java applications were really slow, and certainly much slower than native programs, until HotSpot became the default in J2SE 1.3. It's distant history now, but I remember a lot of excitement about Java in 1996 (compile once) and then disappointment at how slow it was.

(After some iterations HotSpot became a really good JIT compiler.)

_glass

To be fair, Java Swing was my first GUI programming experience, and it's still the best I've had. For desktop apps with fast iteration, no budget, and run-anywhere portability, it's basically Swing or Electron.

mynameajeff

Love digging around projects like JOS. I had never heard of it before, and there really doesn't seem to be much else online about it beyond the info that can be found from that link. There's always something melancholy about retroactively watching a project like JOS have such a swarm of activity and then just quietly and unceremoniously die off.

pjmlp

Don't forget Electron mess.

tomaytotomato

> Hard as it may be to imagine, there was a time when Java was brand new and exciting. Long before it became the vast clunky back-end leviathan it is today, it was going to be the ubiquitous graphical platform that would be used on everything from cell phones to supercomputers: write once, run anywhere.

As someone who started their software career at Java version 8, I wouldn't say the trend in Java has been to become more clunky.

If we separate frameworks from the core libraries of Java, it's more modular and has better functionality for things like Strings, Maps, Lists, switch statements, resource (file, HTTP) access, etc.

For frameworks we have Spring Boot, which can be as clunky or as thin as you want for a backend.

For IC cards, and small embedded systems, I can still do that in the newer versions of Java just with a smaller set of libraries.

Maybe the author is nostalgic for that era (which I didn't experience - I was busy learning how to walk), but Java can do all the things JDK version 1 could, and so much more. No?

seabrookmx

I don't think the comparison is new Java to old Java; I think it's Java vs. its competitors.

When Java was new, scripting/dynamic languages hadn't matured enough to be true competitors so you were left with C/C++, Delphi and the like. In that landscape, Java is beyond exciting.

Nowadays there are so many alternatives that didn't exist then. And it's not debatable that many of those languages (Dart, C#, Typescript, Kotlin) move faster when it comes to language features. Whether you want/need them is subjective, sure. But back in the day Java was that hot, fast moving language.

MisterTea

> write once, run anywhere.

Was such a great promise. I remember visiting PC Expo in the late '90s, and Sun's booth had a Java demo running on three machines: Linux x86, Windows x86 and Solaris SPARC (OS X hadn't even been revealed yet). You could run a few demos selected from a menu, one of which was a 3D ship with accelerated OpenGL, which really thrilled me - cross-platform everything, even CAD and gaming. Amazing! The future is finally here.

And it never happened. Bummer. Instead we got a badly hacked-up hypertext viewer with various VMs duct-taped to the sides.

chasil

"Thankfully, despite its age and total lack of security, NFS is still well supported under Linux."

NFSv4 can run over TCP, which means that any encrypted wrapper can carry it. While SSH port forwarding can be used, stunnel is a better fit for batch environments. Wireguard is another option from this perspective.
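The stunnel arrangement suggested above can be sketched roughly as follows. All hostnames, ports, and certificate paths here are hypothetical placeholders, not anything from the article; consult the stunnel documentation for a production setup:

```ini
; /etc/stunnel/nfs.conf -- hypothetical names throughout
; Server side: terminate TLS and hand cleartext to the local NFSv4 daemon
[nfs-server]
accept  = 3050
connect = 127.0.0.1:2049
cert    = /etc/stunnel/nfs.pem

; Client side: expose a local cleartext port, TLS to the server's stunnel
[nfs-client]
client  = yes
accept  = 127.0.0.1:3049
connect = nfs.example.com:3050
```

The client then mounts through the tunnel, e.g. `mount -t nfs4 -o port=3049 127.0.0.1:/export /mnt`. In practice the two `[service]` sections would live in separate stunnel configuration files on the two hosts.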

Encrypted ONC RPC works at a lower level of the protocol to secure NFS, which is documented in RFC-9289.

Obviously, none of this will help with a machine using RARP and TFTP over 10baseT.

DonHopkins

NFS originally stood for "No File Security".

https://news.ycombinator.com/item?id=33384073

Speaking of YP (which I always thought sounded like a brand of moist baby poop towelettes), BSD, wildcard groups, SunRPC, and Sun's ingenuous networking and security and remote procedure call infrastructure, who remembers Jordan Hubbard's infamous rwall incident on March 31, 1987?

https://news.ycombinator.com/item?id=31822138

EvanAnderson

> NFS originally stood for "No File Security"

I often heard a different "F" word in that acronym in place of "File".

hiAndrewQuinn

>The Java-chip thing proved more difficult to realize than anticipated

I've been very slowly upping my Java-fu over the past year or so to crack into the IC market here in the Nordics. Naturally I started by investigating the JVM and its bytecode in some detail. It may surprise a lot of people to know that the JVM's bytecode is actually very, very much not cleanly mappable back to a normal processor's instruction set.

My very coarse-grained understanding is: if you really want to "write once, run anywhere", and you want to support more platforms than you can count on one hand, you eventually need something like a VM somewhere in the mix just to control complexity. Even more so if you want to compile once, run anywhere. We're using VM here in the technical sense, not the VirtualBox one - SQLite implements a VM under the hood for partly the same reason. It smooths out the cross-compilation and cross-execution story a lot, for a lot of reasons.

More formally: SQLite compiles every SQL statement into bytecode which gets run atop the Virtual DataBase Engine (VDBE). If you implement a VDBE on a given platform, you can copy any SQLite database file over and then interact with it through that platform's `sqlite3`, no matter which platform it was originally built on. Sound familiar? It's rather like the JVM and JAR files, right?

Once you're already down that route, you might decide to implement things like automatic memory management at the VM level, even though no common hardware processor I know of has a native instruction set that reads "add, multiply, jump, traverse our object structure and figure out what we can get rid of". The VDBE pulls this kind of hat trick too with its own bytecode, which is why we similarly probably won't ever see big hunking lumps of silicon running SQLiteOS on the bare metal, even if there would be theoretical performance enhancements thataways.
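The VDBE bytecode described above is easy to inspect: SQLite's EXPLAIN (as opposed to EXPLAIN QUERY PLAN) dumps the opcodes a statement compiles down to. A minimal sketch using Python's bundled sqlite3 module:

```python
import sqlite3

# EXPLAIN lists the VDBE opcodes a SQL statement compiles to
# before it is executed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1), (2), (3)")

rows = conn.execute("EXPLAIN SELECT x + 1 FROM t").fetchall()
opcodes = [row[1] for row in rows]  # columns: addr, opcode, p1..p5, comment

# Every compiled statement is a small bytecode program: it ends in Halt,
# and ResultRow emits each output row -- much like a JVM method body.
print(opcodes)
```

The exact opcode list varies between SQLite versions, but a `ResultRow` and a terminating `Halt` should always appear.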

(I greatly welcome corrections to the above. Built-for-purpose VMs of the kind I describe above are fascinating beasts and they make me wish I did a CS degree instead of an EE one sometimes.)

tabony

It's not directly mappable to a register-based microprocessor, but it is directly mappable to a stack-based microprocessor.

e.g. The PSC 1000 microprocessor (1994) could run Java directly: https://en.wikipedia.org/wiki/Ignite_(microprocessor)

Stack-based microprocessors tend to perform worse than register-based ones, and I assume there wasn't a huge reason to develop a Java-on-chip for a "Java computer": (1) it would not have run non-Java software easily, and (2) the future of stack-based microprocessors wasn't as bright.

sillywalk

Sun also produced the MAJC[0] (Microprocessor Architecture for Java Computing) processor, a VLIW design. It was only used in one of Sun's graphics boards.

[0] https://en.wikipedia.org/wiki/MAJC

jraph

> It may surprise a lot of people to know that the JVM's bytecode is actually very, very much not cleanly mappable back to a normal processor's machine code or instruction set

I believe this is a very sensible decision: being too close to one real architecture would tie the bytecode to similar architectures and leave it little better than compiling to an actual architecture.

The bytecode being abstract enough is likely a good thing to be able to achieve okay performance everywhere. Like, you wouldn't want the bytecode to specify a fixed number of registers.

What may also surprise many people thinking Java is a bloated language is that the Java bytecode is actually quite simple, straightforward to understand, clean and also very well documented. It's an interesting thing to look into, even for someone not involved day to day in some Java.

geokon

I remember learning in school that it's a relatively simple stack machine, but when I look at the instruction set online it's actually ~200 opcodes.

Not what I'd describe as "simple, straightforward"

renewedrebecca

That's fewer opcodes than a 6502 or Z80 microprocessor.

ielillo

IIRC the original Java VM was a stack-based machine. That made sense when it was first created, since a stack machine is about the simplest system you can build that runs code: it only needs three registers - one for the instruction, one for the first operand, and one for the top of the stack holding the other operand. The problem is that you need to push and pop a lot at runtime, which means more memory accesses and more time spent shuffling data than doing actual operations. It also underutilizes the processor's registers, since on a normal processor you would be using two data registers at most. This was one of the early reasons Java ran slowly on Android, and the reason for the creation of the Dalvik VM, which was register-based.
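The push/pop traffic described above is easy to see in a toy interpreter. A minimal sketch in Python (the opcode names are made up for illustration, not real JVM bytecode):

```python
# A minimal stack machine: every operand moves through the stack,
# which is why naive interpretation costs so many memory accesses.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":        # load a constant onto the stack
            stack.append(args[0])
        elif op == "add":       # two pops and a push per arithmetic op
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4: five instructions, but eight stack accesses
print(run([("push", 2), ("push", 3), ("add",),
           ("push", 4), ("mul",)]))  # -> 20
```

A register VM like Dalvik encodes the operand locations into the instruction itself, trading larger instructions for far fewer memory touches per operation.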

geokon

Naive question: if the opcodes are the same, how can you go from a stack machine to a register one?

jerf

You compile the stack based code into register code. It is, of course, easier to say than to do, but it is within the range of a skilled team, not absurdly complicated.

You can think about it on a really small scale:

    PUSH 1
    PUSH 2
    PUSH 3
    ADD
    PUSH 4
    MULT
    ADD
is not that hard to conceptually rewrite into

    STORE 1, r1
    STORE 2, r2
    STORE 3, r3
    ADD r2, r3 INTO r2
    STORE 4, r3
    MULT r2, r3 INTO r2
    ADD r1, r2 INTO r1
Of course, from there you have to deal with running out of registers, and then you're going to want to optimize the resulting code (for instance, small numbers like these can generally fit into the opcodes themselves, so we can optimize away all the STORE instructions easily in most if not all assembly languages). But again, this is all fairly attainable code for developers with the right skills, not pie-in-the-sky stuff. Compiler courses don't normally deal directly with this exact problem, but by the time you finish one you'd know enough to tackle it, since the problems compiler courses do deal with are more or less a superset of this one.
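The hand translation above can be mechanized: simulate the stack at compile time, mapping each stack slot to a register and recycling registers as slots are popped. A toy translator in Python (ignoring register spilling and constant folding):

```python
def to_register_code(program):
    """Compile stack ops to three-address register code by tracking,
    at compile time, which register holds each stack slot."""
    out, stack, free = [], [], []
    next_reg = 1

    def alloc():
        nonlocal next_reg
        if free:                      # recycle a register freed by a pop
            return free.pop()
        reg = f"r{next_reg}"
        next_reg += 1
        return reg

    for op, *args in program:
        if op == "PUSH":
            reg = alloc()
            out.append(f"STORE {args[0]}, {reg}")
            stack.append(reg)
        else:                         # binary op: pop two, reuse the lower slot
            b, a = stack.pop(), stack.pop()
            out.append(f"{op} {a}, {b} INTO {a}")
            free.append(b)
            stack.append(a)
    return out

prog = [("PUSH", 1), ("PUSH", 2), ("PUSH", 3), ("ADD",),
        ("PUSH", 4), ("MULT",), ("ADD",)]
print("\n".join(to_register_code(prog)))
```

On this input the translator reproduces the hand-written register sequence above exactly; a real system (HotSpot's JIT, or Android's dex tooling) then layers register spilling and optimization on top of the same basic idea.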

jonjacky

This 20-page paper on the Java Optimized Processor is quite interesting. It has been implemented on FPGAs:

https://www.jopdesign.com/doc/rtarch.pdf

via https://www.jopdesign.com/ which includes a link to the github repo --- with 20-year-old files!

bitwize

It's not common, as only one was ever made, but the Lisp processor described in Sussman and Steele's paper "Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode", had built-in, hardware-implemented garbage collection.

I was once at a meetup for Lisp hackers, discussing something or other with one of them, who referred to Lisp as a "low-level language". When I expressed some astonishment at this characterization, he decided I needed to be introduced to another hacker named "Jerry", who would explain everything.

"Jerry" turned out to be Gerald Sussman, who very excitedly explained to me that Lisp was the instruction set for a virtual machine, which he and a colleague had turned into an actual machine, the processor mentioned above.

hiAndrewQuinn

Indeed, the old Lisp machines were exactly what I was thinking of as the possible exception here.

DonHopkins

https://news.ycombinator.com/item?id=37130128

Lynn Conway, co-author along with Carver Mead of "the textbook" on VLSI design, "Introduction to VLSI Systems", created and taught this historic VLSI Design Course in 1978, which was the first time students designed and fabricated their own integrated circuits:

>"Importantly, these weren’t just any designs, for many pushed the envelope of system architecture. Jim Clark, for instance, prototyped the Geometry Engine and went on to launch Silicon Graphics Incorporated based on that work (see Fig. 16). Guy Steele, Gerry Sussman, Jack Holloway and Alan Bell created the follow-on ‘Scheme’ (a dialect of LISP) microprocessor, another stunning design."

[...]

https://news.ycombinator.com/item?id=29953548

The original Lisp badge (or rather, SCHEME badge):

Design of LISP-Based Processors or, SCHEME: A Dielectric LISP or, Finite Memories Considered Harmful or, LAMBDA: The Ultimate Opcode, by Guy Lewis Steele Jr. and Gerald Jay Sussman, (about their hardware project for Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course) (1979) [pdf] (dspace.mit.edu)

http://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.p...

I believe this is about the Lisp Microprocessor that Guy Steele created in Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

My friend David Levitt is crouching down in this class photo so his big 1978 hair doesn't block Guy Steele's face:

The class photo is in two parts, left and right:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

Here are hires images of the two halves of the chip the class made:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in. David's project is in the lower right corner of the first image, and you can see his name "LEVITT" if you zoom way in.

Here is a photo of a chalkboard with status of the various projects:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

The final sanity check before maskmaking: A wall-sized overall check plot made at Xerox PARC from Arpanet-transmitted design files, showing the student design projects merged into multiproject chip set.

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

One of the wafers just off the HP fab line containing the MIT'78 VLSI design projects: Wafers were then diced into chips, and the chips packaged and wire bonded to specific projects, which were then tested back at M.I.T.

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

Design of a LISP-based microprocessor

http://dl.acm.org/citation.cfm?id=359031

https://donhopkins.com/home/AIM-514.pdf

Page 22 has a map of the processor layout:

https://donhopkins.com/home/LispProcessor.png

We present a design for a class of computers whose “instruction sets” are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures. We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.

Here's a map of the projects on that chip, and a list of the people who made them and what they did:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

1. Sandra Azoury, N. Lynn Bowen Jorge Rubenstein: Charge flow transistors (moisture sensors) integrated into digital subsystem for testing.

2. Andy Boughton, J. Dean Brock, Randy Bryant, Clement Leung: Serial data manipulator subsystem for searching and sorting data base operations.

3. Jim Cherry: Graphics memory subsystem for mirroring/rotating image data.

4. Mike Coln: Switched capacitor, serial quantizing D/A converter.

5. Steve Frank: Writeable PLA project, based on the 3-transistor ram cell.

6. Jim Frankel: Data path portion of a bit-slice microprocessor.

7. Nelson Goldikener, Scott Westbrook: Electrical test patterns for chip set.

8. Tak Hiratsuka: Subsystem for data base operations.

9. Siu Ho Lam: Autocorrelator subsystem.

10. Dave Levitt: Synchronously timed FIFO.

11. Craig Olson: Bus interface for 7-segment display data.

12. Dave Otten: Bus interfaceable real time clock/calendar.

13. Ernesto Perea: 4-Bit slice microprogram sequencer.

14. Gerald Roylance: LRU virtual memory paging subsystem.

15. Dave Shaver: Multi-function smart memory.

16. Alan Snyder: Associative memory.

17. Guy Steele: LISP microprocessor (LISP expression evaluator and associated memory manager; operates directly on LISP expressions stored in memory).

18. Richard Stern: Finite impulse response digital filter.

19. Runchan Yang: Armstrong type bubble sorting memory.

The following projects were completed but not quite in time for inclusion in the project set:

20. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: In addition to project 1 above, this team completed a CRT controller project.

21. Martin Fraeman: Programmable interval clock.

22. Bob Baldwin: LCS net nametable project.

23. Moshe Bain: Programmable word generator.

24. Rae McLellan: Chaos net address matcher.

25. Robert Reynolds: Digital Subsystem to be used with project 4.

Also, Jim Clark (SGI, Netscape) was one of Lynn Conway's students, and she taught him how to make his first prototype "Geometry Engine"!

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

Just 29 days after the design deadline time at the end of the courses, packaged custom wire-bonded chips were shipped back to all the MPC79 designers. Many of these worked as planned, and the overall activity was a great success. I'll now project photos of several interesting MPC79 projects. First is one of the multiproject chips produced by students and faculty researchers at Stanford University (Fig. 5). Among these is the first prototype of the "Geometry Engine", a high performance computer graphics image-generation system, designed by Jim Clark. That project has since evolved into a very interesting architectural exploration and development project.[9]

Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford University):

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

[...]

The text itself passed through drafts, became a manuscript, went on to become a published text. Design environments evolved from primitive CIF editors and CIF plotting software on to include all sorts of advanced symbolic layout generators and analysis aids. Some new architectural paradigms have begun to similarly evolve. An example is the series of designs produced by the OM project here at Caltech. At MIT there has been the work on evolving the LISP microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine, done as a project for MPC79, has gone on to become the basis of a very powerful graphics processing system architecture [9], involving a later iteration of his prototype plus new work by Marc Hannah on an image memory processor [20].

[...]

For example, the early circuit extractor work done by Clark Baker [16] at MIT became very widely known because Clark made access to the program available to a number of people in the network community. From Clark's viewpoint, this further tested the program and validated the concepts involved. But Clark's use of the network made many, many people aware of what the concept was about. The extractor proved so useful that knowledge about it propagated very rapidly through the community. (Another factor may have been the clever and often bizarre error-messages that Clark's program generated when it found an error in a user's design!)

9. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No. 7, July, 1980.

[...]

The above is all from Lynn Conway's fascinating web site, which includes her great book "VLSI Reminiscence" available for free:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

These photos look very beautiful to me, and it's interesting to scroll around the hires image of the Quux's Lisp Microprocessor while looking at the map from page 22 that I linked to above. There really isn't that much to it, so even though it's the biggest one, it really isn't all that complicated, so I'd say the "SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can actually see the rough but semi-regular "texture" of the code!)

This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind of stuff like I am:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

A full color hires image of the chip including James Clark's Geometry Engine is on page 23, model "MPC79BK", upside down in the upper right corner, "Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on page 27.

Is the "document chip" on page 20, model "MPC79AH", a hardware implementation of Literate Programming?

If somebody catches you looking at page 27, you can quickly flip to page 20, and tell them that you only look at Vintage VLSI Porn Magazines for the articles!

There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so who knows what else you might find in there by zooming in and scrolling around stuff like the "infamous buffalo chip"?

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

naikrovek

I remember seeing a Java microprocessor for sale years ago. It claimed that the CPU's native instruction set was Java bytecode.

I can't find that exact microcontroller that I remember, I think the domain is gone, but there are other things like this, including some FPGA cores which make the same claim that I remember from that microcontroller I read about in the early 2000s. I wonder how those would perform compared to a JVM running on a traditional instruction set on the same FPGA.

aleph_minus_one

> I remember seeing a Java microprocessor for sale years ago. It claimed that the CPUs native instruction set is Java bytecode.

Could it be some older ARM core supporting Jazelle?

> https://en.wikipedia.org/wiki/Jazelle

concretely, possibly an ARM926EJ-S?

> https://en.wikipedia.org/wiki/ARM9#ARM9E-S_and_ARM9EJ-S

Various other "Java processors" are listed on

> https://en.wikipedia.org/wiki/Java_processor

dleslie

> After many months of searching I found a Mr Coffee JavaStation for sale in Canada; unfortunately the seller only accepted payments through a Canadian banking service which is pretty much inaccessible outside Canada.

If they mean Interac E-Transfers, then their inability to access it may have prevented them from running afoul of a common scam. Online classified ads will offer desirable items that are also often expensive and niche, and will ask the would-be purchaser to pay for it via an e-Transfer. And then you never hear from them again.

Always ensure the product exists, or the service is rendered, before using Interac E-transfer.

https://www.getcybersafe.gc.ca/en/e-transfer-fraud-protect-y...

toast0

Only delayed. Eventually they had a friend move to Canada in order to straw purchase the JavaStation on their behalf. (Maybe there were other motivations for moving to Canada, like ketchup chips)

dleslie

Ah, I didn't read much past what I quoted; I became distracted.

486sx33

Maybe you’re missing part of the point… you can’t send an Interac transfer from a US bank account, so unless you have a Canadian bank account, you can’t do it!

mardifoufs

Yes, but usually listings that ask for Interac e-transfers are a scam in the first place! They are basically impossible to reverse, so scammers really like them. So even if they had access to Interac transfers, they probably shouldn't have bought the listed item anyway.

ephaeton

I dearly remember setting up NetBSD on various SPARCstations and UltraSPARCs (a II, and an Ultra 60) and running them alongside a set of various other RISCs and CISCs of the late 90s. Based on the paper 'attack of the lemmings' (IIRC) by matthias something (IIRC), I wanted to create a 'how to portably code C' course that would run with just the basic NetBSD tools - compiler, editor, test system, make, ... - write once, commit, and have the whole weird-ass machine park respond to the unit test for a given exercise. Sadly I never fully made it happen. Still - NetBSD! Fun times, great documentation and such a knowledgeable crowd! Enjoy the voyage!

chasil

I am assuming that the major reason you wanted to do this is that SPARC is big-endian. It matches the native byte order of TCP/IP, so the hton/ntoh macros are no-ops at the socket level in C.
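The hton/ntoh point is easy to demonstrate: network byte order is big-endian, so on a big-endian host like SPARC the conversion is the identity, while a little-endian host has to byte-swap. A quick check in Python:

```python
import socket
import struct
import sys

n = 0x01020304

# Network byte order is big-endian: most significant byte first.
assert struct.pack("!I", n) == bytes([0x01, 0x02, 0x03, 0x04])

# On a big-endian host htonl is the identity; on a little-endian
# host (e.g. x86) it reverses the bytes.
if sys.byteorder == "big":
    assert socket.htonl(n) == n
else:
    assert socket.htonl(n) == 0x04030201

print(sys.byteorder, hex(socket.htonl(n)))
```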

NetBSD can run big-endian on Raspberry Pis. That is a much easier platform to obtain and configure than SPARC.

The targets appear to be earmv7hfeb and aarch64eb.

https://wiki.netbsd.org/ports/evbarm/

ephaeton

yeah, machines of different endianness and, ideally, different alignment requirements. Always wanted to get an Alpha as well. Had hpux / hp300 ?, sparc, sparc64, 386, x86_64, maybe another arch. This was in 2005-ish, mind you. The idea was to write code that would portably work on Linux and NetBSD on at least said architectures, ideally more.

DonHopkins

I was having lunch with some hardware designers from SGI and Sun, and the SGI people mentioned jokingly that the MIPS could be both big-endian and little-endian, which they called SPIM. Then they pointed out, much to the embarrassment of the Sun people (including me at the time), that the little-endian version of the SPARC would be called CRAPS.

markus_zhang

> Sun’s bootloader environment from that period was called OpenBoot, and consisted of a FORTH interpreter, from which you can interrogate the device tree and pretty much do whatever you want.

This sounds interesting. I have read quite a few FORTH posts on HN but never gave the thing a look. It is really different from anything I have looked at. For example, with functional languages I never got past Scheme's ' symbol, but at least I get most of the syntax. FORTH really is another level.
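For flavor, here is roughly what a session at the OpenBoot `ok` prompt looks like. Forth is postfix, so arguments go on an implicit stack before the word that consumes them; this is a representative sketch from memory, not output from any particular machine:

```
ok printenv boot-device        \ inspect an NVRAM variable
ok setenv auto-boot? false     \ change one with a Forth-style word
ok show-devs                   \ walk the device tree
ok 3 4 + .                     \ plain Forth: push 3, push 4, add, print
7
ok boot net                    \ netboot, e.g. for a diskless JavaStation
```

Because the firmware is a full Forth interpreter, you can define new words at the prompt and poke at device registers interactively, which is exactly the "interrogate the device tree and do whatever you want" capability the article mentions.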

luke8086

Bootloader developers used to be particularly fond of Forth.

For example, for many years FreeBSD's 3rd-stage loader used FICL (Forth Inspired Command Language) for scripting [1]. It's still supported, although in recent years it was deprecated in favor of Lua [2].

[1] https://github.com/freebsd/freebsd-src/tree/main/stand/forth

[2] https://github.com/freebsd/freebsd-src/tree/main/stand/lua

markus_zhang

Interesting. I know that embedded developers used it a lot too back then, for satellites. Not sure how popular it is now.

eschaton

This was the basis for IEEE-1275 Open Firmware, which nobody uses any more but was the standard for SPARC, PowerPC, and post-PowerPC POWER. It’s where device trees came from and frankly is how everything should be booting these days, not u-boot or UEFI or custom secure boot chains (which are entirely possible with Open Firmware too).

There are BSD, GPL, and other Open Source variants of Open Firmware you can get and fool around with today and if you’re building a new product you should still consider whether an Open Firmware would work for you versus one of its inferior successors.

DonHopkins

https://news.ycombinator.com/item?id=33681531

I've frequently written about Mitch Bradley's Forthmacs / Sun Forth / CForth / OpenBoot / OpenFirmware on HN. I was his summer intern at Sun in 1987, and used his Forth systems in many projects!

[...]

https://news.ycombinator.com/item?id=29261810

Speaking of Forth experts -- there's Mitch Bradley, who created OpenFirmware:

[...]

Here's the interview with Mitch Bradley saved on archive.org:

https://web.archive.org/web/20120118132847/http://howsoftwar...

I've previously posted some stuff about Mitch Bradley -- I have used various versions of his ForthMacs / CForth / OpenFirmware systems, and I was his summer intern at Sun in '87!

Mitch is an EXTREMELY productive FORTH programmer! He explains that FORTH is a "Glass Box": you just have to memorize its relatively simple set of standard words, and then you can have a complete understanding and full visibility into exactly how every part of the system works: there is no mysterious "magic", you can grok and extend every part of the system all the way down to the metal. It's especially nice when you have a good decompiler / dissassembler ("SEE") like ForthMacs, CForth, and OpenFirmware do.

https://news.ycombinator.com/item?id=9271644

[...]

https://news.ycombinator.com/item?id=38689282

Mitch Bradley came up with a nice way to refactor the Forth compiler/interpreter and control structures, so that you could use them immediately at top level! Traditional FORTHs only let you use IF, DO, WHILE, etc in : definitions, but they work fine at top level in Mitch's Forths (including CForth and Open Firmware).

[...]

https://github.com/MitchBradley/openfirmware

https://github.com/MitchBradley/cforth

markus_zhang

Thanks man, this is a treasure trove. Diving straight into it when on bus.

DonHopkins

I love to occasionally just recreationally read over the OpenFirmware source code as fine literature, especially the kernel and metacompiler, since it's just such elegant beautifully polished and refined code, the results of so many decades of meticulous work on so many platforms and devices.

metacompile.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...

kernel.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...

arm64: https://github.com/MitchBradley/openfirmware/tree/master/cpu...

emacs: https://github.com/MitchBradley/openfirmware/tree/master/cli...

olpc: https://github.com/MitchBradley/openfirmware/tree/master/dev...

video: https://github.com/MitchBradley/openfirmware/tree/master/dev...

amd7990: https://github.com/MitchBradley/openfirmware/tree/master/dev...

pci: https://github.com/MitchBradley/openfirmware/tree/master/dev...

fcode: https://github.com/MitchBradley/openfirmware/tree/master/ofw...

gui: https://github.com/MitchBradley/openfirmware/tree/master/ofw...

inet: https://github.com/MitchBradley/openfirmware/tree/master/ofw...

Forth is really a transparent "glass box" where you can see through and understand it all from top to bottom, and OpenFirmware includes a museum of drivers and modules and extensions for everywhere it's ever been and all of its missions, like Superman's Crystal Fortress of Solitude!

https://en.wikipedia.org/wiki/Fortress_of_Solitude

>The Fortress contained an alien zoo, a giant steel diary in which Superman wrote his memoirs (using either his invulnerable finger, twin hand touch pads that record thoughts instantly, or heat vision to engrave entries into its pages), a chess-playing robot, specialized exercise equipment, a laboratory where Superman worked on various projects such as developing defenses to kryptonite, a room-sized computer, communications equipment, and rooms dedicated to all of his friends, including one for Clark Kent to fool visitors. As the stories continued, it was revealed that the Fortress was where Superman's robot duplicates were stored. It also contained the Phantom Zone projector, various pieces of alien technology he had acquired on visits to other worlds, and, much like the Batcave, trophies of his past adventures. Indeed, the Batcave and Batman himself made an appearance in the first Fortress story. The Fortress also became the home of the bottle city of Kandor (until it was enlarged), and an apartment in the Fortress was set aside for Supergirl.

yjftsjthsd-h

Odd that it uses RARP to get an IP but then uses DHCP for NFS configuration. (Or is it the baked in firmware using RARP and then the modern NetBSD kernel using DHCP? That would make more sense)

Also:

> You need to rename the file with a specific format: the IP address of the JavaStation, but in 8 capitalized hex digits, followed by a dot, and then the architecture (in this case “SUN4M”). So, in this example the IP address (as defined in rarpd above) is 192.168.128.45, which in hex is C0A8802D.

This is of course the correct way to do it, but if you're lazy you can just tail the tftpd logs and see what filename it tries to download, rename the file on the server, and reboot again to pick it up. (I did this when netbooting raspberry pis)
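The hex conversion above is just the four octets of the address rendered as eight uppercase hex digits. A small Python sketch (the helper name is mine, not from the article):

```python
# Compute the TFTP boot filename a sun4m machine requests:
# the client's IPv4 address as 8 uppercase hex digits, plus "." and the arch.
import ipaddress

def sun_boot_filename(ip: str, arch: str = "SUN4M") -> str:
    # IPv4Address converts dotted-quad notation to a 32-bit integer;
    # %08X renders it as exactly eight uppercase hex digits.
    return "%08X.%s" % (int(ipaddress.IPv4Address(ip)), arch)

print(sun_boot_filename("192.168.128.45"))  # -> C0A8802D.SUN4M
```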

toast0

> Or is it the baked in firmware using RARP and then the modern NetBSD kernel using DHCP? That would make more sense

Yes, firmware only knows how to use rarp and tftp to fetch a kernel or a better bootloader, kernel is modern and speaks DHCP. This is a pretty common pattern with netbooting; some will bootp rather than rarp, sometimes you use tftp to fetch something that can do an http fetch, etc. Always lots of fun :D
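As a concrete (hypothetical) illustration of that chainloading pattern for machines whose firmware speaks BOOTP/DHCP rather than RARP — a JavaStation's RARP-only PROM wouldn't use this — a dnsmasq config might look roughly like the following; the filenames and addresses are invented for illustration:

```
# Serve addresses and act as the TFTP server
dhcp-range=192.168.128.100,192.168.128.200
enable-tftp
tftp-root=/srv/tftp

# First stage: firmware fetches a smarter loader (iPXE here) over TFTP
dhcp-boot=undionly.kpxe

# Second stage: iPXE identifies itself via DHCP option 175 and is
# handed an HTTP URL for the real kernel instead
dhcp-match=set:ipxe,175
dhcp-boot=tag:ipxe,http://192.168.128.1/netbsd-kernel
```

The two `dhcp-boot` lines are the whole trick: the dumb client and the smart client ask the same server the same question and get different answers.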

eb0la

A lot of old hardware uses TFTP and RARP to boot. RARP just gets you the IP address, and the rest is hardcoded somehow in the machine — it needs very little memory at boot. For BOOTP you need some intelligence to know where your files are. TFTP is also cheap in memory to use: UDP with no flow control, nothing fancy. Just send me the next packet in sequence when I ask you to.

I remember having trouble some years ago upgrading old Cisco routers because the image was bigger than what TFTP can handle.
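Both points — how little state the protocol demands, and where that classic size ceiling comes from — can be sketched in a few lines of Python (packet layout per RFC 1350; the helper name is mine):

```python
# A TFTP read request (RRQ) is one tiny UDP datagram: a 2-byte opcode,
# then NUL-terminated filename and transfer mode. The transfer itself is
# just "send block N, ack block N, repeat".
import struct

def rrq(filename: str, mode: str = "octet") -> bytes:
    # opcode 1 = RRQ
    return struct.pack("!H", 1) + filename.encode() + b"\0" + mode.encode() + b"\0"

print(rrq("C0A8802D.SUN4M"))

# The size ceiling: a 16-bit block number and 512-byte default blocks cap
# a transfer just under 32 MiB, unless the server supports block-number
# rollover or the RFC 2348 blksize option.
print(0xFFFF * 512)  # 33553920 bytes, ~32 MiB
```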

torcete

I remember doing this when I was working for Sun Microsystems. We had to install Solaris on a quite large number of Sun computers for a big client, and we did all of them with tftp.

bayindirh

Big fleets are still installed with TFTP + HTTP/FTP.

torcete

I had no idea. Interesting and cool at the same time!

bayindirh

It's very cool. Getting a couple racks of new servers and installing all of them from your desk without any interaction is very enjoyable.

I also love installing/cabling servers, but not needing to leave your desk to (re)provision hardware is pretty life-changing, especially considering your desk can be anywhere in the world due to work travel.

DonHopkins

Are you the poor Unix system administrator at Sun with the Worst Job in the World, who had to install Solaris on Scott McNealy's and Ed Zander's and other VP's workstations?

The Worst Job in the World, from Michael Tiemann <tiemann@cygnus.com>:

https://www.donhopkins.com/home/catalog/unix-haters/slowlari...

PS: Fuck Trump supporting anti-vaxer Scott "You have zero privacy, get over it" McNealy. May he run Solaris in hell. If you installed it on him, then good for you, he deserved it!

Scott McNealy has long been one of Trump’s few friends in Silicon Valley:

https://www.sfchronicle.com/politics/article/Scott-McNealy-h...

Former Sun Micro CEO Scott McNealy, known for his provocative quotes, says Trump is doing a 'spectacular job' amid the coronavirus crisis. That's not how many tech experts see it:

https://www.businessinsider.com/scott-mcnealy-praises-trumps...

Sun on Privacy: "Get Over It":

https://www.wired.com/1999/01/sun-on-privacy-get-over-it/

neilv

> Hard as it may be to imagine, there was a time when Java was brand new and exciting. Long before it became the vast clunky back-end leviathan it is today, it was going to be the ubiquitous graphical platform that would be used on everything from cell phones to supercomputers: write once, run anywhere.

> Initially I drank the kool-aid and was thrilled about this new “modern” language that was going to take over the world, and drooled at the notion of Java-based computers, containing Java chips that could run java byte-code as their native machine code.

Exactly. I was lucky to see Java when it was still called Oak, and then I developed some of the first (non-animation) Java applets and small desktop applications outside of Sun/JavaSoft. It was very exciting (speaking as a programmer in C, C++, Smalltalk, a little Self, a little Lisp, and other languages at the time). The language itself wasn't as cool as Lisp or Smalltalk, but it was a nice halfway compromise from C++, with some of its own less exotic but nice features and ergonomics. It was already in the browsers, it had next-gen embedded systems for the Internet at the forefront from the beginning, there was a proof-of-concept of a better kind of Web browser using it, Sun was even putting it in rings for ubiquitous computing, there were thin clients that could get interesting (combined with Sun's "The Network Is The Computer", even if historically techies didn't like underpowered diskless workstations, except to give to non-techies), etc., and it only promised to get better...

Then I turned my back for a sec., and the next time I looked, Java had been kicked out of the browser, and most all of the energy (except for the Android gambit) seemed to be focused on pitching Java for corporate internal software development. And suddenly no one else seemed to want to touch it, even if there wasn't much better. (Python, for example, from the same era, was one person's simplified end user extension language; and not intended for application development.)

Yet another case of technology adoption not going how you'd initially think it would.

markus_zhang

I'm curious about what happened. IIRC, as you mentioned too in your reply, that Java was supposed to run in embedded devices. It was supposed to be lean and fast. But I can't imagine the modern Java doing that...

ciberado

Version 1.0 ran quite smoothly once we upgraded the machines from 4MB to 8MB of RAM (seriously!). But, of course, at that time 8MB was much more memory than the early smartish phones carried, so their Java version was heavily stripped down and almost good-for-nothing.

On the PC front, James Gosling and company were an amazing team, but they pushed for very academic and cumbersome patterns that converged in the EJB architecture. Nobody in their sane mind would fall in love with that.

Two or three years later, the internet bubble fallout affected every technology.

bzzzt

EJB was not invented by Gosling but adopted from IBM. It combined over-engineered concepts from the mainframe world with objects and too much XML configuration.

Nowadays we've got Kubernetes with YAML for that.

bzzzt

It's not Java, it's the programmer. There are lots of non-hacker types churning out inefficient code using inefficient abstractions. There are also people using Java for high-frequency trading applications with realtime performance needs.

hiAndrewQuinn

It's true! If memory serves from the Jane Street podcast, the literal NYSE ran for years on a single-threaded Java application. I still struggle to wrap my head around the wizardry that kind of thing must have required.

Foobar8568

You had different JVMs; some could run on smart cards: https://en.m.wikipedia.org/wiki/Java_Card

There were also realtime JVMs with different latency promises.

Basically you had different versions of the JVM, each optimized for its use case. I guess when Sun was bought by Oracle, everything died.

pjmlp

> There are also java realtime JVM with different latency promises.

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/wp/products-services/jamaicavm/

Additionally, there are plenty of others around.

And then there are flavours, like microEJ, Android, what Ricoh, Xerox ship on their copiers, BlueRay,....

Folks' hate for Oracle makes them forget that the Java push into the industry was not Sun alone, but rather the Sun, Oracle, IBM trio.

Oracle has embraced the technology since the early Java days, and even had its own flavour of the JavaStation, called the Network Computer.

Oracle has been a better Java steward than the alternative, which was Java dying at version 6, losing Maxine (whose ideas live on in GraalVM), ...

No one else jumped to acquire Sun, and Google missed their opportunity to own Java, after torpedoing Sun.

Nowadays Google has their own .NET.

layer8

Java Card is still very much a thing.

toast0

I mean, it runs on phones and Blu-Ray players. Of course, our phones now need 4 gb of ram so they don't have to swap out their launchers...

pjmlp

Where I stand, outside HN circles, I see Java all over the place, including embedded.

And frankly Electron is a much worse experience than Swing apps, but there is nothing like helping Chrome taking over Web and desktop as the platform to rule them all. /s

sgt

Was not a big fan of Swing, but sure, most things beat Electron because it doesn't quite feel "right". There is definitely some desktop experience being lost, aside from it being a memory and CPU hog.

jen20

The only reasonable “feeling” Java app I’m aware of on the desktop is IntelliJ (and derivatives) - but AFAIK JetBrains have had to fork almost every part of the ecosystem to make that a reality.

dehrmann

> Java chips that could run java byte-code as their native machine code.

Hah! Even ISAs are somewhat detached from truly native machine code, these days.