
LINUX is obsolete (1992)

187 comments · February 8, 2025

ryao

  While I could go into a long story here about the relative merits of the
  two designs, suffice it to say that among the people who actually design
  operating systems, the debate is essentially over. Microkernels have won.
The developers of BSD UNIX, SunOS, and many others would disagree. Also, the then-upcoming Windows NT was a hybrid kernel design: while it has an executive layered on a "microkernel", all of the traditional kernel machinery outside that "microkernel" runs in kernel mode too, so it is really a monolithic kernel with module loading.

While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS. The reality is that Mach 3.0 was simply slow, performance-wise, much like NT would have been had they made it into an actual microkernel.

In the present day, the only place where microkernels are common is embedded applications; but embedded systems often don't even have operating systems, and more traditional operating systems (e.g. NuttX) are present there too.

lizknope

> While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS.

The original Tanenbaum post is dated Jan 29, 1992.

NeXTSTEP 0.8 was released in Oct 1988.

https://en.wikipedia.org/wiki/NeXTSTEP#Release_history

Mach 3.0 was not the conversion into a monolithic kernel; it was the version in which Mach finally became a microkernel. Until that point, the BSD Unix part ran in kernel space.

https://en.wikipedia.org/wiki/Mach_(kernel)

NeXTSTEP was based on this pre-Mach 3.0 architecture, so it would never have met Tanenbaum's definition of a true microkernel.

> Mach received a major boost in visibility when the Open Software Foundation (OSF) announced they would be hosting future versions of OSF/1 on Mach 2.5, and were investigating Mach 3 as well. Mach 2.5 was also selected for the NeXTSTEP system and a number of commercial multiprocessor vendors.

OSF/1 was used by DEC, who rebranded it Digital Unix and later Tru64 Unix.

After NeXT was acquired by Apple they updated a lot of the OS.

https://en.wikipedia.org/wiki/XNU#Mach

> The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3.[3] OSFMK 7.3 is a microkernel[6] that includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel.

> The BSD code present in XNU has been most recently synchronised with that from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project as of 2009

Back in the late 2000s, Apple hired some FreeBSD people to work on OS X.

Before Apple bought NeXT, they were working with OSF on MkLinux, which ported Linux to run on top of the Mach 3.0 microkernel.

https://en.wikipedia.org/wiki/MkLinux

> MkLinux is the first official attempt by Apple to support a free and open-source software project.[2] The work done with the Mach 3.0 kernel in MkLinux is said to have been extremely helpful in the initial porting of NeXTSTEP to the Macintosh hardware platform, which would later become macOS.

> OS X is based on the Mach 3.0 microkernel, designed by Carnegie Mellon University, and later adapted to the Power Macintosh by Apple and the Open Software Foundation Research Institute (now part of Silicomp). This was known as osfmk, and was part of MkLinux (http://www.mklinux.org). Later, this and code from OSF’s commercial development efforts were incorporated into Darwin’s kernel. Throughout this evolutionary process, the Mach APIs used in OS X diverged in many ways from the original CMU Mach 3 APIs. You may find older versions of the Mach source code interesting, both to satisfy historical curiosity and to avoid remaking mistakes made in earlier implementations.

So modern OS X is a mix of code from multiple versions of Mach and BSD, running as a hybrid kernel because, as you said, Mach 3.0 in true microkernel mode is slow.

ryao

I had forgotten that NeXTSTEP went back that far. Thanks for the correction.

ww520

Back when I first read that statement, it immediately lost credibility with me. The argument was basically an appeal to authority. It put Tanenbaum on the "villain" side in my mind: someone willing to use his position of authority to win an argument rather than win on the merits. The subsequent string of microkernel failures proved the point. The moment Microsoft moved the graphics subsystem from user mode into kernel mode to mitigate performance problems was the death of the microkernel in Windows NT.

pjmlp

Meanwhile they moved it back into userspace by Windows Vista, and nowadays many kernel subsystems run sandboxed by Hyper-V.

One of the reasons for the Windows 11 hardware requirements is that nowadays Windows always runs as a guest OS.

betaby

> kernel subsystems run sandboxed by Hyper-V

What subsystems? Is there documentation outlining that?

> nowadays Windows always runs as a guest OS

What is the hypervisor in that case?

johnisgood

If the GPU driver crashes on Windows, the kernel still doesn't crash, right? It kept working fine on Windows 7.

ryao

It depends on what part of the GPU driver crashes. The kernel mode part crashing caused countless BSODs on Vista. The switch to WDDM was painful for Windows. Things had improved by the time Windows 7 was made.

fmajid

Not just that, but between NT 3.51 and 4.0 many drivers, like graphics, were moved to ring 0, trading performance for robustness.

ryao

Do you mean robustness for performance?

throw16180339

IIRC, didn't they move drivers back to userspace in Windows Vista or Windows 7?

stevemk14ebr

No, drivers are kernel modules.

tonyedgecombe

Yes, I remember blue-screening a customer's server just by opening a print queue.

Numerlor

Are graphics drivers specially handled by the kernel, given how recoverable they are from crashes?

yndoendo

Windows' standard file locking prevents a number of useful user experiences that OSes like Linux and BSD provide. Mainly, they can update files while those files are open / in use.

Windows needs to stop or restart a service to apply updates in real time. Ever watched the screen flash while Windows is updating? That is the graphics stack restarting. This is more noticeable on slower dual- and quad-core CPU systems. Microsoft needed to do this to work around how they handle files.

Windows even wired HID event processing into the OS to verify that the display manager is running. If the screen ever goes black during updates, just plug in a keyboard and press a key to restart it.

* There are ways to prevent a file lock when opening a file in Windows, but it is not standard and is rarely used by applications, even ones written by Microsoft.
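
For illustration, here is a minimal sketch of such a non-locking open, assuming the Win32 CreateFileW API (the path below is hypothetical):

    /* Minimal sketch: open a file with all three share flags so other
     * processes can read, write, or even delete/rename it while this
     * handle is open. The path is hypothetical. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileW(
            L"C:\\temp\\example.log",   /* hypothetical path */
            GENERIC_READ,
            /* Most opens omit at least FILE_SHARE_DELETE, which is why
             * an in-use file cannot be replaced during an update. */
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        /* ... read while other processes update or replace the file ... */
        CloseHandle(h);
        return 0;
    }

Passing 0 for the share mode denies all sharing, and even permissive opens usually omit FILE_SHARE_DELETE, which is the source of the locking behavior described above.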

p_l

Vista moved graphics mostly out of the kernel, even if part of GDI was still handled internally; essentially, the driver model changed heavily and enabled restartable drivers, and by Windows 7, IIRC, the new model was mandatory.

In "classic" Windows and NT 4.0 - 5.2, GDI would draw directly into VRAM, possibly calling driver-specific acceleration routines. This is how the infamous "ghosting" issues happened when parts of the system hung.

With the new model in Vista and later, GDI was directed at a separate surface that was then used as a texture and composited onto the screen. Some fast paths to bypass that were still available, mainly for full-screen apps, and were improved over time.

pjmlp

All Intel CPUs have Minix 3 powering their management engine.

Modern Windows 11 is even more hybrid than Windows NT was planned to be, with many key subsystems running in their own sandboxes managed by Hyper-V.

ryao

The management engine is an embedded processor.

InTheArena

This is the thread that I read in high school that made me fall in love with software architecture. This was primarily because Tanenbaum’s position was so obviously correct, yet it was also clear to all that Linux was going to roll everyone, even at that early stage.

I still hand this out to younger software engineers to understand the true principle of architecture. I have a printout of it next to my book on how this great new operating system and SDK from Taligent was meant to be coded.

mrs6969

But why did Linux win? We know now that it won, but what is the reason? Tanenbaum was theoretically correct. If HN had existed back then, I would argue most devs here would have said that Minix would last longer, that monolithic kernels were an old idea that had been tried and tested, etc.

Same question for the iPhone. There are links on HN where people say the iPhone is dead because it does not support Flash. But it didn't die. Why not?

Is performance really the only key factor when it comes to software design?

cross

Linux won in large part because it was in the right place at the right time: freely available, rapidly improving in functionality and utility, and it ran on hardware people had access to at home.

BSD was mired in legal issues, the commercial Unix vendors were by and large determined to stay proprietary (only Solaris made a go of this and by that time it was years too late), and things like Hurd were bogged down in second-system perfectionism.

Had Linux started, say, 9 months later, BSD might have won instead. Had Larry McVoy's "sourceware" proposal for SunOS been adopted by Sun, perhaps that would have won out. Of course, all of this is impossible to predict. But by the time BSD (for example) was out of the lawsuit woods, Linux had gained a foothold and the adoption gap was impossible to overcome.

At the end of the day, I think technical details had very little to do with it.

NikkiA

In early 1992 I emailed BSDI asking about the possibility of buying a copy of BSD/386 as a student; the $1000 they wanted was a little too high for me. I got an email back pointing me at an 'upstart OS' called Linux that would probably suit a CS student more, and was completely free. I think it was 0.13 that I downloaded that week; it got renamed 0.95 a few weeks later. There was no X (I think 0.99pl6 was the first time I ran X on it, from a Yggdrasil disc in August 1992), but it was freedom from MSDOS.

Ironically, 386BSD would have been brewing at the same time with a roughly similar status.

pjmlp

And most commercial UNIXes would still be around, taking as they please out of BSD systems.

lizknope

Linux was free. You can see in the thread that Linus points out that Tanenbaum charged for Minix.

I started running Linux in October 1994.

One of the main reasons I chose Linux over Free/NetBSD was the hardware support. Linux supported tons of cheap PC hardware and had bug workarounds very quickly.

I had this IDE chip and Linux got a workaround quickly. The FreeBSD people told me to stop using cheap hardware and buy a SCSI card, SCSI hard drive, and SCSI CD-ROM. That would have been another $800 and I was a broke college student.

https://en.wikipedia.org/wiki/CMD640

Linux even supported the $10 software based "WinModem" I got for free.

drewg123

I started running Linux in 1992 or so. I converted to FreeBSD right around the time you were starting with Linux, because I had the opposite experience:

I was a new *nix sysadmin, and I needed good NFS performance (replacing DEC ULTRIX workstations in an academic dept with PCs running some kind of *nix). I attended the 1994 Boston USENIX and spoke to Linus at the Linux BOF, where he basically told me to pound sand. He said NFS performance was not important. So I went down the hall to the FreeBSD BOF, where they assured me NFS would work as well in FreeBSD as it did in ULTRIX, and they were right.

I've been a FreeBSD user for over 30 years now, and a src committer for roughly 25 years. I often wonder about the alternate universe in which I was able to convince Linus of the need for good NFS performance...

ryao

  But why did Linux win? We know now that it won, but what is the reason? Tanenbaum was theoretically correct. If HN had existed back then, I would argue most devs here would have said that Minix would last longer, that monolithic kernels were an old idea that had been tried and tested, etc.
In every situation where a microkernel is used, a monolithic version would run faster.

  Is performance really the only key factor when it comes to software design?
Usually people want software to do two things. The first is to do what it is expected to do. The second is to do it as fast as possible.

It seems to me that the microkernel idea came from observing that virtual memory protection made life easier, and then asking "what would life be like if we applied virtual memory protection to as much of the kernel as we can?" Interestingly, the original email thread shows that they even tried doing microkernels without virtual memory protection in the name of portability, even though there was no real benefit to the idea without it: everything can write to each other's memory anyway, so there is no point.
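
To make the cost argument concrete, here is a rough microbenchmark sketch (my own illustration, not from the thread): it contrasts an in-process function call with a one-byte round trip to another process over a socketpair, a crude stand-in for microkernel message passing.

    /* Sketch: compare a plain function call ("monolithic" path) with a
     * round trip to a toy server process ("microkernel" path). */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static int serve(int x) { return x + 1; }   /* in-kernel "service" */

    int main(void)
    {
        int sv[2];
        char buf[1];
        volatile int sink = 0;   /* volatile keeps the loop from being elided */

        double t0 = now();
        for (int i = 0; i < ROUNDS; i++)
            sink += serve(i);
        printf("function call:  %.0f ns each\n", (now() - t0) / ROUNDS * 1e9);

        /* "Microkernel" path: bounce a byte off a separate process. */
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        if (fork() == 0) {
            close(sv[0]);
            while (read(sv[1], buf, 1) == 1)   /* echo until EOF */
                write(sv[1], buf, 1);
            _exit(0);
        }
        close(sv[1]);

        t0 = now();
        for (int i = 0; i < ROUNDS; i++) {
            write(sv[0], "x", 1);
            read(sv[0], buf, 1);
        }
        printf("IPC round trip: %.0f ns each\n", (now() - t0) / ROUNDS * 1e9);

        close(sv[0]);   /* server sees EOF and exits */
        wait(NULL);
        return 0;
    }

On typical hardware the round trip lands orders of magnitude above the plain call; that gap is the overhead this whole thread is arguing about.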

  Same question for the iPhone. There are links on HN where people say the iPhone is dead because it does not support Flash. But it didn't die. Why not?
Flash was a security nightmare. Not supporting it was a feature.

thfuran

> Usually people want software to do two things. The first is to do what it is expected to do. The second is to do it as fast as possible.

If performance is second, it seems to be a very, very distant second for most uses. So much software is just absurdly slow.

WhyNotHugo

Monolithic systems are faster to design and implement. Systems with decoupled components require more time to design, implement and iterate. A lot more time.

This doesn't just apply to kernels. It applies to anything in software; writing a huge monolith of intertwined code is always going to be faster than writing separate components with clear API boundaries and responsibilities.

Of course, monolithic software ends up being less safe, less reliable, and often messier to maintain. But decoupled design (or micro-kernels) can take SO MUCH longer to develop and implement that by the time it's close to being ready, the monolithic implementation has become ubiquitous.

n4r9

Torvalds points out in the linked thread that Linux was already freely available and that it was easier to port stuff to it. Convenience often wins over technical superiority when it comes to personal use.

saati

> Tanenbaum was theoraticaly correct.

He was only correct in a world where programmer and CPU time are free and infinite.

IX-103

I understand CPU time, as micro-kernels tend to be less efficient, but why do you include programmer time?

My understanding is that it's easier to develop drivers for a micro-kernel. If you look at FUSE (filesystem in user space) and NUSE (network stack in user space), as well as the work on user-space graphics drivers, you see that developers are able to implement a working driver more rapidly, and to solve more complicated problems, in user space than in kernel space. These essentially treat Linux as a micro-kernel by moving driver code out of the kernel.
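
As a concrete taste of that workflow, here is a minimal sketch of a userspace filesystem, assuming libfuse 3 (the file name and its message are hypothetical, and directory listing is omitted for brevity):

    /* Sketch: a read-only filesystem served entirely from user space.
     * Build: gcc hello_fs.c -o hello_fs $(pkg-config fuse3 --cflags --libs)
     * Run:   ./hello_fs /mnt/point && cat /mnt/point/hello */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *msg = "hello from user space\n";

    /* Describe a root directory plus one read-only file. */
    static int hello_getattr(const char *path, struct stat *st,
                             struct fuse_file_info *fi)
    {
        (void) fi;
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, "/hello") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = (off_t) strlen(msg);
            return 0;
        }
        return -ENOENT;
    }

    /* Serve reads out of the in-memory buffer. */
    static int hello_read(const char *path, char *buf, size_t size,
                          off_t off, struct fuse_file_info *fi)
    {
        (void) fi;
        size_t len = strlen(msg);
        if (strcmp(path, "/hello") != 0) return -ENOENT;
        if ((size_t) off >= len) return 0;
        if (off + size > len) size = len - (size_t) off;
        memcpy(buf, msg + off, size);
        return (int) size;
    }

    static const struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* The kernel's VFS forwards calls to this process via /dev/fuse. */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

A crash or upgrade of this process never takes the kernel down; the mount just disappears, which is exactly the isolation argument made for microkernels.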

marcosdumay

Care to elaborate on how working in a microkernel instead of a monolithic one wastes programmer time? Because AFAIK, all the evidence we have points in the exact opposite direction.

Also, microkernels only waste CPU time because modern CPUs go to great lengths to punish them, for no comparable gain for monolithic kernels, apparently because that is the design they have always been optimized for.


lizknope

I went to an advanced high school that had Internet access. We had multiple Sun 3/4 and IBM AIX systems. I really wanted a Unix system of my own, but they were so expensive. The students who had graduated a year ahead of me and started college began emailing me about this cool new thing called Linux. Just reading about it was exciting, even though I didn't even own a PC to install it on. I saved up all my money in 1994 to buy a PC just to run Linux.

abetusk

I've heard of this debate but haven't heard an argument of adoption from a FOSS perspective. From Wikipedia on Minix [0]:

> MINIX was initially proprietary source-available, but was relicensed under the BSD 3-Clause to become free and open-source in 2000.

That is a full eight years after this post.

Also from Wikipedia on Linux becoming FOSS [1]:

> He [Linus Torvalds] first announced this decision in the release notes of version 0.12. In the middle of December 1992 he published version 0.99 using the GNU GPL.

So this post was essentially right at the crossroads of Linux going from a custom license to FOSS, while MINIX would remain proprietary for another eight years, presumably long after it had lost to Linux.

I do wonder how much of an effect, subtle or otherwise, the licensing had in helping or hindering adoption of either.

[0] https://en.wikipedia.org/wiki/Minix

[1] https://en.wikipedia.org/wiki/History_of_Linux

otherme123

I installed my first Linux in 1996. It came on a CD with a computer magazine: a free OS. That was huge, for me at least. Said CDs were otherwise filled with shareware like WinZip that you had to buy or crack to use at 100%. Meanwhile there was this thing called Linux, for free, that included a web server, FTP, a firewall, a free C compiler, that thing called LaTeX that produced beautiful documents... The only thing it required from you was to sacrifice a bit of comfort in the UI, and a bit of extra effort to get better results.

I didn't hear about Minix until maybe the mid-2000s, and it was like an old legend of an allegedly better-than-Linux OS that failed because people are dumb.

abetusk

1997-1998 was about when I first installed Linux (Slackware), from a stack of 3.5" floppy disks. By then, Linux had picked up enough momentum, which is why, I guess, you and I both had access to CD/floppy installation methods.

The folklore around the Linux/Minix debate, for me, was that "working code wins": either the microkernel wasn't as beneficial as claimed, or, through grit and elbow grease, Linux pushed through to viability. But now I wonder how true that narrative is.

Could it have been that FOSS provided the boost in network effects for Linux that compounded its popularity and helped it soar past Minix? Was Minix hampered by Tanenbaum gatekeeping the software and refusing to cede copyright control?

To me, the licensing seems pretty important: even if the boost to adoption was small, it could have had compounding network effects that helped with popularity. I just never heard this argument before, so I wonder how true it is, if at all.

lproven

> The folklore around the Linux/Minix debate, for me, was that "working code wins" and either microkernel wasn't as beneficial as was claimed

Hang on. That does not work.

You need to be careful about the timeline here.

Linus worked with and built the very early versions of the Linux kernel on Minix 1.

Minix 1 was not a microkernel, directly supported only the 8088 and 8086 (and other architectures, but the point here is that it did not target the 80286 or 80386, so no hardware memory management), and it was not FOSS.

Minix 2 arrived in 1997, was FOSS, and supported the 80386, i.e. x86-32.

Minix 3 was the first microkernel version and was not released until 2005.

You are comparing original early-era Linux with a totally different version of Minix that didn't exist yet and wouldn't for well over a decade.

In the early 1990s, the comparison was:

Minix 1: source available, but not redistributable; 16-bit only, max 1MB of RAM, no hardware memory protection, and very limited.

Linux 0.x to 1.x: FOSS, 32-bit, and fully exploited 32-bit PCs: 4GB of RAM if you could afford it, but usable with the 4MB - 8MB that normal, non-millionaire people had.

LeFantome

It is not at all subtle. If Minix had been free, Linus might never have written Linux at all. It cost $50 (as I recall), and Linus hated that.

The first Linux license said you could not charge for Linux. As it grew in popularity, people wanted to be able to charge for media (to cover their costs). So Linus switched to the GPL, which kept the code free but allowed charging for distribution.

kazinator

Academically, Linux is obsolete. You couldn't publish a paper on most of it; it wouldn't be original. Economically, commercially and socially, it isn't.

Toasters are also obsolete, academically. You couldn't publish a paper about toasters, yet millions of people put bread into toasters every morning. Toasters are not obsolete commercially, economically or socially. The average kid born today will know what a toaster is by the time they are two, even if they don't have one at home.

forinti

My father is a retired physics professor. I tried debating him once about an aqueduct in a town near us that was built in the early 20th century.

His view is that it was moronic because communicating vessels had already been known for centuries.

I tried arguing that maybe they didn't have the materials (pipes), or maybe dealing with obstructions would have been difficult, etc. After all, this was a remote location at that time.

I think that the person who built it probably didn't know about communicating vessels but that it is also true that the aqueduct was the best solution for the time and place.

Anyway, debating academics about practical considerations is hard.

JodieBenitez

> Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do real well on the final exam, you can still pass the course.

what a way to argue...

otherme123

It's the appeal to authority, barely disguised. It works wonders on students. Luckily Linus didn't fall for it.

snovymgodym

Yeah, these lines from Tanenbaum stuck out to me as well. To be fair, this response only comes after Linus delivers a pretty rude rebuttal to Tanenbaum's initial points, which were somewhat arrogant but civilly stated.

In the grand scheme of things, the whole thread is still pretty tame for a usenet argument and largely amounts to two intelligent people talking past each other with some riffing and dunking on each other mixed in.

Makes me come back and appreciate the discussion guidelines we have on this site.

intelVISA

The debate cemented Tanenbaum as a smug clown in my mind back in '92. His poor students!

lproven

Nah. He was right then and he's right now.

You need to understand the theory and the design if you want to design something that will last for generations without becoming a massive pain to maintain.

Linux now is a massive pain to maintain, but loads of multi-billion-dollar companies are propping it up.

If something only keeps working because thousands of people are paid to labour night and day to keep it working via hundreds of MB of patches a day, that is not a demo of good design.

mhandley

There's an element of "Worse is Better" in this debate, as in many real-world systems debates. The original worse-is-better essay even predates the Linux vs Minix debate:

https://dreamsongs.com/RiseOfWorseIsBetter.html

Gabriel was right in 1989, and he's right today, though sometimes the deciding factor is performance (e.g. vs security) rather than implementation simplicity.

wongarsu

Another big factor is conceptual simplicity, rather than implementation simplicity. Linux is conceptually simple; you can get a good mental model of what it's doing with fairly little knowledge. There is complexity in the details, but you can learn about that as you go. And because it is "like the Unix kernel, just bigger", there have always been a lot of people able and willing to explain it and carry the knowledge forward.

Windows in comparison has none of that. The design is complex from the start, is poorly understood because most knowledge is from the NT 4.0 era (when MS cared about communicating about their cool new kernel), and the community of people who could explain it to you is a lot smaller.

It's impressive what the NT kernel can do. But most of that is unused, because it was either basically abandoned, meant for very specific enterprise use cases, or is poorly understood by developers. And a feature only gives you an advantage if it's actually used.

pjmlp

Ironically, it actually is, from a 2025 perspective.

Not only do microservices and Kubernetes all over the place diminish whatever gains Linux could offer as a monolithic kernel; the current trend of cloud-based, OS-agnostic language runtimes in serverless (hate the naming) deployments also makes irrelevant whatever sits between the type-2 hypervisor and the language runtime.

So while Linux-based distributions might have taken over the server room as UNIX replacements, that only matters for those still doing full-VM deployments in the style of AWS EC2 instances.

Also one of the few times I agree with Rob Pike,

> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.

> At the risk of contradicting my last answer a little, let me ask you back: Does the kernel matter any more? I don't think it does. They're all the same at some level. I don't care nearly as much as I used to about the what the kernel does; it's so easy to emulate your way back to a familiar state.

-- 2004 interview on Slashdot, https://m.slashdot.org/story/50858

david_draco

and MINIX is the most common operating system, thanks to Intel https://www.networkworld.com/article/964650/minix-the-most-p... :)

firesteelrain

Containers still run on some form of Linux or Windows, so I'm not following your point.

pjmlp

Containers === kernel services on microkernels.

Explicit enough?

Such is the irony of using a monolithic kernel for nothing.

As for Windows, not only has it kept its hybrid approach throughout the years; Windows 10 (optionally) and Windows 11 (enforced) run as guests on Hyper-V, with multiple subsystems sandboxed: DriverGuard, Virtualization-Based Security, Secure Kernel, UMDF.

KAKAN

> As for Windows, not only has it kept its hybrid approach throughout the years; Windows 10 (optionally) and Windows 11 (enforced) run as guests on Hyper-V, with multiple subsystems sandboxed: DriverGuard, Virtualization-Based Security, Secure Kernel, UMDF.

Any source for this? This seems interesting to read about

eduction

> microservices and Kubernetes

So glad we’ve moved past being blinded by computing fads the way Tanenbaum was.

wolrah

> Linus "my first, and hopefully last flamefest" Torvalds

If only he knew...

mrlonglong

Actually, Minix kinda won. Its descendants currently infest billions of Intel processors, living inside the ME.

tredre3

Between smartphones, smart TVs, IoT devices (cameras/doorbells/smart home/sensors/etc.), all modern cars, and servers, we're probably pushing 100 billion Linux devices on this planet.

Intel likely "only" has hundreds of millions of CPUs deployed out there.

mrlonglong

I just checked: over the last 10 years that they've used Minix as their ME operating system, they've sold an average of 50M processors a year.

Ok, I take it back. Linux is the undisputed champion of the world.

hackerbrother

It’s always heralded as a great CS debate, but Tanenbaum’s position seems so obviously silly to me.

Tanenbaum: Microkernels are superior to monolithic kernels.

Torvalds: I agree, so go ahead and write a production microkernel…

bb88

GNU Hurd has been under development since 1990.

14 years ago (2011) this thread happened on reddit:

https://www.reddit.com/r/linux/comments/edl9t/so_whats_the_d...

Meanwhile in 1994 I knew people with working linux systems.

p_l

Hurd failed not because of its microkernel design; in 1994 multiple companies were shipping systems based on the Mach kernel quite successfully.

According to some people I've met who claim to have witnessed things (old AI Lab peeps), the failure started with the initial project management, and when Linux offered an alternative GPLed kernel to use, that was enough to bring the effort even closer to a halt.

RainyDayTmrw

Most famously these days, macOS (formerly known as Mac OS X, to distinguish it from all of the earlier ones) is built on top of Darwin/XNU, which descends from Mach.

pjmlp

As always, don't mix technical issues with human factors.

sedatk

> so go ahead and write a production microkernel

He did, though. Tanenbaum created the most popular production OS in the world, and it's microkernel-based: https://www.networkworld.com/article/964650/minix-the-most-p...

mqus

Is the article really right, though? I imagine far more machines run some form of Linux than run Intel processors. Even if it was true in the past, it has likely shifted even further in Linux's favor.

sedatk

That doesn’t make the article wrong for the time it was published.

johnisgood

https://blog.minix3.org/tag/news/

The last post is from 2016. Any news on the MINIX front?

lproven

AST retired. Nobody's picked up the banner. Damned shame.

https://www.osnews.com/story/136174/minix-is-dead/

Intel had profited tens to hundreds of millions of dollars from Minix 3. Minix replaced ThreadX (also used as the Raspberry Pi firmware) running on ARC RISC cores. Intel had to pay for both.

If Intel reinvested 0.01% of what it saved by taking Minix for free, Minix 3 would be a well-funded community project that could be making real progress.

It already runs much of the NetBSD userland. It needs stable working SMP and multithreading to compete with NetBSD itself. (Setting aside the portability.)

But Intel doesn't need that. And it doesn't need to pay. So it doesn't.

acmj

People often forget that the best way to win a tech debate is to actually do the thing. Once, multiple developers criticized my small program as slow due to misuse of language features. I said: fine, give me a faster implementation. No one replied.

msla

Here's the debate in a single compressed text file.

https://www.ibiblio.org/pub/historic-linux/ftp-archives/suns...

ViktorRay

The realization that in 2058 some people will be reading comments from 2025 Hacker News threads and will feel amused at all the things we were so confidently wrong about.

;)

scarface_74

https://news.ycombinator.com/item?id=32919

I don't think what the iphone supports will matter much in the long run, it's what devices like these nokias that will have the biggest impact on the future of mobile http://www.nokia.com/A4405104

——

No one is going to stop developing in Flash or Java just because it doesn't work on iPhone. Those who wanna cater to the iPhone market will make a "watered down version" of the app. Just the way an m site is developed for mobile browser. Thats it.

——

If another device maker come up with a cheaper phone with a more powerful browser, with support for Java and Flash, things will change. Always, the fittest will survive. Flash and java are necessary evils(if you think they are evil).

——

So it will take 1 (one) must-have application written in Flash or Java to make iPhone buyers look like fools? Sounds okay to me.

——

The computer based market will remain vastly larger than the phone based market. I don't have real numbers off hand, but lets assume 5% of web views are via cellphones

jll29

A self-proclaimed VC (but really just a business angel syndicate gatekeeper with no real money, as I later found out) once told me (in 2005): "Even if it will be possible to use the Internet from one's phone one day, it will be too expensive for ordinary people to use it."

This was already wrong when he said it to me (I was pitching a mobile question answering system developed in 2004), as an ugly HTML cousin called WAP already existed by then. I have never since taken seriously any risk capital investor who did not have their own tech exit.

Sharlin

Uh, as the page says, these were cheap feature phones for emerging markets. In 2007 Nokia had smartphones vastly more capable than the original iPhone. They just didn’t have a large touchscreen.

scarface_74

And the all-knowing pg said that the iPhone would never have more than 5% market share

https://news.ycombinator.com/item?id=33083

I mean it had more space than the Nomad and wireless. What else could he have wanted?

npsomaratna

Back in the '90s, I read a book called the "Zen of Windows 95 Programming." The author started off with (paraphrased) "If you're reading this 25 years in the future, and are having a laugh, here's the state of things in '95"

I did re-read that section again 25 years later...

layer8

Did you have a laugh?

jppope

I am terrified to read my own comments from a year ago... I can't even imagine 25 or 30 years from now.

StefanBatory

I'm afraid to read what I wrote last month; I cringe at the thought of reading my old posts.

daviddever23box

^ this - adaptability is of far greater utility than stubbornness.

lproven

You probably will.

I mean, here's a piece of mine from 25 years ago.

https://archive.org/details/PersonalComputerWorldMagazine/PC...

I stand by that.

But I wrote things for the Register when I started there full-time 3.3 years ago that now I look at with some regret. I'm learning. I'm changing.

We all learn and we all change. That is good. When you stop changing, you are dead.

Don't be worried about changing your mind. Be worried about if you stop doing so.

Karellen

Don't focus on how naive you were then, think about how much you've grown since. Well done!

Imagine if you don't learn anything new in the next 25 years, and all your opinions stay completely stagnant. What a waste of 25 years that will be.

nialse

How about retrospective ranking of comments based on their ability to correctly predict the future? Call it Hacker Old Golds?

_thisdot

Easily available are the AskReddit threads from 2014 asking for predictions about 2024.

Onavo

Fun fact, Reddit only soft deletes your comments. So all those people using Reddit deletion/comment mangling services to protest only deprive their fellow users of their insights. Reddit Inc. can still sell your data.

lizknope

Back around 2003 our director said "This customer wants to put our camera chip in a phone." I thought it was a dumb idea.

I remember that when the first iPhone was released in January 2007, Jobs said all the non-Apple apps would be HTML-based.

I thought it was dumb. Release a development environment and there will be thousands of apps that do stuff they couldn't even think of.

The App Store was started in July 2008.

deadbabe

We’re not that optimistic about the future here.

davidw

Maybe someone will hide a copy of HN in a durable format in a cave and someone will rediscover it one day.

jll29

Last time I checked, parchment was the most durable medium mankind ever used on a regular basis.

I find it an interesting question to ponder what we consider worthwhile retaining for more than 2000 years (from my personal library, perhaps just the Bible, TAOCP, SICP, GEB and Feynman's physics lectures and some Bach organ scores).

EDIT: PS: Among the things "Show HN" has not yet seen is a RasPi based parchment printer...

deadbabe

It would be an interesting project to create an entire archive of books of HN discussions and preserve them for hundreds of years for archivists to explore. I hope they find this comment.

the_cat_kittles

hopefully people have progressed to the point where hn has been completely forgotten

esseph

Are you sure?

The husk of slashdot is still around.
