
CharlotteOS – An Experimental Modern Operating System

embedding-shape

This seems like a better introduction than the kernel repo specifically: https://github.com/charlotte-os/.github/blob/main/profile/RE...

> URIs as namespace paths allowing access to system resources both locally and on the network without mounting or unmounting anything

This is such an attractive idea, and I'm gonna give it a try just because I want something with this idea to succeed. Seems the project has many other great ideas too, like the modular kernel where implementations can be switched out. Gonna be interesting to see where it goes! Good luck author/team :)

Edit: This part scares me a bit though: "Graphics Stack: compositing in-kernel", but I'm not sure if it scares me because I don't understand those parts deeply enough. Isn't this potentially a huge security hole? Maybe the capability-based security model prevents it from being a big issue; again, I'm not sure, because I don't think I understand it deeply enough or as a whole.

Philpax

The choice of a pure-monolithic kernel is also interesting; I can buy that it's more secure, but having to recompile the kernel every time you change hardware sounds like it would be pretty tedious. Early days, though, so we'll see how that decision works out.

vlovich123

Why would you buy that it's more secure? Traditionally, in Windows, in-kernel compositing was a constant source of security vulnerabilities. Sure, Rust may help with the obvious memory corruption possibilities, but I'm not convinced.

astrange

A monolithic kernel and resource locators that automatically mount network drives? That's just macOS.

(You don't have to recompile the kernel if you put all the device drivers in it, just keep the object files around and relink it.)

Rohansi

Why would you need to recompile if hardware changes? Linux manages just fine as a monolithic kernel that ships with support for many devices in the same kernel build.

ofrzeta

It's true that you can compile everything in but it's not really the standard practice. On a stock distro you have dozens of dynamic modules loaded.

incognito124

Recompiling the whole kernel just to change drivers seems like a deal-breaker for wider adoption

skissane

Recompiling (or at least relinking) the kernel to change drivers (or even system configuration) is a bit of a blast from the past. In the 1960s through the 1980s it was a very common thing, called "system generation". It was found in mainframe operating systems (e.g. OS/360, OS/VS1, OS/VS2, DOS/360), in CP/M, and in NetWare 2.x (3.x onwards dropped the need for it).

Most of these systems came with utilities to partially automate the process, driven by some kind of config file; NetWare 2.x even had TUI menuing apps (ELSGEN, NETGEN) to assist with it.

pjmlp

Quite common in Linux's early days.

It's also the only approach for systems where people advocate statically linking everything; yet another reason why dynamic loading became a thing.

surajrmal

If this kernel ever gets big enough where this might matter, I'm sure they can change the design. Nothing is set in stone forever and for the foreseeable future it's unlikely to matter.

jadbox

In theory, wouldn't it be possible for the Linux kernel to also provide a URI "auto mount" extension too?

KerrAvon

In practice, the problem with URIs is that it makes parsing very complex. You don’t really want a parser of that complexity in the kernel if you can avoid it, for performance reasons if nothing else. For low-level resource management, an ad-hoc, much simpler standard would be significantly better.
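The contrast is easy to sketch. Below is a hypothetical, deliberately minimal locator grammar (just `scheme:/seg/seg`, with no authority, query, fragment, or percent-encoding) as an illustration of the "much simpler standard" idea; the `Locator` type and `parse` function are my own invention, not anything from CharlotteOS:

```rust
// A minimal resource locator: "scheme:/segment/segment/...".
// No userinfo, host, port, query, fragment, or escaping, so the
// parser is a handful of straight-line string operations.
#[derive(Debug, PartialEq)]
struct Locator<'a> {
    scheme: &'a str,
    segments: Vec<&'a str>,
}

fn parse<'a>(input: &'a str) -> Option<Locator<'a>> {
    // Split on the first ':'; reject an empty scheme or a missing '/'.
    let (scheme, rest) = input.split_once(':')?;
    if scheme.is_empty() || !rest.starts_with('/') {
        return None;
    }
    // Everything after the scheme is a plain '/'-separated path.
    let segments = rest.split('/').filter(|s| !s.is_empty()).collect();
    Some(Locator { scheme, segments })
}

fn main() {
    let loc = parse("disk:/volumes/0/etc/hosts").unwrap();
    assert_eq!(loc.scheme, "disk");
    assert_eq!(loc.segments, vec!["volumes", "0", "etc", "hosts"]);
    assert!(parse("no-colon-no-parse").is_none());
    println!("ok");
}
```

Compare that with RFC 3986, where a conforming parser has to handle authorities, percent-encoding, relative references, and normalization rules; that is the complexity the comment above is arguing against putting in a kernel.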

embedding-shape

Chuck Multiaddr in there (https://multiformats.io/multiaddr/), can be used for URLs, file paths, network addresses, you name it. Easy to parse as well.
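To illustrate the "easy to parse" point: multiaddr strings are essentially alternating `/protocol/value` pairs, which a few lines can pick apart. This sketch assumes every protocol carries a value, which the real multiaddr spec does not require (e.g. `/quic` stands alone), and the function name is mine:

```rust
// Parse a simplified multiaddr like "/ip4/127.0.0.1/tcp/8080"
// into (protocol, value) pairs. Returns None on malformed input.
fn parse_multiaddr(input: &str) -> Option<Vec<(&str, &str)>> {
    let mut parts = input.strip_prefix('/')?.split('/');
    let mut pairs = Vec::new();
    while let Some(proto) = parts.next() {
        if proto.is_empty() {
            return None; // empty component, e.g. "//" or a bare "/"
        }
        // Simplifying assumption: every protocol has a value component.
        let value = parts.next()?;
        pairs.push((proto, value));
    }
    Some(pairs)
}

fn main() {
    let addr = parse_multiaddr("/ip4/127.0.0.1/tcp/8080").unwrap();
    assert_eq!(addr, vec![("ip4", "127.0.0.1"), ("tcp", "8080")]);
    assert!(parse_multiaddr("not-a-multiaddr").is_none());
    println!("ok");
}
```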

BobbyTables2

Wish OP had put that as the main readme.

The intro page is currently useless.

embedding-shape

To be fair, the submission URL goes to the kernel specifically, so the README is good considering the repository it's in. The link I put earlier I found via the GitHub organization, which does give you an overview of the OS as a whole (not just the kernel): https://github.com/charlotte-os/

bionsystem

I believe Redox is doing the same (the everything-as-a-URI part).

yjftsjthsd-h

Skimming https://doc.redox-os.org/book/scheme-rooted-paths.html and https://doc.redox-os.org/book/schemes.html , I think they've slightly reworked that to a more-unixy approach, but yeah still fundamentally more URI than traditional VFS

whatpeoplewant

This looks like a very interesting project! Good luck to the team.

user3939382

I’m working on one with a completely new stack: hardware, comms, networking, infra, everything.

the__alchemist

I love seeing projects in this space! Non-big-corp open-source OSes have largely been limited to Linux and friends; I'd love to explore the space more and have non-Linux, non-MS/Apple options. For example, Linux has these at the core, which I don't find to be a good match for my uses:

  - Multi-user and server-oriented permissions system.
  - Incompatible ABIs
  - File-based everything; leads to scattered state that gets messy over time.
  - Package managers and compiling-from-source instead of distributing runnable applications directly.
  - Dependence on CLI, and steep learning curve.
If you're OK with those, cool! I think we should have more options.

grepfru_it

Haiku, Plan 9, Redox, and Hurd come to mind.

ReactOS if you need something to replace Windows.

Implementing support for docker on these operating systems could give them the life you are looking for

Zardoz84

BSD exists. Also OpenSolaris, Minix, etc.

ogogmad

> Package managers and compiling-from-source instead of distributing runnable applications directly.

Docker tries to partially address this, right?

> Dependence on CLI, and steep learning curve.

I think this is partially eased by LLMs.

the__alchemist

But you can see the theme here: Adding more layers of complexity to patch things. LLMs do seem to do a better job than searching forum posts! I would argue that Docker's point is to patch compatibility barriers in Linux.

ofrzeta

So, what's modern about it? "novel systems like Plan 9" is quite funny because Plan 9 is 30 years old.

pjmlp

The sad part is that there are too many ideas of old systems lost in a world that 30 years later seems too focused on putting Linux distributions everywhere.

linguae

Indeed. I am reminded of what Alan Kay has repeatedly referred to as a “pop culture” of computing that has become widespread in technical communities since the 1980s, when the spread of technology grew faster than educational efforts. One result is that there are many inventions and innovations from the research community that never got adopted by major players. The corollary to “perfect is the enemy of the good” is that good-enough solutions have amazingly long lifetimes in the marketplace.

There are many great ideas in operating systems, programming languages, and other systems that have been developed in the past 30 years, but these ideas need to work with existing infrastructure due to costs, network effects, and other important factors.

What is interesting is how some of these features do get picked up by the mainstream computing ecosystem. Rust is one of the biggest breakthroughs in systems programming in decades, bringing together research in linear types and memory safety in a form that has resonated with a lot of systems programmers who tend to resist typical languages from the PL community. Some ideas from Plan 9, such as 9P, have made their way into contemporary systems. Features that were once the domain of Lisp have made their ways into contemporary programming languages, such as anonymous functions.

I think it would be cool if there were some book or blog that taught “alternate universe computing”: the ideas of research systems during the past few decades that didn’t become dominant but have very important lessons that people working on today’s systems can apply. A lot of what I know about research systems comes from graduate school, working in research environments, and reading sites like Hacker News. It would be cool if this information were more widely disseminated.

pjmlp

There is actually a talk like that from a couple of years ago; I'll have to see if I can find it again.

grepfru_it

There was also a period of time when everyone and their mom was writing a new operating system, trying to replicate Linux's success.

pjmlp

Isn't what all those UNIX clones keep trying to do?

Razengan

Yeah, the more you read up on computing history from barely 40 years ago, the more it seems that most of the things we take for granted today became so through politics (and, in the case of Microsoft, bullying) more than merit.

Razengan

Regarding Microsoft: this was before even the "Browser Wars". They'd send suited people to the offices of Japanese PC manufacturers and threaten to revoke their Windows licenses if they even OFFERED customers the CHOICE of an alternative operating system!

This and other dirt is on any YouTube video about the history/demise of alternative computing platforms/OSes.

IshKebab

That's still newer than Linux's system design.

ofrzeta

In an operating systems course I attended, it was mostly Unix, and everyone was used to bashing Windows NT ("so crappy, BSOD, etc."), but we had Stallings' book, and I was surprised to learn that NT was in many ways an improvement over Unix and Linux.

exe34

NT the kernel is quite good. Windows NT itself was not always great.

not4uffin

I’m very happy I’m seeing more open source kernels being released.

More options (and thus competition) is very healthy.

jancsika

> GPLv3 or later (with proprietary driver clarification)

What's that parenthetical mean?

nathcd

Looks like it's explained here: https://github.com/charlotte-os/Catten/blob/main/License/cla...

Specifically, "Users may link this kernel with closed-source binary drivers, including static libraries, for personal, internal, or evaluation use without being required to disclose the source code of the proprietary driver.".

jancsika

Ok, even Doug Crockford has mucked around with licensing before, so this is definitely a digression and not aimed at CharlotteOS which looks fascinating:

I wish there were a social stigma in Open Source/Free Software against doing anything other than just picking a bog-standard license.

I mean, we even have a social stigma among OS developers about rolling your own crypto primitives. Even though it's the same very general domain, we know from experience that someone who isn't an active, experienced cryptographer has close to a zero percent chance of getting it right.

If that's true, then it's even less likely that a programmer is going to make legally competent (or even legally relevant) decisions when writing their own open source compatible license, or modifying an existing license.

I guess technically the "clarification" of a bog standard license is outside of my critique. Even so, their clarification is shoe-horned right there in a parenthetical next to the "License" heading, making me itchy... :)

shevy-java

Written in Rust. Hmm.

SerenityOS is written in C++.

I'd love some kind of meta-language that is easy to read and write, easy to maintain - but fast. C, C++, Rust etc... are not that easy to read, write and maintain.

cultofmetatron

Fast necessitates manual control -> more semantics for low-level control that need to be expressible, i.e. more complexity.

Easy to understand and maintain -> the computer does more work for you to "figure things out", in a way that simply can't be optimal under all conditions.

TL;DR: what you're asking for isn't really possible without some form of AGI.

card_zero

What languages are easy to understand and maintain, anyway?

kragen

It's comforting to see that capabilities with mandatory access control have become the new normal.

ForHackernews

How does this compare to SerenityOS? At a glance, it looks more modern and free from POSIX legacy?

pjmlp

Interesting, and kudos for treading other paths and not being yet another POSIX clone.

varispeed

A modern operating system, ready to face the challenges of today's political landscape, should natively support "hidden" encrypted containers: you log in to a completely different, separate environment depending on the password. That way, when under threat, you could disclose a password to an environment you are willing to share, and the attacker would have no way of proving any other environment exists.

Razengan

It would be easy to tell for anyone seriously after you: if I kidnap you and make you log into your computer, and you log into the decoy state, it'd be obvious that the last time you visited any website etc. was over a month ago, and so on.

varispeed

For sure you'd have to use it from time to time.

mixmastamyk

Or, write a login script to touch files at random.

Razengan

Something I thought about long ago was that it would be better/easier to divide user accounts into "personas": different sets of public-facing IDs, settings etc.

This could be done at every level: the operating system, the browser, websites..

So if you don't care about the website knowing it's the same person, instead of having multiple user accounts on HN, Reddit, you could log into a single account, then choose from a set of different usernames each with their own post history, karma, etc.

If you want to have different usernames on each website, switch the browser persona.

At the OS level, people could have different "decoy" personas if they're at risk of state/partner spying or wrench-based decryption, and so on.