Learning C3

135 comments · May 29, 2025

lerno

Some other links on C3 that might be interesting:

Interviews:

- https://www.youtube.com/watch?v=UC8VDRJqXfc

- https://www.youtube.com/watch?v=9rS8MVZH-vA

Here is a series on doing various tasks in C3:

- https://ebn.codeberg.page/programming/c3/c3-file-io/

Some projects:

- Gameboy emulator https://github.com/OdnetninI/Gameboy-Emulator/

- RISC-V bare-metal Hello World: https://www.youtube.com/watch?v=0iAJxx6Ok4E

- "Depths of Daemonheim" roguelike https://github.com/TechnicalFowl/7DRL-2025

Tsoding's "first impression" of C3 stream:

- https://www.youtube.com/watch?v=Qzw1m7PweXs

Defletter

C3 looks promising, but any language that supports nulls needs null-restricted types, not whatever those contract comments are. If I wanted to have to null-check everything, or YOLO it, I would just write Java... and even Java is seeking to fix this: https://openjdk.org/jeps/8303099

lerno

It's an interesting problem. Originally I experimented with having both `*` and `&` syntax, so `int&` was a ref (non-null) and `int*` was a pointer. You then notice two things:

1. You want almost all pointer parameters to be non-null.

2. Non-null variables are very hard to fit into a language without constructors.

Approaches to avoid constructors/destructors, such as ZII (zero-is-initialization), play very poorly with ref values as well. What you end up with is some period of time where a value is quasi-valid: since non-null types need to be assigned, the value is in a broken state before its initial assignment.

It's certainly possible to create generic "type safe" non-null types in C3, but they are not baked into the language.
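As an illustration of what such a userland type could look like, here is a minimal, non-generic sketch that checks at runtime rather than through the type system. None of this is from the thread or the stdlib, and C3's generic modules could lift it over any pointee type:

    // Hypothetical runtime-checked non-null wrapper; not a stdlib type.
    struct NonNullInt
    {
        int* ptr;
    }

    fn NonNullInt non_null(int* p)
    {
        assert(p != null, "null passed to non_null");
        NonNullInt result = { .ptr = p };
        return result;
    }

    fn int NonNullInt.get(&self)
    {
        // A zero-initialized (ZII) NonNullInt trips this check: the
        // quasi-valid window is caught at use, not at compile time.
        assert(self.ptr != null, "use of zero-initialized NonNullInt");
        return *self.ptr;
    }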

lerno

I'm unable to edit this now... that should teach me not to comment and then go to kendo practice... It should say '*' and '&' and 'int&' and 'int*'

bryanlarsen

The Hacker News markdown parser seems to have swallowed your asterisks, which are essential to understanding your comment.

trealira

Yeah, if you want to use asterisks without italicizing your text, you need to escape them with backslashes, and then you can write things like 5 * 2 * 1 = 10. That is, you'd write it like this:

  5 \* 2 \* 1 = 10

lerno

Yes, and I was too slow getting back to try to edit it. Sorry about that.

aidenn0

> Approaches to avoid constructors/destructors, such as ZII (zero-is-initialization), play very poorly with ref values as well. What you end up with is some period of time where a value is quasi-valid: since non-null types need to be assigned, the value is in a broken state before its initial assignment.

I don't see that as a problem; don't separate declaration from assignment and it will never be unassigned. Then a ZII non-null pointer is always a compile-time error.

wavemode

> don't separate declaration from assignment and it will never be unassigned

That's tricky when you want to write algorithms where you can start with an uninitialized object and are guaranteed to have initialized the object by the time the algorithm completes. (Simplest example: create an array B that contains the elements of array A in reverse order.)

You can either allow declaring B uninitialized (which can be a safety hazard) or force B to be given initial values for every element (which can be a big waste of time for large arrays).
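A sketch of that example in C3 (a hypothetical function, nothing from the thread): every element of b is written by the time the loop ends, but at the start b's contents are arbitrary, which is exactly the window a non-null element type would forbid.

    // Fills b with the elements of a in reverse order; b only becomes
    // fully initialized once the loop completes.
    fn void reverse(int[] a, int[] b)
    {
        assert(a.len == b.len);
        for (usz i = 0; i < a.len; i++)
        {
            b[a.len - 1 - i] = a[i];
        }
    }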

lerno

I don't quite see what you mean. As an example, let's say you use ZII and allocate 100 objects in a single allocation. These are now zero-initialized and so are either invalid (which should not happen) or do not hold non-null types. Can you explain how you intend this scenario to be resolved in your case?

Otherwise it's quite straightforward: they have an uninitialized state (zero) and are then wired up as they are used. Trying to prevent null pointers here is something the program has to do. Making the compiler guarantee it without requiring constructors, however, is a challenge I don't know how to tackle.
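As a concrete sketch of the 100-object scenario (made-up type; C3 zero-initializes declarations unless they are marked @noinit):

    struct Node
    {
        Node* next;    // zero means null: fine for a nullable pointer,
                       // but an invalid state for a non-null ref type
        int value;
    }

    fn void demo()
    {
        Node[100] nodes;              // one allocation, all bytes zeroed (ZII)
        nodes[0].next = &nodes[1];    // links are wired up lazily, as used
    }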

90s_dev

I'm on the fence about function contracts like this. I've seen them for a decade in other languages, but never really used them, so I can't say how I feel about them.

But having them be inside comments is just weird.

Jtsummers

It's a directive that happens to be placed at the tail end of a comment. Reading the documentation, the doc comment stops being a comment proper at the first @-directive; after that it's a list of directives. SPARK started in comments, and ACSL is placed in specially marked comments. SPARK 2014 moved into Ada proper using Ada 2012 features (aspects). The difference between SPARK 2014's annotations and this is basically whether the annotations go above the function or after the function declaration.
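For concreteness, a C3 contract looks roughly like this (a sketch based on the documentation; the function is made up and the directive set shown is partial):

    <*
     Scales every element in place.

     @require values.len > 0
     @require factor != 0
    *>
    fn void scale(int[] values, int factor)
    {
        for (usz i = 0; i < values.len; i++) values[i] *= factor;
    }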

joshring2

It is different, yes. Having read a good amount of it by now, I find it works pretty well in practice. It means you can adopt them incrementally if you like, and code with or without them looks quite similar, assuming you documented your code; the function signatures look the same as well, which I appreciate.

monkeyelite

Why is there only one way to solve a problem?

netbioserror

Nim solves this problem by having only two explicit, restricted nullable types: pointers and references. Pointers are manually managed, references are automatically managed; both start as nil and must have their referenced objects instantiated manually.

The entire rest of the language is built on pass-by-value using stack values and stack-managed hidden unique pointers. You basically never need to use a ref or a pointer unless you're building an interface to a C or C++ library. I have written a 40k-line production application with no reference or pointer types anywhere. Almost any case you'd need is covered by simply passing a compound type or dynamic container as a mutable value, on which it's impossible to perform any kind of pointer or reference semantics. The lifetime is already managed, so semantically it's just a value.

abujazar

Will everything blow up when they create C4?

drob518

Upvoted for humor.

rdtsc

Interestingly there is also C2: http://c2lang.org

sgt

There's also C4, but that's either an explosive or a notation language for modeling software architecture.

rdtsc

Hopefully the notation language folks take full advantage of puns associated with explosives.

lerno

Every time

lerno

Yes, C3 started as a variant of C2.

plainOldText

Has anyone tried both C3 and Hare[1]? How do they fare? There seems to be quite an overlap between the two.

[1] https://harelang.org/

mustermannBB

The problem with Hare is that it is (or at least was, last time I checked) Linux/Unix only, and deliberately so. That kind of makes it DOA for many.

plainOldText

Indeed. There’s a port for macOS though.

And yet, out of all these newer C-like languages, it looks like Hare probably takes the crown for simplicity. Among other things, Hare uses QBE[1] as its compiler backend, which is about 10% the complexity of LLVM.

[1] https://c9x.me/compile/

lerno

The downside of QBE is that it then requires an assembler and a linker, and QBE's only input and output format is text.

Plus, the "frontend -> QBE -> assembler -> binary" pipeline is slower than "frontend -> LLVM -> binary", and LLVM is already known for being a fairly slow compiler.

sitkack

QBE is an art project. Read the source.

amelius

There's also Zig in the C-alternatives space.

https://ziglang.org/

mapcars

There is also Odin: https://odin-lang.org/

It would be nice to have a list of these languages and comparisons between them.

uecker

There is also C23 and at some point C2Y.

C23 got typeof, constexpr constants, enums with an underlying type, #embed, auto, _BitInt, checked integers, new struct compatibility rules, binary constants, nullptr, initialization with {}, and various other improvements and cleanups. Modern C code, while still being simple, can look quite different from what people might be used to.

C2Y has already got named loops, countof, if with declarations, case range expressions, _Generic with type arguments, and quite a lot of UB removed from the core language. (Our aim is also to have a memory-safe subset.)

rubit_xxx17

I love this.

But this was distracting:

> Macros are a bag of worms. Sure, they can be a great source of protein, but will you really see me eating them? I might use worms when I'm fishing, but I don't see much use for them around the home. To express my opinion outside of a metaphor: macros have niche use cases, are good at what they do, but shouldn't be abused. One example of this abuse would be making a turing-complete domain-specific language inside of some macro-supporting programming language.

Daril

Based on this comparison:

https://c3-lang.org/faq/compare-languages/

One would argue that the best C/C++ alternative/evolution language to use would be D. D also has its own cross-platform GUI library and an IDE.

I wonder why D doesn't have a larger adoption base.

lerno

I can only speak for myself:

1. It is so big.

2. It still largely depends on GC (though this is actually less important).

It keeps adding features, but adding features isn't what makes a language worth using. In fact, that's one of the least attractive things about C++ as well.

So my guess:

1. It bet wrong on GC when trying to compete with C++.

2. After failing to get traction, it kept adding features, which felt a bit like hoping that some feature would finally be the killer feature of the language.

3. Not understanding that the added features actually made it less attractive.

4. C++ then left the GC track completely and doubled down on being low-level, at which point D ended up in a weird position: neither high-level enough to feel like a high-level alternative, nor low-level enough to compete with C++.

5. Finally: the fact that it's been around for so long without taking off makes it even harder for it to take off, because it's seen as a has-been.

Maybe Walter Bright should create a curated version of D with only the best features. But given how long it takes to create a language and a mature stdlib, that's WAY easier said than done.

arp242

The dmd compiler not being open source until 2017[1] made it more or less a non-starter for a great many use cases. That would have been okay in the 80s, but with tons of languages to choose from since the 90s/00s, your language needs something very special to sell licenses.

[1]: Specifically: "The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars."

pjmlp

I think the biggest issue has been always chasing the next big thing that might eventually bring mindshare to D, while not finishing the previous attempts, so there are quite a few half-baked features by now.

Even Andrei Alexandrescu eventually refocused on C++ and is contributing to some of the C++26 reflection papers.

fuzztester

> while not finishing the previous attempts

I agree, and that applies to many software projects, not just programming languages.

> so there are quite a few half-baked features by now

What are some of those half-baked features?

GoblinSlayer

Indeed: first get traction, then add as many features as you want and become Perl. That's the real carcinization.

zamalek

6. It has exceptions.

Many people consider that an anti-feature.

PaulHoule

Strikes me as so-so.

defer is the kind of thing I would mock up in a hurry in my code if a language or framework lacked the proper facilities, but I think you are better served by the with statement in Python or automatic resource management in Java.
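For reference, a minimal sketch of the C3-style defer being discussed here (the libc bindings are declared inline as an assumption, to keep the example self-contained):

    extern fn void* malloc(usz size);
    extern fn void free(void* ptr);

    fn void demo()
    {
        void* buf = malloc(1024);
        defer free(buf);    // runs on every scope exit, early returns included
        // ... use buf; no per-path cleanup needed ...
    }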

Similarly, I think people should get over Optional and Either and all of that; my experience is that it is a lot of work to use those tools properly. My first experience with C was circa 1985, when I was porting a terminal emulator for CP/M from Byte magazine to OS-9 on the TRS-80 Color Computer, and it was pretty traumatic to see how about 10 lines of code on the happy path got bulked up to 50 lines with error handling weaved around and through it. When I saw Java in '95 I was so delighted [1] to see a default unhappy path which could be modified with catch {} and fortified with finally {}.

It's cool to think exceptions aren't cool, but the only justification I see for that is that it can be a hassle to populate stack traces for debugging; and yeah, back in the 1990s, exceptions were one of the many things in the C++ spec that didn't actually work. Sure, there are difficult problems with error handling, such as errors not respecting your ideas of encapsulation [2], but those are rarely addressed by languages and frameworks even though they could be:

https://gen5.info/q/2008/08/27/what-do-you-do-when-youve-cau...

Putting in ? or Optional and Either, though, is just moving the deck chairs around on the Titanic.

[1] I know I'm weird. I squee when things are orderly, more people seem to squee when they see that Docker lets them run 5 versions of libc and 7 versions of Java and 15 versions of some library.

[2] These are places where the "desert of the real" intrudes on "the way things are spozed to be".

lerno

C3's error handling is fairly novel, though. It tries to find a sweet spot between composability, explicitness, and C compatibility.

The try-catch has nice composability:

    try {
        int x = foo_may_fail();
        int y = bar_may_fail(x);
    } catch (...) {
        ...
    }

Regular Result types need to use flatmap for this, and of course error codes or multiple returns also struggle with it. With C3:

    int? x = foo_may_fail();
    int? y = bar_may_fail(x);
    if (catch err = y) {
       ...
       return;
    }
    // y is implicitly unwrapped to "int" here

This is not to say it would satisfy you; it's just to illustrate a novel approach that goes beyond Optional and Either while having a lot in common with try-catch.
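For completeness, the same chain can propagate instead of handle, using C3's rethrow suffix (a sketch reusing the thread's hypothetical functions; ! makes the enclosing function return the fault to its caller):

    fn int? both_may_fail()
    {
        int x = foo_may_fail()!;    // on failure, the fault is returned upward
        return bar_may_fail(x)!;
    }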

throwawaymaths

A nitpick:

A bit down the page there is stuff on the case syntax. The fact that "you can't have an empty break" is a good choice, but the fact that having two cases do the same thing is written

    case X:
    case Y:
is a footgun waiting to happen. I would strongly suggest the authors of C3 make stacked cases look like this:

    case X, Y:

lerno

"case X, Y" works for 3-4 values, but for something longer problems accumulate:

    case SOME_BAD_THING, SOME_OTHER_CONDITION, HERE_IS_NUMBER_THREE:
        foo();
        int y = baz();

Placing them on the next row is fairly hard to read:

    case SOME_BAD_THING, SOME_OTHER_CONDITION, 
      HERE_IS_NUMBER_THREE, AND_NUMBER_FOUR, AND_NUMBER_FIVE,
      AND_THE_LAST_ONE:
        foo();
        int y = baz();

In C I regularly end up with lists that have 10+ fallthroughs like this, because, for enums at least, I prefer complete switches over a default:

    case SOME_BAD_THING:
    case SOME_OTHER_CONDITION:
    case HERE_IS_NUMBER_THREE:
    case AND_NUMBER_FOUR:
    case AND_NUMBER_FIVE:
    case AND_THE_LAST_ONE:
        foo();
        int y = baz();
  
I understand the desire to use "case X, Y:" instead, and I did consider it at length, but I found the lack of readability made it impossible. One trade-off would have been:

    case SOME_BAD_THING,
    case SOME_OTHER_CONDITION,
    case HERE_IS_NUMBER_THREE,
    case AND_NUMBER_FOUR,
    case AND_NUMBER_FIVE,
    case AND_THE_LAST_ONE:
        foo();
        int y = baz();

But it felt clearer to stick to C syntax, despite the inconsistency.

fn-mote

> In C I regularly end up with lists that have 10+ fallthroughs like this [...]

Frankly, that seems like a code smell, not a problem that needs a solution within the language.

lerno

No, it's not a problem. If you think it is, write a C compiler in C and then come back and show me your code that doesn't have this. :)

fragmede

It seems subtle to distinguish between case 3,4: meaning values 3 or 4, and case (3,4): meaning an array with the value [3,4].

throwawaymaths

Oof. To me, switch/case mentally implies constant-time matching and routing; I wonder if that is the case here (it could be, if arrays have compile-time-known length).

lerno

You have both in C3:

    switch (x) {
       case 0:
         ...
       case 1 + 1:
         ...
    }

This will behave in the normal way. But you can also have:

    switch {
        case foo() > 0:
          ...
        case bar() + baz() == s:
          ...
    }

In which case it lowers to the corresponding if-else.

jcaguilar

I only wish the syntax were changed to make it easier to search/grep for the definitions of functions and types. Odin makes this so nice: you can search for "<function|type name> ::". Maybe moving the return type to after the closing parenthesis would be enough?

Two more wishes: add named parameters and structured concurrency, and I think it would be a very cool language.

lerno

It was the minimal change from C. It's fairly easy to regex out the types, so while it's not as nice as Odin, it should be straightforward.

Named parameters are already in the language.
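A quick sketch of a named-argument call (syntax as I understand it from recent documentation; treat the exact call form as an assumption):

    import std::io;

    fn void greet(String name, String greeting = "Hello")
    {
        io::printfn("%s, %s!", greeting, name);
    }

    fn void main()
    {
        greet("world", greeting: "Hi");    // overrides the default by name
    }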

Regarding concurrency, I don't want to pick one concurrency model over another. I will see what hooks I can provide for userland additions, but the language will not be opinionated about concurrency.

synergy20

I wish C3 had simple RAII/objects/classes built in (no inheritance needed, no polymorphism is fine, just some encapsulation better than C's structs with function pointers). Then it would be a more powerful C and a much simpler C++, really a sweet spot between the two that works for 90% of the C/C++ use cases.

TheMagicHorsey

After using Rust on a couple of projects, I understand the appeal of simpler languages like C3, Zig, and Odin. As one commenter very aptly put it on the Zig subreddit: "I used Zig for (internal tool) because I wanted to quickly write my tool and debug it, and not spend all my time debugging my knowledge of Rust."

tayo42

Is Zig really so established at this point that you'd feel comfortable using it for a work project? Isn't it just going to piss off the next person and make them rewrite it? I guess Rust has the same problem to some extent, but there are a lot of resources for writing Rust out there now.

throwawaymaths

I suppose the nice thing about Zig is that, for many things, porting back to C is relatively straightforward, and if you wanted to do it incrementally, there's a way to do that too.

TheMagicHorsey

I wouldn't use Zig for something production-critical, but other people, like TigerBeetle, have decided it's good enough for them, and they seem to be doing fine commercially, so I refrain from saying it's not production-ready.

But one thing's for sure... there's just not a lot of sample Zig code out there. Granted, it's simpler than Rust, but your average AI tool doesn't get how to write idiomatic Zig, whereas most AI tools seem to handle Rust okay. Maybe idiomatic Zig just isn't a thing yet. Or maybe idiomatic Zig is like idiomatic C... in the eye of the beholder.

chrisco255

Depends on the project and the team, yeah? In my opinion, Zig is simple and lends itself to simpler patterns. Ultimately, though, it's always a trade-off weighing talent, project scope, team preferences, technical challenges, long-term maintenance, etc.