
A review of Nim 2: The good and bad with example code

cb321

This is a decent overview, but misses a few nice things. Interested readers should not assume it is exhaustive (generally something they should not assume..)

E.g., because the feature is so rare (controversial?) it doesn't get mentioned much, but you can also define your own operators in Nim. So, if you miss bitwise `|=` from C-like PLangs, you can just say:

    proc `|=`*[T,U](a: var T, b: U) = a = a or b
Of course, Nim has a built-in `set[T]` for managing bit sets in a nicer fashion, with traditionally named set-theoretic operators like intersection, union, etc. https://github.com/c-blake/procs makes a lot of use of set[enum] to do its dependency analysis of which Linux /proc files to load and which fields to parse, for example (and is generally much faster than the C procps alternatives).
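
For a rough flavor (hypothetical enum and values, not taken from the procs code), set[enum] usage looks like:

    type ProcField = enum pfPid, pfName, pfState, pfRss  # made-up field enum
    var wanted: set[ProcField] = {pfPid, pfName}
    wanted.incl pfRss                 # add an element
    let cheap = {pfPid, pfState}
    echo wanted * cheap               # intersection: {pfPid}
    echo wanted + cheap               # union
    echo pfName in wanted             # membership test: true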

This same user-defined operator notation for calls can be used for templates and macros as well, which makes a number of customized notations/domain-specific languages (DSLs) very easy. And pragma macros make it easy to tag definitions with special compile-time behaviors. And so on.
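
As a tiny, hypothetical illustration (not from the article), a template can provide such an operator just as a proc can:

    import std/strutils
    # hypothetical "starts with" operator implemented as a template
    template `=~`(s, prefix: string): bool = s.startsWith(prefix)
    assert "nim-2.0" =~ "nim"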

creata

For anyone curious, the precedence is based on the name of the operator: https://nim-lang.org/docs/manual.html#syntax-precedence

j1elo

That's cool! One operator I do sometimes miss in C is `||=` (not from any other language, I just thought it should exist), and it's sorely missed precisely because the bitwise operators do have `|=`. The two must not be conflated: `|=` is very different in that it does no short-circuiting, so the right-hand side is always evaluated.

cb321

Nim also has term rewriting macros (e.g. https://scripter.co/notes/nim/#term-rewriting-macros) which can transform call patterns. The relevance is that you could combine that with `||=` to probably get whatever short-circuiting (or not) semantics you want on the RHS. You could also use them for bignum/matrix libraries where arithmetic can be streamlined (e.g. matching jumbo operations to convert N passes into 1 pass), at least potentially. Often the scale matters: if the data fits in "available" L1 cache then many passes might be more autovectorization-friendly and so faster, while if it doesn't fit then one pass is much faster. It all depends, etc., etc.
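
For reference, the shape of such a rewrite rule (essentially the manual's classic example; see the experimental-features manual for the exact requirements) is something like:

    # rewrite any `a * 2` on ints into `a + a` at compile time
    template optMul{`*`(a, 2)}(a: int): int = a + a
    proc double(x: int): int = x * 2   # call sites matching the pattern get rewritten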

vanderZwan

> In fact, you could use Nim as a production-ready alternative to the upcoming Carbon language. Nim has fantastic interoperability with C++, supporting templates, constructors, destructors, overloaded operators, etc. However, it does not compile to readable C or C++ code, which is unlike Carbon's goals.

Well, that really depends on what the reason for one's interest in Carbon is, which is slightly hinted at by the last sentence. From what I understand, a big goal is to be able to do automated migration of large C++ codebases at Google to a saner language. Mond had a nice blog post musing about it[0]. Nim is not that.

Of course, neither is Carbon yet, and we'll have to wait and see if it reaches that point or if it ends up on killedbygoogle.com. I'm rooting for Carbon though, it's a cool idea.

Anyway, that is a different ambition than looking for a successor language that lets you use existing C++ code without requiring that the latter is changed, which is what Nim is suggested to be good at here.

[0] https://herecomesthemoon.net/2025/02/carbon-is-not-a-languag...

dualogy

> Nim has fantastic interoperability with C++

Last year I asked around in the Nim community whether "the C++ interop" would allow me to easily link-to-and-import in Nim a C++ lib (in this case, a 3D engine called WickedEngine) and thus make a game using its surface API from Nim instead of writing it all in C++.

There seemed to be no straightforward way to do so whatsoever. Sure you can import old-school C APIs. Sure maybe you can have Nim transpile to C++ code. But "fantastic interoperability" didn't have my fantasy here in mind: something like `@importcpp "../libwickedengine/compilecommands.json"` and boom, done, including LSP auto-complete =)

It would be the same for other major C++ libs then: think LLVM, Dear Imgui, Qt, OpenCV, libtorrent, FLTK, wxWidgets, bgfx, assimp, SFML......

Sure, I get it, "unlike C, C++ doesn't have an ABI. These C++ libs should maintain and expose a basic C API". I agree! But still..

elcritch

There's a wrapper for Unreal Engine using the C++ interop. It's doable but not automatic. https://github.com/jmgomez/NimForUE

Mentally I view Nim as a better, safer, easier C++ now. Anything I wanted to do in C/C++ I can do in Nim, but far more easily. Not exactly a Carbon competitor, but still an alt C++ 2.0 with C++ interop.

zozbot234

I'm a bit skeptical about the "fantastic interop" with C++ also. If it were that easy, the Rust folks would have done it already, whereas it seems they're still looking into it. And Rust is being developed for LLVM, a compiler that's also shared with C++.

cb321

With `nim cpp` the Nim compiler actually just generates C++ from the Nim source for the backend to compile. So, calling C++ code is just emitting the calls at a C++ source level and so is straightforward. The situation with Rust "sharing" LLVM is very different, as that is not a source-to-source compiler.

C++ code calling Nim code is also not usually as straightforward. So, "fantastic" here may apply only in one call direction.
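
For the Nim-calling-C++ direction, the manual's importcpp pattern looks roughly like this sketch (compile with `nim cpp`):

    type
      StdVector[T] {.importcpp: "std::vector", header: "<vector>".} = object
    proc pushBack[T](v: var StdVector[T]; x: T) {.importcpp: "#.push_back(#)".}
    proc size[T](v: StdVector[T]): csize_t {.importcpp: "#.size()".}
    var v: StdVector[cint]
    v.pushBack(3)
    echo v.size()   # 1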

jibal

Rust doesn't have an option to generate C++ code; Nim does.

> Rust is being developed for LLVM, a compiler that's also shared with C++.

Not at all relevant (and LLVM is a backend target, not a compiler).

zozbot234

Rust cbindgen has an option to generate C code, which is generally also valid C++.

barchar

The coolest thing about Nim is that its macros participate in the type system and overload resolution, and can work with both type-checked and untyped code.
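
A hedged sketch of what that looks like (hypothetical `describe` macro, overloaded on the argument's type):

    import std/macros
    macro describe(x: int): untyped =
      result = quote do:
        echo "an int: ", `x`
    macro describe(x: string): untyped =
      result = quote do:
        echo "a string: ", `x`
    describe 42          # overload resolution picks the int version
    describe "hello"     # ...and the string version here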

cb321

The "dual" of this type-dispatch of template/macros is that the scoping rules also allow you to use a template or a macro to define a bunch of things which are only "in effect" in a sub-scope, like, e.g. that old C hazard of pointer arithmetic - "defined but contained": https://forum.nim-lang.org/t/1188#7366

In a lot of little ways, Nim is a lot like a statically typed Lisp with a vaguely Python-ish surface syntax, although this really doesn't give enough credit to all the choice one has writing Nim code.

dataangel

This is the comment that is going to get me to actually try Nim

hugs

one of us! one of us!

banashark

Regarding the js backend: how is the size of the produced artifacts?

I recall seeing a comparison of "transpile to JS" languages that noted Kotlin and Nim as the two outputting MBs of JS, compared to the tens or low hundreds of kB that other languages were outputting.

cb321

Like literally anything, it will depend upon how much your code does, what libraries it uses and so on, but here's a trivial little example to at least dispel a worry of multi-megabyte outputs for trivial things:

    echo 'echo 1' > j.nim
    nim js j.nim
    node j.js                  # prints: 1
    ls -l j.js                 # 36636 Sep  1 12:54 j.js
    nim js -d:release j.nim
    ls -l j.js                 # 11369 Sep  1 12:56 j.js
So, with -d:release stripping away a lot of debugging logic, it's not so bad. Even with -d:release, probably ~50% of the text of that j.js is just C-style comments which could be trivially stripped away. E.g., `cpp < j.js | wc -c` gives 6350 for that very same 11369-byte file. There are JS minification tools one could also run on the output. People do complain about this, but people complain a lot. It's probably not so uncompetitive for less trivial programs that do a bit more work, with both outputs minified, apples-to-apples care & all that.

banashark

Good to know. My reference points are TypeScript, Fable, and ClojureScript for "what does it generally look like bundle-size-wise, what can I expect as I add more libraries/functionality, etc."

summarity

Wrote a post about it here: https://summarity.com/nim-alpine

banashark

Interesting. So it was 11k for the dropdown component, but if you eschew the std lib inclusion (which sounds fairly impractical), it goes down to 3k.

When you have a page with many alpine/nim components like this, how does the size increase relative to the # of components added (roughly of course)?

ethin

My (personal) problem with Nim is that it assumes a Unix universe for everything. Which is good until (1) you want to do something on Windows and (2) you want to use other Nim libraries from Nimble. Nim will happily allow you to use the MSVC compiler, but a lot of good libraries don't and force GCC via pragmas which directly pass compiler options to "just make it work" or something. Last time I tried discussing this in the Nim Matrix chat, I got some... quite hostile responses. Not in the insulting-me way, but in the "bro, just use Linux" way. Maybe things have changed; when I did use Nim, I found it to be quite a pleasant language to work with, except for the Nimble library problem.

netbioserror

I'm the developer of an in-production sensor analysis backend program written in Nim. Our server scripts invoke it on individual records or batches of records, so it doesn't run continuously, and we get free parallelism via the shell. I make copious use of Datamancer dataframes. The program is entirely processing logic; I have maybe 3 lines of memory-semantics code in 40k lines. I rely on Nim's default behavior, wherein dynamic types such as collections are stack-managed hidden unique pointers treated as value types.

The performance is impressive. I've done some exercises on the side to compare Nim's performance to C++ building large collections along with sequential and random access, and -d:release from Nim puts out results that are neck-and-neck with -O3 for C++. No special memory tricks or anything, just writing very Pythonic, clear code.

Feel free to ask me anything.

thomasmg

Which compile options do you recommend for best performance while staying memory-safe? (I assume you keep memory safety on, right?) Currently I use "nim c --opt:speed". Compared to other languages (Go and Rust, mostly), the runtime performance for my use cases is a bit slower in some cases. Hm, it might be that you disable memory safety if you compare against C++...

netbioserror

I just use -d:release, along with a bunch of other options related to static compilation with musl-libc. I've tried -d:danger before and the reliability of my calculations went completely out the window. I think Datamancer and Arraymancer are dependent upon some of those checks and guarantees. The --opt flag didn't make enough of a difference in my case.
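
For context, the kind of invocation I mean is roughly this (a sketch: app.nim is a placeholder and musl-gcc must be installed):

    nim c -d:release --gcc.exe:musl-gcc --gcc.linkerexe:musl-gcc --passL:-static app.nim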

null

[deleted]

bckr

What’s the environment management story like?

netbioserror

I try to keep it simple, using Make to build, and have run into issues with Nimble. The upcoming Atlas is supposed to fix these issues, but I don't know enough about it yet. I remember running into conflicting name resolution with Nimble when multiple versions of a package were installed; I believe it was trying to choose between a Nim 1.x package version and a Nim 2.x one, which are kept in separate Nimble folders (pkgs and pkgs2).

I'm probably going to sit down and give Atlas a try soon, and migrate my dependencies.

elcritch

No need to migrate dependencies with Atlas. It should "Just Work". If not, file a GH issue.

Note: I fixed up Atlas a few months ago for Araq (Nim's BDFL). It uses a simpler design where packages are put in a local `deps` folder. It works fantastically and has replaced Nimble and its magic for me. Plus, the local deps folder is easy for LLM CLI tools to grep.

Just make sure to install the latest version!

P.S. @netbioserror I'm working on a sensors project. Shoot me an email if you want to talk sensors/IoT/Nim! My email is on my GH profile.

dundercoder

I’ve loved working in Nim. I’ve only written some toy projects so far, but it’s fast. Has anyone found a good IDE/language plugin for it?

banashark

https://nim-lang.org/docs/nimsuggest.html

I just set it up on neovim with Mason and it was pretty quick and easy.

That being said, my preferred environment is JetBrains stuff, and I’d very much enjoy an up-to-date plugin there.

bckr

Does it have an official language server?

jitl

This is mentioned in the article.

michaelsbradley

Because Nim's unusual case-sensitivity rules[1] often make for a heated point of discussion on HN, I thought this quote from the Nimony docs[2] might be of interest:

> Nimony is case sensitive like most other modern programming languages. The reason for this is implementation simplicity. This might also be changed in the future.

[1] https://nim-lang.org/docs/manual.html#lexical-analysis-ident...

[2] https://nim-lang.github.io/nimony-website/index.html#lexical...
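
For reference, the current Nim 2 rule in action: identifiers that differ only in non-leading case or underscores refer to the same symbol.

    let fooBar = 1
    echo foo_bar   # prints 1: foo_bar and fooBar are the same identifier
    # ...but FooBar would be a different identifier, since the first character is case-sensitive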

throwawaymaths

> The reason for this is implementation simplicity

The real, correct reason for this is to facilitate grep/global search-and-replace/LLMs.

null

[deleted]

benterix

> WASM is not supported in the standard library.

Would taking the C output and running emcc on it solve this problem?

cb321

Yeppers - https://github.com/treeform/nim_emscripten_tutorial

FWIW, people have been doing that for about as long as there has even been an emscripten, but the article is pointing out the lack of tighter integration with the stdlib/standard compilation toolchains. I would say evolving/growing the stdlib in general is a pain point. Both the language and the compiler are more flexible than most, though, so this matters less in Nim than it might otherwise.
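
The rough shape of the setup in that tutorial (a paraphrased sketch; check the repo for the exact, current flags) is a config.nims along these lines, then `nim c -d:emscripten yourapp.nim`:

    if defined(emscripten):
      switch("os", "linux")             # emscripten pretends to be linux
      switch("cpu", "wasm32")
      switch("cc", "clang")             # emcc is clang-based
      switch("clang.exe", "emcc")
      switch("clang.linkerexe", "emcc")
      switch("mm", "orc")
      switch("passL", "-o index.html")  # plus a shell file, exported symbols, etc.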

ledauphin

as an aside on Nimony aka Nim 3:

can somebody provide a reference explaining/demonstrating the ergonomics of ORC/ARC and in particular .cyclic? This is with a view toward imagining how developers who have never written anything in a non-garbage-collected language would adapt to Nimony.

alethic

ORC/ARC is a reference-counting garbage collector. There's a bit of a terminological clash out there as to whether "garbage collection" includes reference counting (it's common for it not to, despite reference counting... being a runtime system that collects garbage). Regardless: what makes ORC/ARC interesting is that it optimizes away some/most counts statically, by looking for linear usage and eliding counts accordingly. This is the same approach taken by the Perceus system used in some Microsoft languages like Koka and Lean, but it came a little earlier and doesn't do the whole "memory reuse" thing Perceus does.

So, for ergonomics: reference counting is not a complete system. It's memory safe, but it can't really handle reference cycles very well -- if two objects retain a reference to each other, there'll always be a reference to both of them and they'll never be freed, even if nothing else depends on them. The usual way to handle this is to ship a "cycle breaker" -- a mini tracing collector -- alongside your reference-counting system, which, while a little nondeterministic, works very reasonably well.

But it's a little nondeterministic. Garbage collectors that trace references, and especially tracing systems with the fast-heap ("nursery" or "minor heap") / slow-heap ("major heap") generational distinction, are really good. There's a reason tracing collectors are used by most languages -- ORC/ARC and similar systems have put reference counting back in close competition with tracing, but it's still somewhat slower. Reference counting offers something in return, though -- the performance is deterministic. You have particular points in the code where destructors are injected, sometimes without a reference check (if the ORC/ARC optimization is good) and sometimes with one, but you know your program will deallocate only at those points. This isn't the case for tracing GCs, where the garbage collector is more along the lines of a totally separate program that barges in and performs collections whenever it so desires. Reference counting offers an advantage here. (Also in interop.)

So, while you do need a cycle breaker to avoid potentially leaking memory, Nim tries to make it do as little as possible. One of the tools it provides to the user is the .acyclic pragma. If you have a data structure that looks like it could be cyclic but you know is not -- for example, a tree -- you can annotate it with the .acyclic pragma to tell the compiler not to worry about it. The compiler has its own (straightforward) heuristics, too, so if you don't have any cyclic data in your program and let the compiler know that... it just won't include the cycle collector at all, leaving you with a program with predictable memory patterns and behavior.
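
For example, the classic tree case from the manual (Node is just an illustrative name):

    type
      Node {.acyclic.} = ref object  # tree links never form cycles, so ORC need not trace them
        left, right: Node
        data: string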

What these .cyclic annotations will do in Nim 3.0, reading the design documentation, is replace the .acyclic annotations: the compiler will assume all data is acyclic and only include the cycle breaker if the user tells it to by annotating a cyclic data structure as such. This means that if the user messes up they'll get memory leaks, but in the usual case they'll get access to this predictable performance. Seems like a good tradeoff for the target audience of Nim and a reasonable worst case -- memory leaks sure aren't the same thing as memory unsafety -- and I'm interested to see design decisions that strike a balance between burden on the programmer vs. burden on performance, without being terribly unsafe in the C or C++ fashion.

alethic

The short answer is you'd write your code the same, then add .cyclic annotations on cyclic data structures.

("The same" being a bit relative, here. Nim's sum types are quite a bit worse than those of an ML. Better than Go's, at least.)