Run Erlang/Elixir on Microcontrollers and Embedded Linux
41 comments
September 2, 2025 · rkangel
derefr
Espressif calls the ESP32 an MCU, and at least a third of ESP32 models have >1MiB of onboard PS ("pseudo-static") RAM (i.e. DRAM with its own refresh circuitry). At least 20 ESP32 models have 16MiB.
(And I would argue that the ESP32 is an MCU even in this configuration — mostly because it satisfies ultra-low-power-on-idle requirements that most people expect for "pick up, use, put down, holds a charge until you pick it up again" devices.)
So, sure, if you mean the kind of $0.07 MCU IC you'd stuff in a keyboard or mouse, that's definitely not going to be running Nerves (or any other kind of dynamic runtime; you need to go full bare-metal on those).
But if you mean the kind of $2–$8 MCU IC you'd stuff in a webcam, or a managed switch, or a battery-powered soldering iron, or a stick vacuum cleaner with auto suction-level detection, or a kitchen range/microwave/etc with soft-touch controls and an LCD — use-cases where there's more-than-enough profit margin to go around — then yeah, those'll run Nerves just fine.
ACCount37
Even ESP32, the quintessential "punches above its weight" MCU, only packs 520KB of RAM by default. At the time of its release, that was a shitton of RAM for an MCU to have!
If you ship MCUs with 16MB of RAM routinely, you're either working with graphics or are actually insane.
marci
For squeezing erlang in KiB sized RAM, the AtomVM project is probably a better fit.
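For a sense of what that looks like: AtomVM boots a compiled BEAM module and calls its exported `start/0`. A minimal sketch of the canonical LED-blink demo, assuming AtomVM's ESP32 `gpio` driver (the module name and pin number are made up here; check the exact API names against the AtomVM documentation):

```erlang
%% Minimal AtomVM-style program. AtomVM calls start/0 of the boot module.
%% Pin 2 is assumed (a common on-board LED pin on ESP32 dev boards).
-module(blink).
-export([start/0, toggle/1]).

start() ->
    gpio:set_pin_mode(2, output),
    loop(2, high).

loop(Pin, Level) ->
    gpio:digital_write(Pin, Level),   % drive the pin high or low
    timer:sleep(500),                 % 500 ms between toggles
    loop(Pin, toggle(Level)).

%% Pure helper: flip the logic level.
toggle(high) -> low;
toggle(low) -> high.
```

The whole runtime plus a program like this fits comfortably in the ESP32's on-chip RAM, which is the point marci is making.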
PinguTS
I see their board uses a daughterboard from Phytec, a German company too. It is based on a very high-performance NXP MCU, the i.MX 6UL, with additional external DDR RAM.
LeifCarrotson
It's a $212 SBC. They've got more L2 cache than most microcontrollers have Flash memory. The fact that it's got an L2 cache at all, much less external LPDDR3 DRAM, is a bit ridiculous. In most parameters - cost, RAM, frequency, storage, power consumption - it's approximately two orders of magnitude beyond the specifications of a normal microcontroller.
magicalhippo
NXP calls[1] it an application processor, and it is based on a Cortex-A7, not a Cortex-M series microcontroller core.
That said these nomenclatures are a bit fuzzy these days.
TrueDuality
You don't necessarily need on-package RAM for this. I'm not sure I'd build a project around this board, but 16MiB of RAM would hardly be a BOM killer.
PinguTS
Actually, it is. If you want to build a cheap sensor or actuator, then any additional component gets expensive. Remember, it is not only the external component; it is also the PCB space, the production, and the testing after production. It all adds up in the cost.
When you use a µC to make it cheap, then you don't want to use additional components.
jdndnc
RAM on MCUs is getting cheaper by the minute.
A couple of years ago it was measured in bytes. Before the RP2040 it was measured in dozens of KiB; now it's measured in MiB.
While I agree that 16 MiB is on the larger side for now, it will only be a couple of years before mainstream MCUs have that amount on board.
jbarberu
Also curious what MCUs you're working with to give you this impression?
RP2040 is 264k, RP2350 is 520k.
I use NXP's rt1060 and rt1170 for work, and they have 1M and 2M respectively, still quite far away from 16M and those are quite beefy running at 500MHz - 1GHz.
tonyarkles
While I generally agree with you, the RT106x line does support external SDRAM as well. I've got an MIMXRT1060-EVKB sitting here on my desk that has 32MB of SDRAM alongside the on-die 1MB of SRAM.
FirmwareBurner
>RAM on MCUs is getting cheaper by the minute.
It really isn't. The RP2040 has 264KB of RAM. Far away from 16MB.
>now it's measured in MiB
Where? Very few so far, mostly for image-processing applications, and they cap out at less than 8MB. And those are already bordering on SoCs instead of MCUs.
For applications where 8MB or more is needed, designers already use SoCs with external RAM chips.
>it will only be a couple of years for mainstream MCUs having that amount on board
I doubt that very much. Clip it and let's see in two years who's right.
pessimizer
Bigger processors with more RAM have always been available. The question has always been whether you're going to use a $20 processor when you could do the job with a 50¢ one. It's the difference between your product being cheap and disposable, and you getting to choose your margin based on your strategy; and not being able to move a unit without losing money, hoping to sell yourself to someone who knows how to do more with less.
I'm an Erlang fanatic, and have been since forever, paid for classes when it was Erlang Training & Consulting at the center of things, flew cross-country to take them, have the t-shirt, hosted Erlang meetups myself in downtown Chicago. I'm not prototyping a microcontroller application in Erlang if I can get it done any other way. It's committing to losing from the outset.
edit: I've always been hopeful for some bare-metal implementation that would at least almost work for cheap µcs, and there have been promising attempts in the past, but they seem to have gone nowhere.
toast0
AtomVM runs on ESP32, right? It's not an ultra-cheap microcontroller, but it's pretty cheap. AtomVM isn't BEAM either, though. I have no experience with AtomVM, though... it didn't seem like a good fit when I was building something with an ESP32 (I didn't see anything about outputting to LCDs, and that was reasonable with Arduino libraries... I also saw a library for calendars, thought that would work for my needs, and then got dragged into making it work better). It would have worked for the stuff I was doing with the ESP8266, but I didn't know about it when I was shopping for boards, so I didn't want to pay extra.
cmrdporcupine
Eh. It's getting blurry and has been for some time. To me these days the differentiators are: does it have an MMU? Address lines for external memory? Do you write for an OS or for "bare metal" / RTOS kit? Are there dedicated pins for GPIO?
If you choose some arbitrary memory amount as the criterion it will be out of date by next year.
hoppp
Pretty cool. I am a fan of everything Erlang. Managing large clusters of IoT devices running BEAM sounds like a good idea, not just because of fault tolerance but for hot-swapping code.
garbthetill
I am the same but for Elixir; the BEAM is awesome & I always wonder why it still hasn't caught on given all the success stories. The actor model just makes programming feel so simple.
zwnow
For me it's the complete opposite of simple. I am a fan of BEAM and OTP, but I'm a horrible programmer. I have constant fear of having picked the wrong restart strategy in a supervisor, or about ghost processes, or whatever. I have no mentors and learn everything myself. I have no way of actually checking whether my implementations are good. With my skills I'd manage to make an Elixir system brittle, because it's not clear to me what happens at all times.
toast0
WhatsApp did what it did and we didn't hire anyone who had experience with OTP until 2013 I think. One person who was very experienced in Erlang showed up for a week and bounced.
We were doing all sorts of things wrong and not idiomatically, but things turned out ok for the most part.
The fun thing with restart strategies is that if your process fails quickly, you get into restart escalation, where your supervisor restarts because you restarted too many times, and so on, until BEAM shuts down. But that happens once or twice and you figure out how to avoid it (I usually put a 1-second sleep at startup in my crashy processes, lol).
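For reference, the restart budget that drives this escalation lives in the supervisor flags: more than `intensity` restarts within `period` seconds and the supervisor itself exits, propagating the failure upward. A minimal sketch (module and child names are made up for illustration), including the sleep-at-startup trick:

```erlang
-module(demo_sup).
-behaviour(supervisor).
-export([start_link/0, init/1, start_worker/0]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% If the child crashes more than 3 times within 5 seconds, the
    %% supervisor gives up and exits too -- restart escalation.
    SupFlags = #{strategy => one_for_one,
                 intensity => 3,
                 period => 5},
    Child = #{id => worker,
              start => {?MODULE, start_worker, []},
              restart => permanent},
    {ok, {SupFlags, [Child]}}.

start_worker() ->
    %% The sleep happens inside the spawned process, so the supervisor
    %% isn't blocked; it just slows down a crash loop enough to stay
    %% under the intensity/period budget.
    {ok, spawn_link(fun() ->
                            timer:sleep(1000),
                            idle()
                    end)}.

idle() ->
    receive stop -> ok end.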
Ghost processes are easy-ish to find. erlang:processes() lists all the pids, and then you can use erlang:process_info() to get information about them... We would dump stats on processes to a log once a minute or so, with some filtering to avoid massive log spew. Those kinds of things can be built up over time... The nice thing is that the debug shell can see everything, but you do need to learn what to look for.
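A minimal version of that kind of dump can be a few lines; a sketch (the particular stats collected and the sort-by-memory filter are just one reasonable choice, not what WhatsApp actually logged):

```erlang
-module(proc_dump).
-export([top_by_memory/1]).

%% Return the N processes currently using the most memory, each with a
%% small bundle of stats from erlang:process_info/2.
top_by_memory(N) ->
    Stats = lists:filtermap(
              fun(Pid) ->
                      case erlang:process_info(
                             Pid, [memory, message_queue_len, registered_name]) of
                          undefined -> false;   % process exited in the meantime
                          Info -> {true, {Pid, Info}}
                      end
              end,
              erlang:processes()),
    Sorted = lists:sort(
               fun({_, A}, {_, B}) ->
                       proplists:get_value(memory, A) >=
                           proplists:get_value(memory, B)
               end,
               Stats),
    lists:sublist(Sorted, N).
```

Run periodically and logged, this is enough to spot a ghost process whose mailbox or heap keeps growing.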
AnEro
Same. My personal theory is that it's hard to displace ecosystems that are already really fleshed out and oversaturated (with an experienced developer pool) and where organizations have a lot of legacy software built on them. I think it will gain momentum as we see more need for distributed LLM agents and tooling picks up. (Or when people need extreme cost savings on front-facing APIs/endpoints that run simple operations.)
worthless-trash
Is this something you do regularly?
thelastinuit
Would it be possible to take my '90s computers and run Erlang/Elixir for a crypto node... or some version of it?
asa400
Yes - Erlang/Elixir wouldn't be the bottleneck here. '90s hardware is plenty for them; they were designed for far less.
barbinbrad
Huge fan of Elixir, and I definitely have some dumb questions.
In some of the realtime architectures I've seen, certain processes get priority, or run at a certain Hz, but I've never seen this with the BEAM. AFAIK it "just works", which is great most of the time. I guess you can do Process.flag(:priority, :high), but I'm not sure if that's good enough?
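For reference, the same knobs exist on the Erlang side via `spawn_opt/2` and `process_flag/2`. A sketch (module name is made up), with the key caveat in the comments:

```erlang
-module(prio_demo).
-export([start/0]).

%% Raising BEAM scheduling priority: this changes which *runnable*
%% process is picked next by the scheduler. It is not preemption --
%% a lower-priority process that is already running keeps running
%% until it blocks or exhausts its reduction budget.
start() ->
    %% 1) Set priority at spawn time:
    Pid = erlang:spawn_opt(fun idle/0, [{priority, high}]),
    %% 2) Or from inside the current process (returns the old priority):
    Old = erlang:process_flag(priority, high),
    {Pid, Old}.

idle() ->
    receive stop -> ok end.
```

So `Process.flag(:priority, :high)` is real and does something, but as the reply below explains, it only reorders the run queues.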
toast0
BEAM only promises soft realtime. When switching processes, runnable high-priority tasks are chosen before runnable normal- or low-priority tasks, and within each queue every runnable task runs before any task runs again. But BEAM isn't really preemptive: a normal- or low-priority task that is running when a high-priority task becomes runnable won't be paused; it continues until it hits its reduction cap or blocks. There's also a chance you hit some time-consuming operation without yield points; most of ERTS has yield points in time-consuming operations, but maybe you find one that doesn't, or maybe you have a misbehaving NIF.
Without real preemption, consistently meeting strict timing requirements probably isn't going to happen. You might possibly run multiple beams and use OS preemption?
heeton
I spoke with Peer (the creator of Grisp) about this at Elixirconf earlier in the year, and I'm not an expert here so I hope I don't misrepresent his comments:
Grisp puts enough controls on the runtime that soft-realtime becomes hard-realtime for all intents and purposes, outside of problems that also cause errors in hard-realtime systems.
(Also, thanks Peer for being tremendously patient with a new embedded developer! That kind of friendly open chat is a huge draw to the Elixir community)
cyberpunk
I did the same workshop with him some years ago; very nice and patient guy. I can recommend attending if anyone is curious how microelectronics actually work :}
whalesalad
Sounds like Nerves to me? But with soft realtime added in?
thenewwazoo
Nerves is Erlang-as-init on Linux. GRISP is Erlang with RTEMS on metal.
toast0
My tl;dr: GRiSP is BEAM on an RTOS; Nerves is BEAM on a minimal Linux; but they also have grisp allow and grisp forge, which are BEAM on Linux. Any of these gives you soft realtime.
Zaphoos
What about Gleam?
nesarkvechnep
Call us when they implement OTP compatibility.
trescenzi
What part is missing? I’ve built a little distributed app that has a cluster registry and dns. There’s a tiny bit of Erlang involved but the majority of it is gleam.
jen20
Several pieces: https://gleam.run/roadmap/
Much of it can be worked around as you suggest.
worthless-trash
What part of OTP do you need? I have supervision working.
I have typed message passing... I write Erlang wrapping Gleam modules... it's pretty easy.
juped
I'm interested in the claimed real-time capabilities, but it's hard to find anything about them written there. Still, I like the hardware integration.
garbthetill
Yeah, the claim is ambiguous, because the BEAM itself only guarantees soft real time. Leaving it open-ended might make people think hard real-time, especially since it's hardware.
elcritch
They support writing RTOS tasks in C as I understand it.
unit149
> MCU-class footprint (fits in 16 MB RAM)
That is absolutely not an MCU class footprint. Anything with an "M" when talking about memory isn't really an MCU. For evidence I cite the ST page on all their micros: https://www.st.com/en/microcontrollers-microprocessors/stm32...
Only the very, very high-performance ones have >1MB of RAM.