
Microsandbox: Virtual Machines that feel and perform like containers

zackmorris

This is great!

I'd like to see a formal container security grade that works like:

  1) Curate a list of all known (container) exploits
  2) Run each exploit in environments of increasing security like permissions-based, jail, Docker and emulator
  3) The percentage of prevented exploits would be the score from 0-100%
Under this scheme, I'd expect naive attempts at containerization with permissions and jails to score around 0%, while Docker might be above 50% and Microsandbox could potentially reach 100%.

This might satisfy some of our intuition around questions like "why not just use a jail?". Also the containers could run on a site on the open web as honeypots with cash or crypto prizes for pwning them to "prove" which containers achieve 100%.

We might also need to redefine what "secure" means, since exploits like Rowhammer and Spectre may make nearly all conventional and cloud computing insecure. Or maybe it's a moving target, like how 64 bit encryption might have once been considered secure but now we need 128 bit or higher.

Edit: the motivation behind this would be to find a container that's 100% secure without emulation, for performance and cost-savings benefits, as well as gaining insights into how to secure operating systems by containerizing their various services.

tptacek

The issue, at least with multitenant workloads, isn't "container vulnerabilities" as such; it's that standard containers are premised on sharing a kernel, which makes every kernel LPE a potential container escape --- there's a long history of those bugs, and they're only rarely flagged as "container escapes"; it's just sort of understood that a kernel LPE is going to break containers.

delusional

> it's just sort of understood that a kernel LPE is going to break containers.

I think it's generally understood that any sort of kernel LPE can potentially (and therefore is generally considered to) break all security boundaries on the local machine, since the kernel contains no internal security boundaries. That includes containers, but also everything else, such as user separation, hardware virtualization controlled by the local kernel, and kernel-private secrets.

zrm

A large proportion of LPE vulnerabilities are in the nature of "perform a syscall to pass specially crafted data to the kernel and trigger a kernel bug". For containers, the kernel is the host kernel and now the host is compromised. For VMs, the kernel is the guest kernel and now the guest is compromised, but not the host. That's a much narrower compromise and in security models where root on the guest is already expected to be attacker-controlled, isn't even a vulnerability.

transpute

> hardware virtualization controlled by the local kernel

In some architectures, kernel LPE does not break platform (L0/EL2) virtualization, https://news.ycombinator.com/item?id=44141164

  L0/EL2  L1/EL1                   

  pKVM    KVM                  
  AX      Hyper-V / Xen / ESX

bjackman

You cannot build a secure container runtime (against malicious containers) because underlying it is the Linux kernel.

The only way to make Linux containers a meaningful sandbox is to drastically restrict the syscall API surface available to the sandboxee, which quickly reduces its value. It's no longer a "generic platform that you can throw any workload onto" but instead a bespoke thing that needs to be tuned and reconfigured for every use case.

This is why you need virtualization. Until we have a properly hardened and memory safe OS, it's the only way. And if we do build such an OS it's unclear to me whether it will be faster than running MicroVMs on a Linux host.

akdev1l

One can definitely build a container runtime that uses virtualization to protect the host

For example there is Kata containers

https://katacontainers.io/

This can be used with regular `podman` by just changing the container runtime, so there's not even a need for any extra tooling

In theory you could shove the container runtime into something like k8s

bjackman

> container runtime that uses virtualization to protect the host

True, by "container" I really meant "shared-kernel container".

> In theory you could shove the container runtime into something like k8s

Yeah this is actually supported by k8s.

Whether that means it's actually reasonable to run completely untrusted workloads on your own cluster is another question. But it definitely seems like a really good defense-in-depth feature.

Veserv

You cannot build a secure virtualization runtime because underlying it is the VMM. Until you have a secure VMM you are subject to precisely the same class of problems plaguing container runtimes.

The only meaningful difference is that Linux containers target partitioning Linux kernel services, a shared-by-default/default-allow environment that was never designed for, and has never achieved, meaningful security. The number of vulnerabilities resulting from "whoopsie, we forgot to partition shared service 123" would be hilarious if it were not a complete lapse of security engineering in a product people are convinced is adequate for security-critical applications.

Present a vulnerability assessment demonstrating that a team of 10 with 3 years of time (~10-30 M$, comparable to many commercially motivated single-victim attacks these days) can find no vulnerabilities in your deployment, or a formal proof of security and correctness. Otherwise we should stick with the default assumption that software is easily hacked, rather than the extraordinary claim that demands extraordinary evidence.

transpute

> You cannot build a secure virtualization runtime because underlying it is the VMM

There are VMMs (e.g. pKVM in upstream Linux) with small SLoC that are isolated by silicon support for nested virtualization. This can be found on recent Google Pixel phones/tablets with strong isolation of untrusted Debian Arm Linux "Terminal" VM.

A similar architecture was shipped a decade ago by Bromium and is now on millions of HP business laptops, including hypervisor isolation of firmware: "Hypervisor Security : Lessons Learned — Ian Pratt, Bromium — Platform Security Summit 2018", https://www.youtube.com/watch?v=bNVe2y34dnM

Christian Slater, HP cybersecurity ("Wolf") edutainment on nested virt hypervisor in printers, https://www.youtube.com/watch?v=DjMSq3n3Gqs

nyrikki

While VMs do have an attack surface, it is vastly different from that of containers, which, as you pointed out, are not really a security system but simply namespaces.

Seccomp, capabilities, SELinux, AppArmor, etc. can help harden containers, but most of the popular containers don't even drop root for services, and I was one of the people who tried to get Docker/Moby etc. to let you disable the privileged flag... which they refused to do.

While some CRIs make this easier, any agent that can spin up a container should be considered a super user.

With the Docker --privileged flag I could read the host's root volume or even install EFI BIOS files just by using mknod etc., walking /sys to find the major/minor numbers.

Namespaces are useful in a comprehensive security plan, but as you mentioned, they are not jails.

It is true that both VMs and containers have attack surfaces, but the size of the attack surface on containers is much larger.
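
For concreteness, the kind of hardening mentioned above looks roughly like this (a sketch; the image name is a placeholder, and none of it changes the shared-kernel caveat):

    # run as non-root, drop all capabilities, forbid privilege escalation,
    # keep the root filesystem read-only, and cap the process count
    docker run --rm \
      --user 1000:1000 \
      --cap-drop=ALL \
      --security-opt no-new-privileges:true \
      --read-only \
      --pids-limit 100 \
      untrusted-image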

bjackman

I see your point but even if your VMM is a zillion lines of C++ with emulated devices there are opportunities to secure it that don't exist with a shared-monolithic-kernel container runtime.

You can create security boundaries around (and even within!) the VMM. You can make it so an escape into the VMM process has only minimal value, by sandboxing the VMM aggressively.

Plus you can absolutely escape the model of C++ emulating devices. Ideally I think VMMs should do almost nothing but manage VF passthroughs. Of course then we shift a lot of the problem onto the inevitably completely broken device firmware but again there are more ways to mitigate that than kernel bugs.

ignoramous

> ... drastically restrict the syscall API surface available to the sandboxee, which quickly reduces its value ...

Depends I guess as Android has had quite a bit of success with seccomp-bpf & Android-specific flavour of SELinux [0]

> Until we have a properly hardened and memory safe OS ... faster than running MicroVMs on a Linux host.

Andy Tanenbaum might say microkernels would do just as well.

[0] https://youtu.be/WxbOq8IGEiE

bjackman

> Android

Exactly. Android pulls this off by being extremely constrained. It's dramatically less flexible than an OCI runtime. If you wanna run a random unenlightened workload on it you're probably gonna have a hard time.

> Micro Kernels would do just as well.

Yea this goes in the right direction. In the end a lot of kernel work I look at is basically about trying to retrofit benefits of microkernels onto Linux.

Saying "we should just use an actual microkernel" is a bit like "Russia and Ukraine should just make peace" IMO though.

carlhjerpe

You also have gVisor, which runs all syscalls through a user-space kernel written in Go that's supposedly safe enough for Google.

godelski

Importantly I'd like to see the configurations of the machines. There's a lot you can do to docker or systemd spawns that greatly vary the security levels. This would really help show what needs to be done and what configurations lead to what risks.

Basically I'd love to see a giant ablation study.

Etheryte

In a way, containers already run as honeypots with cash or crypto prizes: it's called production code, and plenty of people are looking for holes day and night. While this setup sounds like a nice idea conceptually, the monetary incentives it could offer would surely be minuscule compared to real targets.

dataflow

Tangential question: why does it normally take so long to start traditional VMs in the first place? At least on Windows, if you start a traditional VM, it takes several seconds for it to start running anything.

Edit: when I say anything, I'm not talking user programs. I mean as in, before even the first instruction of the firmware -- before even the virtual disk file is zeroed out, in cases where it needs to be. You literally can't pause the VM during this interval because the window hasn't even popped up yet, and even when it has, you still can't for a while because it literally hasn't started running anything. So the kernel and even firmware initialization slowness are entirely irrelevant to my question.

Why is that?

jeroenhd

You can optimize a lot to start a Linux kernel in under a second, but if you're using a standard kernel, there are all manner of timeouts and poll attempts that make the kernel waste time booting. There's also a non-trivial amount of time the VM spends in the UEFI/CSM system preparing the virtual hardware and initializing the system environment for your bootloader. I'm pretty sure WSL2 uses a special kernel to avoid the unnecessary overhead.

You also need to start OS services, configure filesystems, prepare caches, configure networking, and so on. If you're not booting UKIs or similar tools, you'll also be loading a bootloader, then loading an initramfs into memory, then loading the main OS and starting the services you actually need, with each step requiring certain daemons and hardware probes to work correctly.

There are tools to fix this problem. Amazon's Firecracker can start a Linux VM in a time similar to that of a container (milliseconds) by basically storing the initialized state of the VM and loading that into memory instead of actually performing a real boot. https://firecracker-microvm.github.io/

On Windows, I think it depends on the hypervisor you use. Hyper-V has a pretty slow UEFI environment, its hard disk access always seems rather slow to me, and most Linux distros don't seem to package dedicated minimal kernels for it.

dataflow

That's not what I'm asking about.

I'm saying it takes a long time for it to even execute a single instruction, in the BIOS itself. Even for the window to pop up, before you can even pause the VM (because it hasn't even started yet). What you're describing comes after all that, which I already understand and am not asking about.

zbentley

Unsubstantiated hunch: the hypervisor is doing a shitload of probes against the host system before allocating/configuring virtual hardware devices/behaviors. Since the host's hardware/driver/kernel situation can change between hypervisor invocations, it might have to re-answer a ton of questions about the host environment in order to provide things like "the VM/host USB bridge uses so-and-so optimized host kernel/driver functionality to speed up accesses to a VM-attached USB device". Between running such checks for all behaviors the VM needs, and the possibility that wasteful checks (e.g. for rare VM behaviors or virtual hardware that's not in use) are also performed, that could take some time.

On the other hand, it could just as easily be something simple, like setting up hugepages or checksumming virtual hard disk image files.

Both are total guesses, though. Could be anything!

bonki

I have always wondered the same. I never tried looking into it, but I wouldn't be surprised if Defender at least played a part in it. Defender is a huge source of general slowness on Windows in my experience.

hnuser123456

probably the intel ME setting up for virtualization in a way that it can infiltrate

orev

I think you need to provide more details on what VM software you’re using. On VirtualBox what you describe is very noticeable, and it didn’t have that delay in older versions. So it could be just an issue with that VM software and not a general “traditional VMs” issue.

dataflow

Yup I'm asking about VirtualBox mainly, I just don't understand what the heck it's doing during that time that takes so long. Although I don't recall other VMs (like say, Hyper-V) being dramatically different either (ignoring WSL2 here).

icedchai

Linux KVM/qemu VMs start pretty fast.

_factor

Try disabling Windows Defender and trying again.

akdev1l

The answer is that it doesn’t have to be like that.

In practice virtual machines are trying to emulate a lot of stuff that isn’t really needed but they’re doing it for compatibility.

If one builds a hypervisor that is optimized for startup speed and doesn't need to support generalized legacy software, then you get results like:

> Unlike traditional VMs that might take several seconds to start, Firecracker VMs can boot up in as little as 125ms.

BobbyTables2

In Linux, VM memory allocations can be slow if it tries to allocate GBs of RAM using 4K pages. There are ways to help it allocate 1GB at a time which vastly speeds it up.

Windows probably has an equivalent.
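
As a rough Linux-side sketch (the hugepage boot parameters and QEMU memory-backend flags are standard, but treat the sizes as placeholders):

    # reserve 1 GiB hugepages at boot via the kernel command line:
    #   default_hugepagesz=1G hugepagesz=1G hugepages=8
    # then back the guest's RAM with them so it isn't faulted in 4K at a time
    qemu-system-x86_64 -m 8G \
      -object memory-backend-file,id=mem0,size=8G,mem-path=/dev/hugepages,share=on,prealloc=on \
      -numa node,memdev=mem0
      # ...plus the usual disk, network, and CPU options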

pdimitar

Is this specifically for during boot time? Also, any links?

speed_spread

Creating the VM itself is fast. It depends on what you run in it. Unikernel VMs can start in a few milliseconds. For example, check out OSv.

dataflow

You're saying this is true on a Windows host?

akdev1l

Yes. The delay you’re complaining about happens because you are looking at general hypervisors which also come with virtualized hardware and need to mimic a bunch of stuff so that most software will work as usual.

For example: your VM starts up with the CPU in 16 bit mode because that’s just how things work in x86 and then it waits for the guest OS to set the CPU into 64 bit mode.

This is completely unnecessary if you just want to run x86-64 code in a virtualized environment and you control the guest kernel and can just assume things are in 64bit mode because it’s not the 70s or whatever

The guest OS would also need to probe a few ports to get a bootable disk. If you control the kernel then you can just not do that and boot directly.

There’s a ton of stuff that isn’t needed

jiggawatts

Try Windows Server Core on an SSD. I've seen VMs launch in low single-digit seconds. You can strip it down even further by removing non-64-bit support, Defender, etc...

dist-epoch

Sounds like a VirtualBox problem.

I'm using Hyper-V and I can connect through XRDP to a GUI Ubuntu 22 in 10 seconds, and I can SSH into an Ubuntu 22 server 3 seconds after start.

diggan

I mean, it is basically booting a computer from scratch, so it kind of makes sense. You have to allocate memory, start virtual CPUs, initialize devices, run BIOS/UEFI checks, perform hardware enumeration, all that jazz, while emulating all of it, which tends to be slower than "real" implementations. I guess there are a bunch of processes for security as well, like zeroing pages and similar things that take additional time.

If I let a VM use most of my hardware, it takes a few seconds from start to login prompt, which is the same time it takes for my Arch desktop to boot from pressing the button to seeing the login prompt.

dataflow

> You have to allocate memory, start virtual CPUs, initialize devices, run BIOS/UEFI checks, perform hardware enumeration, all that jazz while emulating all of it, which tends to be slower than "real" implementations.

That's not what I'm asking.

I'm saying it takes a long time for it to even execute a single instruction, in the BIOS itself. Even for the window to pop up, before you can even pause the VM (because it hasn't even started yet). What you're describing comes after all that, which I already understand and am not asking about.

bityard

In defense of the replies, your initial question was very vague and left people to assume you meant the obvious thing.

drewg123

Without any context in terms of what the VM is doing or what VMM software you use, my best guess is that the OS/VMM are pre-allocating memory for the VM. This might involve paging out other processes' memory, which could take some time.

I think Task Manager would tell you if there is a blip of memory usage and paging activity at the time. And I'm sure Windows itself has profilers that can tell you what is happening when the VM is started.

appcypher

Thanks for sharing!

I'm the creator of microsandbox. If there is anything you need to know about the project, let me know.

This project is meant to make creating microVMs on your machine as easy as using Docker containers.

Ask me anything.

simonw

I'm trying this out now and it's very promising. One problem I'm running into with the Python library is that I'd like to keep that sandbox running for several minutes while I do things like set variables in one call and then use them for stuff several calls later. I keep seeing this error intermittently:

    Error: Sandbox is not started. Call start() first
Is there a suggested way of keeping a sandbox around for longer?

The documented code pattern is this:

    async def main():
        async with PythonSandbox.create(name="my-sandbox") as sb:
            exec = await sb.run("print('Hello, World!')")
            print(await exec.output())
Due to the way my code works I want to instantiate the sandbox once for a specific class and then have multiple calls to it by class methods, which isn't a clean fit for that "async with" pattern.

Any recommendations?

appcypher

Right. You can skip the `with` context manager and call start and stop yourself.

There is an example of that here:

https://github.com/microsandbox/microsandbox/blob/0c13fc27ab...
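
Roughly like this (a hedged sketch: start()/stop() come from the error message above, but check the linked example for how the sandbox object is actually constructed):

    from microsandbox import PythonSandbox

    class Runner:
        async def open(self):
            # construction is an assumption; the linked example shows the real call
            self.sb = PythonSandbox(name="my-sandbox")
            await self.sb.start()

        async def exec(self, code: str) -> str:
            execution = await self.sb.run(code)
            return await execution.output()

        async def close(self):
            await self.sb.stop()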

gcharbonnier

async with is just syntactic sugar. You could very well call __aenter__ and __aexit__ manually. You could also use an AsyncExitStack: enter it, pass the sandbox to enter_async_context, and call aclose when you're done. Since the aclose method exists, I guess this is not an anti-pattern.

https://docs.python.org/3/library/contextlib.html#contextlib...
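
A concrete version of that, using only the documented PythonSandbox.create(...) pattern quoted above (the surrounding class and method names are just for illustration):

    from contextlib import AsyncExitStack
    from microsandbox import PythonSandbox

    class SandboxSession:
        def __init__(self):
            self._stack = AsyncExitStack()
            self.sb = None

        async def open(self):
            # enter the sandbox's async context manager and keep it open
            self.sb = await self._stack.enter_async_context(
                PythonSandbox.create(name="my-sandbox"))

        async def run(self, code: str) -> str:
            execution = await self.sb.run(code)
            return await execution.output()

        async def close(self):
            # exits the sandbox's context manager (the stop/cleanup path)
            await self._stack.aclose()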

hugs

Looks great! This might be extremely useful for a distributed/decentralized software testing network I'm building (called Valet Network)...

Question: How does networking work? Can I restrict/limit microvms so that they can only access public IP addresses? (or in other words... making sure the microvms can't access any local network IP addresses)

appcypher

hugs

thanks! have an example on how to use that in a sandboxfile?

(also, this project is really cool. great work!)

nqzero

i'm on a mid-level laptop, at times with slow or expensive internet, running ubuntu. i want to be able to run nominally-isolated "copies" of my laptop at near-native speed

1. each one should have its own network config, eg so i can use wireguard or a vpn

2. gui pass-through to the host, eg wayland, for trusted tools, eg firefox, zoom or citrix

3. needs to be lightweight. eg gnome-boxes is dead simple to set up and run and it works, but the resource usage was noticeably higher than native

4. optional - more security is better (ie, i might run semi-untrusted software in one of them, eg from a github repo or npm), but i'm not expecting miracles and accept that escape is possible

5. optional - sharing disk with the host via COW would be nice, so i'd only need to install the env-specific packages, not the full OS

i'm currently working on a podman solution, and i believe that it will work (but rebuilding seems to hammer the network - i'm hoping i can tweak the layers to reduce this). does microsandbox offer any advantages for this use case ?

appcypher

> 1. each one should have its own network config, eg so i can use wireguard or a vpn

This is possible right now but the networking is not where I want it to be yet. It uses libkrun's default TSI implementation, which is performant and simplifies setup but can be inflexible. I plan to implement an alternative user-space networking stack soon.

> 2. gui pass-through to the host, eg wayland, for trusted tools, eg firefox, zoom or citrix

We don't have GUI passthrough. VNC?

> 3. needs to be lightweight. eg gnome-boxes is dead simple to setup and run and it works, but the resource usage was noticeably higher than native

It is lightweight in the sense that it is not a full vm

> 4. optional - more security is better (ie, i might run semi-untrusted software in one of them, eg from a github repo or npm), but i'm not expecting miracles and accept that escape is possible

The security guarantees are similar to what typical VMs support. It is hardware-virtualized so I would say you should be fine.

> 5. optional - sharing disk with the host via COW would be nice, so i'd only need to install the env-specific packages, not the full OS

Yeah. It uses virtio-fs and has overlayfs on top of that for COW.

int_19h

This is very neat tech, but I think you might want to wait until you actually have Windows covered before making claims like https://github.com/microsandbox/microsandbox/blob/main/MSB_V...

appcypher

What do you mean?

catlifeonmars

How does the microvm architecture compare with firecracker?

appcypher

They are similar. We use libkrun under the hood. The Firecracker team doesn't seem interested in a macOS implementation.

codethief

Hi appcypher, very cool project! Does the underlying MicroVM feature provide an OCI runtime interface, so that it could be used as a replacement for runc/crun in Docker/Podman?

Nypro

No. Not yet. Would be nice to have

codethief

Thanks for your response!

One more question: What syscalls do I need to have access to in order to run a MicroVM? I'm asking because ideally I'd like to run container workloads inside existing containers (self-hosted GitLab CI runners) whose configuration (including AppArmor) I don't control.

0cf8612b2e1e

Only did a quick skim of the readme, but a few questions which I would like some elaboration.

How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?

Can I run a GUI inside of it?

Do you think of this as a new Vagrant?

How do I get data in/out?

appcypher

> How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?

It is a lightweight VM and uses the same technology as Firecracker.

> Can I run a GUI inside of it?

It is planned but not yet implemented. But it is absolutely possible.

> Do you think of this as a new Vagrant?

I would consider it Docker for VMs instead. In a similar way, it focuses on DevOps-type use cases like deploying apps, etc.

> How do I get data in/out?

There is an SDK and server that help with that, and file streaming is planned. But right now, you can execute commands in the VM and get the results back via the server.

westurner

> I would consider Docker for VMs instead.

Native Containers would probably work here, too.

From https://news.ycombinator.com/item?id=43553198 :

>>> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/

And also from that thread:

> How should a microkernel run (WASI) WASM runtimes?

What is the most minimal microvm for WASM / WASI, and what are the advantages to running WASM workloads with firecracker or microsandbox?

esafak

Looks neat. If I understand correctly, I can use it to spin up backends on the fly? You have an ambitious list of languages to support: https://github.com/microsandbox/microsandbox/tree/main/sdk

edit: A fleshed out contributors guide to add support for a new language would help. https://github.com/microsandbox/microsandbox/blob/main/CONTR...

appcypher

Yes. Self-hosting and using it on your own backend infra is the main use-case. And JVM support should just work since it is a Linux machine.

eamann

> Ever needed to run code you don't fully trust?

Then the installation instructions include piping a remote script directly to Bash ... Oh irony ...

That said, the concept itself is intriguing.

appcypher

Your statement initially went over my head. Sorry lol. You can always download the installer script and audit it yourself. I will set up proper distribution later.

raphinou

In case you're interested when you set up proper distribution: I'm working on an open source solution aiming to improve the security of downloads from the internet. Our first step is maintaining a mirror of checksums published in GitHub releases at https://github.com/asfaload/checksums/. If you publish a checksums file in your releases it can automatically be mirrored. The checksums mirror is not our end game, but it already protects against changes to released files made after the mirror was taken. For anyone interested: https://asfaload.com/asfald/

hakcermani

.. did exactly that and also changed the BINDIR and LIBDIR to another location. BTW, amazing project from initial glance. Will give it a detailed look this weekend!

McAlpine5892

This looks awesome. The number of super lightweight and almost-disposable VM options that have appeared in recent years is crazy. I remember when VMs were slow, clunky, and generally painful.

I wonder how this compares to Orbstack's [0] tech stack on macOS, specifically the "Linux machines" [1] feature. Seems like Orb might reuse a single VM?

---

[0] https://orbstack.dev

[1] https://docs.orbstack.dev/machines/

ATechGuy

Congrats on launching! Booting VMs in milliseconds is certainly important, but it can also be achieved with Cloud Hypervisor/Firecracker. Where containers beat VMs is runtime perf. The overhead in the case of VMs stems from the emulation of I/O devices. I believe the overhead will become noticeable for AI agentic use cases. Any plans to address perf issues?

appcypher

You are right. We leverage libkrun. libkrun uses the virtio-mmio transport for block, vsock, and virtio-fs to keep overhead minimal, so we basically depend on any perf improvements made upstream.

Firecracker is no different btw and E2B uses that for agentic AI workloads. Anyway, I don't have any major plan except fix some issues with the filesystem rn.

amelius

For my taste, container technology is pushing the OS too far. By typing:

    mount
you immediately see what I mean. Stuff that should be hidden is now in plain sight, and destroys the usefulness of simple system commands. And worse, the user can fiddle with the data structures. It's like giving the user peek and poke commands.

The idea of containers is nice, but they are a hack until kernels are re-architected.

topspin

On recent Linux, try:

    findmnt --real
It's part of util-linux, so it is generally available wherever you have a shell. The legacy tools you have in mind aren't ever going to be changed as you would wish, for reasons.

throwaway314155

Sorry I am lacking the context to understand this post. What does running mount inside a container do that's so egregious? Are host mounts exposed to the container somehow? I thought everything needed to be explicitly passed through to the container (e.g. using a volume)?

remram

I think they mean that running `mount` on the host now lists hundreds of mountpoints from containers, snaps, packagekit etc.

rbitar

Looks great and excited to try this out. We've also had success using the CodeSandbox SDK and E2B; can you share some thoughts on how you compare, or on future direction? Do you also use Firecracker under the hood?

appcypher

> can you share some thoughts on how you compare or future direction?

Microsandbox does not offer a cloud solution. It is self-hosted, designed to do what E2B does: make it easier to work with microVM-based sandboxes on your local machine, whether that is Linux, macOS, or Windows (planned), and to seamlessly transition to prod.

> Do you also use Firecracker under the hood?

It uses libkrun.

rbitar

Self-hosting is definitely something we are keen to explore, as most of the cloud solutions have resource constraints (i.e., total active microVMs and/or specs per VM) and managing billing gets complicated even with hibernation features. Great project and we'll definitely take it for a spin.

pkkkzip

I can't tell if it uses Firecracker, but that's my main question too. I'm curious as to whether microsandbox will be maintained and whether proper auditing will be done.

I welcome alternatives. It's been tough wrestling with Firecracker and OCI images. Kata Containers is also tough.

appcypher

It will be maintained, as I will be using it for some other product. And it will be audited in the future, but it is still early days.

pdimitar

I wanted to try Kata containers soon. What difficulties do you have with them?

SwiftyBug

Kind of almost off-topic: I'm working on a project where I must run possibly untrusted JavaScript code. I want to run it in an isolated environment. This looks like a very nice solution, as I could spin up a microsandbox and securely run the code. I could even have a pool of live sandboxes so I wouldn't even experience the 200ms starts. Because this is OCI-compatible, I could even provide a whole sandboxed environment on which to run that code. Would that be a good use case for this? Are there better alternatives?

arjunbajaj

I recommend trying Javy[0]. Javy allows you to build a WASM file that includes Javy's JS interpreter along with your JS source code. Note that Javy is a heavily sandboxed environment, so it doesn't have access to the internet or npm modules, which is a desirable feature for running user code.

We're building an IoT Cloud Platform, Fostrom[1] where we're using Javy to power our Actions infrastructure. But instead of compiling each Action's JS code to a Javy WASM module, I figured out a simpler way by creating a single WASM module with our wrapper code (which contains some further isolation and helpful functions), and we provide the user code as an input while executing the single pre-compiled WASM module.

[0] https://github.com/bytecodealliance/javy

[1] https://fostrom.io

apitman

You might be able to get away with running QuickJS compiled to WebAssembly: https://til.simonwillison.net/npm/self-hosted-quickjs

appcypher

> Would that be a good use case for this?

That is an ideal use case

> Are there better alternatives?

Created microsandbox because I didn't find any

SwiftyBug

Awesome. This is really good timing. I'm going to give it a try.

ericb

runsc / gVisor is interesting also as the runsc engine can be run from within Docker/Docker Desktop.

gVisor has performance problems, though. Their data shows 1/3rd the throughput vs. docker runtime for concurrent network calls--if that's an issue for your use-case.

sureglymop

Always interested when things like this come up.

What I like about containers is how quickly I can run something, e.g. `docker run --rm ...`, without having to specify disk size, number of CPU cores, etc. I can then diff the state of the container with the image (and other things) to see what some program did while it ran.
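
(For reference, that workflow looks roughly like this; `docker diff` needs the container to still exist, so `--rm` is left off until after the diff, and the image/command names are placeholders.)

    docker run --name scratch some-image some-command
    docker diff scratch     # A/C/D lines: files added/changed/deleted vs. the image
    docker rm scratch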

So I basically want the same but instead with small vms to have better sandboxing. Sometimes I also use bwrap but it's not really intended to be used on the command line like that.

srmatto

It has a YAML config format to declare all of that, so you could just do it once, or template it, generate it on the fly, fetch it from a remote location, or use many other methods.

airocker

Would love to hear the Nix people's take on this.

mjrusso

As a Nix user, I'm actually really excited to try this out.

I want to run sandboxes based on Docker images that have Nix pre-installed. (Once the VM boots, apply the project-specific Flake, and then run Docker Compose for databases and other supporting services.) In theory, an easy-to-use, fully isolated dev environment that matches how I normally develop, except inside of a VM.

airocker

but don't they have overlapping requirements of solving "doesn't work on my machine"?

mjrusso

Microsandbox's primary goal is to make it easy to build environments for running untrusted code.

Nix, on the other hand, solves the problem of building reproducible environments... but making said environments safe for running untrusted code is left as an exercise for the reader.

Jayakumark

Windows support? And can we VNC into the sandbox and stream it?

appcypher

Windows support is a work in progress. I haven't tested using VNC yet but it should be possible.