
Using Radicle CI


39 comments · July 23, 2025

dan_manges

There's a difference between small scale CI and large scale CI.

Small scale: a project is small enough to run the build and tests locally, but you still want a consistent environment and to avoid "works on my machine" problems.

Large scale: a project is so large that you need to leverage remote, distributed computing to run everything with a reasonable feedback loop, ideally under 10 minutes.

The opposite ends of the spectrum warrant different solutions. For small scale, actually being able to run the whole CI stack locally is ideal. For large scale, it's not feasible.

> A CI system that’s a joy to use, that sounds like a fantasy. What would it even be like? What would make using a CI system joyful to you?

I spent the past few years building RWX[1] to make a CI system joyful to use for large scale projects.

- A local CLI that reads the workflow definitions locally and then runs them remotely. That way you can test changes to workflow definitions without having to commit and push.

- Remote breakpoints to pause execution at any point and connect via SSH, which is necessary when running on remote infrastructure.

- Automatic content-based caching with sandboxed execution, so you can skip the duplicative steps that large-scale CI would otherwise repeat. Sandboxing ensures that the cache never produces false positives.

- Graph-based task definitions, rather than the 1 job : 1 VM model. This results in automatic, maximal parallelization, with no setup redundantly repeated for each job.

- The graph-based model also provides an improved retry experience and more flexibility in resource allocation. For example, one task in the DAG can crank up CPU and memory without having to allocate more resources to downstream tasks (steps, in other platforms' terms). A rough sketch of the model follows this list.
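Here is that sketch: illustrative Python only, not RWX's actual API or configuration format, with hypothetical names throughout. It shows the two ideas above in combination: tasks form a DAG instead of one job per VM, and each task gets a content-based cache key derived from everything it is allowed to read.

```python
# Illustrative sketch only: NOT RWX's real API or config format.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    command: str
    inputs: tuple[str, ...] = ()    # files the sandbox exposes to the task
    deps: tuple["Task", ...] = ()   # upstream tasks in the graph

    def cache_key(self, file_hash) -> str:
        # The key covers the command, the declared inputs, and the keys of
        # all upstream tasks. Because the sandbox prevents reads outside
        # `inputs`, a cache hit can never be a false positive.
        h = hashlib.sha256(self.command.encode())
        for path in self.inputs:
            h.update(file_hash(path))
        for dep in self.deps:
            h.update(dep.cache_key(file_hash).encode())
        return h.hexdigest()

# `setup` runs once and is shared; `lint` and `test` are independent leaves
# of the graph, so a scheduler can run them in parallel without repeating
# the setup work per job.
setup = Task("setup", "npm ci", inputs=("package-lock.json",))
lint = Task("lint", "npm run lint", inputs=("src/",), deps=(setup,))
test = Task("test", "npm test", inputs=("src/", "tests/"), deps=(setup,))
```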

We've made dozens of other improvements to the UX for projects with large build and test workflows. Big engineering teams love the experience.

[1] https://rwx.com

RGBCube

Sounds good, but it's still YAML and shell scripts. It's not even close to ideal.

A custom lazy, typed, functional language that doesn't differentiate between expressions and "builds" would be much better. Even better if you add "contexts", i.e. implicit tags on values for automatic dependency inference. Add serializable bytecode and efficient closing over the dependencies of thunks, like Unison does, and you get great distributed builds.

And it would be pretty easy to add a debugger to such a system, using the same logic as an "import".

Nix gets somewhat close, but it misses the mark by separating the eval and build phases. Its terrible documentation, the 1332432 ways to do the same thing, the badly drawn nix/nixpkgs divide, and nixpkgs being heavily yet still insufficiently abstracted don't help either.

Also, I'm not sure why you posted this comment here, since nothing prevents you from writing a Radicle CI adapter that can handle huge repositories. The adapter can reference the bare git repo stored in the Radicle home, so you only need to be able to store the repo itself.

viraptor

Every time there's YAML, you can write Dhall and compile it to JSON instead (e.g. with dhall-to-json). You get typing and strictness that way, regardless of whether the service supports it internally.
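For illustration, here is the same "typed definitions in, JSON out" idea sketched in Python; real Dhall looks different, and this Job/Step schema is hypothetical, not any CI vendor's:

```python
# Typed config in, JSON out. The schema here is made up for illustration.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class Step:
    name: str
    run: str

@dataclass
class Job:
    runs_on: str
    steps: list[Step] = field(default_factory=list)

ci = {
    "test": Job(
        runs_on="ubuntu-latest",
        steps=[Step("checkout", "git fetch --all"),
               Step("tests", "./build.sh")],
    )
}

# A typo in a field name fails here, at generation time, instead of on the
# CI server. JSON is valid YAML, so most services accept the output as-is.
print(json.dumps({name: asdict(job) for name, job in ci.items()}, indent=2))
```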

kuehle

> I find the most frustrating part of using CI to be to wait for a CI run to finish on a server and then try to deduce from the run log what went wrong. I’ve alleviated this by writing an extension to rad to run CI locally: rad-ci.

Running CI locally should be more common.

__MatrixMan__

Agreed. I've got a crazy idea that I think might help...

Most tests have a step where you collect some data and another step where you make assertions about that data. Normally that data only ever lives in a variable, so it isn't kept around for later analysis. All you get when viewing a failed test is a log with either an exception or a failed assertion. That's not enough to tell the full story, and I think it contributes to the frustration you're talking about.

I've been playing with the idea that all of the data generation should happen first (since it's the slow part) and get added to a commit (overwriting data from the previous CI run), and that all of the assertions should run afterwards (this part is typically very fast).

So when CI fails, you can pull the updated branch and either:

- rerun the assertions without bothering to regenerate the data (faster, and useful if the fix is changing an assertion)

- diff the new data against data from the previous run (often instructive about the nature of the breakage)

- regenerate the data and diff it against whatever caused CI to fail (useful for knowing that your change will indeed make CI happy once more)

Most tools are uncomfortable with using git to transfer state from a failed CI run to your local machine so that you can rerun just the relevant parts locally, so there's some hackery involved; but when it works out, it feels like a superpower. A sketch of the two-phase layout is below.
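A minimal pytest-flavored sketch of that split, assuming a hypothetical artifacts/ directory that CI commits back to the branch and a made-up generate_report() standing in for the slow data-gathering step:

```python
import json
import pathlib

ARTIFACTS = pathlib.Path("artifacts")

def generate_report() -> dict:
    # Stand-in for the slow part: hitting services, crunching data, etc.
    return {"total": 42, "errors": []}

def test_phase1_generate():
    # Slow phase: write the collected data into the repo so CI can commit
    # it, overwriting the previous run's data.
    ARTIFACTS.mkdir(exist_ok=True)
    path = ARTIFACTS / "report.json"
    path.write_text(json.dumps(generate_report(), indent=2))

def test_phase2_assertions():
    # Fast phase: assert only against the committed data, never against
    # live state, so it can be rerun in isolation.
    report = json.loads((ARTIFACTS / "report.json").read_text())
    assert report["errors"] == []
    assert report["total"] > 0
```

After a failed run you can pull the branch and rerun only the fast phase with `pytest -k phase2`, or `git diff artifacts/` against the last green commit to see how the data changed.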

andrewaylett

Hear, hear.

Although I'd slightly rephrase that to "if you don't change anything, you should end up running pretty much the same code locally as in CI".

GitHub Actions is really annoying for this, as it has no supported local mode. Act is amazing but insufficient: the default runner images are huge, so you can't use the same environment, and it isn't officially supported.

Pre-commit, on the other hand, is fantastic for this kind of issue: you run it locally, and it will fairly trivially run the same checks in CI as it does locally. You want it to be fast, though, so in practice I normally wind up having pre-commit run only cacheable checks locally, and I exclude the build and test hooks from the pre-commit CI job because I run those as separate CI jobs.

I did release my own GHA action for pre-commit (https://github.com/marketplace/actions/cached-pre-commit), because the official one doesn't cache very heavily and its author prefers folks to use his competing service.

cedws

I have to disagree about Act: my experience is that it only works for extremely simple workflows, and even then it's easy to run into differences between Act and GitHub Actions. I've raised many bugs, but AFAIK there's basically one guy working on it in his own time.

It’s terrible that the community has had to invent something like this when it should be provided by GitHub. I suspect the GitHub Actions team is a skeleton crew because nothing ever seems to get done over there.

popsticl3

I've been using brisk to run my CI from my local machine (it runs in the cloud, driven from my local terminal). The workflow is a drop-in replacement for running locally. They've recently changed their backend, and it seems to be working pretty smoothly. It also works very well with AI agents running in the terminal: they can run my tests for me when they make a change, and it doesn't kill my machine.

everforward

I've poked at this a few times, and I think it breaks down for CI that needs or wants to run integration tests against other services, e.g. spinning up a Postgres server to actually execute queries against.

Managing those lifetimes is annoying, especially when it also needs to work on desktops. On the server side, you can spin up a VM for CI to run in, use Docker inside the VM to run dependencies in containers, and then delete the whole VM.

That's a lot of tooling to reproduce locally, though, and even then it's "local" with so many abstractions that it might as well be running in the cloud.
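For the Postgres example specifically, one way to manage those lifetimes identically on a desktop and in CI is a library like testcontainers, which ties a throwaway container to the scope of the test; a minimal sketch, assuming Docker is available wherever the tests run:

```python
# The Postgres container starts when the `with` block is entered and is
# removed when it exits, on a laptop or a CI runner alike.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_query_roundtrip():
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.connect() as conn:
            assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```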

maratc

This can be achieved by running in CI whatever commonly runs locally.

E.g. if your build process is simply invoking `build.sh`, it should be trivial to run exactly that in any CI.

ambicapter

This is fine until you run into differences between your machine and the CI machine (or you're writing code for a different architecture than the one you're using), but I agree, this is definitely the first step.

0x457

Plot twist: my build.sh invokes `nix build`, and all I have to do in CI is install Nix and set up caching.

maratc

I agree, but if there's an architecture gap, then running CI locally isn't going to help you bridge it either.

esafak

Be sure to run it in a container, so you have a semblance of parity.

maratc

Where possible. (If your build process builds containers and your tests bring them up and make them talk to each other, doing that inside a container is a challenge.)

However, there are stateless VMs and stateless bare-metal machines too.

Norfair

nix-ci.com is built with this as one of its two central features. The other is that it figures out what to do by itself; you don't have to write any YAML.

globular-toast

Yeah, and I will never understand why developers accept anything less. GitHub CI is really bad at this. GitLab is a lot better, since you can run the exact same thing locally through Docker. I like tools like tox, too, which automate your whole test matrix and run it locally just as well.

gsaslis

It should!

And yet, that's technically not CI.

The whole point of adopting automation servers as an integration point was to avoid the "it works on my machine" drama. (I've watched at least 5 seasons of it; they were all painful!)

+1 on running the test harness locally though (where feasible) before triggering the CI server.

tough

I use act to run GitHub CI locally, FWIW: https://github.com/nektos/act

lbotos

OP, Radicle had a very glitchy-style home page before it went more 8-bit. Do you have an archive of that anywhere? I'd like to use it as a reference for a stylistic period in design!
