Show HN: Orbit, a systems-level programming language that compiles .sh to LLVM
9 comments
December 19, 2025
Ciantic
> designed to replace legacy shell scripting ... as arguments are passed as a structured array, not a raw string to be parsed by a shell
I find shell scripters prefer ubiquity and readability over raw performance, and making it mandatory to pass arguments as arrays worsens the readability. Having both options would be good, though; your example doesn't actually require shell escaping, so it could have a simpler form.
Here is the equivalent in Deno, for instance:
#!/usr/bin/env -S deno run --allow-all
import $ from "jsr:@david/dax";
const command = $`grep -r keyword .`.pipe($`wc -l`);
const result = await command;
Deno (via a library) and Bun both have a $ that also handles escaping, e.g.
const dirName = "Dir with spaces";
await $`mkdir ${dirName}`; // executes as: mkdir 'Dir with spaces'
I don't think syntax is your biggest hurdle, though; the biggest hurdle is that Bash is so common. PowerShell was supposed to be better shell scripting, yet it has gone nowhere outside the Windows space.
pastage
> X was supposed to be better shell scripting
These often become "all or nothing" ecosystems; you see this in all the big languages: JavaScript, Java, and even fish. All of them handle integration in their own way. Shell scripting is the only thing that recognises that reality is ugly.
forgotpwd16
The thing with LLMs is that they'll tell you what a great idea you have and then output a design and tons of code which, if you lack the necessary knowledge, will look coherent and correct. It's good to throw the design/code back in, tell them to review it, and explicitly prompt them to point out what is wrong.
So here it says your error handling maps directly to POSIX exit codes, but then: "On success, the function returns a non-zero value."
For the sh JIT: the slowness isn't due to the language per se but due to launching multiple processes. If performance is really the goal, then you essentially need to replace every process launch with a built-in command. The benchmark is a hallucination unless it can actually be run; hypothetical benchmarks with hypothetical results are nonsense (unless you have a mathematical model backing them up).
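To make the process-launch point concrete, here is a rough Deno micro-benchmark sketch; the iteration count and the use of the `true` binary are arbitrary choices, and the numbers will vary by machine, but spawn overhead should dominate by orders of magnitude:

#!/usr/bin/env -S deno run --allow-run
// Rough sketch: compare spawning no-op processes against doing nothing
// in-process, to show that process launches (not the shell language)
// are where script time goes.
const N = 200;

let t0 = performance.now();
for (let i = 0; i < N; i++) {
  // Spawn a process that does essentially nothing.
  const out = await new Deno.Command("true").output();
  if (!out.success) throw new Error("spawn failed");
}
const spawnMs = performance.now() - t0;

t0 = performance.now();
let acc = 0;
for (let i = 0; i < N; i++) {
  // Equivalent "work" done in-process (a no-op loop).
  acc += i;
}
const inProcMs = performance.now() - t0;

console.log(`spawning ${N} processes: ${spawnMs.toFixed(1)} ms`);
console.log(`in-process no-op loop:  ${inProcMs.toFixed(3)} ms (acc=${acc})`);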
bayesnet
What on earth is the value of a “hypothetical benchmark” as shown in the readme?
aeve890
After the table it does say it's a theoretical benchmark, though.
Marking this as AI slop.
TheCodingDecode
Spaceship: A JIT-compiled systems language that compiles .sh to LLVM
I’ve always felt that the gap between "one-off shell scripts" and "robust systems code" is too wide. Bash is ubiquitous but dangerous; Go is safe but can feel heavy for quick automation.
I’m building Spaceship to bridge that gap. It’s a Go-inspired systems language with a C++/Boost-based compiler that JIT-compiles everything—including legacy shell scripts—directly into native machine code via LLVM.
The highlights:
* @jit Directive: You can take an existing .sh file and run @jit("script.sh"). Instead of spawning a subshell, Spaceship parses the shell logic, translates it to POSIX-compliant AST nodes, and JIT-compiles it into the current execution path.
* Zero-Trust JIT Sandbox: Security is enforced at the LLVM IR lowering phase. If your script doesn't explicitly allow a capability (like network.tcp or process.fork) in the security manifest, the JIT simply refuses to generate the machine code for those instructions. No runtime interceptor overhead.
* Arbitrary Bit-Widths: Since it's LLVM-native, you aren't stuck with i32 or i64. If you're interfacing with specific hardware or protocols, you can use i1, i23, i25, etc.
* The !i32 Contract: All system calls return a success value or an i32 POSIX error code, handled via a check/except flow that mirrors C++ exception speed but keeps the simplicity of Go's error handling (see the sketch after this list).
* Unified Backend: We use Boost (Asio, Process, Filesystem) as the high-performance standard library that the JIT links against, ensuring POSIX compatibility across Linux and macOS.
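Since Spaceship's own syntax isn't shown here, a minimal TypeScript sketch of the general pattern the !i32 contract describes: a call never throws, it returns either a value or a POSIX-style error code, and the caller must branch before using the result. The names (SysResult, parsePort) and the specific errno values chosen are purely illustrative, not Spaceship APIs.

// Hypothetical "errno-or-value" contract sketched in TypeScript/Deno.
type SysResult<T> =
  | { ok: true; value: T }
  | { ok: false; errno: number }; // e.g. 2 = ENOENT, 13 = EACCES, 22 = EINVAL

function parsePort(raw: string): SysResult<number> {
  const port = Number(raw);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    return { ok: false, errno: 22 }; // EINVAL
  }
  return { ok: true, value: port };
}

const res = parsePort(Deno.args[0] ?? "8080");
if (!res.ok) {
  // "except" branch: surface the POSIX-style code and bail out.
  console.error(`invalid port (errno ${res.errno})`);
  Deno.exit(1);
}
console.log(`listening on ${res.value}`); // "check" passed; value is typed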
The parser is implemented in C++ and handles deferred execution pipelines—nothing runs until you call .run(), which allows the JIT to optimize the entire chain of operations.
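The Spaceship API isn't shown, so here is a rough TypeScript sketch of the deferred-execution idea this paragraph describes: stages are only recorded until .run() is called, which is what lets a compiler or JIT see the whole chain before anything executes. The Pipeline class and its methods are made up for illustration.

// Hypothetical deferred pipeline: building the chain does no work.
class Pipeline<T> {
  private constructor(private readonly stages: Array<(x: unknown) => unknown>) {}

  static from<T>(producer: () => T): Pipeline<T> {
    return new Pipeline<T>([() => producer()]);
  }

  map<U>(f: (x: T) => U): Pipeline<U> {
    // Record the stage; do not execute it yet.
    return new Pipeline<U>([...this.stages, (x) => f(x as T)]);
  }

  run(): T {
    // Only here does anything actually execute, end to end.
    return this.stages.reduce<unknown>((acc, stage) => stage(acc), undefined) as T;
  }
}

// Usage: nothing runs until .run().
const count = Pipeline.from(() => ["a.txt", "b.log", "c.txt"])
  .map((files) => files.filter((f) => f.endsWith(".txt")))
  .map((txt) => txt.length)
  .run();
console.log(count); // 2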
I'd love to hear your thoughts on the "Security through Omission" model and the feasibility of replacing dash/bash with a JIT-ted environment for high-performance automation.
keepamovin
Cool, I am also working on a systems language targeting binaries. FreedomLang (freelang.dev) takes a radically different approach by using direct PE/Mach-O emission with zero runtime dependencies, built specifically for security agents and DevSecOps automation.
The key philosophical differences:
FSABI (Filesystem ABI) Concurrency: Instead of JIT-compiling shell pipelines, we use the filesystem as the concurrency boundary. Jobs fork with typed params written to /jobs/job<id>/inbox/*.<type>, execute in isolated processes, and write results to ./outbox. Debuggable with ls -R, reproducible, and naturally auditable. No shared memory, no race conditions.
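Reading the FSABI description literally, a rough Deno sketch of what handing a job across such a filesystem boundary might look like; the directory layout and file naming follow the comment above, but the parameter names, the polling loop, and the result file are all assumptions for illustration.

// Hypothetical filesystem-ABI job handoff (/jobs/job<id>/inbox/*.<type>,
// results in the job's outbox). A separate worker process would pick the
// job up; everything stays inspectable with `ls -R /jobs`.
const jobId = crypto.randomUUID();
const jobDir = `/jobs/job${jobId}`;
await Deno.mkdir(`${jobDir}/inbox`, { recursive: true });
await Deno.mkdir(`${jobDir}/outbox`, { recursive: true });

// Typed parameters become individual files; the extension carries the type.
await Deno.writeTextFile(`${jobDir}/inbox/target.string`, "/var/log");
await Deno.writeTextFile(`${jobDir}/inbox/max_depth.int`, "3");

// Poll for the worker's result file.
const resultPath = `${jobDir}/outbox/result.string`;
for (;;) {
  try {
    console.log(await Deno.readTextFile(resultPath));
    break;
  } catch {
    await new Promise((r) => setTimeout(r, 100)); // not there yet
  }
}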
Windows "Self-Exec" Model: Since Windows has no fork(), we re-spawn the binary with --flx-worker flags—the child reads its entire state from the FSABI inbox. Zero runtime shims, no process table magic.
Raw Assembly -> Kernel Only: Our binaries are tiny (7.5KB hello world, ~22KB for realistic file I/O + control flow + assertions) and link only against kernel32.dll (Windows) or raw syscalls (Linux). No libc, no CRT startup, direct CreateProcessA/WriteFile calls. The attack surface is just the kernel interface.
Fail-Fast by Design: fail fast on bugs (immediate termination), explicit error variants for world state (missing files, timeouts). No exceptions, no silent recoveries that hide security issues in production agents.
We're in RFC/private beta right now, targeting security teams that need to justify every line of code running in their scanning agents and CI/CD gates. The ability to audit the entire compiler/runtime in an afternoon is the feature.
Questions on yours:
Your shell-to-LLVM JIT is fascinating -- how are you handling the semantic gap between Bash's lenient error model (by default a pipeline's exit status is just that of its last command, unless pipefail is set) and POSIX's strict contracts? Do you expose multiple error-handling modes, or force everything through the check/except flow?
Also curious: when you JIT-compile legacy .sh scripts, do you preserve the original behavior of things like unquoted variable expansion and word splitting, or do you enforce stricter semantics? What do you think of shc?
keyle
Nice "functional programming synatx."
Hmmmm