
Optimizing Brainfuck interpreter in the C preprocessor

eps

As usual, the qualifying question is whether it can handle the Mandelbrot renderer:

https://github.com/erikdubbelboer/brainfuck-jit/blob/master/...

LukaD

Unlikely, given that hello world takes 20 minutes to run.

ijlx

I think you may be looking at the wrong column. Hello World runs in 0.020s, per the benchmarks.

johnisgood

bfcpp[1] seems to be the most performant.

[1] https://github.com/Haruno19/bfcpp

jenadine

The explanation document is interesting: https://github.com/camel-cdr/bfcpp/blob/main/TUTORIAL.md

bob1029

> https://github.com/camel-cdr/bfcpp/blob/main/TUTORIAL.md#32-...

> loops like [-], [>] and [<] could even be completely reduced to a single operation.

This has got me wondering if it would make sense to pre-process the BF programs I've been using in genetic programming experiments. Loops are the #1 thing that slows down the search. But I don't know that this optimization pass would pay for itself in the typical case - at least not until the average complexity of the population reaches a certain point.
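For reference, the reductions quoted above can be sketched as a simple peephole pass. This is a hypothetical Python illustration, not bfcpp's actual implementation (which runs in the C preprocessor): runs of +, -, <, > are run-length encoded, and [-] (or [+]) collapses to a single "set cell to zero" instruction.

```python
# Hypothetical sketch of the peephole pass discussed above (not bfcpp's code):
# collapse runs of +-<> into counted ops, and rewrite [-] / [+] as one store.

def optimize(src):
    ops = []
    i = 0
    while i < len(src):
        c = src[i]
        if c in "+-<>":
            j = i
            while j < len(src) and src[j] == c:
                j += 1
            ops.append((c, j - i))  # run-length encode repeated ops
            i = j
        elif src[i:i + 3] in ("[-]", "[+]"):
            ops.append(("0", 1))    # the whole loop becomes one store
            i += 3
        elif c in "[].,":
            ops.append((c, 1))
            i += 1
        else:
            i += 1                  # everything else is a comment
    return ops

def run(ops, tape_len=30000):
    # Minimal interpreter for the optimized op list (',' omitted for brevity).
    tape = [0] * tape_len
    out = []
    jumps, stack = {}, []           # precompute matching brackets
    for idx, (op, _) in enumerate(ops):
        if op == "[":
            stack.append(idx)
        elif op == "]":
            start = stack.pop()
            jumps[start], jumps[idx] = idx, start
    p = pc = 0
    while pc < len(ops):
        op, n = ops[pc]
        if op == "+":   tape[p] = (tape[p] + n) % 256
        elif op == "-": tape[p] = (tape[p] - n) % 256
        elif op == ">": p += n
        elif op == "<": p -= n
        elif op == "0": tape[p] = 0
        elif op == ".": out.append(chr(tape[p]))
        elif op == "[" and tape[p] == 0: pc = jumps[pc]
        elif op == "]" and tape[p] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)
```

The win comes from replacing O(cell value) loop iterations with O(1) stores, which is why naive-vs-optimized benchmark gaps are so large.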

john-h-k

Optimising the program absolutely will win in general, because the optimisations are extremely cheap and very impactful. See https://github.com/john-h-k/rustfuck for an example of the sort of speedup differences you can get with different optimisation levels.

bob1029

Could the optimizer still help in the general case if you only ever intend to run each program one time?

If so, I probably need to spend a lot more time in this rabbit hole.


jevndev

I'm curious about what you're working on. Do you have a repo for the project?

As for optimizations, I figure evolution-designed languages might come up with things that are hard to pattern match for more complex operations.

bob1029

The core idea of what I am working on is to build a solution that can generate programs capable of converting arbitrary input to arbitrary output (bytes to bytes) based on a reasonable quantity of training data.

I'm trying to determine if a more symbolic approach may lend itself to broader generalization capabilities in lieu of massive amounts of training data. I am also trying to determine if dramatically simplified, CPU-only architectures could be feasible. I.e., ~8 interpreted instructions combined with clever search techniques (tournament selection & friends).
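The "tournament selection & friends" search could be sketched roughly like this; everything here (alphabet, population size, mutation rate) is an illustrative placeholder, not the commenter's actual code, and a real fitness function would interpret each candidate on the training input/output pairs.

```python
# Hypothetical sketch of tournament-selection search over a small
# instruction alphabet. All parameters are illustrative placeholders.
import random

OPS = "+-<>[].,"  # a minimal ~8-instruction alphabet

def random_program(n=16):
    return "".join(random.choice(OPS) for _ in range(n))

def tournament_select(pop, fitness, k=3):
    # Sample k individuals at random; the fittest of the sample wins.
    return max(random.sample(pop, k), key=fitness)

def mutate(program, rate=0.1):
    # Point mutation: each character is replaced with probability `rate`.
    return "".join(random.choice(OPS) if random.random() < rate else c
                   for c in program)

def evolve(fitness, pop_size=50, generations=100):
    # A real fitness function would run each program on training pairs
    # and score the output; here it is supplied by the caller.
    pop = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(tournament_select(pop, fitness))
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```

Tournament selection keeps the selection step cheap and local, which matters when fitness evaluation (interpreting each candidate program) dominates the runtime of the search.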

I don't have anything public yet. I am debating going wide open for better collaboration with others.

> I figure evolution-designed languages might come up with things that are hard to pattern match for more complex operations.

I think I agree with this - once you hit a certain level of complexity things would get really hard to anticipate. The chances you would hit good patterns would probably drop over time as the model improves.

I've been looking at an adjacent idea wherein a meta program is responsible for authoring the actual task program each time, but I haven't found traction here yet. Adding a 2nd layer really slows down the search. And, the fitness function for the meta program is a proxy at best unless you have a LOT of free time to critique random program sources.

ginko

I guess this is a good opportunity to show off my Zig comptime implementation of Brainfuck.

https://github.com/ginkgo/Zig-comptime-brainfuck