Automating Algorithm Discovery: A Case Study in MoE Load Balancing
49 comments
October 23, 2025 · accheng
liamYC
What does ADRS stand for?
nerdsniper
This blog post has more accessible writing and diagrams: https://www.sigops.org/2025/barbarians-at-the-gate-how-ai-is...
From TFA: https://arxiv.org/pdf/2510.06189
> We term this approach as AI-Driven Research for Systems (ADRS), which iteratively generates, evaluates, and refines solutions.
> The central thesis of this paper is that a new class of AI-driven approaches, which we term AI-Driven Research for Systems (ADRS), is beginning to show promising results in automated algorithm discovery, and will ultimately prompt a re-evaluation of the traditional role of systems researchers.
logicallee
did AI explain its thinking, or could it have just stumbled upon the solution without designing it or understanding why it worked? i.e. could it have just been a hallucination that happened to work?
accheng
This is a great question! By analyzing the logs of OpenEvolve with the full model outputs, we observed how the AI got its ideas (it seemed to be pulling from literature in the space) and how it tried to apply them. So in some sense, it "reasoned" about how to get better algorithms. And we saw this process proceed systematically via the ADRS framework to converge to a significantly better algorithm.
Izikiel43
Can you confirm if this generated code is the same as https://arxiv.org/pdf/2402.02447 ?
logicallee
very interesting, thank you.
_--__--__
Nice result, but the snake pattern is pretty obvious and intuitive even for a human who just glances over the problem. It kinda breaks if there is huge variance (if the top load expert is orders of magnitude higher than #2 it probably should just get its own GPU), but I'm not familiar enough with MoE to know if that's a realistic possibility.
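For readers who haven't seen it, here is a minimal sketch of the serpentine ("snake") assignment being discussed. This is my own illustration of the general trick, not code from the post: sort experts by load, then deal them out across GPUs left-to-right, then right-to-left, so heavy and light experts end up paired on the same device.

```python
def snake_assign(loads, num_gpus):
    """Assign experts to GPUs in a serpentine (boustrophedon) order.

    loads: list of per-expert token loads.
    Returns a dict mapping gpu index -> list of expert indices.
    """
    # Sort expert indices by load, heaviest first.
    order = sorted(range(len(loads)), key=lambda e: loads[e], reverse=True)
    assignment = {g: [] for g in range(num_gpus)}
    for i, expert in enumerate(order):
        pass_idx, pos = divmod(i, num_gpus)
        # Even passes go 0..N-1, odd passes go N-1..0.
        gpu = pos if pass_idx % 2 == 0 else num_gpus - 1 - pos
        assignment[gpu].append(expert)
    return assignment
```

On a simple descending workload like `[8, 7, 6, 5, 4, 3, 2, 1]` over 4 GPUs, each GPU ends up with total load 9, which is what makes the pattern attractive despite its simplicity.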
abmfy
Thanks! In realistic workloads, the differences won’t be orders of magnitude.
I agree that this is a fairly simple problem. Experienced engineers—or anyone who has faced similar challenges—can quickly come up with such solutions. The key point, however, is that others might get stuck in their research simply because they don’t realize these quick solutions exist (“I don’t know what I don’t know”). AI helps bridge that gap by making expert-level knowledge accessible to every researcher, allowing them to focus more on exploring the truly unknown parts.
bgwalter
Except that "AI" steals and mostly does not do citations.
EDIT: The chutzpah of downvoting this is striking. The paper says "surpasses highly optimized algorithms engineered by human experts to achieve a 5.0x speedup" and https://news.ycombinator.com/item?id=45689663 links to a 2024 paper where humans discovered a 4.2x speedup using a snake pattern. The 2024 paper is not cited.
dash2
Given that, maybe the submission title should be changed?
pengaru
this should be the top comment
What "AI" is best at is enabling theft without crediting the true creators
pakt1
that's true for any application of AI :(
cblmemo
It’s exciting to see AI being applied to real systems problems in such a tangible way. Looking forward to seeing where this goes next.
pos456
this feels less like Copilot and more like AlphaGo for systems programming. it's not just finding patterns in existing code, but discovering novel and more efficient strategies in a given problem space. Very cool.
joaohaas
So, if I got this right, this is just about re-implementing an existing load balancing algorithm faster...? If so, this is really dumb. As you guys checked out, yes most load balancing algorithms are slow/dumb:
>First, we evaluate DeepSeek's open-source EPLB implementation. This employs a greedy bin-packing strategy: experts are sorted by load in descending order, and each is placed onto the least-loaded GPU that has capacity (Figure 3a, Example 1). While simple, the solution is slow because it is written in Python and uses a for-loop to perform a linear search for the best-fit GPU.
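The greedy strategy quoted above can be sketched in a few lines (my reading of the description, not DeepSeek's actual EPLB code): sort experts by load descending, then put each one on the least-loaded GPU that still has a free slot.

```python
def greedy_place(loads, num_gpus, capacity):
    """Greedy bin-packing: heaviest expert first, onto the least-loaded
    GPU that still has room. Returns (per-gpu expert lists, per-gpu loads)."""
    gpu_load = [0] * num_gpus
    gpu_experts = [[] for _ in range(num_gpus)]
    for expert in sorted(range(len(loads)), key=lambda e: loads[e], reverse=True):
        # Linear scan for the least-loaded GPU with capacity -- the
        # per-expert search the post calls out as slow.
        candidates = [g for g in range(num_gpus) if len(gpu_experts[g]) < capacity]
        g = min(candidates, key=lambda g: gpu_load[g])
        gpu_experts[g].append(expert)
        gpu_load[g] += loads[expert]
    return gpu_experts, gpu_load
```

This is O(experts × GPUs) per rebalance; the speedups discussed in the post come from replacing this kind of explicit iteration with vectorized operations.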
This is because when considering a load balancing algorithm, unless the work being done (in this case by the GPU) lasts only a few ms, the load balancing algorithm being fast will never be the bottleneck. The post does not mention whether this is the case at all.
Also, I don't want to sound rude, but if all they managed to get is a 5x speedup over a simple Python algorithm, I don't think this is impressive at all...? Any rewrite of the 'dumb' algorithm in a language with more control over memory layout and cache locality should yield much better results.
abmfy
Thanks for commenting! Actually in this case, "the work being done" can be really fast because it can be done asynchronously. For context, here’s how this translates in a real-world application.
The original algorithm was provided by DeepSeek, and our optimized implementation achieves a 92x speedup over it. The 5x number is a comparison against another baseline that has not yet been disclosed.
When integrating EPLB into vLLM, I discovered—somewhat unexpectedly—that the open-source algorithm consumes nearly half of the total time of a rearrangement step, with the remaining time spent transferring weights across GPUs. To address this, I applied OpenEvolve to the algorithm, setting the primary objective to improve speed while maintaining the same balance factor. It performed remarkably well. With additional optimizations on the weight transferring, the overall overhead has now become almost negligible.
kristjansson
While no one will deny you (or I guess your system) the immense satisfaction of 100x improvement on a given step, I think it would be helpful to note the frequency of this rebalancing step, and to contextualize your result in terms of the runtime (or throughput) of the workload(s) you were using to evaluate.
teunlao
Agree. Starting from Python for-loops is an embarrassing baseline. Any decent implementation gets you most of that 5x for free. The interesting part isn't the speedup - it's that AI can do routine optimization unsupervised. That's the actual value prop.
Noumenon72
There's nowhere on the page to find out what "ADRS" stands for since the upper left is cut off and isn't a link to your home page.
kristjansson
This is quite cool, but I must note that the 5x reported in the headline is the _runtime_ of the load balancing algorithm itself, not the load factor or throughput of the system or what have you.
> On average, it takes about 540 ms to re-balance the experts and achieves a load balance factor of 0.66 (calculated as the ratio of average to maximum tokens generated per GPU).
> ...
> We also consider a non-public reference implementation from a frontier lab that we have access to. This implementation avoids explicit iteration and reduces the rebalancing algorithm runtime to 19.6 ms while achieving the same balance factor as the open-source algorithm.
> ...
> The resulting algorithm matches the load balance factor of the other baselines while reducing runtime to just 3.7 ms, yielding a 5.0x speedup over the internal reference implementation.
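The balance factor quoted above is simple to compute; a one-line illustration of the definition (average tokens per GPU divided by the maximum; 1.0 means perfectly even, lower is worse):

```python
def balance_factor(tokens_per_gpu):
    """Ratio of average to maximum tokens generated per GPU."""
    return sum(tokens_per_gpu) / len(tokens_per_gpu) / max(tokens_per_gpu)
```

For example, a perfectly even split like `[100, 100, 100, 100]` scores 1.0, while a skewed split like `[50, 150]` scores about 0.67, close to the 0.66 the post reports for its workload.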
quc1k
The final code might be fast, but is it understandable? The evolution process shows it tried a bunch of things that didn't work. The final result is a heuristic that won out based on a specific simulator and fitness function.
accheng
The code was quite short and easy to read. Specifying the right scoring function and scoping the problem are key parts of getting good results with ADRS.
bgwalter
I'm not sure if this is the exact same thing, but a load balancing paper reported a 4.2x speedup by applying a "snake pattern" in 2024:
coliveira
Most probably the AI was secretly tested on this data and is just stealing the algorithm.
letitgo12345
Seems the same tbh
abmfy
Thanks for letting us know! While we’re tackling different problems, the core idea around load balancing is quite similar.
The pattern might be a familiar trick to those experienced with this kind of problem — you can see my thoughts on it here: https://news.ycombinator.com/item?id=45688236#45689440
Jweb_Guru
It's okay to acknowledge that you missed something in your literature search. Everyone does. It's not okay to sweep it under the rug or pretend that it's novel after having the prior work pointed out to you, especially when a central part of your thesis is that "AI" discovered a novel algorithm and it's very likely that this algorithm was part of the LLM's training data.
mavt6
I'm skeptical this generalizes beyond problems that can be expressed as "rearrange tensors faster". It feels like a solution that only works for a very narrow and convenient class of problems.
maven5t
Getting a 5x speedup for less than $10 and in just five hours is insane. The ROI on this approach is going to be hard to beat.
null
As an author of the blog, I'll note that this was one of the easiest applications of ADRS. Bowen, who was leading this effort, got things running within a day or two and the initial runs were with free Google credits! It was exciting to see how quickly these kinds of frameworks could be applied to real-world engineering and algorithmic challenges.