Researchers Discover the Optimal Way to Optimize
10 comments
October 13, 2025 · akshayka
BobbyTables2
I feel software/CS people largely avoid (or don’t need) certain areas of math.
To me, convex optimization is more the domain of engineering when there are continuous functions and/or stochastic processes involved.
Much of signal processing and digital communication systems are founded around convex optimization because it’s actually a sensible way to concretely answer “was this designed right?”.
One can use basic logic to prove a geometric theorem, or to reason about the behavior of a distributed algorithm.
But if one wants to prove that a digital filter was designed properly for random/variable inputs, it leads to finding solutions of convex optimization problems (minimization of mean squared error or such).
Of course, whether the right problem is being solved is a different issue. MMSE is just mathematically extremely convenient but not necessarily the most meaningful characterization of behavior.
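To make the MMSE point concrete, here is a minimal sketch (with made-up toy signals) of why filter design reduces to convex optimization: fitting FIR filter taps to minimize mean squared error is a least-squares problem, one of the simplest convex programs.

```python
import numpy as np

# Illustrative sketch with synthetic data: designing an FIR filter by
# minimizing mean squared error (MMSE) is a least-squares problem -- a
# convex quadratic in the filter taps.
rng = np.random.default_rng(0)

n_taps = 4
true_h = np.array([0.5, -0.3, 0.2, 0.1])   # hypothetical "correct" filter

x = rng.standard_normal(1000)              # random input signal
d = np.convolve(x, true_h)[: len(x)]       # desired output (noiseless here)

# Convolution matrix: column k is the input delayed by k samples,
# so X @ h reproduces the filter output.
X = np.column_stack([np.concatenate([np.zeros(k), x[: len(x) - k]])
                     for k in range(n_taps)])

# MMSE solution: minimize ||X h - d||^2 over h.
h_hat, *_ = np.linalg.lstsq(X, d, rcond=None)
print(np.round(h_hat, 3))  # recovers the true taps on noiseless data
```

With noisy data the same objective gives the least-squares (Wiener-style) estimate rather than an exact recovery, which is the "was this designed right?" criterion the comment alludes to.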
alfiedotwtf
I’ve always thought it was weird too, and have spent far too much time thinking about why. My best guess is that convex optimization gets taught because it’s used in economics, while the other methods rarely show up outside of programming curiosities (unless you need to apply them at work).
treetalker
> While the efforts of Bach and Huiberts are of theoretical interest to colleagues in their field, the work has not yielded any immediate practical applications.
measurablefunc
There must be lots of theorems in optimization theory that could be improved with more intellectual effort. Unlike video generation, if AI is applied to finding better algorithms it will have a direct impact on the economy, because almost every industrial process uses some kind of constrained optimization, including the simplex algorithm and its variations. But it's not flashy and profitable, so OpenAI will keep promising AGI by 2030 without showing any actual breakthroughs in real-world applications.
brosco
One of OpenAI's founding team members developed Adam [0] well before it was flashy and profitable. It's not like nobody is out there trying to develop new algorithms.
The reality is that there are some great, mature solvers out there that work well enough for most cases. And while it might be possible to eke out more performance in specific problems, it would be very hard to beat existing solvers in general.
Theoretical developments like this, while interesting on their own, don't really contribute much to day-to-day users of linear programming. A lot of smart people have worked very hard to "optimize the optimizers" from a practical standpoint.
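As an illustration of how mature this tooling is, a small LP (the numbers here are made up for the example) is one call to an off-the-shelf solver such as HiGHS via SciPy:

```python
from scipy.optimize import linprog

# Toy LP: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# linprog minimizes, so we negate the objective.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")

print(res.x, -res.fun)  # optimal point and maximized objective value
```

Behind that one call sits decades of presolve, sparse linear algebra, and pivoting-rule engineering, which is the practical "optimizing the optimizers" work the comment describes.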
Animats
Neat. Progress on lower bounds. That's been a tough area for decades.
There are a lot of problems like this. Traveling salesman, for example. Exponential in the worst case, but polynomial almost all the time.
Does this indicate progress on P = NP?
fernly
Another nice quote,
> The next logical step is to invent a way to scale linearly with the number of constraints. “That is the North Star for all this research,” she said. But it would require a completely new strategy. “We are not at risk of achieving this anytime soon.”
kruffalon
> “We are not at risk of achieving this anytime soon.”
Here "risk" seems odd (or it's a translation/language-nuance mistake).
probablypower
It is not a mistake, it is just being cheeky.
Anecdotally it seems like most software engineers have heard of linear programming, but very few have heard of convex programming [1], and fewer still can apply it. The fixation on LPs is kind of odd ...
[1] https://github.com/cvxpy/cvxpy
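For a taste of convex programming beyond LPs: nonnegative least squares is convex but not linear. A library like the cvxpy linked above lets you state such problems declaratively; the dependency-free sketch below (with made-up data) instead solves it directly with projected gradient descent just to show the idea.

```python
import numpy as np

# Convex program that is not an LP: nonnegative least squares,
#   minimize ||A x - b||^2  subject to  x >= 0.
# Solved here by projected gradient descent: take a gradient step,
# then project back onto the feasible set (clip negatives to zero).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true                              # consistent toy data

x = np.zeros(5)
step = 1.0 / np.linalg.norm(A.T @ A, 2)     # 1/L step size for this smooth objective
for _ in range(5000):
    grad = A.T @ (A @ x - b)
    x = np.maximum(x - step * grad, 0.0)    # gradient step + projection onto x >= 0

print(np.round(x, 3))
```

Because the objective is convex and the constraint set is convex, this simple scheme converges to the global optimum, which is the property that makes convex programming so much broader than LP while remaining tractable.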