
How to solve computational science problems with AI: PINNs

patrickkidger

FWIW - I used to do research in this area - PINNs are a terribly overhyped idea.

See for example https://www.nature.com/articles/s42256-024-00897-5

Classical solvers are very, very good at solving PDEs. In contrast, PINNs solve PDEs by... training a neural network. Not once, to produce a solver that can be reused later, but every single time you solve a new PDE!

You can vary this idea to try to fix it, but it's still really hard to make it better than any classical method.

As such, the main use cases for PINNs -- they do have them! -- are solving awkward stuff like high-dimensional PDEs or nonlocal operators or something. Here it's not that the PINNs get any better; it's just that all the classical solvers fall off a cliff.
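To make the "you train a network every single time" point concrete, here is a minimal toy sketch of the PINN idea in plain numpy. Everything here is hypothetical illustration, not anyone's actual implementation: a one-hidden-layer network stands in for the unknown solution of u''(x) = -pi^2 sin(pi x) with u(0)=u(1)=0 (exact solution sin(pi x)), and the training loss is the squared PDE residual plus a boundary penalty. Real PINNs use automatic differentiation; this sketch uses finite differences and numerical parameter gradients purely to stay dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)
# One-hidden-layer network u_theta(x); the network *is* the PDE solution.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(1, 16)), np.zeros(1)
params = [W1, b1, W2, b2]

def u(x, ps):
    W1, b1, W2, b2 = ps
    h = np.tanh(x[:, None] * W1.T + b1)      # hidden activations, shape (n, 16)
    return (h @ W2.T + b2).ravel()           # network output, shape (n,)

def loss(ps, x, eps=1e-3):
    # PDE residual of u'' = -pi^2 sin(pi x); u'' via central differences here
    # (real PINNs would use automatic differentiation instead).
    upp = (u(x + eps, ps) - 2 * u(x, ps) + u(x - eps, ps)) / eps**2
    residual = upp + np.pi**2 * np.sin(np.pi * x)
    bc = u(np.array([0.0, 1.0]), ps)         # boundary conditions u(0)=u(1)=0
    return np.mean(residual**2) + 100.0 * np.mean(bc**2)

def num_grad(ps, x, h=1e-6):
    # Numerical parameter gradients, purely to avoid an autodiff dependency.
    grads = []
    for p in ps:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            old = p[idx]
            p[idx] = old + h; lp = loss(ps, x)
            p[idx] = old - h; lm = loss(ps, x)
            p[idx] = old
            g[idx] = (lp - lm) / (2.0 * h)
        grads.append(g)
    return grads

x_col = np.linspace(0.0, 1.0, 32)            # collocation points
l0 = loss(params, x_col)
for _ in range(50):                          # "solving the PDE" = training the net
    gs = num_grad(params, x_col)
    base, step = loss(params, x_col), 1e-2
    while step > 1e-9:                       # backtracking line search
        trial = [p - step * g for p, g in zip(params, gs)]
        if loss(trial, x_col) < base:
            params = trial
            break
        step *= 0.5
print(l0, loss(params, x_col))               # loss before vs. after training
```

Note what this entails: to solve a *different* PDE, you change the residual in `loss` and retrain from scratch, whereas a classical solver is written once and reused.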

---

Importantly -- none of the above applies to stuff like neural differential equations or neural closure models. These are genuinely really cool and have wide-ranging applications! The difference is that PINNs are numerical solvers, whilst NDEs/NCMs are techniques for modelling data.

/rant ;)

__mmd

I believe a lot of this hype is attributable to Karniadakis and to how bad a lot of the methods in many areas of engineering are. The methods coming out of CRUNCH (PINNs chief among them) seem more intelligent in comparison, if they aren't actually, since engineers are happy to take a brute-force solution to inverse or model-selection problems as "innovative" haha.

mnky9800n

I love Karniadakis' energy. I invited him to give a talk in my research center and his talk was fun and really targeted at physicists who understand numerical computing. He gave a good sell and was highly opinionated, which was super welcome. His main argument was that these are just other ways to arrive at optimisation, and they worked very quickly with only a bit of data. I am sure he would correct me greatly at this point. I'm not an expert on this topic, but he knew the field very well and talked at length about the differences between one iterative method he developed and the method that Yao Lai at Stanford developed; I had her work on my mind because she talked at an AI conference I organised in Oslo. I liked that he seemed willing to disagree with people about his own opinions because he simply believed he was correct.

Edit: this is the Yao Lai paper I'm talking about:

https://www.sciencedirect.com/science/article/pii/S002199912...

anon389r58r58

The general rule of thumb to go by is that whatever Karniadakis proposes doesn't actually work outside of his benchmarks. PINNs don't really work, and _his flavor_ of neural operators also doesn't really work.

PINNs have serious problems with the way the "PDE component" of the loss function needs to be posed, and outside of throwing tons of (often Chinese) PhD students and postdocs at it, they usually don't work for actual problems. This is mostly owing to the instabilities of higher-order automatic derivatives, at which point PINN people go through a cascade of alternative approaches to obtain these higher-order derivatives. But these are all just hacks.
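For intuition on why higher-order derivatives in the loss are delicate, here is a toy numpy illustration. It uses finite differences (one of the fallback approaches to higher-order derivatives, not automatic differentiation itself), and the step size 1e-4 and test point x=1.0 are arbitrary choices for the demo: the roundoff error of a central-difference stencil scales like machine epsilon divided by h**order, so a fourth derivative computed this way is vastly noisier than a second derivative at the same step size.

```python
import numpy as np

x, h = 1.0, 1e-4
f = np.sin                                   # exact: f'' = -sin, f'''' = sin

# Central-difference stencils. Truncation error shrinks like h**2, but
# roundoff error grows like eps_machine / h**order as the order increases.
d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
d4 = (f(x - 2*h) - 4*f(x - h) + 6*f(x) - 4*f(x + h) + f(x + 2*h)) / h**4

err2 = abs(d2 - (-np.sin(x)))                # second-derivative error
err4 = abs(d4 - np.sin(x))                   # fourth-derivative error
print(err2, err4)
```

At h = 1e-4 the fourth-derivative estimate is dominated by roundoff (the numerator cancels down to the level of floating-point noise), while the second derivative is still accurate to several digits; a PDE residual built on such derivatives inherits that noise.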

mnky9800n

What do you do now?

Matthyze

A friend and I were discussing PINNs, and he made an argument against them: the Bitter Lesson. PINNs are a way of incorporating domain knowledge into ML models. The Bitter Lesson, for those unaware, is a famous essay by Rich Sutton arguing that the history of AI is full of methods guided by domain/expert knowledge, but that ultimately all such methods were overtaken by methods that simply scaled data and/or computation.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

I would love to hear HN's take on this argument.

add-sub-mul-div

Computer science is currently subservient to an economic climate in which the only viable business is one that scales revenue without scaling labor. That's the bitter lesson.

rangestransform

Improving labour productivity is good, actually

PaulHoule

But isn't that the story of technology and civilization? Hunter-gatherers produced no surplus and couldn't support a complex and unequal society. Early agriculture could support a pyramid, but not a very high one.

It used to be that almost everyone worked in agriculture; now about 1% does, so the others are free to do something else. Prior to the microprocessor, making a computer required manual assembly of thousands of parts; early microprocessors contained thousands of parts manufactured by a small number of photographic and chemical steps, and the number of parts has since grown into the billions without the number of steps growing millions of times.

roger_

This article seems like it was at least partially written by AI. Lots of fluff and no clear explanation of what PINNs are or how they work (other than the code).

getnormality

Agreed. Stylistic hints include the heavy use of bulleted lists with bold headings, and the general lack of concern with justifying any vaguely plausible-sounding assertion ("The PINN approach ensures physical consistency, efficient computation, and accurate generalization from limited data.")

I think someone who cared about the specific content would at least note that linear PDEs like the heat equation often have closed-form solutions and/or efficient algorithms for solving any particular problem, so aren't likely to be usefully solved with PINNs.
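To illustrate that point: a particular heat-equation problem yields to a classical method in a few lines. This is a sketch under stated assumptions, not anything from the article: explicit (forward-Euler) finite differences for u_t = u_xx on [0, 1] with u(0,t) = u(1,t) = 0 and initial condition sin(pi x), whose closed-form solution is exp(-pi^2 t) sin(pi x).

```python
import numpy as np

nx = 51
dx = 1.0 / (nx - 1)
dt, steps = 1e-4, 1000                  # t_final = 0.1; dt < dx**2 / 2 for stability
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                   # initial condition

for _ in range(steps):                  # forward Euler in time, central differences in space
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    # Dirichlet boundaries u[0] = u[-1] = 0 are never updated, so they hold

exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(err)                              # max error vs. the closed-form solution
```

No training loop, no hyperparameters beyond the grid, and the same code solves the equation again for any initial condition, which is the baseline a PINN has to beat on problems like this.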