Mastering Atari Games with Natural Intelligence
13 comments · January 23, 2025
ASalazarMX
Absolutely, that's like suddenly calling a water pump "manual labor" because it pumps water as we think a person would. It's marketing, and it doesn't exude honesty.
djmips
Very nervy, trademarking Genius and calling this natural intelligence.
SonOfLilit
This smelled like BS the moment they invoked Karl Friston, but when I noticed it's a publicly traded company (without any available products or research artifacts) I became _extremely_ suspicious.
The only non-scammy thing I could find about them is that Friston seems to work there.
chilmers
It reminds me of a machine-learning startup I worked at for a while. It was a company formed around a single scientist and their research, with the intent of productizing it. It wasn't a scam, people were operating in good faith, but it struggled with the basic problem that turning research into a product, and then selling that product, is difficult and time-consuming, especially for a startup. Yes, in theory there is a large market for more intelligent prediction and analytics across a lot of industries, but actually establishing yourself in say, insurance underwriting, is a difficult slog.
The hype around OpenAI and LLMs has slightly obscured the fact that, traditionally, AI has been very difficult to productize. DeepMind were operating for years, doing cool research and solving problems like playing Go, without actually building any usable products. OpenAI have succeeded so far by having massive funding, and by generating enough excitement around the capabilities of their models to produce an ecosystem of people trying to figure out how to build profitable products from it. But most AI platform startups don't have their level of funding or visibility.
Now, perhaps everything this company is saying is BS, but if we give them the benefit of the doubt, it sounds like they have had some success in a specific area, namely training agents to play Atari games on more limited data than existing models use. If true, that's pretty cool, but ultimately there is no market for an AI that plays Atari games, even at superhuman levels.
9q9
Benefit of the doubt is a great concept. The other extreme is: extraordinary claims require extraordinary evidence. Why should we restrict ourselves to a binary choice? Can we not think in a more nuanced fashion, in Bayesian terms? In other words, look at all available evidence and assign probabilities?
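As a toy sketch of that Bayesian weighing (my own illustration; the evidence items and likelihood ratios below are made up for demonstration, not derived from anything in this thread), each piece of evidence multiplies your prior odds by a likelihood ratio:

```python
# Odds-form Bayes: posterior odds = prior odds * product of likelihood ratios.
# Each likelihood ratio is P(evidence | claim genuine) / P(evidence | claim bogus).

def update_odds(prior_odds, likelihood_ratios):
    """Fold in each piece of evidence by multiplying its likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# Hypothetical numbers: start at even odds that the claim is genuine, then
# fold in, say, no released code (LR 0.5), a credible scientist on staff
# (LR 2.0), and no products despite being publicly traded (LR 0.25).
posterior_odds = update_odds(1.0, [0.5, 2.0, 0.25])
print(odds_to_prob(posterior_odds))  # 0.2
```

The point is just that "benefit of the doubt" vs. "extraordinary evidence" need not be a binary: every observation shifts the odds by some amount.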
"We are the next DeepMind" is easy to say ... The DeepMind founders had a stellar pedigree in computer games, AI and neuroscience; the Verses founders have a cryptocurrency background. Verses also released [1] last month. What both the Atari and the Mastermind announcements have in common is the lack of details, including code. Why do they not show their code? How do we know their figures are real? We've just had the OpenAI vs FrontierMath discussion [2, 3]. Presumably, being able to play Pong, a 1972 computer game, is unlikely to be their moat ...
Interesting also their 2024 MLST presentation [4]. Does that inspire confidence? It was that video that made my priors on Friston having had a breakthrough in ML change downwards dramatically ... But do not take my word for it, please make up your own mind.
[1] https://www.verses.ai/blog/genius-outperforms-openai-model-i...
[2] https://techcrunch.com/2025/01/19/ai-benchmarking-organizati...
SonOfLilit
There will soon be a multitrillion dollar market for AI that is SOTA at playing Atari games trained on small datasets, but it doesn't change the fact that everything about these guys smells like a scam.
9q9
Do those adjacent organisations inspire confidence?
9q9
So I'm not the only one who wonders about the hyperbole emanating from Friston et al! Some more morsels:
- The CEO is an "International Bestselling Author" [1].
- The company blog states that Friston has "successfully [decoded] the underlying mechanisms of intelligence as it functions in the brain and biological systems" [2].
Yet they got a $10M investment from G42, an Emirati VC [3]. Note that G42 have also invested in Cerebras and OpenAI [4]. So their PR works.
[1] https://www.linkedin.com/in/gabriel-ren%C3%A9-0201902/
[2] https://www.verses.ai/blog/blogs/letter-from-the-ceo
[3] https://21624003.fs1.hubspotusercontent-na1.net/hubfs/216240...
SonOfLilit
Friston is a bona fide world-famous neurobiologist (although renowned mostly for his papers being completely undecipherable).
If I were a VC, I'd give him $10M no questions asked for the small chance he's on to something. I'd expect him to be able to raise a $100M seed. So for me this is evidence against.
edit: he seems to have joined only in 2022, they were 4 years old at the time.
9q9
Agreed, Friston's bona fides are impressive. (Aside: his fame in neuroscience comes from having written important fMRI software that everybody cites.)
That's also why I worked with his team and read a lot of his papers for a while. His principal idea was originally that neurons perform free energy minimisation. This idea makes a lot of sense once you understand what free energy means, but, to the best of my knowledge, it has not at all been empirically verified for neurons (I'd be delighted to be proven wrong on this).

So he went the route of generalising the free energy principle: "the free energy principle asserts that any “thing” that attains a nonequilibrium steady state can be construed as performing an elemental sort of Bayesian inference". The terms "can be construed" and "elemental sort of Bayesian inference" do a lot of work here. Updating and generalising one's research hypothesis is legitimate (albeit one could be more explicit about it), but it weakens the claim being made. Under a charitable interpretation of those terms, I agree that this is true, but at the same time it doesn't say much: it basically equates doing free energy minimisation with existence. Friston has lately said that the FEP is not falsifiable. Take it from the horse's mouth (i.e. a Verses employee): "the free energy principle just applies to stones, it applies to birds, it applies to any kinds of animals" on Machine Learning Street Talk [1].
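For readers unfamiliar with the term: the variational free energy in this literature is the standard quantity from variational Bayes, and its defining identity can be checked numerically. The toy model below is my own construction (the numbers are arbitrary), not anything from Verses or Friston's code:

```python
# Numeric check of the variational free energy identity:
#   F(q) = E_q[log q(s) - log p(s, o)] = KL(q(s) || p(s|o)) - log p(o)
# Minimising F over q both fits q to the true posterior p(s|o) and
# tightens a bound on -log p(o), the "surprise" of the observation.
import numpy as np

# Generative model over 3 hidden states, for one fixed observation o:
p_joint = np.array([0.10, 0.25, 0.15])  # p(s, o) for s = 0, 1, 2 (arbitrary)
p_o = p_joint.sum()                     # model evidence p(o)
p_posterior = p_joint / p_o             # exact posterior p(s | o)

q = np.array([0.2, 0.5, 0.3])           # some approximate posterior q(s)

free_energy = np.sum(q * (np.log(q) - np.log(p_joint)))
kl = np.sum(q * (np.log(q) - np.log(p_posterior)))

# The two sides of the identity agree, and F >= -log p(o) since KL >= 0:
assert np.isclose(free_energy, kl - np.log(p_o))
assert free_energy >= -np.log(p_o)
```

None of this is in dispute as mathematics; the contested step is the claim that anything at a nonequilibrium steady state "can be construed" as minimising such an F.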
Here is my current position: from a principle this general, one cannot derive scalable ML algorithms!
> he seems to have joined only in 2022, they were 4 years old at the time.
The company founders have a cryptocurrency and (later) metaverse background.
low_tech_love
“If, metaphorically, the methods used by DQN and Agent57 are gas-guzzling Hummers and those used to tackle the Atari 100k challenge are like a fuel-efficient Prius, then our approach used on Atari 10k is like a Tesla, a hyper-efficient alternative architecture.”
Eh, what?
m000
- Can you write it in a way that the aging tech-bro would understand?
- Don't worry. I got it boss.
Doesn't "natural intelligence" normally mean human intelligence?