
6 comments · July 14, 2025

ankit219

I can see Gary's point here. He got some stick for this on X, but he seems to be right. One curious thing is how both sides have vacated their original positions: the scaling people were all about scaling, and now they talk about RL as the next scaling curve, with code tool calls an accepted paradigm. The symbolic group was more about symbols first and learning later (Marcus himself in 2001: structured representations are the only route to systematicity).

Code Interpreter + o3 is neurosymbolic AI. The architecture closely resembles a cognitive-science flowchart (perception net -> symbolic scratchpad -> controller loop). How we got there is through gradient descent, not brittle expert-written rules.
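A minimal sketch of that loop, for concreteness. The names `llm_generate`, `run_code`, and `controller` are hypothetical stand-ins (not any vendor's actual API), and `exec` is a toy substitute for a real sandboxed interpreter; the point is just the shape: neural model proposes symbolic programs, a deterministic interpreter executes them, a controller feeds results back.

    # Sketch of a neurosymbolic controller loop: neural proposal ->
    # symbolic execution -> feedback. All names here are illustrative.
    import io
    import contextlib

    def llm_generate(prompt: str) -> str:
        """Hypothetical neural step: returns Python source prefixed with
        'RUN:', or a final answer prefixed with 'DONE:'."""
        raise NotImplementedError("wire up a real model here")

    def run_code(source: str) -> str:
        """Symbolic step: execute the proposed program, capture stdout."""
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(source, {})  # toy sandbox; isolate this in practice
        except Exception as e:
            return f"ERROR: {e!r}"
        return buf.getvalue()

    def controller(task: str, max_steps: int = 5) -> str:
        """Controller loop: alternate neural proposals with execution."""
        transcript = task
        for _ in range(max_steps):
            reply = llm_generate(transcript)
            if reply.startswith("DONE:"):
                return reply[len("DONE:"):].strip()
            code = reply.removeprefix("RUN:").strip()
            result = run_code(code)
            transcript += f"\n[code]\n{code}\n[output]\n{result}"
        return "no answer within step budget"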

readthenotes1

"Hao then captures OpenAI’s sophomoric attitude towards fair scientific criticism:"

So this is a bitter editorial?

kiratp

So a sequence of characters that is a Python program is "neurosymbolic", but a sequence (in the same domain) in English (a different ruleset) that says "reverse this string" is not?

ipsum2

Gary Marcus keeps hallucinating that LLMs use neurosymbolic AI, something he's harped on for years. LLMs do not, no matter how many mental gymnastics he performs.

NitpickLawyer

The article could use a pass to remove all the "I was right" spiel, but isn't it true that LLM + interpreter/tools is neurosymbolic?
