Will Scaling Solve Robotics?
9 comments
·July 3, 2025

nico
markisus
Me too. I feel that things haven't changed that much. Many companies have been started since 2023 to see if they can get the giant-neural-net approach to work, and we've seen incremental progress based on demo videos. Meanwhile, Tesla's self-driving tech is still struggling. But at the same time, robots can now do things that were impossible with classical approaches.
mohsen1
Robotics data is perhaps not sensor and motor activation data; it's just video of things happening. A good model doesn't need that sort of data to be good at world modeling.
margalabargala
While you're not wrong, I suspect that people focusing on this will slow down robotics development.
It's in a similar vein to how "you can prove that a single-layer neural network exists that does the same thing as a combination of many layers" led a lot of people to focus on single-layer NNs for purity's sake, contributing to an AI winter.
Like yeah, maybe you don't "need" sensor and motor data. Especially if you build what you're calling a "good" model.
But making a "good" model that gets results might be near impossible, while building a "less good" model that does use sensor data, and performs far better on real tasks, might be way easier for us mere mortals.
yorwba
One of the viewpoints covered in the article: "Another set of people argued that we can leverage existing vision, language, and video data and then just ‘sprinkle in’ some robotics data."
refulgentis
Tesla is the worst, man. All the vague half-truths since 2018 create ideas like the claim that Tesla has experts teleoperate 100K vehicles to collect camera data for a huge model:
"Tesla in particular has a fleet of over 100,000 cars deployed in the real world that it is constantly collecting and then annotating data from. These cars are being teleoperated by experts, making the data ideal for large-scale supervised learning."
imtringued
Robotics is one area where it is painfully obvious how inadequate the current static weight paradigm is.
Biological brains are extremely good continual learners. They don't need to be trained ahead of time at all for basic motor skills. They'll learn on their own and sharing information merely speeds the process up.
There is no such thing as a data problem. The scaling paradigm isn't obsolete because it doesn't work, it's obsolete because it appears to be wholly unnecessary.
Think about it, why the hell would it be necessary for every single person wanting a robot butler in the house to take a video of their kitchen and then blend it with tens of thousands of other kitchens, just so the robot knows your specific kitchen in and out? The other kitchens are irrelevant to you. You don't care about them.
traverseda
You'd expect someone who's never seen a kitchen before to be able to cook?
refulgentis
A persistent strain of thought assumes static robotics models are useless unless they're trained on the exact environment they operate in, and I'm not sure why.
That aside, arguments from handwaving about biology are extremely weak. This is the same proof structure used to claim that because we only have two eyes, a machine only needs two cameras to drive a car safely. It asks you to forget everything that happens after the light hits the retina, and the difference between a camera and an always-wet lens that's wiped clean every second or two. In other words, it's a noble goal to have a model that has never seen a kitchen also be an expert chef, but there's no logical reason to claim that will ever be the case.
Should be marked (2023)! I'm wondering what the current state would be.