
Predictions Scorecard, 2025 January 01


189 comments · January 10, 2025

sashank_1509

Feels too self-congratulatory when he claims to be correct about self driving in the Waymo case. The bar he set is so broad and ambiguous that probably nothing Waymo did would qualify as self driving to him. He thinks humans are intervening once every 1-2 miles to train the Waymo; we're not even sure that's true. I heard from friends that it was 100+ miles, but let's say Waymo comes out and says it is 1,000 miles.

Then I bet Rodney can just move the goalposts and say that 3.26 trillion miles were driven in the US in 2024, so a human intervening every 1,000 miles would mean 3.26 billion interventions, and that this is clearly not self driving. In fact, until Waymo disables the Internet on all cars and proves it never needs any intervention ever, Rodney can claim he's right. Even then, maybe not stopping exactly where Rodney wanted might be proof that self driving doesn't work.

The "next big thing after deep learning" prediction is clearly false. LLMs are deep learning scaled up; we are not in any sense looking past deep learning. Rodney, I bet, wanted it to be symbolic AI, but that is most likely a dead end, and the bitter lesson actually holds. In fact we have been riding this deep learning wave since AlexNet in 2012. OpenAI talked about scaling since 2016, and during that time the naysayers could confidently claim we needed something more, but OpenAI went ahead, proved out the scaling hypothesis, and passed the language Turing test. We haven't needed anything more except scale, and reasoning has turned out to be similar: just an LLM trained to reason, no symbolic merger, not even a search step it seems.

benreesman

Waymo cars can drive. Everything from the (limited) public literature to riding them personally has me totally persuaded that they can drive.

DeepMind RL/MCTS can succeed in fairly open-ended settings like StarCraft and shit.

Brain/DeepMind still knocks hard. They under-invested in LLMs and remain kind of half-hearted around it because they think it’s a dumbass sideshow because it is a dumbass sideshow.

They train on TPUs, which cost less than chips made of rhodium like a rapper's sunglasses, and they fixed the structural limits of TF2 and PyTorch via the JAX ecosystem.

If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.

tylerflick

I can tell you, as someone who crosses paths almost every day with a Waymo car, that they absolutely do work. I would describe their driving behavior as very safe and overly cautious. I'm far more concerned about humans behind the wheel.

benreesman

I especially love how they can go fast when it’s safe and slow when the error bars go up even a little.

It’s like being in the back seat of Niki Lauda’s car.

vessenes

Agreed, Waymo cars can drive. But I don't believe that, say, when a city bus stops on a narrow street near a school crosswalk, the decision to edge out and around it is made on board the car, as I saw recently. The "car" made the right decision, drove it perfectly, and was safe at all times, but I just don't think anyone but a human in a call center said yes to that.

KKKKkkkk1

Which structural limits of TF2 and PyTorch were fixed via the Jax ecosystem?

fouronnes3

Does Waymo run on JAX?

tsimionescu

I think that, if it were true that Waymo cars require human intervention every 1-2 miles (thus requiring 1 operator for every, say, 1-2 cars, probably constantly paying attention while the car is in motion), then it would be fair to say that the cars are not really self driving.

However, if the real number is something like an intervention every 20 or 100 miles, so that an operator is likely passively monitoring dozens of cars, and the cars themselves ask for operator assistance rather than the operator actively monitoring them, then I would agree with you that Waymo has really achieved full self driving and his predictions on its basic viability have turned out wrong.

I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
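The staffing argument in this sub-thread can be sketched with a toy back-of-envelope model. The average speed and minutes-per-intervention figures below are illustrative assumptions, not Waymo data:

```python
# Back-of-envelope sketch: how intervention frequency drives the
# cars-per-operator ratio. All numbers are illustrative guesses.

def operators_per_car(miles_between_interventions: float,
                      avg_speed_mph: float = 20.0,
                      minutes_per_intervention: float = 2.0) -> float:
    """Fraction of one full-time operator that a single car consumes."""
    interventions_per_hour = avg_speed_mph / miles_between_interventions
    return interventions_per_hour * minutes_per_intervention / 60.0

frequent = operators_per_car(1.5)    # intervention every 1-2 miles
rare = operators_per_car(100.0)      # intervention every 100 miles

print(f"every 1.5 mi: {1 / frequent:.1f} cars per operator")   # barely 2 cars
print(f"every 100 mi: {1 / rare:.0f} cars per operator")       # ~150 cars
```

Under these assumptions the two scenarios differ by nearly two orders of magnitude in staffing, which is why the intervention rate matters so much to the "is it really self driving" question.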

skywhopper

I disagree that regular interventions every two trips, where you have no control over pickup or dropoff points, count as full self driving.

But that definition doesn’t even matter. The key factor is whether the additional overhead, whatever percentage it is, makes economic sense for the operator or the customer. And it seems pretty clear the economics aren’t there yet.

laweijfmvo

Waymo is the best driver I’ve ridden with. Yes it has limited coverage. Maybe humans are intervening, but unless someone can prove that humans are intervening multiple times per ride, “self driving” is here, IMO, as of 2024.

Denzel

In what sense is self-driving “here” if the economics alone prove that it can’t get “here”? It’s not just limited coverage, it’s practically non-existent coverage, both nationally and globally, with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in.

AlotOfReading

It's covering significant areas of 3 major metros, and the core of one minor, with testing deployments in several other major metros. Considering the top 10 metros are >70% of the US ridehail market, that seems like a long way beyond "non-existent" coverage nationally.

jsnell

Where's the economic proof of impossibility? As far as I know Waymo has not published any official numbers, and any third party unit profitability analysis is going to be so sensitive to assumptions about e.g. exact depreciation schedules and utilization percentages that the error bars would inevitably be straddling both sides of the break-even line.

> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in

That argument doesn't seem horribly compelling given the regular expansions to new areas.

YetAnotherNick

It's here in the product/research sense, which is the hardest bar to cross. Making it cheaper takes time, but generally we have reduced the cost of everything by orders of magnitude when manufacturing ramps up, and I don't think self-driving hardware (sensors etc.) will be any different.

shrubble

Does Waymo operate in heavy rain or any kind of snow or ice conditions?

bhelkey

The author specifically calls out that the taxi service needs not operate in all weather conditions or times of day.

> First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day.

However, their analysis this year is that, "This is unlikely to happen in the first half of this century."

The prediction is clear. The evaluation is dishonest.

khafra

> So he thinks humans are intervening once every 1-2 miles to train the Waymo

Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?

(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean it's not already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean human-driven cars wouldn't benefit from that if it were available.)

tsimionescu

To apply this benchmark, you'd have to believe that Waymo is paying operators to improve the quality of the ride, not to make the ride possible at all. That is, you'd have to believe that the fully autonomous car works and gets you to your destination safely and in a timely manner (at the level of a median professional human driver), but Waymo decided that's not good enough and hired operators to improve beyond that. This seems very unlikely to me, and some of the (few) examples I've seen online were about correcting significant failures, such as waiting behind a parked truck indefinitely (as if it were stopped at a red light) or looping around aimlessly in a parking lot.

You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.

lukeschlather

Let's suppose Waymo's fully automated system has tenfold fewer fatal collisions than a human. There's no way to undo the fatal accidents a human causes, while the solution to Waymos getting stuck sometimes is simple. The point is that the Waymo can genuinely be described as superior to a human driver, and the fact that its errors can be corrected with review is a feature, not a bug: they optimize for those kinds of errors rather than unrecoverable ones.

mvdtnz

Your objection to him claiming a win on self driving is that you think that we can still define cars as self driving even when humans are operating them? Ok I disagree. If humans are operating them then they simply are not self driving by any sensible definition.

sashank_1509

Human interventions are some nonzero number in current self-driving cars and will likely stay that way for a while. Does this mean self driving is a scam, that it is in fact just a human driving, and that these are actually ADAS? Maybe in some pedantic sense you are right, but then your definition is not useful, since it lumps cruise-control/lane-keeping ADAS and Waymos in the same category. Waymo is genuinely, qualitatively a big improvement over any ADAS/self-driving system we have seen. I suspect Rodney did not predict even Waymos to be possible, but gave himself enough leeway to pedantically argue that Waymos are just ADAS and that his prediction was right.

mvdtnz

No one said scam (although in the case of Tesla it absolutely is). It's just not a solved problem yet.

skywhopper

Some of them are scams, yes. For stuff like Waymo, it definitely doesn’t match the hype at the time he made the original predictions. As pointed out above, there were people in 2016 claiming we’d be buying cars without steering wheels that could go between any two points connected by roads by now.

Spivak

Yeah, I think semi-autonomous vehicles are a huge milestone and should be celebrated, but the jump from semi-autonomous to fully autonomous will, I think, feel noticeably different. It will be a moment after which future generations have trouble imagining a world where drunk or tired driving was ever even an issue.

fragmede

The future is here, just unevenly distributed. There are already people that don't have that issue, thanks to technology. That technology might be Waymo and not driving in the first place, or the technology might be smartphones and the Internet, which enables Uber/Lyft to operate. Some of them might use older technologies like concrete which enables people to live more densely and not have to drive to get to the nearest liquor establishment.

munchler

You can make exactly the opposite argument as well: You think that we can still define cars as human-driven even when they have self-driving features (e.g. lane keeping). If the car is self-driving in even the smallest way, then they simply are not human-operated by any sensible definition.

skywhopper

No one is making predictions or selling stock in the amount of “fully human controlled” vehicles.

littlestymaar

> when he claims to be correct about self driving in the Waymo case. The bar he set is so broad and ambiguous, that probably anything Waymo did, would not qualify as self driving to him

Honestly, back in 2012 or so I was convinced that we would have autonomous driving by now, and by autonomous driving I definitely didn't mean “one company is able to offer autonomous taxi rides in a very limited number of places with remote operator supervision”. The marketing pitch has always been something like “the car you'll buy will autonomously drive you to whatever destination you ask for, and you'll be just a passenger in your own car”, and we definitely aren't there at all when all we have is Waymo.

4ndrewl

Nonsense. If you spoke about self-driving cars a few decades ago you would have understood it to have meant that you could go to a dealer and buy a car that would drive itself, wherever you might be, without your input as a driver.

No one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".

Schiendelman

That's how all innovation works. Ford never said people asked for a faster horse, but the theory holds. It doesn't matter what benchmarks you set, the market finds an interesting way to satisfy people's needs.

bhelkey

The prediction is:

> First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day.

Their 2025 analysis is: "This is unlikely to happen in the first half of this century."

The prediction is clear. The evaluation is dishonest.

throw-qqqqq

I agree. Waymo sells 150k+ rides every week according to Alphabet’s Q3 2024 earnings announcement. Yes, they need human assistance once in a while. I know of plenty of other automation that needs to be tickled or rebooted periodically to work that most would still say works automatically.

Maybe he has a very narrow or strict definition of ‘driverless’. That would explain the “not in this half of the century”-sentiment. I mean, it’s 25 years!

gwern

The Waymo criticisms are absurd to the point of dishonesty. He criticizes a Waymo for... not pulling out fast enough around a truck, or for human criminals vandalizing them? Oh no, once some Waymos did a weird thing where they honked for a while! And a couple times they got stuck over a few million miles! This is an amazingly lame waste of space, and the fact that he does his best to only talk about Tesla instead of Waymo emphasizes how weak his arguments are, particularly in comparison to his earliest predictions. (Obviously only the best self-driving car matters to whether self-driving cars have been created.)

"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now be conveniently replaced with a little rock with "nothing in AI ever works" written on it, without anything of value being lost.

elicksaur

It’s interesting that in my reading of the post I felt like he hardly talked about Tesla at all.

He calls out that Tesla FSD has been “next year” for 11 years, but then the vast majority of the self-driving car section is about Cruise and Waymo. He also briefly mentions Tesla’s promise of a robotaxi service and how it is unlikely to be materially different from Cruise/Waymo. The amount of space allocated to each made sense as I read it.

For the meat of the issue: I can regularly drive places without someone else intervening. If someone else had to intervene in my driving once every 100 miles, or even once every 1,000 miles, most would probably say I shouldn’t have a license.

Yes, getting stuck behind a parked car or similar scenario is a critical flaw. It seems simple and non-important because it is not dangerous, but it means the drive would not be completed without a human. If I couldn’t drive to work because there was a parked car on my home street, again, people would question whether I should be on the road, and I’d probably be fired.

davedx

Interesting, that wasn't my takeaway from the article at all!

Direct quote from the article:

> Then I will weave them together to explain how it is still pretty much business as usual, and I mean that in a good way, with steady progress on both the science and engineering of AI.

There are some extremely emotional defences of Waymo in this comment thread. I don't quite understand why. Is it somehow immune to constructive criticism among the SV crowd?

Animats

> That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs.

Tell that to someone laid off when replaced by some "AI" system.

> Waymo not autonomous enough

It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.

Tesla and Baidu do use remote drivers.

The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.

> Flying cars

Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo. EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, instead of their previous 25 minutes. The EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.

[1] https://aerospaceamerica.aiaa.org/electric-air-taxi-flights-...

shlomo_z

> Tell that to someone laid off when replaced by some "AI" system.

What are some good examples? I am very skeptical of anyone losing their jobs to AI. People are getting laid off for various reasons:

- Companies are replacing American tech jobs with foreigners
- Many companies hired more devs than they need
- Companies hired many devs during the pandemic and don't need them anymore

Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.

I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.

lolinder

> I believe some devs were probably replaced by AI, but not a large amount.

I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.

But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.

Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.

Mistletoe

But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true, then you need far fewer devs. Are people just sitting idle at their desks? I do see quite a bit of tech layoffs for sure. Are you saying devs aren't part of the workers being laid off?

>In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.

rcpt

It's pretty much impossible to get work as a copywriter now

RamblingCTO

I was thinking about this. I think we have an overcorrection right now. People get laid off because of expected performance of AI, not real performance. With copywriting and software development we have three options:

1. Leaders notice they were wrong and start to increase human headcount again.
2. Human work is seen as boutique and premium, used for marketing and market placement.
3. We just accept the sub-par quality of AI and go with it (quite likely with copywriting, I guess).

I'd like to compare it with cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.

But anyway, I figure that 90% of "laid off because of AI" is just regular layoffs with a nice-sounding reason. You don't lose anything by saying that and only gain in stakeholder trust.

theLiminator

I imagine there aren't really layoffs, but slowing/stopping of hiring as you get more productivity out of existing devs. I imagine in the future, lots of companies will just let their employee base slowly attrition away.

davedx

Yeah, the AgentForce thing is a classic example. Internal leaks say Salesforce is using it as cover for more regular (cost cutting based) layoffs. People who've actually evaluated AgentForce don't think it's ready for prime time. It's more smoke and mirrors (and lots of marketing).

davedx

I think what Waymo's achieved is really impressive, and I like the way they've rolled out (carefully), but there's a lot of non evidence based defense of them in this comment thread. YouTube videos of people driving for hours are textbook survivorship bias. (What about all the videos people made but didn't upload because their drive didn't go perfectly?)

Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.

Which means I also agree his estimate could also be wildly wrong too.

skywhopper

He’s saying AI can’t do the work of humans, not that dumb executives won’t pretend it can.

brcmthrowaway

What is the silver bullet for battery tech?

Animats

Solid state batteries. Prototypes work, but high-volume manufacturing doesn't work yet. The major battery manufacturers are all trying to get this to production. Early versions will probably be expensive.

Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway. Charging in < 10 minutes.

Teever

The one thing I'm curious about with solid state batteries is whether there's a path toward incremental improvements in power density, like we've seen with lithium batteries.

It would be unfortunate if we get solid state batteries that have the great features that you describe but they're limited to 2x or so power density. Twice the power density opens a lot of doors for technology improvements and innovation but it's still limiting for really cool things like humanoid robotics and large scale battery powered aircraft.

brcmthrowaway

What about LK-99? Twitter influencers were talking about that.

adgjlsfhk1

I think there are ~3 major battery improvements to watch out for.

1. Solid state batteries. Likely to be expensive, but promise better energy density.

2. Some really good grid storage battery. Likely made with iron or molten salt or something like that. Dirt cheap, but horrible energy density.

3. Continued Lithium ion battery improvements, e.g. cheaper, more durable etc.

Animats

There are now a few large flow batteries. Here's one that's 400 megawatt-hours.[1] Round trip efficiency is poor and the installation is bulky, but storage is just tanks of liquid that are constantly recycled.

[1] https://newatlas.com/energy/worlds-largest-flow-battery-grid...

davedx

My money is on saltwater batteries. You can make them really cheaply. Flow batteries are still too complicated IMO.

coderintherye

Good example of everything that can go wrong with a prediction market if left unchecked. Don't like that Waymo broke your prediction? Fine, just move your goalposts. Like that a prediction came true but on the wrong timeframe? Just move the goalposts.

Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.

sashank_1509

Well said, shows even the most accomplished humans have the same biases as the rest of us when not held accountable

HDThoreaun

Polymarket suffers from the same problem. This market https://polymarket.com/event/ethereum-etf-approved-by-may-31... was resolved in an extremely contentious way.

littlestymaar

> Glad Polymarket (and other related markets) exist so

Polymarket is a great way to incentivize people into making their predictions happen, with all the clandestine tools at their disposal, which is definitely not what you want for your society generally.

UniverseHacker

It seems to me that the redefined flying cars for extremely wealthy people did happen? eVTOLs are being sold/delivered to the general public. Certainly still pretty rare, as I've never seen one in real life. I'd love to have one but would probably hate a world where everyone has them.

Not really wanting to have this argument a second time in a week (seriously: just look at my past comments instead of replying here, as I said all I care to say there: https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights. They can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the locations of objects and reporting where they will likely end up, based on modeling the interactions involved. If you absolutely must reply that I am wrong, at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.

laweijfmvo

Kobe Bryant basically commuted by helicopter, when it was convenient. It may have even taken off and landed at his house, but probably not exactly at all of his destinations. Is a “flying car” fundamentally that much different?

UniverseHacker

I think the difference is that a helicopter is extremely technical to fly, requiring complex and expensive training, while an eVTOL is supposed to be extremely simple to fly. Also, an eVTOL is in principle really cheap to make if you just consider the materials and construction costs: probably eventually much cheaper than a car.

I was curious, so I looked up how much the cheapest new helicopters cost, and they are cheaper than an eVTOL right now: the XE composite is $68k new, and things like that can be ~$25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7-year-old Toyota Camry.

torginus

Nothing that flies in the air is that safe for its passengers or its surroundings - not without restrictions placed on it and a maintenance schedule that most people would not be comfortable following.

Most components are safety-critical in ways where their failure can lead to an outright crash, or to feeding the pilot false information that leads him to make a fatal mistake. Most cars can be run relatively safely even with major mechanical issues, but something as 'simple' as a broken heater on a pitot tube (or any other component) can lead to a crash.

Then there's the issue of weather: altitude, temperature, humidity, and wind speed can create an environment that makes flying either impossible, unsafe, or extremely unpleasant. Imagine flying into an eddy that stalls out the aircraft, making your ass drop a few feet.

Flying's a nice hobby, and I have great respect for people who can make a career out of it, but I'd definitely not get into these auto-piloted eVTOLs, nor should people who don't know what they are doing.

Edit: Also unlike helicopters, which can autorotate, and fixed wing aircraft, that can glide, eVTOLs just drop out of the sky.

mgfist

eVTOLs are going to be much more expensive to build than helicopters because they have far more stringent weight/strength requirements due to low battery energy density (relative to aviation fuel).

The idea is to have far cheaper operating costs. Electric motors are far more efficient than ICE, so you should have much cheaper energy costs. Electric motors are also simpler than ICE so you should have cheaper maintenance with less required downtime compared to helicopters.

Of course, most of this is still being tested and worked on. But we are getting closer to having these certified (the FAA just released the SFAR for eVTOLs, the first one since the 1940s).

xarope

But I'm sure running costs (aviation fuel), hangar costs, maintenance costs, and the cost of maintaining a pilot license are far more expensive, compared to driving a car.

input_sh

Can you imagine thousands of flying cars flying low over urban areas?

The skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), and the privacy implications would result in nobody wanting to have windows.

This is all more-or-less true for drones as well, but their weight is comparable to a toddler's, not a polar bear's. I firmly believe they'll never reach mass usage, but not because they're impossible to make.

UniverseHacker

That does sound truly awful. I already hate the noise of internal combustion cars and am looking forward to cars getting quieter.

xarope

I had a friend who used to (and still does) fly RC helicopters; that requires quite a bit of skill. Meanwhile, anybody can fly a DJI drone. I think that's what will transform "flying": when anybody, not just a highly skilled pilot, can "drive" a flying car (assuming it can be as safe as a normal car... which somehow I doubt).

Al-Khwarizmi

Yeah, as an NLP researcher I was reading the post with interest until I found that gross oversimplification about LLMs, which has been repeatedly proved wrong. Now I don't trust the comments and predictions on the other fields I know much less about.

sinuhe69

I always have a definitional problem with predictions. I mean, it's moot whether a specific prediction is right or wrong as long as it doesn't help us to understand the big picture and the trends.

Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From the business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise that amuse the folks; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side it's lucrative enough to produce and sell them en masse. Another question of interest is: what is the trend? What will the approximate cost of such a robot be? How many US households will adopt such a robot by which time, as they adopted washing machines and dishwashers? Will we see linear adoption or rather logistic adoption? These are more interesting questions than just whether I'm right or wrong.

SavageBeast

In reading this I come to wonder if the current advances in "AI" are going to follow the Self Driving Car model. Turns out the 80% is relatively easy to do, but the remaining 20% to get it right is REALLY hard.

brisky

Agree, and that is why the agent hype is going to bust. Agent means giving AI control. That means critical failure modes and the need for a human to constantly oversee the agent working.

thefaux

> Their imaginations were definitely encouraged by exponentialism, but in fact all they knew was that when they went from smallish to largish networks following the architectural diagram above, the performance got much better. So the inherent reasoning was that if more made things better, then more more would make things more better. Alas for them, it appears that this is probably not the case.

I recommend reading Richard Hamming's "The Art of Doing Science and Engineering." Early in the book he presents a simple model of knowledge growth that always leads to an s-curve. The trouble is that on the left, an s-curve looks exponential. We still don't know where we are on the curve with any of these technologies. It is very possible we've already passed the exponential growth phase with some of them. If so, we will need new technologies to move on to the next s-curve.
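Hamming's point can be made concrete with a minimal sketch (parameters here are arbitrary): well to the left of its midpoint, a logistic s-curve is numerically almost identical to a pure exponential, so early data cannot distinguish the two.

```python
import math

def logistic(t: float, ceiling: float = 1.0, rate: float = 1.0,
             midpoint: float = 10.0) -> float:
    """Logistic (s-curve) growth model."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in [2.0, 5.0, 8.0, 10.0, 12.0]:
    expo = math.exp(1.0 * (t - 10.0))  # the exponential the left tail mimics
    print(f"t={t:5.1f}  logistic={logistic(t):.6f}  exponential={expo:.6f}")

# At t=2 the two agree to within ~0.03%; by the midpoint (t=10) the
# exponential is already 2x too high, and past it the curves diverge wildly.
```

That gap between the left tail and everything after the midpoint is exactly why extrapolating from early "exponential" progress is so unreliable.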

kqr

> Systems which do require remote operations assistance to get full reliability cut into that economic advantage and have a higher burden on their ROI calculations

Technically true, but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that the operator could be fired entirely, but that one operator could tend 8 machines simultaneously instead of just one.

teractiveodular

All that verbiage about robotaxis and not a single mention of China, which by all accounts is well ahead of the US in deploying them on the road. (With a distinctly mixed track record, it must be said, but still.)

rexreed

I like Rodney Brooks, but I find the way he does these predictions obtuse and subject to a lot of self-congratulatory interpretation. He highlights something green that is "NET2021" and claims he was right, but does something related happening in 2024 mean he predicted it correctly, or is everything subject to arbitrary interpretation? Where are the bold predictions? These sound like fairly obvious predictions with a lot of wiggle room in deciding whether they were right or wrong.

gcr

NET2021 means that he predicted that the event would take place on or after 2021, so happening in 2024 satisfies that. Keep in mind these are six-year-old predictions.

Are you wishing that he had tighter confidence intervals?

rexreed

If the predictions are meant to be bold, then yes. If they're meant to be fairly obvious, then no.

For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. We can all say that if flying cars are ever in widespread use, it will happen no earlier than 2025. It could happen in 2060, and that NET 2025 prediction would still be true. He could mark it green in 2026, note that there are indeed no flying cars yet, and give his scorecard another point in the correct column. But is that really a prediction?

A bolder prediction would be, say "Within 1-2 yrs of XX".

So what is Rodney Brooks really trying to predict and say? I'd rather read about the necessary gating conditions for something significant and prediction-worthy to occur, or the intractable problems that would keep something from being possible within a predicted time, than read him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either; it's fairly obvious.

There's also a bit of an undercurrent of complaint in this long article: the not-as-sexy or hyped work he has done for all those years has gone relatively unrewarded while "undeserving types" get all the attention (and money). As such, many of the predictions and the commentary on them read more as rant than as prediction.

Denzel

Presumably you read the section where Brooks highlights all the forecasts executives were making in 2017? His NET predictions act as a sort of counter-prediction to those kinds of blindly optimistic, overly confident assertions.

In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.

riffraff

The NET estimate is supposed to be a counter to the irrational exuberance of media and PR. E.g. Musk says they'll get humans to Mars by 2020, and the counter is "I don't think that will happen until at least 2030."

kragen

"NET2021" means "no earlier than 2021". So, if nothing even arguably similar happened until 2024, that sounds like a very correct prediction.

Whether that's worth congratulating him about depends on how obvious it was, but I think you really need to measure "fairly obvious" at the time the prediction is made, not seven years later. A lot of things that seem "fairly obvious" now weren't obvious at all then.

vikrantrathore

For me these predictions are a way of being aware of how progress has happened historically, but they will not lead to any breakthrough. I am not in the skeptics' camp, so I still like hype cycles: they create an environment for people to push boundaries, and they sometimes help untested ideas and things get explored that might not have been without the hype. I am in the camp of people who are positive, as George Bernard Shaw was in these 2 quotes:

  1. A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
  2. The reasonable person adapts themselves to the world: the unreasonable one persists in trying to adapt the world to themself. Therefore all progress depends on the unreasonable person. (Changed man to person as I feel it should be gender neutral)
In hindsight, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who either doubt things or lack faith that they will happen quickly enough) and the risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature.

It is possible AGI might replace humans in the short term, and then new kinds of work will emerge and humans will again find something different. There is always disruption with new changes; some survive and some can't. Even if nothing much happens, it's worth trying, as quote 1 says.

sgt101

I feel a counter is that hyping, and going along with hype, leads to substantial misallocation of capital, and that leads to human misery.

How much money has been burned on robotaxis that could have been spent on incubators for kids?

kookamamie

It's far too rambly and vague to make any sense of the achieved results, I think.