Self-Driving Teslas Are Fatally Rear-Ending Motorcyclists More Than Any Other

xbmcuser

The point I soured on Musk was when he ditched radar/lidar and tried to go with cameras alone. That made me realize he is not the genius he is made out to be but instead a fraud/charlatan, and over the years his statements on different topics have only hardened that belief. Why the fuck would you want AI cars to merely match humans? You should want them to be many times better, and radar/lidar/sonar-type tech makes them better.

toddmorey

This was purely an effort to improve margins on the cars, which they then tried to sell with other kinds of rationale. From the way I've seen him operate his companies, treat his employees, and now work with the government, he has a high tolerance for risk paired with a very low tolerance for perceived inefficiencies, plus too little patience to fully understand the problem.

He really embodies the ethos of "move fast and break things". So let's fire 80% of the staff, see what falls down, and rehire where we made "mistakes". I really think he has an alarmingly high threshold for the number of lives we can lose if it accelerates the pace of progress.

kelipso

It's pretty funny because lidar used to cost many thousands of dollars and now it's down to hundreds, and they're still sticking to just regular cameras. Funny in the sense that Teslas are many times more dangerous than lidar-equipped cars, but anyway.

labrador

Yes, it's become clear to me that it's only a matter of time before Tesla adds LiDAR and says that was the plan all along. Meanwhile, what better way to test vision-only than by releasing it to the beta-testing public? Elon Musk knows people will die, but in his ketamine-addled brain he's convinced himself that a vision-only self-driving Tesla is safer than a human driver, so it's OK; people were going to die anyway, statistically speaking.

4d4m

How much is a solid state lidar now?

Ygg2

To quote Black Adder: Some of you may die, but some of us will live!

throw0101d

"Some of you may die, but it's a sacrifice I am willing to make." — Lord Farquaad, Shrek, https://www.imdb.com/title/tt0126029/characters/nm0001475?it...

mindslight

The sour irony is that "move fast and break things" was formulated in the low-stakes world of web entertainment software, which was able to become so prominent precisely because of the stability of having our more pressing needs predictably taken care of (for the most part).

ben_w

Yup, it's a product development process known as "Muntzing": https://en.wikipedia.org/wiki/Muntzing

While it was absolutely vital to getting the costs of the original Tesla Roadster and SpaceX launches way down… it can only work when you are able to accept "no, stop" as an answer.

Rockets explode when you get them wrong; you can't miss it.

Cars crashing more than other models? That's statistics, which can be massaged.

Government work? There's always someone complaining no matter what you do, very easy to convince yourself that all criticism is unimportant, no matter how bad it gets. (And it gets much worse than the worst we've actually seen from DOGE and Trump — I don't actually think they'll get to be as bad as the Irish Potato Famine, but that is an example of leaders refusing to accept what was going on).

tzs

After reading the Wikipedia article on Muntzing it is worth also reading the article about Muntz himself [1]. He may have been the first person to sell TVs by diagonal screen size instead of screen width.

[1] https://en.wikipedia.org/wiki/Madman_Muntz

rayiner

[flagged]

energy123

Multiple independent sensor streams are like having multiple pilots fly a plane instead of just one pilot. The chance of a fatal error (a false-negative identification of an object near the car) decreases from p to p^n. The size of that decrease is not intuitive: if p=0.0001, it becomes a much smaller number with the introduction of a second pilot or second independent sensor stream (0.0001^2 = 0.00000001).

Now, the errors are not all independent, so it's not quite as good as that, but many classes of errors are independent (e.g. two pilots having heart attacks versus just one), so you can get pretty close to that p^n. Musk did not understand this. He's just not as smart as he's made out to be.
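A rough sketch of that arithmetic in R (illustrative numbers, assuming fully independent streams, which as noted above they aren't quite):

    p <- 1e-4          # miss probability of a single sensor stream
    n <- 1:3           # number of independent streams
    data.frame(streams = n, p.miss = p^n)

      streams p.miss
    1       1  1e-04
    2       2  1e-08
    3       3  1e-12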

jfengel

I misread your first sentence, in that having multiple pilots is not necessarily a good thing, just as having multiple cooks isn't.

You have a main and a backup pilot, but either one must be 100% capable of doing it on their own. The backup is silently double checking, but their assignments are more about ensuring that the copilot doesn't just check out because they're human. If the copilot ever has to say "don't do that it's going to kill us all" it's a crisis.

Lidar is a good backup, but the car must be able to work without it. You can't drive with just lidar; it's like driving by Braille. Lidar can't even read a stop light. If the car cannot handle the road with just the visuals, it should not be allowed on the road.

I concur that it is terrifying that he was allowed to go without the backup that stops it from killing people. Human co-drivers are not a good enough backup.

But he's also not wrong that the visual system must be practically perfect -- if it's possible at all. Which it surely isn't yet.

Retric

Cars, unlike aircraft, don't need to keep moving forward to maintain safety. If the lidar normally has very good uptime but happens to break, you need to come safely to a complete stop, and that's basically it.

Trusting vision systems for 30 seconds or even 30 minutes over the lifetime of a car is very different than trusting them for 30,000 hours. So for edge cases sure, add a “this is an emergency drive to a hospital without LiDAR” mode but you don’t need to handle normal driving without them.

travisjungroth

> You have a main and a backup pilot, but either one must be 100% capable of doing it on their own. The backup is silently double checking, but their assignments are more about ensuring that the copilot doesn't just check out because they're human.

This is not how flying works in a multi-crew environment. It’s a common misconception about the dynamic.

Both pilots have active roles. Pilots also generally take turns at manipulating the flight controls ("flying the airplane") from one flight to the next.

Havoc

>Multiple independent sensor streams is like having multiple pilots fly a plane.

Don't think that's the right analogy. Realistically you'd aim to combine them meaningfully, a bit like how two eyes give you depth perception.

You assume 1+1 is less than 2, when really you'd aim for more than 2.

dawnerd

That's the thing: they're already merging multiple data streams (cameras) and dealing with visual anomalies. It's pretty nonsensical that they can't figure out different sensors when other companies are doing it just fine.

Izkata

I remember this being his reasoning for it, that the lidar should be unnecessary if they could get multi-camera depth perception to work like it does in humans.

null

[deleted]

vitus

I think that's an oversimplification, and helps mainly in the case where one sensor is totally offline.

If you have two sensors, one says everything's fine but the other says you're about to crash, which one do you trust? What if the one that says you're about to crash is feeding you bad data? And what if the resulting course correction leads to a different failure?

I'd hope that we've learned these lessons from the 737 Max crashes. In both cases, one sensor thought that the plane was at imminent risk of stalling, and so it forced the nose of the plane down, thereby leading to an entirely different failure mode.

Now, of course, having two sensors is better than just having the one faulty sensor. But it's worth emphasizing that not all sensor failures are created equal. And of course, it's important to monitor your monitoring.

breadwinner

> If you have two sensors, one says everything's fine but the other says you're about to crash, which one do you trust?

Neither. You stop the car and tow it to the nearest repair facility. (Or have a human driver take over until the fault is repaired.)

Spooky23

The Elon magic is getting people to bikeshed bullshit like this and ignore the bigger issues.

You don’t gouge out your ears because you hear something walking around at night that doesn’t appear to be accurate. As a human, your executive function makes judgements based on context and what you know. Your eyesight is degraded in the dark, so your brain pays more attention to unexpected sound.

The argument for lidar, sonar or radar isn’t that cameras are “bad”, it’s that they perform very well in circumstances where visual input may not. As an engineer, you have an ethical obligation to consider the use case of the product.

It's not at all like the 737 MAX issue: many companies have successfully implemented these features. I have an almost decade-old Honda that uses camera and radar sensors to manage adaptive cruise and lane-keeping features flawlessly.

In the case of Tesla and their dear leader, they tend to make dogmatic engineering decisions based on personal priorities. They then spackle in legal and astroturf marketing bullshit to dodge accountability. The folly of relying on cameras or putting your headlights inside of a narrow cavity in the car body (cybertruck) is pretty obvious if you live in a place that has winter weather and road salt.

agubelu

The difference with the 737 Max crashes is that there was only one sensor feeding data to the system, not two. If there's a discrepancy between the two sensors, disconnect the automation and let the human take control. And unlike planes, you can safely stop a car in almost all scenarios.

timschmidt

I thought the consensus was that the move away from lidar was driven by a lack of supply relative to desired EV production numbers, and that the things said about making vision-only work have all been about coping with that reality.

breadwinner

Lidar was expensive a few years ago, but it is around $1K now.

https://www.wired.com/story/lidar-cheap-make-self-driving-re...

artursapek

They didn't let lack of battery supply stop them.

searealist

At the time, Lidar cost about $80k for the main unit.

NoTeslaThrow

I'm not really following the attempt at logic here but under this logic surely you'd WANT multiple types of sensors for all the same reasons.

monktastic1

Yes, that's what it's saying.

ravenstine

I dunno because I'm pretty sure that each sense a person loses increases their risk of mortality. If what you are saying is true, then shouldn't we all be better off wearing blinders, a clothespin over our noses, and rely on echolocation?

Gud

Not at all. The car should be smart enough to figure out which sensor is faulty. Otherwise, the car should not be driving itself at all.

It's more like, a pilot has access to multiple sensors. Which they do.

swid

The comment you are replying to is saying the chance of error decreases with more pilots since there is redundancy. They are not saying it’s like too many chefs in a kitchen.

null

[deleted]

null

[deleted]

Volundr

I think you and GP are in agreement. If you have a sensor with a really bad 0.5 chance of failure, by having two your chance of failure decreases to 0.5^2 = 0.25.

throwaway31131

Does that mean a Tesla only has one camera? If not, you're dealing with "multiple independent sensor streams" regardless.

liendolucas

> instead a fraud/charlatan, and...

Just see him talking about things at Neuralink. Musk wouldn't exist if it weren't for the people working for him. He's a clown who made it to the top in a very dubious way.

glitchc

Not dubious. He was rich to begin with and used that wealth to make more.

ravenstine

Yes, though he wouldn't be relevant if people didn't believe he is a genius.

rstuart4133

> Musk wouldn't exist if it weren't for the people working for him.

I've decided Musk's core talent is creating and running engineering teams. He's done it many times now: Tesla, SpaceX, PayPal, even Twitter.

It's interesting because I suspect he isn't a particularly good engineer himself, although the only evidence I have for that is that he tried to convert PayPal from Linux to Windows. His addiction to AI getting results quickly isn't a good look either. To make the product work in the long term, the technique has to get you 100% of the way there, not the 70% we see in Tesla and now in DOGE. He isn't particularly good at running businesses either, as both Twitter and his solar roofs show.

But that doesn't matter. He's assembled lots of engineering teams now, and he just needs a few of them to work to make him rich. Long ago it was people who could build train lines faster and cheaper than anyone else that drove the economy, then it was oil fields, then I dunno - maybe assembly lines powered by humans. But now wealth creation is driven by teams of very high level engineers duking it out, whether they be developing 5G, car assembly lines or rockets. Build the best team and you win. Musk has won several times now, in very different fields.

plun9

What's wrong with him talking about things at Neuralink?

tim333

And while I haven't seen the talk, Musk never claimed to be a neurologist or to have expertise in that area.

dgrin91

I see this get thrown around once in a while and I really don't get it. Isn't this true of basically every leader? Gates, Jobs, Buffett, Obama: they all wouldn't exist without their teams. Isn't that just obvious? Isn't one of the important markers of a good leader the ability to build a good team?

mmooss

> I see this get thrown around once in a while and I really don't get it. Isn't this true of basically every leader? Gates, Jobs, Buffett, Obama: they all wouldn't exist without their teams. Isn't that just obvious? Isn't one of the important markers of a good leader the ability to build a good team?

The others don't claim the extremes of power and genius, based to a large extent on what their teams do. They also build good teams - look at DOGE, for example.

tim333

Buffett did pretty well before having a team. I still think Musk is quite good at physics/engineering-type stuff. It's not everyone who can start with modest money and transform industries (rockets and EVs, mostly).

pfannkuchen

You do want them to be better than humans, but vision quality is not really a major source of human accidents. Accidents are typically caused by driving technique, inattention, or a failure to accurately model the behavior of other drivers.

Put another way - would giving humans superhuman vision significantly reduce the accident rate?

The issue here is that the vision based system failed to even match human capabilities, which is a different issue from whether it can be better than humans by using some different vision tech.

crazygringo

> Put another way - would giving humans superhuman vision significantly reduce the accident rate?

Yes? Incredibly?

If people had 360° instant 3D awareness of all objects, that would avoid so many accidents. No more blind spots, no more missing objects because you were looking in one spot instead of another. No more missing dark objects at night.

It would be a gigantic improvement in the rate of accidents.

shepherdjerred

Idk, many people are just bad at driving due to impatience.

They don’t leave enough space/time to react even if they did have enough awareness

pfannkuchen

You seem to be missing my point.

Looking in one spot instead of another is included in what I’m calling “attention”. Of course paying attention to everything all the time would be and is a huge improvement. That is orthogonal to the type of vision tech being used. All approaches used in self driving systems today look everywhere all the time.

maxerickson

The improved sensors improve awareness; improving the situational awareness of human drivers would have a huge impact.

layer8

Humans have limited awareness bandwidth. I’m doubtful that improved “sensors” would change the fact that one can only take in and focus on very few things at a time. If anything, filtering down the input to the relevant features would consume more brain resources and possibly take longer.

NoTeslaThrow

> but vision quality is not really a major source of human accidents.

Never driven before?

gruez

I've driven before, and agree with the OP. Now what?

null

[deleted]

harimau777

Isn't radar/lidar less like super vision and more like spidey sense? I'd love to give human drivers an innate sense of exactly how far away things are and how fast they are closing.

flashman

seems like vision quality might be a major source of robot accidents

luckylion

Will limiting auto-pilots to human-level vision not increase their accident rate?

sudosysgen

Human eyes are in many ways far superior to reasonably priced vision sensors. This isn't giving humans superhuman vision; it's changing the tradeoffs human vision has (without changing the cognitive process that it coevolved with to begin with, and which is the most important part of why we get into accidents).

There is no affordable vision system that's as good as human vision in key situations. LiDAR+vision is the only way to actually get superhuman vision. The issue isn't the choice of vision system, it's choosing vision itself, and besides, the lesson from the human sensory system is to have sensors that go well with your processing system, which again would mean LiDAR.

If humans could integrate a LiDAR-like system where we could be warned of approaching objects from any angle and accurately gauge the speed and distance of multiple objects simultaneously, we would surely be better drivers.

hughw

Yes?

rayiner

> The point I soured on Musk was when he ditched radar/lidar and tried to go with cameras alone. That made me realize he is not the genius he is made out to be but instead a fraud/charlatan, and over the years his statements on different topics have only hardened that belief.

Yeah, he was arguably wrong about one thing so his building both the world's leading EV company and the world's leading private rocket company was fake.

As they say, the proof of the pudding is in the eating. Between Tesla, SpaceX, and arguably now xAI, the probability of Musk's genius being a fluke or fraud is close to zero.

fzeroracer

> probability of Musk's genius being a fluke or fraud is close to zero.

We already know he's an objective fraud because he literally cheats at video games and was caught cheating. As in, he hired people to play for him and then pretended the accomplishments were his own. Which maps very well to literally everything he's done.

watwut

He was not wrong about one thing; he was frequently wrong. He is a highly charismatic bullshitter who gets away with fraud and lies, and who can secure help from people when he needs it.

But he is frequently wrong; it just does not matter. He was occasionally right, like with Tesla back then.

stephenapple

Of total road deaths per year, 14% are motorcyclists, and the 5 Tesla fatalities are 0.0008% of motorcycle deaths. I think a more pertinent question is how they got us arguing over this so quickly. Most posts are just anti-Elon. Boggling...

Workaccount2

It's because the sensor suite for lidar is expensive and HD cameras are basically a commodity at this point.

So if your goal is to pump out $20k self driving cars, then you need cameras to be good enough. So the logic becomes "If humans can do it, so can cameras, otherwise we have no product, no promise."

breadwinner

Cameras have poor dynamic range and can be easily blinded by bright surfaces. While it is true that humans do fine with only eyes, our eyes are significantly better than cameras.

More importantly, expectations are higher when an automated system is driving the car. It is not sufficient if, in aggregate, self-driving cars have fewer accidents. If you lose a loved one in an accident where the accident could have been easily avoided if a human was driving, then you're not going to be mollified to hear that in aggregate, fewer people are being killed by self-driving cars! You'd be outraged to hear such a justification! The expectation therefore is that in each individual injury accident a human clearly could not have handled the situation any better. Self-driving cars have to be significantly better than humans to be accepted by society, and that means it has to have better-than-human levels of vision (which lidars provide).

josephcsible

How many strangers' lives is a loved one's life worth? If your answer is anything other than "1", how does that square with other people having their own loved ones, and your loved ones being strangers to them?

sudosysgen

I wish this myth would die. Anyone who has picked up a camera knows it isn't true: there are many things even very expensive cameras can't do that humans can. Specifically, the mix of high acuity when needed but a wide field of view, low-light performance on moving subjects, and tracking of fast objects is something only a camera system costing tens of thousands of dollars can do, and those are all relevant to driving.

lukeschlather

I drive a beater used car, and I've contemplated installing aftermarket lidar on it; I don't want to drive as a human relying only on being able to see everything by turning my head.

stephc_int13

I fully agree on this.

Computer vision has turned out to be a very tough nut to crack, and that should have been visible to anyone doing serious work in the field for at least the past 15 years.

In any case, any safety-critical system should be built with redundancy in mind, with several subsystems working independently.

Using more and better sensors is only a problem when building a cost-sensitive system, not a safety-critical one, and very often those sensors are expensive because they are niche, which can be mitigated with mass scale.

fsh

This problem was solved more than a decade ago by radar sensors (standard on many mid-range cars at the time). They detect imminent collisions with almost perfect accuracy and very few false positives. Having better sensor data is always going to beat trying to massage crappy data into something useful.

redox99

Radars are not as good as you think. They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.

The "with almost perfect accuracy and very little false positives" part is not true.

If you look at EuroNCAP data, you'll see that most cars are not close to 100 in the Safety Assist category (and Teslas, with just vision, are among the top). And these EuroNCAP tests are fairly easy and idealized. So it's clearly not a solved problem, as you portray it.

https://www.euroncap.com/en/ratings-rewards/latest-safety-ra...

throwaway31131

> They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.

Radar can absolutely detect a stationary object.

The problem is not, "moving or not moving", it's "is the energy reflected back to the detector," as alluded to by your second qualification.

So something that scatters or absorbs the transmitted energy is hard to measure with radar because the energy doesn't get back to the detector completing the measurement. This is the guiding principle behind stealth.

And, as you mentioned, things with this property naturally occur. For example, trees with low hanging branches and bushes with sparse leaves can be difficult to get an accurate (say within 1 meter) distance measurement from.

redox99

They can detect stationary objects, yes. But there's so much clutter from things like overpasses, road signs, and other objects that confuse the radar that, for things like adaptive cruise control, stationary objects are often intentionally filtered out or assigned much lower priority. So you detect moving objects (which stand out because of their Doppler shift).
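A toy illustration of that filter in R (made-up numbers): a stationary object ahead closes at exactly your own speed, so detections closing at roughly ego speed get discarded as clutter:

    ego_speed <- 30                                   # m/s
    det <- data.frame(object  = c("overpass", "sign", "car ahead"),
                      closing = c(30.0, 29.8, 5.0))   # Doppler closing speed, m/s
    # Keep only detections whose closing speed differs from ego speed.
    det[abs(det$closing - ego_speed) > 2, ]

         object closing
    3 car ahead       5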

potato3732842

Imagine if you could only see in a very narrow portion of the visible light spectrum, like only green or something. That's kind of how radar "sees" (I'm grossly over-simplifying here but point is it doesn't see the way we do).

It's hard to detect something sticking off the back of a truck, or a motorcycle behind a vehicle, without false-positive triggering off other stuff and panic-braking at dumb times, something early systems (generically, not any particular OEM) were known for, which is why they were mostly limited to warnings, not actual braking.

And while one can make bad-faith comments all day about that not technically being the fault of the system doing the braking, allowing such systems to proliferate would be a big class-action lawsuit, and maybe even a revision of how liability is handled, waiting to happen.

hwillis

Backing this up: automotive radar uses a band at ~80 GHz. The wavelength is ~3.7 millimeters, which lets you get incredible resolution. Not quite as good as the TSA airport scanners that can count your moles through your shirt, but good enough to see anything bigger than a golf ball.

For a long, long time automotive radar was a pipe dream technology. Steering a phased array of antennas means delaying each antenna by 1/10,000ths of a wave period. Dynamically steering means being able to adjust those timings![1] You're approaching picosecond timing, and doing that with 10s or 100s of antennas. Reading that data stream is still beyond affordable technology. Sampling 100 antennas 10x per period at 16-bit precision is 160 terabytes per second, 100x more data than the best high-speed cameras. Since the Fourier transform is O(n log n), that's tens of petaflops to transform. Hundreds of 5090s, fully maxed out, before running object recognition.

Obviously we cut some corners instead. Current techniques way underutilize the potential of 80 GHz. Processing power trickles down slowly and new methods are created unpredictably, but improvement is happening. IMO radar has the highest ceiling potential of any of the sensing methods, it's the cheapest, and it's the most resistant to interference from other vehicles. Lidar can't hop frequencies or do any of the things we do to multiplex radar.

[1]: In reality you don't scan left-right-up-down like that. You don't even use just an 80 GHz wave, or even just a chirp (a pulsing wave that oscillates between 77-80 GHz). You direct different beams in all different directions at the same time, and more importantly you listen from all different directions at the same time.
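The back-of-envelope figures above check out; a quick sketch in R, using the comment's own assumed parameters:

    f  <- 80e9                 # carrier frequency, Hz
    c0 <- 299792458            # speed of light, m/s
    c0 / f                     # wavelength: ~0.0037 m, i.e. ~3.7 mm
    antennas <- 100
    # 10 samples per period, 2 bytes (16-bit) per sample, per antenna:
    f * 10 * 2 * antennas      # 1.6e+14 bytes/s, i.e. ~160 TB/s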

porphyra

Agreed. Also, the fact that current automotive radars return a point cloud (instead of, say, a volumetric density grid) is sad. But it will be a while before processing power can catch up, and by the time you have the equivalent of hundreds of 5090s on your car, you will also be able to drive flawlessly by running a giant transformer model on vision inputs.

plun9

This isn't true. You can try using adaptive cruise control with lane-keeping on a radar-equipped car on an undivided highway. Radar is good at detecting distance and velocity, but can't see lane lines. In order to prevent collisions, you would need to know precisely the road geometry and lane positions, which may come from camera data, and combine that information with the vehicle information.

jamincan

I do this all the time with no problem at all. I drive a 2023 VW Taos, for what that is worth.

outside1234

But what if you were on Ketamine and thought you could resolve it with a camera?

mitthrowaway2

Radar is great for detecting cars, not as great for detecting pedestrians.

whiplash451

Are we sure Teslas don't have radars? We know they don't have lidars, but that's irrelevant.

daemonologist

Yes, they removed radar from their vehicles in 2021: https://www.tesla.com/support/transitioning-tesla-vision

(Also I wouldn't say it's _irrelevant_ that they don't have lidar, as if they did it would cover some of the same weaknesses as radar.)

pclmulqdq

Tesla Model 3's don't have radars. They had them at the beginning of the run, and removed them.

null

[deleted]

rad_gruchalski

Yes. We are sure.

BeetleB

The other comment pointing this out has been (ridiculously) flagged, so I'll repeat the point that was made:

The analysis is useless if it doesn't account for the base rate fallacy (https://en.m.wikipedia.org/wiki/Base_rate_fallacy)

The first thing I thought before even reading the analysis was "Does the author account for it?" And indeed he makes no mention that he did.

So after reading the whole article I have no idea whether Tesla's automatic driving is any worse at detecting motorcycles than my Subaru's (which BTW also uses only visual sensors).

Antidisclaimer: I hate both Teslas and Musk. And my hate for one is not tied to the other.

btrettel

The base rate was discussed early in the article, but not by that name:

> It’s not just that self-driving cars in general are dangerous for motorcycles, either: this problem is unique to Tesla. Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame.

PaulRobinson

That gives you absolute rate, but not relative rate.

There are not many other cars out there (in comparison) with a self-driving mode. There are so many Teslas in the world driving around that I think you'd have to considerably multiply all the others combined to get close to that number.

As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know) is how that adjusts for population size. I'd want to see a motorcycle-fatality-rate-per-auto-driver-mile number, and even then I'd want it adjusted for the prevalence of motorcycles in the local population: the number in India, Rome, London and Southern California varies quite a bit.

polygamous_bat

> As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know) is how that adjusts for population size.

This puts the burden on companies that may hesitate to put their "self driving" systems out there because they have trouble detecting motorcyclists. There is a solid possibility that self-driving isn't being rolled out by others because they have a higher regard for human life than Tesla and its exec.

azinman2

“ADAS self-driving technology”

ADAS is fairly common. It was in my VW and BMW, and I’m certain many other cars have it too.

BeetleB

That is not addressing the base rate.

To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.

Even more illustrative, if 10 people on motorcycles had died from Teslas, and 1 person had died from that sole non-Tesla, then that non-Tesla would be deemed much, much more dangerous than Tesla.
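Putting that second hypothetical into numbers (all invented for illustration), in R:

    deaths <- c(tesla = 10, other = 1)
    fleet  <- c(tesla = 1e6, other = 1)   # hypothetical vehicles on the road
    deaths / fleet                        # per-vehicle fatality rate

    tesla other
    1e-05 1e+00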

btrettel

It does address the base rate, though not in a fully satisfactory way. You're correct to point out the number of Teslas on the road vs. other vehicles, as is this person who mentions driving hours: https://news.ycombinator.com/item?id=43601681

The replies to my comment seem to me to be addressing the question of what the appropriate reference class is, not the base rate fallacy.

polygamous_bat

> To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.

However, in such a case, “base rate fallacy” would prevent you from blaming Tesla even if it had a 98% fatality rate. How do you square that? What happens if other companies aren’t putting self driving cars out yet because they aren’t happy with the current rate of accidents, but Tesla just doesn’t care?

outer_web

To add, even if the fatality numbers are small, accelerating into a person is a pretty outrageous failure mode.

BeetleB

My Subaru has done this many times, and with regular cars, not motorcycles.

93po

Another part of the problem: Waymo, for example, doesn't provide motorcycle specific stats. They only provide collisions with vehicles, bicycles, and pedestrians. There's no breakdown of vehicle type. So the basis of this article is already bullshit and likely done just for "space man bad" reasons

sidibe

Well, we can easily come up with the number of Waymo-involved motorcycle fatalities from such a breakdown: it's bounded by the total fatalities, which is 0.

jsight

It is worse than that. The other ADAS providers do not all automatically report this stuff. He's comparing five meticulously gathered reports to a self-reporting system that drops the vast majority of incidents.

It is a bad article.

porphyra

People will gobble up all kinds of bad articles to reinforce their anti-Tesla bias. This reminds me of the "iSeeCars" study that somehow claimed that teslas have the highest fatality rate per mile travelled, even though:

* They basically invented the number of miles travelled, which is off by a large factor compared to the official figure from Tesla

* If you take into account the fact that the standard deviation is proportional to the square root of the number of fatal accidents, the comparison has absolutely no statistical significance whatsoever

buyucu

Base rate fallacy is not relevant here.

There are 5 dead people who would be alive today had tesla not pushed this to the market.

ezfe

Well that makes no sense because then you could say there are X people who would be alive today if they had not ridden a motorcycle.

lynndotpy

I don't like Tesla, and the premature "FSD" announcement was a huge setback for AV research. An AV without lidar killing motorcyclists is not surprising, to say the least. And this is a damning report.

That said -- and I might have missed this if it was in the linked sources, I'm on mobile -- what is the breakdown of other (supposed) AVs' adoption currently? What other types of crashes are there? Are these 5+ fatalities statistically significant?

cpncrunch

“This issue is unique to Tesla. Other self-driving manufacturers have logged zero motorcycle fatalities in the same time frame”

Doesn't give the number of driving hours for Tesla vs. others, though.

robwwilliams

Yep! And no stats at all. Pathetic article.

rad_gruchalski

Do not believe any statistic you didn’t manipulate yourself.

FireBeyond

Wait til you hear that Tesla doesn’t count fatalities in its accident data.

Or that any collision that doesn’t involve airbag deployment is not actually an accident, according to Tesla.

You were saying something about stats?

honeybadger1

yes, it's missing the millions of FSD miles vs the minutes of drives of all of the rest combined.

lynndotpy

This is not the case. Waymo alone has claimed 50 million rider-only miles as of December 2024. That would mean Waymo travelled at least 833,000 miles per minute on its driverless miles! (Unless you mean "minutes" in the literal sense, which can be any amount of time, and would apply to every vehicle.)

It's worth noting that Waymo's rider-only miles is a stronger claim than "FSD" miles. "Full Self-Driving" is Tesla branding (and very misleading: it expects an attentive human behind the wheel, ready to take over in a split second).

buyucu

at least 5 people would be alive today if tesla had not pushed unreliable technology on the market.

christophilus

I’m in the same camp. I think self driving shouldn’t be allowed as it currently stands. But, this is probably the XKCD heatmap phenomenon.

How many other self-driving vehicles are on the road vs. Tesla? What percentage of traffic consists of motorcycles in the places where those other brands have deployed vs. in Florida, etc.?

https://www.xkcd.com/1138/

para_parolu

Would also be interesting to see: what % of rear-end crashes are caused by AVs vs. human drivers?

bryanlarsen

Prediction: over the next 4 years we're going to see lots of stories like this. Some of the stories will be fair, some won't. "Musk bad" stories get clicks. The next administration will be very anti-Tesla. Coincidentally, by that time self-driving will be considered mature enough to warrant proper regulation, rather than the experimental regulation status we have now.

Combine the two, and the regulations will be written such that it excludes Tesla and includes Waymo. Not by name, just that the safety regulations will require a safety record better than Tesla's but worse than Waymo's. Likely nobody but Waymo will have that record, and now nobody will be able to because they won't have access to the public roads to attain it.

This might be the most extreme regulatory lock-in monopoly we've ever seen.

pwg

> Combine the two, and the regulations will be written such that it excludes Tesla and includes Waymo.

The solution seems easier, if only the regulators would pick up on it.

Under the current human driven auto regime, it is the human that is operating the machine who is liable for any and all accidents.

For a self-driving car, that human driver is now a "passenger". The "operator" of the machine is the software written (or licensed) by the car maker. So the regulation that assures self-driving is up-to-snuff is:

When operating in "self driving" mode, 100% of all liability for any and all accidents rests on the auto manufacturer.

The reason the makers don't seem to care much about the safety of their self driving systems is that they are not the owners of the risk and liability their systems present. Make them own 100% of the risk and liability for "self driving" and all of a sudden they will very much want the self-driving systems to be 100% safe.

bryanlarsen

Being both the owner and operator, Waymo already has full liability. It's a good proposal, but I don't think it's sufficient if your secondary goal is "screw Elon".

Nor is it sufficient to ensure that self driving is significantly safer than human drivers. I don't think the public wants "slightly safer than humans".

bliteben

Seems like the obvious solution to this is: if you collect driving data on public highways, the data has to be made available to the public. If you collect the data on private highways, you are free to keep it private. If you don't intend to use it in a product on public highways, it can remain private.

Doesn't even seem that crazy when you consider the government is already licensing them to be able to use their private data anyway. Biggest issue is someone didn't set it up this way from the start.

skybrian

Maybe, but why couldn’t a competitor prove their system works using safety drivers?

If a competitor resold their system to other car companies, another possible scenario might be a duopoly like Apple versus Android.

bryanlarsen

By that time Waymo will likely have over a billion miles of data, and you're likely going to need similar amounts of mileage to prove that your safety margin is >> 10X better than human.

tim333

>ultimate regulatory lock in monopoly...

In fairness to the regulators they have been pretty reasonable so far.

dangjc

Then Tesla should pay for backup drivers until their safety record meets the bar.

dopidopHN

It'd imply a next administration, and the enactment of new regulation.

crazygringo

> Not by name, just that the safety regulations will require a safety record better than Tesla's but worse than Waymo's.

That's not a bad thing if Tesla is significantly worse than Waymo. That's desirable.

The solution here seems like it would be for Tesla to become as safe as Waymo. If they can't achieve that, that's on them. Unfair press doesn't cause that.

I mean, I care about not dying in a car accident. If Tesla is less safe, and this leads to people taking safer Waymos instead, I can't see that as anything but a good thing. I don't want to sacrifice my life so another company can put out more dangerous vehicles.

smrtinsert

What's the hard part? Maybe Tesla should stop pretending it doesn't need lidar?

senkora

I see a lot of people saying that this isn't statistically significant. I think that that is probably true, but I also think that it is important to do the statistical test to make sure:

    # Candidate exposure ratios: Tesla level-2 ADAS miles relative to all other makers combined.
    tesla.mult = c(1/(5:2), 1:5)
    # Exact Poisson rate comparison: 5 fatalities over tesla.mult units of exposure vs. 0 over 1 unit.
    data.frame(tesla.mult = tesla.mult,
               p.value = sapply(tesla.mult, function(tesla.mult) {
                 poisson.test(c(5, 0), c(tesla.mult, 1))$p.value
               }))

      tesla.mult      p.value
    1  0.2000000 0.0001286008
    2  0.2500000 0.0003200000
    3  0.3333333 0.0009765625
    4  0.5000000 0.0041152263
    5  1.0000000 0.0625000000
    6  2.0000000 0.1769547325
    7  3.0000000 0.3408203125
    8  4.0000000 0.5904000000
    9  5.0000000 1.0000000000
tesla.mult is how many times more total miles Teslas have driven with level-2 ADAS engaged compared to all other makers. We don't have data for what that number should be because automakers are not required to report it. I think that it is probably somewhere between 1/5 and 5. If you believe that the number is more than 1, then the result is not statistically significant.

jamincan

Even though other manufacturers may not be reporting these numbers, Level 2 ADAS systems are pretty common as far as I can tell. Wouldn't any vehicle with adaptive cruise control and lane-keep assist be considered Level 2 ADAS?

senkora

I’m not quite sure where the line is between Level 1 and Level 2 ADAS. Wikipedia says this:

> ADAS that are considered level 1 are: adaptive cruise control, emergency brake assist, automatic emergency brake assist, lane-keeping, and lane centering. ADAS that are considered level 2 are: highway assist, autonomous obstacle avoidance, and autonomous parking.

https://en.m.wikipedia.org/wiki/Advanced_driver-assistance_s...

I think that Level 2 requires something more than adaptive cruise control and lane-keep assist, but that several automakers have a system available that qualifies.

My intuition is that there are more non-Tesla cars sold with Level 2 ADAS, but Tesla drivers probably use the ADAS more often.

So I don’t have high confidence what tesla.mult should be. I wish that we had that data.

kentonv

Hmm. The article's source is NHTSA data that goes up through February 2025 -- pretty recent.

The article cites 5 motorcycle fatalities in this data.

Four of the five were in 2022, when Tesla FSD was still in closed beta.

The remaining incident was in April 2024.

(The article also cites one additional incident in 2023 where the injury severity was "unknown", but the author speculates it may have been fatal.)

I dunno, to me this specific data suggests a technology that has improved a lot. There are far more drivers on the road using FSD today than there were in 2022, and yet fewer incidents?

erikpukinskis

I’m normally quite skeptical of these kinds of “Tesla More Dangerous Than Other Brands” headlines since they tend to be B.S.

But this seems like a pretty legitimate accusation, and certainly a well researched write-up at the very least.

Workaccount2

What gets me is that there are no other brands in Tesla's league. Tesla is the only consumer car that has "FSD" level ability.

The competitors have to use pre-mapped roads, and availability is spotty at best. There is also risk, as Chevy has already deprecated their first-gen "FSD", leaving early adopters with gimped ability and shut out from future expansions.

sroussey

FSD meaning level 2? Lots of those.

Workaccount2

Please, I'd love to know because I want an FSD like system and don't want to buy a Tesla.

As far as I am aware, everyone else's offerings only work in pre-mapped areas, i.e. Chevy's system only covers half my commute.

dboreham

That's because the competitors are aiming to provide a feature that works (all the time, consistently).

jajko

You mean beta-testing on millions of users? Yes, traditional manufacturers are very wary of class-action suits and generally have some reputation to uphold. Tesla, not so much... move fast, break things, kill a few people, who cares, current profit is all that matters.

marxisttemp

Aren’t you ignoring Waymo?

tim333

It's not 'consumer'.

lern_too_spel

Nobody has level 5. Waymo did 5 million miles in 2024 with nobody behind the wheel. Tesla did way more but required a human driver due to frequent disengagement. These are not the same. https://electrek.co/2024/09/26/tesla-full-self-driving-third...

Level 4 is a commercially viable product. Mapping allows verification by simulation before deployment. Tesla offers level 3, which is not monetizable beyond being a gimmick.

sidibe

I don't really care about the levels, but I think Tesla has been building a level 2 product that will always supposedly be level 4 next year; they have never shown any intention of doing level 3.

What is clear is Tesla is not currently capable of self driving and he has lied year after year after year about it.

I think carmakers should have to be liable for their cars' capabilities in the areas they allow them to be used.

FireBeyond

> There is also risk as Chevy already deprecated their first gen "FSD", leaving early adopters with gimped ability and shutout from future expansions.

Tesla has already said that some of its vehicles, sold with “all the hardware necessary for FSD” will never get it.

josephcsible

> Tesla has already said that some of its vehicles, sold with “all the hardware necessary for FSD” will never get it.

No they didn't. They said it turned out the vehicles didn't have all the hardware necessary, but that a free retrofit to add it will be forthcoming.

marxisttemp

[flagged]

marxisttemp

[flagged]

dagw

Without knowing how many FSD miles Teslas have done compared to other brands, it's hard to judge. It could just be that Tesla owners are far more likely to trust and use their cars' FSD capabilities, and thus end up in more FSD accidents. Other brands' FSD might be so bad that people simply don't trust it, basically never use it, and thus never get into an FSD accident.

audunw

I don’t see any way you can spin this in a positive light. Yeah, there may be many more FSD miles on Tesla, but if that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem.

We know this is one of the core issues of Tesla FSD: its capabilities have been hyped and over promised time and time again. We have countless examples of drivers trusting it far more than they should. And who’s to blame for that? In large part the driver, sure. But Elon has to take a lot of that blame as well. Many of those drivers would not have trusted it as much if it wasn’t for his statements and the media image he has crafted for Tesla.

Hobadee

The problem is there isn't enough data here. Killing 5 motorcyclists isn't great, but if human drivers killed, say, 100 in the same sample, 5 is actually pretty good. Of course, if human drivers kill 1 or 2 in the same sample, 5 is really bad.

BeetleB

> Yeah, there may be many more FSD miles on Tesla, but if that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem.

I'd wager far more motorcyclists get hit by humans driving non-Teslas in non-autonomous modes. I could rephrase your comment to:

"Yeah, but if humans drive cars without safety features, and that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem."

... to make the (faulty[1]) argument that driving with FSD turned on is better for motorcyclists.

[1] Faulty not because it's false, but because it is a logical fallacy.

vachina

It's not hyped. You can literally watch how FSD performs, live, on YouTube, and despite these shortcomings it's light-years ahead of any other self-driving system. And it's just going to get better as they account for all these corner cases.

Tesla is a victim of their own success: they've set the bar so high that people now expect it to have a 0.0000% failure rate.

Hikikomori

How good can it be if it rear ends motorcycles?

Ajedi32

It could be 100x better than human drivers and still rear end motorcycles occasionally. Without better statistical information this article tells you almost nothing.

ModernMech

This.

People are arguing over base rates of motorcycle accidents as if Tesla didn't get fooled by a Looney Tunes wall. If Waymo had killed 5 motorcyclists in SF, we would know. But they have operated there without incident for years.

Meanwhile, just after Tesla released Autopilot to the world, a man was decapitated because the system was deficient. Then it happened again in 2019 under eerily similar circumstances. Then we observe Teslas hitting broad objects like fire trucks and buses.

The correct response to that is not to say "Well what's the base rate for decapitations and hitting broad objects?"

No, you find out the reason for this thing happening. And the reason is known: a deficient sensor suite that is prone to missing objects clearly in its field of view.

So with the motorcycle situation, we already know what the problem is. This isn't a matter of a Tesla just interfacing with the statistical reality of getting rear-ended by a car. Because we know Teslas have a deficient sensor suite.

SideburnsOfDoom

> Other brands' FSD might be so bad that people simply don't trust it, basically never use it

The issue is not quite how good the automation is in absolute terms, it's how good it is vs. how it is sold. Tesla is an outlier here, right down to the use of the term "FSD" i.e. "Full Self-Driving", when it's nothing of the sort.

dagw

That's kind of my point. We don't know from the data whether the problem is that Tesla has worse FSD than the non-accident brands, or whether all brands are equally good/bad and Tesla owners are just more likely to use the feature. Tesla could very well have the best FSD software on the market and still lead the accident stats simply because of how their users are (ab)using the feature (which I suspect is closest to the truth).

shadowgovt

The only criticism I could level is that the difference between five and zero incidents is very hard to extrapolate information from.

The author kind of plays this up a bit by insinuating that there are incidents we don't know of, and they probably aren't wrong that if there are five fatalities there are going to be many more near misses and non-fatal fender bender collisions.

But for the number of millions of miles on the road covered by all vehicles, extrapolating from five incidents is doing a lot of statistical heavy lifting.
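One way to see how much lifting five incidents are doing: the exact Poisson 95% confidence interval around an observed count of 5 spans roughly a factor of seven (a sketch; it ignores exposure miles, which we don't have):

    poisson.test(5)$conf.int

    [1]  1.623486 11.668332
    attr(,"conf.level")
    [1] 0.95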

null

[deleted]

fallingknife

There is no statistical evidence cited to show that there really is a difference. And there is no data at all showing the rate of these crashes vs non self driving cars.

pelagic_sky

When I used to ride, getting rear ended at a stop was one of my biggest fears other than people blowing stop signs. I always left more space in front of me that would give me room to maneuver as I watched for the car behind me to make the stop.

bitmasher9

I just want to compound on this one.

If you get rear ended at a stoplight or stopsign it’s very likely the motorcyclist is not at fault. The motorcyclist suffers significantly more bodily injury than a car driver would in a similar collision. As a motorcyclist, you can tell that sometimes people just don’t see you because their brain is looking for something car shaped.

When I ride, every time I stop at a stoplight or stop sign I am watching my rear-view mirror to judge whether the person behind me is going to stop, and I have an exit strategy if they don't. I've had some close calls.

m3047

I put auto horns on my motorcycle(s). In the Seattle urban core (when I lived there, 15 years ago), if I honked then people looked for the car -- they weren't looking at me. Other than that, seems to have the intended effect.

petra303

Lane filtering solves this fear. It should be legal nationwide.

bink

As a motorcyclist, lane filtering also comes with some dangers. It's up to each motorcyclist to decide if it's worth it. I lane split in heavy traffic in CA but I'm also very wary about being the first vehicle into an intersection after a light changes. I see far more people running red lights around me than I do people rear ending stopped traffic at a light. I'm glad we have the option.

JKCalhoun

Lane filtering is apparently "lane splitting" at lower speeds.

dharmab

They're different. Filtering is between stopped cars, splitting is in moving traffic.

outer_web

Been riding for two decades and my understanding has been that 'splitting' is the yank term and 'filtering' is the britbong one.

whartung

Nobody is going to cite you for filtering if you move into the gap to avoid a collision.

null

[deleted]

snozolli

Filtering as a matter of habit eliminates the risk. That's OP's point, not that motorcyclists should watch their mirrors like a hawk and jump between lanes when they decide that the driver behind them might not stop.

nicbou

Mine was and still is people cutting into my lane when cornering. You turn a corner and bam, incoming car.

Still, I always leave the bike in gear until the car behind me has stopped, and if I can, I stop slightly diagonally with enough space to move left or right to avoid getting sandwiched.

xandrius

My latest fear is not the person directly behind me rear-ending me (you can keep an eye on them) but whoever is behind them hitting them, and then me getting hit as a result.

A second-hand collision, still pretty dangerous.

kjkjadksj

If I am on a bicycle generally I pull out into the crosswalk. Way more visible and pedestrians don’t cuss at you like they would a blocking car because they get it.

jeffbee

Statistically, as a motorcyclist you should be afraid of fatally rear-ending someone else, since that is the top cause of motorcyclist deaths in America. Of course that is within your control and if you feel like you have reduced that risk through personal practice then yeah getting rear-ended or left-hooked are the biggest dangers.

bchris4

Whether you like Tesla or not: this blog post is a perfect example of how clickbait headlines twist things around. Nobody reads anymore; if you made it down to the nitty-gritty of each actual example, it's painfully obvious that many have almost nothing to do with the self-driving software at all, other than how humans can interact with it to screw it all up. There's:

- A drunk driver doing 100 in a 45 (by pressing down on the pedal) through a yellow light

- A driver who “didn’t see the motorcyclist” because he was looking at his PHONE, but who had the go pedal pressed down at 95-100% for as many as 10 seconds after hitting him, to the point where witnesses say the front wheels were spinning while up in the air

- Others with no detail: not the author's fault, but from the ones we have, clearly there are often circumstances that would require more analysis before coming to this conclusion

dharmab

Related: Well-sourced video on the topic https://youtube.com/watch?v=yRdzIs4FJJg

DwnVoteHoneyPot

FortNine videos are my favorite videos on YouTube. You're short-changing them by just leaving the link without a description or title.

The video title is "Tesla Autopilot Crashes into Motorcycle Riders - Why?"

The amazing part is that the one guy who created this video covered all the insightful comments here on HN in one concise video two years ago.

redox99

Considering how much FSD has improved in the last few months, a 2 year old video is not relevant.

only-one1701

Not relevant to what? Those people are still dead.

janice1999

Other people's lives are a sacrifice that some drivers alpha-testing their cars' firmware are willing to make.

redox99

The shortcomings of 2-year-old software are not relevant, because that's not what the cars run now.

dharmab

The main critique in the video is related to the hardware, specifically the lack of LIDAR.

redox99

It's just like Mark Rober's video[1], where he makes all these wild claims that you can't solve X issue without LIDAR and tests an old version of Autopilot. When another user correctly tests with the latest hardware and software, it passes the test[2].

[1] https://www.youtube.com/watch?v=IQJL3htsDyQ&pp=ygUQbWFyayByb...

[2] https://www.youtube.com/watch?v=TzZhIsGFL6g

null

[deleted]

outside1234

Just amazing to me how blind people are to how much of a fraud Musk is

bdangubic

cause it is a cult following, not blindness…

Veserv

NHTSA Report ID 13781-3470 in Florida, April 2022 is most likely the fatal motorcycle crash Tesla influencer Chuck Cook was involved in as mentioned in his video here: https://www.youtube.com/watch?v=H8B0pX8TSOk

In the video Chuck Cook states he was involved in a fatal motorcycle accident in Florida in April 2022 in which the motorcycle's wheel fell off and then careened into his Tesla while Navigate on Autopilot was engaged. The reporting sources for that NHTSA incident are "Field Report" and "Media", which lines up with his statement that a Florida Highway Patrol officer reviewed the footage and that Tesla most likely first learned of the crash from Chuck Cook's page, classified as "media".

If the incident was not the crash involving Chuck Cook, then Chuck Cook's crash would be one that Tesla has illegally left unreported, as I can see no other identifiable crash that could correspond to it.