Two auto-braking systems can't see people in reflective garb: report
89 comments
January 14, 2025 · thorncorona
pfedak
the chart in the streetsblog article puts some values in the wrong boxes, too. pathetic
neom
wow not just some, it tells a totally different story, that's awful. (Edit: I emailed the author. Further edit, author reply: "I've corrected my chart. I did not use a tool to extract the data. I actually think the chart you are looking at (on the left) was updated from the one I originally received, and I may have been working off the earlier one.")
PaulHoule
I'm fascinated with weird cameras and noticed that the #1 requirement of automotive cameras is the ability to deal with extreme variations in brightness both between frames and within a frame.
For one thing I'd be worried that retroreflective tape could be crazy bright in the dark and could blow out the cameras.
ahartmetz
The instantaneous "HDR" capability of biological eyes is really quite amazing. About 5 orders of magnitude for human eyes, about 2-3 for most cameras.
By the way, there's a really elegant medium-term adaptation mechanism in eyes as well: they measure light intensity by photo-decay of a chemical substance that is produced slowly. If there is a lot of light, the substance decays shortly after production. If there is little light, it accumulates for about half a minute, massively increasing sensitivity. The quantum efficiency (the inverse of how many photons it takes to produce a signal) of a dark-adapted eye is about 0.3: https://www.nature.com/articles/ncomms12172#MOESM482
andix
Is the 2-3 just for single frames, or does it already include all the tricks cameras can do to get more dynamic range?
Many cars have multiple cameras and could (can?) run them at different exposures. Or run at a very high frame rate and take every frame with multiple exposure settings, and calculate an HDR video on the fly.
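The multi-exposure idea can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline; the frame values, exposure times, and triangular weighting are all made up for the example:

```python
import numpy as np

def fuse_exposures(frames, exposure_times, saturation=0.95):
    """Estimate scene radiance from frames taken at different exposures.

    Frames are assumed linear (no gamma), normalized to [0, 1].
    Each frame's values are divided by its exposure time to put them
    on a common radiance scale, then well-exposed pixels are averaged,
    extending effective dynamic range beyond any single frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        # Trust mid-range pixels; down-weight near-black and near-white ones.
        w = np.clip(1.0 - np.abs(frame - 0.5) * 2.0, 0.0, 1.0)
        w[frame >= saturation] = 0.0  # clipped pixels carry no information
        acc += w * (frame / t)
        weight += w
    return acc / np.maximum(weight, 1e-9)

# Two pixels, two exposures: the second pixel clips in the long exposure,
# so its radiance estimate comes entirely from the short frame.
short = np.array([0.08, 0.5])   # 1 ms exposure
long_ = np.array([0.8, 1.0])    # 10 ms exposure; second pixel saturated
radiance = fuse_exposures([short, long_], [1.0, 10.0])
```

The catch for automotive use is that the two frames are taken at different moments (or through different cameras), so anything moving fast produces fusion artifacts exactly where you least want them.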
ahartmetz
2-3 would be single frame / without special tricks to increase dynamic range.
AlotOfReading
The processing pipeline is just as important as the camera hardware here. It's difficult to build an appropriate system by gluing together off-the-shelf software, and many people writing automotive requirements aren't even aware of the failure modes. When it goes to the tier-one suppliers, they'll just throw things together until it meets the requirements and nothing more.
I've caught (and fixed) this issue before at my own employers.
Veserv
Very likely a case of tuning to the standard safety tests.
The gold standard for standardized AEB testing is in the Euro NCAP. You can see the testing protocol [1] explicitly specifies [2] a fixed size human adult with black hair, black shirt, blue pants with a precise visible, infrared, and radar cross-section. I lack sufficient knowledge to comment on whether those characteristics are representative, but I will assume that they are.
While precise test characteristics are valuable for test reproduction and comparative analysis, they make it very easy for manufacturers to overfit and make their systems seem safer than they actually are in generalized circumstances, whether accidentally or intentionally.
[1] https://cdn.euroncap.com/media/58226/euro-ncap-aeb-vru-test-...
[2] https://www.acea.auto/files/Articulated_Pedestrian_Target_Sp...
jonas21
Someone should sell a shirt and pants made to those specifications.
bigfatkitten
> with a reflective strips in a configuration similar to those worn by roadway workers (though their safety gear is generally bright orange or yellow rather than black).
But similar enough to turnout gear worn by many North American fire departments.
tlavoie
I wonder if the reflective markers are also messing with self-driving vehicles that hit stopped fire trucks. It's been a few years, but this article on Teslas hitting fire vehicles was sobering. https://www.wired.com/story/tesla-autopilot-why-crash-radar/
The idea that they can't deal with stationary obstacles just makes it all worse, because obstacles happen constantly.
throwaway48476
If the goal is to be safer than a human driver then it will require better than human sensors, such as lidar. Camera only approaches will not stand the test of time.
toss1
Yup.
The concept that biological systems have made 3D vision, navigation, and object avoidance work without LIDAR is certainly attractive.
But there is a LOT more to it than just a photosensor and a bunch of calculations. The sensors themselves have many properties unmatched by cameras, including wider dynamic range and processing in the retina and optic nerve itself. And the intelligence attached to every biological eye is built upon a body that moves in 3D space, so it has a LOT of alternate sensory input to fuse into an internal 3D model and processing space. We are nowhere near being able to replicate that.
The more appropriate analogy would be the wheel or powered fixed wing aircraft. Yes, we're finally starting to be able to build walking robots and wing-flapping aircraft, and those may ultimately be the best solution for many things. But, in the meantime, the 'artificial' solution of wheels and fixed airfoils gets us much further.
Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.
cmiller1
> Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.
Why though? How could it possibly be better than camera plus other sensors?
toss1
Because LIDAR specifically gives you the range, or distance, to each object. While in theory this should be possible with multiple cameras and stereoscopic vision/analysis, it obviously is not as simple in practice as it seems in theory. The additional depth info is also critical in identifying objects.
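For intuition, the textbook rectified-stereo relation is Z = f·B/d (focal length times baseline over disparity), and it shows one reason depth from cameras is harder in practice than in theory: a fixed disparity error costs centimeters up close but tens of meters at range. The rig parameters below are made-up toy numbers, not any real car's:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 30 cm camera baseline.
f, B = 1000.0, 0.3
near = stereo_depth(f, B, 30.0)  # a target at 10 m gives 30 px disparity
far = stereo_depth(f, B, 3.0)    # a target at 100 m gives only 3 px

# The same 1-pixel disparity error is benign up close, catastrophic far away:
near_err = stereo_depth(f, B, 29.0) - near  # about a third of a meter
far_err = stereo_depth(f, B, 2.0) - far     # fifty meters
```

Since depth error grows roughly with the square of distance, a featureless surface (like a white trailer side) that yields no reliable disparity at all is the degenerate case of the same problem.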
For example, several drivers of Tesla vehicles have been decapitated when a semi-truck turned or crossed in front of them and the car on Autopilot evidently identified the white side of the trailer as sky and drove right under it, shearing off the roof. LIDAR would have identified a large flat object at a range decreasing at approximately the speed of the vehicle, and presumably the self-driving system would have taken different action.
nomel
Whenever I walk up to a chain-link fence and my vision places it at the wrong z-distance, I'm reminded that 3D from vision is a consequence of our biological limitation of never having evolved emitters.
toss1
>>not having evolved emitters.
Like echolocation in bats and dolphins... Excellent point!
In fact, humans do have some echolocation capability [0,1]. That should tell us that LIDAR (or emitter-receiver-range-finder capability) may ultimately always be a core piece of the solution.
[0] https://en.wikipedia.org/wiki/Human_echolocation
[1] https://www.smithsonianmag.com/innovation/how-does-human-ech...
thebruce87m
> If the goal is to be safer than a human driver then it will require better than human sensors, such as lidar.
Having the same sensors as a human but being more attentive would be a step up. That said, I think camera-only is not good enough for now.
bigfatkitten
The sensors they have now aren't even as good as a human. The cameras have nowhere near the dynamic range of the average human eyeball, which isn't even particularly spectacular as far as eyes go.
thebruce87m
I agree. My point was from “all other thing being equal” that extra attentiveness on its own is a plus. I work in computer vision AI and am aware of current limitations.
hn_acc1
So, actual androids?
standeven
I think vision-only approaches can work, but our eyes and brain are amazing and it would take some serious hardware. Our eyes have a 200-degree FOV, providing a 576 megapixel landscape, with 13ms of latency. Plus there are 6 billion neurons in the visual cortex alone to process the images, which are then fed to another 80 billion neurons that can interpret and react to the data.
Peppering a few webcam-quality cameras around a car and plugging it into an Intel Atom processor probably won't be better than our eyes and brain, even if the cameras don't blink or get tired. It's only going to get better though.
nomel
There are corner cases where vision cannot work because it's not mathematically possible, like a featureless wall (or featureless ground, as in the recent Mars crash).
And, vision can't work in/penetrate heavy snow or fog, which is transparent to radar.
Vision is an indirect measurement; lidar/radar is a direct measurement. I'm curious whether there are any other safety-critical systems that use such massively indirect measurements.
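The "direct" part of lidar/radar is just time-of-flight: range falls out of a single measured delay and a physical constant, with no inference in between. A minimal sketch (the 200 ns example delay is illustrative):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def time_of_flight_range(round_trip_seconds):
    """Range from a pulse's round-trip delay.

    The pulse travels out to the target and back, so the one-way
    distance is half the total path: R = c * t / 2.
    """
    return C * round_trip_seconds / 2.0

# A return arriving ~200 ns after the pulse left corresponds to ~30 m.
r = time_of_flight_range(200e-9)
```

Contrast that with stereo or monocular vision, where range is inferred through matching, calibration, and scene assumptions, each of which can fail independently.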
standeven
Good point regarding snow and fog, but I’m assuming operation would slow or stop in those conditions and that would be acceptable.
Does a truly featureless wall/road with no visible edges actually exist in the wild? I’d expect cameras with high enough resolution, spacing, and FOV would handle any real world examples but maybe I’m wrong.
gruez
>Our eyes have a 200-degree FOV, providing a 576 megapixel landscape, with 13ms of latency.
...only if you count the field of view you get from moving your eyeballs. You wouldn't say a PTZ camera has "360° FOV" just because it can rotate around. The "576 megapixel" figure is also questionable. Peak resolution only exists in the fovea; everywhere else is blurry and much lower resolution. You don't notice this because your eyes do it automatically, but the actual information you can receive at any given time is far less than "576 megapixels".
standeven
The quoted latency and neuron counts can also be questioned, but my point stands: it's hard to compete with the human eye and brain with current (affordable) camera and processing hardware.
ajross
I don't see how that follows. To first approximation zero human-at-fault accidents are due to "sensor failure". I mean, sure, somewhere out there a pedestrian was killed while walking in a white-out blizzard. But far, far more were hit by drivers looking at their phones with perfectly good eyes.
ntonozzi
Wow, it's amazing how much better the Subaru's automatic braking system is.
I worry that hitting a pedestrian at night is the most likely way I'd seriously hurt somebody, and I want to encourage automakers to prioritize the safety of pedestrians and other road users, so Subarus will be high on my list the next time I'm shopping for a car.
ezfe
Subaru is just casually shipping better vision-only TACC (traffic-aware cruise control) than any other car company (I include Tesla in this comparison, when just activating TACC), and nobody is paying attention to the fact that front radar is just not needed.
numpad0
Subaru has gone back and forth between vision-only and radar-assisted, and has also churned through suppliers and project structures for its EyeSight-branded systems. The current camera unit is supplied by Veoneer in Sweden; slightly older ones were outsourced to Hitachi Astemo; before that it was mostly internal R&D; and so on.
Current latest gen Subaru has a forward radar.
ezfe
Subaru doesn't have forward radar except in the Solterra, which uses Toyota's system.
The 2025 models have 3 forward cameras, no radar.
nytesky
So all mainstream cars are vision only? No ranging like lidar?
bigfatkitten
Just about everyone ships radar, not only for collision avoidance but adaptive cruise control.
Aloisius
Honda and Mazda use radar as well.
ezfe
Most cars use radar, Subaru does not
aidenn0
If I'm reading the table correctly, there was only one vehicle for which reflective strips were worse than normal clothing (the Mazda); for the Honda, reflective strips didn't always help but don't seem to have hurt (judging by the body text, they did on the order of 12 tests, so 9% vs 0% is 1/12 vs 0/12).
pfedak
you're reading the table correctly, but it's been reproduced incorrectly and had its title removed from the original source https://www.iihs.org/news/detail/high-visibility-clothing-ma...
i'm not clear from that how many trials were run for each test condition, but the percentage is average speed reduction, not a probability for a binary hit/no-hit. edit: the paper pdf says up to three trials each.
aidenn0
Wow, so much was lost when they mis-transcribed that table; thanks for the link.
Aloisius
> If I'm reading the table correctly, there was only one vehicle for which reflective strips were worse
No. It was all three vehicles. The table is average speed reduction.
Reflective strips had a lower average speed reduction than black clothing in every case except for the Subaru at 0 and 20 lux and the Honda at 0 lux.
Waterluvian
I test drove those very three models (2020 model years) when buying, and I found that everything "autonomous" about the Honda and Mazda felt just plain bad. I raised it with the Mazda salesman, who insisted the features were probably just not turned on, but when he checked, they were on.
The Subaru though was an entirely different class. It worked so well. The thing would drive me around curves on a somewhat windy country road. Comfortably brought me to a stop behind a stopped car. Etc. I bought the Subaru.
According to an engineer I was in contact with, (at the time, maybe still true) the Subaru EyeSight system was their crown jewel system.
aidenn0
I have a similar experience with my Hyundai Kona Electric.
Even the automatic headlights are by far the worst I have ever used (and that includes a 1996 Ford Taurus). They only reliably stay off for a few hours around noon on overcast days, where the illumination is bright and diffuse. Otherwise they toggle on and off while I am driving through shadows (including self-shadowing when I turn away from the sun).
nytesky
My Toyota is like your Subaru: it gently slows behind traffic, nearly stops for stop signs, and keeps me within the lane markers.
It's kind of terrible, because it teaches me bad habits of depending on the car, and then when I drive a conventional one, I'm more vulnerable because I've been trained to let the car take over.
Waterluvian
My only complaint is that mine wants to stay in the very middle of the lane on the highway when I’d rather it bias to the left, especially when passing transport trucks.
And if I calmly but consistently hold left to keep it left a bit, the PID loop ramps up trying to center, and if I let go of the wheel it wants to swing into the right lane.
bentcorner
The Honda and Mazda both use a single camera to visually detect pedestrians while the Subaru uses two cameras - perhaps this is the difference?
JumpCrisscross
My Subaru also has radar. It’s noticed things ahead of my in whiteout conditions that my eyes couldn’t yet discern.
andix
Another thing that bothers me personally about emergency vehicles at night is the very bright emergency lights (blue in Europe).
Especially in situations where a lot of emergency vehicles are parked with their lights on, away from city lighting, it's often very disorienting, and for me it reduces visibility of the surroundings when passing by. Those traffic situations require additional caution, because there could be people and debris on the road, but the lights might reduce passing drivers' ability to properly see them.
It's probably also a problem for car safety systems.
Maybe the emergency lights and reflective strips got too good, to a point where they start causing harm. Emergency lights could easily adjust automatically to the ambient lighting conditions.
(Mazda/Honda definitely need to get better; the data shows it's possible, not arguing with that fact)
dtgriscom
Each morning I drive past a middle school as it starts its day, and there's a police car and officer guiding traffic. Sometimes the officer leaves the full flashing blue lights going, and it makes it really hard to see what's around it (e.g. the officer and/or students). Most of the time they leave it on non-flashing blue, which makes it a lot easier to see the environment.
bigfatkitten
Some agencies have taken a smarter approach to this.
Ambulances in the Australian Capital Territory use steady burn amber perimeter lights when stopped on roadsides. Makes the outline of the vehicle more conspicuous, tends not to encourage rubbernecking and doesn't dazzle people.
thesz
Why are there no European, American and/or Chinese vehicles to compare to?
https://www.carpro.com/blog/almost-all-new-vehicles-have-aut...
Why only those three?
numpad0
Does it make sense to always refer to these systems by car manufacturer? Lots of these "camera" units are self-contained computers that directly generate steering and braking commands, and they are constantly switched between lowest bidders.
It's a bit like the yogurt that comes with an economy-class meal. Evaluations made for the cup on one flight might not apply to a flight on a different day, or to the return flight. Shouldn't it be the brand on the cup, not the one on the headrest, that gets named in reports?
GoToRO
Just to add that every single device is like this: airbag, central locking, engine, transmission, and so on. They are all bought from suppliers, and you often get the same device in cars from competing manufacturers. An expensive vehicle has up to 100 embedded devices, each with its own function.
numpad0
Those are more deeply integrated. The camera unit tends to be closer to store-bought than other peripherals, and its theory of operation hasn't converged.
largbae
As a consumer I can't choose which camera model, but I can choose which car manufacturer. So I should choose whichever car manufacturer chooses the best camera models, all else being equal.
If you want the unsummarized source rather than the ChatGPT-summarized version:
https://www.iihs.org/news/detail/high-visibility-clothing-ma...