Two auto-braking systems can't see people in reflective garb: report
65 comments
January 14, 2025
thorncorona
pfedak
the chart in the streetsblog article puts some values in the wrong boxes, too. pathetic
neom
wow not just some, it tells a totally different story, that's awful. (Edit: I emailed the author. Further edit, author's reply: "I've corrected my chart. I did not use a tool to extract the data. I actually think the chart you are looking at (on the left) was updated from the one I originally received and I may have been working off the earlier one.")
Veserv
Very likely a case of tuning to the standard safety tests.
The gold standard for standardized AEB testing is the Euro NCAP. You can see the testing protocol [1] explicitly specifies [2] a fixed-size adult human target with black hair, a black shirt, and blue pants, with a precise visible, infrared, and radar cross-section. I lack sufficient knowledge to comment on whether those characteristics are representative, but I will assume that they are.
While precise test characteristics are valuable for test reproduction and comparative analysis, they make it very easy for manufacturers to overfit and make their systems seem safer than they actually are in generalized circumstances, whether accidentally or intentionally.
[1] https://cdn.euroncap.com/media/58226/euro-ncap-aeb-vru-test-...
[2] https://www.acea.auto/files/Articulated_Pedestrian_Target_Sp...
jonas21
Someone should sell a shirt and pants made to those specifications.
PaulHoule
I'm fascinated with weird cameras and have noticed that the #1 requirement of automotive cameras is the ability to deal with extreme variations in brightness, both between frames and within a frame.
For one thing I'd be worried that retroreflective tape could be crazy bright in the dark and could blow out the cameras.
ahartmetz
The instantaneous "HDR" capability of biological eyes is really quite amazing. About 5 orders of magnitude for human eyes, about 2-3 for most cameras.
By the way, there's also a medium-term adaptation mechanism in eyes that's really cool in its simplicity: they measure light intensity via the photo-decay of a chemical substance that is produced slowly. If there is a lot of light, the substance decays shortly after production. If there is little light, it accumulates for about half a minute, massively increasing sensitivity. The quantum efficiency (the fraction of incoming photons that produce a signal) of a dark-adapted eye is about 0.3: https://www.nature.com/articles/ncomms12172#MOESM482
andix
Is the 2-3 just for single frames, or does it already include all the tricks cameras can do to get more dynamic range?
Many cars have multiple cameras and could (can?) run them at different exposures. Or run at a very high frame rate and take every frame with multiple exposure settings, and calculate an HDR video on the fly.
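The alternating-exposure idea above can be sketched as naive exposure fusion. This is a hypothetical illustration (the function name, gain handling, and saturation threshold are my assumptions, not any automotive pipeline), assuming two already-aligned frames with values normalized to [0, 1]:

```python
import numpy as np

def fuse_exposures(short_exp, long_exp, gain, sat=0.95):
    """Naive HDR fusion of two aligned frames (values in [0, 1]).

    short_exp was captured at 1/gain the exposure time of long_exp,
    so multiplying it by `gain` puts both frames in the same relative
    radiance units. Where the long exposure is saturated (blown out,
    e.g. by retroreflective tape at night), take the scaled short
    exposure instead; elsewhere keep the less noisy long exposure.
    """
    saturated = long_exp >= sat
    return np.where(saturated, short_exp * gain, long_exp)

# Toy two-pixel example: the first pixel is clipped in the long
# exposure and gets recovered from the 8x-shorter one; the second
# pixel keeps the cleaner long-exposure value.
long_frame = np.array([1.0, 0.2])
short_frame = np.array([0.3, 0.05])
print(fuse_exposures(short_frame, long_frame, gain=8.0))
```

Real pipelines have to handle motion between the two frames, which is exactly where this naive per-pixel merge breaks down at automotive speeds.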
ahartmetz
2-3 would be single frame / without special tricks to increase dynamic range.
AlotOfReading
The processing pipeline is just as important as the camera hardware here. It's difficult to build an appropriate system by gluing together off-the-shelf software, and many people writing automotive requirements aren't even aware of the failure modes. When it goes to the tier-one suppliers, they'll just throw things together until it meets the requirements and nothing more.
I've caught (and fixed) this issue before at my own employers.
bigfatkitten
> with a reflective strips in a configuration similar to those worn by roadway workers (though their safety gear is generally bright orange or yellow rather than black).
But similar enough to turnout gear worn by many North American fire departments.
ntonozzi
Wow, it's amazing how much better the Subaru's automatic braking system is.
I worry that hitting a pedestrian at night is the most likely way I'd seriously hurt somebody, and I want to encourage automakers to prioritize the safety of pedestrians and other road users, so Subarus will be high on my list the next time I'm shopping for a car.
ezfe
Subaru is just casually shipping better vision-only TACC than any other car company (I include Tesla in this comparison, when just activating TACC), and nobody is paying attention to the fact that front radar is just not needed.
numpad0
Subaru has gone back and forth between vision-only and radar-assisted, and has also gone through different suppliers and project structures for its EyeSight-branded systems. The current camera unit is supplied by Veoneer in Sweden; slightly older ones were outsourced to Hitachi Astemo; before that they were mostly internal R&D, and so on.
The current latest-gen Subaru has a forward radar.
throwaway48476
If the goal is to be safer than a human driver then it will require better than human sensors, such as lidar. Camera only approaches will not stand the test of time.
thebruce87m
> If the goal is to be safer than a human driver then it will require better than human sensors, such as lidar.
Having the same sensors as a human but being more attentive would be a step up. That said, I think camera-only is not good enough for now.
hn_acc1
So, actual androids?
standeven
I think vision-only approaches can work, but our eyes and brain are amazing and it would take some serious hardware. Our eyes have a 200-degree FOV, providing a 576 megapixel landscape, with 13ms of latency. Plus there are 6 billion neurons in the visual cortex alone to process the images, which are then fed to another 80 billion neurons that can interpret and react to the data.
Peppering a few webcam-quality cameras around a car and plugging it into an Intel Atom processor probably won't be better than our eyes and brain, even if the cameras don't blink or get tired. It's only going to get better though.
gruez
>Our eyes have a 200-degree FOV, providing a 576 megapixel landscape, with 13ms of latency.
...only if you count the field of view you get from moving your eyeballs. You wouldn't say a PTZ camera has "360 FOV" just because it can rotate around. The "576 megapixel" figure is also questionable. Peak resolution only exists in the fovea. Everywhere else is blurry and much lower resolution. You don't notice this because your eyes do it automatically, but the actual information you can receive at any given time is far less than "576 megapixels".
standeven
The quoted latency and neuron counts can also be questioned, but my point stands: it's hard to compete with the human eye and brain with current (affordable) camera and processing hardware.
toss1
Yup.
The concept that biological systems have made 3D vision, navigation, and object avoidance work without LIDAR is certainly attractive.
But there is a LOT more to it than just a photosensor and a bunch of calculations. The sensors themselves have many properties unmatched by cameras, including wider dynamic range, processing in the retina and optic nerve itself, and more, and the intelligence attached to every biological eye also is built upon a body that moves in 3D space, so has a LOT of alternate sensory input to fuse into an internal 3D model and processing space. We are nowhere near being able to replicate that.
The more appropriate analogy would be the wheel or powered fixed wing aircraft. Yes, we're finally starting to be able to build walking robots and wing-flapping aircraft, and those may ultimately be the best solution for many things. But, in the meantime, the 'artificial' solution of wheels and fixed airfoils gets us much further.
Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.
cmiller1
> Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.
Why though? How could it possibly be better than camera plus other sensors?
toss1
Because LIDAR specifically gives you the range or distance to each object. While in theory this should be possible with multiple cameras and stereoscopic vision/analysis, it obviously is not as simple in practice as it seems in theory. The additional depth info is also critical in identifying objects.
For example, several drivers of Tesla vehicles have been decapitated when a semi-truck turned or crossed in front of them and the car on Autopilot evidently identified the white side of the trailer as sky and drove right under it, shearing off the roof. LIDAR would have identified a large flat object at a range decreasing at approximately the vehicle's own speed, and presumably the self-driving system would have taken different action.
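The stereo-depth difficulty mentioned above comes down to triangulation error. A minimal sketch of ideal rectified-stereo ranging, with hypothetical numbers (focal length and baseline are illustrative, not from any real system):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Ideal rectified-stereo range: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between
    the two cameras; disparity_px: horizontal pixel shift of the same
    point between the two views. Because range is inversely
    proportional to disparity, a fixed 1-pixel matching error costs
    more meters the farther away the object is -- one reason stereo
    depth degrades exactly where AEB needs it most.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 30 cm baseline.
print(stereo_depth_m(1000, 0.3, 5.0))  # 60.0 m at 5 px disparity
print(stereo_depth_m(1000, 0.3, 4.0))  # 75.0 m at 4 px: a 1 px error moved the estimate 15 m
```

A featureless white trailer side is also close to the worst case for the disparity matching itself, since there is little texture to match between the two views.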
ajross
I don't see how that follows. To first approximation zero human-at-fault accidents are due to "sensor failure". I mean, sure, somewhere out there a pedestrian was killed while walking in a white-out blizzard. But far, far more were hit by drivers looking at their phones with perfectly good eyes.
aidenn0
If I'm reading the table correctly, there was only one vehicle for which reflective strips were worse than normal clothing (the Mazda), for the Honda reflective strips didn't always help but don't seem to have hurt (judging by the body text they did on the order of 12 tests, so 9% vs 0% is 1/12 vs 0/12).
pfedak
you're reading the table correctly but it's been reproduced incorrectly and had its title removed from the original source https://www.iihs.org/news/detail/high-visibility-clothing-ma...
i'm not clear from that how many trials were run for each test condition, but the percentage is average speed reduction, not a chance for binary hit/not hit. edit: the paper pdf says up to three trials each.
aidenn0
Wow, so much was lost when they mis-transcribed that table; thanks for the link.
Aloisius
> If I'm reading the table correctly, there was only one vehicle for which reflective strips were worse
No. It was all three vehicles. The table is average speed reduction.
Reflective strips had a lower average speed reduction than black clothing in every case except for the Subaru at 0 and 20 lux and the Honda at 0 lux.
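To make the "average speed reduction" metric concrete, here is an illustrative calculation (the trial numbers are made up, and the IIHS paper may aggregate its up-to-three trials differently):

```python
def avg_speed_reduction_pct(trials):
    """Average percent speed reduction across AEB trials.

    trials: list of (approach_speed, impact_speed) pairs in the same
    units. 100% means the car stopped fully before impact; 0% means
    it did not slow down at all. So a cell like "9%" is a small
    average slowdown, not a 9% chance of avoiding the pedestrian.
    """
    reductions = [100.0 * (a - i) / a for a, i in trials]
    return sum(reductions) / len(reductions)

# Hypothetical trials at 25 mph: one full stop, one impact at 15 mph,
# and one with no braking at all.
print(avg_speed_reduction_pct([(25, 0), (25, 15), (25, 25)]))  # ~46.7
```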
andix
Another thing that bothers me personally about emergency vehicles at night is the very bright emergency lights (blue in Europe).
Especially in situations where a lot of emergency vehicles are parked with their lights on, away from city lighting. It's often very disorienting, and for me it reduces visibility of the surroundings when passing by. Those traffic situations require additional caution, because there could be people and debris on the road, but the lights might reduce passing drivers' ability to properly see them.
It's probably also a problem for car safety systems.
Maybe emergency lights and reflective strips got too good, to the point where they start causing harm. Emergency lights could easily adjust automatically to the ambient lighting conditions.
(Mazda/Honda definitely need to get better, the data shows it's possible, not arguing with that fact)
dtgriscom
Each morning I drive past a middle school as it starts its day, and there's a police car and officer guiding traffic. Sometimes the officer leaves the full flashing blue lights going, and it makes it really hard to see what's around it (e.g. the officer and/or students). Most of the time they leave it on non-flashing blue, which makes it a lot easier to see the environment.
hnburnsy
To me, IIHS has done more harm than good. They continue to raise their ratings bar to help keep insurance outlays low (better crash protection), but at the expense of heavier and more expensive cars, and there really has not been a decrease in passenger deaths per mile (the 2023 rate per mile is equal to 2018's).
thesz
Why are there no European, American and/or Chinese vehicles to compare to?
https://www.carpro.com/blog/almost-all-new-vehicles-have-aut...
Why only those three?
bentcorner
The Honda and Mazda both use a single camera to visually detect pedestrians while the Subaru uses two cameras - perhaps this is the difference?
JumpCrisscross
My Subaru also has radar. It's noticed things ahead of me in whiteout conditions that my eyes couldn't yet discern.
numpad0
Does it make sense to always refer to these systems by car manufacturer? Lots of these "camera" units are self-contained computers that directly generate steering and braking commands, and they are constantly switched between lowest bidders.
It's a bit like the yogurt that comes with an economy-class meal. Evaluations made for the cup on one flight might not apply to a flight on a different day, or to the return flight. Shouldn't it be the brand on the cup, not the one on the headrest, that gets named in reports?
largbae
As a consumer I can't choose which camera model, but I can choose which car manufacturer. So I should choose whichever car manufacturer chooses the best camera models, all else being equal.
If you want the unsummarized source and not the ChatGPT-summarized version:
https://www.iihs.org/news/detail/high-visibility-clothing-ma...