Tesla’s Autopilot investigation continues
In the summer of 2020, a Tesla running on Autopilot crashed into a police car while its driver was watching a movie on his phone. This kind of behavior is becoming increasingly common among Tesla drivers using the Autopilot feature. Fortunately, that crash caused no fatalities, but according to Tesladeaths.com (yes, it’s an actual website) there have been 12 deaths involving Tesla’s Autopilot. The big question: was this Tesla’s fault, or was the driver simply not paying attention?
Well, it could be both.
In August 2021, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla’s Autopilot (while also requesting data from other automakers). The deepening scrutiny of Autopilot brings Tesla one step closer to a possible recall. On June 9, the NHTSA announced it was upgrading the investigation to an Engineering Analysis (EA):
“PE21-020 is upgraded to an Engineering Analysis (EA) to extend the existing crash analysis, evaluate additional data sets, perform vehicle evaluations, and to explore the degree to which Autopilot and associated Tesla systems may exacerbate human factors or behavioral safety risks by undermining the effectiveness of the driver’s supervision. In doing so, NHTSA plans to continue its assessment of vehicle control authority, driver engagement technologies, and related human factors considerations.”
In other words, the agency needs hard engineering data to determine whether Autopilot’s safety procedures and warnings are too lax to keep drivers attentive.
Understanding Tesla’s Autopilot
In its first letter to Tesla, the NHTSA said it was “aware of twelve incidents where a Tesla vehicle operating in either Autopilot or Traffic-Aware Cruise Control struck first responder vehicles/scenes, leading to injuries and vehicle damage.” On Wednesday, the NHTSA posted a report on its website stating that the investigation now covers 830,000 vehicles in the U.S. That figure includes every model with the Level 2: Additional Assistance (Autopilot) feature.

If you didn’t know, the NHTSA breaks automated driving systems down into six levels. Level 0, Momentary Driver Assistance, is the most basic; it covers features like automatic emergency braking, forward-collision warning, and lane-departure warning. At the top, Level 5 is Full Automation: the car takes over completely, and no occupant needs to be engaged. However, no vehicle released so far has gone past Level 2. Level 2: Additional Assistance states, “You Drive, You Monitor.” For consumers, that means Autopilot should never be trusted unconditionally; you still need to stay alert while operating the car.
Is Tesla at fault?
Tesla’s website is clear about how Autopilot is meant to be used: “Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment. … The currently enabled features do not make the vehicle autonomous.” When drivers comply fully with this rule, the technology generally performs as intended. Tesla issues warnings when the rule isn’t followed, disables Autopilot if safety precautions aren’t met, and repeats the same guidance in its owner’s manuals. The harder question, however, is what happens in the last few seconds before a collision, and whether the Tesla can actually recognize an unusual obstacle, like a stopped emergency vehicle, in its path.
What’s in question is how well Autopilot accounts for driver behavior and how capable it is in complicated situations. As the story evolves, the NHTSA will get down to the engineering and physics before making any announcement about a recall. Many automakers are adopting this kind of AI or similar driver-assistance systems, so the investigation gives other brands a valuable heads-up before they release automated vehicles of their own, and it leaves everyone wondering about Tesla’s next move if a recall is announced.