A Tesla is seen slammed into the back of an unoccupied fire truck, in Culver City, California, U.S., January 22, 2018.
Image: Culver City Fire Department / via Reuters
Elon Musk built his electric car company, Tesla, around the promise that it represented the future of driving — a phrase emblazoned on the automaker’s website.
Much of that promise was centered on Autopilot, a system of features that could steer, brake and accelerate the company’s sleek electric vehicles on highways. Over and over, Musk declared that truly autonomous driving was nearly at hand — the day when a Tesla could drive itself — and that the capability would be delivered to drivers through over-the-air software updates.
Unlike technologists at almost every other company working on self-driving vehicles, Musk insisted that autonomy could be achieved solely with cameras tracking a vehicle’s surroundings. But many Tesla engineers questioned whether it was safe to rely on cameras without the benefit of other sensing devices, and whether Musk was promising drivers too much about Autopilot’s capabilities.
Now those questions are at the heart of an investigation by the National Highway Traffic Safety Administration after at least 12 accidents in which Teslas using Autopilot drove into parked fire trucks, police cars and other emergency vehicles, killing one person and injuring 17 others.
©2019 New York Times News Service