Going Direct: Fatal Tesla Crash Report: Why Aviation Is Closely Following

The crash of a Tesla on a Florida highway last spring was one of thousands of fatal crashes that take place on U.S. highways every year, but this one was different because the driver of the car was a robot.

The Tesla Model S that drove into the side of a semi truck turning onto the highway in front of it was on autopilot at the time, being driven autonomously. While the driver of a Tesla on autopilot can take control of the car again at any time, the driver in this case, Joshua Brown, a technology proponent and active member of the online Tesla community, apparently failed to see the Freightliner make the turn, and the car’s autopilot failed to detect it as well. The Tesla was traveling almost ten mph over the speed limit on the two-lane highway when the crash occurred, and there was no evidence that either the autopilot or the driver applied the brakes. The driver of the truck was not injured in the crash.

In its report on the crash, issued on Tuesday, the NTSB found that the autopilot in the Tesla played what it called a “major role” in the crash and further found that the automation lacked “system safeguards,” giving the driver “far too much leeway to divert his attention.” The NTSB also faulted the driver for not paying attention at the time of the crash and the truck driver for failing to yield to the Tesla, which had the right of way.

After the accident, Tesla upgraded its traffic recognition system in its vehicles, reportedly using the Florida crash to refine its object-recognition algorithms.


The report is the NTSB’s first on a car crash in which the failure of an autonomous or semi-autonomous system played a role. For aviation, the stakes are high: the industry is increasingly employing new autonomous technologies to improve safety, and the feds’ policies toward those systems are still evolving.

In this case, the NTSB signaled an openness to these systems, which every major automaker is working into its current products, with features such as lane recognition, pedestrian avoidance and blind-spot monitoring, among others. These car manufacturers have gone on record claiming that these technologies will improve safety, and given the statistics it’s hard to argue otherwise. Just as the vast majority of aviation accidents are caused by pilot error, most car crashes are caused by driver error.

Many auto accidents involve excessive speed, failure to brake and driver inattention, which, interestingly enough, were exactly the factors present in the fatal Tesla crash. Critics of autonomous systems ask: if that crash couldn’t be avoided through the use of technology, what’s the point of it? But such an attitude misses the point of our initiatives to improve safety. The idea isn’t to set the bar at zero accidents. If we did that, we’d all have to stop driving (or flying) tomorrow. Instead, the goal has to be to reduce accidents across a broad spectrum of drivers.

But the question isn’t so easily dismissed in the case of semi-autonomous systems, like the one on the Tesla Model S that crashed in Florida. For one, the feds determined, through data compiled by the car itself, that Brown on several occasions ignored the car’s prompts for him to take hold of the steering wheel. While the Board didn’t directly address this issue, there is evidence that Brown was watching a movie on a portable DVD player at the time of the crash: the truck driver who went to check on the car reported hearing the soundtrack of a movie still playing when he arrived at the demolished vehicle moments later. The implication, of course, is that Brown would not have knowingly diverted his attention as he did without the implied assurance that the car had everything under control. So it wasn’t just an accident the system failed to prevent; you could argue it was an accident the system helped cause, which in part is what the Board said on Tuesday.


It’s sad to say, but we need to accept two things here. First, the development of this technology in live situations will almost certainly result in accident scenarios that the inventors did not foresee. Such has happened again and again in aviation. Second, despite that fact, we need to continue to deploy these new devices, in part because we can learn from the accidents that they fail to prevent or, in some cases, cause. The bottom line is what matters here, and if a thousand lives a year can be saved by a technology that costs a half dozen lives, then we need to do that gruesome calculus and say yes to the technology. It’s all about the numbers. And the sunny side of the equation is that safety will almost certainly continue to improve. Just as in commercial aviation, where there are fewer and fewer accidents (it has been nearly a decade since the last crash of a U.S.-based Part 121 carrier), so it will be in the automotive world, where surprising accident causes are revealed and that data is then used to prevent them in the future.

With small planes, the same will come to pass. Accidents related to loss of control, which continue to bedevil our beloved segment of aviation, will decline as more planes are outfitted with full-time autopilot systems that prevent pilots from making mistakes that will kill them and their passengers. We just need to have faith in the process, which shouldn’t be that hard, since we’ve seen it work over and over again in the past.


If you want more commentary on all things aviation, go to our Going Direct blog archive.
