And yet, as Robert Lowell wrote, “No rocket goes as far astray as man.” In recent months, as outrage on Twitter and elsewhere multiplied, Musk seemed determined to squander much of the goodwill he had amassed over his career. I asked Slavik, the plaintiffs’ attorney, whether the recent shift in public sentiment against Musk would make his job in the courtroom any easier. “I think at least there are more people who are skeptical of his judgment now than before,” he said. “If he was on the other side, he would be worried about it.”
However, some of Musk’s more questionable decisions begin to make sense if viewed as the result of hard-nosed utilitarian calculus. Last month, Reuters reported that Neuralink, Musk’s medical device company, had caused the needless deaths of dozens of laboratory animals through rushed experiments. Musk’s internal messages made it clear that the urgency came from above. “We’re just not moving fast enough,” he wrote. “It’s driving me crazy!” The cost-benefit analysis must have seemed clear to him: Neuralink had the potential to cure paralysis, he believed, which would improve the lives of millions of humans in the future. The suffering of a smaller number of animals was worth it.
This form of crude long-termism, in which the size of future generations gives them additional ethical weight, even shows up in Musk’s remarks about buying Twitter. He called Twitter a “digital town square” that was responsible for nothing less than preventing a new American civil war. “I didn’t do it to make more money,” he wrote. “I did it to try to help humanity, whom I love.”
Autopilot and FSD represent the culmination of this approach. “Tesla’s overall engineering goal,” Musk wrote, “is to maximize the area under the user happiness curve.” Unlike with Twitter or even Neuralink, people were dying as a result of his decisions, but no matter. In 2019, in a testy email exchange with the activist investor and outspoken Tesla critic Aaron Greenspan, Musk chafed at the suggestion that Autopilot was anything other than life-saving technology. “The data is unequivocal that Autopilot is safer than human driving by a significant margin,” he wrote. “It is unethical and disingenuous of you to claim otherwise. By doing so, you are endangering the public.”
I wanted to ask Musk to elaborate on his risk philosophy, but he did not respond to my interview requests. So instead I spoke with Peter Singer, a leading utilitarian philosopher, to discuss some of the ethical issues involved. Was Musk right when he asserted that anything that slows the development and adoption of autonomous vehicles is inherently unethical?
“I think he’s right,” Singer said, “if he’s right about the facts.”
Musk rarely talks about Autopilot or FSD without mentioning how superior it is to a human driver. At a shareholder meeting in August, he said that Tesla was “solving a very important part of AI, and one that can ultimately save millions of lives and prevent tens of millions of serious injuries by driving just an order of magnitude safer than people.” Musk has data to back this up: since 2018, Tesla has released quarterly safety reports to the public, which show a consistent advantage to using Autopilot. The most recent, from late 2022, said that Teslas with Autopilot engaged were one-tenth as likely to crash as a normal car.
That’s the argument Tesla will have to make to the public, and to jurors, this spring. In the words of the company’s safety report: “While no car can prevent all accidents, we work every day to try to make them much less likely to happen.” Autopilot may cause the occasional crash, but without that technology, we’d be worse off.