Efforts to influence President-elect Donald Trump's policies through Elon Musk are already beginning. On Friday, the nonprofit AI advocacy group Americans for Responsible Innovation (ARI) launched a public petition calling on Trump to name Musk his special advisor on AI, saying he is well positioned to protect US leadership in the technology while ensuring it is deployed safely.
Musk has been a leading critic of OpenAI, a company he co-founded but from which he has since distanced himself and become an opponent. Shortly after the launch of ChatGPT, he signed a letter calling for a moratorium on the development of more advanced generative AI models so that safeguards could be put in place. Critics, however, say his stances are largely self-serving, as he also runs his own artificial intelligence company, xAI.
The ARI petition says Musk's conflicts of interest can be addressed, arguing that with "adequate mechanisms" in place, "Musk would be an invaluable asset in helping the Trump administration navigate the development of this transformative technology." ARI aims to gather 10,000 signatures for the petition.
"Musk could emerge as an advocate for AI safety in the administration," wrote ARI policy analyst David Robusto in a recent blog post. Robusto pointed to Musk's co-founding of OpenAI, his call for a moratorium on AI development, and his support for California's vetoed AI safety bill SB 1047 as reasons to believe his commitment to safety is deeply rooted. Robusto concedes that Musk hasn't said much about what kind of government policies should actually be implemented (beyond the creation of an agency dedicated to AI safety), but says that his "lack of specificity suggests that his thinking on the topic is still evolving and can still be shaped" by public debate on the subject.
Musk has previously stated that he will join the Trump administration in a role created from scratch: head of a new Department of Government Efficiency (DOGE), whose mission would be to gut the US regulatory system. But Robusto hopes Musk will promote AI safety even in that capacity, if only by putting less pressure on the departments that oversee it. Robusto says Musk could shield agencies key to AI safety policy, such as the National Institute of Standards and Technology (NIST), from federal spending cuts. And if Musk imposes mass layoffs across the government to cut costs, the government could turn more heavily to AI tools to offset the workload.
"With the right guardrails in place, his unique combination of technical expertise and safety advocacy could be a valuable asset in developing responsible AI governance," Robusto writes.