In November 2016, the Senate Subcommittee on Space, Science, and Competitiveness held the first congressional hearing on AI, with lawmakers twice citing Musk’s warnings. During the hearing, academics and the chief executive of OpenAI, a San Francisco lab, dismissed Musk’s predictions or said they were at least many years off.
Some lawmakers stressed the importance of the nation’s leadership in AI development. Congress must “ensure that America remains a world leader throughout the 21st century,” Sen. Ted Cruz, a Texas Republican and subcommittee chairman, said at the time.
DARPA subsequently announced that it was committing $2 billion to AI research projects.
Warnings about the dangers of AI intensified in 2021 when the Vatican, IBM, and Microsoft pledged to develop “ethical AI,” meaning organizations are transparent about how the technology works, respect privacy, and minimize bias. The group called for regulation of facial recognition software, which uses large databases of photos to identify people. In Washington, some lawmakers have tried to create rules for facial recognition technology and for company audits to prevent discriminatory algorithms. The bills went nowhere.
“It’s not a priority and it doesn’t feel urgent to members,” said Mr. Beyer, who failed to garner enough support last year to pass a bill on AI algorithm audits sponsored by Rep. Yvette D. Clarke, a New York Democrat.
More recently, some government officials have tried to close the AI knowledge gap. In January, some 150 legislators and their staff packed a meeting organized by the normally sleepy AI Caucus that featured Jack Clark, a founder of the AI company Anthropic.
Some federal agencies are taking action on AI by enforcing laws already on the books. The Federal Trade Commission has brought enforcement actions against companies that used AI in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque artificial intelligence systems used by credit bureaus could run afoul of anti-discrimination laws.
The FTC has also proposed commercial surveillance regulations to curb the collection of data used in AI technology, and the Food and Drug Administration has published a list of AI-enabled medical devices under its purview.