President Biden will issue an executive order on Monday outlining the federal government’s first regulations on artificial intelligence systems. They include requirements that the most advanced A.I. products be tested to ensure they cannot be used to produce biological or nuclear weapons, and that the results of those tests be reported to the federal government.
The testing requirements are a small but central part of what Mr. Biden, in a speech scheduled for Monday afternoon, is expected to describe as a broader government effort to protect Americans from the potential risks that come with the huge advances in A.I. over the past several years.
The regulations will include recommendations, but not requirements, that photos, videos and audio developed by such systems be watermarked to make clear that they were created by A.I. This reflects a growing fear that A.I. will make it far easier to create “deepfakes” and convincing disinformation, especially as the 2024 presidential campaign accelerates.
The United States recently restricted the export of high-performance chips to China to curb its ability to produce so-called large language models, the programs trained on vast accumulations of data that have made chatbots like ChatGPT so effective at answering questions and speeding up tasks. Similarly, the new regulations will require companies that run cloud services to report their foreign clients to the government.
Mr. Biden’s order will be issued days before a meeting of world leaders on A.I. safety hosted by British Prime Minister Rishi Sunak. On the issue of A.I. regulation, the United States has lagged behind the European Union, which has been drafting new laws, and other nations, such as China and Israel, which have issued proposed regulations. Since ChatGPT, the A.I.-powered chatbot, exploded in popularity last year, lawmakers and regulators around the world have grappled with how artificial intelligence could disrupt jobs, spread disinformation and potentially develop its own brand of intelligence.
“President Biden is implementing the strongest set of actions any government in the world has ever taken on A.I. security and trust,” said Bruce Reed, the White House deputy chief of staff. “It is the next step in an aggressive strategy to do everything possible on all fronts to realize the benefits of A.I. and mitigate the risks.”
The new U.S. rules, some of which will take effect in the next 90 days, are likely to face many challenges, some legal and some political. But the order is aimed at more advanced future systems and largely does not address immediate threats from existing chatbots that could be used to spread disinformation related to Ukraine, Gaza or the presidential campaign.
The administration did not release the language of the executive order on Sunday, but officials said some of the order’s steps would require approval from independent agencies, such as the Federal Trade Commission.
The order affects only American companies, but because software development happens around the world, the United States will face diplomatic challenges in enforcing the regulations, which is why the administration is trying to encourage allies and adversaries alike to develop similar rules. Vice President Kamala Harris will represent the United States at a conference in London on the issue this week.
The regulations also aim to influence the technology sector by establishing safety and consumer protection standards for the first time. Wielding the power of the federal purse, the White House’s directives to federal agencies aim to force companies to meet the standards set by their government clients.
“This is an important first step, and more importantly, the executive orders set standards,” said Lauren Kahn, a senior research analyst at Georgetown University’s Center for Security and Emerging Technology.
The order directs the Department of Health and Human Services and other agencies to create clear safety standards for the use of A.I. and to streamline systems to make purchasing A.I. tools easier. It directs the Department of Labor and the National Economic Council to study A.I.’s effect on the labor market and propose possible regulations. And it requires agencies to provide clear guidance to landlords, government contractors and federal benefits programs to prevent discrimination from algorithms used in A.I. tools.
But the White House’s authority is limited and some of the directives are not enforceable. For example, the order requires agencies to strengthen internal guidelines to protect consumers’ personal data, but the White House also recognized the need for privacy legislation to fully ensure data protection.
To foster innovation and bolster competition, the White House will call for the FTC to step up its role as a watchdog for consumer protection and antitrust violations. But the White House does not have the authority to order the FTC, an independent agency, to create regulations.
Lina Khan, the chair of the trade commission, has already signaled her intention to act more aggressively as an A.I. watchdog. In July, the commission opened an investigation into OpenAI, the maker of ChatGPT, over potential consumer privacy violations and accusations of spreading false information about individuals.
“While these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Ms. Khan wrote in a guest essay in The New York Times in May.
The tech industry has said it supports the regulations, although companies disagree over the level of government oversight. Microsoft, OpenAI, Google and Meta are among 15 companies that have agreed to voluntary security commitments, including having third parties stress test their systems for vulnerabilities.
Mr. Biden has called for regulations that support A.I.’s potential to aid medical and climate research, while also creating guardrails against abuse. He has underscored the need to balance regulation with support for American companies in a global race for A.I. leadership. To that end, the order directs agencies to expedite the visa process for highly skilled immigrants and nonimmigrants with artificial intelligence expertise to study and work in the United States.
Central regulations to protect national security will be outlined in a separate document, called the National Security Memorandum, to be produced next summer. Some of those regulations will be public, but many are expected to remain classified, particularly those related to measures to prevent foreign nations or non-state actors from exploiting artificial intelligence systems.
A senior Energy Department official said last week that the National Nuclear Security Administration had already begun exploring how such systems could accelerate nuclear proliferation by solving the complex problems involved in building a nuclear weapon. And many officials have focused on how these systems could allow a terrorist group to assemble what it needs to produce biological weapons.
Still, lawmakers and White House officials have cautioned against moving too quickly to write laws for rapidly changing artificial intelligence technologies. The European Union, for instance, did not account for large language models in its first legislative drafts.
“If you move too quickly on this, you can ruin it,” Senator Chuck Schumer, Democrat of New York and the majority leader, said last week.