How do you regulate something that has the potential to help and hurt people, that affects every sector of the economy, and that is changing so rapidly that not even experts can keep up?
That has been the main challenge for governments when it comes to artificial intelligence.
Regulate AI too slowly and you could miss the chance to prevent the technology’s pitfalls and dangerous misuses.
React too quickly and you risk writing bad or harmful rules, stifling innovation, or ending up in the position of the European Union, which first published its AI Act in 2021, just before a wave of new generative AI tools arrived and rendered much of the draft obsolete. (The proposal, which has not yet become law, was later rewritten to cover some of the new technology, but it remains a bit awkward.)
On Monday, the White House announced its own attempt to govern the fast-moving world of AI with a sweeping executive order that imposes new rules on companies and directs a host of federal agencies to begin putting guardrails around the technology.
The Biden administration, like other governments, has been under pressure to do something about the technology since late last year, when ChatGPT and other generative AI apps burst into the public consciousness. AI companies have been sending executives to testify before Congress and brief lawmakers on the technology’s promise and risks, while activist groups have urged the federal government to crack down on dangerous uses of AI, such as building new cyberweapons and creating deceptive deepfakes.
A cultural battle has also broken out in Silicon Valley, with some researchers and experts urging the AI industry to slow down and others pushing it to accelerate at full speed.
President Biden’s executive order attempts to chart a middle path: allowing AI development to continue largely undisturbed, while setting some modest rules and signaling that the federal government intends to keep a close eye on the AI industry in the coming years. Unlike social media, a technology that was allowed to grow unimpeded for more than a decade before regulators took an interest in it, the order shows that the Biden administration has no intention of letting AI develop unchecked.
The full executive order, which runs more than 100 pages, seems to have something for almost everyone.
Concerned AI safety advocates, such as those who signed an open letter this year asserting that AI poses an “extinction risk” on a par with pandemics and nuclear weapons, will be happy that the order imposes new requirements on the companies that build powerful AI systems.
In particular, companies that make the largest AI systems will be required to notify the government and share the results of their safety tests before releasing their models to the public.
These reporting requirements will apply to models above a certain computing-power threshold (more than 100 septillion, or 10^26, integer or floating-point operations, if you’re curious), which will likely include the next generation of models developed by OpenAI, Google, and other major companies building AI technology.
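For a sense of scale, here is a minimal back-of-the-envelope sketch in Python. It uses the common rule of thumb that training a large model costs roughly six operations per parameter per training token; the model and dataset sizes below are hypothetical illustrations, not figures disclosed in the order or by any company.

```python
# Rough check against the executive order's reporting threshold of
# 1e26 (100 septillion) integer or floating-point operations.
THRESHOLD_OPS = 1e26

def estimated_training_ops(n_parameters: float, n_tokens: float) -> float:
    """Common heuristic: training costs ~6 operations per parameter per token."""
    return 6 * n_parameters * n_tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
ops = estimated_training_ops(n_parameters=1e12, n_tokens=20e12)
print(f"Estimated training compute: {ops:.2e} operations")
print("Above the reporting threshold:", ops > THRESHOLD_OPS)  # True: 1.2e26 > 1e26
```

On this estimate, a run of that scale would just cross the line, consistent with the order’s focus on next-generation frontier models rather than today’s smaller systems.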
These requirements will be enforced through the Defense Production Act, a 1950 law that gives the president broad authority to compel American companies to support efforts deemed important to national security. That could give the rules the teeth that the administration’s earlier voluntary AI commitments lacked.
Additionally, the order will require cloud providers that rent computing power to AI developers (a list that includes Microsoft, Google, and Amazon) to tell the government about their foreign customers. And it directs the National Institute of Standards and Technology to develop standardized tests to measure the performance and safety of AI models.
The executive order also contains some provisions that will please supporters of AI ethics: the camp of activists and researchers who worry about the near-term harms of AI, such as bias and discrimination, and who think that long-term fears of AI-driven extinction are overblown.
In particular, the order directs federal agencies to take steps to prevent AI algorithms from being used to exacerbate discrimination in housing, federal benefits programs, and the criminal justice system. And it directs the Commerce Department to develop guidance for watermarking AI-generated content, which could help combat the spread of AI-generated misinformation.
And what do AI companies, the targets of these rules, think of them? Several executives I spoke to on Monday seemed relieved that the White House order does not require them to obtain a license to train large AI models, a proposed measure that some in the industry had criticized as draconian. Nor will it require them to pull any of their current products from the market, or force them to reveal the kinds of information they have tried to keep private, such as the size of their models and the methods used to train them.
Nor does it attempt to curb the use of copyrighted data in training AI models, a common practice that has come under attack from artists and other creative workers in recent months and is now being litigated in court.
And tech companies will benefit from the order’s attempts to loosen immigration restrictions and streamline the visa process for workers with specialized AI experience, as part of a national push to increase AI talent (ai.gov).
Of course, not everyone will be delighted. Hard-line safety activists may wish the White House had imposed stricter limits on the use of large AI models, or blocked the development of open-source models, whose code can be freely downloaded and used by anyone. And some enthusiastic AI boosters may be upset that the government is doing anything at all to limit the development of a technology they consider mostly good.
But the executive order appears to strike a careful balance between pragmatism and caution, and in the absence of congressional action to pass comprehensive AI legislation, it appears to offer the clearest guardrails we are likely to get for the foreseeable future.
There will be other attempts to regulate AI, especially in the European Union, where the AI Act could become law as soon as next year, and in Britain, where a summit of world leaders this week is expected to lead to new efforts to curb AI development.
The White House executive order is a sign that the federal government intends to act quickly. The question, as always, is whether AI itself will move faster.