Although their attempts to keep up with advances in artificial intelligence have mostly failed, regulators around the world are taking very different approaches to controlling the technology. The result is a highly fragmented and confusing global regulatory landscape for a borderless technology that promises to transform labor markets, contribute to the spread of disinformation or even present a risk to humanity.
The main frameworks for regulating AI include:
European risk-based law: The European Union's AI Act, which lawmakers were negotiating on Wednesday, assigns regulations proportional to the level of risk posed by an AI tool. The idea is to create a sliding scale of rules that imposes the greatest restrictions on the riskiest AI systems. The law would classify AI tools under four designations: unacceptable, high, limited and minimal risk.
Unacceptable risks include AI systems that perform social scoring of individuals or real-time facial recognition in public places. Those would be prohibited. Tools that carry less risk, such as software that generates manipulated videos and deepfake images, would have to disclose that people are viewing AI-generated content. Violators could be fined 6 percent of their global sales. Minimal-risk systems include spam filters and AI-generated video games.
US voluntary codes of conduct: The Biden administration has given companies leeway to voluntarily police themselves for safety and security risks. In July, the White House announced that several AI makers, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, had agreed to self-regulate their systems.
The voluntary commitments included third-party security testing of tools, known as red teaming; research into bias and privacy concerns; sharing information about risks with governments and other organizations; developing tools to address societal challenges such as climate change; and transparency measures to identify AI-generated material. Companies were already fulfilling many of those commitments.
American technology-based law: Any substantial regulation of AI will have to come from Congress. Senate Majority Leader Chuck Schumer, D-N.Y., has promised a comprehensive AI bill, possibly as early as next year.
So far, lawmakers have introduced bills focused on the production and deployment of AI systems. The proposals include creating an agency, akin to the Food and Drug Administration, that could write regulations for AI providers, approve licenses for new systems and set standards. Sam Altman, the CEO of OpenAI, has supported the idea. Google, however, has proposed that the National Institute of Standards and Technology, founded more than a century ago and lacking regulatory powers, serve as the hub of government oversight.
Other bills focus on copyright violations by AI systems that ingest intellectual property to build their models. Proposals have also been introduced on election security and on limiting the use of deepfakes.
China is making rapid progress in regulating speech: Since 2021, China has moved swiftly to roll out regulations on recommendation algorithms, synthetic content such as deepfakes, and generative AI. The rules prohibit price discrimination by recommendation algorithms on social media, for example. AI makers must label AI-generated synthetic content. And draft rules for generative AI, such as OpenAI's chatbot, would require that the training data and the content the technology creates be "true and accurate," which many see as an attempt to censor what the systems say.
Global cooperation: Many experts have said that effective regulation of AI will require global collaboration. So far, those diplomatic efforts have produced few concrete results. One idea that has been floated is the creation of an international agency, similar to the International Atomic Energy Agency, which was created to limit the spread of nuclear weapons. A central challenge will be overcoming the geopolitical mistrust, economic competition and nationalist impulses that have become so intertwined with the development of AI.