The US government is taking its first tentative steps to lay down rules for artificial intelligence tools as the frenzy over generative AI and chatbots reaches fever pitch.
The US Commerce Department announced Tuesday that it is officially seeking public comment on how to create accountability measures for AI, asking for help on how to advise US lawmakers as they address the technology.
“In the same way that financial audits built confidence in the accuracy of companies’ financial statements, accountability mechanisms for AI can help ensure that an AI system is trustworthy,” said Alan Davidson, director of the National Telecommunications and Information Administration (NTIA), at a press conference at the University of Pittsburgh.
Davidson said the NTIA is seeking comment from the public, including researchers, industry groups, and privacy and digital rights organizations, on the development of audits and evaluations of AI tools created by private industry. He also said the NTIA seeks to establish guardrails that allow the government to determine whether AI systems work as companies claim they do, whether they are safe and effective, whether they have discriminatory outcomes or “reflect unacceptable levels of bias,” whether they disseminate or perpetuate misinformation, and whether they respect people’s privacy.
“We have to move fast because these AI technologies move very fast in some respects,” Davidson said. “We’ve had the luxury of spending time with some of those other technologies…this seems a lot more urgent.”
The Biden administration previously introduced guidance on the development of artificial intelligence systems in the form of a “bill of rights” that outlines five principles companies should consider for their products. These include data privacy, protections against algorithmic discrimination, and transparency about when and how an automated system is used.
The National Institute of Standards and Technology has also published an AI risk management framework, a set of voluntary guardrails that companies can use to try to limit the risk of harm to the public.
In addition, Davidson said, many federal agencies are looking at the ways that the current rules on the books may apply to AI.
And US lawmakers have introduced more than 100 AI-related bills in 2021, he noted. “That’s a big difference from the early days of, say, social media, cloud computing, or even the internet, when people weren’t really paying attention,” Davidson said.
That being said, the federal government has historically been slow to respond to rapidly advancing technologies with national regulations, particularly in comparison to European countries. Tech companies in the US, for example, can collect and share user data relatively free of federal restrictions. That allowed data brokers, companies that buy and sell user data, to thrive and made it harder for consumers to keep the private information they share with tech companies out of the hands of third parties or law enforcement.
Until now, chatbots and other AI tools have been developed and released publicly without restriction by any federal rule or regulatory framework. That has enabled rapid adoption of AI tools like ChatGPT by companies across all industries, despite concerns about privacy, misinformation, and a lack of transparency about how chatbots have been trained.
European regulators have proposed a legal framework that would classify AI systems by risk: unacceptable risk, high risk, limited risk, and minimal risk. Passage of the Artificial Intelligence Act, proposed in 2021, would position the EU as a world leader in AI regulation, but the measure has recently faced pushback from companies investing in the booming chatbot industry.
Microsoft, for example, has argued that because chatbots have more than one purpose and are used for low-risk activities, they cannot be easily categorized, even though they can and have carried out activities considered “high-risk,” such as spreading disinformation.
Davidson said that’s why the government needs public input to determine what a responsible AI regulatory framework should look like.
“Good guardrails implemented with care can promote innovation,” he said. “They let people know what good innovation looks like, they provide safe spaces to innovate, and they address the very real concerns we have about harmful consequences.”