The government should not “abdicate” its responsibilities and leave the future path of AI solely to Big Tech, Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT and director of MIT’s Center for Deployable Machine Learning, told a congressional panel on Wednesday.
Rather, Mądry said, the government should ask questions about the purpose and explainability of the algorithms corporations use, as a precursor to regulation, which he described as “an important tool” for ensuring that AI is consistent with society’s goals. If the government doesn’t start asking questions, then “I’m extremely concerned” about the future of AI, Mądry said in response to a question from Rep. Gerald Connolly.
Mądry, a leading expert on explainability and AI, testified at a hearing titled “Advances in AI: Are we ready for a technological revolution?” before the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation, a panel of the House Oversight Committee. The other witnesses at the hearing were former Google CEO Eric Schmidt, IBM Vice President Scott Crowder, and Center for AI and Digital Policy Senior Research Director Merve Hickok.
In her opening remarks, the subcommittee chair, Rep. Nancy Mace, cited the book “The Age of AI: And Our Human Future” by Schmidt, Henry Kissinger, and Dan Huttenlocher, dean of the MIT Schwarzman College of Computing. She also drew attention to a March 3 op-ed in The Wall Street Journal in which the three authors summarized the book’s argument and discussed ChatGPT. Mace said that her formal opening remarks had been written entirely by ChatGPT.
In his prepared remarks, Mądry made three general points. First, he noted that AI is “no longer a matter of science fiction” nor is it confined to research labs. It is out in the world, where it can bring enormous benefits but also carries risks.
Second, he said that AI exposes us to “counter-intuitive interactions.” Because AI tools like ChatGPT mimic human communication, he said, people are highly likely to unquestioningly believe what such large language models produce. At worst, Mądry warned, human analytical skills will atrophy. He also said it would be a mistake to regulate AI as if it were human, for example, by asking an AI to explain its reasoning and assuming the resulting answers are credible.
Finally, he said too little attention has been paid to the problems that will result from the nature of the AI “supply chain”: the way AI systems are built on top of one another. At the base are general-purpose systems like ChatGPT, which only a few companies can develop because they are so expensive and complex to build. Layered on top of them are many AI systems designed to handle a particular task, like figuring out whom a company should hire.
Mądry said this layering raises several “policy-relevant” concerns. First, an AI system built this way inherits whatever vulnerabilities or biases exist in the large system at its base, and it depends on the work of a few big companies. Second, the interaction of AI systems is not well understood from a technical standpoint, making AI outcomes even more difficult to predict or explain and the tools difficult to “audit.” Finally, the combination of AI tools makes it hard to know whom to hold accountable when an issue arises: who should be legally responsible and who should address the concern.
In written material submitted to the subcommittee, Mądry concluded: “AI technology is not particularly well-suited for deployment across complex supply chains,” although that is exactly how it is being deployed.
Mądry ended his testimony by calling on Congress to investigate AI issues and be prepared to act. “We are at a tipping point in terms of what AI will bring in the future. Seizing this opportunity means discussing the role of AI, what exactly we want it to do for us, and how to make sure it benefits us all. This is going to be a difficult conversation, but we need to have it, and have it now,” he told the subcommittee.
Testimony from all of the hearing witnesses and a video of the hearing, which lasted about two hours, are available online.