A long-running Senate working group has issued its policy recommendation for federal AI funding: $32 billion annually, covering everything from infrastructure to grand challenges to national security risk assessments.
This “roadmap” is not a bill or a detailed policy proposal, but it gives a sense of the scale lawmakers and “stakeholders” have in mind when they get down to the real thing, though the likelihood of that happening during an election year is vanishingly small.
In a final report released by the office of Senate Majority Leader Chuck Schumer (D-NY), the bipartisan task force identifies the most important areas of investment to keep the United States competitive against rivals abroad.
Here are some highlights from the roadmap:
- “An intergovernmental AI research and development effort, including relevant infrastructure,” which means getting the DOE, NSF, NIST, NASA, Commerce, and a half-dozen other agencies and departments to format and share data in ways AI systems can use. In some ways this seemingly simple task is the most daunting of all, and it will likely take years to complete.
- Fund American AI hardware and software work at the semiconductor and architectural level, both through the CHIPS Act and elsewhere.
- Fund and further expand the National AI Research Resource, which is still in its infancy.
- “AI Grand Challenges” to stimulate innovation through competition in “AI applications that would fundamentally transform the process of science, engineering or medicine, and in fundamental issues in the design of safe, secure, and efficient software and hardware.”
- “Support AI and cybersecurity preparedness” for elections, particularly to “mitigate AI-generated content that is factually false, while continuing to protect First Amendment rights.” Probably harder than it looks!
- “Modernize the federal government and improve the delivery of government services” by “upgrading IT infrastructure to utilize modern data science and artificial intelligence technologies and deploying new technologies to find inefficiencies in U.S. code, federal rules, and procurement programs.” I understand what they're getting at here, but that's a lot to ask of an AI program.
- Lots of vague but important defense-related stuff, like “AI-enhanced Chemical, Biological, Radiological, and Nuclear (CBRN) Threat Assessment and Mitigation by DOD, Department of Homeland Security (DHS), DOE, and other relevant agencies.”
- Examine the “regulatory gap” in finance and housing, where AI-driven processes can be used to further marginalize vulnerable groups.
- “Review whether other potential uses of AI should be extremely limited or prohibited.” This comes after a section on potentially harmful applications like AI-powered social scoring.
- Legislation prohibiting ai-generated child sexual abuse material and other non-consensual images and media.
- Ensure the NIH, HHS, and FDA have the tools necessary to evaluate AI tools in medical and healthcare applications.
- “Establish a consistent approach to public-facing transparency requirements for AI systems,” both private and public.
- Improve the general availability of “content provenance information,” i.e. training data: what was used to make a model? Is that model itself being used to train others? And so on. AI makers will fight this tooth and nail until they can sufficiently sanitize the ill-gotten data they used to create today's AIs.
- Look at the risks and benefits of using private versus open-source AI (if the latter ever exists in a form that can scale).
You can read the full report here; there are plenty more points where those came from (a longer list than I anticipated). No budget figures are suggested for the individual items.
Since the next six months will be spent mostly on election-related mumbo-jumbo, this document serves more to put a lot of general ideas into play than to spur actual legislation. Much of what is proposed would require months, if not years, of research and iteration before arriving at a law or standard.
The AI industry is advancing faster than the rest of the tech sector, which means it is outpacing the federal government by several orders of magnitude. Although the priorities listed above are mostly prudent, one wonders how many of them will still be relevant when Congress or the White House actually takes action.