The Department of Homeland Security has seen the opportunities and risks of artificial intelligence firsthand. It found a trafficking victim years later using an AI tool that conjured up an image of the child a decade older. But it has also been misled in investigations by deepfake images created by AI.
Now, the department is becoming the first federal agency to adopt the technology with a plan to incorporate generative AI models across a wide range of divisions. In partnership with OpenAI, Anthropic, and Meta, it will launch pilot programs using chatbots and other tools to help combat drug and human trafficking crimes, train immigration officials, and prepare for emergency management across the country.
The rush to deploy this unproven technology is part of a larger struggle to keep up with the changes brought about by generative AI, which can create hyper-realistic images and videos and imitate human speech.
“You can't ignore it,” Alejandro Mayorkas, secretary of the Department of Homeland Security, said in an interview. “And if you don't lean into the future by recognizing and being prepared to address its potential for good and harm, it will be too late and that is why we are moving forward quickly.”
The plan to incorporate generative ai across the agency is the latest demonstration of how new technology like OpenAI's ChatGPT is forcing even the most serious industries to reevaluate the way they do their work. Still, government agencies like DHS are likely to face some of the harshest scrutiny for the way they use the technology, which has sparked rancorous debate because it has at times proven unreliable and discriminatory.
Agencies across the federal government have scrambled to form plans following President Biden's executive order, issued late last year, calling for the creation of safety standards for AI and its adoption across the federal government.
DHS, which employs 260,000 people, was created after the 9/11 terrorist attacks and is charged with protecting Americans within the country's borders, including policing human and drug trafficking, protecting critical infrastructure, responding to disasters and patrolling the border.
As part of the initiative, the agency plans to hire 50 artificial intelligence experts to work on solutions to keep the country's critical infrastructure safe from AI-generated attacks and to combat the use of the technology to generate child sexual abuse material and create biological weapons.
In pilot programs, on which it will spend $5 million, the agency will use AI models such as ChatGPT to assist in investigations of child abuse materials, human trafficking and drugs. It will also work with the companies to sift through its vast troves of text-based data to find patterns that could help investigators. For example, a detective searching for a suspect driving a blue van will, for the first time, be able to search across all homeland security investigations for the same type of vehicle.
DHS will use chatbots to train immigration officials, who have previously practiced with other employees and contractors posing as refugees and asylum seekers. AI tools will allow officials to get more training through mock interviews. Chatbots will also comb through information about communities across the country to help them create disaster relief plans.
The agency will report the results of its pilot programs at the end of the year, said Eric Hysen, the department's chief information officer and chief AI officer.
The agency chose OpenAI, Anthropic and Meta to experiment with a variety of tools and will use the cloud providers Microsoft, Google and Amazon in its pilot programs. “We can't do this alone,” he said. “We need to work with the private sector to help define what the responsible use of generative AI is.”