San Francisco’s board of supervisors recently voted to allow its police to deploy robots equipped with lethal explosives, before reversing the decision several weeks later. In the United States, the vote sparked a fierce debate about the militarization of the police, but it raises fundamental questions for all of us about the role of robots and AI in fighting crime, how police decisions are made and, indeed, the very purpose of our criminal justice systems.
In the UK, officers operate on the principle of ‘policing by consent’ rather than by force. But according to the 2020 Crime Survey for England and Wales, public trust in the police has fallen from 62% in 2017 to 55%. A recent survey asked Londoners whether the Met was institutionally sexist and racist; nearly two-thirds answered “probably” or “definitely”.
This is perhaps not surprising, given high-profile cases of crimes committed by serving police officers such as Wayne Couzens, who murdered Sarah Everard, and David Carrick, who recently pleaded guilty to 49 counts, including rape and sexual assault.
The new commissioner, Mark Rowley, has said “we have to prepare for more painful stories” and warned that two to three officers a week are expected to appear in court on criminal charges in the coming months. But what if the problem with policing goes beyond the so-called “bad apples”, beyond even the culture and politics that allow discrimination to flourish unchecked? What if it’s also embedded in the way human beings make decisions?
Policing requires hundreds of judgments to be made each day, often under conditions of extreme pressure and uncertainty: who and where to watch, which cases and victims to prioritize, whom to believe, and which lines of investigation to pursue. As Malcolm Gladwell explains in Blink, these snap decisions, often described as “hunches”, are based on our individual social and emotional experiences, but also on biases we’ve all internalized from society at large, such as racism, sexism, homophobia and transphobia.
So could artificial intelligence offer a fairer and more efficient path to 21st-century policing? Broadly speaking, there are two types of AI: “narrow AI”, which can perform specific tasks such as image recognition, and “general-purpose AI”, which can make much more complex judgments and decisions spanning all kinds of domains. General-purpose AI is based on deep learning: it absorbs vast amounts of data and uses them to continually adjust and improve its performance, and it has the potential to take over more and more of the tasks that humans perform at work. ChatGPT, a state-of-the-art language model that can write research papers, articles and even poems in a matter of seconds, is the latest example to capture the public imagination.
AI can already search through millions of images and analyze vast numbers of social media posts to identify and locate potential suspects. Drawing on other types of data, it could also help predict the times and places where crime is most likely to occur. In individual cases, it could test hypotheses and filter out errors, allowing officers to focus on the lines of inquiry best supported by the available evidence.
Faster, fairer, evidence-based decisions at a fraction of the cost certainly sounds appealing, but early research suggests the need for caution. So-called “predictive policing” uses historical information to identify potential perpetrators and future victims, but studies have shown that the source data for this type of modeling can be riddled with preconceptions, generating results that, for example, disproportionately categorize people of color as “dangerous” or “lawless”. A 2016 Rand Corporation study concluded that Chicago’s “heat map” of anticipated violent crime failed to reduce gun violence but led to more arrests in low-income and racially diverse neighborhoods.
More fundamentally, AI is designed to achieve the goals we set for it. So, as Professor Stuart Russell warned in his 2021 Reith Lectures, any task must be carefully defined within a framework that benefits humanity, so that the order to fetch water does not, as in The Sorcerer’s Apprentice, end in an unstoppable flood.
In time, we may learn how to engineer out bias and avoid perverse consequences, but will that be enough? As Professor Batya Friedman of the University of Washington’s Information School has observed: “Justice is more than a correct decision. It is a process in which human beings bear witness to each other, acknowledge each other, hold each other accountable, restore each other.”
Instead of debating what AI will or will not be able to do in the future, we should ask what we want from our police and criminal justice systems, and how AI could help us achieve it. Our ambitions are unlikely to be realized simply by replacing officers with computers, but think what could be achieved by a human-machine team, in which each learns from and adds value to the other. What if we subjected humans to the same scrutiny that we rightly place on AI, exposing our biases and assumptions to constant and constructive challenge? What if AI could take on repetitive and resource-intensive tasks, giving police officers what Professor Eric Topol, writing about the AI revolution in medicine, has called the “gift of time”? This would allow them to treat both victims and the accused with the dignity that only humans can embody and that all members of society deserve.
Perhaps this would win the public trust and consent on which policing really depends.
Jo Callaghan is an AI-focused workplace strategist and the author of the debut detective novel In the Blink of an Eye, published by Simon & Schuster.
Further reading
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (Penguin, £10.99)
Blink by Malcolm Gladwell (Penguin, £10.99)
The Political Philosophy of AI by Mark Coeckelbergh (Polity, £16.99)