
Tuesday Talk*: Is AI In Law Enforcement Worth It?

While calling it “artificial intelligence” is somewhat new, the use of algorithms in law enforcement has been going on for a while now, and nobody knows whether the benefits outweigh the costs.

The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put people’s safety or well-being at risk or violate their rights. If these proposed “minimum practices” are not met, technologies that fall short would be prohibited after next Aug. 1.

As tech emerged purporting to give law enforcement a new mechanism to be more effective, it was adopted without either fanfare or critique. Facial recognition, for example, is some really cool stuff in the movies, but it has also produced some spectacular failures. Notably, the failures tend to be very much racial, as its effectiveness in recognizing black people doesn’t seem to be nearly as good as with white people. Much as we don’t leap to find excuses to blame racism, this is very much a racial problem.

Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.

All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.

Other technologies, from license plate readers to ShotSpotter, have been criticized for a variety of issues, from intrusiveness to error to ease of manipulation, hiding abuse behind the curtain of tech neutrality. They may be great when they work, but are they great enough to overcome the times when they don’t? How would we know?

As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement’s use of emerging technologies. These include false arrests and police seizures, including a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies including facial recognition and automated license plate readers.

The Office of Management and Budget is proposing “minimum practices” so the rules can catch up to the technology’s use and create a paradigm for deciding whether it’s overall a good thing or a bad thing, whether we are willing to suffer the cost of errors for the benefits tech purports to provide.

Here are highlights of the proposal:

- Agencies must be transparent and provide a public inventory of cases in which A.I. was used.
- The cost and benefit of these technologies must be assessed, a consideration that has been altogether absent.
- Even if the technology provides real benefits, the risks to individuals — especially in marginalized communities — must be identified and reduced. If the risks are too high, the technology may not be used.
- The impact of A.I.-driven technologies must be tested in the real world, and be continually monitored.
- Agencies would have to solicit public comment before using the technologies, including from the affected communities.

In the rush to embrace cool technology as it appears on the market, law enforcement has done little to implement safeguards and limits on its use. If it makes their job easier, or is at least believed to, they buy in. They don’t ask the public whether it’s a good idea. They don’t admit to its failings, which are usually swept under the rug since nobody wants to admit that their shiny new toy sucks, at least toward some people. And the determination of whether the tech is worth it is largely left up to law enforcement itself, without the rest of government or the public getting a chance to question it or call bullshit on its implementation.

Should law enforcement be empowered to latch onto any new tech that promises to be the cool new solution to crime and capture, or should it first require public comment and, to the extent anyone in government cares, approval? Do we wait until facial recognition is proven to be no more valid than dog sniffs to have our say, long after it’s become too deeply incorporated into police practice, and likely the law, to ever disentangle, because it turns out to be mostly a big sham? But what if it really does work, and all the harm it might have stopped is inflicted while we dither around over its potential flaws?

*Tuesday Talk rules apply, within reason.
