The real-life risks of predictive policing—and what one city is doing differently

The 2002 sci-fi thriller Minority Report depicts a dystopian future where a specialized police unit is tasked with arresting people for crimes they have not yet committed. Directed by Steven Spielberg and based on a short story by Philip K. Dick, the drama revolves around “PreCrime”—a system informed by a trio of psychics, or “precogs,” who anticipate future homicides, allowing police officers to intervene and prevent would-be assailants from claiming their targets’ lives.

The film probes weighty ethical questions: How can someone be guilty of a crime they haven’t yet committed? And what happens when the system gets it wrong?

While there is no such thing as an all-seeing “precog,” key components of the future that Minority Report envisions have become reality even faster than its creators imagined. For more than a decade, police departments across the globe have been using data-driven systems geared toward predicting when and where crimes might occur and who might commit them.

Far from an abstract or futuristic conceit, predictive policing is a reality. And market analysts are predicting a boom for the technology.

Given the challenges of using predictive machine learning effectively and fairly, predictive policing raises significant ethical concerns. With no technological fix on the horizon, one approach to addressing these concerns stands out: treating government use of the technology as a matter of democratic accountability.

Troubling history

Predictive policing relies on artificial intelligence and data analytics to anticipate potential criminal activity before it happens. It can involve analyzing large datasets drawn from crime reports, arrest records and social or geographic information to identify patterns and forecast where crimes might occur or who may be involved.

Law enforcement agencies have used data analytics to track broad trends for many decades. Today’s powerful AI technologies, however, take in vast amounts of surveillance and crime report data to provide much finer-grained analysis.

Police departments use these techniques to help determine where they should concentrate their resources. Place-based prediction focuses on identifying high-risk locations, also known as hot spots, where crimes are statistically more likely to happen. Person-based prediction, by contrast, attempts to flag individuals who are considered at high risk of committing or becoming victims of crime.

These types of systems have been the subject of significant public concern. Under a so-called intelligence-led policing program in Pasco County, Florida, the sheriff’s department compiled a list of people considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subject to random visits from police officers and were cited for things such as missing mailbox numbers and overgrown grass.

Four residents sued the county in 2021, and last year they reached a settlement in which the sheriff’s office admitted that it had violated residents’ constitutional rights to privacy and equal treatment under the law. The program has since been discontinued.

This is not just a Florida problem. In 2020, Chicago decommissioned its “Strategic Subject List,” a system in which police used analytics to predict which prior offenders were likely to commit new crimes or become victims of future shootings. In 2021, the Los Angeles Police Department discontinued its use of PredPol, a software program designed to forecast crime hot spots that was criticized for low accuracy and for reinforcing racial and socioeconomic biases.

Necessary innovations or dangerous overreach?

The failure of these high-profile programs highlights a critical tension: Law enforcement agencies often champion AI-driven tools as essential to public safety, while civil rights groups and scholars raise concerns over privacy violations, accountability gaps, and a lack of transparency. And despite these prominent retreats from predictive policing, many smaller police departments continue to use the technology.

Most American police departments lack clear policies on algorithmic decision-making and provide little to no disclosure about how the predictive models they use are developed, trained, or monitored for accuracy or bias. A Brookings Institution analysis found that in many cities, local governments had no public documentation on how predictive policing software functioned, what data was used, or how outcomes were evaluated.

This opacity is what’s known in the industry as a “black box.” It prevents independent oversight and raises serious questions about the accountability structures surrounding AI-driven decision-making. If a citizen is flagged as high-risk by an algorithm, what recourse do they have? Who audits the fairness of these systems? What independent oversight mechanisms are available?

These questions are driving contentious debates in communities about whether predictive policing as a method should be reformed, more tightly regulated, or abandoned altogether. Some people view these tools as necessary innovations, while others see them as dangerous overreach.

A better way in San Jose

But there is evidence that data-driven tools grounded in democratic values of due process, transparency, and accountability may offer a stronger alternative to today’s predictive policing systems. What if the public could understand how these algorithms function, what data they rely on, and what safeguards exist to prevent discriminatory outcomes and misuse of the technology?

The city of San Jose, California, has embarked on a process that is intended to increase transparency and accountability around its use of AI systems. San Jose maintains a set of AI principles requiring that any AI tools used by city government be effective, transparent to the public, and equitable in their effects on people’s lives. City departments also are required to assess the risks of AI systems before integrating them into their operations.

Implemented well, these measures can effectively open the black box, dramatically reducing the degree to which AI companies can hide their code or their data behind protections such as trade secrets. Enabling public scrutiny of training data can reveal problems such as racial or economic bias, which can be mitigated but are extremely difficult, if not impossible, to eradicate.

Research has shown that when citizens feel that government institutions act fairly and transparently, they are more likely to engage in civic life and support public policies. Law enforcement agencies are likely to have stronger outcomes if they treat technology as a tool—rather than a substitute—for justice.


Maria Lungu is a postdoctoral researcher of law and public administration at the University of Virginia.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

