IN a move that sounds ripped straight from the film Minority Report, the UK Government is trying to build a system to predict who is likely to commit murder before they actually do it.
The initiative, formerly dubbed the “homicide prediction project”, is now titled Sharing Data to Improve Risk Assessment. It appears to be an effort to merge vast datasets – criminal records, probation histories and who knows what else – into a single predictive algorithm that flags potential future killers.
The Ministry of Justice claims the goal is public safety, to “intervene early” and reduce harm. But critics argue it is yet another step into a tech-dystopian state, where data replaces due process and the assumption of innocence is swapped for predictive suspicion.
Civil liberties groups are ringing alarm bells. The UK’s own data watchdog says the project could breach data protection laws. Campaigners warn it risks dragging people into surveillance based on who they are, not what they have done. It is all too easy to imagine how this could become another blunt instrument that disproportionately targets working-class communities, ethnic minorities and people already entangled in the justice system.
Sound familiar? It should.
In the US, similar “precog” systems have already been rolled out. Parole boards in cities such as Philadelphia and Baltimore use software developed by criminologist Richard Berk to assess who might commit a violent crime post-release. The model pulls from tens of thousands of crime records. It crunches data such as the offender’s age and the type of prior offence. And yes – it predicts who is more likely to kill.
Berk defends his approach, saying: “We’re squeezing more information out of those predictors by having the computer document what the relationships are rather than imposing them a priori.”
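For readers wondering what a tool like this looks like under the bonnet, the sketch below is a toy illustration only. The choice of a random-forest model, the features, the figures and the threshold are illustrative assumptions, not details of Berk’s system or of the Ministry of Justice project. It simply shows a model learning “risk” from past records and then scoring a new case:

```python
# Illustrative sketch only: a toy risk-scoring model of the general kind
# described above. The data, features and model choice are invented;
# this is not Berk's system or the Ministry of Justice's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "records": age at release, number of prior offences,
# and whether the most serious prior offence was violent (1) or not (0).
n = 10_000
age = rng.integers(18, 70, n)
priors = rng.poisson(2.0, n)
violent_prior = rng.integers(0, 2, n)
X = np.column_stack([age, priors, violent_prior])

# Synthetic outcome: a later violent conviction. Generated from the
# features purely so the toy model has something to learn -- in real
# systems this would come from historical justice records.
logit = -3.0 - 0.04 * (age - 18) + 0.4 * priors + 1.0 * violent_prior
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# The model "documents what the relationships are" by learning them
# from the records rather than from hand-set weights.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score a hypothetical new case: a 22-year-old with three priors, one
# violent. The output is a probability, not a verdict -- everything
# downstream (thresholds, interventions, parole decisions) is policy.
new_case = np.array([[22, 3, 1]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Predicted risk score: {risk:.2f}")
```

The only point worth taking from the sketch is that the score at the end is entirely a product of the historical records fed in at the start – which is exactly where the bias problem lives.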
But it does not stop at serious criminals. Even people with relatively minor records can find themselves flagged, monitored more heavily, or denied parole based on what a machine thinks they might do.
The consequences? Real-life people being judged not on their actions, but on a predictive model’s idea of their potential threat. It’s the digital equivalent of pre-emptive punishment – a system ripe for abuse, especially by a state looking for high-tech shortcuts to crime control.
In the UK, this isn’t the first attempt either. Police in the West Midlands once tried their own predictive violence tool. It was scrapped after experts found it riddled with bias and fundamentally unfit for purpose.
But the state, it seems, never gives up on a bad idea.
The bottom line: giving algorithms the power to predict – and possibly shape – police or legal decisions is a dangerous road. These systems are only as good as the data we feed them, and that data is already full of the same biases we pretend we’ve moved past.
Berk himself says race isn’t an input to any of his systems, and believes his own research shows his algorithms produce similar risk scores regardless of race.
If the Government wants to tackle violent crime, it should start by funding frontline services. But it shouldn’t build a machine that decides who is dangerous before they even pick up a weapon.