Computers, Privacy & the Constitution

Data and predictive law enforcement in the age of computing

The age of computing has meant that large amounts of data are readily available together with the potential to automate processes for analysing this data. The ability to predict human behaviour is one progeny of the marriage of these concepts, to the great excitement of a range of actors. In particular, the state has a profound interest in predicting human behaviour, particularly in the politically lucrative area of law and order. In 1996, criminologist David Garland presciently identified 'one of the foundational myths of modern societies: namely, the myth that the sovereign state is capable of providing security, law and order, and crime control within its territorial boundaries.' This hoary beast of a myth has become a fire-breathing dragon with the rise of computing and data science, particularly in the context of the war on terror. There is something appealingly scientific about using computing to predict and prevent crime, but we need to remember that technology is not neutral, and we need to think very carefully about the future we are creating.

The present: predictive policing

Top cops around the country are announcing that predictive policing is upon us. Academic experiments have been running for a while, and last year, Microsoft announced it would be assisting police in developing technology for predictive policing purposes. Microsoft was somewhat cautious about framing the purpose of such technology: 'predictive policing is not about making arrests, but about preventing crime through more effective and efficient resource allocation.' Other commentators have excitedly noted, without irony, the similarities with the film Minority Report. This is no longer simply fantasy: a study from UCLA has found that predictive policing algorithms actually reduce crime in the field. Moreover, a group of criminologists claim they have used a similar species of program to identify Banksy. 'These results support previous suggestions that analysis of minor terrorism-related acts (e.g., graffiti),' the abstract gleefully notes, 'could be used to help locate terrorist bases before more serious incidents occur.'

What are we to make of this? Should critics be silent, and agree that security is the beneficial outcome of a rather disquieting development in technology?

Predictive policing algorithms are based primarily on crime statistics: in other words, on the past behaviour of law enforcement in tackling crime. It is therefore arguable that crime statistics do not reflect what crimes are actually occurring; a better way to think about this data set is that it provides a picture of the state's response to crime. This creates a real risk of the biases and social trends we see in everyday policing being reproduced in the supposedly more objective and scientific methodology of computerized predictive policing. Feeding data into automated processes without careful analysis of the assumptions being made can provide misleading answers to important questions.
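A toy simulation illustrates the feedback loop. All of the numbers and district names below are hypothetical, chosen only for illustration: two districts with identical underlying crime rates, where one begins with more recorded incidents simply because it was historically patrolled more heavily, and patrols are then allocated in proportion to recorded crime.

```python
# Hypothetical illustration: districts "A" and "B" have the SAME true
# crime rate, but A starts with more recorded incidents because it was
# historically patrolled more heavily.
TRUE_RATE = 0.10                      # identical actual offending rate
TOTAL_PATROLS = 1000                  # patrols allocated each year
recorded = {"A": 60.0, "B": 40.0}     # biased historical statistics

for year in range(10):
    total = recorded["A"] + recorded["B"]
    for district in recorded:
        # Patrols follow recorded crime, and crime is only recorded
        # where police patrol, so the statistics feed on themselves.
        patrols = TOTAL_PATROLS * recorded[district] / total
        recorded[district] += patrols * TRUE_RATE

ratio = recorded["A"] / recorded["B"]
print(round(ratio, 2))  # prints 1.5: the historical bias never washes out
```

Even after many iterations, district A appears to have 50 percent more crime than district B, despite identical true rates: the data set records the state's deployment decisions, not the underlying behaviour.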

The future of law enforcement?

A stark example of this dynamic and the dangers it creates, albeit in a slightly different context, was revealed in documents leaked by Edward Snowden. The Skynet program run by the NSA applies an algorithm to bulk data in order to identify terrorists. This algorithm was developed by taking data about 'known terrorists' and comparing it with a wide range of behavioural data drawn from mobile phone use.

After no doubt many taxpayer dollars and NSA man-hours, this algorithm's highest-rated target was Ahmad Zaidan, who is not a terrorist at all but rather the Islamabad bureau chief for Al-Jazeera. The NSA documents refer to Zaidan as a MEMBER OF AL-QA'IDA, again, seemingly without irony.

There was a range of problems with the Skynet program, but one of the most obvious appears to be a rejection of one of the most basic principles of data science: correlation does not imply causation. While Zaidan may meet with known terrorism suspects, travel with them, and share social networks with them, he is clearly engaging in this behaviour as part of his role as a journalist. Zaidan may fit the algorithm's profile of a terrorist perfectly, yet it is immediately obvious to any human that he does not belong in that category at all.
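There is a second statistical trap lurking here: the base-rate problem. When a classifier, however accurate, is applied to an entire population in which actual targets are vanishingly rare, the people it flags will be overwhelmingly innocent. Back-of-the-envelope arithmetic makes this concrete; every number below is hypothetical, chosen only for illustration.

```python
# Hypothetical figures: a population-wide scan for an extremely rare target.
population = 55_000_000        # e.g. mobile phone users scanned
actual_targets = 100           # true targets hidden in that population
sensitivity = 0.99             # fraction of true targets the model flags
false_positive_rate = 0.001    # just 0.1% of innocent people flagged

flagged_guilty = actual_targets * sensitivity
flagged_innocent = (population - actual_targets) * false_positive_rate

# Precision: of everyone flagged, what fraction is actually a target?
precision = flagged_guilty / (flagged_guilty + flagged_innocent)

print(f"innocents flagged: {flagged_innocent:,.0f}")       # ~55,000 people
print(f"P(actual target | flagged) = {precision:.2%}")     # well under 1%
```

A false-positive rate that sounds negligible in the abstract still sweeps in tens of thousands of innocent people, and the chance that any individual flagged person is an actual target is a fraction of one percent. An algorithm can be 'highly accurate' and still be wrong about almost everyone it accuses.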

Indeed, our approach to terrorism is perhaps a future echo of more general trends in policing. The intelligence and law enforcement resources devoted to terrorism are inordinately large, and they have been deployed to prevent a social problem that is relatively tiny. The outcome has been over-policing of the worst kind: unnecessary surveillance, increased data collection and invasive investigation techniques. In short, our approach to counter-terrorism takes our current approaches to policing, both in data collection and its analysis, and turbo-charges them.

Perhaps most troubling of all is the outcome to which all of this leads: large numbers of terrorism convictions arising as a result of entrapment. With all this knowledge and all these resources directed towards terrorism, the proverbial sledgehammer has ended up creating its own nuts to crack. The lesson we can learn from over-policing in an effort to predict human behaviour is that it becomes a self-fulfilling prophecy. These high levels of confected terror threats, heroically averted thanks to the FBI, are perhaps the logical outcome of policing that has socially constructed our understanding of crime. If we add data science to this heady mix, there is a grave risk that it will provide 'a veneer of technological authority' to these practices.

Arresting these trends

This should give us pause to think about how the law can intervene in such debates to protect us from these serious problems. One obvious strategy, at least initially, is to provide those working within the criminal justice system with transparency over the algorithms used in predictive policing. Currently, these algorithms are not publicly available. Such information ought to be considered vital to the protections offered by the fifth and fourteenth amendments. There are security implications, no doubt, in revealing this information, but there are also risks if we do not. Such transparency would also allow the courts to test the reliability and accuracy of such programs in providing the reasonable suspicion (and probable cause) required by the fourth amendment. If courts allow predictive policing determinations to substitute for reasonable suspicion absent this kind of transparency, the fourth amendment will be weakened, arguably to an unprecedented degree.

Another policy alternative could be to decouple the idea of algorithmic trends in antisocial behaviour from law enforcement and criminal justice entirely. It is possible to imagine a world where this kind of data analysis is used to inform government spending and social programs. This is, of course, another way to reduce crime and resource burdens on policing. Yet, perhaps unsurprisingly in this political environment, it remains woefully under-explored.

Lastly, like so many moments in our post-Snowden world, the uptake of these kinds of programs gives us a chance to reflect upon our relationship with technology. Technology companies are now drawing on data sets wider than criminal statistics as inputs for predicting crime, in an effort to balance out potential biases in crime statistics. Perhaps it is time to find more robust protections over data, both technically and legally, including statutory protection of personal data and a requirement of informed consent from users before their data can be used.

There may well be positive results that can be achieved by using data and data science to deploy inevitably limited police resources in an effort to reduce crime. But in doing so, we place vast troves of information and power in the hands of political forces that have proven to be both irresponsible and uninterested in informing the general public about the contours of the issue we are grappling with. 'We are developing an official criminology that fits our social and cultural configuration,' Garland argued, twenty years ago, 'one in which amorality, generalized insecurity and enforced exclusion are coming to prevail over the traditions of welfare-ism and social citizenship.' Garland has been proven more right than he would probably care to imagine.

-- LizzieOShea - 04 Mar 2016




r5 - 09 May 2016 - 17:34:45 - LizzieOShea