In the media coverage of the recent attack in Edmonton on a police officer and several civilians, questions were asked about why this man was able to commit such a crime when police had questioned him several years earlier about his radical views.  The implicit suggestion was that we should be able to predict who will commit violent acts, so that we could take steps to prevent them.

The belief that we should be able to predict who is likely to commit a crime has a long history, but in recent years it has become an entrenched part of criminal justice in many countries.  Criminologists and psychologists have developed several instruments intended to do just that, and these are widely used by parole boards, probation officers, prosecutors and judges.

Unfortunately, a growing body of evidence calls the validity and value of these instruments into question.  While they are often cited alongside seemingly compelling statistics about their predictive power, closer analysis shows that they have serious flaws that should prompt considerable caution about their use in courts.

First study

Four recent studies make this point in different ways.  Michael Tonry, a law professor at the University of Minnesota, points out that we have forty years of experience showing that predictions of who is dangerous are often wrong, and that “many offenders sentenced to extended terms of imprisonment would not have re-offended”.  Sorting people into risk categories based on various characteristics is incompatible, he argues, with the courts’ mission of considering the particular circumstances of each case and each person.  Tonry also notes that risk gets conflated with factors such as socio-economic status, race or marital status, which can result in serious biases against certain groups.  Moreover, judging risk from a scale draws attention away from other important factors such as a person’s involvement in therapy, education or other pro-social activities; people change over time, and their risk profile changes with them.  In particular, simply getting older reduces anyone’s risk of committing a crime.

Second study

Melissa Hamilton, a law professor and scholar, takes on the technical issues surrounding some of the most commonly used risk assessment instruments.  Attending only to the possibility of re-offending, she notes, ignores other important issues such as the severity of any potential further crime.  But the main problem, she finds, is that “the predictive ability of actuarial tools is rather weak, and high error rates are a consequence thereof…. altogether, actuarial risk models fail to meet the high standards of validity and reliability for admissibility in the law as expert evidence” (p. 3).  She points out that different instruments can yield quite different results for the same people, noting that in one study of sex offenders, more than half of those involved were rated high risk on at least one of five scales, while fewer than 5% received that rating on all five.  Each of the main instruments, she finds, has a high likelihood of ‘false positives’ – that is, people who are incorrectly rated as higher risk and may well receive a harsher punishment as a result.

Hamilton also supports Tonry’s point that the law is concerned with individual cases, whereas predictive instruments are concerned with groups.  An estimate that 25% of a particular group will do x or y tells us nothing about which people in the group those will be.  Unfortunately, she contends, neither judges nor many of the people who administer these tests and use their results really understand the technical issues involved, making misinterpretation of the results quite likely, with serious consequences for individuals.  For example, people have been shown to regard a score of 5 out of 10 as higher or more serious than a score of 50%, even though mathematically the two are identical.  Hamilton concludes that “Whatever merit actuarial assessments may have …. they are far too problematic for use in sentencing matters” (p. 61).
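To see how easily this kind of divergence among instruments can arise, here is a minimal sketch.  The numbers are purely illustrative and are not taken from Hamilton’s study: it simply assumes that each of five hypothetical scales independently flags about 15% of people as high risk.

```python
# Toy illustration (not Hamilton's data): suppose each of five risk scales
# independently labels about 15% of people as "high risk".
p_high = 0.15      # assumed per-scale probability of a "high risk" rating
n_scales = 5

p_at_least_one = 1 - (1 - p_high) ** n_scales   # flagged by at least one scale
p_all_five = p_high ** n_scales                  # flagged by every scale

print(f"High risk on at least one of {n_scales} scales: {p_at_least_one:.0%}")  # ~56%
print(f"High risk on all {n_scales} scales: {p_all_five:.4%}")                  # ~0.0076%
```

Even under this oversimplified assumption, more than half of people would be labelled high risk by at least one scale while almost nobody would be labelled high risk by all five – so whether someone counts as ‘high risk’ can depend heavily on which instrument happens to be chosen.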

Third study

The third study, by researchers from England, the United States and Sweden, was published in the British Medical Journal and is a review of 68 other studies with a combined sample of nearly 25,000 people who were rated on one of nine risk assessment tools and followed for an average of five years after completing a criminal sentence.  About 24% of these people re-offended.  The tools correctly predicted fewer than half of the re-offenders; in the case of sex offenders, they predicted fewer than 25%.  The authors conclude that “even after 30 years of development, the view that violent, sexual or criminal risk can be predicted in most cases is not evidence-based” (p. 5).  Especially problematic is the tools’ tendency to overrate risk, with the result that many people who are actually unlikely to re-offend are rated as higher risk.
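Some rough arithmetic helps show what numbers like these mean in practice.  The 24% re-offending rate comes from the review, but the sensitivity and specificity figures below are assumptions chosen only to illustrate the pattern the authors describe – a tool that misses most re-offenders while still flagging many people who never re-offend; they are not values reported in the study.

```python
# Illustrative arithmetic only: the base rate is from the BMJ review,
# but the sensitivity and specificity are assumed values, not study results.
n_people    = 1000
base_rate   = 0.24   # ~24% re-offended (from the review)
sensitivity = 0.45   # assumed: tools caught "fewer than half" of re-offenders
specificity = 0.65   # assumed: tools also flag many who never re-offend

re_offenders     = n_people * base_rate
non_re_offenders = n_people - re_offenders

true_positives  = re_offenders * sensitivity             # flagged, did re-offend
false_negatives = re_offenders - true_positives          # missed re-offenders
false_positives = non_re_offenders * (1 - specificity)   # flagged, never re-offended

ppv = true_positives / (true_positives + false_positives)

print(f"Re-offenders missed by the tool:       {false_negatives:.0f} of {re_offenders:.0f}")
print(f"Non-re-offenders flagged as high risk: {false_positives:.0f}")
print(f"Share of 'high risk' who re-offend:    {ppv:.0%}")   # roughly 29%
```

On these assumptions, out of every 1,000 people assessed the tool misses 132 of the 240 eventual re-offenders while flagging 266 people who never re-offend; fewer than a third of those labelled ‘high risk’ would actually go on to commit another offence.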

Fourth study

Finally, a recent post on the Forensic Pscyhologist blog reported, based on a study by a group of Canadian researchers, that the Static-99R, often used for risk assessment, produces ratings that the blog refers to as ‘wildly unstable’.  Risk estimates depend greatly on which group a person happens to be part of; the same rating on the scale was associated with 10% recidivism in one sample and 25% in another.  This study also pointed out a particular problem in estimating risk for sex offenders who have very low re-offending rates altogether – reported here as 7% after 5 years of follow-up.  When so few people re-offend in any case, any scale will be inadequate at determining risk.  The author of the blog suggests that this test, still very frequently used in Canada, should no longer be used in any court.
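Both problems – the instability across samples and the very low base rate – are at bottom the same piece of arithmetic.  The sketch below takes the re-offending rates mentioned above (7%, 10% and 25%) and a notional tool with 70% sensitivity and 70% specificity; those two figures are assumptions chosen for illustration, not results from the Static-99R research.

```python
# Illustrative only: the base rates are those mentioned above, but the
# sensitivity and specificity describe a notional tool, not the Static-99R.
def positive_predictive_value(base_rate, sensitivity=0.70, specificity=0.70):
    """Probability that someone flagged 'high risk' actually re-offends (Bayes' rule)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

for base_rate in (0.07, 0.10, 0.25):
    ppv = positive_predictive_value(base_rate)
    print(f"Base rate {base_rate:.0%}: a 'high risk' flag is right about {ppv:.0%} of the time")
```

With the same notional tool, a ‘high risk’ flag would be correct about 44% of the time in a group where a quarter re-offend, but only about 15% of the time in a group with a 7% re-offending rate – most of the people flagged would never have re-offended at all.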

Need for caution in using predictions in the justice system

Predicting the future is difficult in any setting.  The same difficulties occur in predicting, say, who will get lung cancer, or who will drop out of high school.  But in those cases a wrong prediction doesn’t result in someone having a longer sentence, or more restrictive probation conditions, or being placed on a lifetime registry that restricts what he or she can do.

Wider public knowledge about these results, and about the actual rates of re-offending for various categories of crime, would help people form a more realistic picture of the risk to the public; in the absence of such information, media focus on a case here and there can entirely distort people’s sense of real risk (a subject we’ll take up again in future posts).

 

