
By tunnel vision, we mean that “compendium of common heuristics and logical fallacies,” to which we are all susceptible, that lead actors in the criminal justice system to “focus on a suspect, select and filter the evidence that will ‘build a case’ for conviction, while ignoring or suppressing evidence that points away from guilt.” This process leads investigators, prosecutors, judges, and defence lawyers alike to focus on a particular conclusion and then filter all evidence in a case through the lens provided by that conclusion.

The Multiple Dimensions of Tunnel Vision in Criminal Cases by Keith Findley and Michael Scott (2006)

Do not attribute to malice what can satisfactorily be explained by stupidity.

Hanlon’s razor

To a man with a hammer, everything looks like a nail.

—Abraham Maslow

Prosecutor’s tunnel vision
/ˈprɒsɪkjuːtəz/ /ˈtʌnᵊl/ /ˈvɪʒᵊn/ (n.)
The collection of biases and misconceptions that can lead prosecutors — in the widest sense: those who “prosecute” a particular theory of the world — to frame and maintain a view based on their starting hypothesis, however starkly it may be at variance with other evidence or common sense.

So named because those who suffer from it are often literal prosecutors, of crimes, and this kind of tunnel vision has led to notorious miscarriages of justice. The Central Park Five, Sally Clark, Lindy Chamberlain — all were convicted beyond reasonable doubt notwithstanding clear, plausible and more credible alternative explanations.

It is not just miscarriages of justice: the same collection of biases, prejudgments and psychological habits can motivate many ideologies, conspiracies and management theories. Take the batty dogma of outsourcing, for example: the direct costs of an internal unit are a matter of accounting principle and therefore easy to measure, but its benefits — the strength of its informal networks, employee commitment, institutional knowledge, depth of experience and capacity to see and respond quickly to unexpected events — are not. This leads many businesses to make their operational stacks dramatically more fragile, rigid and brittle: worse, for all practical purposes but accounting ones.

An asymmetry of evidence

At the end of the “prosecutor’s tunnel” are clear but simplistic and often misleading models of a messy world. We are pattern-matching machines: we are drawn to neatness and resile from intractability: it reminds us of our frailty. The appealing simplicity of the prosecutor’s case, especially where backed by data, can crowd out our usually strong attachment to common sense, intuition and inference.

We have set up our institutions to be vulnerable to this: we allow “data” to be evidence but not “intuition”. Data, being a record of positive events, will tell us what a person did do, not what she did not. This cuts both ways: the convicted have that data point against them. They are, thereafter, stained. But we all make mistakes: it is not the state’s formal imprimatur that divides saints and sinners. As Scarlet Roberts put it: “When the DBS flags up nothing, these people aren’t squeaky clean. They just haven’t been caught.”

Just as not all convicts are guilty, nor are all the unconvicted innocent.

Criminal law proceeds upon data and the weight of evidence. Intuition’s role subsists mainly in the presumption of innocence. A prosecutor must prove guilt; the accused need not prove anything: she cannot be expected to explain what happened for the simple reason that, if she did not do it, she has no better idea what happened than anyone else. The jury, we all hope, lean on their intuition when conjuring doubts.

Standards of proof

The prosecution’s standard of proof is, in theory, high: beyond reasonable doubt. It isn’t clear that it quite achieves what it is meant to. Courts have moved to dumb it down: that time-honoured phrase has been discarded and juries are directed to convict only if they are “sure”. While this is meant to mean the same thing, not all are persuaded that is how juries understand it.[1] And there is some reason to think that juries start with a presumption that the accused is guilty at least on the balance of probabilities: assuming the police acted in good faith, why else would the defendant be in the dock?

But a scenario where tendentious data may be introduced in support of guilt while there is a total lack of “data” supporting exoneration — only the intuition that it seems highly unlikely that such a person would do such a thing — may lead to just that confusion. Lindy Chamberlain was convicted of her own daughter’s murder, with a pair of blunt scissors, on the evidence of bloodlike spatter in the footwell of her husband’s car. The intuition that a sane mother is most unlikely to brutally murder her own nine-week-old child at all, let alone with an improvised weapon and without warning or provocation, was not before the court. Somehow the jury was persuaded not just that she did it, but that there was no plausible alternative explanation.

JC draws upon The Multiple Dimensions of Tunnel Vision in Criminal Cases by Keith Findley and Michael Scott in the Wisconsin Law Review (2006) and Robert Cialdini’s Influence: The Psychology of Persuasion, and to some extent also the madness of crowds and Jon Haidt’s The Righteous Mind. The lesson we draw is that we are not as rational as we like to think, and data is never the whole story.

Tunnel vision may describe all view-forming of a “conviction” kind. Convictions are like political and religious views in that, once they take root, they are not easily displaced.

The “wrongful conviction” cases are bracing because, with hindsight, a better narrative and a different cognitive path from the one the prosecutors took, it is so hard to understand how they got there, or why they persisted with such plainly untenable views. If we treat prosecutor’s tunnel vision as a variety of political or even religious conviction, we can see better how “prosecutors” can be so energetic in their maintenance of a bad model. It perhaps explains the gruesome in-house performance in the Post Office Horizon IT scandal.

Prosecutors need not be literal prosecutors: campaigners for innocence and conspiracy theorists suffer at the hands of the same collection of cognitive traps. Both sides of the public conversation about Lucy Letby are similarly afflicted with tunnel vision: hence, allegations of conspiracy from both sides.

The three phases of tunnel vision

Tunnel vision has three phases: first, the background conditions that make us vulnerable to tunnel vision in the first place; second, the considerations that push us into a given tunnel; third, the cognitive artefacts that keep us there.

Call these “setting out”, “getting there” and “staying there”.

In order of appearance:

Background

Certain dispositions, biases and miscellaneous psychological tics come together to create the conditions for tunnel vision to take hold of an earnestly held narrative. Among them are the following:

The “anchoring” effect

Anchoring is the cognitive bias that describes our tendency to rely too heavily on the first piece of information we get (the “anchor”) when making decisions. We then recalibrate as we go not against some abstract sense of justice, but by reference to the anchor. Our initial impressions can disproportionately influence our strategy and assessment of responsibility. This is the theory behind the “discount sticker” in a car showroom: You are already getting a great deal, and you haven’t started haggling![2]

Overconfidence in expertise

We subject matter experts tend to overestimate our ability to diagnose and judge subtle problems, especially those adjacent to, but not within, our area of expertise. We tend to over-emphasise the experience we have drawn from our own domains when forming conclusions — this is an entry condition to any professional calling — when what we see, and attach significance to, might be only a small part of the story.

Where we are overly confident in our essential rectitude we are less likely to consider alternative models, explanations, theories or even evidence.

“To a man with a hammer...”

Cadets don’t volunteer for basic military training hoping they won’t get to shoot a gun.

The police show up for work to detect crime. Prosecutors to prosecute it. They are primed this way, as are we all: we like to be useful and we like our role in the great cosmos to make a difference. No one wants to say, “there’s nothing to see here, folks”, pack up and go home.

These are archetypal stories we tell ourselves: the dogged sleuth who smelled a rat and stuck at it over the objections of naysayers and superiors — threatened with traffic warden duties, right? — and endured the shadowy machinations of unseen malign forces. We don’t hear so much about the obsessive geek who hounded an innocent woman to her grave. That is not a story we like to hear.

Role pressure

If there has been some apparent calamity, no one will be satisfied by the agency that de-escalates. Officials will be under tremendous pressure to get results, and curiously this may lead to missed opportunities: West Yorkshire police missed chances to catch the Yorkshire Ripper, despite interviewing him several times, because he didn’t match the profile they had built, which was based on letters from a hoaxer. The pressure to get convictions fast may arise from career aspirations, public expectations, or institutional culture.

Getting there

Extrapolation

Especially vulnerable are subject matter experts. We are natural problem-solvers and model builders and we will easily slip beyond our brief and intrude into matters where we have little experience. A statistician can give us a compelling account of the Bayesian probabilities, but she is out of her depth when she strays into the causes of elevated insulin readings in a sample containing an apparently significant cluster of them. Likewise, a medical expert may opine that the insulin readings are elevated beyond what would usually be expected, but the question of who or what caused the excess insulin is one of forensics, and not something a diabetes specialist has any better idea about than anyone else.

Precedent: bad heuristics

The greater the expertise, the more grooved the expectations, the stronger the heuristic, and the greater the temptation to take that shortcut and presume that this is one of those cases. In the great preponderance of cases this is a handy, efficient shortcut, but in the rare case that presents in a certain way yet is an exception, it can be dangerous. Heuristics are excellent, efficient devices (as Gerd Gigerenzer notes, they help us to catch balls without needing to solve differential equations), but when the model is wrong they can lead to trouble.

Base rate neglect

Base rate neglect — also known as the prosecutor’s fallacy — is a product of our poor facility with probabilities: we ignore the “base rate” of a phenomenon in the population and instead focus on “individuating” information that pertains to the specific case in front of us.

If there is a 1/1000 chance of a “false positive” for a given illness, but the general prevalence of that illness in the population is, say, 1 in a million, then for every true positive result you would still expect about 1,000 false positives.
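A minimal sketch of that arithmetic, assuming for simplicity that the test catches every true case:

```python
# Base rate arithmetic for the screening example above. The assumption that
# the test catches every true case (100% sensitivity) is ours, for simplicity.
population = 1_000_000
prevalence = 1 / 1_000_000       # 1 in a million actually has the illness
false_positive_rate = 1 / 1_000  # 1 in 1,000 healthy people tests positive anyway

true_positives = population * prevalence                                # ~1
false_positives = population * (1 - prevalence) * false_positive_rate   # ~1,000

print(f"False positives per true positive: {false_positives / true_positives:.0f}")
print(f"Chance a positive result means illness: "
      f"{true_positives / (true_positives + false_positives):.2%}")     # ~0.10%
```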

If, for example, there is estimated to be a 1 in 342 million chance that so many suspicious events could have occurred with the same nurse on duty if she was not responsible for them, but the incidence of health professionals without criminal history, financial motive or mental illness murdering multiple patients is one in a billion — it may be even lower than that — you would still expect roughly three false positives for each true one, and that is significantly below the balance of probabilities threshold, never mind beyond reasonable doubt. This was the evidence that convicted Dutch nurse Lucia de Berk. It turned out the probability of her shift patterns coinciding innocently with all the events was closer to one in twenty-five, but even if it had not been, she was still, more likely than not, innocent.
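Restated in rough Bayesian terms, the quoted figures give posterior odds of guilt of roughly one to three, or a probability of about 25 per cent. A sketch of that calculation follows, on the simplifying assumptions (ours, not the court’s) that a genuinely murderous nurse would certainly produce such a cluster and that virtually every nurse is innocent:

```python
# Posterior odds of guilt from the figures quoted above. The simplifying
# assumptions (a guilty nurse would certainly produce such a cluster; almost
# every nurse is innocent) are ours, not the court's.
p_cluster_if_innocent = 1 / 342_000_000   # the prosecution's coincidence figure
p_guilty_prior = 1 / 1_000_000_000        # assumed base rate of murderous nurses

# odds of guilt = prior odds x likelihood ratio
posterior_odds = (p_guilty_prior * 1.0) / ((1 - p_guilty_prior) * p_cluster_if_innocent)
p_guilty = posterior_odds / (1 + posterior_odds)

print(f"Innocent coincidences per guilty nurse: {1 / posterior_odds:.1f}")  # ~2.9
print(f"Probability of guilt on these figures: {p_guilty:.0%}")             # ~25%
```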

Confirmation bias

Professor A. F. Chalmers noted that observation is theory-dependent: scientists must first have a theory before they can gather evidence to test it: otherwise, how do you know what evidence is relevant? When measuring the speed of moving objects, is their colour relevant? Obviously not, right? Except that it is. The “Doppler effect” shifts the visible wavelength of light towards red or blue, depending on whether, and how quickly, the object is moving away from or towards the observer. But you have to have a theory that predicts this before you even know to look for it.

Having picked a bad heuristic, we tend to seek out, interpret, and best remember information that confirms it. We may overweight evidence that supports our theory and disregard or minimise anything that contradicts it.

Selective information processing

Focusing on certain pieces of evidence while ignoring others: prosecutors might only present evidence that strengthens their case and neglect exculpatory evidence that could help the defence.

Groupthink

Thinking or making decisions as a group in a way that discourages creativity or individual responsibility: prosecutors might conform to the prevailing opinion within their office, stifling dissenting views and critical analysis of the case. See also Dan Davies’ idea of “accountability sinks”. The Post Office Horizon IT scandal is perhaps the archetypal example of groupthink.

Reductionism

Drilling deep into technical details that, by themselves and shorn of all context, seem to lead to one conclusion — especially one you are already anchored to — notwithstanding that the wider picture makes the hypothesis unlikely. The risk is especially great in cases with no direct evidence.

Prosecutors’ focus on “blood” sprayed up in the footwell of the Chamberlains’ car led them to a theory that Azaria was murdered there, despite no evidence supporting the theory and quite a lot against it — principally, the lack of time for Lindy Chamberlain to do any such thing. The prosecution case started with “murder in the car” as the anchoring evidence and hypothesised a whole story around it, for which there was no supporting evidence but also no contradicting evidence, so it was “possible”. There is a lot of this in the Lucy Letby case, on both sides.

Staying there

Hindsight bias and the reiteration effect

In hindsight, people tend to think an eventual outcome was more inevitable, likely or predictable than they would have judged before it happened. “What is the chance that that nice woman you met at the campsite just now will, in three hours, brutally murder her own nine-week-old infant?” versus “Given that this nine-week-old child has disappeared from the campsite, and the police suspect the mother of foul play, what is the prospect that her mother brutally murdered the child?”

Through “hindsight bias” we project new knowledge (of actual outcomes) onto our knowledge of the past (observed behaviour), without realising that the perception of the past has been tainted by the subsequent information.

Once a person becomes a prime suspect and prosecutors have settled their own determination of who they believe is guilty, hindsight bias suggests that, upon reflection, that suspect was the inevitable and likely culprit from the beginning. Evidence is malleable in light of this “realisation”.

This is compounded by a “reiteration” effect. Our confidence in a theory increases the more we hear it, independent of its truth or falsity. The longer that police, prosecutors and witnesses live with a conclusion of guilt, the more entrenched their conclusion becomes, and the more obvious it appears that all evidence pointed to that conclusion from the very beginning. This “reiteration effect” makes it increasingly difficult for police and prosecutors to consider alternative perpetrators or theories of a crime.

Outcome bias

Like hindsight bias, “outcome bias” involves projecting subsequent “outcomes” onto observed behaviour, only here the projection bears on the quality of a decision. Subjects are more likely to judge a surgeon’s decision to operate as a bad one when they are told the patient died during surgery than when told the patient survived. This is the operator error presumption from Sidney Dekker’s The Field Guide to Human Error Investigations.

Sunk cost fallacy

The inclination to continue an endeavour once money, effort, time or credibility has been invested in it, even when new evidence suggests the defendant might be innocent (see also commitment in the context of persuasion).

Antidotes

Q: How many psychiatrists does it take to change a light bulb?
A: Just one; but the light bulb really has to want to change.

There are some strategies to counteract the effect, but the predominant one is to want to keep an open mind.

Hanlon’s and Otto’s razors

“Do not attribute to malice things that can just as well be explained by stupidity.”

Hanlon’s razor

Don’t assume malice where stupidity will do; likewise, per Otto’s razor, don’t attribute to virtue something that could equally be attributed to self-interest; or to skill something that could equally be attributed to dumb luck.

  1. New Law Journal: The Trouble With “Sure”
  2. Anchoring is relatively well documented. Kahneman and Tversky asked subjects to spin a wheel of fortune and, after spinning, to estimate the percentage of African nations in the UN. Those who landed on a higher number gave significantly higher estimates than those who landed on a lower number.