By tunnel vision, we mean that “compendium of common heuristics and logical fallacies,” to which we are all susceptible, that lead actors in the criminal justice system to “focus on a suspect, select and filter the evidence that will ‘build a case’ for conviction, while ignoring or suppressing evidence that points away from guilt.” This process leads investigators, prosecutors, judges, and defence lawyers alike to focus on a particular conclusion and then filter all evidence in a case through the lens provided by that conclusion.
- — The Multiple Dimensions of Tunnel Vision in Criminal Cases by Keith Findley and Michael Scott (2006)
Prosecutor’s tunnel vision
/ˈprɒsɪkjuːtəz/ /ˈtʌnᵊl/ /ˈvɪʒᵊn/ (n.)
The collection of biases and cognitive gin-traps that can lead prosecutors — those who “prosecute” a particular theory of the world — to stick with it, however starkly it may vary from available evidence and common sense.
So named because it is often literal prosecutors, of crimes, who suffer from it. This kind of tunnel vision has led to notorious miscarriages of justice where innocent people come to be convicted notwithstanding clear and plausible alternative explanations for their ostensible “crimes”.
The same tunnel vision also motivates ideologies, conspiracies and management philosophy: 360-degree performance appraisals, outsourcing, the war on drugs and the worldwide AML military-industrial complex are all cases where those “prosecuting” the theory stick with it even though the weight of evidence suggests it does not work and may even be counterproductive.
The “prosecutor’s tunnel” begins with clear but simplistic — misleading — models of a messy world. Humans have a weakness for these: we are pattern-matching, puzzle-solving animals. We are drawn to neatness. We resile from intractability as it indicates weakness: that our frail human intellect has been defeated by the ineffable natural order of things.
An elegant hypothesis
Sometimes the sheer elegance of a prosecutor’s case can crowd out common sense and the basic intuition that this cannot be right.
We have built our legal institutions to be vulnerable to this kind of crowding out. Criminal law proceeds upon data and the weight of evidence but disallows “intuition”. Hence, there is an asymmetry: evidence is better at saying what did happen than what did not. This is especially so where there is no direct evidence that the defendant actually did what she is accused of.
Circumstantial evidence does not directly implicate a defendant but is consistent with the prosecution theory. It accumulates: if there is enough of it, and none points away from the defendant, it can tell us something. But correlation is not causation: evidence that is “consistent with” a prosecution theory does not prove it. That JC owns a bicycle is consistent with his competing in the Tour de France; it does not make it any more likely that he does. Evidence can look more meaningful than it is. This is where intuition ought to be able to help us.
As it is, intuition’s role is relegated to underpinning the presumption of innocence. A prosecutor must prove guilt; the accused need not prove anything: she cannot be expected to explain what happened, for the simple reason that an innocent person should have no better idea about it than anyone else. The jury, we hope, leans on its intuition when conjuring doubts.
Experience tells us otherwise. In what follows, JC takes three notorious cases from the antipodes to see what can happen when, with no direct evidence, those arguing the case become afflicted with tunnel vision, and intuition and common sense are relegated behind “data” and circumstantial evidence. Then we will look at what causes this condition.
- Case study: Lindy Chamberlain
- Case study: Peter Ellis
- Case study: David Bain
Satanic panic in the Garden City
In 1991, Peter Ellis, a childcare worker at a daycare centre in New Zealand, was charged with horrific abuse of several preschool children in his care.[1] Police alleged, on the children’s own evidence, that, among other things, Ellis abducted children en masse during the day, took them across town to a flat where he subjected them to bizarre rituals and acts of unthinkable cruelty and violence, before returning them, unobserved, to the creche in time for their parents to collect them at the end of the day.
In 1993 Ellis was convicted on 16 counts of child sex abuse against seven children.
But none of the allegations was true.
In total, police and social workers interviewed 118 children but presented evidence from just 20. They discarded evidence from children who did not report abuse and those whose claims were patently impossible (some claimed amputation of organs that they still possessed).
Therefore, the evidence put before the court — and disclosed to the defence — appeared more credible than it might have done had it been viewed in the wider context. Police interview techniques may have encouraged the children to embellish their stories or make them up altogether.
Ellis, who died in 2019, was finally exonerated posthumously in 2022.
Murder in the family
At 6:45 on the morning of 20 June 1994, twenty-two-year-old David Bain returned from his Dunedin paper round to find his whole family had been shot dead.[2] He did not discover this immediately: it was midwinter dark at that hour in the deep south of New Zealand. Without switching on a light, David first went downstairs to put on a load of laundry. He later told police it was so dark he did not notice his father’s bloodstained clothes on the machine and inadvertently washed them with his own, obliterating key evidence that might have exonerated him.
Returning upstairs, David discovered his father Robin lying in the living room beside a .22 rifle with a bullet in his head. He then found his mother, two sisters and younger brother Stephen all dead in their bedrooms. Stephen appeared to have put up a fight.
There was a note typed on the family computer. It said, “You were the only one who deserved to live.”
David placed an agitated call to emergency services. It was recorded and remains a part of the public record.[3]
David told police his father, motivated by a troubled relationship with the family, must have murdered them before typing the note on the computer to the absent David and turning the gun on himself.
Notwithstanding David’s account, he was convicted of all five murders on circumstantial evidence: bloodstains on his clothing and spectacles, his fingerprints on the rifle, abrasions on his body consistent with a struggle with Stephen, and the dearth of physical evidence pointing to his father.
David maintained his innocence. Joe Karam, a former New Zealand rugby international, led a campaign to challenge David’s conviction. It uncovered procedural shortcomings, inconsistencies and oversights in the police investigation and handling of evidence.
More than a decade after the original trial, the Privy Council quashed Bain’s convictions and ordered a retrial. After the second trial, David Bain was acquitted of all charges. He remains a free man.
Narrative biases
These cases illustrate the problem of relying on circumstantial evidence: with no independent direct evidence, one tends to start with a hypothesis and fit whatever secondary and forensic evidence there is into it, discarding whatever does not fit. This is the classic tunnel vision scenario. It can afflict those who would defend suspects just as firmly as those who prosecute them.
All kinds of theories circulated owing to the Chamberlains’ unusual religious beliefs and “odd behaviour” in the aftermath of Azaria’s disappearance. But devout Christianity is hardly a solid prior indicating a tendency to murder. Nor is “odd behaviour” in the aftermath of a mother’s most extreme psychological trauma. Who would not behave oddly in those circumstances?
That anyone could bring themselves to cold-bloodedly murder a nine-week-old baby is hard to imagine. Statistically, it is highly improbable. That the child’s own mother would is, in the absence of compelling evidence, preposterous. To even start with this theory you must surely have compelling grounds to believe it over all other possibilities — if not credible eye-witness evidence, then a documented history of violence, behavioural volatility or psychiatric illness grave enough to overthrow the strong human instinct to protect vulnerable infants. Lindy Chamberlain had no such history.
If there is any plausible alternative explanation for the baby’s disappearance, there must have been a reasonable doubt. It need not be more probable than the prosecution case: just not out of the question. Lindy Chamberlain provided one: a dingo snatching the child might have been unprecedented, but it was possible. There were dingoes in the area. They are predators. They are strong enough to carry away a human infant. A dingo was no less likely than a new mother noiselessly murdering her own infant just yards from a group of independent witnesses. That ought to have been the end of it.
Likewise, what Peter Ellis was alleged to have done is extraordinarily improbable. There are few documented cases of ritualistic abuse on that scale anywhere in the world. There are none in New Zealand. For such a thing to have happened without any prior evidence of such behaviour, with no adult witnesses, no one noticing the absent children and for none of the children to bear any trace of their supposed injuries makes it even less likely.
And there was a plausible alternative: nothing happened at all. All that was required for that to be true was for preschool children, perhaps at the prompt of interviewers already in the grip of prosecutor’s tunnel vision, to make things up. By comparison with “untraceable, unwitnessed, wide-scale ritual satanic abuse”, “children exercising their imaginations to please adults” is not improbable.
It is different for David Bain. While it is true that familicide is extremely rare and, therefore, absent prior evidence, highly improbable, there is no question that the Bain family were murdered. The only question was by whom.
On David’s own theory, only two people could have done it: his father and himself. It was, therefore, definitely familicide: the abstract improbability of that explanation is beside the point. The probability that David was responsible is far higher: before considering any further evidence, there is a 50% chance he was responsible.
And a lot of the further evidence pointed in his direction. To not be the murderer, on his own evidence, David would have been extremely unlucky — forgetting to turn on the light, inadvertently disposing of exculpatory evidence, having incriminating injuries he could not explain — while no such evidence pointed to Robin. David’s defenders had their own tunnel vision: focusing narrowly on the provenance of each piece of incriminating evidence, identifying formal shortcomings in its value as evidence, questioning the manner of its collection and the chain of custody, raising possibilities of innocent explanations without evidence to support them, and disregarding the wider context of the whole case.
Now, David Bain was acquitted of all charges. On the evidence, the jury could not rule out the possibility that Robin Bain was responsible. Not being satisfied beyond reasonable doubt that David was the perpetrator, he was correctly acquitted at law. But it remains likely that David was the perpetrator.[4] As a piece of judicial procedure, the comparison between Bain’s case and those of Ellis and Chamberlain is stark.
Tunnel vision and circumstantial evidence
Where there is reliable direct evidence — eyewitnesses, recordings, causative links between a suspect and the allegation — there is little need for inference; the evidence speaks for itself. But cases composed predominantly of circumstantial evidence — which therefore depend on inferential reasoning — are vulnerable to tunnel vision, because the complex of cognitive biases that makes up prosecutor’s tunnel vision affects the process of inference.
Upstanding citizen turns master criminal. Does well.
Prosecutor’s tunnel vision cases often involve hitherto law-abiding citizens suddenly committing fiendish crimes without warning, explanation or motive.
Now JC is, ahem, told that committing violent crime without leaving any incriminating evidence is extremely hard. Especially in a controlled environment like an infants’ daycare centre or a hospital.
To be sure, serial criminals can operate in these environments but they will need to be good: meticulous in their preparation and method. Over time, they will hone their techniques and perfect a modus operandi, acquiring a ghoulish sort of expertise in murder: killing patients in a closely monitored, controlled environment populated by trained experts hardly lends itself to opportunistic, freestyle offending. Hospitals, in particular, overflow with specialists who can detect subtle clues that ordinary laypeople — and burgeoning criminals learning their craft — have no idea about.
As with any complicated discipline, one learns as one goes. We should not, therefore, expect “beginners” to perform like master jewel thieves, slipping in and out, striking in the dark and leaving no trace. They will blunder. They will be careless. They will leave evidence. They will slip up, leave giveaways and clumsily trigger red flags. From new criminals, we should expect “smoking guns”.
So if a strange confluence of events is accompanied by no smoking gun, that absence itself has some probative value. It does not exclude the possibility of foul play, but it does make it less likely.
People do not often flip, overnight and without warning, from conscientious citizens to compulsive criminals. If they did, we would notice it.[5] When hitherto law-abiding people do slide into criminality, there is generally motivation, a history of antisocial behaviour, identifiable psychological trauma, drug dependency, observed personality change over time or diagnosed mental illness.[6] Often all of these things. (Let us call them “criminal propensities”.)
The absence of any of these criminal propensities in a suspect’s makeup should reduce the “prior probability” of foul play by that suspect. As we will see, “circular correspondence bias” can take such a lack of criminal propensity and somehow invert it into confirmation.
Where a crime has certainly been committed, this goes only to who the perpetrator is. There may (as in David Bain’s case) be only a small universe of credible suspects. If all “possible suspects” have the same lack of criminal propensity, it will count for little. But if the universe of “potential suspects” is large — or if it is plausible that no crime was committed at all — an individual’s lack of any criminal propensity should tell us something “circumstantial”.
Neither Lindy Chamberlain nor Peter Ellis had any criminal propensity, and in both cases there was a plausible alternative explanation. For David Bain it was different.
Burden and standard of proof
The burden of proof is a different thing from the standard of proof. The burden determines who has to prove the case: it falls squarely on the prosecution. The defence is not required to prove anything, least of all the accused’s innocence.
But there is tension between that crystalline legal theory and the practical reality: it is in the defendant’s interest that someone casts doubt into jurors’ minds. Since the Crown plainly won’t be doing that, the defence must either rely on jurors to confect plausible doubts by themselves, or it must plant some doubts there. It is a brave defence counsel indeed who puts her client’s future in the hands of a jury’s imagination and capacity for creative thought.
All the same, the prosecution’s standard of proof — what it must do to discharge its burden of proof — is, in theory, extremely high. Courts have dumbed down the time-honoured phrase “beyond reasonable doubt”: these days, juries are directed to convict only if they are “sure”. This is meant to mean the same thing, but not everyone is persuaded that is how juries understand it.[7]
There is some reason to think that juries start with an ad hoc presumption that any defendant put before them is somewhat likely to be guilty: if the police were competent and acted in good faith, why else would the defendant be in the dock?
So where there is only tendentious data supporting a defendant’s guilt but a total lack of “data” supporting her innocence — what evidence could there be that she did not do something that did not happen? — there are grounds for confusion, and there is good evidence that juries do indeed get confused.
Lindy Chamberlain was convicted of her own daughter’s murder, with a pair of blunt scissors, on the circumstantial evidence of what looked like blood sprays in the footwell of the family car.[8]
Evidence supporting the intuition that “a sane mother is most unlikely to brutally murder her own nine-week-old child at all, let alone with an improvised weapon and without warning or provocation” was not before the court. What evidence could there be of that? Somehow the jury was persuaded not just that she did murder her child, but that there was no plausible alternative explanation for the child’s disappearance. This was largely thanks to the strange collection of cognitive biases to which the prosecution had succumbed.
The three phases of tunnel vision
So what is “prosecutor’s tunnel vision”, then, and how does it come about?[9] It is a sort of emotional commitment to an (as-yet) unproven explanation. We become personally invested in a narrative; the consequences — and personal costs — of rejecting it are great, and grow the more we commit to the position. We become prepared to defend bad mental models because abandoning them seems worse.
Tunnel vision has three phases: first, the enabling background conditions that make us vulnerable to tunnel vision; second, the pathways into a given tunnel; third, the cognitive biases that keep us there.
The lessons are two-fold:
- We are not as rational as we like to think; and
- Data is never the whole story.
Background conditions
Certain dispositions, biases and miscellaneous psychological tics come together to create the conditions for tunnel vision to swamp an earnestly-held narrative:
Anchoring
When making decisions we “anchor” our expectations on the first information we get. We then recalibrate not against an abstract measure of value, but by reference to our original anchor. Our anchor disproportionately influences our assessment of subsequent facts.
Expert overreach
There may well be unknown genetic or environmental factors that predispose families to SIDS, so that a second case within the family becomes much more likely than would be a case in another, apparently similar, family.
- — The Royal Statistical Society, on the Sally Clark case
Subject matter experts tend to overestimate their ability to analyse and judge subtle problems, especially those in fields adjacent to, but not directly within, their expertise. They also tend to over-weight the overall significance of matters that do fall within their expertise against those that do not. They become attached to “pet theories”. They are then less likely to consider alternative models, explanations, theories or evidence, let alone contradictory ones.
The example par excellence here is Dr. Roy Meadow, an experienced paediatrician with some expertise in child abuse — he devised the diagnosis of Munchausen syndrome by proxy, though that is not without its controversies — but none in particular in statistics. To calculate the chance of two cases of Sudden Infant Death Syndrome (SIDS) occurring in one family, Meadow simply squared the estimated probability of a single case and arrived at 1 in 73 million. We should, he argued, expect such a “double cot death” by chance only once in a century.
This was enough to convict Sally Clark.
But Meadow’s methodology assumed that cot deaths within a family are independent. There is little reason to believe this. Clark was eventually exonerated, and Meadow suffered significant reputational damage, being heavily criticised by the General Medical Council and the Royal Statistical Society.
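For the numerically inclined, here is a minimal sketch of the arithmetic in Python. The single-case figure of roughly 1 in 8,543 is the one usually reported for Meadow’s estimate for a family like the Clarks, and the tenfold relative-risk multiplier for a second death in the same family is a purely illustrative assumption: the point is only that if the deaths are correlated, the naive square wildly overstates their improbability.

```python
# Sketch of Meadow's error, using assumed illustrative figures.
p_single = 1 / 8_543  # assumed probability of a single SIDS death in such a family

# Meadow's calculation: treat the two deaths as independent and square.
p_naive = p_single ** 2
print(f"Naive (independence) estimate: 1 in {1 / p_naive:,.0f}")  # ~1 in 73 million

# If shared genetic or environmental factors make a second death, say, ten
# times more likely once one has occurred (an assumption for illustration),
# the joint probability is an order of magnitude larger.
relative_risk = 10
p_correlated = p_single * (p_single * relative_risk)
print(f"Correlated estimate:           1 in {1 / p_correlated:,.0f}")  # ~1 in 7.3 million
```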
Expert overreach of the type illustrated by Roy Meadow is germane to all the “healthcare serial murder” cases:[10] human biology is complex. There is much about it that we do not yet, and may never, know. These children may have had undiagnosed conditions that were not detected before or after death. They may have died from conditions as yet unknown to medical science, in which case established tests would not have revealed them anyway.
Furthermore, when we are dealing with extremely rare events — such as healthcare serial murders and the “innocent” causes with which they may be confused[11] — the statistical analysis is vulnerable also to base rate neglect. As to which, see below.
Theory-dependence
To a man with a hammer, everything looks like a nail.
- — Abraham Maslow
It is hardly news that personal narrative frameworks influence observations: what we see and what we make of it, is shaped by our existing knowledge and beliefs. Prosecutors hold certain theories about human behaviour which they bring to the task of interpreting the evidence they find.
You don’t enlist in the army hoping never to shoot a gun. The police show up for work to detect crime, and prosecutors to prosecute it. They are primed this way, as are we all: to be useful; for their roles to be important and to make a difference. They aspire to find crime and prosecute it.
We tell ourselves archetypal stories: the dogged cop who sticks at her hunch despite Sarge’s threat to see out her career issuing parking tickets and overcomes the shadowy malign forces to save the day for righteousness and the American way. This is how the lead prosecutor of the Central Park Five saw herself.
We do not tell ourselves stories where we are conspiracy-obsessed geeks who are hounding innocent citizens to the electric chair.
Role pressure
Law enforcement agencies will be under pressure to get results. This may lead to missed opportunities: notoriously, despite repeatedly interviewing him, police overlooked the Yorkshire Ripper, Peter Sutcliffe, because his Yorkshire accent did not match their profile of a man with a Wearside accent, a profile based on a voice recording that turned out to be a hoax.
Getting there
Once you are anchored, equipped with a hammer and have started wandering around the house looking for nails, there should still be scope to falsify your operating theory. But again, psychological biases can override the dispassionate application of cool logic.
Extrapolation and bad heuristics
Subject matter experts are prone to extrapolate. Humans are natural problem-solvers. We build models of the world by habit. It is easy to slip beyond our range of reliable experience and form theories about matters where we have little expertise. A statistician can give a compelling account of the Bayesian probabilities, but the question of who or what caused the excess toxins is one of forensics, not statistics.
The greater the expertise, the more “grooved” the expectations, the stronger the heuristic, and the greater the temptation to take shortcuts and presume this is “one of those” cases, relying on the information that is most readily available or most recent in the mind rather than conducting a thorough investigation. This can lead to overestimating the importance of certain pieces of evidence.
Base rate neglect
Lawyers are not natural statisticians. “Base rate neglect” — the “prosecutor’s fallacy” — is the natural tendency to ignore the “base rate” of a phenomenon in the population and instead focus on “individuating” information that pertains to the specific case in front of us.
We should be careful with statistics about extreme events.
Say a certain medical test gives a correct result 999 times out of 1,000 and yields only one “false positive” result in a thousand tests. It is tempting to assume that a positive result from such a reliable test will be conclusive. But if the general prevalence in the population of the condition it is testing for — the “base rate” for the illness — is only one in 100,000, then for every single true positive result we should still expect 100 false positives. Fully 99% of positive tests will be false.[12]
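A minimal sketch of that arithmetic, assuming for simplicity that the test never misses a true case:

```python
# Base rate neglect: a 99.9% "accurate" test for a 1-in-100,000 condition.
population = 100_000
base_rate = 1 / 100_000          # prevalence of the condition
false_positive_rate = 1 / 1_000  # the test's error rate on healthy people
# Assumption for simplicity: the test detects every true case.

true_positives = population * base_rate                                # ~1
false_positives = population * (1 - base_rate) * false_positive_rate   # ~100

p_true_given_positive = true_positives / (true_positives + false_positives)
print(f"P(condition | positive test) = {p_true_given_positive:.1%}")   # ~1.0%
```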
The same principle holds for criminal offending: if there is, say, a 1/342,000,000 chance that “so many suspicious deaths could have occurred with the same nurse on duty by sheer chance” and one nurse was in fact on duty for all those suspicious deaths, it may seem utterly damning. But this is to ignore the base rate: how many healthcare serial killers are there likely to be in the world?
On a planet with eight billion people, for the odds of our nurse being a serial killer to be as high as 1/342,000,000, there would need to be twenty-three of them. There do not appear to have been that many hospital serial killers in the last generation.
Yet “one in three hundred and forty-two million” was the figure that convicted Dutch nurse Lucia de Berk of serial murder. Had this estimate been correct (it turned out to be a wild underestimate), sheer chance was still a likelier explanation.[13]
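Here is a rough sketch of that comparison. The 1-in-342,000,000 figure is the one presented at de Berk’s trial; the count of known healthcare serial killers per generation is an assumed, deliberately generous round number, since the point is the comparison of orders of magnitude, not the precise figures.

```python
# Comparing "sheer chance" against the base rate of healthcare serial killers.
p_coincidence = 1 / 342_000_000   # claimed probability of the shift pattern arising by chance
world_population = 8_000_000_000

# How many healthcare serial killers would there need to be for a randomly
# chosen person to be one with probability 1 in 342 million?
killers_needed = world_population * p_coincidence
print(f"Serial killers needed for parity: ~{killers_needed:.0f}")   # ~23

# Assume, generously, ten such killers worldwide in a generation.
assumed_killers = 10
p_killer = assumed_killers / world_population
print(f"Even so, chance is ~{p_coincidence / p_killer:.1f}x likelier than murder")
```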
Confirmation bias
All observation is theory-dependent: scientists must first have a theory before they can gather evidence to test it: otherwise, how do they know what evidence to look for?
Having picked a bad heuristic, we tend to seek out, interpret, and best remember information that confirms it. We may overweight evidence that supports our theory and disregard or minimise anything that contradicts it.
There are two phases of confirmation bias. The first is theory formation. Thanks to subject matter extrapolation, like the man with a hammer, you are primed to see nails. Suspicious that the strangely-behaving Lindy Chamberlain might have something to hide, police searched the family’s possessions, tent and vehicle, eventually finding something forensic tests suggested was consistent with “fetal blood” splattered up inside the footwell of her husband’s car. (Much later the substance was retested. It turned out to be sound-deadening material containing iron oxide.) This set the police in a direction, and before long they had constructed an elaborate theory of how Lindy Chamberlain had committed murder.
The second phase is the interpretation of evidence once a theory has formed. Evidence that, taken in the abstract, would tend to be “consistent with” innocence can, with confirmation bias, be presented as damning. The innocent defendant who maintains his innocence notwithstanding his wrongful conviction is presented as callous and lacking remorse, and may expect a harsher sentence and less chance of parole. What is a wrongful convict meant to do?
An innocent defendant who, during a period of misfortune for which she was not responsible, went out with her friends and posted pictures of herself as a vivacious young woman who enjoyed socialising, is thereby portrayed as a callous, calculating demon. But these are exactly the behaviours to be expected of an innocent person. They ought to point away from the defendant; with confirmation bias, they point towards her.
Selective information processing
Focusing on certain pieces of evidence while ignoring others. Prosecutors might only present evidence that strengthens their case and neglect exculpatory evidence that could help the defence. Peter Ellis’ prosecutors interviewed one hundred and eighteen children. Many denied all suggestion of abuse. Some gave plainly preposterous accounts of what went on. They were not called to give evidence. Their statements do not appear to have been fully disclosed to the defence.
Groupthink
Thinking or making decisions as a group in a way that discourages creativity or individual responsibility: prosecutors might conform to the prevailing opinion within their office, stifling dissenting views and critical analysis of the case. See also Dan Davies’ idea of “accountability sinks” within an organisation. The Post Office Horizon IT scandal is perhaps the archetypal example of groupthink amongst a group of prosecuting lawyers.
Reductionism
Immersing oneself in technical details that, by themselves and shorn of all context, seem to lead to one conclusion — especially one to which you are already anchored — notwithstanding a wider picture that makes the hypothesis unlikely. The risk is greatest in cases with no direct evidence.
Staying there
Once we are in a tunnel, there are cognitive biases that prevent us from finding our way out.
Hindsight bias and the reiteration effect
In hindsight, people tend to think an eventual outcome was inevitable, or more likely than they would have thought before it happened. “What is the chance that that nice woman you met at the campsite just now will, in three hours, brutally murder her own nine-week-old infant?” versus “Given that this woman’s nine-week-old infant has disappeared from the campsite, and the police suspect her of foul play, what is the chance that she brutally murdered her own child?”
Through “hindsight bias” we project knowledge of actual outcomes onto our knowledge of the observed behaviour in the past, without realising that the perception of the past has been tainted by the subsequent information.
Once a person becomes a prime suspect, hindsight bias suggests that, upon reflection, the suspect was the inevitable and likely suspect from the beginning. Evidence is malleable in light of this “realisation”.
There is also a “reiteration” effect. Our confidence in a theory increases the more we hear it, independent of its validity. The longer that police, prosecutors and witnesses live with the conclusion of guilt, the more invested in it they are, the more entrenched their conclusion becomes, and the more obvious it appears that all evidence pointed that way from the very beginning. It is increasingly difficult for police and prosecutors to consider alternative perpetrators or theories of a crime.
Circular correspondence bias: Randle McMurphy’s dilemma
Correspondence bias, or the “fundamental attribution error”, attributes observed behaviour to personality, or malice, without considering more probable situational factors. You can avoid this, though few enough do, with Hanlon’s razor:
“Do not attribute to malice things that can just as well be explained by stupidity.”
A strangely prevalent form of reasoning combines correspondence bias with circular logic, as follows:
- There is weak circumstantial evidence that X committed a crime.
- The (highly unusual) traits of people who commit that sort of (highly unusual) crime are attributed to X (this is a “fundamental attribution error”).
- X’s newly-attributed traits are cited as evidence corroborating the existing weak circumstantial evidence (this is a circularity). X is now a “sociopath”, “narcissist”, “attention-seeker”, or even “Munchausen’s syndrome by proxy sufferer”, so is more likely to have committed the highly unusual crime.
- Other aspects of X’s behaviour which, but for the allegation, would be normal, now appear to verify the trait (this is a further circularity).
- X is convicted on the compelling combination of opportunity and possession of that highly unusual trait.
You might recognise this as the plot driver of Ken Kesey’s One Flew Over the Cuckoo’s Nest.
Bob the doctor
So, say there is an unusual cluster of deaths in the geriatric ward of a hospital. It coincides with the shift pattern of one doctor, Bob, raising a weak and easily rebuttable presumption of foul play on Bob’s part.
Before this statistical correlation, Bob was not under suspicion. Nor was his behaviour unusual.
But the foul play — if that’s what it is — is horrific: someone is systematically murdering little old ladies. Only a psychopath with a stone-cold heart and a narcissistic personality disorder could do that. If it is Bob, he must be a psychopath with a narcissistic personality disorder. This logic is already circular, but the circle is big and conditional enough not to be obvious.
We now start inspecting Bob’s behaviour. We find him to be meticulously tidy. He enjoys socialising, attending a regular salsa class on Wednesdays. He sent condolence cards to the families of several deceased patients.
Now: what could be more indicative of a stone-cold, narcissistic psychopath who has just murdered someone than a fastidiously tidy person — hence, no evidence, right? — who sent a card to the victim’s wife before going out drinking and dancing with his friends? The net is closing in.
Sunk cost fallacy
The inclination to continue an endeavour once money, effort, time or credibility has been invested in it, even when new evidence suggests the defendant might be innocent. (See also “commitment” in the context of persuasion.)
Antidotes
Q: How many psychiatrists does it take to change a light bulb?
A: Just one; but the light bulb really has to want to change.
There are some strategies to counteract the effect, but the predominant one is to want to keep an open mind.
Hanlon’s and Otto’s razors
“Do not attribute to malice things that can just as well be explained by stupidity.”
Don’t assume malice where stupidity will do; likewise, per Otto’s razor, don’t attribute to virtue something that could equally be attributed to self-interest; or to skill something that could equally be attributed to dumb luck.
- ↑ Alexander Behse and Ali Jones’ Conviction: The Christchurch Civic Creche Case is an outstanding account of the Peter Ellis case.
- ↑ Black Hands: A Family Mass Murder
- ↑ https://www.youtube.com/watch?v=m7HsjKDSaKA
- ↑ Christchurch Journalist Martin Van Beynen’s fantastic podcast Black Hands compellingly makes this case.
- ↑ They might snap into a sudden orgy of extreme violence — but this plays out as desperate, meltdown mass murder, not calculated, ongoing serial murder, and there is generally no doubt that it is murder and no shortage of direct evidence implicating the accused.
- ↑ Mental illnesses having a clear medical pathology, that is, not ones suspiciously made up out of ex post facto symptoms, like “Munchausen by proxy”. See the “circular correspondence bias” discussion below.
- ↑ New Law Journal: The Trouble With “Sure”
- ↑ In fairness, the Crown submitted expert forensic evidence that it was specifically infant blood, so you can hardly fault the jury here. You can fault the Crown forensics team, though: it turned out to be acoustic-deadening spray and not blood of any kind!
- ↑ JC draws upon The Multiple Dimensions of Tunnel Vision in Criminal Cases by Keith Findley and Michael Scott in the Wisconsin Law Review (2006).
- ↑ We characterise “healthcare serial murder cases” as cases where:
- Situation: An unusual increase in deaths or medical incidents at a controlled hospital or care facility exceeding statistical averages for that facility and for which there is no obvious “innocent” explanation.
- Suspect: A given carer or medical professional was present for all the incidents.
- No direct evidence: There is no reliable direct evidence of the identified carer actually harming any of the patients.
- “Small arrows”: There are many pieces of weak circumstantial evidence pointing to (or at least consistent with) the carer’s involvement, but which, when taken individually, do not strongly implicate the carer.
- No motive: The suspect has no apparent motive.
- No criminal propensity: The suspect has no record of violence, antisocial behaviour or mental illness other than the alleged offending.
- ↑ If the “plausible innocent cause of death” explanation is not rare, then the likelihood of a malicious explanation is much lower.
- ↑ Assume you test 100,000 people. At a 99.9% accuracy rate, you should expect 99,900 correct results and 100 false ones. But in that sample of 100,000 you would expect only one actual case of the condition. So if you are tested and receive a positive result, there is only roughly a 1 in 100 chance that it is a true result.
- ↑ It turned out the statistics were in any case wrong: the probability of her shift patterns coinciding by chance was reassessed as being more like one in twenty-five. De Berk’s conviction was quashed and she was acquitted in 2010.