By tunnel vision, we mean that “compendium of common heuristics and logical fallacies,” to which we are all susceptible, that lead actors in the criminal justice system to “focus on a suspect, select and filter the evidence that will ‘build a case’ for conviction, while ignoring or suppressing evidence that points away from guilt.” This process leads investigators, prosecutors, judges, and defence lawyers alike to focus on a particular conclusion and then filter all evidence in a case through the lens provided by that conclusion.
- — The Multiple Dimensions of Tunnel Vision in Criminal Cases by Keith Findley and Michael Scott (2006)
Prosecutor’s tunnel vision
/ˈprɒsɪkjuːtəz/ /ˈtʌnᵊl/ /ˈvɪʒᵊn/ (n.)
The collection of biases and cognitive gin-traps that can lead prosecutors — those who “prosecute” a particular theory of the world — to stick with it, however starkly it may vary from available evidence and common sense.
So named because it is often literal prosecutors, of crimes, who suffer from it. This kind of tunnel vision has led to notorious miscarriages of justice where innocent people come to be convicted notwithstanding clear and plausible alternative explanations for their ostensible “crimes”.
The same tunnel vision also motivates ideologies, conspiracies and management philosophy: 360-degree performance appraisals, outsourcing, the war on drugs and the worldwide AML military-industrial complex are all cases where those “prosecuting” the theory stick with it even though the weight of evidence suggests it does not work and may even be counterproductive.
The “prosecutor’s tunnel” begins with clear but simplistic — misleading — models of a messy world. Humans have a weakness for these: we are pattern-matching, puzzle-solving animals. We are drawn to neatness. We resile from intractability as it indicates weakness: that our frail human intellect has been defeated by the ineffable natural order of things.
An elegant hypothesis
Sometimes the sheer elegance of a prosecutor’s case can crowd out common sense and the basic intuition that this cannot be right.
We have built our legal institutions to be vulnerable to this kind of crowding out. Criminal law proceeds upon data and the weight of evidence but disallows “intuition”. Hence, there is an asymmetry: evidence is better at saying what did happen than what did not. This is especially so where there is no direct evidence that the defendant actually did what she is accused of.
Circumstantial evidence does not directly implicate a defendant but is consistent with the prosecution theory. It accumulates: if there is enough of it, and none points away from the defendant, it can tell us something. But correlation is not causation: evidence that is “consistent with” a prosecution theory does not prove it. That JC owns a bicycle is consistent with his competing in the Tour de France; it does not make him any more likely to do so. Evidence can look more meaningful than it is. This is where intuition ought to be able to help us.
As it is, intuition’s role is relegated to underpinning the presumption of innocence. A prosecutor must prove guilt; the accused need not prove anything: she cannot be expected to explain what happened, for the simple reason that an innocent person should have no better idea about it than anyone else. The jury, we hope, leans on its intuition when conjuring doubts.
Experience tells us otherwise. In what follows, JC takes three notorious cases from the antipodes to see what can happen when, with no direct evidence, those arguing the case become afflicted with tunnel vision, and intuition and common sense are relegated behind “data” and circumstantial evidence. Then we will look at what causes this condition.
- Case study: Lindy Chamberlain
- Case study: Peter Ellis
- Case study: David Bain
Narrative biases
These cases illustrate the problem of relying on circumstantial evidence: with no independent direct evidence, one tends to start with a hypothesis and fit whatever secondary and forensic evidence one has into it, discarding whatever does not fit. This is the classic tunnel vision scenario. It can afflict those who would defend suspects just as firmly as those who prosecute them.
All kinds of theories circulated owing to the Chamberlains’ unusual religious beliefs and “odd behaviour” in the aftermath of Azaria’s disappearance. But devout Christianity is hardly a solid prior indicating a tendency to murder. Nor is “odd behaviour” in the aftermath of a mother’s most extreme psychological trauma. Who would not behave oddly in those circumstances?
That anyone could bring themselves to cold-bloodedly murder a nine-week-old baby is hard to imagine. Statistically, it is highly improbable. That the child’s own mother would is, in the absence of compelling evidence, preposterous. To even start with this theory you must surely have compelling grounds to believe it over all other possibilities — if not credible eye-witness evidence, then a documented history of violence, behavioural volatility or psychiatric illness grave enough to overthrow the strong human instinct to protect vulnerable infants. Lindy Chamberlain had no such history.
If there is any plausible alternative explanation for the baby’s disappearance, there must have been a reasonable doubt. It need not be more probable than the prosecution case: just not out of the question. Lindy Chamberlain provided one: a dingo snatching the child might have been unprecedented, but it was possible. There were dingoes in the area. They are predators. They are strong enough to carry away a human infant. A dingo was no less likely than a new mother noiselessly murdering her own infant just yards from a group of independent witnesses. That ought to have been the end of it.
Likewise, what Peter Ellis was alleged to have done is extraordinarily improbable. There are few documented cases of ritualistic abuse on that scale anywhere in the world. There are none in New Zealand. For such a thing to have happened without any prior evidence of such behaviour, with no adult witnesses, no one noticing the absent children and for none of the children to bear any trace of their supposed injuries makes it even less likely.
And there was a plausible alternative: nothing happened at all. All that was required for that to be true was for preschool children, perhaps at the prompt of interviewers already in the grip of prosecutor’s tunnel vision, to make things up. By comparison with “untraceable, unwitnessed, wide-scale ritual satanic abuse”, “children exercising their imaginations to please adults” is not improbable.
It is different for David Bain. While it is true that familicide is extremely rare and, therefore, absent prior evidence, highly improbable, there is no question that the Bain family were murdered. The only question was by whom.
On David’s own theory, only two people could have done it: his father and himself. It was, therefore, definitely familicide: the abstract improbability of that explanation is beside the point. The probability that David was responsible is correspondingly much higher: before considering any further evidence, there is a 50% chance he was responsible.
And a lot of the further evidence pointed in his direction. To not be the murderer, on his own evidence, David would have been extremely unlucky — forgetting to turn on the light, inadvertently disposing of exculpatory evidence, having incriminating injuries he could not explain — while no such evidence pointed to Robin. David’s defenders had their own tunnel vision, focusing narrowly on the provenance of each piece of incriminating evidence, identifying formal shortcomings in its value as evidence: questioning the manner of its collection, the chain of custody, raising possibilities of innocent explanations without evidence to support that alternative, and disregarding the wider context of the whole case.
Now, David Bain was acquitted of all charges. On the evidence, the jury could not rule out the possibility that Robin Bain was responsible. Since it was not satisfied beyond reasonable doubt that David was the perpetrator, he was correctly acquitted at law. But it remains likely that David was the perpetrator.[1] As a piece of judicial procedure, the comparison between Bain’s case and those of Ellis and Chamberlain is stark.
Tunnel vision and circumstantial evidence
Where there is reliable direct evidence — eyewitnesses, recordings, and causative links between a suspect and the allegation — there is little need for inference; the evidence speaks for itself. But cases comprised predominantly of circumstantial evidence — that therefore depend on inferential reasoning — are vulnerable to tunnel vision because the complex of cognitive biases that make up prosecutor’s tunnel vision affect the process of inference.
Upstanding citizen turns master criminal. Does well.
Prosecutor’s tunnel vision cases often involve hitherto law-abiding citizens suddenly committing fiendish crimes without warning, explanation or motive.
Now JC is, ahem, told that committing violent crime without leaving any incriminating evidence is extremely hard. Especially in a controlled environment like an infants’ daycare centre or a hospital.
To be sure, serial criminals can operate in these environments but they will need to be good: meticulous in their preparation and method. Over time, they will hone their techniques and perfect a modus operandi, acquiring a ghoulish sort of expertise in murder: killing patients in a closely monitored, controlled environment populated by trained experts hardly lends itself to opportunistic, freestyle offending. Hospitals, in particular, overflow with specialists who can detect subtle clues that ordinary laypeople — and burgeoning criminals learning their craft — have no idea about.
As with any complicated discipline, one learns as one goes. We should not, therefore, expect “beginners” to perform like master jewel thieves, slipping in and out, striking in the dark and leaving no trace. They will blunder. They will be careless. They will leave evidence. They will slip up, leave giveaways and clumsily trigger red flags. From new criminals, we should expect “smoking guns”.
So if a strange confluence of events is accompanied by no smoking gun, this too has some prior probability value. It does not exclude the possibility of foul play, but it does make it less likely.
People do not often flip, overnight and without warning, from conscientious citizens to compulsive criminals. If they did, we would notice it.[2] When hitherto law-abiding people do slide into criminality, there is generally motivation, a history of antisocial behaviour, identifiable psychological trauma, drug dependency, observed personality change over time or diagnosed mental illness.[3] Often all of these things. (Let us call them “criminal propensities”.)
The absence of any of these criminal propensities in a suspect’s makeup should reduce the “prior probability” of foul play by that suspect. As we will see, “circular correspondence bias” can take such a lack of criminal propensity and somehow invert it into confirmation.
Where a crime has certainly been committed, this goes only to who the perpetrator is. There may (as in David Bain’s case) be only a small universe of credible suspects. If all “possible suspects” have the same lack of criminal propensity, it will count for little. But if the universe of “potential suspects” is large — or if it is plausible that no crime was committed at all — an individual’s lack of any criminal propensity should tell us something “circumstantial”.
Neither Lindy Chamberlain nor Peter Ellis had any criminal propensity, and in both cases there was a plausible alternative explanation. For David Bain it was different.
Burden and standard of proof
The burden of proof is a different thing to the standard of proof. The burden is who has to prove their case: this falls squarely on the prosecution. The defence is not required to prove anything, least of all the accused’s innocence.
But there is tension between that crystalline legal theory and the practical reality: it is in the defendant’s interest that someone casts doubt into jurors’ minds. Since the Crown plainly won’t be doing that, the defence must either rely on jurors to confect plausible doubts by themselves, or it must plant some doubts there. It is a brave defence counsel indeed who puts her client’s future in the hands of a jury’s imagination and capacity for creative thought.
All the same, the prosecution’s standard of proof — what it must do to discharge its burden of proof — is, in theory, extremely high. Courts have dumbed down the time-honoured phrase beyond reasonable doubt: these days, juries are directed to convict only if they are “sure”. This is meant to mean the same thing, but not everyone is persuaded that is how juries understand it.[4]
There is some reason to think that juries start with an ad hoc presumption that any defendant put before them is somewhat likely to be guilty: if the police were competent and acted in good faith, why else would the defendant be in the dock?
So where there is only tendentious data supporting a defendant’s guilt but a total lack of “data” supporting her innocence (what evidence could there be that she did not do something that did not happen?) there are grounds for confusion, and there is good evidence that juries do indeed get confused.
Lindy Chamberlain was convicted of her own daughter’s murder, with a pair of blunt scissors, on the circumstantial evidence of what looked like blood sprays in the footwell of the family car.[5]
Evidence supporting the intuition that “a sane mother is most unlikely to brutally murder her own nine-week-old child at all, let alone with an improvised weapon and without warning or provocation” was not before the court. What evidence could there be of that? Somehow the jury was persuaded not just that she did murder her child, but that there was no plausible alternative explanation for the child’s disappearance. This was largely thanks to the strange collection of cognitive biases to which the prosecution had succumbed.
The three phases of tunnel vision
So what is “prosecutor’s tunnel vision”, then, and how does it come about?[6] It is a sort of emotional commitment to an (as-yet) unproven explanation. We become personally invested in a narrative; the consequences, and personal costs, of rejecting the conviction are great, and grow the more we commit to the position.
Tunnel vision has three phases: first, there must be enabling background conditions that make us vulnerable to tunnel vision; second, there are pathways into a given tunnel; third, there are cognitive biases that keep us there.
The lessons are two-fold:
- We are not as rational as we like to think; and
- Data is never the whole story.
Background conditions
Certain dispositions, biases and miscellaneous psychological tics come together to create the conditions for tunnel vision to swamp an earnestly-held narrative:
Mainly circumstantial evidence
Since tunnel vision is a collection of cognitive biases infecting how we draw inferences, it usually arises where there is no direct evidence of wrongdoing. Where there are reliable eyewitnesses there is little need to infer what happened: someone saw it. Where there are no witnesses, and even more so where it isn’t clear there was a crime at all, the jury must infer what happened from purely circumstantial evidence.
There was no clear evidence Azaria Chamberlain was dead, let alone murdered: she was simply missing. Had a reliable witness seen a dingo carrying her away (even Lindy Chamberlain did not claim to have seen that), there would be little scope for inference about the significance of the red-brown material spattered under the dashboard of the Chamberlains’ car. (The coroner’s damning report to the crown prosecutor in the Chamberlain case is a chilling example of tunnel vision.)
Information glut
The fact is, there are very few political, social, and especially personal problems that arise because of insufficient information. Nonetheless, as incomprehensible problems mount, as the concept of progress fades, as meaning itself becomes suspect, the Technopolist stands firm in believing that what the world needs is yet more information.[7]
The more circumstantial evidence there is, the more scope for inference, and the more fantastical narratives one can draw. If the alleged crime occurs in a tightly-controlled environment designed to generate technical data and specialist information, and where deep subject matter experts are on hand to observe and analyse that information in retrospect, conditions are ripe for tunnel vision.
This is exactly the scenario in which the healthcare serial murder cases arise. The alleged crimes, though rarely witnessed, take place within a carefully controlled environment[8]. Access is monitored by CCTV and controlled by swipecards and elaborate security systems. Detailed medical protocols govern and track the storage and dispensation of medicines. Sophisticated equipment — machines that go “ping” — monitor patients’ vital signs around the clock. Nurses, orderlies, consultants and doctors constantly mill around at all hours, doing ward rounds, checking in on patients and generally keeping an eye out for signs of trouble.
Though these systems seem incapable of capturing direct evidence of wrongdoing, they still generate a colossal amount of very “scientific” medical and digital data. This is capable of being analysed and framed to support — to be “consistent with” — any number of different and frequently contradictory inferences and theories of the case.
Expert overreach
To a man with a hammer, everything looks like a nail.
- —Abraham Maslow
It is hardly news that existing knowledge and beliefs shape what we see and how we see it. Prosecutors (and defenders) hold pet theories about human behaviour just like anyone else.
You don’t sign up for the army if you hope not to shoot a gun. Nor do you join the police hoping to never find crime, nor take a crown warrant if you don’t expect to prosecute. Those involved in prosecution are primed this way, as are we all: to be useful; for their roles to be important and to make a difference.
This is no less so for expert witnesses. They are incentivised to support the cases on which they are engaged.[9] Yet subject matter experts can overestimate their ability to analyse and judge subtle problems, especially those in fields adjacent to, but not directly within, their expertise. They may over-weight the overall significance of matters that do fall within their expertise against those that do not. They are less likely to consider alternative models, explanations, theories or evidence that de-emphasise their expertise, let alone theories of the case that contradict it.
This kind of expert overreach is germane for the “healthcare serial murder” cases. Human biology is, in the technical sense, complex. There is much about it that even experts do not yet, and may never, know. Patients may have conditions that are never tested for, and therefore never detected, before or after death. They may have conditions as yet unknown to medical science, in which case they would not have been revealed by established tests in any case.
High improbability, whatever the explanation
When the allegation involves extremely improbable events — where there is no commonly experienced explanation — our natural human weakness for statistical reasoning is in play. Base rate neglect (see below) becomes a risk.
Both possible explanations for Azaria Chamberlain’s disappearance (being snatched by a dingo and maternal infanticide) were extremely improbable. What little data there was on dingo abductions suggested they were rare, but the data were not good. Not many people camp with infants in the Outback. Dingoes rarely get the chance to take them. There is a lot of evidence that maternal infanticide is extremely improbable: you, dear reader, are evidence of that.
Now, had dingo attacks been commonly reported before Azaria’s disappearance the possibility of maternal infanticide may never have come up.
The “healthcare serial murderer” cases typically have this same feature: both the crime (serial murder) and the “innocent alternative” (an unusual cluster of natural deaths coinciding with the attendance of the same nurse) are intrinsically improbable. Base rate neglect (healthcare serial murders remain vanishingly unlikely) is a real risk.
Getting there
Once you are equipped with a hammer and have started wandering around the house looking for nails, there should still be scope to falsify a bad operating theory. But again, psychological biases can override the dispassionate application of cool logic.
Hopeful extrapolation
We are natural riddle-solvers. We build models of the world by habit. It is easy to slip beyond our range of reliable experiences and form theories for which we have little expertise, especially where abundant circumstantial evidence is consistent with — able to be fitted to — our side of the argument, rather than dispositive of it.
Litigation’s adversarial nature, in which advocates are meant to present their arguments in the best possible light, emphasising helpful facts and neglecting unhelpful ones, hardly helps. The underlying philosophy here is akin to Adam Smith’s “invisible hand”: from the interaction of opposed, self-interested advocates we expect the invisible hand of justice miraculously to emerge.
Base rate neglect
Lawyers are not natural statisticians. We should be careful with statistics especially when they concern extremely improbable events.
“Base rate neglect” — the “prosecutor’s fallacy” — is the natural tendency to ignore the “base rate” of an outcome in a population and instead focus on “individuating” information about the specific case.
Linda is a single, middle-aged female philosophy graduate who is active in CND. If asked which description of Linda is more likely to be true, most people will choose “she is a bank teller who is active in the feminist movement” over “she is a bank teller” when plainly the former is a subset of the latter.[10]
Say a medical test is expected to give a correct result 999 times out of 1,000. We are tempted to assume, therefore, that any positive result will be pretty much conclusive. But if the general prevalence of the condition in the population — the “base rate” — is just 1 in 100,000 then for every true positive result, we should expect 100 false ones. The probability that a positive test is accurate is only 1%. [11]
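The arithmetic above can be checked directly. A short sketch, using only the figures given in the text (a test that is right 999 times in 1,000, a base rate of 1 in 100,000):

```python
# Positive predictive value: the probability that a positive test result
# is a true positive, given the test's accuracy and the condition's base rate.

def positive_predictive_value(accuracy: float, base_rate: float) -> float:
    """Bayes' rule for a diagnostic test with symmetric error rates."""
    true_positives = accuracy * base_rate            # sick and correctly flagged
    false_positives = (1 - accuracy) * (1 - base_rate)  # healthy but flagged anyway
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(accuracy=0.999, base_rate=1 / 100_000)
print(f"{ppv:.1%}")  # prints "1.0%": a positive result is almost certainly wrong
```

The false positives swamp the true ones simply because almost everyone tested is healthy.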
Take the healthcare serial murderer: say there is a 1 in 342 million chance that a nurse would be on duty for all suspicious deaths in a cluster at random. If one nurse was on duty for all those suspicious deaths, we might suppose it to be damning. The jury in the trial of Dutch nurse Lucia de Berk did: they convicted her of serial murder in 2003.
But this is to ignore the base rate. With no other prior information suggesting her guilt (in de Berk’s case, there was none) what is the probability that a given individual is a healthcare serial murderer?
For guilt to be merely as likely as random chance, not more, there would need to be twenty-three healthcare serial killers operating at any given moment. There do not appear to be that many in the world, let alone in healthcare environments.
Had that probability been correct (it turned out to be a wild underestimate), sheer chance was still a likelier explanation than that Lucia de Berk was a serial murderer. (The probability of her shift patterns coinciding by chance was subsequently reassessed as being more like 1 in 25.) De Berk’s conviction was overturned in 2010.
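The same point can be made in Bayesian terms. The sketch below uses the two coincidence figures quoted in the text (1 in 342 million, later revised to 1 in 25); the prior is an assumption for illustration only, not a figure from the case: suppose one nurse in 500,000 is a serial murderer.

```python
# Posterior odds of guilt = prior odds x likelihood ratio.
# Simplifying assumption: a guilty nurse would produce the suspicious shift
# pattern with certainty, so the likelihood ratio is 1 / P(pattern | innocent).

def posterior_odds(prior_odds: float, p_pattern_if_innocent: float) -> float:
    return prior_odds * (1 / p_pattern_if_innocent)

prior = 1 / 500_000  # assumed prior odds of a given nurse being a serial murderer

print(f"{posterior_odds(prior, 1 / 342_000_000):.0f}")  # 684: pattern looks damning
print(f"{posterior_odds(prior, 1 / 25):.6f}")           # 0.000050: chance far likelier
```

With the original (wrong) 1-in-342-million figure, guilt looks overwhelmingly likely even against a tiny prior; with the corrected 1-in-25 figure, the base rate dominates and innocence is by far the better explanation.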
Confirmation bias
All observation is theory-dependent: scientists must first have a theory before they can test it: otherwise, how do they know what they are looking for?
Having committed to a bad theory, we tend to seek out, interpret, and best remember information that confirms it. We may over-weight supporting evidence and disregard contradicting evidence. This is what the adversarial system expects.
This is confirmation bias. It has two phases. The first applies in “theory formation”: the man with a hammer is primed to see nails.
Suspicious that strangely-behaving Lindy Chamberlain might have something to hide, Police searched her possessions, tent and vehicle, eventually finding what they believed to be “fetal blood” splashed under the dashboard of her husband’s car.[12] This set the police in a direction. Before long they had constructed an elaborate theory of how Lindy Chamberlain had committed murder.
The second aspect is in theory corroboration. Here evidence which, when taken in the abstract, would tend to “be consistent with” innocence can, with confirmation bias, be presented as damning.
We would expect an innocent mother to behave “oddly” in the aftermath of her infant daughter’s disappearance. We would expect an innocent nurse to lead a healthy social life. We would expect an innocent defendant not to admit his crimes or express remorse for his actions.
But these are exactly the behaviours one might expect of the innocent: things that point away from the defendant’s guilt appear, with confirmation bias, to point towards it.
Staying there
Once we are in the tunnel, there are cognitive biases that prevent us from finding our way out.
Hindsight bias and the reiteration effect
In hindsight, people tend to think an eventual outcome was inevitable or more likely than they might have before it happened. “What is the chance that that nice woman you met at the campsite just now will, in three hours, brutally murder her own nine-week-old infant?” versus “Given that this woman’s nine-week-old infant has disappeared from the campsite, and the police suspect her of foul play, what is the chance that she brutally murdered her own child?”
Through “hindsight bias” we project onto outcomes our knowledge of the observed behaviour in the past, without realising that the perception of the past has been tainted by our knowledge of the outcome.
Once a person becomes a prime suspect, hindsight bias suggests that, upon reflection, she was the likely suspect from the beginning. Evidence is malleable in light of this “realisation”.
There is also a “reiteration” effect. Our confidence in a theory increases the more we hear it, independent of its validity. The longer that police, prosecutors and witnesses live with the conclusion of guilt, the more they become invested in it. Their conclusions become entrenched, and it appears obvious that all evidence pointed to the defendant from the outset. Prosecutors and defenders find it increasingly hard to consider alternative theories of the situation.
Circular correspondence bias: Randle McMurphy’s dilemma
Correspondence bias, or the “fundamental attribution error”, attributes observed behaviour to personality, or malice, without considering more probable situational factors. This one can avoid, though few do, with Hanlon’s razor:
“Do not attribute to malice things that can just as well be explained by stupidity.”
A strangely prevalent form of reasoning combines it with circular logic, as follows:
- There is weak circumstantial evidence that X committed a crime.
- The (highly unusual) traits of people who commit that sort of (highly unusual) crime are attributed to X (this is a “fundamental attribution error”).
- X’s newly-attributed traits are cited as evidence corroborating the existing weak circumstantial evidence (this is a circularity). X is now a “sociopath”, “narcissist”, “attention-seeker”, or even “Munchausen’s syndrome by proxy sufferer”, so is more likely to have committed the highly unusual crime.
- Other aspects of X’s behaviour which, but for the allegation, would be normal, now appear to verify the trait (this is a further circularity).
- X is convicted on the compelling evidence of the opportunity and possession of that highly unusual trait.
You might recognise this as the plot driver of Ken Kesey’s One Flew Over the Cuckoo’s Nest.
Bob the doctor
So, say there is an unusual cluster of deaths in the geriatric ward of a hospital. It coincides with the shift pattern of one doctor, Bob, raising a weak and easily rebuttable presumption of foul play on Bob’s part.
Before this statistical correlation, Bob was not under suspicion. Nor was his behaviour unusual.
But the foul play — if that’s what it is — is horrific: someone is systematically murdering little old ladies. Only a psychopath with a stone-cold heart and a narcissistic personality disorder could do that. If it is Bob, he must be a psychopath with a narcissistic personality disorder. This logic is already circular, but the circle is big and conditional enough not to be obvious.
We now start inspecting Bob’s behaviour. We find him to be meticulously tidy. He enjoys socialising, attending a regular salsa class on Wednesdays. He sent condolence cards to the families of several deceased patients.
Now: what could be more indicative of a stone-cold, narcissistic psychopath who has just murdered someone than a fastidiously tidy person — hence, no evidence, right? — who sent a card to the victim’s wife before going out drinking and dancing with his friends? The net is closing in.
Antidotes
Q: How many psychiatrists does it take to change a light bulb?
A: Just one; but the light bulb really has to want to change.
There are some strategies to counteract the effect, but the predominant one is to want to keep an open mind.
Hanlon’s and Otto’s razors
“Do not attribute to malice things that can just as well be explained by stupidity.”
Don’t assume malice where stupidity will do; likewise, per Otto’s razor, don’t attribute to virtue something that could equally be attributed to self-interest; or to skill something that could equally be attributed to dumb luck.
- ↑ Christchurch Journalist Martin Van Beynen’s fantastic podcast Black Hands compellingly makes this case.
- ↑ They might snap into a sudden orgy of extreme violence, but this plays out as desperate, meltdown mass murder, not calculated, ongoing serial murder, and there is generally no doubt that it is murder and no shortage of direct evidence implicating the accused.
- ↑ Mental illnesses having a clear medical pathology, that is, not ones suspiciously made up out of ex post facto symptoms like “Munchausen by proxy”. See the “circular correspondence bias” discussion below.
- ↑ New Law Journal: The Trouble With “Sure”
- ↑ In fairness, the crown entered expert forensic evidence that it was specifically infant blood, so you can hardly fault the jury here. You can fault the crown forensics team, though: it turned out to be acoustic deadening spray and not blood of any kind!
- ↑ JC draws upon The Multiple Dimensions of Tunnel Vision in Criminal Cases by Keith Findley and Michael Scott in the Wisconsin Law Review (2006).
- ↑ Neil Postman, Technopoly: The Surrender of Culture to Technology, 1992.
- ↑ In the cases the JC has managed to track down, not one involves anyone seeing the accused commit any unequivocal act of harm.
- ↑ There is some debate about expert witness conflicts of interest in criminal law circles but, given how much public interest there is in the question, it has had little public exposure as yet: perhaps the Post Office Horizon IT scandal and the Letby case will change that.
- ↑ per Kahneman and Tversky. This work is not without its critics. JC wonders whether the same thing propels the commercial solicitor’s compulsion to over-description.
- ↑ This sounds properly nuts but it is true. Assume you test 100,000 people. At a 99.9% accuracy rate, you will expect 99,900 correct results, and 100 false ones. But in that sample of 100,000 you would only expect 1 actual case of the condition. So if you are tested and receive a positive result, you have a 1 in 100 chance that it is a true result.
- ↑ Much later the substance was retested. It turned out to be sound-deadening material containing iron oxide.