Trolley problem

{{quote|''Le mieux est l'ennemi du bien.''
:—Voltaire}}
{{quote|
A runaway trolley is hurtling down a track toward five workers who will certainly be killed if it continues on its course. You are standing next to a point switch. You can divert the trolley onto a side track, saving the five workers, but only at the certain cost of a sixth, who is working on the spur line. Should you pull the switch to divert the trolley, proactively saving five lives, at the certain cost of one?
}}


{{drop|P|hilosophy ''loves'' conundrums}} like the trolley problem. Philippa Foot invented it in the 1960s<ref>''The Problem of Abortion and the Doctrine of the Double Effect'' (1967)</ref> as a thought experiment to compare the moral qualities of ''co''mmission and ''o''mission. We might, Foot thought, think it acceptable to divert a runaway trolley away from the larger group of workers, as the desired action is “steering ''away'' from these victims” rather than toward the single worker. While the consequence of doing so (the single worker’s death) is foreseeable, it is an unwanted contingency flowing from the action rather than its intended effect. On the other hand, pushing an innocent bystander onto the first track to stop the trolley before it gets to the five workers is directly to intend one person’s death, which is ''not'' acceptable, even though the outcome would be the same.


This is a neat challenge to extreme utilitarianism but, still, a narrow point, long since lost on netizens who now trot the problem out when discussing real, not metaphorical, runaway trolleys: driverless cars.


====Driverless cars====
{{quote|What if a child runs out in front of an autonomous vehicle? What if the vehicle can avoid the child, but only at the cost of causing some other damage or injury? How should it be programmed? Where all outcomes are unconscionable, how can the vehicle operator ''avoid'' moral culpability?}}
 
{{Drop|H|ere, the trolley}} problem is not a metaphorical analogy, but a live practical dilemma. With autonomous vehicles, we are told, this could happen, so we must choose, in the abstract and without time pressure, between imagined unconscionable outcome ''A'' and imagined unconscionable outcome ''B'', but without recourse to any solution ''C'', which is to redesign the system to avoid outcomes ''A'' ''and'' ''B''.
 
We can see at once this is an emotive but essentially sterile question to which, by design, there’s no good answer. A hypothetical in which someone dies however it pans out holds no moral lesson other than ''fix it, so there is another outcome''.
 
But, explicitly, we aren’t allowed that choice.


====Parallel worlds of science and philosophy====
{{Drop|S|cientists live in}} this [[Simple system|simple]], stable world filled to the brim with countless, benign little engines — “[[nomological machine]]s” — by which everything reliably ''works''. In this idealised world, we can predict the period of a swinging pendulum with Galileo’s equation. We can calculate the pressure and volume of a quantity of gas with Boyle’s law. We can explain the accelerating expansion of the universe with dark matter, dark energy and an arbitrary cosmological constant. [''I don’t think this last one is a great example — Ed'']
 
In the scientists’ world, ''everything'' runs like clockwork.<ref>This clockwork world is a beguiling lie, as [[Nancy Cartwright]] argues.</ref>
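
By way of illustration, here is a toy sketch (in Python, and assuming nothing beyond the idealisations those laws already make: small swings, no friction, a fixed temperature, an ideal gas) of how obligingly two of those little engines run:
<syntaxhighlight lang="python">
# Toy sketch of two "nomological machines", under their usual idealising
# assumptions: a frictionless small-angle pendulum, and an ideal gas held
# at constant temperature.
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Galileo's pendulum: T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

def boyle_volume(p1: float, v1: float, p2: float) -> float:
    """Boyle's law: p1*V1 = p2*V2, so V2 = p1*V1 / p2."""
    return p1 * v1 / p2

print(pendulum_period(1.0))            # a 1 m pendulum swings in ~2.01 seconds
print(boyle_volume(100.0, 2.0, 50.0))  # halve the pressure and 2 L becomes 4 L
</syntaxhighlight>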


Philosophers inhabit, more or less, a dark inversion of the scientists’ world. Theirs is, instead, a universe of ''insoluble conundrums''. They also fill their space with [[Nomological machine|ingenious devices]], only malicious ones: “thought experiments” invented ''specifically to bugger everything up''.  


In the scientists’ world, everything goes to plan. In the philosophers’ world, ''nothing'' does.


For every scientific axiom, whirring happily away in its intricate cage, plotting out the reliable progress of the world, the philosophers have their obstreperous demonic contraptions: here, a ''brain in a vat'', there, a ''Chinese room'', over yonder, a ''parallel universe'' — designed to throw the whole edifice into confusion.  


====Negative-Lindy effect====
{{drop|J|ust as science}} is designed to explain the world as we experience it, philosophy questions our basic assumptions about reality. Philosophical thought experiments are designed to draw out that confusion. But since we perceive the world to be reliable, constant and consistent — it is these regularities that our scientific “engines” explain — philosophical thought experiments must have the unalterable property of being ''entirely hypothetical''.
 
If a philosophical conundrum ever cropped up in real life, someone would have to ''solve'' it. This would spoil it: a solved thought experiment is like a drunk bottle: all the fun has gone out of it, and it can’t teach you anything. The best way of resisting solution by practical people is, naturally, to not show up in the first place: to make your exclusive home in the minds of academic philosophers. To have lasting value, therefore, a philosophical thought experiment must have some kind of ''negative'' “[[Lindy effect]]”.
 
The ''real'' [[Lindy effect]] is a measure of [[Antifragile|antifragility]]: things like scientific experiments must be “road tested”. Ideas that have taken a beating at the hands of children, animals and practical people over time, and that ''still work'', are less likely to fail in the future. If they had a critical weakness, someone would have found it. The longer an idea has been around, the more resilient we can expect it to be.
 
Untested new ideas are more likely to break. We don’t yet know their vulnerabilities. We don’t know how fragile they are. If you had to bet, which would last longer: the Pyramids of Giza or the Las Vegas Sphere?
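
To put a number on that intuition, here is a toy model (a sketch only, assuming a Pareto-style survival curve, which is one conventional way of formalising the Lindy effect): the longer something has already survived, the longer we should expect it to keep going.
<syntaxhighlight lang="python">
# Toy model only: under a Pareto (power-law) survival curve, a conventional
# formalisation of the Lindy effect, expected remaining life grows in
# proportion to the life already survived.
def expected_remaining_life(age_years: float, alpha: float = 2.0) -> float:
    """Pareto tail, shape alpha > 1: E[extra life | survived to age] = age / (alpha - 1)."""
    if alpha <= 1:
        raise ValueError("a finite expectation needs alpha > 1")
    return age_years / (alpha - 1)

for survived in (2, 100, 4500):  # years: a gadget, a classic text, the Pyramids of Giza
    print(survived, "->", expected_remaining_life(survived))
</syntaxhighlight>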


The endurance test for philosophical thought experiments is the opposite:


{{quote|The Negative Lindy Effect: the longer a philosophical thought experiment can survive ''without'' presenting in real life to actual people to be tested, the more resilient it will be.}}


Descartes’ brain in a vat: ''excellent''. Berkeley’s falling tree in a forest: ''brilliant''. (By definition, no one can ever experience it!) Fully untestable.
 
The Trolley Problem ''seemed'' like just such an untestable thought experiment: a subject is presented with a set of alternative [[Unknowns|known knowns]] over a period long enough for her to act with definitive moral clarity, yet not long enough for her to problem-solve or to avoid a bad outcome altogether, while no-one else with any moral responsibility for the scenario has the practical ability to do anything to prevent it.
 
Just one person can act; she can only pull a switch, or not pull it; and someone dies either way.
 
This is about as artificial as a problem could be, isn’t it?


=====Artificial certainty=====
Firstly, we are asked to accept that the subject has perfect, absolute ''knowledge'' of the present state and the future outcomes on both tracks — that the trolley is coming, that it cannot be stopped, that the only possible intervention is her points operation, that death on one spur or the other is inevitable — ''and'' that no-one else in the scenario with any moral agency has ''any'' knowledge, either of the current status or the future potential.


The subject must be ''certain'' that the impacted workers will die. ''How''? Can she ''see'' them? If so, why can’t she ''communicate'' with them? Why can’t they see her?


Why can’t the workers see anything? Why don’t they get off the tracks?


How about the track operator: can ''she'' not see what is happening?


=====Artificial timeframe=====
In real life, there are usually many “intervention points” before a crisis. When there aren’t, there will be no time for abstract moral calculation either.
 
The Trolley Problem artificially compresses moral deliberation into ''an instant''. It must, to close off opportunities for resourcefulness, problem-solving and communication that would spoil its explanatory force.  


The subject must pre-solve a moral dilemma in the abstract but without being allowed to request changes to a necessarily inadequate system to prevent the dilemma from arising in the first place.  


This is to impose a [[simple system]] on a [[Complexity|complex problem]]. The real world would not tolerate such an approach: if we can foresee an adverse outcome — and we can; we are expostulating about it — ''we are obliged to fix it''. This is a minimum standard of system design. Not redesigning the system is the moral failure, not “pulling or failing to pull a lever”.


If we ''know'' there is an inherent “trolley problem” in our system design, we cannot wait for a ''literally foreseen'' calamity to happen before acting to stop it.
 
By the way: realising there is a risk and running it anyway is the legal definition of “[[Reckless|recklessness]]”: someone ''would'' be guilty of murder in this hypothetical.


=====Artificial (lack of) social context =====
The trolley problem also strips away all social and institutional contexts. Why have we designed the system this way? How have we managed to get to this point without anyone foreseeing it?  


Trolley problems do not occur in the real world because this social context will prevent them. We build systems to anticipate problems. We encounter design flaws once and solve them. This is the (good) [[Lindy effect]]: ''Bad designs get junked''. ''Good'' ones stand the test of time.


=====Artificial moral agency vacuum=====
By focussing on the switch-puller’s choice, we overlook the real moral failures that must have led to the situation. The points operator is, for all intents and purposes, a random bystander. She bears ''no'' responsibility for the situation — she neither designed the track, nor scheduled the workers, nor trained them, nor did she cause the runaway trolley — yet she alone is expected to be omniscient, while everyone else with any part in this incipient tragedy is ''entirely, and justifiably, [[Inadvertence|inadvertent]]''.


Who designed such an inept system? Who assigned the workers to those dangerous conditions? Who lost control of the trolley? Why weren’t they looking out? Why were there no fail-safes, warning procedures or circuit breakers? Why was the track not designed to prevent accidents like this? Did no one think this could happen?


====False paradox====
{{drop|Y|ou all know}} how JC loves a [[paradox]]; well, this is a false one. There is no real-world scenario where the trolley problem can play out as a genuine moral conundrum.  
{{small|80}}
{{tabletopflex|100}}
{{aligntop}}
! Rowspan="3" | Player !! Rowspan="3" | Responsible for !! Colspan="3"| Time of awareness  
|-
!! Rowspan="2" |Before breakaway !! Colspan="2" |After breakaway
{{aligntop}}
!Time to react !! No time to react
{{aligntop}}
| System Designer || System design safety, foreseeable failure modes. || Implement fail-safes, emergency brakes, worker safety zones, warning systems. || Could activate emergency protocols, backup systems. || Too late.
{{aligntop}}
| System Operator || Worker safety, track access, safe rostering, risk assessment. || Ensure safe work procedures, worker positioning, track access control, maintenance. || Could initiate emergency protocols, worker evacuation, system shutdown. || Too late.
{{aligntop}}
| Track workers || Challenging unsafe conditions, being experts, keeping a lookout for danger. || Raise issues. Quit! || Evacuate track. || Too late.
{{aligntop}}
| Switch Operator || Looking after the switch.
|| Make sure the switch is working. || Raise alarm. Call for help. || Pull switch!
|}
</div>
Of all these moral, practical and systems considerations here — there are a lot — the hapless switch operator’s position is the ''least'' vexed. If it gets to the point where the only judgment remaining is “whether or not to throw the switch” and ''no one else'' is already fully culpable for putting the switch operator in that position, and if, [[Q.E.D.]], the switch operator has no time to think about it either, then it necessarily follows that the switch operator will not be blamed for doing nothing.  


This will just be one of those things. ''Accidents happen''. The switch operator did not have the time or information to make the moral calculus to intervene, and realistically never would.
====The driverless cars====
{{drop|A|pplying all this}} to autonomous cars, we see a false equivalence. Why does the trolley problem not arise, already, for human drivers? Because, unlike AVs, humans aren’t expected to be perfect calculating, rationalising machines. They don’t run on GeForce chips. They cannot flawlessly assimilate, triangulate and arbitrate on terabytes of real-time data within a microsecond — was it a child? A fox? A dog? A balloon? — calculate the brake horsepower, vectors, momentum, apply flawless Rawlsian moral reasoning ''and'' communicate the resulting executive decision to the drive train in time to achieve the appropriate outcome.  


If a human driver reacts at all, he may blindly slam on the anchors and yank the steering wheel without a conscious thought ''at all''. Most likely, he will just flatten whatever ran out in front of him and that will be that. Terrible accident. But he can’t be judged, because he lacks the faculties to do any better.


And, realistically, a hyper-intelligent autonomous machine ''still'' wouldn’t be able to identify the obstacle, identify the alternatives, evaluate the potential damage of each outcome, assess their relative moral consequences and correctly execute the most morally profitable manoeuvre in the split second available.
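
To see quite how tall an order that is, here is a deliberately naive sketch of the pipeline the trolley-problem framing demands. Everything in it is hypothetical: the functions, numbers and labels are invented for illustration and describe no actual vehicle’s software.
<syntaxhighlight lang="python">
# Hypothetical sketch only: the decision pipeline the trolley problem implicitly
# demands of an autonomous vehicle. Every function, number and label below is
# invented for illustration; no real AV stack or API is being described.
import time

TIME_BUDGET_S = 0.05  # the "split second available"

def classify(sensor_blob):
    # was it a child? a fox? a dog? a balloon? (toy stub: permanently uncertain)
    return {"child": 0.4, "dog": 0.3, "balloon": 0.3}

def enumerate_manoeuvres():
    return ["brake hard", "veer left", "veer right", "carry on"]

def predict_harm(manoeuvre, beliefs):
    # toy physics: braking is least harmful, carrying on the worst
    base = {"brake hard": 0.2, "veer left": 0.5, "veer right": 0.5, "carry on": 0.9}
    return base[manoeuvre] * beliefs.get("child", 0.0)

def moral_weight(expected_harm):
    # the genuinely impossible bit: Rawlsian reasoning in a microsecond
    return expected_harm

def choose_manoeuvre(sensor_blob):
    deadline = time.monotonic() + TIME_BUDGET_S
    beliefs = classify(sensor_blob)
    best, best_score = "carry on", float("inf")  # default if we run out of time
    for m in enumerate_manoeuvres():
        if time.monotonic() > deadline:
            break  # out of time: act on whatever has been evaluated so far
        score = moral_weight(predict_harm(m, beliefs))
        if score < best_score:
            best, best_score = m, score
    return best

print(choose_manoeuvre(sensor_blob=None))  # -> "brake hard", in this toy world
</syntaxhighlight>
Even this cartoon quietly assumes the hard parts (the classifier, the physics, the moral weighting) exist and finish inside the time budget, which is precisely what the paragraph above doubts.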
 
So why are we obsessing about autonomous cars? Data suggest that driverless cars are materially safer than human-operated ones. They promise a better set of outcomes. Yet because of ''our'' cognitive limitations, humans do not — apparently — suffer from the trolley problem. We are finding a moral conundrum in a [[fractal]] segment of [[spacetime]] previously inaccessible to moral discourse. By obsessing over hypothetical thought experiments, we delay progress on a statistically better outcome.  
{{quote|''Le mieux est l'ennemi du bien.''}}


====Too cute====
{{drop|T|he trolley problem}} is ''too'' cute to be a useful [[nomological machine]] in this context, even for philosophers. Pitting dramatically unequal moral outcomes against each other — ''five'' deaths here, ''one'' there — makes it easy for philosophers to get worked up, but it exaggerates the already artificial clarity of the dilemma. This is a truly impossible scenario: the ''ideal'' [[negative Lindy effect]]. Tough ethical choices do not manifest as arithmetic. Moral calculus is ''hard''.
 
 


So, let’s rework the dilemma. What if the alternatives were broadly comparable: what if there were ''one'' child on each track, and you could save ''this'' one only at the expense of ''that'' one? What if the alternatives weren’t comparable ''at all''? What if one were a girl, the other a boy? What if one were rich, one poor? What if one were from a marginalised community? What if ''both'' were, from ''different'' marginalised communities? How would we configure that into an algorithm to be operated at a moment’s notice? What if the required moral calculus were genuinely ''ambiguous''?


For, surely, this is how cases would present in real life: there would be no neat utilitarian computation to put the moral outcome beyond doubt.


You might say we can draw useful metaphors from the trolley problem — that it is a useful tool for exploring moral intuitions about action versus inaction, and ''intended'' versus ''foreseen'' consequences. Sure. But there are better ways of doing that: ways that would resonate in real-world scenarios, without absurd mechanical artefacts to distract from the core ethical question.
{{nlp}}
