Trolley problem
Latest revision as of 13:05, 21 November 2024
Le mieux est l'ennemi du bien.
—Voltaire
A runaway trolley is hurtling down a track toward five workers who will certainly be killed if it continues on its course. You are standing next to a point switch. You can divert the trolley onto a side track, saving the five workers, but only at the certain cost of a sixth, who is working on the spur line. Should you pull the switch to divert the trolley, proactively saving five lives, at the certain cost of one?
Philosophy loves conundrums like the trolley problem. Philippa Foot invented it in the 1960s[1] as a thought experiment to compare the moral qualities of commission and omission. We might, Foot thought, think it acceptable to divert a runaway trolley away from the larger group of workers, as the desired action is “steering away from these victims” rather than toward the single worker. While the consequence of doing so (the single worker’s death) is foreseeable, it is an unwanted contingency flowing from the action rather than its intended effect. On the other hand, pushing an innocent bystander onto the first track to stop the trolley before it gets to the five workers is directly to intend one person’s death, which is not acceptable, even though the outcome would be the same.
This is a neat challenge to extreme utilitarianism but, still, a narrow point, long since lost on netizens who now trot the problem out when discussing real, not metaphorical, runaway trolleys: driverless cars.
Driverless cars
What if a child runs out in front of an autonomous vehicle? What if the vehicle can avoid the child, but only at the cost of causing some other damage or injury? How should it be programmed? Where all outcomes are unconscionable, how can the vehicle operator avoid moral culpability?
Here, the trolley problem is not a metaphorical analogy, but a live practical dilemma. With autonomous vehicles, we are told, this could happen so we must choose, in the abstract and without time pressure, between imagined unconscionable outcome A and imagined unconscionable outcome B, but without recourse to any solution C, which is to redesign the system to avoid outcomes A and B.
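The sterility of the forced binary is easy to make concrete. A minimal sketch (the function name and numbers are illustrative assumptions, not any real AV planner): once the scenario is encoded as a choice between two fixed outcomes, the “answer” is just a readout of the cost function we baked in beforehand, and the interesting work — redesigning the system so neither outcome arises — is, by construction, unrepresentable.

```python
# A deliberately naive sketch of the forced binary. The "decision" is
# nothing more than the utilitarian weighting chosen in advance; the
# names and numbers are illustrative assumptions only.

def trolley_choice(deaths_if_no_action: int, deaths_if_action: int) -> str:
    """Pick the branch with fewer deaths. Option C — fix the system so
    neither branch arises — cannot be expressed in this signature."""
    if deaths_if_action < deaths_if_no_action:
        return "pull switch"
    return "do nothing"

print(trolley_choice(5, 1))  # the classic setup -> "pull switch"
```

The function is trivially “correct”, which is the point: all the moral content was smuggled in when we agreed to accept its two arguments as the only inputs.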
We can see at once this is an emotive but essentially sterile question to which, by design, there’s no good answer. A hypothetical in which someone dies however it pans out holds no moral lesson other than fix it, so there is another outcome.
But, explicitly, we aren’t allowed that choice.
Parallel worlds of science and philosophy
Scientists live in this simple, stable world filled to the brim with countless, benign little engines — “nomological machines” — by which everything reliably works. In this idealised world, we can predict the period of a swinging pendulum with Galileo’s equation. We can calculate the pressure and volume of a quantity of gas with Boyle’s law. We can explain the accelerating expansion of the universe with dark matter, dark energy and an arbitrary cosmological constant. [I don’t think this last one is a great example — Ed]
In the scientists’ world, everything runs like clockwork.[2]
Philosophers inhabit, more or less, a dark inversion of the scientists’ world. Theirs is, instead, a universe of insoluble conundrums. They also fill their space with ingenious devices, only malicious ones: “thought experiments” invented specifically to bugger everything up.
In the scientists’ world, everything goes to plan. In the philosophers’ world, nothing does.
For every scientific axiom, whirring happily away in its intricate cage, plotting out the reliable progress of the world, the philosophers have their obstreperous demonic contraptions: here, a brain in a vat, there, a Chinese room, over yonder, a parallel universe — designed to throw the whole edifice into confusion.
Negative-Lindy effect
Just as science is designed to explain the world as we experience it, philosophy questions our basic assumptions about reality. Philosophical thought experiments are designed to draw out that confusion. But since we perceive the world to be reliable, constant and consistent — it is these regularities that our scientific “engines” explain — philosophical thought experiments must have the unalterable property of being entirely hypothetical.
If a philosophical conundrum ever cropped up in real life, someone would have to solve it. This would spoil it: a solved thought experiment is like a drunk bottle; all the fun’s gone out of it, and it can’t teach you anything. The best way of resisting solution by practical people is, naturally, not to show up in the first place: to make your exclusive home in the minds of academic philosophers. To have lasting value, therefore, a philosophical thought experiment must have some kind of negative “Lindy effect”.
The real Lindy effect is a measure of antifragility: ideas, like scientific experiments, must be “road tested”. Ideas that have taken a beating at the hands of children, animals and practical people over time and still work are less likely to fail in the future. If they had a critical weakness, someone would have found it. The longer an idea has been around, the more resilient we can expect it to be.
Untested new ideas are more likely to break. We don’t yet know their vulnerabilities. We don’t know how fragile they are. If you had to bet, which would last longer: the Pyramids of Giza or the Las Vegas Sphere?
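The betting intuition can be quantified. Under one common formalisation of the Lindy effect — a Pareto survival curve, which is my assumption here rather than anything the essay commits to — the probability of surviving a further fixed horizon is overwhelmingly on the side of the thing that has already survived millennia:

```python
# One standard formalisation (an assumption, not the essay's): for a
# non-perishable thing that has survived to age a, the chance it survives
# to time t is (a / t) ** alpha, for t >= a (a Pareto survival curve).

def survival_odds(age: float, horizon: float, alpha: float = 1.0) -> float:
    """Probability of surviving to `horizon`, given survival to `age`."""
    if horizon <= age:
        return 1.0
    return (age / horizon) ** alpha

# The Pyramids (~4,500 years old) vs the Las Vegas Sphere (~1 year old),
# each asked to last another 100 years:
print(round(survival_odds(4500, 4600), 3))  # -> 0.978
print(round(survival_odds(1, 101), 3))      # -> 0.01
```

The precise exponent is arbitrary; the asymmetry is not. Age, on this model, is evidence of robustness.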
The endurance test for philosophical thought experiments is the opposite:
The Negative Lindy Effect: the longer a philosophical thought experiment can survive without presenting in real life to actual people to be tested, the more resilient it will be.
Descartes’ brain in a vat: excellent. Berkeley’s falling tree in a forest: brilliant. (By definition, no one can ever experience it!) Fully untestable.
The Trolley Problem seemed like just such an untestable thought experiment: a subject is presented with a set of alternative known knowns, over a period long enough for her to act with definitive moral clarity (though not to avoid a bad outcome altogether), yet not long enough for her to problem-solve, while no one else with any moral responsibility for the scenario has the practical ability to do anything to prevent it.
Just one person can act; she can only pull a switch, or not pull it; and someone dies either way.
This is about as artificial as a problem could be, isn’t it?
Artificial certainty
Firstly, we are asked to accept that the subject has perfect, absolute knowledge of the present state and the future outcomes on both tracks — that the train is coming, that it cannot be stopped, that the only possible intervention is her points operation, that death on one spur or the other is inevitable — and that no-one else in the scenario with any moral agency has any knowledge, either of the current status or the future potential.
The subject must be certain that the impacted workers will die. How? Can she see them? If so, why can’t she communicate with them? Why can’t they see her?
Why can’t the workers see anything? Why don’t they get off the tracks?
How about the track operator: can she not see what is happening?
Artificial timeframe
In real life, there are usually many “intervention points” before a crisis. When there aren’t, there is no time for abstract moral calculation either.
The Trolley Problem artificially compresses moral deliberation into an instant. It must, to close off opportunities for resourcefulness, problem-solving and communication that would spoil its explanatory force.
The subject must pre-solve a moral dilemma in the abstract but without being allowed to request changes to a necessarily inadequate system to prevent the dilemma from arising in the first place.
This is to impose a simple system on a complex problem. The real world would not tolerate such an approach: if we can foresee an adverse outcome — and we can; we are expostulating about it — we are obliged to fix it. This is a minimum standard of system design. Not redesigning the system is the moral failure, not “pulling or failing to pull a lever”.
If we know there is an inherent “trolley problem” in our system design we cannot wait for a literally foreseen calamity to happen before acting to stop it.
By the way: realising there is a risk and running it anyway is the legal definition of “recklessness”: someone would be guilty of murder in this hypothetical.
Artificial (lack of) social context
The trolley problem also strips away all social and institutional contexts. Why have we designed the system this way? How have we managed to get to this point without anyone foreseeing it?
Trolley problems do not occur in the real world because this social context will prevent them. We build systems to anticipate problems. We encounter design flaws once and solve them. This is the (good) Lindy effect: Bad designs get junked. Good ones stand the test of time.
Artificial moral agency vacuum
By focussing on the switch-puller’s choice, we overlook real moral failures that must have led to the situation. The points operator is, for all intents, a random bystander. She bears no responsibility for the situation — she neither designed the track, nor scheduled the workers, nor trained them, nor did she cause the runaway trolley — yet she alone is expected to be omniscient while everyone else with any part in this incipient tragedy is entirely, and justifiably, inadvertent.
Who designed such an inept system? Who assigned the workers to those dangerous conditions? Who lost control of the trolley? Why weren’t they looking out? Why were there no fail-safes, warning procedures or circuit breakers? Why was the track not designed to prevent accidents like this? Did no one think this could happen?
False paradox
You all know how JC loves a paradox; well, this is a false one. There is no real-world scenario where the trolley problem can play out as a genuine moral conundrum.
| Player | Responsible for | Awareness before breakaway | After breakaway: time to react | After breakaway: no time to react |
|---|---|---|---|---|
| System Designer | System design safety, foreseeable failure modes | Implement fail-safes, emergency brakes, worker safety zones, warning systems | Could activate emergency protocols, backup systems | Too late |
| System Operator | Worker safety, track access, safe rostering, risk assessment | Ensure safe work procedures, worker positioning, track access control, maintenance | Could initiate emergency protocols, worker evacuation, system shutdown | Too late |
| Track workers | Challenging unsafe conditions, being experts, keeping a lookout for danger | Raise issues. Quit! | Evacuate track | Too late |
| Switch Operator | Looking after the switch | Make sure switch is working | Raise alarm. Call for help | Pull switch! |
Of all these moral, practical and systems considerations here — there are a lot — the hapless switch operator’s position is the least vexed. If it gets to the point where the only judgment remaining is “whether or not to throw the switch” and no one else is already fully culpable for putting the switch operator in that position, and if, Q.E.D., the switch operator has no time to think about it either, then it necessarily follows that the switch operator will not be blamed for doing nothing.
This will just be one of those things. Accidents happen. The switch operator did not have the time or information to make the moral calculus to intervene, and realistically never would.
The driverless cars
Applying all this to autonomous cars, we see a false equivalence. Why does the trolley problem not arise, already, for human drivers? Because, unlike AVs, humans aren’t expected to be perfect calculating, rationalising machines. They don’t run on GeForce chips. They cannot flawlessly assimilate, triangulate and arbitrate on terabytes of real-time data within a microsecond — was it a child? A fox? A dog? A balloon? — calculate the brake horsepower, vectors, momentum, apply flawless Rawlsian moral reasoning and communicate the resulting executive decision to the drive train in time to achieve the appropriate outcome.
If a human driver reacted at all, he might blindly slam on the anchors and yank the steering wheel without a conscious thought. Most likely, he will just flatten whatever ran out in front of him and that will be that. Terrible accident. But he can’t be judged, because he lacks the faculties to do any better.
And, realistically, a hyper-intelligent autonomous machine still wouldn’t be able to identify the obstacle, identify the alternatives, evaluate the potential damage for each outcome, assess their relative moral consequences and correctly execute the most morally profitable manoeuvre in the split second available.
So why are we obsessing about autonomous cars? Data suggest that driverless cars are materially safer than human-operated ones. They promise a better set of outcomes. Yet because of our cognitive limitations, humans do not — apparently — suffer from the trolley problem. We are finding a moral conundrum in a fractal segment of spacetime previously inaccessible to moral discourse. By obsessing over hypothetical thought experiments, we delay progress on a statistically better outcome.
Le mieux est l'ennemi du bien.
Too cute
The trolley problem is too cute to be a useful nomological machine in this context, even for philosophers. By pitting dramatically unequal moral outcomes against each other — five deaths here, one there — it makes it easy for philosophers to get worked up, but it exaggerates the already artificial clarity of the dilemma. This is a truly impossible scenario: the ideal negative Lindy effect. Tough ethical choices do not manifest as arithmetic. Moral calculus is hard.
So, let’s rework the dilemma: what if the alternatives were broadly comparable: what if there were one child on each track? You could save this one, at the expense of that one? What if the alternatives weren’t comparable at all? What if one were a girl, the other a boy? What if one were rich, one poor? What if one were from a marginalised community? What if both were, from different marginalised communities? How would we configure that into an algorithm to be operated at a moment’s notice? What if the required moral calculus were genuinely ambiguous?
For surely this is how cases would present in real life: there would be no neat utilitarian computation to put the moral outcome beyond doubt.
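One way to feel the force of this is to try writing the “algorithm” the reworked dilemma demands. A toy sketch (every attribute and weight below is a hypothetical of mine, not anyone’s proposal): whatever answer it gives is just a readout of the weight table someone chose in advance, and with genuinely comparable victims it has nothing to say at all.

```python
# Who decides these weights, and on what authority? The attributes and
# numbers are illustrative assumptions; the point is that someone must
# invent them, and the "moral" output is then an artefact of that table.
WEIGHTS = {"age": -1.0, "wealth": 0.0, "marginalised": 0.0}

def moral_cost(victim: dict) -> float:
    """A spurious 'utilitarian' score: a weighted sum over attributes."""
    return sum(WEIGHTS[k] * victim.get(k, 0) for k in WEIGHTS)

child_a = {"age": 7, "marginalised": 1}
child_b = {"age": 7, "marginalised": 1}

# With comparable victims, every weighting ties. The ambiguity lives in
# the inputs and the weights, not in the arithmetic.
print(moral_cost(child_a) == moral_cost(child_b))  # -> True
```

The arithmetic is trivially well defined; the moral question has simply been relocated into the weight table, unanswered.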
You might say we can still draw useful metaphors from the trolley problem — it is a useful tool for exploring moral intuitions about action versus inaction, and intended versus foreseen consequences. Sure. But there are better ways of doing that: ways that would resonate in real-world scenarios, without absurd mechanical artefacts to distract from the core ethical question.
- ↑ The Problem of Abortion and the Doctrine of the Double Effect (1967)
- ↑ This clockwork world is a beguiling lie, as Nancy Cartwright argues.