Trolley problem
Le mieux est l'ennemi du bien. ("The best is the enemy of the good.")
—Voltaire
A runaway trolley is hurtling down a track toward five workers who will certainly be killed if it continues on its course. You are standing next to a point switch. You can divert the trolley onto a side track, saving the five workers, but only at the certain cost of a sixth, who is working on the spur line. Should you pull the switch to divert the trolley, proactively saving five lives, at the certain cost of one?
Philosophy loves conundrums like the trolley problem. Philippa Foot invented it in the 1960s,[1] as a thought experiment comparing the moral qualities of commission and omission. We might, Foot thought, consider it morally permissible to divert a runaway trolley away from the larger group of workers: the action is one of “steering away from these victims”, and while the death on the other track is foreseeable, it remains an unwanted “collateral” consequence of the action. Pushing an innocent bystander onto the track to stop the trolley reaching the victims is less permissible, even though the outcome would be the same.
This is a narrow point, drawn to illustrate a shortcoming in Roman Catholic moral orthodoxy. It has long since been lost on those who now trot the problem out when holding forth on real, not metaphorical, runaway trolleys: driverless cars.
Driverless cars
What if a child runs out in front of an autonomous vehicle? What if the vehicle can avoid the child, but only at the cost of causing some other damage or injury? How should it be programmed? Where all outcomes are unconscionable, how can the vehicle operator avoid moral culpability?
Here, the trolley problem presents not as a metaphorical analogy but as a live practical dilemma in need of a solution. We are told this scenario could happen, and we are asked to choose, in the abstract and without time pressure, between unconscionable outcome A and unconscionable outcome B, but without recourse to solution C: redesigning the system so that it avoids outcomes A and B altogether. We can see at once that this is an emotive but essentially sterile question to which, by design, there is no good answer. A hypothetical in which someone dies however it pans out holds no moral lesson other than fix it, so there is another outcome. But explicitly we aren’t allowed that choice.
Parallel worlds of science and philosophy
Scientists live in this simple, stable world filled to the brim with countless, benign little engines — “nomological machines” — that they have designed, by which everything reliably works. They can predict the period of a swinging pendulum with Galileo’s equation. They can calculate the pressure and volume of a quantity of gas with Boyle’s law. They can explain the accelerating expansion of the universe with dark matter, dark energy and an arbitrary cosmological constant. [I don’t think this last one is a great example — Ed]
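For what it is worth, the two machines just named need nothing more than a line of arithmetic each. Here is a minimal sketch, assuming the textbook idealisations (a small-angle pendulum, a fixed quantity of ideal gas at constant temperature); the function names are invented for illustration:

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Galileo's machine: period of an idealised simple pendulum.
    Valid only for small swings of a point mass on a massless string --
    the machine works because the messy world has been designed out of it."""
    return 2 * math.pi * math.sqrt(length_m / g)

def boyle_volume(p1_kpa: float, v1_litres: float, p2_kpa: float) -> float:
    """Boyle's machine: at constant temperature, P1 * V1 = P2 * V2,
    so the new volume follows directly from the new pressure."""
    return p1_kpa * v1_litres / p2_kpa

print(pendulum_period(1.0))             # about 2.0 seconds for a one-metre pendulum
print(boyle_volume(100.0, 2.0, 200.0))  # double the pressure, halve the volume: 1.0 litre
```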
In the scientists’ world, everything runs like clockwork.
Philosophers inhabit, more or less, a dark inversion of the scientists’ world. Theirs is, instead, a universe of insoluble conundrum. They also fill their space with ingenious devices, only malicious ones: philosophical contraptions invented specifically to bugger everything up.
In the scientists’ world, everything goes to plan. In the philosophers’ world, nothing does.
For every scientific axiom, whirring happily away in its intricate cage, plotting out the reliable progress of the world, the philosophers have their obstreperous demonic counterpoints: here, a brain in a vat, there, a Chinese room, over yonder, a parallel universe — designed to throw the whole edifice into confusion. This ideal mendacious world affords philosophers a licence to sit around arguing why the hell nothing works. This is why they turn up.
Negative-Lindy effect
Just as science is designed to explain the world, philosophy is designed to confuse it. Therefore philosophical problems must have the unalterable property of being entirely hypothetical.
If a philosophical conundrum ever actually cropped up in real life, someone would have to solve it. This would spoil it: a solved thought experiment is like a bottle already drunk; all the fun has gone out of it, and it can’t teach you anything. The best way of resisting solution by practical people is, naturally, not to show up in the first place: to make your exclusive home in the minds of academic philosophers. To have lasting value, therefore, a philosophical thought experiment must have some kind of negative “Lindy effect”.
The real Lindy effect is a measure of antifragility: things that have been “road tested” — that have taken a beating at the hands of children, animals and practical people over time and are still working — are less likely to break in the future. If they had a critical weakness, someone would have found it by now. Untested, new things are more likely to break: we don’t yet know whether they are fragile. If you had to bet on which will last longer, would you pick the Pyramids of Giza or the Las Vegas Sphere?
The endurance test for philosophical thought experiments is the opposite:
The Negative Lindy Effect: the longer a philosophical thought experiment can survive without presenting in real life to actual people to be tested, the more resilient it will be.
Descartes’ evil demon, since re-potted as a brain in a vat. Excellent. Berkeley’s falling tree in a forest. Brilliant. Fully untestable.
The Trolley Problem seemed like just such an excellent thought experiment: what kind of real-world event could ever have all the following qualities? A set of determined future outcomes, unfolding over a period long enough for the subject to act definitively, with moral clarity, in a way that is partly but not fully remedial, yet not quite long enough for her to problem-solve, engineer a superior outcome or simply warn anyone; and, all the while, no supervening moral agency from anyone else involved in the scenario.
Just one person can act, and they can only pull a switch, or not pull it.
This is about as artificial as a problem could be.
Artificial certainty
Firstly, we are asked to accept that the subject has perfect, absolute knowledge of the present state and the future outcomes on both tracks — that the train is coming, that it cannot be stopped, that the only possible intervention is her points operation, that death on one spur or the other is inevitable — and that no-one else in the scenario with any moral agency has any knowledge, either of the current status or the future potential.
The subject must be certain that the impacted workers will die. How? Can she see them? If so, why can’t she communicate with them? Why can’t they see her?
Why can’t the workers see anything? Why don’t they get off the tracks?
How about the track operator: can she not see what is happening?
Artificial timeframe
In real life there are usually many “intervention points” before a crisis. When there aren’t, there will be no time for abstract moral calculation either.
The Trolley Problem artificially compresses moral deliberation into an instant. It must, to close off opportunities for resourcefulness, problem-solving and communication that would spoil its explanatory force.
The subject must pre-solve a moral dilemma in the abstract, without being allowed to request changes to a necessarily inadequate system that would prevent the dilemma from arising in the first place.
This is to impose a simple system on a complex problem. The real world would not tolerate such an approach: if we can foresee an adverse outcome — and we can; we are expostulating about it — we are obliged to fix it. This is a minimum standard of system design. Not redesigning the system is the moral failure, not “pulling or failing to pull a lever”.
If we know there is an inherent “trolley problem” in our system design we cannot wait for a literally foreseen calamity to happen before acting to stop it.
Realising there is a risk and running it anyway is the definition of “recklessness”: someone would be guilty of murder in this hypothetical.
Artificial (lack of) social context
The trolley problem also strips away all social and institutional contexts. Why have we designed the system this way? How have we managed to get to this point without anyone foreseeing it?
Trolley problems do not occur in the real world because this social context will intervene to prevent them from arising. We build systems to anticipate problems. We encounter design flaws once and solve them. This is the (good) Lindy effect: Bad designs get junked. Good ones stand the test of time.
Artificial moral agency vacuum
By being made to focus on the switch-puller’s choice, we are forced to overlook real moral failures that led to the situation. The points operator is, by comparison, a random bystander. Despite bearing no responsibility for the situation — she neither designed the track, nor operated it, nor scheduled the workers, nor trained them, nor did she cause the runaway trolley — the poor points operator is expected to be omniscient while everyone else with any part in this incipient tragedy is entirely, and justifiably, inadvertent.
Who designed such an inept system? Who assigned the workers to those dangerous conditions? Who lost control of the trolley? Why weren’t they looking out? Why were there no fail-safes, warning procedures or circuit breakers? Why was the track not designed to prevent accidents like this? Did no one think this could happen?
False paradox
You all know how JC loves a paradox; well, this is a false one. There is no real-world scenario where the trolley problem can play out as a genuine moral conundrum. Consider everyone with a hand in the scenario, what each is responsible for, and when each could still act:
| Player | Responsible for | Before breakaway | After breakaway, time to react | After breakaway, no time to react |
|---|---|---|---|---|
| System Designer | System design safety, foreseeable failure modes | Could implement: fail-safes, emergency brakes, worker safety zones, warning systems | Could activate: emergency protocols, backup systems | Too late |
| System Operator | Worker safety, track access, safe rostering, risk assessment | Could ensure: safe work procedures, worker positioning, track access control, maintenance schedule | Could initiate: emergency protocols, worker evacuation, system shutdown | Too late |
| Track workers | Challenging unsafe conditions, being experts, keeping a lookout for danger | Raise issues. Quit! | Evacuate track | Pray |
| Switch Operator | Looking after his switch | Make sure the switch is working | Call for help | Pull switch! |
Of all these moral, practical and systems considerations here — there are a lot — the hapless switch operator’s position is the least vexed. If it gets to the point where the only judgment remaining is “whether or not to throw the switch” and no one else is already fully culpable for putting the switch operator in that position, and if, Q.E.D., the switch operator has no time to think about it either, then it necessarily follows that the switch operator will not be blamed for doing nothing.
This will just be one of those things. Shit happens. Accidents happen. The switch operator did not have the time or information to make the moral calculus to intervene, and realistically never would.
The driverless cars
Applying all this to autonomous cars, we see a false equivalence. Why does the trolley problem not already arise for human drivers? Because, unlike Teslas, humans aren’t perfect rationalising machines. They don’t run on GeForce chips. They cannot flawlessly assimilate, triangulate and arbitrate terabytes of real-time data within a microsecond — was it a child? A fox? A dog? A balloon? — calculate the brake horsepower, vectors, momentum and a flawless Rawlsian moral outcome, and communicate their executive decision to the drive train in time to vouchsafe the appropriate outcome.
If a human driver reacted at all, he might blindly stomp on the anchors without a conscious thought. Most likely he would just flatten whatever ran out in front of him, and that would be that. A terrible accident, but he can’t be judged, because he lacked the faculties to do any better.
And, realistically, a hyper-intelligent autonomous machine still wouldn’t be able to identify the obstacle, identify the alternatives, evaluate the potential damage of each outcome, assess their relative moral consequences and correctly execute the most morally profitable manoeuvre in the split second available.
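To see why, consider a hedged sketch of what that split-second pipeline would have to do. Every name and number below is hypothetical; the placeholder stages exist only so the sketch runs, and the interesting stage, the moral scoring, is precisely the one for which there is no agreed specification:

```python
import time

REACTION_BUDGET_S = 0.3  # illustrative reaction window, not a real vehicle spec

def classify(frame: dict) -> str:
    """Placeholder perception stage: child? fox? dog? balloon?"""
    return frame.get("object", "unknown")

def enumerate_manoeuvres(obstacle: str) -> list[str]:
    """Placeholder planning stage: the options physics still allows."""
    return ["brake", "swerve_left", "swerve_right"]

def moral_score(option: str) -> float:
    """The stage with no agreed specification: every option scores the
    same here, because nobody can say what the function should be."""
    return 0.0

def choose_manoeuvre(sensor_frame: dict) -> str:
    """Hypothetical decision pipeline: perceive, plan, morally evaluate,
    actuate -- all inside the reaction window, or not at all."""
    start = time.monotonic()
    obstacle = classify(sensor_frame)
    options = enumerate_manoeuvres(obstacle)
    best = max(options, key=moral_score)  # ties resolved arbitrarily
    if time.monotonic() - start > REACTION_BUDGET_S:
        return "brake"  # out of time: a reflex default, not a moral deliberation
    return best

print(choose_manoeuvre({"object": "unknown"}))  # -> "brake": a tie, so the first option wins
```

The sketch is not an argument that the engineering is impossible; it illustrates that the step philosophers care about is the one the reaction window leaves no room for.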
So why are we obsessing about autonomous cars? Behold a paradox: the data strongly suggest that driverless cars are materially safer than human-operated ones. They promise a better set of outcomes than the ones we presently tolerate. Yet, because of their cognitive limitations, human drivers do not — apparently — suffer from the trolley problem. We are finding a moral conundrum in a fractal segment of spacetime previously inaccessible to moral discourse. By obsessing over hypothetical thought experiments we delay progress on a statistically better outcome.
Le mieux est l'ennemi du bien.
Too cute
The trolley problem is too cute to be a useful nomological machine in this context, even for philosophers. By pitting dramatically unequal moral outcomes against each other — five deaths here, one there — it makes it easy for philosophers to get worked up, but it exaggerates the already artificial clarity of the dilemma. This is a truly impossible scenario: the ideal negative Lindy effect. Tough ethical choices do not manifest as arithmetic. Moral calculus is hard.
So, let’s rework the dilemma: what if the alternatives were broadly comparable? What if there were one child on each track, and you could save this one only at the expense of that one? What if the alternatives weren’t comparable at all? What if one were a girl, the other a boy? What if one were rich, one poor? What if one were from a marginalised community? What if both were, each from a different marginalised community? How would we configure that into an algorithm to be operated at a moment’s notice? What if the required moral calculus were genuinely ambiguous?
For surely these cases would greatly outnumber those where a neat utilitarian computation puts the moral outcome beyond doubt.
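To make that concrete, here is a deliberately naive sketch of the only comparator a utilitarian algorithm can actually run; all names are hypothetical, and the point is that the arithmetic answers the easy textbook case and falls silent on the reworked one:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    description: str
    lives_lost: int  # the only thing a utilitarian comparator can see

def preferred(a: Outcome, b: Outcome) -> Optional[Outcome]:
    """Pick whichever outcome costs fewer lives; return None on a tie,
    which is exactly the case where the moral question actually bites."""
    if a.lives_lost < b.lives_lost:
        return a
    if b.lives_lost < a.lives_lost:
        return b
    return None  # incommensurable on this metric: the algorithm has nothing to say

# The textbook case computes neatly...
print(preferred(Outcome("stay the course", 5), Outcome("divert to the spur", 1)))
# ...the reworked case does not.
print(preferred(Outcome("this child", 1), Outcome("that child", 1)))  # -> None
```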
You might say we can draw useful metaphors from the trolley problem: that it is a useful tool for exploring moral intuitions about action versus inaction, and intended versus foreseen consequences. Sure. But there are better ways of doing that, ways that would resonate in real-world scenarios, without absurd mechanical artefacts to distract from the core ethical question.
[1] Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect” (1967).