Trolley problem
A runaway trolley is hurtling down a track toward five people who will certainly be killed if it continues on its course. You are standing next to a point switch. You can divert the trolley onto a side track, saving the five people, but only at the cost of a different person who is on the spur line and would be killed instead. Should you pull the switch to divert the trolley, actively choosing to cause one death to prevent five deaths?
The famous trolley problem is widely cited as the key philosophical conundrum behind driverless cars as a technology. How do you programme your vehicle in advance to manage the kinds of moral quandary the trolley problem presents? What if a child runs out in front of a robo-car? Should the car swerve to avoid a child at the expense of an adult?
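To see what that programming question implies, consider the decision rule the trolley framing asks a robo-car to carry: a fixed, advance ranking of whose life to prefer. The sketch below is purely hypothetical; the categories and weights are inventions for illustration, not anyone's actual autopilot logic.

```python
# Hypothetical sketch of the decision rule the trolley framing implies a
# robo-car must carry: a fixed, advance ranking of whose life to prefer.
# The categories and weights here are invented for illustration only.

HARM_WEIGHT = {"child": 1.0, "adult": 0.8}  # an invented moral ranking

def steer(straight_ahead: str, swerve_target: str) -> str:
    """Swerve if the pre-assigned harm of continuing exceeds the
    pre-assigned harm of swerving. Note what the framing assumes:
    both outcomes are certain, instantaneous and binary."""
    if HARM_WEIGHT[straight_ahead] > HARM_WEIGHT[swerve_target]:
        return "swerve"
    return "continue"

print(steer(straight_ahead="child", swerve_target="adult"))  # swerve
```

A rule like this only works if the world presents itself in exactly those terms, which is the artificiality the rest of this article picks apart.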
It always struck JC as a highly artificial hypothetical. What kind of real-world event has such a clearly deterministic outcome, over an event horizon just long enough for one to take definitive action that is partially, but not fully, remedial with moral clarity, yet not long enough to contrive a different solution, warn anyone, or avert the outcome some other way?
Artificial certainty
The trolley problem assumes there is perfect, certain knowledge of the outcomes on both tracks — that is, the only possible variable is the operation of the points — and that this is a binary choice with no middle ground. There is absolute certainty that the affected victims will not survive, which makes us wonder: what are they doing on the train tracks? Are they tied there? In which case, why are we obsessing about this plainly second-order moral dilemma? If they are not tied there, to what extent are they able to get off the tracks? Or culpable for being on them? Who hangs out on train tracks?
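A minimal sketch of how fragile that certainty is: replace the assumed-certain outcomes with probabilities (all of the figures below are invented for illustration) and the tidy one-versus-five arithmetic dissolves.

```python
# Relax the problem's assumed certainty and the one-versus-five
# arithmetic stops being obvious. All probabilities are invented.

def expected_deaths(n_people: int, p_escape: float, p_fatal: float) -> float:
    """Expected deaths if the trolley heads at n_people, each of whom
    escapes with probability p_escape and, failing that, is killed
    with probability p_fatal."""
    return n_people * (1 - p_escape) * p_fatal

# The canonical setup: certainty on both branches. Pulling "saves four".
print(expected_deaths(5, p_escape=0.0, p_fatal=1.0))  # 5.0
print(expected_deaths(1, p_escape=0.0, p_fatal=1.0))  # 1.0

# A less artificial version: untied people mostly get clear of the main
# line, while the person on the quiet spur is caught unawares.
print(expected_deaths(5, p_escape=0.9, p_fatal=0.5))  # 0.25
print(expected_deaths(1, p_escape=0.2, p_fatal=0.5))  # 0.4 -> pulling now looks worse
```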
Artificial timeframe
The problem artificially compresses moral deliberation into an instant (to close off potential problem-solving that would spoil the conundrum): it asks you to pre-assess the moral calculation in the abstract, without being able to change the system. But if we know in advance there is an inherent “trolley problem”, we should not wait for a plainly foreseeable calamity and then have to make a difficult decision. We should redesign the system with fail-safes now. Not to redesign the system is the moral failure.
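To make that concrete, here is a hypothetical fail-safe of the kind a redesign might add: an interlock that stops the trolley whenever any reachable section is occupied, so the points dilemma never presents itself. The sensors and names are inventions for illustration.

```python
# A hypothetical design-time fail-safe: if any reachable track section
# is occupied, stop the trolley outright rather than choose whom to hit.
# The occupancy sensor and names are invented for illustration.

from dataclasses import dataclass

@dataclass
class TrackSection:
    name: str
    occupied: bool  # reading from a hypothetical occupancy sensor

def interlock(sections: list[TrackSection]) -> str:
    """Return the safe action: brake if anyone is on any section."""
    if any(s.occupied for s in sections):
        return "EMERGENCY_BRAKE"
    return "PROCEED"

print(interlock([TrackSection("main", True), TrackSection("spur", False)]))
# EMERGENCY_BRAKE: the dilemma is designed out, not decided at the lever
```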
If we allow the time-frame to be more realistic, there will be time to problem-solve, to warn victims, or to contrive other outcomes. In real life there are usually multiple intervention points before a crisis. Where there is not, neither is there time for abstract moral calculation.
Artificial (lack of) social context
The scenario strips away all social and institutional context: why have we designed the system this way? How have we managed to get to this crisis point without anyone foreseeing it? The reason trolley problems are not generally prevalent is that, in the real world, this social context intervenes to prevent the problems arising in the first place.
Artificial moral agency
It reduces moral agency to a single binary choice, pretending that none of the many humans necessarily involved has the practical agency to foresee and anticipate events. So while the subject of the thought experiment is apparently omniscient, everyone else is expected to be entirely ignorant of the scenario going down, and therefore unable to take any steps to prevent it.
The trolley problem assumes the “points operator” definitely knows what is ahead on the tracks on either side, and what the outcome of his actions will necessarily be, while no-one else knows anything.
It’s a false paradox: those things cannot be known in advance, only in retrospect. The humans on either side of the track are autonomous agents, as are the passengers on the train. Given their state of knowledge, their actions cannot be predicted by some kind of expected-value theory. And if, at the time, you do know of future events, there are almost certainly things you can do to change them.
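As a toy illustration of that last point, suppose the lone person on the spur is an autonomous agent who may notice the diverted trolley and get clear. Then even the “certain” single death behind the lever-pull is a distribution, not a number knowable in advance (all reaction probabilities below are invented).

```python
import random

# Toy simulation: once the person on the spur can notice and react, the
# "outcome" of pulling the lever is a distribution, not a known value.
# All reaction probabilities are invented for illustration.

def simulate_pull(trials: int = 100_000, p_notices: float = 0.6,
                  p_escapes_if_noticed: float = 0.8) -> float:
    deaths = 0
    for _ in range(trials):
        noticed = random.random() < p_notices
        escaped = noticed and random.random() < p_escapes_if_noticed
        deaths += 0 if escaped else 1
    return deaths / trials

print(f"average deaths per lever-pull: {simulate_pull():.2f}")  # ~0.52, not 1.0
```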