Trolley problem

A runaway trolley is hurtling down a track toward five workers who will certainly be killed if it continues on its course. You are standing next to a point switch. You can divert the trolley onto a side track, saving the five workers, but only at the cost of a sixth person working on the spur line. Should you pull the switch to divert the trolley, actively choosing to save five lives, but at the certain cost of one?

Philosophy students love the trolley problem. Though introduced in the sixties as a thought experiment about some Roman Catholic moral principle, these days it gets great play when we think about real runaway trams, not metaphorical ones — driverless cars. This intuition pump now functions in the public mind as some kind of live moral dilemma, to be solved (and how could it be solved? It is designed to be insoluble!) before driverless cars can be more widely mandated.

What if a child runs out in front of a robo-car, inside the total stopping distance for the car? What should the robot do? Should it swerve to avoid the child at the expense of oncoming vehicles? Or carry on, possibly killing the child?

How should we configure this potentially horrific technology? Can we — should we — programme your vehicle in advance to manage the kinds of moral quandary the trolley problem presents?  


Look: there are plenty of logistical problems and risk allocation issues to be solved before autonomous vehicles can go mainstream for sure, but this ''highly'' artificial hypothetical is not one of them.  This is a ''thought experiment'' dreamed up for a philosophy journal. It is not meant to address real-world problems.


====Parallel worlds of science and philosophy====
{{drop|B|y design, philosophy}} professors live in a world entirely disconnected from the actual problems of real human beings. It is instructive to compare the world philosophers live in with the one scientists inhabit.
Scientists live in a simple, stable world filled with hundreds of benign little [[nomological machine]]s they have designed, and thanks to which everything reliably ''works'': thanks to the pendulum equation, t = 2π√(l/g), we can predict the period of a swinging object; thanks to Boyle’s law, we can calculate the pressure and volume of a quantity of gas.
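To make the contrast concrete, a quick worked example (the figures are chosen purely for illustration): take a one-metre pendulum and g ≈ 9.8 m/s². Then t = 2π√(1/9.8) ≈ 2.0 seconds, and the pendulum will dutifully tick away at about that period every time you set it swinging. Likewise Boyle’s law: for a fixed quantity of gas at constant temperature, p₁V₁ = p₂V₂, so halve the volume and the pressure doubles. The machines do what they say on the tin.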
Philosophers inhabit, more or less, a dark inversion of this world: it is also filled with ingenious [[nomological machine|nomological machines]], only they are malicious: they exist — philosophers invent them — ''to bugger everything up'' at the merest opportunity. In the philosophers’ world, ''nothing'' works.
For each of the scientists’ happy heuristics that illustrate how the world works, the philosophers have their own intricately-crafted intellectual contraptions — here, a ''Brain in a Vat'', there, a ''Chinese Room'', over yonder, a ''Parallel Universe'' — designed to illustrate how, really, it doesn’t. A world where nothing works is an excellent one for philosophers because it means they can sit around arguing ''why not'' and they never have to get on and ''do'' anything. 
It is a world of imaginary problems. Like a, well, parallel universe. Philosophers therefore occupy themselves with rarefied, nomological problems that suit that ascetic vibe.
====Negative-Lindy effect====
{{drop|I|f any of}} these philosophical conundrums actually happened in real life, real humans would quickly solve them. For something to have lasting value as a philosophy experiment, therefore, it must have some kind of ''negative'' “[[Lindy effect]]”.


{{quote|The Negative Lindy Effect: A good test of the quality and resilience of a philosophical thought experiment is how long it can survive in the literature ''without'' ever presenting as a genuine problem in real life to actual people.}}


The real Lindy effect: things that have stood the test of time are more likely to continue doing so, while new things are more fragile and likely to become obsolete. The philosophical effect is the opposite: the longer a thought experiment endures without ever actually happening, the surer we can be that it never will. By that standard, Plato’s cave is brilliant.


The Trolley Problem is just such an excellent philosophical thought experiment. What kind of real-world event has so deterministic an outcome, over a time horizon just long enough for a subject to take action that is partly ''but not fully'' remedial, and with total moral clarity, yet not long enough to contrive an alternative solution, warn anyone, avert the outcome some other way or, failing all that, at least communicate with the putative victims, in a scenario in which no one else has any overriding moral agency?
 
=====Artificial certainty=====
The trolley problem assumes that the subject has perfect, certain knowledge of both the present state and the future outcomes on both tracks — that the trolley is coming, that it cannot be stopped, that the only possible variable is the operation of the points, and that death on one spur or the other is certain — but no-one else in the scenario has ''any'' knowledge, either of the current state of affairs or of what is about to happen.


The subject is absolutely certain the affected victims will not survive. ''How''? Can she see them? If so, why can’t she communicate with them? If she can see them, can they not see her? And the on-rushing trolley?


And ''what are they doing on the train tracks'' in the first place? Are they ''tied down'' there? If so, why are we obsessing about this plainly second-order moral dilemma of the points operator? Should we not be interrogating whoever tied them down? If they are ''not'' tied down, then ''what the hell are they doing''? Can they not pay attention to their own environment? Can they not optimise their ''own'' well-being? Why can’t they get off the tracks? ''Who hangs out on train tracks?'' If they are rail workers, have they not had any HSE training?


So, while the subject of the thought experiment is omniscient, necessarily everyone else is ''entirely ignorant'' of every aspect of the scenario going down, and therefore unable to do anything to prevent it.


=====Artificial timeframe=====
In real life, there will usually be many “intervention points” before a crisis. Where there are not, there will not be time for abstract moral calculation either.


The Trolley Problem artificially compresses moral deliberation into ''an instant'', closing off the opportunities for problem-solving and communication that would spoil it. It is a hypothetical: it requires the subject to pre-solve a fantastical moral dilemma in the abstract, before it happens, without being allowed to ''change the system to prevent the dilemma arising in the first place''. This is to impose a simple model on a complex problem. The real world is not so bone-headed as to foresee problems and not solve for them in system design.


If we know in advance we have an inherent “trolley problem”, we should not wait for a ''literally'' foreseen calamity to happen and only then make a difficult decision. ([[Legal eagle|Legal eagles]]: realising there is a risk and running it anyway is the legal definition of “[[Reckless|recklessness]]”: someone ''would'' be guilty of murder in this hypothetical.) We should re-design the system with fail-safes now. Not redesigning the system is the moral failure, not “pulling or failing to pull a lever”.


If the time-frame were more realistic, there would be time to problem-solve, warn victims, contrive other outcomes or just ''communicate''.


=====Artificial (lack of) social context=====
The scenario strips away all social and institutional context: Why have we designed the system this way? How have we managed to get to this crisis point without anyone foreseeing it?  


The reason trolley problems are not prevalent is that, in the real world, this social context intervenes to prevent them arising in the first place. We build systems to anticipate problems. We encounter design flaws once, and solve them. This is the Lindy effect in action: bad design gets junked.


=====Artificial moral agency vacuum=====
The problem reduces moral agency to a single binary choice. It pretends that the many humans necessarily involved have no practical agency to foresee and anticipate events. It asks us to exclude or ignore unstated conditions of far greater moral consequence than the question at hand. Who designed such an inept system? Who tied the prisoners down? Who lost control of the trolley? Our subject is a random bystander with, [[Q.E.D.]], no responsibility for how the situation arose, and for the thought experiment to work we are expected to accept that ''no-one else has any moral agency either''.


====False paradox====
{{drop|Y|ou all know}} how JC loves a [[paradox]]; well, this is a false one. There is no real-world scenario where the trolley problem can play out as a genuine moral conundrum. I feel a table coming on, readers.
{{tabletopflex|100}}
|+ Who could have done what, and when
{{aligntop}}
! rowspan="3" | Player !! rowspan="3" | Responsible for !! colspan="3" | Time of awareness
|-
! rowspan="2" | Before breakaway !! colspan="2" | After breakaway
{{aligntop}}
! Time to react !! No time to react
{{aligntop}}
| System designer || System design safety, foreseeable failure modes || Could implement: fail-safes, emergency brakes, worker safety zones, warning systems || Could activate: emergency protocols, backup systems || Too late
{{aligntop}}
| System operator || Worker safety, track access, safe rostering, risk assessment || Could ensure: safe work procedures, worker positioning, track access control, maintenance schedules || Could initiate: emergency protocols, worker evacuation, system shutdown || Too late
{{aligntop}}
| Track workers || Challenging unsafe conditions, being expert, keeping a lookout for danger || Raise issues. Quit! || Evacuate track || Too late
{{aligntop}}
| Switch operator || Looking after his switch || Make sure the switch is working. || Call for help! || Pull switch!
|}
</div>


Those things cannot be known in advance: only in retrospect. The humans on either side of the track are autonomous agents, as are the passengers on the trolley. Given their state of knowledge, their actions cannot be predicted with some kind of expected value theory. And if, at the time, you really do know of the future events, there are almost certainly things you can do to change them.
