The Trolley Problem and Autonomous Vehicles

Stephen Ragan
3 min read · Feb 19, 2020

Imagine you are flying down the autobahn, wind in your hair and the landscape a passing blur, while deep house music whips you to unimaginably legal speeds in an exquisitely crafted metal box of the finest German precision, when suddenly a sweet, bulging-eyed, adorable baby deer wobbles out in its finest Bambi impersonation.

You have the option of smashing through this adorable new life form, swerving into the slow-moving (a relative term) traffic in the right lane, or veering left into the dividing median.

This is a formulation of the classic trolley problem, attributed to a number of philosophers and presented in a variety of forms to test intuitions about an individual’s ethical thinking. In its original formulation, a runaway trolley is heading down a set of train tracks toward five people. You find yourself in the unenviable position of being able to pull a lever that switches the trolley onto another track. However, you notice there is also a person on that track. Now you must decide: pull the lever and sacrifice one to save five, or leave it all up to fate.

This problem has enjoyed a sort of renaissance in recent times because development in autonomous vehicles has progressed beyond mere theory. The issue is that, in a given situation, engineers might be required to program a car to make such a decision. This presents the strange dilemma of philosophy making an actual contribution to real life. A shocking proposition.

More likely, though, the application of the trolley problem to autonomous driving is a red herring, and the notion of a single necessary cause is illusory, as Hume pointed out. What exists instead is a sequence of events, none of which is dispositive in the grand narrative of consequences.

To make this clearer, stretch the timeline of consideration: the real value of automation lies in avoiding zones of high risk altogether, eliminating the moment of decisive action outlined in the first example. Automated vehicles will be programmed to drive conservatively and will interact with other automated vehicles in the surrounding area, making coordinated decisions through real-time transfers of information, as the sketch below illustrates.
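To make that idea concrete, here is a minimal Python sketch of a vehicle pre-emptively slowing down based on hazard reports broadcast by vehicles ahead of it. Everything here is a hypothetical illustration: the HazardReport message format, the plan_speed policy, and the specific numbers are assumptions for the sake of the example, not any manufacturer’s actual protocol.

```python
from dataclasses import dataclass

@dataclass
class HazardReport:
    """A hypothetical V2V broadcast: where a hazard is, and how severe it is."""
    position_m: float   # distance ahead along the road, in meters
    severity: float     # 0 = negligible, 1 = certain collision risk

def plan_speed(current_speed_mps: float,
               reports: list[HazardReport],
               horizon_m: float = 500.0) -> float:
    """Conservatively reduce speed based on hazards reported within the horizon.

    An illustrative sketch, not a production controller: real systems rely on
    sensor fusion, formal safety envelopes, and certified motion planners.
    """
    nearby = [r for r in reports if r.position_m <= horizon_m]
    if not nearby:
        return current_speed_mps
    # Scale speed down by the worst reported severity ahead,
    # never dropping below a slow crawl.
    worst = max(r.severity for r in nearby)
    return max(2.0, current_speed_mps * (1.0 - worst))

# Example: two vehicles ahead report a deer near the shoulder.
reports = [HazardReport(position_m=220.0, severity=0.6),
           HazardReport(position_m=260.0, severity=0.4)]
print(plan_speed(38.0, reports))  # 15.2 m/s: slowing well before the hazard
```

The point of the design is that the decision happens hundreds of meters early, on shared information, so the split-second trolley-style dilemma never arises.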

What level of risk reduction is sufficient to result in wide-scale adoption?

Automation raises interesting questions about rationality and perceptions of risk. How much of a reduction in the risk of an accident would be necessary for you to favor the adoption of autonomous driving? Intuitively, the expectation seems to be that machines will be infallible. But that’s impossible; computers fail all the time. The real measure should be relative: if automated vehicles perform even marginally better than humans, lives are saved and widespread adoption becomes a moral imperative. Unfortunately, the media has a field day every time the technology fails, because sensational headlines like “Self-driving Uber kills Arizona woman” or “Tesla driver dies in first fatal crash” attract attention and portend the end times, an idea that has for centuries been commoditized in various forms of religious fervor.
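To see why “marginally better” matters at scale, here is a quick back-of-the-envelope calculation in Python. The human fatality rate (roughly 1.1 deaths per 100 million vehicle miles) and annual mileage (roughly 3.2 trillion miles) are in the ballpark of recent US figures; the automated-vehicle rate is a purely hypothetical assumption for illustration.

```python
# Back-of-the-envelope comparison of human vs. automated fatality rates.
HUMAN_DEATHS_PER_100M_MILES = 1.1   # approximate recent US baseline
AV_DEATHS_PER_100M_MILES = 1.0      # hypothetical: ~10% better than humans
US_ANNUAL_VEHICLE_MILES = 3.2e12    # roughly 3.2 trillion miles per year

def annual_deaths(rate_per_100m_miles: float, miles: float) -> float:
    """Expected fatalities per year at a given rate and total mileage."""
    return rate_per_100m_miles * miles / 100e6

human = annual_deaths(HUMAN_DEATHS_PER_100M_MILES, US_ANNUAL_VEHICLE_MILES)
av = annual_deaths(AV_DEATHS_PER_100M_MILES, US_ANNUAL_VEHICLE_MILES)
print(f"Lives saved per year by a marginal improvement: {human - av:,.0f}")
# Under these assumed inputs, a ~10% improvement saves about 3,200 lives a year.
```

Even under these rough assumptions, a modest relative improvement translates into thousands of lives per year, which is the point the headlines obscure.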

The real point is that any reduction in risk beyond the threshold of human capability is the fundamental metric. Automated vehicles will avoid the trolley problem by avoiding risky behavior in the first place, making such situations a relic, like the antiquated idea of human drivers.
