The Trolley Problem & Self-Driving Cars
The trolley problem, devised by philosopher Philippa Foot, is a thought experiment: a runaway trolley is headed toward five people tied to the track. You can pull a lever to switch it to a side track, where it will kill one person. Is it morally permissible to pull the lever? A related variant, the “fat man” (or footbridge) problem introduced by Judith Jarvis Thomson, asks whether you would push a large man off a bridge to stop the trolley, killing him to save the five. These dilemmas probe the distinction between killing and letting die, and between utilitarian calculation and deontological constraints against using a person as a mere means. For decades, the trolley problem remained an abstract exercise in moral philosophy.
The development of autonomous vehicles (AVs) makes it terrifyingly concrete. An AV’s software must be programmed in advance to respond to unavoidable accident scenarios. If a child suddenly darts into the road and braking cannot prevent a collision, should the car swerve? What if swerving means hitting a concrete barrier and killing the car’s sole occupant? What if it means veering onto a sidewalk occupied by three pedestrians? This is no longer a philosopher’s puzzle; it is a software specification.
The utilitarian calculus (“minimize total harm”) seems the obvious engineering solution. A car programmed on strict utilitarian grounds would sacrifice its own occupant if that action resulted in fewer total deaths. But what customer would buy a car programmed to potentially kill them for the greater good? This highlights the difference between ethics for actors (what should I, as the driver, do?) and ethics for planners (what rules should we, as a society, encode in machines we deploy?).
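To make the planners’ question concrete, here is a minimal sketch of trajectory selection under a weighted harm objective. It is invented for this article, not any manufacturer’s actual logic: the Trajectory class, the choose_trajectory function, and every harm number are assumptions. The point it illustrates is that the entire ethical dispute collapses into a single weighting parameter.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    """A candidate maneuver in an unavoidable-collision scenario (hypothetical model)."""
    label: str
    expected_occupant_harm: float   # expected harm to people inside the vehicle
    expected_external_harm: float   # expected harm to people outside the vehicle


def choose_trajectory(options, occupant_weight=1.0):
    """Return the trajectory with the lowest weighted expected harm.

    occupant_weight = 1.0 encodes a strictly utilitarian rule (every life counts equally);
    occupant_weight > 1.0 encodes an occupant-protective rule. The ethical dispute in the
    text reduces to the value of this one parameter.
    """
    def weighted_harm(t):
        return occupant_weight * t.expected_occupant_harm + t.expected_external_harm
    return min(options, key=weighted_harm)


# The scenario from the text: brake and hit the child, swerve into a barrier and kill
# the occupant, or swerve onto a sidewalk with three pedestrians. Numbers are invented.
options = [
    Trajectory("brake", expected_occupant_harm=0.1, expected_external_harm=0.9),
    Trajectory("swerve_into_barrier", expected_occupant_harm=0.95, expected_external_harm=0.0),
    Trajectory("swerve_onto_sidewalk", expected_occupant_harm=0.05, expected_external_harm=2.4),
]

print(choose_trajectory(options, occupant_weight=1.0).label)   # utilitarian: sacrifices the occupant
print(choose_trajectory(options, occupant_weight=10.0).label)  # occupant-protective: brakes instead
```

With equal weighting the car sacrifices its occupant; with a modest occupant bias it does not. The “socially optimal versus marketable” dilemma is literally a constant in a configuration file.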
The legal and regulatory implications are immense. If a utilitarian AV causes a death, who is responsible? The manufacturer who programmed the algorithm? The regulatory body that approved it? The “reasonable person” standard in tort law offers little guidance: there is no human driver whose conduct can be judged. Product liability law would need to adjudicate whether the algorithm’s decision logic was “defective.” Legislators may be forced to choose between mandating a utilitarian calculus (socially optimal but commercially unviable) and an occupant-protective one (marketable but potentially causing more net harm).
Furthermore, the algorithms will likely need to make split-second classifications (is the object on the side of the road a plastic bag or a child?) that embed biases from their training data. The trolley problem thus evolves from a moral logic game into a series of brutal technical and legal challenges: how to encode unavoidable harm-minimization in a way that is morally defensible, legally assignable, publicly acceptable, and commercially feasible.
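The classification worry can be sketched the same way: a perception model’s class probabilities feed directly into the expected-harm estimate, so any skew in the training data skews the downstream decision. The class names, probabilities, and harm values below are hypothetical, chosen only to show the mechanism.

```python
# Expected harm of striking an ambiguous roadside object, given a perception model's
# class probabilities. Classes, probabilities, and harm values are invented.
HARM_IF_STRUCK = {"pedestrian": 1.0, "cyclist": 0.9, "plastic_bag": 0.0}


def expected_harm(class_probabilities):
    """Weight the harm of striking each possible class by the model's confidence in it."""
    return sum(p * HARM_IF_STRUCK[cls] for cls, p in class_probabilities.items())


# The same object, scored by two hypothetical models trained on different data.
model_a = {"pedestrian": 0.70, "plastic_bag": 0.30}   # training data rich in small pedestrians
model_b = {"pedestrian": 0.20, "plastic_bag": 0.80}   # training data where they are rare

print(expected_harm(model_a))  # 0.7 -> a harm-minimizing planner would likely swerve
print(expected_harm(model_b))  # 0.2 -> it might not swerve at all
```

Two models looking at the same child can produce opposite maneuvers, and the difference traces back to nothing more than what the training set happened to contain.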
It forces the realization that ethics can no longer be an afterthought in engineering; it must be a primary design constraint.