“Whom should I kill?” That is the thorny question researchers are confronting as they program autonomous vehicles, as discussed in this article in the New York Times.

More specifically, the question of whom to sacrifice in a high-speed avoidance maneuver – the person in the car or the pedestrian in the crosswalk – is one researchers are asking in the hope of framing how the car understands and responds to situations of mortal risk.

To the majority of respondents to a recent poll of autonomous vehicle passengers, the answer was clear: ‘hit the pedestrians’. This is not surprising, but it opens a whole raft of moral questions that are not purely theoretical. In a world of autonomous vehicles, this situation will arise, just as it does today with humans behind the wheel.

Making a split-second decision about how to avoid injury to oneself and others is a terrible choice for a person to have to make. The first instance of an autonomous vehicle choosing to hit ‘person X’ in order to avoid killing ‘person(s) Y’ will be even more contentious, with lingering societal anxiety over agency and moral priority. It will be doubly fraught because the car will have made a pre-programmed decision, the priority of lives already baked into its parameters for a given situation.

Even stranger, an autonomous car’s ‘prioritization parameters’ could become one of its advertised features. Perhaps cars will ship with standard avoidance programming, but for a bit more you can get one that puts its passengers’ lives first.
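To make that idea concrete, here is a purely hypothetical sketch – nothing from the article, and certainly not how any real vehicle is programmed. All the names and numbers are invented for illustration. The ‘feature’ could amount to something as banal as a single weight that tilts the car’s harm calculation toward its own occupants:

```python
from dataclasses import dataclass
from enum import Enum

class AvoidancePolicy(Enum):
    STANDARD = "minimize total expected harm"          # the baseline everyone gets
    PASSENGER_FIRST = "weight occupants above others"  # the premium upgrade

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated probability of serious harm to occupants
    bystander_risk: float  # estimated probability of serious harm to others

def choose_maneuver(options: list[Maneuver], policy: AvoidancePolicy) -> Maneuver:
    """Pick the maneuver with the lowest weighted expected harm.

    The occupant weight is the hypothetical 'feature' being sold:
    a PASSENGER_FIRST car simply inflates the cost of harming its
    own occupants relative to everyone else.
    """
    occupant_weight = 3.0 if policy is AvoidancePolicy.PASSENGER_FIRST else 1.0
    return min(options, key=lambda m: occupant_weight * m.occupant_risk + m.bystander_risk)

if __name__ == "__main__":
    options = [
        Maneuver("swerve into barrier", occupant_risk=0.4, bystander_risk=0.0),
        Maneuver("brake straight ahead", occupant_risk=0.1, bystander_risk=0.5),
    ]
    for policy in AvoidancePolicy:
        print(policy.name, "->", choose_maneuver(options, policy).name)
```

With these made-up numbers, the standard car swerves into the barrier to spare the pedestrian, while the passenger-first car brakes straight ahead and protects its occupants. Same situation, same sensors, different sticker price.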

It’s all very weird and unsettling, but I’m a ‘glass half full’ kind of guy. Just think how busy it will keep the lawyers.