Which do you prefer: a Kant Car or an Aristotle Audi?

Driverless vehicles will have to deal with tricky life-or-death decisions that philosophers have argued about for years. So who will the artificial intelligences behind the wheel choose to favor? I asked Janina Loh, who teaches philosophy at the University of Vienna, for her thoughts on the ethics of self-driving cars.

Will autonomous vehicles represent a step toward greater safety or are they an added risk?

Since 90% of all traffic accidents are due to human error, chances are that self-driving cars will reduce the current number of pileups. But even the best autonomous vehicles will eventually be involved in serious incidents. Today, a driver reacts spontaneously and by reflex, because there is neither the time nor the information to make an informed ethical decision before it’s too late. Essentially, the same will be true of self-driving cars, but their decisions will be largely automated, so it will actually be the algorithms and their programming that decide what actions to take. That’s why we need to make sure that certain ethical principles are built into our technical systems.

So in the worst case, machines will have to decide over life and death, won’t they?

It will be hard to program autonomous vehicles to include every conceivable scenario that could occur in traffic. That’s why we need to make sure that certain moral principles are being followed. A car that is programmed to protect its own passengers at the cost of everyone else would be just as socially unacceptable as one that willingly sacrifices its occupants to save others.

So to which rules should a self-driving car adhere?

Let’s consider the classic case: A car is driving through a residential area when a group of small children run out from behind a parked car. To avoid them the car would have to pull to the left, but by doing so it would hit an 80-year-old man approaching on his bicycle.

In your opinion, what should the car decide to do?

That depends on which school of ethics you choose to follow. The utilitarian school of thought, founded by Jeremy Bentham in the late 18th century, states that the best action is the one that maximizes utility. According to Bentham’s Greatest Happiness Principle, we should be governed by the credo “the greatest happiness of the greatest number”. This means the old man must die, because his usefulness to society is probably less than that of the children, one of whom might grow up to become the next Einstein. According to the school of deontological ethics associated with Immanuel Kant, however, assigning different values to different human lives is completely unethical, because human dignity is absolute, and you can’t compare absolutes. In fact, human dignity forms the bedrock of most legal systems in liberal democracies today.

Sounds like there is no real ethical solution after all.

Philippa Foot, a famous British philosopher and ethicist, called this kind of situational dilemma a “trolley problem” – based on a dilemma similar to the one described above, but using a trolley rather than a car and adding further complications. Philosophers often engage in such thought experiments: they describe hypothetical situations, sometimes realistic and sometimes fantastical, and then ask about our ‘moral intuitions’ regarding the case. There is no right or wrong answer to a trolley problem, which means that we don’t need to solve this kind of puzzle before we can let autonomous vehicles loose on mankind.

So which school of ethics should car makers follow?

The automotive industry is focused on making sure that accidents don’t happen in the first place. Driver assistance systems are programmed to react defensively, meaning that when in doubt they will slow down or stop and ask questions later. If an accident becomes unavoidable, the European Ethics Commission has laid down that under no circumstances may a vehicle be programmed to choose between potential human victims. Instead, the steering systems must be programmed to seek to avoid accidents by all means, or at least to reduce speed (and thus collateral damage) as much as possible.
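To make that rule concrete, here is a minimal sketch in Python, assuming a hypothetical planner interface and hugely simplified physics. The point is that no attribute of the potential victims enters the decision: the car only checks whether a manoeuvre avoids a collision at all and, if none does, brakes to minimise impact speed.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Manoeuvre:
    name: str                 # e.g. "brake_hard", "swerve_left"
    collision_free: bool      # does the predicted trajectory avoid all obstacles?
    impact_speed_kmh: float   # predicted speed at the moment of impact, if any

def choose_manoeuvre(options: List[Manoeuvre]) -> Optional[Manoeuvre]:
    """Defensive policy sketch: prefer any manoeuvre that avoids a collision
    entirely; if none exists, pick the one with the lowest impact speed.
    Deliberately, no property of the potential victims (age, number, etc.)
    enters the decision."""
    safe = [m for m in options if m.collision_free]
    if safe:
        return safe[0]          # any collision-free option is acceptable
    if not options:
        return None
    # Unavoidable accident: minimise kinetic energy, i.e. impact speed.
    return min(options, key=lambda m: m.impact_speed_kmh)

# Hypothetical numbers for the residential-street scenario above: every
# option hits someone, so the car simply brakes to reduce speed.
options = [
    Manoeuvre("brake_hard", collision_free=False, impact_speed_kmh=12.0),
    Manoeuvre("swerve_left", collision_free=False, impact_speed_kmh=28.0),
]
print(choose_manoeuvre(options).name)  # -> "brake_hard"
```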

And if worst comes to worst, who is liable?

Certainly not the car itself, since algorithms cannot be held legally accountable.

Do we need some kind of new Digital Road Traffic Act?

The European Parliament has suggested that autonomous driver assistance systems should be considered legal entities – a kind of electronic persona. After all, companies can be taken to court, so why not an algorithm? This is especially true for self-learning artificial intelligence (AI) systems, which would need to be issued a sort of digital legal personality. Of course, this also means they would have to be registered with the authorities and would need to be provided with assets with which to pay compensation, or at the very least they would require liability insurance. We need to think long and hard about what this kind of ‘digital personhood’ means in practice, because we are heading towards a future in which autonomous and self-learning machines will play an increasingly important role.

Couldn’t the creators of such systems conceivably argue that, since they are self-learning, it wasn’t they who taught the machine to do what it does, but that it actually taught itself?

It’s certainly true that the damage done by a self-learning system can’t be traced directly back to the programmer. In order to act in a virtue-ethical sense, an autonomous driver assistance system will need to be capable of a far greater degree of self-learning than a simple ‘Kant Car’ or ‘Bentham Porsche’. One could imagine a kind of ‘Aristotlemobile’ that would require owners to drive themselves around at first, so that the car can learn to drive the way they do.
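Purely as an illustration of that last idea, and not anything Loh or any manufacturer has proposed: such an ‘Aristotlemobile’ could be sketched as the simplest form of imitation learning, in which the car records the owner’s own reactions and fits a model that reproduces them. The features, data, and model below are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical logged data: each row is a situation the owner handled
# (own speed in km/h, distance to obstacle in m, lateral offset in m);
# each target is the braking force the owner actually applied (0..1).
situations = np.array([
    [50.0, 40.0,  0.0],
    [30.0, 10.0,  0.5],
    [70.0, 80.0, -0.2],
    [20.0,  5.0,  0.1],
])
owner_braking = np.array([0.2, 0.8, 0.1, 0.9])

# Behavioural cloning in its simplest form: learn to imitate the owner.
model = LinearRegression().fit(situations, owner_braking)

# Later, in a new situation, the car brakes roughly the way its owner would.
new_situation = np.array([[40.0, 15.0, 0.0]])
print(model.predict(new_situation))
```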


Janina Loh teaches philosophy at the University of Vienna. Together with her husband she recently contributed an essay on digital ethics to Patrick Lin’s anthology Robot Ethics 2.0 (Oxford University Press, 2017).
