Insurers Must Navigate The Moral Maze Of AI

The industry must develop a code of ethics for insuring artificial intelligence systems if it does not want a set of rules arbitrarily imposed on it.

For decades, when discussing how futuristic robots would make decisions, people turned to science fiction for answers - specifically to the visionary writer Isaac Asimov, whose laws of robotics seemed to hold the promise of a workable solution.

Back in 1942, Asimov postulated three laws that would govern a robot's behaviour, namely: a robot must not injure a human being; a robot must obey orders given by human beings, unless this resulted in harm to a human; a robot must protect its own existence as long as this did not conflict with the first or second laws.

As a first attempt at creating a code of ethics for artificial intelligence (AI), the three laws were a commendable piece of work, but as real AI proliferates, we can now see their inherent weaknesses.

Consider the well-trodden scenario in which the AI controlling an autonomous vehicle must choose between hitting a pedestrian or, potentially, injuring its occupant. Asimov's first law - a robot may not injure a human being - is immediately revealed as unfit for purpose. The vehicle's AI has no ethical basis, or value system, on which to make the difficult choice. The result: either the vehicle's AI freezes, or it flips a philosophical coin. Clearly, neither option is satisfactory.
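The failure mode is easy to see if the first law is treated as a hard constraint on a controller's actions. The following is a toy sketch only, with entirely hypothetical action names and data - it bears no relation to how any real vehicle controller works - but it shows why a constraint that forbids all available options leaves the system able only to freeze or choose at random.

```python
import random

def first_law_choice(actions):
    """Return an action that injures no human, per a hard-constraint
    reading of Asimov's first law (toy model, hypothetical data)."""
    safe = [a for a in actions if not a["harms_human"]]
    if safe:
        return safe[0]
    # No lawful action exists: with no value system to rank the harms,
    # the controller can only freeze or flip a philosophical coin.
    return random.choice(actions)

# The dilemma from the text: every option harms a human being.
dilemma = [
    {"name": "swerve", "harms_human": True},    # risks injuring the occupant
    {"name": "continue", "harms_human": True},  # hits the pedestrian
]
print(first_law_choice(dilemma)["name"])  # arbitrary: "swerve" or "continue"
```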

Beyond the relative simplicity of the kill/do not kill decision, AI poses a host of less immediately dramatic conundrums whose impact could be just as profound. We have already seen the harbingers of these: trolls exploiting the programming of Microsoft's chatbot Tay to make it use racist and abusive language; or biased data leading to biased decisions, such as those reached by algorithms used by US courts to guide judges' sentencing.
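The mechanism behind the second example can be shown in miniature. This is a deliberately crude sketch with wholly invented data - real sentencing tools are far more elaborate - but it illustrates how a risk score "learned" from skewed historical records simply reproduces the skew.

```python
# Hypothetical records: (group, recorded_reoffence). Suppose group "B" was
# policed more heavily, inflating its recorded reoffence rate in the data.
history = [("A", 0), ("A", 0), ("A", 1), ("A", 0),
           ("B", 1), ("B", 1), ("B", 0), ("B", 1)]

def learned_risk(group):
    """'Learn' a risk score as the observed reoffence rate for the group."""
    outcomes = [r for g, r in history if g == group]
    return sum(outcomes) / len(outcomes)

# Two otherwise identical defendants receive different scores purely
# because the training data encoded unequal historical treatment.
for group in ("A", "B"):
    print(f"group {group}: learned risk score = {learned_risk(group):.2f}")
```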

Delivering on AI's potential

If the full potential of AI is to be reached, we must accept that the neural networks underpinning AI systems' decision-making will be too complex to map. Essentially, the "intelligence" manifested by black-box algorithms grows exponentially, rapidly surpassing the cognitive abilities of its maker.

While this could create issues under the EU's General Data Protection Regulation - specifically the Article 22 rights regarding automated individual decision-making - it does not justify stepping back from the technology.

AI and robotics hold out the possibility of tremendous benefits...
