For many, the words ‘artificial intelligence’ (AI) conjure images of The Terminator or some other blockbuster robot intent on destroying humanity or taking our jobs. Not all AI, however, need lead to human destruction or job losses. In fact, many new AI technologies are designed to assist humans (eg by enabling us to do our jobs more efficiently or to better understand our health), thereby increasing human wellbeing. This view is reflected by the European Commission’s High-Level Expert Group (HLEG) on AI, which believes that the benefits of AI for individuals and society outweigh its risks, provided those risks are properly managed and the AI we build is ‘trustworthy’. Europe is leading the way with its guidance on ethical AI.
In an attempt to differentiate Europe from the other major players in AI (namely the US and China), the European Commission’s HLEG has produced a working document, the Draft Ethics Guidelines for Trustworthy AI (the guide), which was opened for consultation in December 2018.
The guide sets out a proposed framework for achieving trustworthy AI, built on the overarching principle that AI should be human-centric, with the goal of increasing human wellbeing. Trustworthy AI is said to have two components: an ethical purpose and technical robustness.
To have an ethical purpose, AI needs to be developed, deployed and used with respect for fundamental rights and for core ethical principles and values. The principles to be observed include beneficence (‘do good’), non-maleficence (‘do no harm’), autonomy of humans, justice and explicability. The guide also highlights some areas of specific concern, such as identification without consent, covert AI systems, citizen scoring and lethal autonomous weapons.
Realisation of trustworthy AI
The guide sets out a non-exhaustive list of 10 requirements that need to be met in order to achieve trustworthy AI: accountability; data governance; design for all; governance of AI autonomy (ie human oversight); non-discrimination; respect for and enhancement of human autonomy; respect for privacy; robustness; safety; and transparency.
In terms of robustness, the AI must be able to deal with errors or inconsistencies that may occur during the various phases of the system’s lifecycle (ie design, development, execution, deployment and use). It must be accurate, and its accuracy results must be capable of being confirmed and reproduced by independent evaluation. The AI must also be resilient to attack and have a fallback plan in the event that issues arise.
The guide offers both technical and non-technical methods for achieving trustworthy AI, and sets out an assessment list of questions for assessors to reflect upon. Assessment is to be continuous throughout the AI’s full lifecycle (ie from data gathering and design through to deployment and use).
The idea is that trustworthy AI will build consumer confidence.
AI brings with it numerous opportunities to benefit the human race but, at the same time, opens up additional potential for abuse. While the idea of ethical AI may initially seem idealistic to some, it could be enough to dispel those nightmares of The Terminator and build consumer confidence in AI products, which in turn should lead to broader uptake of AI systems. The challenge will be how to regulate and police AI systems and prevent abuses, some of which are likely inevitable. Idealistic or not, Europe deserves credit for trying to lead the way on ethical AI.
It will also be interesting to see whether, with Brexit on the horizon, the UK will follow Europe’s lead.
The final version of the guide is due to be published in March 2019.