News and Publications

Ethical AI: Europe leads the way

Posted: 08/02/2019

For many, the words ‘artificial intelligence’ (AI) bring to mind images of The Terminator or some other blockbuster-movie robot trying to destroy humanity, take over the world or steal our jobs. However, not all AI need lead to human destruction or job losses. In fact, many new AI technologies are designed to assist humans (eg by enabling us to do our jobs more efficiently or to better understand our health), thereby increasing human wellbeing. This view is reflected by the European Commission’s High-Level Expert Group (HLEG) on AI, which believes that the benefits of AI for individuals and society outweigh its risks, provided we manage those risks properly and build ‘trustworthy AI’. Europe is leading the way with its guidance on ethical AI.

The guidelines

In an attempt to differentiate itself from the other major players in AI (namely the US and China), the European Commission’s HLEG has put together a working document: the Draft Ethics Guidelines for Trustworthy AI (the guide), which was opened for consultation in December 2018.

The guide sets out the proposed framework for achieving trustworthy AI, which contains an overarching principle that AI should be human-centric with the goal of increasing human wellbeing. It is said to have two components:

  • an ethical purpose; and
  • technical robustness and reliability.

Ethical purpose
To have an ethical purpose, AI needs to be developed, deployed and used with respect for fundamental rights, principles and values, including: 

  • human dignity;
  • freedom of the individual;
  • democracy, justice and the rule of law;
  • equality, non-discrimination and solidarity, including the rights of persons belonging to minorities; and
  • citizens’ rights.

The principles and values that need to be observed are:

  • to do good;
  • to do no harm;
  • to preserve human autonomy; and
  • to be just and fair and operate transparently.

The guide also highlights some areas of specific concern such as:

  • identification without consent (which may arise in certain types of AI such as face recognition);
  • covert AI systems (arising in human-like robots);
  • mass citizen scoring (eg general assessment of ‘moral personality’ or ‘ethical integrity’ in all aspects and on a large scale by public authorities); and
  • lethal autonomous weapon systems (also known as ‘killer robots’), which operate without human control to select and attack individual targets.

Realisation of trustworthy AI
The guide sets out a non-exhaustive list of 10 requirements that need to be achieved in order to meet the standard of trustworthy AI. These include accountability; data governance; design for all; governance of AI autonomy (ie human oversight); non-discrimination; respect for and enhancement of human autonomy; respect for privacy; robustness; safety; and transparency.

In terms of robustness, the AI must be able to deal with errors or inconsistencies that may occur during the various phases of the system’s lifecycle (ie design, development, execution, deployment and use). It must be accurate, and its accuracy results must be capable of being confirmed and reproduced by independent evaluation. The AI must also be resilient to attack and have a fallback plan in the event that issues arise.

The guide gives some technical and non-technical methods to achieve trustworthy AI and sets out an assessment list proposing questions that assessors can reflect upon. Assessment is to be continuous during the AI’s full lifecycle (ie from data gathering and design to deployment and usage).  

The idea is that trustworthy AI will build consumer confidence.

Final thoughts

AI brings with it numerous opportunities to benefit the human race but, at the same time, opens up additional potential for abuse. While the idea of ethical AI could initially seem idealistic to some, it may be sufficient to diminish those nightmares of The Terminator and build customer confidence in AI products, which will in turn lead to a broader uptake of AI systems. The challenge will be how to regulate and police the AI systems and prevent abuses, some of which are likely to be inevitable. Idealistic or not, one has to applaud Europe for trying to lead the way for ethical AI. 

It will also be interesting to see whether, with Brexit on the horizon, the UK will follow Europe’s lead.

The final version of the guide is due to be published in March 2019. 


Penningtons Manches Cooper LLP

Penningtons Manches Cooper LLP is a limited liability partnership registered in England and Wales with registered number OC311575 and is authorised and regulated by the Solicitors Regulation Authority under number 419867.