Freshfields Sustainability

Reposted from Freshfields Technology Quotient

Building Trustworthy AI – Commission Publishes New AI Ethical Guidelines

On Monday, the European Commission (the Commission) published “Ethics Guidelines for Trustworthy AI” (the Guidelines) and a Communication on next steps for “Building Trust in Human-Centric Artificial Intelligence” (the Communication). The Guidelines were drawn up by the Commission’s High-Level Expert Group on Artificial Intelligence (AI), a group of 52 individuals and organisations from academia, business and civil society.

Why focus on ethics?

Trust is obviously the keyword. AI has the potential to transform our world for the better. However, it also brings its own challenges, because it enables machines to learn and make decisions without human intervention. Decisions taken by algorithms in AI systems could result from data that is incomplete, unreliable, erroneous or biased. This could lead to problematic outcomes, which in turn could discourage the adoption of AI systems. The Commission’s strategic response to such challenges is, it says, to place people at the centre of the development of AI and make it worthy of the public’s trust. The Guidelines are a key attempt to achieve this.

Four principles and seven requirements

Overall, in the Commission’s view, trustworthy AI systems should: (i) comply with the law; (ii) fulfil ethical principles; and (iii) be robust from both a technical and a social perspective. The Guidelines focus on the latter two objectives and flesh them out through four ethical principles, grounded in the fundamental rights laid down in the EU Charter:

  1. Respect for human autonomy: 

    AI should be designed to complement and empower human cognitive, social and cultural skills, rather than subordinate, coerce or manipulate humans. The allocation of functions between humans and AI systems should leave meaningful opportunity for human choice.

  2. Prevention of harm: 

    AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings.

  3. Fairness: 

    AI systems should be fair, both substantively and procedurally.

  4. Explicability: 

    Decisions and decision-making processes should be transparent, comprehensible and capable of being explained to those directly or indirectly affected.

The Guidelines then set out seven concrete requirements to bring these principles to life. The Commission encourages stakeholders to apply these requirements and so create the right environment of trust for the successful development and use of AI.

1. Human agency and oversight: Human agency and oversight help ensure that AI systems do not undermine human autonomy or cause other adverse effects. Oversight could be achieved through governance mechanisms such as human-in-the-loop, human-on-the-loop or human-in-command, depending on the intended use of the AI application and the potential risks it poses (the first sketch after this list illustrates a simple human-in-the-loop gate).

2. Technical robustness and safety: Technical robustness requires that AI systems be developed with a preventative approach to risk, so that they reliably behave as intended while minimising unintentional and unexpected harm. AI systems should be secure, accurate and reliable. They should be resilient to cyberattack and should have safeguards and fall-back mechanisms that take over when something goes wrong (the first sketch after this list also shows a basic fall-back).

3. Privacy and data governance: Personal data collected by AI systems should be kept secure and private. Individuals ought to retain full control over their data and be able to trust that it will not be used to harm or discriminate against them. Gathered data may reflect socially constructed biases, or contain errors or inaccuracies; this should be addressed before an AI system is trained on any given data set (the second sketch after this list shows a minimal pre-training data audit).

4. Transparency: Decisions made by AI systems, as well as the decision-making processes behind them, should be logged and documented (the third sketch after this list shows one way of doing so). Further, to the extent possible, the technical processes of an AI system and the related human decisions (e.g. the application areas of a system) should be explicable. Finally, an AI system should not hold itself out as human to end users; its identity, capabilities and limitations should be communicated to them.

5. Diversity, non-discrimination and fairness: AI systems should be developed in a way that prevents unfair bias. Establishing diverse design teams and ensuring citizen participation can help address this concern. AI systems should also consider the whole range of human abilities and skills, and strive to achieve equal access for persons with disabilities.

6. Societal and environmental well-being: The impact of AI on the environment and on sentient beings should be taken into account. The sustainability and ecological responsibility of AI systems should be encouraged and their impact on society as a whole should also be considered, particularly in opinion-formation, political decision-making or electoral contexts.

7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, e.g. assessment of AI systems by internal and external auditors. Impact assessments should be carried out to identify and minimise potential negative effects, and any trade-offs between the requirements should be rationalised and accounted for. Finally, when unjust adverse impact occurs, accessible mechanisms should ensure adequate redress.
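
How might some of these requirements translate into code? As a loose illustration of requirements 1 and 2 (emphatically not something the technology-neutral Guidelines prescribe), the Python sketch below routes low-confidence predictions to a human reviewer: a simple human-in-the-loop gate that doubles as a fall-back mechanism. The model, threshold and reviewer function are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical decision record; the Guidelines prescribe no schema.
@dataclass
class Decision:
    outcome: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(features: dict,
           model: Callable[[dict], tuple[str, float]],
           human_review: Callable[[dict], str],
           threshold: float = 0.9) -> Decision:
    """Human-in-the-loop gate: the model decides only when it is
    sufficiently confident; otherwise a human takes over (fall-back)."""
    outcome, confidence = model(features)
    if confidence >= threshold:
        return Decision(outcome, confidence, decided_by="model")
    # Fall-back: low confidence triggers human oversight.
    return Decision(human_review(features), confidence, decided_by="human")
```

Where the confidence threshold sits, and whether a human reviews every decision ("in the loop") or merely supervises ("on the loop"), is exactly the kind of design choice the Guidelines leave to a risk assessment of the particular application.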
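
For requirement 3, here is a minimal sketch of a pre-training data audit, assuming a pandas DataFrame and a hypothetical protected_attribute column. Real bias testing goes far further; the point is simply that such checks run before any training takes place.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, protected_attribute: str) -> dict:
    """Surface obvious data-quality and balance problems before training.
    A real audit would go much further (proxy variables, label bias, etc.)."""
    return {
        # Errors and inaccuracies: missing values per column.
        "missing_values": df.isna().sum().to_dict(),
        # Exact duplicate rows can silently over-weight some records.
        "duplicate_rows": int(df.duplicated().sum()),
        # Representation of each group under the protected attribute.
        "group_shares": df[protected_attribute].value_counts(normalize=True).to_dict(),
    }

# Usage: inspect the report and remediate *before* fitting any model, e.g.
# report = audit_training_data(training_df, protected_attribute="gender")
```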
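
Finally, for requirement 4, a sketch of how decisions and the processes behind them might be logged and documented: one structured, timestamped log line per decision, using Python's standard logging module. The field names are illustrative, not drawn from the Guidelines.

```python
import json
import logging
import time

logger = logging.getLogger("ai_decisions")
logging.basicConfig(filename="decisions.log", level=logging.INFO)

def log_decision(model_version: str, inputs: dict, outcome: str,
                 confidence: float, decided_by: str) -> None:
    """Record each decision so it can later be explained to those affected
    and reviewed by auditors (see also requirement 7 on accountability)."""
    logger.info(json.dumps({
        "timestamp": time.time(),
        "model_version": model_version,  # which system produced the decision
        "inputs": inputs,                # must be JSON-serialisable
        "outcome": outcome,
        "confidence": confidence,
        "decided_by": decided_by,        # "model" or "human" (oversight trail)
    }))
```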

How will these requirements and principles be translated into real-life applications? Perhaps by AI developers creating ethics committees of the sort already familiar in another industry where ethical compliance is paramount: healthcare. A recent article in the Financial Times suggests that such committees will need real teeth, with members independent and representative enough that the public trusts the decisions they make.

Next steps

The Commission is now embarking on a pilot phase to obtain stakeholder feedback on the Guidelines, with a particular focus on a non-exhaustive Trustworthy AI Assessment List (the List), designed to operationalise trustworthy AI in line with the seven requirements set out above. The aim is to road-test how the Guidelines can be implemented in practice.

Based on feedback from this exercise, the High-Level Expert Group will review and update the List in early 2020. The aim is a framework that can be applied horizontally across all applications, offering a foundation for ensuring trustworthy AI in all domains.

On a separate (but related) note, the Commission has also tasked the expert group with preparing recommendations for regulators on how to tackle the mid- to long-term challenges and opportunities raised by AI. These recommendations will be finalised and published in May 2019.


Tags

europe, risk, ai