
Freshfields Sustainability


Artificial Ethics: evolving guidelines to help thinking machines make the “right” choices.

When the machines take over, how do we ensure that they respect our fundamental rights and values?  It’s a question gaining prominence in a number of fields, from driverless cars to predictive algorithms and facial recognition.  

As reported on our Digital Blog, on 18 December, the European Commission’s High-Level Expert Group on Artificial Intelligence published for consultation its Draft Ethics Guidelines for Trustworthy AI. The draft Guidelines are open for comment until 1 February.  The final version is due in March 2019. 

The Guidelines propose the same kind of broad, intuitively sensible rules about what constitutes “trustworthy” AI as those put forward by a number of other organisations, including Microsoft and Google.  What do they conclude? First, that AI should respect fundamental rights, societal values and five core ethical principles: “beneficence” (do good), “non-maleficence” (do no harm), human autonomy, justice, and “explicability” (transparency).  Second, that AI imbued with these principles should be reliably designed and developed, that there should be human oversight, and that there should be accountability and redress when things go wrong.  

In other words, “trustworthy” AI makes the right choices, every time.

The Expert Group also includes a list of “critical concerns” raised by AI.  The Guidelines note that the choice of critical concerns was controversial, and seek feedback from consultees on the extent to which each concern is a real threat.  The Group’s list – mass surveillance, “covert” AI posing as human, “citizen scoring” by governments, and autonomous weapons – is intriguing.  These concerns span the line between artificial and human intelligence and the risks of mass data collection and processing, and – in the case of autonomous weapons systems – return to the crux of the issue: how, as AI systems gain the ability to make choices, do we ensure that they make the right choices, and that a “real” person is accountable when they do not?  The Expert Group argues that AI should make choices based on its five ethical principles, which it has drawn from European and international human rights law.  

It’s not as easy as that, of course: the Guidelines’ five principles are broad, and they could conflict in real-world situations.  In such cases, different human decision-makers would likely reach different “right” choices. See, for example, Moral Machine, an MIT research project which collected data from 233 countries and territories on how users thought a driverless car should behave in an ethical quandary, and found significant variations in ethical decision-making between different cultural and geographic “clusters”.

It’s not hard to imagine a world in which regulators set out the principles which programmers must teach AI systems in their jurisdictions, based on their own fundamental rights legislation.  Alternatively, the choice might be given to consumers, who could, for example, program their driverless car to prioritise the safety of children over other road users.  
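To make the idea of consumer- or regulator-set priorities concrete, here is a minimal, purely hypothetical sketch in Python. It is not drawn from the Guidelines or any real vehicle software; the categories, weights and function names are illustrative assumptions about how ethical priorities might be encoded as configuration that a vehicle’s control logic consults when choosing between manoeuvres.

```python
from dataclasses import dataclass

@dataclass
class RoadUser:
    category: str          # e.g. "child", "adult_pedestrian", "occupant" (illustrative labels)
    collision_risk: float  # estimated probability of harm under a given manoeuvre, 0..1

# Hypothetical priority weights: higher means "protect first".
# A regulator or consumer could, in principle, supply these values.
PRIORITY_WEIGHTS = {
    "child": 3.0,
    "adult_pedestrian": 2.0,
    "occupant": 1.0,
}

def manoeuvre_cost(affected: list[RoadUser]) -> float:
    """Score one candidate manoeuvre as the weighted sum of expected harm."""
    return sum(PRIORITY_WEIGHTS.get(u.category, 1.0) * u.collision_risk for u in affected)

def choose_manoeuvre(options: dict[str, list[RoadUser]]) -> str:
    """Pick the manoeuvre with the lowest weighted expected harm."""
    return min(options, key=lambda name: manoeuvre_cost(options[name]))

# Example: braking slightly endangers the occupant; swerving endangers a child.
options = {
    "brake": [RoadUser("occupant", 0.2)],
    "swerve": [RoadUser("child", 0.15)],
}
print(choose_manoeuvre(options))  # with these weights, "brake" wins (0.2 vs 0.45)
```

The point of the sketch is simply that the “right” answer changes when the weights change – which is exactly the choice that the questions below ask who should make, and who should answer for.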

A number of questions arise from that possibility.  Who is at risk of legal action if a programmed AI goes wrong?  The manufacturer of the end product (e.g. the autonomous vehicle) may not be the programmer of the AI: how would liability be split between them?   The user may also have had a causal impact on the AI’s behaviour: could we even see liability for consumers who have “tweaked” AI to behave in a certain way?  Would it be a defence that a programmer was following guidance recommending, or legislation mandating, that AI be programmed to follow certain principles?  A further European Commission expert group on liability and new technologies is considering how EU product liability legislation should treat AI.  Finally, if a manufacturer chooses to program the AI to favour particular groups – children over adults, for example – could we see claims for discrimination?  

AI opens up the possibility of completely programmable decision-making: it allows us to choose in advance, and at our leisure, what for a human decision-maker might be a subconscious or split-second decision.  However, that very possibility will force lawmakers, manufacturers and consumers to confront the question of what constitutes the “right” choice when ethical principles conflict.  

Tags

human rights, ethics, european commission, eu law, technology, media and telecommunications, data