Explainability and transparency underlie proposed models of AI governance

What does it mean for AI to be ethical? Unless we’re talking about an AI system that makes “morally sensitive” decisions, like whether a smart speaker discloses information about a marijuana-smoking teenager, ethical AI generally refers to the principles that inform its development and use by humans.

Both public and private sector organizations have devised frameworks and guidelines to promote the development of AI systems in a socially responsible and ethical way. While most of the standards directly related to AI are not legally binding, they do provide a pathway to best practices and to potential enforcement mechanisms down the road.

Even in the early stages of developing AI governance, it is clear that certain principles consistently underlie the majority of frameworks put forth by governments around the world. Sets of principles have emerged from various corners of the globe; the most widely known may be the FAT principles of fairness, accountability, and transparency in machine learning. These basic principles form the backbone of many models of AI governance, although they sometimes take on different definitions.

This series reviews the major principles that are serving to encourage ethical and socially responsible AI practices. We’ve identified four key elements:

  • explainability and transparency;
  • fairness and non-discrimination;
  • safety and security;
  • and, finally, what could be called “the human element.”

This first installment of the series examines how explainability and transparency are understood and defined in the frameworks taking shape in a number of different countries.

Explainability and Transparency

Explainability and transparency, while distinct, are often linked in discussions of AI. Both principles help build trust in an AI system in similar ways and are therefore foundational to most frameworks around the globe.

For example, the ability to understand why a system behaves in a certain way plays an important role in risk mitigation. It’s a feature already built into some legislation surrounding the use of data, which inevitably affects the AI systems that rely on that data.

Article 22 of the GDPR contains a ‘right to an explanation’ provision. The United Kingdom’s House of Lords AI Select Committee takes this provision into account in its 2018 report on the social, ethical, and economic implications of AI. The report states that the GDPR provision gives individuals subject to fully automated decisions the right to an explanation of how a decision was reached, or to ask for a human to make the decision instead.

We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take. – UK House of Lords AI Select Committee

Yet there are reservations about full explainability, as seen in discussions of AI governance around the world.

Many frameworks take into account that the degree and form of explainability depend largely on the audience, be it a user, a developer, or a specific industry expert.

Most recently, Singapore’s proposed model of AI governance suggests that explainability features could focus on why the AI system had a certain level of confidence in its predictions, rather than on why a particular prediction was made. This could make explainability more practical to apply while still offering a deeper understanding of how the system operates.

India’s discussion paper on a national strategy for AI goes a step further, stating, “The machine learning algorithms of tomorrow should have the built-in capability to explain their logic, enumerate their strengths and weaknesses and specify an understanding of their future behavior.”

In Japan’s “Draft AI R&D Guidelines for International Discussions,” the principle of transparency stresses the verifiability of inputs and outputs of AI systems as well as the “explainability of their judgments.” The AI systems subject to this principle would be those that “might affect the life, body, freedom, privacy, or property of users or third parties.”

The European Commission’s “Draft Ethics Guidelines for Trustworthy AI” sees transparency as relating more to the source of the data than to the decisions the system makes. In its view, transparency is “being explicit and open about choices and decisions concerning data sources, development processes, and stakeholders,” particularly in AI systems that use human data or significantly affect human beings.

The UK report identifies technical transparency as one way to ensure the intelligibility of an AI system. But in its view, technical transparency would be more useful to experts than to the layperson and, in fact, may be impossible to achieve in most cases.

There are other caveats to full transparency. Professor of Electronic Commerce Law Chris Reed argues that requiring explanations for all decisions in advance could stifle innovation and limit the extent to which a system may evolve or learn through its mistakes.

To avoid such a consequence, the UK report specifies that there may be “particular safety-critical scenarios where technical transparency is imperative, and regulators in those domains must have the power to mandate the use of more transparent forms of AI.”

It’s clear that the principles of explainability and transparency are still being discussed and defined, and that the way they are put into practice depends on a number of factors. It’s equally clear, however, that there is consensus on the value of these principles for building AI systems that benefit society and the individual, and that this consensus brings us closer to universal standards and regulatory frameworks.

Next article: Fairness and non-discrimination in AI frameworks

This series is authored by Cara Sabatini, a World Legal Summit researcher and journalist.