A look at the basic principles guiding AI frameworks around the world: Part II
Fairness and non-discrimination in developing AI governance
How do you codify fairness? And is it even possible to ensure that an artificially intelligent system and its decision-making are completely devoid of discrimination?
These are the questions that many AI frameworks have tried to grapple with, and in many cases, the answers they turn up only spark more questions.
This piece in the series explores those questions and the emerging consensus on the principles of AI. The previous article examined transparency and explainability in AI systems, and the various ways that government efforts around the world are attempting to refine and implement those principles in practice.
Now, we turn to the concepts of fairness and non-discrimination in proposed models for AI governance and some of the challenges associated with pinning down these principles in AI development and use.
Australia
On April 5, 2019, Australia released a discussion paper on ethical AI. The paper outlines eight core principles for AI systems, one of which is fairness (perhaps unsurprising for a country whose unofficial motto is “fair go”). In this vein, the paper’s principle of fairness requires that the “development or use of the AI system must not result in unfair discrimination against individuals, communities, or groups,” and stresses that the initial datasets must be free from bias or “characteristics which may cause the algorithm to behave unfairly.”
Yet, the authors recognize that this is challenging in practice, especially when “justice sometimes demands that different situations be treated differently.” That’s why they stress the importance of human responsibility for AI decision-making, especially in “high stakes” situations that could have a significant impact on individuals’ lives.
Singapore
Singapore’s model for AI governance conceives of fairness in a similar way, including guidelines that would ensure “algorithmic decisions do not create discriminatory or unjust impacts across different demographic lines,” such as race or gender.
UK
In its treatment of fairness in AI systems, the UK’s AI Select Committee report does not mince words on the possibility of prejudicial decision-making by AI systems. The authors state that they are “concerned that many of the datasets currently being used to train AI systems are poorly representative of the wider population,” and the report urges that more must be done to ensure that data is “truly representative of diverse populations” and that it “does not further perpetuate societal inequalities.”
UAE
In a testament to how important this principle is understood to be, Dubai’s high-level “Principles of Artificial Intelligence” lists fairness first among its principles for ensuring ethical AI. It states that “steps should be taken to mitigate and disclose the biases inherent in datasets” and that “significant decisions should be provably fair.”
While provable fairness is desirable in many cases, it is not yet clear how decisions would be shown to be “provably fair” in practice, or even which metrics would be used to determine fairness in such instances beyond their impact on specific populations, a concern echoed by most frameworks.
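None of the frameworks discussed here prescribes a particular fairness metric. To make the idea concrete, the sketch below computes one candidate commonly discussed in the fairness literature, the disparate impact ratio, using entirely hypothetical loan decisions for two demographic groups; it is an illustration of one possible measure, not a method endorsed by any of these frameworks.

```python
# Illustrative only: one candidate fairness metric (disparate impact ratio),
# computed on hypothetical data. Not prescribed by any framework above.
from typing import Sequence


def selection_rate(decisions: Sequence[int], groups: Sequence[str], group: str) -> float:
    """Share of favourable outcomes (1 = approved) received by members of `group`."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0


def disparate_impact_ratio(decisions, groups, protected: str, reference: str) -> float:
    """Ratio of the protected group's selection rate to the reference group's.
    One common (and contested) rule of thumb treats values below 0.8 as a red flag."""
    return selection_rate(decisions, groups, protected) / selection_rate(decisions, groups, reference)


# Hypothetical loan decisions (1 = approved) for two demographic groups.
decisions = [1, 0, 1, 1, 1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(disparate_impact_ratio(decisions, groups, protected="B", reference="A"))  # 0.75
```

Even this simple measure illustrates the difficulty: choosing the metric, the groups to compare, and the threshold for “unfair” are all value judgments that the code cannot make on its own.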
Frameworks for Avoiding Algorithmic Bias
Although many, if not all, of the developing frameworks around the world recognize the potential for bias in the datasets used to train AI systems, and the ability of that bias to perpetuate inequalities on a larger scale, the concern raises the question of how to address bias, whether in the original datasets or in the algorithm itself.
To avoid discriminatory decision-making, Singapore’s model proposes that monitoring and accounting mechanisms be developed to prevent “unintentional discrimination,” and that diverse voices and demographics ought to be consulted when developing AI systems and algorithms.
The UK report proposed a more specific remedy: funding, as part of the country’s Industrial Strategy Challenge Fund, “to stimulate the creation of authoritative tools and systems for auditing and testing training datasets,” with the aim of ensuring that training data is representative of diverse populations and that AI systems do not produce prejudicial decisions.
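The report does not specify what such auditing tools would look like. As a rough illustration of the underlying idea, the sketch below compares the demographic make-up of a training set against reference population shares; all group labels, figures, and the flagging threshold are hypothetical.

```python
# Rough sketch of a representativeness check for a training dataset.
# All group labels, population shares, and thresholds are hypothetical.
from collections import Counter


def representation_gaps(dataset_groups, population_shares):
    """Return, for each group, its share of the training data minus its share
    of the wider population (negative values mean under-representation)."""
    counts = Counter(dataset_groups)
    total = len(dataset_groups)
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_shares.items()
    }


# Hypothetical training set dominated by group "A".
training_groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.60, "B": 0.30, "C": 0.10}  # hypothetical census shares

for group, gap in representation_gaps(training_groups, population).items():
    flag = "under-represented" if gap < -0.03 else "ok"  # arbitrary illustrative threshold
    print(f"{group}: {gap:+.2f} ({flag})")
```

A real auditing tool would, of course, need far more than head counts: agreed reference data, intersectional categories, and a view on which gaps actually translate into prejudicial decisions.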
Some experts point out, however, that it is not simply a matter of “eliminating bias” from data and decision-making; there is also the question of whether a dataset or a system can ever truly be neutral.
Dr. Ing. Konstantinos Karachalios, Managing Director of the IEEE Standards Association, explained in evidence to the UK inquiry: “You can never be neutral; it is us. This is projected in what we do. It is projected in our engineering systems and algorithms and the data that we are producing. The question is how these preferences can become explicit, because if it can become explicit it is accountable and you can deal with it…It is the difficulty of making implicit things explicit.”
And therein lies the challenge, a challenge that may require more iterations of guidelines and more resources to develop feasible solutions. However, by recognizing the importance of fairness and defining what this means, at least in principle, AI governance models are building strong foundations for ethical AI systems.
This series is authored by Cara Sabatini, a World Legal Summit researcher and journalist.