Part III of our series, “Principles of AI governance”

Discussions of safety and AI systems may conjure up a range of images, from hacked (or “tricked”) self-driving cars to more nightmarish scenarios of autonomous weapons gone awry, or perhaps, that time 24 workers were hospitalized after a robot punctured a can of bear repellent in an Amazon warehouse.

For all these reasons, it goes without saying that the development of AI systems must be safe and secure. But what do “safety” and “security” mean in the context of autonomous systems or AI decision-making? And how do you ensure that both remain paramount in the development process without stifling innovation?

Safety and security aren’t ethical principles in the philosophical sense, but they are treated as virtues by many of the frameworks around the world that seek to promote ethical AI. Depending on the framework, the physical safety, metadata, and privacy of an individual, an organization, or even an entire nation may fall under this umbrella.

Singapore’s model for AI governance is a groundbreaking effort for ethical AI in Asia, and it’s quickly gaining global recognition. The country recently took home the prize for “Ethical Dimensions of the Information Society” at the World Summit on the Information Society (WSIS) in Geneva for its AI governance and ethics initiative.

While it is still a “living document,” a major part of the country’s AI governance model is its focus on ensuring systems are “human-centric,” which includes protecting the safety and security of people’s data and wellbeing. The document states, “AI systems should be safe and secure, not vulnerable to tampering or compromising the data they are trained on.”

In order to protect the safety of humans in contact with AI systems, the model recommends testing the degree to which AI solutions “generalize” and “fail gracefully.”

For example, a warehouse robot tasked with picking packages while avoiding obstacles should be tested against different types of obstacles and realistically varied internal environments (e.g. workers wearing shirts of various colours). Otherwise, models risk learning regularities in the environment that do not reflect actual operating conditions (e.g. assuming that every human it must avoid will be wearing a white lab coat). Once AI models are deployed in the real-world environment, active monitoring, review, and tuning are advisable.
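To make that recommendation concrete, below is a minimal, hypothetical sketch of what such a test might look like in code. Every name in it (Scenario, toy_detector, robot_action, the confidence threshold) is an illustrative assumption rather than part of Singapore’s model or any real robotics stack; the point is simply that the test sweeps over varied conditions instead of the single environment the model was trained in, and checks that the system falls back safely when it is unsure.

```python
# Illustrative sketch only: a toy test harness for checking whether a
# hypothetical obstacle-avoidance model "generalizes" across varied
# conditions and "fails gracefully" when it is unsure. None of these
# names come from Singapore's framework or a real robotics stack.

from dataclasses import dataclass
from itertools import product


@dataclass
class Scenario:
    obstacle: str      # e.g. a worker, a pallet, a spilled box
    shirt_colour: str  # varied clothing, not just white lab coats
    lighting: str      # warehouse lighting conditions


def toy_detector(s: Scenario) -> float:
    """Stand-in for a real perception model; returns a detection confidence.
    It is deliberately biased towards white clothing, mimicking a model that
    learned a spurious regularity from its training environment."""
    confidence = 0.95 if s.shirt_colour == "white" else 0.55
    if s.lighting == "dim":
        confidence -= 0.2
    return max(0.0, min(1.0, confidence))


def robot_action(confidence: float, threshold: float = 0.6) -> str:
    """Graceful-failure policy: when the detector is unsure, stop and ask
    for human help rather than assuming the path is clear."""
    return "avoid" if confidence >= threshold else "stop_and_alert"


def run_generalization_suite() -> None:
    obstacles = ["worker", "pallet", "spilled_box"]
    colours = ["white", "red", "blue", "high_vis_yellow"]
    lighting = ["bright", "dim"]

    gaps = []    # conditions the model does not handle confidently
    unsafe = []  # conditions where it proceeds anyway (no graceful fallback)

    for o, c, l in product(obstacles, colours, lighting):
        s = Scenario(o, c, l)
        confidence = toy_detector(s)
        action = robot_action(confidence)
        if confidence < 0.6:
            gaps.append(s)
            if action != "stop_and_alert":
                unsafe.append(s)

    print(f"Scenarios where the model did not generalize: {len(gaps)}")
    print(f"Scenarios without a graceful fallback: {len(unsafe)}")


if __name__ == "__main__":
    run_generalization_suite()
```

In a real deployment, the same idea would be applied with staged or simulated scenarios and a genuine perception model, and the list of conditions the model handles poorly would feed the active monitoring, review, and tuning loop described above.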

Like Singapore’s model, the European Commission’s “Draft Ethics Guidelines for Trustworthy AI” stresses the importance of humans’ physical safety and data privacy, which includes any information generated about individuals during their interactions with AI systems. The report goes a step further, calling for AI systems to ensure the safety of resources and the environment, not just individuals. It also highlights the need for mechanisms to test a system’s adaptability and for processes to clarify and assess the potential risks associated with the use of AI products and services.

While Canada and the US do not have specific frameworks for how AI systems should operate in general, they do have various standards for how AI is used and understood in the public sector. Canada’s Directive on Automated Decision-Making recently took effect and includes a specific security provision: risk assessments must be conducted during the development of a system, and systems must allow for human intervention, when appropriate, as a safeguard against unintended consequences.

While Dubai’s “Principles of Artificial Intelligence” is more high-level than some of the other governance frameworks, it lists a number of principles to ensure the security of an AI-driven system. It states unequivocally that “AI systems should not be able to autonomously hurt, destroy or deceive humans,” and that “Safety and security of the people, be they operators, end-users or other parties, will be of paramount concern in the design of any AI system.”

The principle that AI systems should not be able to “autonomously” hurt or destroy humans has obvious implications for lethal autonomous weapons. Yet it would apply just as well, and perhaps more frequently, to the example of a warehouse robot selecting packages while successfully avoiding its human counterparts (as in Singapore’s model).

Most recently, Australia’s proposed framework includes the core principles of privacy and “do no harm.” The latter states, “Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.”

It’s worth noting that the inclusion of “civilian” in Australia’s core principles sets certain parameters on the development of AI systems that could be used for lethal purposes, but it does not necessarily preclude their development.

While more discussion is needed about the mechanisms and metrics required to ensure that AI systems are safe and secure, the prominence of these principles in proposed AI frameworks is a necessary first step.

This series is authored by Cara Sabatini, a World Legal Summit researcher and journalist.

This article is part of a series that explores the principles guiding AI frameworks around the world, with a focus on the relevant questions and consensus inspired by these principles.