AI Ethics and eXplainable AI (XAI): Much Easier Said Than Done

Author: Guy Pearce
Date Published: 6 September 2022

A growing number of artificial intelligence (AI) proponents argue that integrating ethics with AI will result in better customer engagement with the technology. Given that ethics is a vast body of knowledge accumulated over 7,000 years across phenomenally different cultures, a first question must be, “Whose ethics?” followed quickly by “What ethics?”

Will humankind ever be able to agree on a universal list of moral rights and wrongs to integrate into the relatively new field of AI? Drawing up a list of ethics principles often begins with the assumption that those principles are universally accepted.

Indeed, “If there was a set of universal ethical principles that applied to all cultures, philosophies, faiths and professions, it would provide an invaluable framework for dialogue.” Larry Colero inadvertently found himself at the center of this attempt at a universal ethics framework, which, more than 20 years later, has not been subject to the kind of endless controversy he expected. However, 20 years is a fraction of 7,000 years of endeavor, so it is too early to determine the extent to which his framework is truly universal, especially given that it has not been exposed to all the cultures of the world for validation.

So, when AI vendors refer to ethics, are they not referring to their regional, cultural behaviors and beliefs about what is right, rather than to ethical absolutes? It would be more accurate for a vendor to say that AI is being programmed with regional cultural norms in mind (if this is truly happening) rather than to imply that universal truths have been agreed upon and programmed into the AI. Given this, three tests for a vendor claiming that its AI is ethical are to list the ethical principles being applied, to show that those principles are universal rather than regional, and to demonstrate that they are exhibited in decision making. If there are no suitable answers to these tests, then it is all just hype.

What is absolute is that ethics decision-making has a context. As soon as the context for the ethics decision changes, the ethics decision can change, too. Another problem with so-called ethical AI lies in the fact that a machine will be making ethics decisions. Assuming that universal ethics have been programmed into the AI, what if neither the machine nor its human creators understand how and why the decision is being made? With a lack of transparency, how then can the machines ever be trusted?

Enter XAI, the pursuit of eXplainable AI, a field that declined in the 1980s during one of the AI winters but has reemerged as machine learning technology has advanced. XAI aims to explain the black-box phenomenon inherent in much of today’s AI; if successful, it could help build trust in AI machines, one significant benefit of the paradigm.

As of 2020, however, a gap between the deployment of XAI and end-user transparency persisted. XAI is often seen as a grudging effort by developers and product managers more eager to get a product to market than to understand how it works. Different stakeholders also have significantly different goals for XAI, which makes implementing it more complex than expected. In healthcare, some have even argued that the benefits of AI outweigh the need for XAI.

In practice, XAI currently struggles to realize its overall goal of serving as a mechanism to ensure understandable, trustworthy and controllable AI. Moreover, the XAI techniques for the more complex machine learning algorithms are complicated to use, with some requiring deep statistical and mathematical expertise to apply.
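
To make this concrete, below is a minimal sketch of one widely used, model-agnostic XAI technique: permutation importance, implemented here with scikit-learn. It estimates how much a black-box model relies on each input feature by shuffling that feature and measuring the resulting drop in accuracy. The dataset, model and parameters are illustrative assumptions for this sketch, not anything prescribed by a particular AI vendor or framework.

```python
# A minimal, illustrative sketch of one model-agnostic XAI technique:
# permutation importance. Assumptions: scikit-learn is available, and a
# random forest on a public dataset stands in for the "black box."
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the opaque model we want to explain.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data and record the average
# drop in accuracy: a crude but interpretable signal of what the model
# actually relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, mean_drop in ranked[:5]:
    print(f"{name}: {mean_drop:.3f}")
```

Even this simple technique illustrates the point: it yields a global ranking of features, not an explanation of any individual decision, and interpreting it correctly already assumes some statistical literacy.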

That said, XAI is currently all we have in the pursuit of understanding our AI, and it is the minimum level of effort required to help build trust that our AI can make appropriate decisions in complex ethics situations. It remains our only means of making the “how” and the “why” of an AI-driven ethics decision available for subsequent scrutiny, thereby enabling the learning necessary to understand more about the ethics decisioning process on the path toward increasingly intelligent technology.

Editor’s note: For further insights on this topic, read Guy Pearce’s recent Journal article, “Focal Points for Auditable and Explainable AI,” ISACA Journal, volume 4 2022.
