The Key Steps to Successfully Govern Artificial Intelligence

Author: Ravikumar Ramachandran, CISA, CISM, CGEIT, CRISC, CDPSE, OCA-Multi Cloud Architect, CISSP-ISSAP, SSCP, CAP, PMP, CIA, CRMA, CFE, FCMA, CIMA-Dip.MA, CFA, CEH, ECSA, CHFI, MS (Fin), MBA (IT), COBIT-5 Implementer, Certified COBIT Assessor, ITIL 4 -Managing Professional, TOGAF 9 Certified, Certified SAFe5 Agilist, Professional Scrum Master-II, Chennai, India
Date Published: 19 February 2024

“The [technology] also wants what every living system wants: to perpetuate itself, to keep itself going. And as it grows, those inherent wants are gaining in complexity and force.” – Kevin Kelly

This blog post is an offshoot of my previous post, “Artificial Intelligence – A Damocles Sword?” published in December 2019. At that time, governance of AI was pronounced a top priority and the case was made for its urgent need. Now, four years down the line, numerous thought papers, open frameworks, regulations and pieces of legislation have emerged, showcasing the need for AI governance and providing broad steps to achieve it.

The challenge now is to wade through this vast literature and find ways to operationalize AI governance tailored to a particular organization. This post is an attempt to do so in a way that is useful to readers, especially those in the GRC fraternity. NIST’s AI Risk Management Framework (AI RMF), published in January 2023, is the primary reference used here.

Assessing AI Risks

AI systems are designed to operate with varying levels of autonomy. What are the main risks that come with AI?

The 15 biggest risks of AI, as shown here, are:

  1. Lack of Transparency
  2. Bias and Discrimination
  3. Privacy Concerns
  4. Ethical Dilemmas
  5. Security Risks
  6. Concentration of Power
  7. Dependence on AI
  8. Job Displacement
  9. Economic Inequality
  10. Legal and Regulatory Challenges
  11. AI Arms Race
  12. Loss of Human Connection
  13. Misinformation and Manipulation
  14. Unintended Consequences
  15. Existential Risks

It should be noted that, beyond these universal risks, AI can generate risks unique to the nature of the organization using it.
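
To make such an assessment concrete, the risks above can be captured in a simple register and scored by likelihood and impact. The sketch below is purely illustrative; the scales, scores and scoring approach are assumptions rather than part of any prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) to 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring; organizations may weight differently
        return self.likelihood * self.impact

# Illustrative entries drawn from the list above; the values are hypothetical
register = [
    AIRisk("Lack of Transparency", likelihood=4, impact=3),
    AIRisk("Bias and Discrimination", likelihood=3, impact=5),
    AIRisk("Privacy Concerns", likelihood=4, impact=4),
]

# Rank risks so that treatment effort can be prioritized
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: {risk.score}")
```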

AI Trustworthiness

NIST prescribes the following characteristics of trustworthy AI:

  1. Valid and Reliable
  2. Safe
  3. Secure and Resilient
  4. Accountable and Transparent
  5. Explainable and Interpretable
  6. Privacy-Enhanced
  7. Fair, with Harmful Bias Managed

According to NIST, “Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. A comprehensive approach to risk management calls for balancing trade-offs among the trustworthiness characteristics.”

The key phrase above is “balancing trade-offs.”

AI Risk Management Framework Core

The AI RMF core prescribed by NIST comprises four functions:

  1. Govern
  2. Map
  3. Measure
  4. Manage

Governance is designed as a cross-cutting function that informs and is infused throughout the other three functions. The AI RMF core functions should be carried out with multidisciplinary perspectives in mind, potentially including views from outside the organization.
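
As a lightweight illustration, the four core functions can be kept visible in day-to-day work by modeling them as a simple structure, with a governance check attached to each of the other functions to reflect its cross-cutting nature. The function names come from the AI RMF, but the activities listed below are assumptions for illustration, not NIST’s official categories or subcategories.

```python
# Minimal sketch: the four AI RMF core functions, with GOVERN treated as
# cross-cutting by attaching a governance review to each of the other functions.
# The activity text is illustrative, not taken from the NIST categories.
rmf_core = {
    "Govern":  ["Assign AI risk ownership", "Define trustworthiness metrics"],
    "Map":     ["Document intended use and context", "Identify impacted stakeholders"],
    "Measure": ["Evaluate the seven trustworthiness characteristics", "Track metrics over time"],
    "Manage":  ["Prioritize and treat risks", "Monitor deployed systems"],
}

def with_governance(function: str, activities: list[str]) -> list[str]:
    # Every non-Govern function inherits a governance review step
    if function == "Govern":
        return activities
    return activities + ["Review against governance policy"]

for function, activities in rmf_core.items():
    print(function, "->", with_governance(function, activities))
```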

Here are some suggested key steps for AI governance:

  1. The seven characteristics of trustworthiness should be considered for inclusion in risk metrics.
  2. Each characteristic should be valued quantitatively or qualitatively depending on the context, but preferably quantitatively and in percentages; a balance can then be struck based on a risk-benefit cost analysis or on regulatory or industry requirements (a minimal scoring sketch follows this list).
  3. Multidisciplinary perspectives from both internal and external stakeholders should be considered for the trustworthiness characteristics of AI systems, especially those with significant socio-economic consequences.
  4. Categories and sub-categories of each RMF core function should be taken as guidelines and applied while exercising each function.
  5. Importantly, these guidelines should be applied based on context and not treated as a strict checklist.
  6. The audit fraternity should treat AI audit as a “management audit” or “social audit” and apply those principles, rather than those of financial or internal audit, regardless of the AI component being audited.
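
The scorecard below is one way to make steps 1 and 2 concrete: each trustworthiness characteristic is scored as a percentage, weighted and combined, with a per-characteristic floor that flags unacceptable trade-offs (for example, a highly secure but unfair system). The weights, scores and threshold shown are hypothetical and would in practice be set by context, regulation or a risk-benefit analysis.

```python
# Hypothetical trustworthiness scorecard; the weights, scores and threshold are
# assumptions an organization would set by context, regulation or risk-benefit analysis.
characteristics = {
    "Valid and Reliable":              {"weight": 0.20, "score": 85},
    "Safe":                            {"weight": 0.15, "score": 90},
    "Secure and Resilient":            {"weight": 0.15, "score": 95},
    "Accountable and Transparent":     {"weight": 0.15, "score": 70},
    "Explainable and Interpretable":   {"weight": 0.10, "score": 60},
    "Privacy-Enhanced":                {"weight": 0.15, "score": 80},
    "Fair, with Harmful Bias Managed": {"weight": 0.10, "score": 55},
}

MINIMUM_PER_CHARACTERISTIC = 65  # floor that flags unacceptable trade-offs

overall = sum(c["weight"] * c["score"] for c in characteristics.values())
shortfalls = [name for name, c in characteristics.items()
              if c["score"] < MINIMUM_PER_CHARACTERISTIC]

print(f"Weighted trustworthiness score: {overall:.1f}%")
if shortfalls:
    # A high overall score cannot compensate for a characteristic below the floor
    print("Trade-off review required for:", ", ".join(shortfalls))
```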

A Case Study on the Operationalization of AI Principles

A case study on how Microsoft used an AI ethics committee to govern its AI development is briefly detailed here. In March 2018, Microsoft announced that it was establishing an AI and Ethics in Engineering and Research (AETHER) Committee, jointly led by the company’s president and the executive vice president of its AI and Research group. By early 2019, AETHER had been expanded to stand for AI, Ethics, and Effects in Engineering and Research. AETHER is organized into seven working groups:

  1. Sensitive Uses to assess the impact of automated decisions on people’s lives
  2. Bias and Fairness to assess the impact on minority and vulnerable populations
  3. Reliability and Safety to ensure that AI systems are robust against adversarial attacks
  4. Human Attention and Cognition to monitor algorithmic attention-hacking and abilities of persuasion
  5. Intelligibility and Explanation to provide transparency into machine learning and deep learning models
  6. Human AI Interaction and Collaboration to enhance people’s engagement with AI systems
  7. Engineering Best Practices to recommend best practices for each stage of the AI system development cycle

The decision to establish AETHER sends a very clear message to employees, users, clients and partners that Microsoft intends to hold its technology to a higher standard.

Industry-Specific Guidelines Can Strengthen AI Governance

AI carries significant benefits and advantages for humankind, but the associated risks need to be managed in a comprehensive manner. Following a comprehensive framework will therefore help organizations avoid ad hoc decisions and build credibility and trust among their various stakeholders.

As AI evolves, regulatory bodies are coming out with open frameworks to suit various environments and industries. Specialized bodies such as ISACA can complement those frameworks with industry-specific guidelines, drawing on their expertise and pools of knowledge.

Author’s note: The opinions expressed are the author’s own views and do not represent those of the organization or the certification bodies with which he is affiliated.
