A Better Path Forward for AI by Addressing Training, Governance and Risk Gaps

Author: Meghan Maneval, CISM, CRISC, Vice President of Product Strategy and Evangelism at RiskOptics
Date Published: 7 May 2024

In an age in which artificial intelligence (AI) permeates virtually every facet of our lives, the need for robust oversight has never been greater. ISACA recently surveyed 3,270 IT audit, risk, governance, privacy and cybersecurity professionals about AI, covering knowledge gaps, policies, risks, jobs and more. The survey sheds light on pressing concerns and opportunities in the realm of AI, highlighting the need for more comprehensive training and stronger ethical standards.

The Training Imperative

Despite the rapid adoption of AI technologies, a significant gap remains in available training and guidance. The survey reveals that only a quarter of respondents feel extremely or very familiar with AI, while 46% consider themselves beginners. This gap extends into organizational training programs, with 40% of organizations offering no AI training at all. Even more concerning, the training that does exist is typically reserved for employees in technical roles, leaving most of the workforce unprepared for the evolving digital landscape.

This gap poses a risk not only to the effective and efficient use of AI but also to organizations' ability to manage the ethical implications of its deployment. Without a well-informed workforce, organizations may fail to recognize or address the potential biases and security risks inherent in AI systems.

The Governance Gap

The survey also points to a startling gap in AI governance. Only 15% of organizations have a formal policy governing the use of AI technology. This deficit is compounded by the finding that merely 34% of respondents believe their organizations adequately prioritize AI ethical standards, and only 32% say that security concerns, such as data privacy and bias, are sufficiently addressed.

The survey’s findings underscore the critical need for robust AI governance frameworks that do more than guide the technical deployment of AI. These frameworks must ensure that AI is used ethically and transparently and that it aligns with organizational objectives. Effective governance should encompass not only policy development and enforcement but also continuous monitoring and adaptation as AI technologies and their organizational impacts evolve.

Addressing AI Risk

The proliferation of AI technologies introduces significant risks that organizations must urgently address. A substantial 60% of respondents to the ISACA survey are very or extremely worried that generative AI will be exploited by bad actors, such as through the creation of more sophisticated phishing attacks. Moreover, 81% of respondents identify misinformation and disinformation as the biggest risks associated with AI. Despite these concerns, only 20% feel confident in their ability to detect AI-powered misinformation, and just 23% believe their organizations are equipped to handle these challenges effectively.

Perhaps most troubling, only 35% of those surveyed view addressing AI risk as an immediate priority for their organization. This gap between recognizing AI risks and prioritizing their mitigation underscores the need for a strategic approach to AI risk management. Organizations must not only acknowledge these risks but also actively integrate risk management into their AI governance frameworks, ensuring they have the processes and tools in place to detect, respond to and mitigate AI-related threats effectively.

Your Path Forward

To bridge these gaps, organizations must prioritize the development of formal AI governance frameworks that address not only the operational aspects of AI but also its broader impacts on the organization. This includes establishing clear guidelines on AI usage, data handling, and the mitigation of risk and bias. Equally important is expanding AI training programs across all levels of the organization, ensuring that every employee is equipped to use AI tools not only effectively but also responsibly.

As AI continues to reshape the landscape of modern business, I am optimistic about the future of digital trust in an AI-driven world. By fostering an environment of continuous learning and diligent oversight, we can harness the transformative power of AI while safeguarding the principles of digital trust.

Editor’s note: ISACA is addressing the AI knowledge gap with new courses on AI essentials, auditing generative AI and AI governance. Find out more at www.isaca.org/ai.
