These days, AI — especially generative AI (gen AI) — has become a hot topic in many boardrooms. Directors are asking questions about AI's business value, the risks the technology may present and the board's role in AI governance, especially how to achieve balance between innovation and responsibility. Boards must navigate these new AI challenges and opportunities with an eye toward long-term value creation.
Several studies give us insight into how companies are experimenting with gen AI. For example, a 2023 Fortune/Deloitte survey of CEOs found that more than half of those surveyed (55%) were evaluating or experimenting with gen AI. Seventy-nine percent of CEOs believed generative AI would increase efficiencies, while 52% believed it would increase growth. Furthermore, over a third (37%) were already implementing generative AI to some degree. A 2024 McKinsey survey on the state of AI found that nearly 65% of respondents were regularly using gen AI, almost double the percentage from a previous McKinsey survey conducted just 10 months earlier. The survey also revealed that most companies focused their gen AI efforts on the functions where they could capture the most value, such as marketing and sales, product and service development, and information technology.
Boards must understand what gen AI is being used for and its potential business value in supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:
- Data must be ethically governed. A company's data must align with its guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
- Data must be secure. Companies must protect their data to ensure that intruders don't gain access to it and that it isn't absorbed into someone else's training model.
- Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
- AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
- AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.
It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and reflective of real-world conditions a company's data is, the more accurate its AI outcomes will be.
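The data-readiness criteria above can be made concrete with simple automated checks. The sketch below is purely illustrative: the function name, fields and thresholds are assumptions, and a real readiness review would be far broader, but it shows how completeness and group representation might be screened before data feeds an AI system.

```python
from collections import Counter

def readiness_report(records, group_key, required_fields):
    """Run simple, illustrative AI-readiness checks on a list of dict records.

    - completeness: flags rows where a required field is missing or empty
    - representation: reports how concentrated the data is in any one group
    """
    total = len(records)
    incomplete = [r for r in records
                  if any(not r.get(f) for f in required_fields)]
    groups = Counter(r.get(group_key, "unknown") for r in records)
    max_share = max(groups.values()) / total if total else 0.0
    return {
        "rows": total,
        "incomplete_rows": len(incomplete),
        "group_counts": dict(groups),
        "max_group_share": round(max_share, 2),
    }

# Tiny made-up dataset for demonstration only
data = [
    {"age_band": "18-34", "label": "yes"},
    {"age_band": "18-34", "label": "no"},
    {"age_band": "35-54", "label": "yes"},
    {"age_band": "55+",   "label": ""},   # missing label -> incomplete
]
report = readiness_report(data, group_key="age_band",
                          required_fields=["age_band", "label"])
print(report)
```

In practice, thresholds for "too incomplete" or "too concentrated" would be set by the organization's own data governance standards, and human review (the double-checking noted above) would remain part of the loop.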
Without the appropriate guardrails, gen AI applications can result in long-term reputational damage, another major risk. Unintended negative outcomes, increasingly sophisticated cyberattacks, breaches of data privacy and unintended biases can all contribute to the erosion of trust from shareholders and key stakeholders.
To mitigate these risks and realize the potential of AI, boards and companies should build a governance framework that moves them toward ethical and responsible AI. While myriad AI governance frameworks exist, ranging from voluntary guidelines to mandatory rules published by governments, policymakers and multi-stakeholder organizations, it is prudent to focus AI governance efforts holistically across five critical dimensions.
- Intent ensures that the design, implementation and data used are aligned with the company's stated objectives.
- Fairness ensures that the data and algorithms used promote equal access and minimize the likelihood of biased outcomes.
- Transparency ensures that AI systems can produce explainable and repeatable outcomes, eliminating the “black box” effect where outcomes cannot be explained.
- Safety and security ensure that AI systems protect data privacy and guard AI decision engines from outside intrusions.
- Accountability ensures that AI systems can be audited and assessed for bias and accuracy, and that the use cases and outcomes comply with laws, regulations and ethical standards.
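The fairness dimension above is often operationalized with quantitative checks. One common and simple metric is the gap in positive-outcome rates between groups (demographic parity). The sketch below is a minimal illustration with invented data; the function name and any threshold for "too large a gap" are assumptions, not a regulatory standard.

```python
def demographic_parity_gap(outcomes):
    """Compute the gap in positive-outcome rates across groups.

    `outcomes` maps each group name to a list of 0/1 model decisions.
    A gap near 0 suggests similar treatment across groups; a large gap
    flags potential bias for deeper human review.
    """
    rates = {g: sum(v) / len(v) for g, v in outcomes.items() if v}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions for two groups
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 0],   # 25% approved
})
print(f"approval rates: {rates}, gap: {gap:.2f}")
```

A metric like this supports the accountability dimension too: logging it over time gives auditors a repeatable, explainable record of how the system treats different groups.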
To establish a governance framework that properly balances AI's transformative promise and its inherent risks, boards and management should focus on a few fundamental discussions that are fit-for-purpose for their companies.
Directors and management need to define their organization's gen AI ambition, including their risk appetite, how much money and resources they're willing to invest and how integral gen AI is to their business strategies. While we've all been inundated with AI hype in the past few years, it's important to cut through the hype and complexity and examine gen AI's potential risks and opportunities through several lenses.
First, boards and companies need to understand the difference between what technological research and consulting firm Gartner calls “everyday AI” versus game-changing AI. Everyday AI focuses on productivity, enabling us to work faster and more efficiently. While everyday AI can supercharge our productivity, it's unlikely to give us a long-term, sustainable competitive advantage. Game-changing AI, on the other hand, focuses on creativity. It enables us to offer new products and services that wouldn't otherwise be possible and, in some cases, new capabilities that have the potential to disrupt entire industries. However, game-changing AI typically has a high barrier to entry, and not every organization will have the ability or desire to invest in it.
Next, it's important for directors and management to assess gen AI's potential risks and opportunities across the company's internal-facing and external-facing capabilities, and to prioritize so that resources go to the areas that make the most sense for their particular business.
Once all of these have been fully analyzed and understood, boards need to collaborate with management teams to facilitate the agile and safe adoption of gen AI. That means establishing the organization's AI Guiding Principles (the AI Code of Conduct) to guide its AI uses, ensuring that its data is AI-ready and implementing AI-ready security, including an acceptable use policy for public gen AI solutions.
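Part of an acceptable use policy for public gen AI tools can be enforced in software, for example by screening prompts for sensitive data before they leave the company. The sketch below is a minimal illustration under stated assumptions: the pattern list is tiny and hypothetical, and a production control would use a proper data-loss-prevention system rather than two regular expressions.

```python
import re

# Illustrative patterns only; a real policy would cover far more
# categories (client names, source code, financial records, etc.).
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def violates_acceptable_use(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in a prompt
    before it is sent to a public gen AI service."""
    return [name for name, pat in BLOCKED_PATTERNS.items()
            if pat.search(prompt)]

print(violates_acceptable_use("Summarize client jane@corp.com's file"))
# -> ['email']
```

A check like this is one small, auditable control; the broader policy work (what employees may paste into public tools, and why) remains a governance decision for the board and management.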
AI and gen AI present both tremendous opportunities and risks. It is important for boards and management to have a governance framework that provides the right balance in order to create long-term value for their companies.