As artificial intelligence (AI) technologies continue to gain traction in business, corporate boards face increasing pressure to oversee AI adoption responsibly.
Experts from Grant Thornton and other industry specialists underscored the importance of balancing innovation with risk management while ensuring compliance with ethical and regulatory standards. They emphasised that while AI offers significant potential, its integration requires deliberate oversight by board members.
“Boards need to make sure management’s AI initiatives are grounded appropriately and have flexibility within their structures to accommodate changes that might occur through M&A or other shifts in the environment,” said Janet Malzone, CEO of Grant Thornton LLP.
She noted that building a robust foundation for AI adoption could pave the way for sustainable and well-managed performance.
AI’s rapid evolution raises questions about whether board members possess the expertise needed to oversee such technology effectively. A survey conducted at the NACD Summit revealed that 23% of respondents have AI specialists on their boards, a significant increase from 4% the previous year. This shift highlights a growing awareness of the importance of specialised knowledge in AI governance.
Some boards are considering forming technology-focused committees to address these challenges. According to the National Association of Corporate Directors (NACD), the number of Fortune 100 boards with dedicated technology committees increased from 7 in 2012 to 36 in 2022.
These committees could provide critical oversight of AI projects and ensure that the organisation’s AI strategies align with long-term goals.
Ethan Rojhani, a principal in Grant Thornton’s Risk Advisory Services, stressed the need for boards to evaluate risks and opportunities in individual AI use cases.
“There’s a different level of risk from the advertising side. If you lose a customer, that’s a problem. But if you’re using AI for hiring and a candidate can prove the AI algorithm was biased, you may have just broken the law,” he said, illustrating the complexities of AI oversight.
Although 90% of CFOs surveyed by Grant Thornton in Q3 2024 reported using or exploring generative AI, only 50% had formal training programmes in place—a drop of eight percentage points from the previous quarter. This gap underscores a critical need for investment in employee training to maximise AI’s potential while mitigating associated risks.
Joe Ranzau, Grant Thornton’s managing principal for growth advisory services, highlighted the disconnect between AI investment and workforce preparedness.
“While we’re investing in this technology, we’re also cutting back on the investment in our people to use that technology to increase productivity and also to manage the associated risks,” he said.
Ranzau suggested that boards encourage management to allocate AI-driven cost savings toward training initiatives, creating a positive cycle of innovation and workforce development.
AI’s role in human resources is also expanding, with applications such as candidate screening and internal talent identification. However, these developments carry risks, particularly regarding algorithmic bias. Boards are advised to ensure robust safeguards are in place to prevent discriminatory outcomes and to consider how AI can be leveraged to improve workforce productivity.
Boards must oversee how AI aligns with organisational ethics and regulatory frameworks. Investment in data privacy and security measures is critical to addressing the risks associated with AI.
Johnny Lee, a principal in Grant Thornton’s Risk Advisory Services, noted three pillars of AI data security: preventive measures, detective capabilities, and insurance coverage to address residual risks.
“I don’t think there can be AI adoption on a meaningful scale without insurance,” Lee said. “If you’re not detecting that drift and its relative impact on liability, transparency, bias and other factors, then you may not be testing the model in a way that would be deemed responsible.”
AI systems must also be monitored for issues such as model drift—when an AI model’s outputs shift over time due to changes in input data—and hallucinations, where AI generates incorrect or biased outputs. Effective oversight involves ensuring that organisations have mechanisms to detect and mitigate these issues.
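To make the idea of drift detection concrete, here is a minimal, self-contained sketch of one widely used statistic for comparing a model input's current distribution against its training-time baseline: the population stability index (PSI). The article does not prescribe any particular method; this example, including the bin count and the 0.2 alert threshold (a common rule of thumb), is an illustrative assumption, not a recommendation of a specific monitoring tool.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample ('expected',
    e.g. training data) and a current sample ('actual') of one numeric
    model input. Larger values mean the distribution has shifted more;
    values above ~0.2 are often treated as a sign of significant drift.
    Binning scheme and thresholds here are illustrative assumptions."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def dist(values):
        counts = [0] * bins
        for v in values:
            # clamp out-of-range values into the first/last bin
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        total = len(values)
        # small floor avoids log(0) for empty bins
        return [max(c / total, 1e-6) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # stand-in for training data
shifted = [0.1 * i + 4.0 for i in range(100)]   # current inputs, shifted up

print(psi(baseline, baseline))        # identical samples: 0.0, no drift
print(psi(baseline, shifted) > 0.2)   # True: shift exceeds alert threshold
```

In a real monitoring pipeline a check like this would run on a schedule for each important model input, with alerts routed to whoever owns the model — which is the kind of mechanism boards can ask management to demonstrate exists.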
Failure to adopt AI effectively could leave organisations at a competitive disadvantage. Experts pointed to sectors such as customer service, pharmaceuticals, and professional services as areas where AI is driving significant disruption.
For example, advanced AI chatbots in customer service have outperformed human operators in customer satisfaction ratings, while generative AI accelerates drug discovery processes in pharmaceuticals.
Boards should ask management how AI investments align with long-term strategic objectives, including whether additional external expertise is needed. In some cases, bringing in third-party consultants to evaluate AI opportunities can provide fresh perspectives and unlock untapped potential.
As organisations integrate AI into their operations, boards have a critical role in ensuring that its adoption aligns with strategic priorities, workforce needs, and ethical standards. By asking the right questions and fostering a culture of continuous learning, boards can help their organisations navigate the complexities of AI and remain competitive in an evolving technological landscape.
AI adoption presents both risks and opportunities, and organisations that fail to address these issues may face operational and reputational challenges. With deliberate oversight and strategic investment, however, organisations can realise the full potential of AI while managing its inherent risks.