Managing the risk of AI: What nonprofit leaders need to know

The AI risks every nonprofit faces

This is a contributed piece from Intact Insurance Specialty Solutions.

Nonprofit leaders face numerous risk management exposures, and artificial intelligence (AI) has been added to the list. AI is reshaping industries, including the nonprofit sector, by offering tools to streamline operations, enhance decision-making, and improve service delivery.

However, Nelson Kefauver, head of financial and professional lines, and Katherine Gauthier, vice president of financial and professional lines at Intact Insurance, emphasize that these advancements also introduce new risks that must be managed effectively. Without adequate insurance coverage, nonprofits may expose themselves to substantial financial, operational, reputational, and strategic risks that can jeopardize their mission and sustainability.

Discrimination, bias, and third parties 

AI systems trained on biased data can perpetuate that bias in their decisions, favoring certain demographics over others. In employment screening, for instance, an AI algorithm might pick up age, gender, or other stereotypes from its training data and screen out qualified candidates. Similarly, foundations or grantmaking organizations might rely on biased AI-assisted processes to decide where to allocate grants and donations, and economic development nonprofits might base lending decisions on poorly trained AI models.

Social engineering and the rise of deepfakes

Nonprofit organizations are becoming prime targets for social engineering attacks, where criminals manipulate human behavior to steal funds. With the rise of AI, these schemes are becoming more sophisticated. Cybercriminals can now use AI to mimic a co-worker’s tone and writing style in emails, making their fraudulent requests harder to detect and more successful. Additionally, deepfakes, combined with AI-driven analysis, enable impersonations in phone or video calls, further increasing the risk of theft for nonprofits.

Intellectual property infringement 

AI-generated content from nonprofits poses the risk of infringing on another party’s intellectual property. This is especially concerning for nonprofit broadcasting and publishing organizations that use AI tools to create and share news or information. Nonprofit research labs and any organization producing written or video content with AI also face heightened exposure. Additionally, nonprofits may unknowingly face intellectual property claims if third-party vendors generate AI content on their behalf without proper oversight.

Mitigating legal risks with insurance

Lawsuits and claims—especially those related to discrimination or fraud—can severely tarnish a nonprofit's reputation. Stakeholders such as donors, beneficiaries, and partners may lose confidence in your ability to manage risks effectively, which can lead to a decline in support and collaboration opportunities. It’s essential to work closely with your insurance broker to review your existing policies and consider coverage for AI-related risks.

While many nonprofit organizations invest in Directors and Officers (D&O) coverage, those without employees often overlook employment practices coverage. Most employment practices policies protect against discrimination claims not only from employees but also from third parties, so even organizations without staff should discuss these options with their broker.

Without D&O insurance, board members and executives could jeopardize their personal assets in the event of a lawsuit. Similarly, lacking employment practices insurance exposes the organization to claims related to wrongful termination, harassment, and discrimination, significantly increasing the risk of costly legal battles.

Additionally, it's vital to review insurance policies with your broker to evaluate the necessity of crime and cyber coverage. These policies can help mitigate the financial repercussions of data breaches, covering costs such as notifications, credit monitoring for affected individuals, and even regulatory fines in some cases. Be sure to check whether these policies include coverage for social engineering fraud to protect against financial losses due to scams and fraudulent activities.

Operational dependence and errors

AI can be incredibly helpful when consolidating information or doing research, but its output must be reviewed for accuracy. Over-reliance on AI for decision-making is risky if the system makes an error or fails to adapt to new circumstances. For instance, a chatbot delivering incorrect advice or an AI-driven analysis producing faulty results could harm a nonprofit’s credibility and effectiveness, potentially leading to claims related to the quality of services provided.

Tips for nonprofits

According to Kefauver and Gauthier, operational measures you can take to better protect your nonprofit from AI risks include:

  • Implement rigorous oversight: Ensure that AI tools undergo the same level of scrutiny and oversight as human-driven processes. Conduct regular audits and reviews of AI outputs to identify and correct any biases or errors.
  • Enhance fraud prevention training: Traditional fraud prevention measures may not suffice. Equip employees with training to recognize and respond to sophisticated AI-driven fraud schemes effectively.
  • Strengthen cybersecurity measures: Invest in robust cybersecurity practices to safeguard your data and AI systems from breaches. This includes employing encryption, establishing access controls, and conducting regular security assessments to identify vulnerabilities.
