Artificial intelligence (AI) has helped streamline and transform the insurance process, from underwriting to claims. But as these tools become increasingly embedded in carriers’ lines of business, so too grows the need for governance to ensure the technologies are used in safe and compliant ways.
Insurance companies should be increasingly engaged in the governance of AI systems in the face of growing regulatory pressure. Every organization should have an AI governance framework in place to avoid the risk of violating privacy and data protection laws, being accused of discrimination or bias, or engaging in unfair practices.
“As soon as a similar regulation or legislation is passed, organizations are placed in a precarious position because [lack of governance] can lead to fines, loss of market share, and bad press. Every business that uses AI needs to have this on their radar,” said Marcus Daley, technical co-founder of NeuralMetrics.
NeuralMetrics is an insurtech data provider that aids in commercial underwriting for property and casualty (P&C) insurers. The Colorado-based firm’s proprietary AI technology also serves financial services companies and banks.
“If carriers are using artificial intelligence to process personally identifiable information, they should be tracking that very closely and understanding precisely how that’s being used, because it is an area of liability that they may not be aware of,” Daley told Insurance Business.
The Council of the European Union last month officially adopted its common position on the Artificial Intelligence Act, becoming the first major body to establish standards for regulating or banning certain uses of AI.
The law assigns AI applications to three risk categories: unacceptable risk, high-risk applications, and other applications not specifically banned or considered high-risk. Insurance AI tools, such as those used for risk assessment and pricing in health and life insurance, are deemed high-risk under the AI Act and will be subject to more stringent requirements.
What’s noteworthy about the EU’s AI Act is that it sets a benchmark for other countries seeking to regulate AI technologies more effectively. There is currently no comprehensive federal legislation on AI in the US. But in October 2022, the Biden administration published a blueprint for an AI “bill of rights” that includes guidelines on how to protect data, minimize bias, and reduce the use of surveillance.
The blueprint contains five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.
The Blueprint for an #AIBillofRights is for all of us:
- Project managers designing a new product
- Parents seeking protections for kids
- Workers advocating for better conditions
- Policymakers looking to protect constituents https://t.co/2wIjyAKEmy
— White House Office of Science & Technology Policy (@WHOSTP) October 6, 2022
The “bill of rights” is regarded as a first step towards establishing accountability for AI and tech companies, many of which call the US home. However, some critics say the blueprint lacks teeth and are calling for tougher AI regulation.
Daley suggested insurance companies need to step up the governance of AI technologies within their operations, embedding several key attributes in their AI governance plans.
Daley stressed that carriers must be able to answer questions about their AI decisions, explain outcomes, and ensure AI models stay accurate over time. This transparency also supports compliance by providing proof of data provenance.
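Keeping a model “accurate over time” can be made concrete with a simple drift check. The sketch below is illustrative only: the function name, thresholds, and sample numbers are assumptions, not anything NeuralMetrics or the article describes.

```python
# Hypothetical sketch: flag when a deployed model's accuracy drifts below the
# level it was validated at, so the carrier can investigate or retrain.

def check_model_drift(baseline_accuracy: float,
                      recent_scores: list[float],
                      tolerance: float = 0.05) -> bool:
    """Return True if average recent accuracy has fallen more than
    `tolerance` below the validated baseline."""
    if not recent_scores:
        return False  # nothing to compare yet
    recent_avg = sum(recent_scores) / len(recent_scores)
    return (baseline_accuracy - recent_avg) > tolerance

# Example: model validated at 92% accuracy; three recent monthly audits.
drifted = check_model_drift(0.92, [0.90, 0.85, 0.82])
print(drifted)  # True -> trigger a review of the model
```

A real governance program would log these checks as evidence for regulators, which is where the compliance benefit Daley mentions comes in.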
When it comes to working with third-party AI technology providers, companies must do their due diligence.
“Many carriers don’t have the in-house talent to do the work. So, they’re going to have to go out and seek aid from an outside commercial entity. They should have a list of things that they require from that entity before they choose to engage; otherwise, it could create a massive amount of liability,” Daley said.
To stay on top of regulatory changes and advances in AI technology, insurance companies must consistently monitor, review, and evaluate their systems, making changes as needed.
Rigorous testing also helps detect and reduce bias in algorithms. “Governance is just a way to measure risk and opportunities, and the best way to manage risk is through automation,” Daley said. Automating inputs and testing the outputs produced creates consistent, reliable results.
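One common way to automate that kind of bias test is a paired-input check: feed the model applications that differ only in a protected attribute and verify the score does not change. The toy model and field names below are invented for illustration, not the methodology the article's sources use.

```python
# Minimal sketch: score paired inputs that differ only in a protected
# attribute and confirm the model treats them identically.

def score_risk(applicant: dict) -> float:
    """Toy pricing model that (correctly) ignores protected attributes."""
    base = 1.0
    base += 0.1 * applicant.get("prior_claims", 0)
    base += 0.05 if applicant["building_age"] > 50 else 0.0
    return round(base, 2)

def fairness_check(model, applicant: dict, attribute: str, values) -> bool:
    """Return True if changing only `attribute` never changes the score."""
    scores = {v: model({**applicant, attribute: v}) for v in values}
    return len(set(scores.values())) == 1

applicant = {"prior_claims": 2, "building_age": 30, "gender": "F"}
print(fairness_check(score_risk, applicant, "gender", ["F", "M", "X"]))  # True
```

Running such checks automatically on every model update is one way "automating inputs and testing the outputs" yields the consistent, auditable results Daley describes.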
To nurture trust with clients, regulators and other stakeholders, insurance companies must ensure that their AI processes remain accurate and free from bias.
Carriers should also watch the sources of their data and whether those sources are compliant. “As time goes on, you see that sometimes the source of the data is AI. The more you use AI, the more data it generates,” Daley explained.
“But under what circumstances can that data be used or not used? What’s the nature of the source? What are the terms of service [of the data provider]? Ensuring you understand where the data came from is as crucial as understanding how the AI generates the results.”
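The questions Daley raises can be encoded as a provenance gate: attach metadata about a record's origin and permitted uses, and reject data that cannot answer them. The schema and policy rules below are assumptions made up for this sketch, not an industry standard.

```python
# Illustrative sketch of a data-provenance check: each record carries metadata
# about where it came from and what uses its terms of service allow.

def provenance_ok(record: dict, intended_use: str) -> bool:
    """Reject data whose source is unknown or whose terms forbid this use."""
    meta = record.get("provenance", {})
    if not meta.get("source"):
        return False  # unknown origin: compliance cannot be verified
    if meta.get("ai_generated") and not meta.get("human_reviewed"):
        return False  # AI-derived data needs review before reuse
    return intended_use in meta.get("permitted_uses", set())

record = {
    "value": 1200,
    "provenance": {
        "source": "public-filings",
        "ai_generated": False,
        "permitted_uses": {"underwriting", "pricing"},
    },
}
print(provenance_ok(record, "underwriting"))  # True
print(provenance_ok(record, "marketing"))     # False
```

The point of the sketch is the questions it forces: what is the source, what do the terms of service permit, and was the data itself AI-generated.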