State regulations on the use of artificial intelligence (AI) by insurers are not keeping pace with advancements in technology, particularly in the health insurance sector, according to consumer representatives to the National Association of Insurance Commissioners (NAIC).
The group noted that while AI is being used effectively in some cases, its deployment in health insurance poses unique risks that require closer scrutiny.
Health insurers are increasingly using AI for utilization management, a process that determines the necessity and appropriateness of medical services. Proponents of AI claim it streamlines approvals and reduces administrative burdens.
However, Wayne Turner, senior attorney with the National Health Law Program and an NAIC consumer representative, cautioned that the technology also introduces potential harms.
“AI is being used effectively in many cases, but it’s also causing harm, particularly in the health insurance space,” Turner said in a report from AM Best, emphasizing the distinct challenges in this sector compared to life or property/casualty insurance.
Features like prior authorization and step therapy, Turner noted, add complexity to how AI is applied.
In response to these concerns, the NAIC is preparing to distribute a survey to health insurers operating across states to better understand how AI is being utilized. Turner said this initiative aligns with efforts by consumer representatives to raise awareness among state regulators.
Although consumer representatives do not work for the NAIC, they provide guidance on best practices for protecting policyholders. One key area of focus, according to Silvia Yee, senior staff attorney and policy analyst for the Disability Rights Education and Defense Fund, is ensuring that policyholder experiences are central to regulatory decisions.
“Tech experts and even good-hearted regulators can’t know all the fresh barriers that AI use may be bringing, or the old barriers that AI may be exacerbating, without consulting with consumers,” Yee said.
Yee and other representatives are advocating for greater transparency in data use, as well as rigorous training and testing of AI tools. They also recommend reviewing existing anti-discrimination and data privacy laws to ensure they adequately cover AI-driven decisions in utilization management.
Consumer representatives stressed the importance of clearly assigning responsibility when AI causes consumer harm, whether through discrimination, privacy breaches, or wrongful coverage denials. Regulations, they said, should ensure that AI tools prioritize quality of care and impose significant penalties for noncompliance to deter misuse.
The group also called for robust oversight and appeals processes for decisions influenced by AI. Human oversight, they said, should be integral to AI-driven utilization management systems.
Recognizing the rapidly evolving nature of AI, the representatives emphasized the need for collaboration among regulators, technology experts, industry stakeholders, and consumers to ensure regulations remain relevant as the technology advances.
As of Oct. 31, 17 states had adopted the NAIC’s model AI bulletin, and four states had implemented insurance-specific regulations or guidance, according to the association.
Recently, Oklahoma released guidelines for insurers’ AI use, modeled on NAIC recommendations. However, Yee said that many states have yet to address AI risks in a meaningful way.
“If the first step is admitting a problem, many states have not even gotten to this first step, much less tried to actively address it through regulation or policy,” Yee said.
She added that the rapid evolution and broad application of AI present challenges for regulators attempting to craft enforceable policies.
“One challenge is coming up with something specific enough to be readily implementable and enforceable, while also being general enough to cover how AI is already evolving, adapting to different uses and spreading across industries rapidly,” Yee said.