The Actuaries Institute and the Australian Human Rights Commission (AHRC) have teamed up to release a guide to help insurers and actuaries comply with federal anti-discrimination legislation when using artificial intelligence (AI) to price and underwrite insurance products.
The guidance resource was developed after a 2021 report by the AHRC that examined the human rights impacts of new and emerging technologies. One of the report's recommendations was a set of guidelines for government and non-government organisations on complying with federal anti-discrimination laws when AI is used for decision-making.
Elayne Grace, chief executive of the Actuaries Institute, said the collaboration demonstrates the complex nature of society's issues and the need for a multi-disciplinary approach, particularly where data and technology are used to improve the provision of fundamental services such as insurance.
Human Rights Commissioner Lorraine Finlay added: “With AI increasingly being used by businesses to make decisions that may affect people's basic rights, it is essential that we have rigorous protections in place to ensure the integrity of our anti-discrimination laws. But without adequate safeguards, there is the possibility that algorithmic bias might cause people to suffer discrimination due to characteristics such as age, race, disability, or sex.”
An Actuaries Institute survey this year found that at least 70% of respondents indicated a need for further guidance on compliance in this emerging area of wider AI use.
Grace commented: “Australia's anti-discrimination laws are long-standing, but there is limited guidance and case law available to practitioners. The complexity arising from differing anti-discrimination legislation in Australia at the federal, state, and territory levels compounds the challenges facing actuaries and may reflect an opportunity for reform.”
Actuary Chris Dolman, who led the Actuaries Institute's contribution to the guidance resource as a representative of the Data Science Practice Committee, outlined strategies insurers can use to address algorithmic bias and avoid discriminatory outcomes, including rigorous design, regular testing, and ongoing monitoring of AI systems.
“In the insurance context, AI may be used in a wide range of different ways, including in relation to pricing, underwriting, marketing, customer service, including claims management, or internal operations,” he said. “This guidance resource focuses on the use of AI in pricing and underwriting decisions, as these decisions are already likely to use AI and, by their nature, will have a financial impact that may be significant for an individual. Such decisions may also be more likely to give rise to discrimination complaints from customers. However, many of the general principles outlined may also apply to the use of AI-informed decision-making in other contexts.”
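The guidance resource does not prescribe any particular testing method, but the "regular testing" Dolman refers to can be illustrated with a simple disparity check on pricing outcomes. The sketch below is a hypothetical example, not drawn from the guidance: it compares average quoted premiums and decline rates across groups defined by a protected attribute and flags gaps above a chosen threshold. The sample records, attribute names, and thresholds are all assumptions for illustration only.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical quote records: each holds a protected attribute value,
# the premium the model quoted, and whether cover was declined.
# In practice these would come from an insurer's own pricing logs.
quotes = [
    {"group": "A", "premium": 520.0, "declined": False},
    {"group": "A", "premium": 610.0, "declined": False},
    {"group": "B", "premium": 750.0, "declined": True},
    {"group": "B", "premium": 690.0, "declined": False},
]

def disparity_report(records, max_premium_ratio=1.2, max_decline_gap=0.1):
    """Flag large gaps in average premium or decline rate between groups.

    Thresholds are illustrative; acceptable gaps would depend on the
    insurer's actuarial justification and legal advice.
    """
    by_group = defaultdict(list)
    for record in records:
        by_group[record["group"]].append(record)

    avg_premium = {g: mean(r["premium"] for r in rs) for g, rs in by_group.items()}
    decline_rate = {g: mean(r["declined"] for r in rs) for g, rs in by_group.items()}

    flags = []
    premium_ratio = max(avg_premium.values()) / min(avg_premium.values())
    if premium_ratio > max_premium_ratio:
        flags.append(f"average premium ratio {premium_ratio:.2f} exceeds {max_premium_ratio}")

    decline_gap = max(decline_rate.values()) - min(decline_rate.values())
    if decline_gap > max_decline_gap:
        flags.append(f"decline-rate gap {decline_gap:.2f} exceeds {max_decline_gap}")

    return flags

for issue in disparity_report(quotes):
    print("review needed:", issue)
```

A check like this would typically be run whenever a pricing model is retrained or its inputs change, with flagged disparities escalated for actuarial and legal review rather than treated as automatic evidence of unlawful discrimination.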