Global insurtech investment reached $1.38 billion in the third quarter of 2024, marking the highest funding level since Q1 2023, according to Gallagher Re's Global Insurtech Report.
Of the total investment, 55.5% went to mega-rounds valued at $100 million or more. Although the sector recorded just 77 deals in Q3 2024, a nearly four-year low, Gallagher Re noted that artificial intelligence remains a core theme, with 63.4% of deals targeting AI-centered insurtechs. The report also highlighted a trend among (re)insurers, which directed their technology investments primarily to mid-stage funding rounds.
Gallagher Re also explored the “democratization” of insurtech funding, noting that while mega-round deals were once dominant, funding amounts in recent quarters have trended toward a more “true” average, reflecting a shift in market dynamics. In past reports, Gallagher Re observed that although total insurtech funding remains below its 2021 peak, more startups are now raising amounts closer to that representative average.
During the 2021 funding peak, mega-rounds accounted for 62% of the capital invested, with $9.8 billion of the $15.8 billion total coming from these large deals. Yet those deals made up only 8% of transactions, introducing volatility and creating a misleading picture of the average deal size.
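To see how a handful of mega-rounds can distort the headline average, a back-of-envelope calculation using only the figures above is instructive. The total number of 2021 deals is not given here, but it cancels out of the size ratio; this is my own arithmetic, not the report's methodology.

```python
# Back-of-envelope check: how much larger was a typical 2021 mega-round
# than a typical non-mega deal? Uses only the figures cited above; the
# unknown total deal count N cancels out of the ratio.

total_funding_bn = 15.8    # full-year 2021 insurtech funding, USD billions
mega_funding_bn = 9.8      # portion raised in mega-rounds ($100M+)
mega_deal_share = 0.08     # mega-rounds as a share of all transactions

other_funding_bn = total_funding_bn - mega_funding_bn   # $6.0B
other_deal_share = 1 - mega_deal_share                  # 92% of deals

# Average mega-round size divided by average other-deal size:
# (mega_funding_bn / (mega_deal_share * N)) / (other_funding_bn / (other_deal_share * N))
size_ratio = (mega_funding_bn / mega_deal_share) / (other_funding_bn / other_deal_share)
print(f"A typical mega-round was roughly {size_ratio:.0f}x a typical other deal")
# -> ~19x, which is why a simple mean overstated what most startups actually raised
```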
Gallagher Re’s analysis further suggests that in today’s funding environment, with quarterly investment now averaging around $1 billion, startups may be securing more stable funding relative to the number of deals.
This shift raises questions about how capital is distributed across the sector, and about whether smaller insurtechs are accessing relatively more capital on a per-deal basis now than they did during the peak.
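A similarly rough per-deal comparison, using only this article's Q3 2024 figures rather than the report's own methodology, gives a sense of the current average:

```python
# Back-of-envelope only: average deal size implied by this article's figures.
q3_2024_funding_bn = 1.38   # USD billions invested in Q3 2024
q3_2024_deal_count = 77     # deals recorded in Q3 2024

avg_deal_m = q3_2024_funding_bn * 1000 / q3_2024_deal_count
print(f"Implied Q3 2024 average: ~${avg_deal_m:.0f}M per deal")
# -> ~$18M; with 55.5% of the quarter's capital still in mega-rounds, the
#    median deal is smaller, so treat this mean as an upper bound on a
#    "typical" raise.
```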
The report also highlights a significant regulatory development in artificial intelligence: the European Artificial Intelligence Act (AI Act), which took effect Aug. 1, 2024. Gallagher Re notes that this legislation, described as the first comprehensive AI law globally, focuses on managing risks related to health, safety, and fundamental rights.
The AI Act imposes clear obligations under a risk-based framework that categorizes AI applications by risk level. Low-risk systems, such as spam filters, face minimal obligations, while high-risk AI systems in areas such as medical applications must adhere to strict requirements.
Certain AI uses, like social scoring, are outright banned due to perceived threats to fundamental rights.
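As an illustrative simplification only, and not legal guidance, the Act's tiered structure can be pictured as a mapping from risk category to obligation level. The tier names below follow the description above, and the obligation summaries are paraphrased.

```python
# Illustrative simplification of the AI Act's risk-based framework
# (paraphrased from the description above; not legal guidance).
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"        # e.g., spam filters
    HIGH = "high"              # e.g., medical applications
    UNACCEPTABLE = "banned"    # e.g., social scoring

OBLIGATIONS = {
    RiskTier.MINIMAL: "few or no obligations",
    RiskTier.HIGH: "strict requirements before and after deployment",
    RiskTier.UNACCEPTABLE: "prohibited outright",
}

for tier in RiskTier:
    print(f"{tier.name}: {OBLIGATIONS[tier]}")
```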
Gallagher Re observed that the AI Act’s implications may extend beyond Europe, with other regions potentially examining similar regulatory frameworks. In the U.S., state regulators are expected to play a key role in deciding how AI regulations apply within various sectors, including insurance and reinsurance.
The ongoing conversation around AI regulation reflects a broader effort to ensure AI's responsible use in industries where the technology is rapidly evolving.