AI use and testing continue to be a murky area, creating more risk exposures for insurers and their commercial clients, Dentons Canada LLP counsel warned at a media roundtable Wednesday.
Within the insurance industry, artificial intelligence testing and use cases have expanded from fraud detection to claims adjudication and even underwriting. Property and casualty insurers and brokerages testing AI in their operations should make sure humans are overseeing the AI’s output, counsel cautioned: the machines are still learning, and what looks logical to the model doesn’t always make sense in practice.
Similarly, insurers seeking opportunities to provide AI coverage to clients must be careful about how their clients are using AI.
AI use cases in insurance
Canada’s P&C industry has been testing and using AI to detect and prevent fraudulent claims for some time. But now insurers are starting to use it to adjudicate claims, and that is exposing them to potential liability.
“We’re now starting to see it used in claims and claims adjudication, which is a higher-risk area,” said Kirsten Thompson, a partner who leads the privacy and cybersecurity group at Dentons Canada LLP. “We’ve started to see class actions come out of the U.S. in the area of health, where AI was used in the process of adjudication [and] denied claims because the AI was looking at its data sets and basically saying, ‘No, old people are at risk, we’re going to deny the claims.’
“And that, in the AI’s mind, was a perfectly reasonable thing. Now there are a bunch of lawsuits.”
Insurers are testing AI in the underwriting function as well, Brad Neilson, vice president of personal lines pricing at Intact Financial Corporation, told attendees at the National Insurance Conference of Canada (NICC) in Vancouver in September.
On the basis of these tests, Neilson strongly recommended the AI models’ output should be subject to scrutiny not just by modelling experts, but also by people with expertise in the P&C insurance business. Otherwise, the AI model might not make assumptions appropriate to the business, or the business may adopt models that spit out results that don’t make sense, he said.
“So I’ll tell you about a funny example — the regulator might not think it’s funny,” Neilson joked before proceeding. “Very early in our process of exploring machine learning, we had a case where an auto comprehensive premium was generated of $4 million. And I think even in the high-theft market, that’s probably too high.
“So we did a deep dive into what was going on here. Getting into the nitty-gritty of the data, there was an assumption that the person was 95 years old. There’s not that many of those [drivers] on the road, so you have limited data. And you had a model that was overfit to this limited data…
“I don’t think I can overemphasize the modelling expertise you need to build up in your company before you go full speed on deploying some of these [AI] models.”
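The dynamic Neilson describes can be illustrated with a small, purely hypothetical sketch, using synthetic data and an off-the-shelf scikit-learn model rather than anything resembling Intact’s actual pricing system. With only a couple of records in an age bucket, an unconstrained model simply memorises whatever happens to be there, including a single outsized claim, while forcing the model to pool sparse buckets with their neighbours dilutes that effect:

# Illustrative sketch only (synthetic data, generic model; not Intact's system):
# how a flexible model, overfit to a sparse age bucket, can quote an
# absurd comprehensive premium.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Thousands of drivers aged 16-80 with loss costs in a plausible range.
ages = rng.integers(16, 81, size=5000)
loss_costs = 1500.0 + 20.0 * np.abs(ages - 45) + rng.normal(0.0, 200.0, size=5000)

# Only two 95-year-old drivers in the book, one of whom had a huge claim.
ages = np.append(ages, [95, 95])
loss_costs = np.append(loss_costs, [2400.0, 4_000_000.0])

X = ages.reshape(-1, 1).astype(float)

# An unconstrained tree memorises the sparse bucket and its outlier claim.
overfit = DecisionTreeRegressor(random_state=0).fit(X, loss_costs)

# Requiring many records per leaf pools the sparse bucket with nearby ages,
# diluting the influence of the single extreme record.
pooled = DecisionTreeRegressor(min_samples_leaf=500, random_state=0).fit(X, loss_costs)

age_95 = np.array([[95.0]])
print(f"overfit model, age 95: ${overfit.predict(age_95)[0]:,.0f}")
print(f"pooled model, age 95:  ${pooled.predict(age_95)[0]:,.0f}")

Real pricing models are far more sophisticated than this toy example, but the failure mode is the same one Neilson flags: with too few records in a segment, a flexible model will happily extrapolate from noise unless people who understand both the modelling and the business are checking its output.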
AI and professional liability exposure
When it comes to insuring business clients experimenting with AI, Canada’s P&C insurance industry can expect to see more claims made against their clients’ errors and omissions, directors and officers, and professional liability policies, counsel cautioned at the Dentons Canada event.
Policy coverage for AI seems to be following the same path as cyber insurance, Thompson said.
“We’re starting to see, just like the cyber insurance cycles, all the insurers jumped into it with very poorly defined policies and what was covered,” Thompson added. “And then, as claims started rising, and ransomware started to become a significant issue, they started backing out of the market, and putting [cybersecurity] solutions in.
“Now we’re starting to see the dawn of AI insurance. And I expect that to follow the same cycle. So if you get in on the ground floor now, and get your AI insurance, I expect five years from now, that [same coverage] will probably not be offered for similar reasons…”
Many insurers’ corporate clients are testing AI in their operations as well. But if no one is minding the store while AI spits out its results, insurers’ E&O, D&O and professional liability policies may be exposed.
At the roundtable, Dentons litigation group partner Deepshikha Dutt, who practises insurance law in the areas of D&O, E&O, negligence and coverage litigation, cited an example of how, in the legal profession, counsel themselves can be exposed to professional liability claims for unsupervised errors made by generative AI.
“I’m now seeing two incidents where lawyers relied on ChatGPT to do their research, and I personally got one letter [from a lawyer], it wasn’t from the lawyer herself, who relied on ChatGPT to do research on a certain issue,” Dutt said. “It spit out cases with citations and facts. And I got the letter, and I had my associate research the case. The case doesn’t exist. There were 10 cases in that letter. None of the cases existed.
“I was shocked. I don’t even know how you came up with a case name with a citation and principles and a judge’s name attached to that case, so you have to be really careful.”
She added courts have responded by changing their rules, so counsel must now declare when they have used AI as part of their research. That requirement exists in the Northwest Territories, Alberta, B.C. and Ontario, as well as in the Federal Court of Canada.
“People try to use [AI] as a tool to help them, but there need to be checks and balances in place in order to be sure [AI] is doing what you’re using it for.”
Feature image courtesy of iStock.com/Vertigo3d