Artificial intelligence (AI) is proving adept at perpetrating sophisticated fraud.
Here’s one example: A year ago, thieves used AI-generated deepfake video to convince a company’s chief financial officer to transfer $35 million in company funds, mimicking the CEO and other staff members on a video call.
While industry sources tell Canadian Underwriter that cyber insurance policies do cover AI-perpetrated cybercrime, this raises the question: How will cyber policy terms and conditions evolve to respond as AI-generated fraud becomes more acute? And will policy exclusions evolve based on a company’s oversight of its AI projects?
“AI is just another cyber or digital technology software tool,” Neal Jardine, global director of cyber risk intelligence and head of claims at BOXX Insurance, explains. “It’s similar to other software we have used to communicate such as email [and] Word documents or to make calculations using Excel. AI software is now being used more broadly, as businesses adapt to new opportunities to achieve their organizational goals.
“Is AI covered by a cyber insurance policy? The answer is yes because we cover cyber and digital technology software. If we were to exclude AI, the insurance policy would be excluding a particular software tool businesses use to operate. The policy has to cover all software, as they are often interdependent, and businesses can choose to use them with a unique combination of technologies.”
Is policy language broad enough?
Lindsey Nelson, head of cyber development at CFC Underwriting, says the wording of cyber policies is intentionally broad to include AI.
“I think it’s fair to say most standalone cyber forms [are] intended to cover almost every form of digital risk they possibly can [including AI],” she says. “Because cyber risk has evolved so quickly, [if cyber policies didn’t use broad wording], we would be in a position where we would have to update our wording every single day if we wanted to capture every single term that came with [a risk] to make sure that it’s addressed.
“So, cyber insurers try to create that umbrella, all-encompassing term by intentionally keeping the language broad so that it captures the entire threat landscape.”
However, AI poses a potentially vast threat in the hands of cybercriminals, notes Kirsten Thompson, partner and national lead of the privacy and cybersecurity group at Dentons LLP in Toronto.
“In the cyber [insurance] space, claims are being denied on the basis of the actor,” she observed during a Dentons insurance media briefing last November. “When insurers started backing away from cyber insurance [during the pandemic], they started looking at things like ransomware. Well, that’s a criminal actor. That’s a criminal matter, not a cyber [matter], so it’s not covered by the cyber security provisions.”
Thompson adds: “It’ll be interesting to see [what happens with] AI-mediated cyber security incidents because there’s no actor you can point at. There’s autonomous AI that just probes systems and then breaks into them. So, if you have that, you can’t point to an actor for an exclusion.”
This article is excerpted from one that appeared in the February-March print edition of Canadian Underwriter. Feature image by iStock/Sansert Sangsakawrat