As Canadian companies, including insurers and brokers, expand their use of artificial intelligence, they often rely on third parties to provide data, model training, and expertise. That reliance potentially broadens their exposure to third-party risk, industry defence counsel caution.
“Very few insurers are developing their own AI,” Kirsten Thompson, partner and the national lead of the privacy and cybersecurity group at Dentons LLP, said at the firm’s insurance conference in November 2024. “They’re usually using a vendor, third party, and then training it on their data sets. So, in my experience, there’s not a lot of scrutiny of third-party AI vendors.”
A recently published IBM study observes that only a minority of companies experimenting with AI models and technology are doing so in-house. Most of the help is coming from outside parties.
“The data reveals that Canadian businesses are using a combination of buying or leasing AI tools from vendors (65%), using an open-source ecosystem (57%) as compared to in-house development (42%),” says the online IBM study, which canvassed the opinions of 2,413 IT decision makers in the United States, Canada, Mexico, Brazil, the U.K., France, Germany, Spain, India, Singapore, Indonesia, and South Korea.
More than half (56%) of Canadians in IBM’s survey say they will increase their AI investments in 2025. They plan to leverage open-source ecosystems (41%), hire specialized talent (41%), evaluate models (43%), and use cloud-managed services (49%) to adopt AI.
Companies need outside help in part because many say they lack the expertise or systems to create their models in-house.
“’Data quality and availability’ [are] identified as the most common obstacles for Canadian organizations (49%) moving from AI pilots to full launch,” the IBM study finds. “This is followed by scalability issues (47%) and integration with existing systems (44%).
“Additionally, when implementing AI, the biggest challenges for Canadian organization[s] are technology integration (27%), lack of AI expertise (27%) and lack of AI governance (25%).”
But beware of relying on third-party IT expertise, says Thompson.
“I’ve got a matter on my desk right now, where it’s two kids in [southwestern Ontario] who came up with some genius thing, and a major insurer is about to unleash this into its systems,” she says. “Not much you can do with indemnification. I wouldn’t even go after two kids in [southwestern Ontario]. But that’s where you need good governance.
“What is your training centre? Where did the data come from? What are your fallbacks? What’s the explainability? What are your outcomes? Where is the transparency?”
Financial institutions are now using AI for more critical use cases, such as pricing, underwriting, claims management, trading, investment decisions, and credit adjudication, says a September 2024 report by the Office of the Superintendent of Financial Institutions (OSFI).
“The use of AI may amplify risks around data governance, modelling, operations, and cybersecurity,” OSFI’s report states. “Third-party risks increase as external vendors are relied upon to provide AI solutions. There are also new legal and reputational risks from the consumer impacts of using this technology that may affect financial institutions without appropriate safeguards and accountability.”
In a forthcoming story to be published in the February-March 2025 print edition of Canadian Underwriter, Ruby Rai, cyber practice leader (Canada) at Marsh McLennan, says reliance on any technology is part of the AI risk exposure, so guardrails such as a governance framework will be an important part of risk management efforts for any organization.
As AI adoption permeates business processes, the goalposts for privacy and security controls will also shift, she says.
Feature image courtesy of iStock.com/Golden Sikorka