Artificial intelligence (AI) tools are helping brokers increase their operational efficiency and deliver better service, but they’re also creating risks for consumer protection, finds new research by the Registered Insurance Brokers of Ontario (RIBO).
The research, conducted by the Behavioural Insights Team, draws on reports and articles from regulators, academics and industry participants, as well as interviews with Ontario brokers, to understand how AI is used in the sector.
“One of the reasons we did this research was to keep pace, but also to make sure that the Code of Conduct is relevant,” says Jessica Harper, RIBO’s director of policy, licensing and standards, “and that it’s not flattening or preventing innovation from occurring.”
RIBO will use the research to shape regulatory guidance on AI, but it’s still too early to say what that guidance will look like, she says.
“The short answer…is that RIBO’s existing regulatory principles, like competence, suitability, and confidentiality found in the Code of Conduct are still fit-for-purpose — they just need to be reaffirmed in a novel context,” she explains.
One thing the research finds is that brokers, many of whom may have small operations or limited resources, are more likely to use third-party AI tools than to build their own in-house tools.
But unlike in-house tools (closed models, where organizations set their own guardrails and inputs), third-party tools can increase customer risk because they are black-box models whose inner workings users can’t explain.
Despite this, brokers remain responsible for their use of third-party tools, so it’s important they still comply with the Code of Conduct while using them, explains Harper.
How brokers use AI
Robotic process automation (RPA) is widespread among brokerages, RIBO’s research says, for streamlining back-office functions, like data and document management, or form completion.
Brokers are also experimenting with AI for customer-facing uses, like chatbots and generating policy renewal options. There’s also a significant trend of brokers using AI for risk modelling and pricing, RIBO finds.
Less common, though beginning to pick up, is brokerages’ use of generative AI tools like ChatGPT for marketing strategy or content creation.
However, some brokers tell RIBO they’re hesitant to adopt AI further, in order to preserve the broker-to-customer relationship.
“The use today is really about [keeping the] human in the loop,” says Harper. “You’ll hear that a lot when people speak [about it]. It’s still a very hands-on use that’s happening today.”
That’s an approach favoured by financial regulators across the board.
The Canadian Securities Administrators recently released guidance on how market participants may leverage AI systems. And, in a panel last week, the Alberta Securities Commission specifically asserted that some AI use cases require a human decision-maker, Canadian Underwriter’s sister publication, Investment Executive, reported.
Risk mitigation
RIBO’s research finds firms could begin adopting “broader customer-facing AI tools in the medium term or sooner.”
However, future use of AI tools by brokers could introduce new risks for consumers. Without a human touch, RIBO says, brokers risk harming clients’ privacy, confidentiality and data security. For example, AI tools may gather personal client data without brokers’ informed consent.
“A [broker using an AI tool] could be spell checking an email and there’s customer information in that email,” says Harper, “and then [they] suddenly copied and pasted address information” about a client into the AI.
That risk is heightened if brokers use third-party models rather than their own.
“Customers expect that insurers are able to explain and justify their decisions,” says Harper. “So, if a broker doesn’t understand what went into that model to underwrite the risk, that may erode customer confidence in what brokers are offering.”
Third-party models also risk producing biased insurance decisions if the data used to train the model is inaccurate or out of date.
The research anticipates most brokers will continue using third-party tools, rather than creating them in-house.
Who’s liable?
AI tools also might not be trained to prioritize consumers’ best interests, RIBO cautions.
“It is difficult to determine how an AI application may balance providing the best advice to consumers with other potential interests of insurers or brokers (e.g., stronger margins or fees for insurers or brokers),” the research reads.
For RIBO, Harper says the key questions are: “Who is responsible when AI gives advice? And how do we ensure the AI systems adhere to the standards expected of human professionals?”
Those are questions industries of all types are beginning to address. Earlier this year, Air Canada was ordered to uphold a policy fabricated by its AI customer chatbot, after the Civil Resolution Tribunal found the airline liable for misrepresentations made by its AI.
Feature image by iStock.com/demaerre