“We want the conversation to be driven and determined by the insurance industry”
Legal and ethical concerns over artificial intelligence (AI) have been mounting in recent months, especially amid the rise of ChatGPT. As technologies continue to advance, one company believes the insurance industry should collaborate on its own AI code of conduct.
AI-driven insurance intelligence provider Cloverleaf Analytics is calling for carriers and MGAs to steer the conversation on AI ethics. It started a group called the “Ethical AI for Insurance Consortium” to help facilitate that conversation.
“One of the things that we’re interested in is a code of conduct around AI and machine learning use in insurance,” said Robert Clark, president and CEO of Cloverleaf Analytics. “We recommended that a working group be started to help with some of those ethics upfront.”
Consortium for ethical AI in insurance
Clark stressed that ethical guidelines for AI usage within the insurance industry can help companies get ahead of the technology’s pitfalls, such as privacy and safety issues, bias and discrimination, and inaccuracy.
“It’s worthwhile doing it upfront to make sure that there aren’t any inherent biases [in the AI technology] and iteratively checking so that you never have an issue,” Clark said.
Cloverleaf Analytics reached out to its customers and asked them to designate individuals to form the consortium, Clark told Insurance Business.
“Our customers include program business carriers, MGAs, and direct underwriters, so we’re starting there,” he said. “We’re happy to help get it started, but ultimately, we want it to be driven and determined by the insurance industry, and not by a vendor.”
Data released by Sprout.ai revealed that over half of US and UK insurers are already using generative AI like ChatGPT in their organizations.
But several concerns that need to be addressed have surfaced amid AI’s integration into insurance, according to Michael Schwabrow, EVP of sales and marketing at Cloverleaf Analytics.
“Carriers and technology partners need to make sure that bias doesn’t creep into the data and the models we’re using for looking at rate, appropriation of coverage, and everything else,” Schwabrow said. “Because once it’s in there, and you’re not auditing and refining the AI, it’s going to get worse and worse. You can’t set it and forget it.”
Cloverleaf Analytics still bullish on AI advancements
Despite its calls for broader guidance around AI use in insurance, Cloverleaf Analytics remains bullish on generative AI and other advancements.
Generative AI capabilities allow insurers to use ChatGPT to generate code, build spreadsheets, and create images, graphics, and designs to support their presentations.
“We’ve incorporated OpenAI to help our insurance customers,” he said. “If you’re an actuary, and you’re trying to do loss triangles or want to develop loss development factors, but you’re not as familiar with Python, we integrated ChatGPT, which can write the code for you in Python.”
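As an illustration of the kind of task Clark describes, the sketch below shows how age-to-age loss development factors can be computed from a cumulative loss triangle in Python. This is not Cloverleaf's implementation; the triangle figures, column layout, and the volume-weighted (chain-ladder) approach are assumptions chosen only to make the example concrete.

```python
# Minimal sketch: computing age-to-age loss development factors (LDFs)
# from a cumulative loss triangle. All figures are illustrative.
import numpy as np
import pandas as pd

# Cumulative paid losses by accident year (rows) and development age in
# months (columns). NaN marks cells that have not yet developed.
triangle = pd.DataFrame(
    {
        12: [1000, 1100, 1200, 1300],
        24: [1500, 1650, 1750, np.nan],
        36: [1750, 1900, np.nan, np.nan],
        48: [1850, np.nan, np.nan, np.nan],
    },
    index=[2019, 2020, 2021, 2022],
)

# Volume-weighted (chain-ladder) age-to-age factors: for each development
# step, divide the sum of losses at the later age by the matching losses
# at the earlier age, using only accident years observed at both ages.
ages = triangle.columns.tolist()
ldfs = {}
for earlier, later in zip(ages, ages[1:]):
    observed = triangle[later].notna()
    ldfs[f"{earlier}-{later}"] = (
        triangle.loc[observed, later].sum() / triangle.loc[observed, earlier].sum()
    )

print(pd.Series(ldfs, name="age-to-age LDF"))
```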
However, Clark emphasized that customer data isn’t exposed to OpenAI’s engine. Cloverleaf is looking to utilize a private or segmented version of ChatGPT to further upgrade its platform and be able to use customer-specific data in the future.
The company also wants to use data for benchmarking insurance firms.
“We’re working with some of the rating bureaus and insurance departments, and an area that they’ve expressed interest in working with us is being able to provide feedback to carriers,” said Clark.
“Getting the privatized segment [of ChatGPT] will be the next step. But if it can’t be that, then it’s exploring other AI alternatives that we can use with our customers’ private data, because security is our number-one priority.”