Ontario’s insurance regulator, the Financial Services Regulatory Authority (FSRA), has issued a warning that fraudsters are using the regulator’s identity to trick the public and company employees into transferring funds into thieves’ bank accounts.
“FSRA is warning consumers and businesses to look out for fake or fraudulent documents using FSRA’s name, letterhead and logo,” the regulator announced Monday. “It recently came to our attention that FSRA’s name, letterhead and logo were added to documents suggesting that FSRA had endorsed the release of funds between individuals.
“Fraudulent documents mimic the style and look of legitimate correspondence, and may even include what appears to be signatures from organization officials, making them hard to distinguish from genuine correspondence. Scams using fraudulent documents can be harmful to consumers, leading to significant financial losses.
“If you see FSRA documents that are suspicious or are approached by someone claiming to represent FSRA, please contact FSRA directly and make a report.”
FSRA reminds people that current technology makes it easy to create legitimate-looking documents. Add to that fraudulent video and audio files produced by artificial intelligence, says Neal Jardine, the global director of cyber risk intelligence and head of claims at BOXX Insurance.
Criminals’ use of artificial intelligence to produce ‘deepfake’ media preys on the trust people place in each other and in the companies they deal with, Jardine recently told Canadian Underwriter.
“Deepfakes are insidious and erode public trust,” he warns. “And the damage they cause can be covered under a cyber insurance policy, in case brokers are wondering.”
Jardine said mitigating exposure to AI deepfake attacks “comes back to education. People have to know and appreciate that digital images and audio can be manipulated.”
They also need to be aware of the different categories of scams. The FSRA scam, for example, is a social engineering attack, in which cybercriminals mimic people or organizations in positions of authority to pressure victims into sending payments to fraudulent bank accounts.
Insurance lawyers at Dentons cautioned at a conference last November that AI is now capable of fabricating legal cases that do not exist, meaning the legal industry itself has to be careful to authenticate legal documents.
Deepshikha Dutt, who practises insurance law in the areas of D&O, E&O, negligence and coverage litigation, cited an example of how counsel can be exposed to professional liability claims for unsupervised errors made by generative AI.
“I’m now seeing two incidents where lawyers relied on ChatGPT to do their research. I personally got one letter [on behalf of a lawyer], not from the lawyer herself, who relied on ChatGPT to do research on a certain issue,” Dutt said. “It spit out cases with citations and facts. And I got the letter, and I had my associate research the case. The case doesn’t exist. There were 10 cases in that letter. None of the cases existed.
“I was shocked. I don’t even know how you came up with a case name with a citation and principles and a judge’s name attached to that case, so you have to be really careful.”
Feature image courtesy of iStock.com/Nuthawut Somsuk