Trust no one. The expression defines the era of artificial intelligence (AI) deepfakes. Consider these real-world examples of things people thought they’d witnessed over the past two years.
Some thought they heard U.S. President Joe Biden’s voice in a robocall telling New Hampshire Democrats not to vote in the 2024 election. And was that really Elon Musk shilling free cryptocurrency giveaways?
All of it’s false. Deepfake concoctions are creating misconceptions and causing harm. A recent Canadian Security Intelligence Service (CSIS) report defines deepfake technology as “media manipulations…based on advanced artificial intelligence (AI), where images, voices, videos or text are digitally altered or fully generated by AI.”
The technology can now place anyone or anything into situations in which they never participated. It’s often used to misinform people by inserting deepfake material into real media.
“Deepfakes are insidious and erode public trust,” says Neal Jardine, the global director of cyber risk intelligence and head of claims at BOXX Insurance. “And the damage they cause can be covered under a cyber insurance policy, in case brokers are wondering.”
Client safeguards
Protecting clients against deepfake losses is much like protecting them from ransomware attacks, Jardine says. “Education is key to making sure everyone is aware of the real possibility of them being exposed to deepfake scams. Clients need to be aware of the alternate digital realities they can be innocently exposed to any day and at any time.”
Jardine categorizes AI deepfake scams into three broad types.
“The first is a classic: the ‘grandparent scam,’” Jardine tells Canadian Underwriter. “In this scenario, one of their grandchildren posts a recording on the internet with their voice in it. Or maybe the threat actor calls them and gets them to say a few things on the phone. Modern, advanced AI models need as little as 10 seconds of your voice to create a convincing deepfake replication.”
Next, the cybercriminal uses the deepfake voice to speak with the grandparents. The voice may tell them: “I’m in jail. I went to Mexico with my friends for the week. We drank too much. I got into trouble and [need] $5,000 so the police will let me go. Otherwise, I’m going to miss my flight and stand trial in Mexico.”
The cybercriminal keeps the grandparent on the line, walks them through a money transfer, and the $5,000 is gone. Knowing the victim will pay, cybercriminals often call back and demand more money, escalating the situation.
Bogus business
This scam has morphed into the business world. Accounting team members receive deepfake requests ‘from the CEO’ for a funds transfer. The CFO video-calls the CEO to verify the request, sees a familiar face, hears a familiar voice, and approves the transfer. But the CEO on the call was an AI deepfake.
“A second type of deepfake is used to influence people into believing false or alternate digital realities, pushing them either to take action or not to take action, whichever is required to meet the creator’s purpose,” says Jardine.
He says the deepfake Biden robocall in 2024 is one example, but he adds that the influence scam extends into manipulating stock prices and other areas of financial fraud.
“Stock market manipulation has been achieved when cybercriminals make a deepfake video of a CEO of a publicly traded company,” Jardine explains. “The deepfake has the CEO making a statement that positively or negatively influences the stock of the company.
“The criminals have taken a position prior to the release of the video that earns them a significant profit. For example, theoretically, the video could announce ‘Company A experienced a major fire in its plant. It won’t be able to produce its computer chips for the next few years.’
“Within a minute, Company A’s stock price is going down. If you were a cybercriminal, you could short the stock and make money as it goes down, before the company has a chance to engage its communication plan to dispel the deepfake.”
A third type of deepfake damages someone’s reputation by falsely connecting them with sexual imagery.
“Examples of deepfake porn are not uncommon,” CSIS says in its 2023 report, The Evolution of Disinformation: A Deepfake Future. “Over 90% of deepfakes available online are non-consensual pornographic clips of women; as of October 2022, there were over 57 million hits for ‘deepfake porn’ on Google alone…and current legislation offers victims little protection or justice.”
Deepfake defences
Is the answer to trust no one?
“It comes back to education,” Jardine says. “People have to know and appreciate that digital images and audio can be manipulated.”
Deepfakes have weaknesses, and there are still plenty of telltale signs you’re dealing with one, Jardine says.
“Deepfakes are terrible at replicating people who wear glasses, because there’s light and reflection from glasses, and deepfakes can’t reproduce that well,” he says.
He adds: “AI is not good at copying hairlines. They tend to blur right where the hair meshes into the background. You can spot a deepfake by noticing they tend not to get the hairline around the ear correct.”
Finally, watch for faces that stay still in video imagery while the person is talking. When cybercriminals are using deepfakes, “the face tends to be very still and stays straight the whole time,” he says.
This story is excerpted from Canadian Underwriter’s February/March 2025 print edition.
Feature image by iStock.com/Mininyx Doodle