Surge in Deepfake Technology Scams: EU AI Legislation to Take Effect in August

The European Union’s “Artificial Intelligence Act” officially took effect in August and will be fully implemented by 2026. In recent years, criminals have used AI-driven deepfake and voice synthesis technologies to commit fraud, and AI scam cases have surged worldwide, costing many victims their life savings.

The law is the world’s first comprehensive legislation on AI. It aims to curb AI’s negative impacts, establish a comprehensive regulatory framework, and govern how businesses develop, deploy, and use AI.

The legislation categorizes AI systems into four levels of risk. AI applications in critical infrastructure, education or vocational training, and healthcare are classified as “high-risk.” These applications must comply with strict obligations on oversight, safety, robustness, and accuracy.

The law also imposes a transparency duty on developers of “limited-risk” AI: users must be told whether they are interacting with an AI system or a human. Content published to the public on matters of public interest must be labeled when AI is used, a rule that also applies to “deepfake technology and voice synthesis content.”

While the law is less strict for the remaining two lower-risk categories, these systems remain subject to market supervision by the authorities, and providers must monitor their own AI programs and establish oversight systems. Serious incidents or failures must be reported to the authorities.

Companies violating the law could face fines ranging from €7.5 million to €35 million or 1.5% to 7% of their global annual revenue, whichever is higher.

The EU’s “Artificial Intelligence Act” is designed to address the rise in fraud, hacking, fake news, and other problems brought about by the rapid improvement in AI performance, and real-world AI-related crime is indeed on the rise.

Visa, the leading card payment company, reported at the end of July an increase in cybercriminals using AI to generate primary account numbers (the 16- or 19-digit card numbers), then applying raw computing power to rapidly guess the full card number, card security code (CVV), and expiration date until they obtain an approval response.

Visa warned that many romance scams, investment scams, Ponzi schemes, and other frauds now use AI. In a threat report issued in March of this year, the company said it had prevented $40 billion in fraudulent activity between October 2022 and September 2023, nearly double the previous year’s figure.

One of the most prominent AI-related online scams involved Helen Young, an accountant in London, UK, who was swindled out of £29,000 (approx. $37,000) by fake Chinese police. The criminals exploited the fear many overseas Chinese have of the Chinese Communist Party, backed by highly deceptive AI visuals.

In early July, Helen told the media that a fake police officer first video-called her, using AI to display authentic-looking police uniforms, badges, and what appeared to be a fully functioning police station. The caller then accused her of participating in a large-scale financial fraud scheme.

The fake officer demanded that she cooperate with an investigation, download a monitoring program, and sign a confidentiality agreement, and warned her, under threat to her life, not to reveal the investigation to anyone. Soon afterward, Helen received a video confession in which the “suspect” admitted to the crime and named her as the mastermind of the entire financial fraud.

Fearing for her safety, and believing she would face certain death if sent back to China, Helen paid £29,000 as bail money to avoid extradition. Days later, the fake officer demanded a further £250,000 (approx. $320,000). This time Helen confided in her daughter, who recognized the scam; together they reported the incident to the police and the bank and eventually recovered the funds.

Unfortunately, not everyone is as lucky as Helen. The Federal Bureau of Investigation (FBI)’s cybercrime reports for two consecutive years have recorded record increases in both the frequency and the financial impact of online fraud.

In 2023, the FBI’s Internet Crime Complaint Center (IC3) received 880,000 complaints from the public, with estimated potential losses exceeding $12.5 billion. Compared with 2022, the number of complaints rose by nearly 10%, and losses rose by 22%.

The report highlighted investment fraud as the category with the highest losses: criminals use false information to lure investors into fake investments. Losses from investment fraud surged to $4.57 billion in 2023, a 38% increase over 2022, while phishing was the most frequently reported type of scam.

Beyond the phishing scams cited by the FBI, criminals also use deepfake and voice-mimicking AI to create fake celebrity videos for deceptive promotions, or to stage fraudulent video calls for extortion, challenging our perception of reality.

Romanian cybersecurity company Bitdefender published a report in early July on criminals using deepfake AI for “false drug promotion,” elevating a common scam to a new level. Criminals used deepfake technology to depict renowned doctors, TV hosts, and healthcare professionals endorsing non-existent “miracle drugs,” and distributed the videos on social media platforms such as Facebook and Instagram.

The lifelike images and voices in these fake advertisements, capable of winking, smiling, and other subtle expressions, misled patients with cancer, chronic illnesses, or incurable conditions into buying the drugs, delaying treatment and in some cases endangering their lives. Clicking the phishing purchase links could also lead to data theft or information leaks.

The report also revealed that criminals use social media accounts under their control to reach a wider pool of potential victims.

A warning report on deepfake technology published in the British Medical Journal (BMJ) in mid-July likewise highlighted perpetrators spreading false drug advertisements on social media using deepfakes. The rise of such “deepfake counterfeits” is concerning: research indicates that nearly half of people struggle to distinguish genuine content from deepfakes.

Furthermore, in May the UK engineering company Arup confirmed to the media that one of its employees had been tricked out of $25.6 million in February after criminals staged a deepfake video call.

Paul Fabara, Chief Risk and Client Services Officer at Visa, said in the company’s biannual threat report in March that with generative AI and other emerging technologies, these frauds have become more persuasive than ever, causing unprecedented losses for consumers.

Okta, a leading American identity and access management company, warned as early as January 2024 that with as little as three seconds of audio, cybercriminals can use AI to clone a person’s voice and dupe others into believing the imitation is authentic.

Japanese computer engineer Jin Kiyohara told Dajiyuan on August 1, “While AI brings many benefits to humanity, related crimes are increasing every year and becoming hard to control. These problems stem from a lack of human ethics. Only by raising our moral standards can we address the root of the issue.”

Satoru Ogino, a Japanese electronics engineer, expressed a similar view. He told Dajiyuan, “The crux of all these problems lies in the human heart. These wrongdoers also exploit the trust people place in their loved ones or in celebrities to carry out scams, which greatly undermines trust between individuals and deepens divisions in society.”

The proliferation of indiscernible AI frauds underscores the need for vigilance, fact-checking, and additional precautions. What else can people do? Here are some prevention suggestions offered by experts and governments.

The New York City Department of Consumer and Worker Protection has advised the public on how to spot AI-related scams: watch for abnormal jitter or unnatural movement in videos and video calls, changes in lighting or skin tone, unusual blinking (the person may not blink at all), and shadows around the eyes.

Also pay attention to mismatches between the voice and the image, and to requests that do not fit the speaker’s usual behavior, such as asking for money or personal information. Because the clips are AI-generated, they may also contain odd word choices, stiff language, and fragmented sentences.

Julia Feerrar, a librarian and digital literacy educator at Virginia Tech, said one of the most effective ways to spot misinformation, whether AI-generated or not, is to check its source and consult reputable news outlets. If the source is uncertain, verify the information by cross-referencing it across multiple sources.

She emphasized the importance of lateral reading: running additional searches to verify the accuracy of what you are reading and checking whether other trustworthy news outlets are covering the same story.

Jin Kiyohara also offered his own recommendation: “AI currently cannot answer questions that are very broad or overly complex, and we can use this to test whether the other party is a human or an AI. When something happens, stay calm, resist acting on impulse, objectively assess the information you have received, and verify it through multiple sources to avoid falling victim to these scams.”

(Reporters Zhang Zhongyuan and Wang Jiayi contributed to this article.)