As big data develops at a rapid pace, artificial intelligence (AI) has gradually woven itself into daily life as a double-edged sword. AI can make life easier, but it also opens the door to cyber scams and financial crimes that leave victims with huge financial losses.
Recently, Chris Mattmann, Chief Data and Artificial Intelligence Officer at the University of California, Los Angeles (UCLA), pointed out in an interview with the university’s newsroom that AI is widely used in cyberattacks and that the threats are becoming increasingly severe. In particular, deepfake technology is making cybersecurity threats more complex.
Deepfake technology refers to using AI to generate realistic videos, images, or audio that highly mimic real people, making it difficult for individuals to distinguish between truth and fiction. More and more cybercriminals are using deepfake technology to deceive victims, enticing them to disclose personal information, click malicious links, or make transfers.
Mattmann mentioned that AI-generated videos or emails can fabricate statements a person never made or actions they never took. These impersonations are highly deceptive, and victims may inadvertently reveal personal data. Once leaked, that data can be used by criminals for identity theft, bank fraud, or other online extortion crimes.
Scammers use AI-generated videos and voice recordings to impersonate celebrities, authoritative institutions, or acquaintances in phone scams. Recently, South Korean police exposed an AI-generated fake news video in which the country’s president appeared to endorse a specific investment platform. The platform used the fake news to lure users to its so-called “official website,” where they were asked to enter personal information such as their name, email address, and phone number, and to deposit a minimum of 350,000 Korean won.
Previously, scammers replicated the voice of a deputy manager of a company in the United Arab Emirates, defrauding victims of a whopping $35 million.
Similar fake videos are increasingly flooding the internet. Investigations show that criminals need just three seconds of audio to produce a convincing AI clone of a person’s voice.
According to a survey by the Pew Research Center this spring, most people around the world are worried rather than excited about the increasing prevalence of AI in daily life.
Phishing attacks typically create urgency to pressure recipients into clicking suspicious links or opening attachments without much thought. Spear phishing is more targeted: scammers research their targets in depth and craft seemingly authentic, customized messages to make the deception more convincing.
During the interview, Mattmann pointed out that AI makes such attacks and deceptions more dangerous. This is because scammers can use AI to automatically generate customized scam messages and spread them massively across platforms such as emails, messages, calls, and social media, increasing the intensity and success rate of scams.
On the other hand, he mentioned that AI can also be used to detect phishing attacks effectively: by analyzing language patterns, sender behavior, and anomalous activity, it can identify and block potential scams in real time, strengthening network security protection.
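To make the idea concrete, here is a minimal sketch of what the rule-based end of such detection can look like. It is only an illustration: the keyword list, lookalike domains, and score weights below are invented for this example, and the kind of detection Mattmann describes would rely on trained models and sender-reputation data rather than hand-written rules.

```python
import re

# Hypothetical toy signals -- a real detector would use trained language
# models, sender-reputation databases, and behavioral baselines instead.
URGENCY_WORDS = ["urgent", "immediately", "verify your account", "suspended",
                 "act now", "final notice", "wire transfer"]

def phishing_score(sender: str, subject: str, body: str) -> float:
    """Return a crude 0-1 score; higher means more phishing-like."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # 1. Urgent, pressuring language is a classic phishing signal.
    score += 0.15 * sum(word in text for word in URGENCY_WORDS)
    # 2. Links pointing at a raw IP address instead of a named site.
    if re.search(r"http://\d{1,3}(\.\d{1,3}){3}", text):
        score += 0.3
    # 3. Sender domain imitating a known brand (lookalike spelling).
    if re.search(r"paypa1|amaz0n|micros0ft", sender.lower()):
        score += 0.4
    return min(score, 1.0)

if __name__ == "__main__":
    msg = ("support@paypa1-security.com",
           "Urgent: account suspended",
           "Act now: verify your account at http://192.0.2.7/login")
    print(f"score = {phishing_score(*msg):.2f}")  # flags this as suspicious
```

Even this toy version shows the principle: urgency language, deceptive links, and lookalike sender domains each raise the score, and messages above a threshold can be quarantined for review.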
AI also makes “social engineering” more convincing, letting attacks spread quickly through multiple channels and strike victims simultaneously.
“Social engineering” exploits human weaknesses: through simple communication and deception, attackers bypass cybersecurity defenses and steal accounts, Social Security numbers, or other sensitive data.
Mattmann stated that messages generated by AI and deepfake technology can precisely mimic someone the victim trusts through tone, images, and emotional cues. Bad actors use AI to analyze social media, replicate someone’s identity, and launch a series of attacks simultaneously via emails, messages, and social platforms. Because these messages seem to come from familiar people, victims are more likely to fall for the scams, sometimes responding immediately out of a sense of urgency.
Mattmann also highlighted that AI and deepfake technology have infiltrated campus environments.
For example, students may use AI-created avatars to attend online courses in their place and earn credits without being present. He bluntly stated that such identity fraud undermines academic integrity.
DDoS (distributed denial of service) attacks are a common hacking method designed to render a target system (such as a website, server, or network) inoperable. Attackers direct a large number of infected devices (a “botnet,” sometimes called a “zombie network”) to send a massive volume of requests or data simultaneously, exceeding the target’s processing capacity and causing the system to crash or slow to a crawl.
With the assistance of AI, DDoS attacks have evolved from simple flood attacks to become quicker, more covert, and harder to trace. These attacks can send thousands of requests per second, causing servers to overload and resulting in service interruptions.
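For context on the defensive side, servers commonly absorb request floods with rate limiting. The token-bucket sketch below is a simplified illustration under assumed numbers (a 100-request burst capacity refilled at 50 requests per second); real DDoS mitigation happens in dedicated network infrastructure, not in application code like this.

```python
import time

class TokenBucket:
    """Minimal token bucket: each request costs one token; when the
    bucket is empty, excess requests are rejected instead of piling up
    and exhausting the server."""

    def __init__(self, capacity: int = 100, refill_per_sec: float = 50.0):
        self.capacity = capacity            # burst size allowed
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # drop the request before it reaches the backend

# Usage: one bucket per client address. A botnet member bursting
# thousands of requests quickly empties its bucket and gets rejected.
bucket = TokenBucket()
accepted = sum(bucket.allow() for _ in range(5000))
print(f"accepted {accepted} of 5000 burst requests")  # roughly the capacity
```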
As AI rapidly advances, experts and law enforcement agencies have pointed out that criminals are exploiting the most advanced technologies to deceive unsuspecting victims, making it crucial to understand AI scams.
Javier Simon, a columnist for the English-language Epoch Times and a senior personal finance writer, described in an article several methods scammers use, including:
1) Cloning voices through deepfake technology:
You may receive a call from what sounds like a family member in crisis urgently needing money, but it could be a robocall using a cloned voice.
2) Fake video calls:
Scammers create urgency and request money, or direct you to malicious websites designed to trick you into providing sensitive financial information.
3) Fraudulent stores and market platforms:
Scammers use AI to generate malicious websites. In some cases, these sites pose as fake retail stores or marketplaces, luring people with cheap goods or with advertisements for apartments and houses. Links to these malicious sites run rampant on social media, in text messages, and in emails.
4) AI phishing:
Scammers often impersonate legitimate sources through emails, calls, or messages.
Simon advises that staying vigilant is crucial: be cautious in any communication that involves transferring money or providing sensitive personal information such as passwords, financial details, or Social Security numbers. Analyze each situation carefully, and if necessary consult trusted friends to help identify potential scams.
