Scammers Use AI Technology to Set Up Frauds, Nearly Swindle a Ferrari Executive

As artificial intelligence technology advances, scammers are using AI deepfakes to impersonate individuals and carry out new types of online fraud. Most recently, a senior executive at the Italian supercar manufacturer Ferrari nearly fell into a high-fidelity trap set by a fraud ring. Fortunately, his caution saved him at a crucial moment.

According to Bloomberg, on a Tuesday in July, a senior executive at Ferrari NV began receiving a string of WhatsApp messages from someone claiming to be CEO Benedetto Vigna.

“Hey, have you heard about our large acquisition plan? I might need your help,” the message from the supposed Vigna read. “Get ready to sign a non-disclosure agreement; our lawyers will send it to you soon.”

“The Italian market regulator and Milan stock exchange have been notified. Be prepared, please maintain the utmost caution,” the message added.

Bloomberg, which reviewed the WhatsApp messages, reported that the sender's phone number was not one Vigna commonly used. The account's profile photo showed Vigna in glasses, a suit, and a tie, arms crossed in front of the Ferrari prancing-horse logo, but it differed from his actual profile picture.

While the Ferrari executive was still puzzling over the messages, the scammer called, using a deepfake of Vigna's voice in an attempt to lower his guard.

According to the executive's recollection, the imitation was convincing, right down to Vigna's southern Italian accent.

During the call, the scammer explained that he was using a different phone because of the sensitivity of the matter, and claimed that the acquisition might run into resistance from Beijing, making a currency-hedging transaction necessary.

The explanation backfired: rather than reassuring the executive, it deepened his suspicion, and he began to pick up a faint mechanical quality in the voice.

Instinctively, he said, "Sorry, Vigna, but I need to confirm your identity."

He then asked whether Vigna remembered the title of the book he had recommended to him a few days earlier. The moment he finished the question, the caller abruptly hung up. Ferrari has since opened an investigation into the incident.

Such fraudulent calls are becoming increasingly common. Scammers use artificial intelligence (AI) to synthesize fake messages, videos, and voice recordings, producing realistic "face-swapping" and voice-cloning effects that make it difficult to distinguish real from fake. Not only are ordinary individuals being deceived; even top executives at major companies find such fraud hard to detect.

Robert Tripp, a supervisory special agent at the FBI, stated in a May announcement, “As technology continues to advance, the strategies of cybercriminals are also evolving. Attackers are using AI to create highly convincing voice or video messages and emails to carry out fraud schemes against individuals and businesses.”

“These sophisticated tactics can result in devastating financial losses, reputational damage, and sensitive data breaches,” he added.

According to a report by the British Broadcasting Corporation (BBC), with the increasing maturity of deepfake technology, some fraud groups are using this technology to impersonate Chinese police officers and scam overseas Chinese immigrants.

There have also been recent cases of fraud calls impersonating relatives or friends, in which a cloned voice, or even a deepfaked video call, closely resembles the victim's own family members.

In February this year, the Hong Kong office of a multinational company was defrauded of about US$26 million. The scammers used deepfake technology to impersonate the company's CFO on a video call, instructing employees to transfer large sums of money.

The FBI encourages individuals and businesses to mitigate the risk of such fraud in two ways: first, remain vigilant and be wary of urgent messages requesting money or credentials; second, implement multi-factor authentication (MFA), which adds an extra layer of security and makes it harder for cybercriminals to access accounts and systems without authorization.
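As a rough illustration of what such an extra layer looks like in practice, the sketch below implements a standard time-based one-time password (TOTP, RFC 6238) check in Python using only the standard library. The secret string is a made-up example; a real deployment would rely on an audited authentication service or library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, user_code: str) -> bool:
    """Constant-time comparison of a user-supplied code against the current TOTP."""
    return hmac.compare_digest(totp(secret_b32), user_code)

# Example with a placeholder secret (never hard-code real secrets):
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a shared secret, a scammer who has only cloned someone's voice or photo still cannot produce a valid one-time password.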

Online commenters have added two further precautions. First, verify identities: when a call or video call makes an unusual request, confirm it through another channel before taking any action. Second, observe details: while these scams may appear realistic at first glance, they still show flaws. For example, videos may display unnatural blinking, lips out of sync with the voice, robotic speaking tones, sudden changes in lighting or skin color, or poor audio and video quality. If you notice any of these signs, verify the person's identity immediately.