On Monday, October 27, as Category 5 Hurricane Melissa approached Jamaica with winds exceeding 180 miles per hour, social media was flooded with fake videos generated by artificial intelligence (AI). The content included scenes of severe flooding, collapsed buildings, and fictional rescue operations, and garnered millions of views within hours.
According to Agence France-Presse (AFP), these videos circulated mainly on platforms such as TikTok, X (formerly Twitter), Instagram, and WhatsApp, and many of them bore the watermark of OpenAI's text-to-video model Sora. Some were pieced together from old disaster footage, while others were entirely AI-generated fictional scenarios.
The fake videos ranged from scenes of devastated areas to fictional TV news reports and even images of sharks roaming the streets. One video featured a voice-over in a heavy Jamaican accent and depicted local residents partying, boating, or surfing despite hurricane warnings, downplaying the imminent danger of the approaching storm.
Jamaica’s Minister of Information, Dana Morris Dixon, warned on Monday: “I have seen these videos circulating in several WhatsApp groups, and many of them are fake. Please pay attention to official sources of information.”
Experts warned that AI-generated videos spread rapidly and could lead people to overlook official warnings and underestimate the disaster's risks. Amy McGovern, a meteorology professor at the University of Oklahoma, said: "This hurricane is extremely powerful and could cause catastrophic damage, and false content undercuts the seriousness of the government's warnings to prepare. Ultimately, such misinformation could lead to loss of life and property."
AFP noted that the fake videos spread mainly on TikTok. Although the platform's policy requires AI-generated content to be labeled, only a few of the videos were tagged as such.
After AFP flagged them, TikTok removed more than twenty of the fake videos, along with several accounts dedicated to sharing such content.
Hany Farid, a professor at the University of California, Berkeley, and co-founder of the cybersecurity company GetReal Security, said the latest text-to-video technology "has accelerated the spread of realistic fake videos," making it easy for users to create scenes with lifelike characters and harder for viewers to detect the fabrications.
Even though some videos clearly bore the "Sora" watermark indicating AI generation, many viewers still believed the content was real. In one AI video on TikTok, for example, an elderly man yelled at the hurricane, "I'm not moving because of a little wind." The comments section overflowed with prayers: "God, please protect grandpa's home and his mango trees."
Another AI video showed a woman holding a baby and crying for help from a blown-off rooftop, drawing numerous messages of comfort and prayer.
Farid remarked, "The paradox of the information age is that the more information we have, the less we seem to understand the truth."
