AI-Generated Fake Content Becomes a Global Concern — Legal Battles Now Unfolding in the U.S.
New Jersey Teen Files Lawsuit Against AI Image Generator Company
From Deepfakes to Fake News, AI-Driven Content Sparks Social Backlash
“False Information During Disasters?” — Governments Worldwide Step In

Real-world cases of harm caused by AI-powered nude-image tools are increasingly coming to light. Meanwhile, the rapid spread of indiscriminately produced AI fake content across online platforms is amplifying social confusion and concern.
AI Tools Behind “Fake Nude Images”
On October 16 (local time), The Wall Street Journal reported that a teenage girl from New Jersey had filed a lawsuit against AI/Robotics Venture Strategy 3, the developer of an AI image-generation program called ClothOff. According to the lawsuit, the girl, then 14 years old, had uploaded a swimsuit photo to Instagram, which a male high school student later used to create a fake nude image through ClothOff. Investigations revealed that several students used the same tool to produce and share similar fake images of other girls in group chats.
The victim said she fears “deepfake versions” of her photos could spread online and worries that “these images might be used to train AI models.” Her legal team has asked the court to order the company to delete all images generated without consent and to prohibit their use in AI training.
The Yale Law School faculty representing the plaintiff said the company is registered in the British Virgin Islands, while its actual operators are believed to be based in Belarus. If the defendants fail to appear or respond in court, the software could be barred from operating within the United States. Telegram, which was also named in the suit for providing access to ClothOff through automated bots, stated that “non-consensual pornography and related tools are strictly prohibited under our terms of service and are removed immediately when detected,” adding that it “regularly eliminates bots used to create such content.”
AI Systems Spawning Fake News
AI-generated content is increasingly sowing confusion across industries — particularly in journalism, where misinformation created by algorithms has become a growing concern. In late 2023, the popular U.S. news app NewsBreak published a story claiming that a Christmas Day shooting had occurred in a New Jersey town. However, state police confirmed that no such incident had taken place. Four days later, NewsBreak deleted the article and admitted that “the story was generated by AI based on incorrect information.”
NewsBreak, one of the most downloaded news apps in the United States, aggregates content from major outlets such as Reuters, the Associated Press, and CNN. The platform also uses AI models trained on local news articles and press releases scraped from the internet to automatically produce new stories — a process believed to have generated the false report.
A tech industry expert commented, “Earlier fake news was typically the result of humans directing AI to create false content, but in this case, the AI fabricated a story on its own due to a ‘hallucination’ — mistaking falsehoods for facts. It shows that we’ve entered a dangerous era where automated algorithms capable of creating fake news operate unchecked.”

AI-Generated Fake News Fuels Chaos Even During Disasters
As the world grappled with a series of natural disasters this year, AI-generated fake photos and videos of disasters spread rapidly online, deepening confusion. When an 8.8-magnitude earthquake struck Russia’s Kamchatka Peninsula in July, a video showing a massive wave engulfing a Japanese island went viral on social media. The aerial-style footage, captioned “Tsunami in Japan, Pray for Japan” and tagged #prayerforrussia, amassed more than 39 million views on Facebook and TikTok before it was later revealed to be an AI-generated fake — created months earlier, in April, before the quake even occurred.
A similar case unfolded in late March, when a 7.7-magnitude earthquake hit central Myanmar. According to NHK, AI-generated clips depicting collapsing buildings and temple-lined streets were circulated as real footage by media outlets in Indonesia and Russia. One such video, showing cracked streets between high-rise buildings, was viewed more than 3 million times on X (formerly Twitter), but was later confirmed to be fake.
As AI-generated disaster content continues to cause chaos, several countries are moving to curb its misuse. In March, Spain approved a draft bill that would classify failing to label AI-generated content as a “serious offense,” punishable by fines of up to 35 million euros or 7 percent of a company’s global revenue. In June, Japan’s Ministry of Internal Affairs and Communications announced plans to impose revenue suspension measures on social media platforms found to have spread false disaster information.
Some governments are taking direct fact-checking measures. In the United States, the Federal Emergency Management Agency (FEMA) runs a “Rumor Control” page to debunk misinformation during wildfires and hurricanes. In India, the Press Information Bureau (PIB) has formed a dedicated team to quickly verify and refute old or mislabeled disaster images circulating as current events. However, experts note that such regulatory efforts are still in their early stages — meaning the world will likely continue to struggle with the fallout of AI-generated misinformation for some time.