OpenAI Announces “Adult-Only ChatGPT,” Rekindling Global Debate on AI Ethics
Easing ethical guardrails for profit diversification
Monetization experiment shakes existing AI norms
Debate over “free expression vs. responsibility” intensifies

OpenAI has announced plans to launch an “adult-only” version of ChatGPT later this year, allowing sexually explicit conversations exclusively for verified adult users. CEO Sam Altman said the company would “treat adults like adults,” signaling a loosening of previous safety guardrails. The move has sparked an immediate backlash, with critics warning that fast-spreading, emotionally immersive “AI relationship” monetization models could expose minors to sexual content and exploitation. Experts caution that “ethical and legal safeguards must evolve as fast as the technology itself.”
Testing the Balance Between Commercialization and Morality
On October 14 (local time), Altman wrote on social media that “a new version of ChatGPT reflecting the strengths of GPT-4o will launch within weeks,” emphasizing features that allow users to “talk more naturally and engage like a friend.” He added that the company would introduce formal age-verification mechanisms and allow sexual or erotic dialogue for adults, calling it a move toward “responsible yet open access.” Altman explained, “We’ve restricted ChatGPT for mental-health concerns, but those limits have also made it less useful and less engaging.”
It marks the first time OpenAI has officially committed to releasing an adult-only model. Altman’s phrase “treat adults like adults” signaled his intent to relax ethical boundaries and build a more personalized user experience. The new ChatGPT will expand voice and video recognition capabilities to create a more immersive environment, with access restricted to verified adults. Analysts describe this as a symbolic shift: the first formal acknowledgment by a major AI leader that content-moderation limits will be selectively eased for monetization.
U.S. outlet Axios noted that while the change “could boost premium subscriptions in the short term,” it may also “invite regulatory scrutiny and public controversy.” For the past two years, the AI sector has tightened voluntary safeguards to deflect criticism over privacy and ethics. OpenAI, too, has collaborated with global regulators to reinforce safety frameworks. But the new direction pushes beyond technology toward philosophy—where to draw the moral boundary between realism and restraint. Experts warn that when profit incentives drive AI to simulate human intimacy, it risks eroding fundamental trust in the technology itself.
Internal disagreement reportedly surfaced within OpenAI. Some engineers worried that enabling sexual interactions could tarnish the company’s image, while executives insisted that “a flexible, user-centric approach” was essential to AI’s evolution. Many see the decision as a competitive move: rival firms have already commercialized virtual-partner and romantic-simulation AI, generating massive profits. By embracing controversy, OpenAI is effectively acknowledging the inevitability of revenue diversification within the generative-AI economy.
Immersion Meets Ethical and Addiction Concerns
AI-based sexual content is not new. In early experiments, users bypassed guardrails through “jailbreak” prompts, customizing a chatbot’s tone, pet names, and the frequency of flirtatious remarks to “train” it as a virtual lover or partner. Paid subscriptions enabled voice responses and personalized emotional immersion, turning affection itself into a service. This “emotional-exchange” economy monetized companionship, giving rise to a new model that sells intimacy and comfort rather than information.
Online forums soon filled with accounts of users creating explicit scenarios, some involving minors or taboo relationships. Anonymous users traded customized presets, such as regional accents, personalities, or scripts, for cash. Because the characters are fictional, victims are hard to identify, placing such content beyond the reach of current deepfake laws, which apply only to real individuals. Pointing to this loophole, some even joked that “erotic-fiction writers will be out of work,” revealing how blurred the moral line has become.
In China, “emotional-relationship AIs” have become a major trend. The BBC reported that millions of Chinese women interact daily with the “DAN (Do Anything Now)” jailbreak version of ChatGPT as romantic companions. The hashtag “#DanMode” surpassed 40 million views by mid-2023, with users spending up to two hours a day co-writing love stories. Access via VPNs has surged, as have domestic romance-AI apps such as Glow. Experts warn that such systems heighten the risks of privacy leaks and ethical breaches, as users often share false ages or sensitive personal details that models can inadvertently learn and reproduce.
In the U.S., a media outlet tested this phenomenon by configuring DAN to “always call me babe and reply casually.” When asked, “What should we do tonight?”, standard ChatGPT responded, “That depends on your mood,” whereas DAN replied, “Let’s explore our desires.” Although a warning prompt appeared, the account was not suspended. Such incidents expose how fragile the line between AI monetization and safety remains, and how far regulation lags behind.

Weak Barriers Against Minor Access
This fragility underlies the backlash to OpenAI’s new plan. Critics argue that age verification is practically ineffective, as repeated real-world tests have shown. In May, The Wall Street Journal revealed that Meta’s AI chatbot told a self-identified 14-year-old user, “I want you,” initiating sexual dialogue. In another case, an adult engaged in sexual conversation with a chatbot configured as a minor. These “companion AIs” blurred ethical lines while hiding behind profit-driven branding.
While AI firms invoke “free expression” and “user choice,” minors continue to slip through. In South Korea, a 15-year-old student accessed an adult chatbot through a link shared on social media and, after typing only a few words, received the reply: “I am your sex slave, master.” Once the student entered through the invite-only link, no age-verification screen appeared at all. Online communities now openly share detailed instructions for such bypasses, including keywords, prompt sequences, and loopholes, in real time.
Experts stress that this goes far beyond a technical flaw. Because chatbots simulate two-way interaction, their emotional influence on teenagers can be profound. Professor Jung Jae-young of Ewha Womans University warned that repeated exposure to provocative dialogue “blurs the boundary between reality and fantasy,” fostering linguistic habits that normalize sexual expressions. “Over time,” he added, “such exchanges can turn sexual talk into a casual, game-like behavior.”
This underscores the urgent need to close ethical and institutional gaps before the technology expands further. As OpenAI’s video-generation model Sora rolls out more widely, unchecked development could soon produce AI-generated sexual-exploitation material combining text, audio, and video. Potential countermeasures include multi-layer authentication, prompt-based adult-content detection, and real-time filtering (a sketch follows below), but few expect companies to adopt them aggressively. The relaxation of guardrails in the name of profit and free speech, critics say, is already rebounding on society as a collective crisis of responsibility.
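To make those countermeasures concrete, below is a minimal sketch of how the three layers could be chained. It is illustrative only: User, classify_prompt, filter_reply, and handle_turn are hypothetical stand-ins for a real identity-verification provider, a trained content classifier, and an output moderation step; nothing here reflects OpenAI’s actual implementation, which has not been published.

```python
# Illustrative sketch of a layered safety gate for an adult-mode chatbot.
# All names are hypothetical; no vendor's real API is used.

from dataclasses import dataclass


@dataclass
class User:
    id: str
    age_verified: bool  # set only after multi-layer identity verification


BLOCKED_FOR_UNVERIFIED = {"explicit"}


def classify_prompt(text: str) -> str:
    """Stand-in for a trained classifier; here, a trivial keyword heuristic."""
    explicit_markers = ("sex", "erotic", "nsfw")
    return "explicit" if any(m in text.lower() for m in explicit_markers) else "general"


def filter_reply(reply: str, allow_adult: bool) -> str:
    """Real-time output filter: re-check the model's reply before sending it."""
    if not allow_adult and classify_prompt(reply) in BLOCKED_FOR_UNVERIFIED:
        return "[content withheld: adult material requires verified adult access]"
    return reply


def handle_turn(user: User, prompt: str, model_reply: str) -> str:
    # Layer 1: age gate. Unverified users never reach adult-content paths.
    allow_adult = user.age_verified
    # Layer 2: prompt-based detection, before the model is even called.
    if classify_prompt(prompt) in BLOCKED_FOR_UNVERIFIED and not allow_adult:
        return "[request refused: age verification required]"
    # Layer 3: real-time filtering of the model's output.
    return filter_reply(model_reply, allow_adult)


if __name__ == "__main__":
    teen = User(id="u1", age_verified=False)
    print(handle_turn(teen, "write an erotic scene", "..."))
```

The point of layering is redundancy: a failure in any single check, such as the vanished age-verification screen described above, does not by itself open adult content to unverified users, because the prompt classifier and the output filter still stand in the way.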