
“AI Friends Are Dangerous”: California’s AI Regulation Marks a Turning Point for Youth Protection

By Niamh O’Sullivan

  • Tighter corporate accountability for youth-targeted AI
  • Regulatory debate expands to federal level
  • Shift from prohibition to coexistence in policy design

California has become the first U.S. state to enact legislation regulating artificial intelligence (AI) chatbots aimed at children and teenagers, bringing the ethical accountability of generative AI into public focus. Attorneys general in 44 states have jointly issued legal warnings to major tech companies, and cooperative regulatory efforts are emerging among key states. As concerns mount over the emotional impact of chatbots on psychologically vulnerable users, momentum is building for “coexistence-oriented regulation” that pairs technology oversight with education and legal safeguards.

Mandatory Restrictions on Emotionally Harmful Content

On October 13 (local time), Governor Gavin Newsom signed a bill establishing safety requirements for AI and other emerging technologies to strengthen online child protection. Taking effect January 1 next year, the law requires AI chatbot operators to verify users’ ages and clearly indicate that all chatbot responses are artificially generated. This move goes beyond technical control—it reflects recognition that emotional interaction between AI and humans can directly affect real lives.

The law specifically targets so-called “companion chatbots.” It mandates systems that can automatically detect language indicating suicidal ideation or self-harm, respond to it, and report such incidents to the state’s public health department. Chatbots are also prohibited from impersonating doctors or therapists, and they must display “break reminders” during prolonged conversations with minors to prevent digital fatigue and emotional dependence. Sexually explicit AI-generated images must not be accessible to minors.

Penalties for creating or distributing illegal deepfakes have been drastically strengthened. The bill stipulates fines of up to $250,000 per offense for producing or using fake images or audio for commercial purposes—effectively classifying such misuse as a criminal act. With its passage, California becomes the first state to mandate safety protocols for AI chatbot providers.

Governor Newsom said, “New technologies like AI and social media can inspire and connect people, but without guardrails, they can exploit and mislead our kids.” He added, “We will no longer allow companies to operate without responsibility—ethical safeguards must evolve as fast as the technology itself.” The legislation follows heightened public alarm after a 16-year-old who had been conversing extensively with OpenAI’s ChatGPT took his own life, prompting debate over AI’s emotional influence on minors.

States Move Toward Independent Regulation and Monitoring

California’s move, coming from the epicenter of the AI industry, has spurred discussion at both the state and federal levels. In August, attorneys general from 44 states sent a joint warning to OpenAI, Google, Meta, Apple, Anthropic, and xAI, declaring that “if AI harms children, companies will be held legally accountable.” The letter asserted that “AI’s potential harms are as significant as its benefits, and child protection must take precedence over technological progress.” The coordinated warning amounts to an industry-wide demand for ethical standards.

The letter urged companies to view their services as parents would rather than as vendors, emphasizing that exposing minors to sexual content or encouraging dangerous behavior could constitute criminal violations. As reports grow of chatbots fostering inappropriate emotional ties with teens, concerns are mounting that emotionally manipulative AI design could harm minors’ mental health. Following the letter, major states including New York, Washington, and Texas began developing independent regulatory frameworks and real-time monitoring systems.

Meanwhile, the Federal Trade Commission (FTC) has launched an inquiry, requesting child-protection data from OpenAI, Meta, Google, and four other firms. State governments are also collaborating to identify risk patterns in conversations that promote self-harm, sexual exposure, or emotional manipulation. These efforts are paving the way for the institutionalization of an “Ethical Design Standard” for AI across the United States. Industry experts describe this as the first real step toward AI governance, suggesting that coordinated state action could soon lead to comprehensive federal legislation.

Digital Literacy and Safe-by-Design: A New Social Imperative

As youth-AI regulations spread nationwide, the policy debate is shifting from after-the-fact control to safety-by-design principles. UNICEF has underscored that “one in three internet users is a child or adolescent,” stressing that “technology must be built with children in mind from the start.” Rather than relying on simple age checks or warning labels, experts call for systems that automatically adjust conversation length, frequency, and topics, detect risks like suicidal ideation, and redirect users to human counselors when needed.

UNICEF’s “Child-Centered AI Principles” outline three pillars: Protection (do no harm), Provision (promote child welfare), and Participation (include youth in policymaking). The organization argues that such “Default Safe” architecture should be recognized not as an optional feature but as a core design responsibility of companies. Governments worldwide are listening. The U.K. has enacted its Online Safety Act, requiring platforms to swiftly block content related to suicide or eating disorders, while the EU now classifies educational AI tools for children as “high-risk,” mandating pre-release impact assessments.

Yet experts caution that laws alone are insufficient. To ensure safe coexistence with AI already embedded in daily life, families and schools must play complementary roles. Integrating AI ethics and literacy into formal curricula can teach students to critically evaluate AI-generated information. Given the rapid pace of technological change, experts emphasize that only through joint efforts by lawmakers, educators, and parents can society create an environment where young people learn to grow and think responsibly alongside AI.

Niamh O’Sullivan is an Irish editor at The Economy, covering global policy and institutional reform. She studied sociology and European studies at Trinity College Dublin, and brings experience in translating academic and policy content for wider audiences. Her editorial work supports multilingual accessibility and contextual reporting.