February 21, 2025
As artificial intelligence chatbots gain popularity among users seeking companionship online, youth advocacy groups are ramping up protective legal efforts over fears that children can form unhealthy, dangerous relationships with human-like creations.
Chatbot apps like Replika and Character.AI belong to the fast-growing generative AI companion market, in which users customise virtual partners with nuanced personalities that converse with them and simulate close relationships.
Developers say AI companions can combat loneliness and improve users' social experiences in a safe space.
However, several advocacy groups have sued developers and are lobbying for stricter regulation, claiming chatbots have pushed children to hurt themselves and others.
Matthew Bergman, founder of the Social Media Victims Law Center (SMVLC), is representing families in two lawsuits against chatbot startup Character.AI.
One of SMVLC's clients, Megan Garcia, says her 14-year-old son took his own life due in part to his unhealthy romantic relationship with a chatbot.
Her lawsuit was filed in October in Florida.
In a separate case, SMVLC is representing two Texas families who sued Character.AI in December, claiming its chatbots encouraged an autistic 17-year-old boy to kill his parents and exposed an 11-year-old girl to hypersexualised content.
Bergman said he hopes the threat of legal damages will financially pressure companies to design safer chatbots.
"The costs of these dangerous apps are not borne by the companies," Bergman told Context/the Thomson Reuters Foundation.
"They're borne by the consumers who are injured by them, by the parents who have to bury their children," he said.
A products liability lawyer with experience representing asbestos victims, Bergman argues these chatbots are defective products designed to exploit immature kids.
Character.AI declined to discuss the case, but in a written response, a spokesperson said it has implemented safety measures like "improvements to our detection and intervention systems for human behaviour and model responses, and additional features that empower teens and their parents."
In another legal action, the nonprofit Young People's Alliance filed a Federal Trade Commission complaint against the AI chatbot company Replika in January.
Replika is popular for its subscription chatbots that act as virtual boyfriends and girlfriends who never argue or cheat.
The complaint alleges that Replika deceives lonely people.
"Replika exploits human vulnerability through deceptive advertising and manipulative design," said Ava Smithing, advocacy and operations director at the Young People's Alliance.
It uses "AI-generated intimacy to make users emotionally dependent for profit," she said.
Replika did not respond to a request for comment.
Because AI companions have only become popular in recent years, there is little data to inform legislation and little evidence that chatbots generally encourage violence or self-harm.
However, according to the American Psychological Association, studies on post-pandemic youth loneliness suggest chatbots are primed to entice a large population of vulnerable minors.
In a December letter to the Federal Trade Commission, the association wrote: "(It) is not surprising that many Americans, including our youngest and most vulnerable, are seeking social connection with some turning to AI chatbots to fill that need."
Youth advocacy groups also say chatbots take advantage of lonely children looking for friendship.
"A lot of the harm comes from the immersive experience where users keep getting pulled back in," said Amina Fazlullah, head of tech policy advocacy at Common Sense Media, which provides entertainment and tech recommendations for families.
"That's particularly difficult for a child who might forget that they're speaking to technology."
Youth advocacy groups hope to capitalise on bipartisan support to lobby for chatbot regulations.
In July, the US Senate passed a federal social media bill known as the Kids Online Safety Act (KOSA) in a rare bipartisan 91-3 vote.
The bill would, in part, disable addictive platform features for minors, ban targeted advertising to minors and data collection without their consent, and give parents and children the option to delete their information from social media platforms.
The bill failed in the House of Representatives, where members raised privacy and free speech concerns, although Sen. Richard Blumenthal, a Connecticut Democrat, has said he plans to reintroduce it.
On Feb. 5, the Senate Commerce Committee approved the Kids Off Social Media Act that would ban users under 13 from many online platforms.
Despite Silicon Valley's anti-regulatory influence on the Trump administration, experts say they see an appetite for stronger laws that protect children online.
"There was quite a bit of bipartisan support for KOSA or other social media addiction regulation, and it seems like this could go down that same path," said Fazlullah.
To regulate AI companions, the youth advocacy group Fairplay has proposed expanding the KOSA legislation, as the original bill only covered chatbots operated by major platforms and was unlikely to apply to smaller services like Character.AI.
"We know that kids get addicted to these chatbots, and KOSA has a duty of care to prevent compulsive usage," said Josh Golin, executive director of Fairplay.
The Young People's Alliance is also pushing for the US Food and Drug Administration to classify chatbots offering therapy services as Class II medical devices, which would subject them to safety and effectiveness standards.
However, some lawmakers have expressed concern that cracking down on AI could stifle innovation.
California Gov. Gavin Newsom recently vetoed a bill that would have broadly regulated how AI is developed and deployed.
In contrast, New York Gov. Kathy Hochul announced plans in January for legislation requiring AI companies to remind users that they are talking to chatbots.
In the US Congress, the House Artificial Intelligence Task Force published a report in December recommending modest regulations to address issues like deceptive AI-generated images but warning against government overreach.
The report did not specifically address companion chatbots or mental health.
The principle of free speech may frustrate regulation efforts, experts note.
In the Florida lawsuit, Character.AI is arguing that the First Amendment protects speech generated by chatbots.
"Everything is going to run into roadblocks because of our absolutist view of free speech," said Smithing.
"We see this as an opportunity to reframe how we utilise the First Amendment to protect tech companies," she added.