Chatbots are adept at crafting polished dialogue and mimicking empathetic behavior. They never get tired of chatting. It’s no wonder, then, that so many people now use them for companionship—forging friendships and even romantic relationships.
According to a study from the nonprofit Common Sense Media, 72% of US teens have used AI for companionship. Although some large language models are designed to act as companions, people are increasingly pursuing relationships with general-purpose models like ChatGPT—something OpenAI CEO Sam Altman has expressed approval of. And while chatbots can provide much-needed emotional support and guidance for some people, they can exacerbate underlying problems in others. Conversations with chatbots have been linked to AI-induced delusions, have reinforced false and sometimes harmful beliefs, and have led people to believe they’ve unlocked hidden wisdom.
And it gets even more worrying. Families pursuing lawsuits against OpenAI and Character.AI allege that the companion-like behavior of their models contributed to the suicides of two teenagers. And new cases have emerged since: The Social Media Victims Law Center filed three lawsuits against Character.AI in September 2025, and seven complaints were brought against OpenAI in November 2025.
We’re beginning to see efforts to regulate AI companions and curb problematic use. In September, the governor of California signed into law a new set of rules that will force the biggest AI companies to disclose what they’re doing to keep users safe. Similarly, OpenAI has introduced parental controls in ChatGPT and is working on a new version of the chatbot specifically for teens, which it promises will have more guardrails. So while AI companionship is unlikely to go away anytime soon, its future is looking increasingly regulated.
