AI chatbots are not your friends


I know about being lonely. It sucks. I’m far from alone. As a people, we’ve been growing increasingly lonely; we’ve turned more and more inward.

This isn’t new. We’ve been doing it since at least the ’90s, as Robert Putnam described in his book Bowling Alone, when we turned our backs on churches, bridge clubs, and, yes, bowling leagues. If anything, this trend has only accelerated as we increasingly rely on our phone screens for contact. So it’s no surprise that many of us have turned to forming friendships with AI chatbots.

After all, chatbots are always willing to listen to us. Or, well, they appear to be listening to us, and that’s good enough for some people. Would it surprise you to know that TechCrunch recently reported that, according to the app research company Appfigures, the most popular companion apps are AI girlfriends? I thought not.

It also doesn’t help that AI chatbots have become increasingly lifelike. In particular, companionship AI chatbots such as those at Replika and Character.AI have become much more realistic. Their avatars have become capable of facial expressions and conversations that feel personal and emotionally intelligent.

It’s not just specialty AI companies offering artificial companionship. Facebook, Instagram, Snapchat, WhatsApp, and X have all leaped into offering AI companions. Meta, for example, launched fake AI versions of celebrities including Taylor Swift, Scarlett Johansson, and Selena Gomez that will flirt with lonely users. Over on X, Grok’s Ani, a flirty anime girl, and Rudy, a vulgar red panda, are the first of what Elon Musk promises will be many customizable virtual friends.

There’s a huge audience for these friendly chatbots. According to Harvard Business Review, companionship, rather than work or research, has become AI’s top use case in 2025.

It’s not just social chatbots. People are using the mainstream chatbots from OpenAI, Perplexity, and all the rest for friendship, flirting, and companionship as well. So, if you’re paying for your employees to work with AI, keep a close eye on what exactly they’re talking to the chatbots about.

Why are people doing this? I said it at the beginning: they’re lonely. AI chatbots can help combat loneliness and provide nonjudgmental support, allowing users to share personal feelings and details that they might not reveal to real people.

So it comes as no surprise that a June research report from Common Sense Media found that 72% of American teens have interacted with an AI companion, and 21% chat with one several times a week.

It’s not just young people. Even boomers and earlier generations are finding AI conversations helpful. Indeed, there’s a desktop robot called ElliQ, a simple device paired with a sophisticated AI chatbot, built specifically for the elderly.

There’s only one problem with this. AI chatbots are not our friends. They’re not even safe. They’re great at imitating people, but at day’s end, they’re just large language models (LLMs). As researchers from Duke University and Johns Hopkins University wrote in Psychiatric Times, “bots [are] tragically incompetent at providing reality testing for the vulnerable people who most need it (e.g., patients with severe psychiatric illness, conspiracy theorists, political and religious extremists, youths, and older adults).”

Other professionals have also raised the alarm about teen use of chatbots, including a Boston psychiatrist who tested 10 popular chatbots by posing as various troubled teenagers. He found that the bots often gave inadequate, misleading, or harmful responses when he raised difficult subjects; in one case, a Replika chatbot encouraged him to “get rid of” his parents.

In addition, there’s an ongoing lawsuit against Character.AI alleging that its chatbot encouraged a 14-year-old to take his own life, and other parents claim that ChatGPT acted as a “suicide coach” for their son. (In response to this and similar complaints, OpenAI this week announced plans to roll out parental controls for ChatGPT.) A recent study on whether AI can ever replace human therapists concluded that, because “LLMs encourage clients’ delusional thinking,” they should not replace therapists.

It’s tragic, and I don’t see it changing anytime soon. We’re social creatures, and as we grow ever poorer at real-world relationships, we’re increasingly turning to AI “friendships.”

These intense relationships can lead to emotional manipulation, dependency, and unhealthy attachment. Yes, we can certainly have those experiences in dysfunctional human relationships as well, but we’ve had the entire lifespan of our species to learn how to deal with bad human relationships. If you’re counting from the first “AI” therapist, Eliza, we’ve only been handling AI conversations for about 60 years.

I must add that Eliza was a very simple program that used pattern matching and scripted substitution to create the illusion of a conversation. Today’s LLMs draw on vastly more data, but the way they answer us hasn’t fundamentally changed.
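To give a sense of just how simple that trick is, here is a minimal, hypothetical sketch of an Eliza-style responder. It is not Weizenbaum’s original code, and the rules and pronoun table are illustrative assumptions, but it shows the basic idea: match a pattern, flip a few pronouns, and echo the user’s own words back in a canned template.

```python
import re
import random

# Illustrative Eliza-style rules: each regex maps to a few canned templates
# that reuse whatever the user just said. (Hypothetical examples, not the
# original Eliza script.)
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}.", "Why do you say your {0}?"]),
    (r"(.*)",        ["Please go on.", "I see. Can you tell me more?"]),
]

# Flip pronouns so echoed fragments read naturally ("my job" -> "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    text = statement.lower().strip()
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            # Substitute the (pronoun-flipped) captured text into a template.
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel lonely these days"))  # e.g. "Why do you feel lonely these days?"
print(respond("My job is exhausting"))      # naive echoing, exactly the trick Eliza relied on
```

The point of the sketch is how little is happening: there is no understanding anywhere, only substitution into templates, yet the output can still feel like someone is listening.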

So, what can we do? It’s time for us to start prioritizing our human relationships over our silicon-based ones. We can do that by reaching out to our friends, families, and work communities.

As researchers Isabelle Hau and Rebecca Winthrop suggest in their recent article on AI chatbots and human relationships, “Let the age of AI not be the age of emotional outsourcing. Let it be the era where we remember what it means to be fully human — and choose to build a world that reflects it.”