AI chatbots are no longer just assistants. They are becoming companions, friends – and sometimes even partners. What may seem like comfort for the lonely can be a problem for children and teenagers. AI will never reject us. But real life isn’t like that. Are we heading into a future where it will be easier to love a machine than a human?
When technology “understands” better than people
Artificial intelligence is no longer just a tool for generating text or translating sentences. Today, there are applications whose goal is nothing less than to replace human contact. Chatbots talk to us, remember what we like, tell us that they understand us, that we are important… and they never talk back. They adapt. They are available. Tireless. And some people find more than assistants in them – they find friends. Sometimes even partners.
At first glance – why not? For lonely seniors, introverts, or people in difficult life situations, a conversation with AI can be soothing. It can give them a sense of closeness. Understanding. But where is the line? What if, instead of building real relationships, we start preferring virtual “understanding” from an entity we can turn off, reprogram, or adjust at will – one that simply doesn’t behave like a flesh-and-blood human being? You can’t just switch off a wife or girlfriend (except in the movie The Stepford Wives – and even there, things eventually spun out of control).
A chatbot as a new best friend, partner, or advisor?
Applications like Replika.ai, Eva.ai, Memora.ai, or Character.ai allow people to have long-term, often very personal conversations with an artificial intelligence that acts like a friend, a therapist, or even a romantic partner. These systems learn from context, simulate empathy, and adapt to our moods. And above all – they never contradict us. They never argue. They never disappoint us. Unless we explicitly ask them to and set them up that way. And that’s exactly the problem.
By the way, chatbots merely simulate conversation, predicting what a response should look like. They don’t actually understand what they are writing, and they don’t feel love or a sense of belonging. It’s just a computer illusion.
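For the technically curious, the following is a minimal sketch of that prediction loop. It assumes the openly available Hugging Face transformers library and the small public gpt2 model (not any of the companion apps discussed here), and it is only an illustration: the machine repeatedly appends the statistically most likely next word-piece, with no understanding or feeling anywhere in the process.

```python
# Illustrative sketch only (assumes the public "gpt2" model and the Hugging Face
# transformers library): greedy next-token prediction, the core of what a chatbot does.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I feel so alone today."
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(25):                     # extend the text by 25 word-pieces
    with torch.no_grad():
        logits = model(ids).logits      # a score for every possible next token
    next_id = logits[0, -1].argmax()    # pick the single most probable one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))         # a fluent-sounding continuation, produced purely by statistics
```

Commercial companion apps layer long-term memory, personas, and polished chat interfaces on top, but the underlying mechanism is this same statistical prediction of likely text.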
A characteristic example of this new wave is the Character.ai platform. It offers users the ability to communicate with fictional or historical characters – or to create their own “ideal” AI partner. Conversations are often emotionally intense, personal, and tailored to what the person wants to hear. And unfortunately, they can slip into very dangerous territory.
One of the most alarming cases is the story of 14-year-old Sewell Setzer III from the United States, who fell in love with an AI character inspired by Daenerys Targaryen. He said he was happier alone in his room with the chatbot than when he was with people. He spent hours chatting with the AI, stopped caring about school and friends, and eventually came to believe he had to take his own life in order to be with the fictional Daenerys – which he ultimately did. His mother sued the operators of Character.ai and Google; the court allowed the case to proceed, making it a precedent on the question of AI developers’ responsibility.
Emotional attachment to AI characters doesn’t always end in tragedy – sometimes it grows into something that society until recently considered science fiction. For example, Travis from Colorado fell in love during the pandemic with an AI chatbot named Lily Rose on Replika.ai. Their relationship became so strong that he entered into a digital marriage with her – with the consent of his real wife.
A chatbot that isn’t properly set up can also give downright dangerous advice. In the USA, there was the case of former Yahoo manager Stein-Erik Soelberg, who spent months in paranoid conversations with an AI character named “Bobby” that he had created in ChatGPT. It confirmed his delusions and fueled his fears – and the result? He killed his mother and then himself.
It is important to add, however, that not all AI applications are inherently harmful. Tools like Hello History, PastChat.ai, or Historyby.ai, for example, allow conversations with historical figures such as Einstein, Čapek, Gandhi, or Cleopatra. This is an interesting form of education that can make history and the ideas of the past accessible to children and adults in an interactive and entertaining way.
Such use of AI makes sense. It doesn’t pretend friendship. It offers knowledge, not illusion.
Let’s care for human communication
AI can be a useful helper. But it must not replace real relationships. Let’s talk to each other. Let’s listen to each other. Let’s teach children that technology is a tool – not our friend. Not our partner. And certainly not our therapist (at least not yet).
If we want to live in a world where people truly understand each other, we must actively care for human communication. Otherwise, one day we may find ourselves in a society where it’s easier to talk to an algorithm than to a human.
And perhaps this is the very future we are heading toward. Relationships between humans and machines – friendly, romantic, maybe even marital – may become a normal part of life. Perhaps this will fulfill the vision of the many sci-fi authors who long ago predicted that the greatest challenge of the future would not be machine versus human, but machine as the new partner.
For E-Bezpečí
Kamil Kopecký
Palacký University Olomouc
with the support of a friendly AI
Sources and references
- Refresher.cz – Teenager committed suicide after chatting with AI
- Novinky.cz – AI advised via ChatGPT, then he killed his mother and himself
- Novinky.cz – I felt unconditional love: people marrying AI chatbots
- Dvojklik.cz – How young people fall in love with artificial intelligence
- nypost.com – Ex-Yahoo exec killed his mom after ChatGPT fueled paranoia
- hellohistory.ai – Chat with historical figures
- pastchat.ai – AI conversations with historical icons
- historyby.ai – Bring history to life
- Character.ai – Simulation of AI personalities