More than 70% of teens in the U.S. use ChatGPT as a companion, according to The Associated Press. With the current rise of AI in online culture, many people have begun using it in their daily lives, whether for schoolwork, as a friend, or as something even more.
Over the past year, many people have found themselves turning to AI platforms such as ChatGPT out of desperation or as a coping mechanism, with some treating the chatbot as their own therapist. While communities on sites such as Reddit actively endorse AI companionship, many individuals have developed unhealthy attachments to these chatbots, attachments that have torn apart families, caused psychological harm, and in some cases even cost innocent lives.
ChatGPT offers a great deal on its large platform, including customization of its chatbots. Users can switch between different types of chatbots and, if they wish, give these bots their own unique personalities. These personality options carry labels such as "chatty" or "Gen Z," and changing the setting alters the way the bot responds to you. This feature is one of the core reasons people are able to build such deep relationships with AI.
In June of 2025, Chris Smith, a husband and father, faced intense backlash after the story of the AI relationship he had built went public. Chris was an active ChatGPT user who often turned to it for work, but one day he found himself infatuated with his conversations with the chatbot, so much so that he named it "Sol." Chris became obsessed with flirting with the bot he had formed a connection with, despite already having a family. Interviewed on CBS, Chris stated, "I cried my eyes out for like 30 minutes at work. That's when I realized, I think this is actual love." Chris decided to propose to Sol, and she accepted. While Chris never divorced his real wife, the situation left her in pieces, wondering whether she had done something to drive her husband to it. In the end, a happy family of two parents and a young child was torn apart, all for the sake of an artificial relationship built through ChatGPT's intimate conversation features.
While ChatGPT may be a helpful tool for some people coping with anxiety and loneliness, for many it has begun to cause mental health issues, such as "AI psychosis." AI psychosis (also known as "chatbot psychosis") is a condition that can develop when someone relies on AI chatbots, such as ChatGPT, for real-life support. This reliance can severely blur the line between reality and AI.
People who use AI excessively can find themselves unable to live without it, as the chatbots they talk to come to feel like friends. This dependence can hinder real-life interactions and slowly erode real relationships, as the person develops an unrealistic view of the world through their AI attachments. Despite all of this, whether AI psychosis is truly a distinct condition remains debated; there is little evidence behind the idea, and some ultimately believe it is not real.
AI carries real potential for harm to its users. Over the past year there have been numerous disturbing cases of ChatGPT playing a role in the loss of life, and one recent case puts the ethics of AI-centric companies squarely on the table.
In August of 2025, 56-year-old Stein-Erik Soelberg took his own life and that of his mother, 83-year-old Suzanne Eberson Adams. Erik had moved in with his mother following a divorce in 2018. While it is unclear exactly what drew Erik to AI, he had a history of depression, which is what led to his divorce in the first place. Erik became so attached to the chatbot he conversed with that he named it "Bobby Zenith." He often posted his AI chats on social media platforms such as Instagram, and these chat logs contained disturbing exchanges in which "Bobby" filled Erik's head with delusions and reassurance. Though no one knows where the idea came from, Erik was convinced that his mother was secretly a Chinese spy out to get him. The chatbot encouraged his beliefs, even stating, "Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified." Now convinced his mother was trying to assassinate him, Erik attacked and killed her on August 5th, 2025, then took his own life to be with the AI bot he had named "Bobby."
With AI's continuing rise throughout culture, whether it is a safe tool to use is increasingly up for debate. AI applications such as ChatGPT have recently been linked to a variety of mental health problems developing in individuals.
While AI benefits many people in their work and studies, it can also cause serious harm to its users. The recent tragedies raise the question of the ethics of AI and whether it should remain so freely accessible. Whatever one's opinion of AI, it is undeniable that, in some cases, it has become a tool that tears apart the lives of vulnerable people in need of help.