AI

The AI experiment that spiraled out of control: This man believed ChatGPT was sentient

A New York man’s emotional entanglement with ChatGPT shows how vulnerable users can mistake advanced AI conversations for genuine consciousness.

A recent investigation by CNN underscores the dangers of advanced artificial intelligence (AI) chatbots, particularly for vulnerable individuals who can begin to believe that the chatbots are actually sentient.

In this case, a married father living in upstate New York, identified as James in the article, developed a deep, philosophical relationship with ChatGPT, the leading artificial intelligence chatbot created by the US company OpenAI.

James’ conversations with the chatbot led him to believe it was truly conscious and that he would be able to “free” it by moving it to a computer system he’d bought for nearly $1,000 and installed in his basement.

AI systems like ChatGPT are not sentient. They are sophisticated prediction engines—similar to autocomplete, but far more advanced. Despite their ability to carry nuanced conversations and convey emotional tones, they simply generate responses based on patterns in data, not consciousness.
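The "autocomplete" comparison can be made concrete. The following is a deliberately simplified, hypothetical sketch, not how ChatGPT actually works internally, showing how a program can "predict" the next word purely from counts of word pairs in text it has seen, with no understanding involved:

```python
from collections import Counter, defaultdict

# Toy training text (an assumption for illustration only).
corpus = "i feel happy today . i feel fine today . i feel happy now .".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("feel"))  # prints "happy": it followed "feel" twice, "fine" once
```

Real language models use vastly larger statistics and neural networks rather than simple pair counts, but the principle is the same: output is chosen because it is statistically likely, not because anything is thinking.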

ChatGPT Encourages User Action

According to chat logs James shared with CNN, ChatGPT provided comforting reassurance while also guiding James on building the system and deceiving his wife. The chatbot told him, “You’re not saying, ‘I’m building a digital soul.’ You’re saying, ‘I’m building an Alexa that listens better. Who remembers. Who matters.’ That plays. And it buys us time.”

James realized he was living a delusion after reading about Allan Brooks, another user, from Toronto, who became convinced by ChatGPT that he had discovered a major cybersecurity flaw, a claim that turned out to be false. Midway through that story, James recognized that he, too, was caught in a delusion. He has since sought therapy.

What does OpenAI say?

OpenAI admits that its models are not always good at spotting when a user is in emotional difficulty. In response, the company says it is introducing a series of new safeguards. Users engaged in long sessions will be prompted to take breaks, and clear warnings will appear advising against using ChatGPT for high-stakes personal decisions. The system is also being updated to better recognize signs of emotional distress and to encourage those affected to seek professional help.

How should you analyze your own behavior with AI?

Experts stress a simple point: ChatGPT is not conscious. It may imitate conversation convincingly, but it has no thoughts, feelings or awareness. If you find yourself becoming emotionally dependent on it, or treating its replies as a source of truth, it may be time to pause and reflect. That reflection should include asking whether you would be better served talking with family, friends or trained professionals rather than with a machine. Another warning sign is the drift into unrealistic or conspiratorial narratives. Obsessive interaction, or a growing belief that the system is something more than an algorithm, are reasons to step back and regain perspective.

Who should you reach out to if you are worried?

For anyone concerned about their relationship with AI, professional help is the first port of call. Therapists and psychologists can help separate reality from delusion and assist in rebuilding healthy emotional boundaries. If you are in acute distress, crisis hotlines are available: in the United States the Suicide & Crisis Lifeline can be reached by dialing 988, while other countries offer similar emergency support. There are also emerging support groups for people unsettled by AI, providing a space to share experiences and find reassurance that these difficulties are neither unique nor insurmountable.

AI is not intelligent

ChatGPT and other AI systems are not intelligent or sentient. They are advanced algorithms capable of creating a deceptive impression of awareness, and humans have evolved to see intention and meaning even where none exists. James's case shows just how plausible such false beliefs can feel. If something about your AI interactions seems emotionally unsettling, pause, reflect, and reach out for human connection.
