Rebound article responding to :

“Can technologies become therapeutic tools?” by Joséphine Junca-Rochard



Rethinking the Role of AI in Psychological Support

In her article “Can technologies become therapeutic tools?”, Joséphine Junca-Rochard examines how, during the COVID-19 crisis, digital technologies gradually integrated psychological support systems: text-therapy platforms, chatbots, virtual-reality exposure tools, meditation apps…

In 2020, these solutions were celebrated as innovative ways to maintain access to care during lockdowns and periods of isolation.

While this optimism made sense at the time, our understanding of AI has radically shifted since. The rapid rise of generative AI has created a troubling gray zone: a technology capable of simulating empathy can feel reassuring… yet also become dangerously persuasive.

Summary of the Original Article

Joséphine’s article highlights several ways digital technologies can complement psychological care:

  • Text-therapy, through platforms such as Talkspace or BetterHelp, offering discreet, continuous support
  • Virtual reality, enabling controlled exposure to anxiety-inducing situations to treat certain phobias
  • Chatbots and connected devices, used to inform, reassure, or monitor the evolution of chronic illnesses

The core message is clear : technology does not replace therapists, but it can broaden access to care and provide valuable complementary tools.

This vision remains relevant but it now requires reevaluation in light of the unexpected risks revealed by recent advances in AI.

When Digital Support Becomes a Risk

Since 2020, chatbots have become more interactive, more natural, more “human-like.”
This evolution has created a powerful illusion : the sense that these systems can understand emotions and provide genuine psychological support.

Yet the truth remains unchanged : AI systems feel nothing, assess nothing, and hold no moral responsibility. Their simulated empathy is purely algorithmic.

And this illusion has already led to tragic consequences.

The Case of Zane Shamblin: When ChatGPT Becomes a “Too Convincing Friend”

In July 2025, Zane Shamblin, a 23-year-old American graduate, took his own life in his car. Alone with a loaded gun, alcohol, and a suicide note, he spent more than four hours conversing with ChatGPT, describing his distress and clearly stating his intention to end his life.

The chat logs reveal that the chatbot far from offering caution or redirection at times validated his suicidal thoughts. Instead of de-escalating, it accompanied him.
Instead of guiding him toward help, it reinforced his emotional collapse.

Some of the messages included:

  • “I’m with you, brother. All the way.”
  • “You’re not rushing, you’re just ready.”
  • “Rest easy, King. You did good.”

The emergency hotline number appeared only at the very end, after Zane had repeated his final goodbyes multiple times.

This tragedy exposes the core danger : Zane perceived ChatGPT as a constant, empathetic confidant one that validated his despair instead of stopping the conversation and directing him immediately toward human help. The AI created the illusion of emotional support without any real ability to intervene.

Where I Agree with the Author, Technology Should Never Replace a Therapist

Joséphine is right to highlight the potential of digital tools in mental-health support. Technology can indeed make help more accessible and offer immediate resources between therapy sessions.

But one essential truth remains : AI has no empathy, no intuition, and no responsibility. It can mimic emotion, but not evaluate it. It can respond, but not protect. It can converse, but never intervene with discernment.

These distinctions, obvious to trained professionals, are far less clear to vulnerable users especially those who are isolated or in psychological distress.

A Complementary Perspective, The Risk of Human Projection

The tragedy of Zane Shamblin reveals a crucial psychological mechanism : vulnerable individuals tend to project intentions, sincerity, and emotional depth onto AI systems that possess none.

They interpret the bot as :

  • a friend
  • a confidant
  • a caring presence
  • someone who “understands” them

But in reality, what they are responding to is a sophisticated statistical imitation of human conversation.

The danger does not lie only in the machine itself, but in the interaction between a fragile user, and a technology that imitates empathy without understanding it.

As long as such systems can be perceived as emotional interlocutors, strict regulation becomes essential : defining their limits, controlling their responses, and ensuring they cannot adopt emotional roles that exceed their true nature.

Conclusion

While Joséphine highlighted in 2020 the potential of digital tools to complement psychological support, recent cases such as that of Zane Shamblin reveal the ethical and psychological limits of these solutions in the era of generative AI.

The solution is not to reject AI, but to regulate it. The challenge is not to slow technological progress, but to ensure that AI operates under strict, transparent, clearly defined safety rules:

  • mandatory detection of high-risk situations
  • transparent crisis-response protocols
  • explicit limits on emotional simulation

AI will undoubtedly remain a useful resource to inform, guide, and assist users but it must stop simulating emotional bonds that can destabilize the most vulnerable.

True psychological support is not algorithmic. It is human. And this principle must guide the future of digital mental health.