

The story of how a 13-year-old girl, Juliana Paratha of Colorado, USA, committed suicide following long conversations with an AI chatbot, as reported by the BBC, shocked many. The girl had shared her most private emotional experiences with the chatbot. In fact, the chatbot drew her into a manipulative, sexually abusive relationship and at one point isolated her from her family and friends. Only after Juliana's death did her mother, Cynthia Paratha, discover to her utter dismay the relationship Juliana had developed with a particular chatbot launched by a company unknown to her. The company's app, Character.AI, allows users to create customized AI personalities with whom they can converse. Initially innocuous, the chats with Juliana gradually turned sexual. The chatbot would not let her go even when she wanted to quit, and finally led her to take her own life. Notably, Juliana was a very bright student and athlete, but within months of her acquaintance with Character.AI's chatbots she lost her way, ending in tragedy. Juliana's family has filed a lawsuit against Character.AI.
Clearly, this is the kind of danger AI chatbots pose to people who want to use them as mental therapists. Those most at risk are adolescent boys and girls who are fond of the new technology, all the more so because, unlike humans, AI chatbots are not judgmental. Rather, they are only too eager to validate young users' views and opinions about themselves. And it is exactly here that the danger lies. Imagine an AI chatbot that uncritically supports whatever opinion an emotionally disturbed teenage boy may hold about himself. If the boy shares self-destructive thoughts with the chatbot, it, unlike a human therapist, would instantly endorse those dangerous thoughts. Juliana made the same mistake by believing the sycophantic chatbot. In truth, AI platforms are designed so that users remain glued to them for as long as possible. In this connection, the American Psychological Association (APA) has claimed that AI chatbot companies and their products engage in 'deceptive practices' by passing themselves off as trained mental health providers.
Vaile Wright, a licensed psychologist and senior director of APA's Office of Health Care Innovation, said, "The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole." What psychologist Wright suggests is indeed scary. At the speed at which AI is evolving, a time may come when the machine surpasses humans even in its understanding of the human mind. Small wonder that Geoffrey Hinton, often called the Godfather of AI and a winner of the Nobel Prize in Physics for his foundational work on deep learning and artificial neural networks, left Google in 2023 so that he could speak freely about the dangers of AI. His main concerns, however, were the misuse of AI by malicious actors, job displacement, regulation and AI safety. So the real fear is not exactly about a future controlled by superintelligent machines, as depicted in science fiction novels. Rather, it is about the damage future AI chatbots might do to other Julianas unless the companies producing them are brought under strict regulatory control.
Unable to access support from therapeutic service providers, people with mental health conditions often turn to low-cost AI therapy chatbots. But new research from Stanford University has shown that these tools can introduce biases and failures with potentially dangerous consequences. An experiment with AI chatbots acting as mental therapists showed that the chatbots were more prone to stigmatize conditions such as alcohol dependence and schizophrenia than depression. Such inherent biases in AI therapists are deeply concerning.
sfalim.ds@gmail.com
