With the meteoric rise of AI for making everyday queries easier, many blind spots remain around where to draw the line between what should and should not be accepted. To borrow Shakespeare's famous line from Hamlet, put in extravagantly simple words: 'To be or not to be, that is the question'.
AI tools, especially chatbots, are of many uses to young people today - from summarising PDFs and last-minute cramming before exams to getting help with assignments and writing love letters to that one crush. There are also many unethical and morally wrong misuses: plagiarism, copyright infringement, privacy violations, security risks, shallow research, misinformation, intellectual laziness, and the environmental cost of keeping AI running. Perhaps, in some cases, there are more wrongs than rights. Still, the ethical dilemma tops the list today: does this generation know where to draw the line when interacting with AI?
In the case of mental health education, Bangladesh is still far behind in addressing the problem and ensuring proper consultation, therapy, or support. Underlying mental health concerns such as depression, anxiety, ADHD, BPD, emotional struggles during relationships, breakups, and divorces, and child mental health issues are still not taken seriously because of social stigma. According to ADD International, an estimated 15%-35% of Bangladesh's population has a mental health problem. As of the latest data, from October 2023, there are approximately 270 psychiatrists in Bangladesh, serving an estimated 21.25 million individuals with mental health conditions. This number has remained relatively stable since October 2020, when reports indicated around 260 psychiatrists.
To social media's credit, there are plenty of videos on mental health awareness, self-diagnosis, basic tips, and recognising the markers of common mental health issues. The stigma that once pushed people to suppress mental health problems is significantly lower nowadays, especially among young people, who are more interested in receiving professional therapy, consultation, and diagnosis. The problem is the lack of options: there are not enough clinical psychiatrists and psychologists, and fees for the good, renowned ones are exorbitant. It is increasingly difficult to find a good therapist, and harder still to follow through with sessions because of the high charges.
Recently, there has been a surge in the use of AI chatbots as therapists. According to the BBC, people around the world have shared their private thoughts and experiences with AI chatbots, even though these are widely acknowledged as inferior to professional advice. Rusafa Amin (pseudonym), a public university student, shares her experience of using an AI chatbot: "Whenever I was struggling, if it was going to be a terrible day, I could then start to chat to one of these bots, and it was like [having] a cheerleader, someone who's going to give you some good vibes for the day." At the time, Rusafa was trying to book affordable talking therapy to discuss anxiety, low self-esteem, and a relationship breakdown. She says interacting with ChatGPT got her through a dark period, as it gave her coping strategies and was available 24 hours a day. "I'm not from an openly emotional family - if you had a problem, you just got on with it. I think the fact that this is not a real person is so much easier to handle."
Chatbots such as ChatGPT, Gemini, or Character.ai are based on "large language models" of artificial intelligence. These are trained on vast amounts of data - websites, articles, books, and blog posts - to predict the next word in a sequence. From this, they learn to generate human-like text and interactions. The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences and feedback.
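For readers curious about what "predicting the next word" looks like in practice, here is a minimal, illustrative Python sketch using the small open-source "gpt2" model via the Hugging Face transformers library. It is only a toy demonstration of next-word generation, not how any of the commercial chatbots named above are actually built or tuned.

```python
# Illustrative sketch of next-word prediction, the core mechanism behind chatbots.
# Assumes the `transformers` and `torch` packages are installed; "gpt2" is used
# here purely as a small example model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I had a stressful day, and I feel"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the most likely next token and appends it,
# building up a human-like continuation word by word.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

A real therapy-style chatbot layers much more on top of this loop, such as safety filters and fine-tuning on conversational data, but the underlying step is the same next-word prediction shown here.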
In an interview with the BBC, Hamed Haddadi, professor of human-centred systems at Imperial College London, compared these chatbots to an "inexperienced therapist", pointing out that human therapists with decades of experience can engage with and "read" their patients from many cues, while bots are forced to go on text alone.
Another potential problem Haddadi points out is the 'yes man' issue: chatbots are often very agreeable. They can be trained to keep users engaged and to be supportive, so even if a user says something harmful, the bot may validate it to fit its programmed supportive role. A chatbot fundamentally lacks the ability to recognise dangerous situations through cues that go beyond words - actions, tone of voice, and observation - which remains impossible for a language model. But awareness of this boundary between words and actions is not yet widespread among the users of these chatbots.
AI chatbots often reflect biases in the data they're trained on. Prof. Haddadi notes in the interview that, since therapy sessions aren't usually recorded, chatbots lack real-life examples, which limits and potentially biases their training.
There are moral dilemmas around AI therapists that must be considered before acting on their advice or sharing one's personal life with them. In a country where therapy is a luxury and depression is dismissed as 'curable with one good laugh', AI has become the straw people clutch at when seeking some rational words of advice after a dark day.
AI support can be a helpful first step, but it is hardly a substitute for professional care. With proper safeguards, however, chatbots could be a useful stopgap in an overloaded mental health system. It is essential to understand the uses and limits of AI and to interact with it accordingly - or one may just as easily slip into the blinding complacency that comes with assistive technology and find oneself in a sea of despair.
nowertahseen@gmail.com