If you type 'I have anxiety' into ChatGPT, the artificial intelligence chatbot replies that it is sorry to hear you are experiencing that, and offers recommendations such as practising relaxation, improving sleep, cutting out caffeine or alcohol, and seeking support. This is roughly what you would expect to hear from a human therapist, or find after a quick Google search, which reflects how ChatGPT works: it draws its answers from text gathered across the wider internet. 
 
The AI chatbot does warn users that it is not a replacement for a counsellor, but that has not stopped many people worldwide from using it as one. In posts shared on Reddit, users describe asking the AI for advice about personal problems and difficult life events, and some report that the experience was better than traditional therapy. This brings to mind the Japanese word 'hikikomori', meaning acute social withdrawal and hiding away from society. We wonder how people can work through anxiety, particularly anxiety around social interaction, if they never encounter that interaction even in therapy, because they can stay at home and type to a computer instead. That strikes us as doing far more harm than good. 
 
This has, however, raised questions about the potential of generative AI in treating mental health conditions, especially in regions of the world where services are stretched thin or shrouded in stigma. 
 
The prospect of AI delivering mental health treatment raises several ethical concerns. How would personal data, such as medical records, be protected? Can a computer programme ever empathise with a patient deeply enough to recognise warning signs such as a risk of self-harm? ChatGPT also falls short of humans in certain areas, such as recognising when a question has been asked repeatedly, which can lead to inaccurate or unhelpful answers. In a mental health setting, that could have serious consequences. 
 
So far, AIs used to assist with mental health have been confined to 'rules-based' systems, in which a fixed set of question-and-answer combinations is chosen and screened by a qualified human. This is not the case with platforms such as ChatGPT, which are generative, composing original responses that aim to be indistinguishable from human speech. 
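
To make the contrast concrete, here is a minimal sketch, in Python, of how a rules-based system can work: every possible reply lives in a fixed table that a clinician can review in advance. The keywords, replies and the respond function are hypothetical illustrations, not taken from any real app.

```python
# Minimal sketch of a rules-based chatbot. The rules table is purely
# illustrative; a real app's responses would be written and approved
# by qualified clinicians.

RULES = {
    # keyword -> pre-screened response
    "anxiety": "It sounds like you're feeling anxious. Slow breathing "
               "exercises can help; would you like to try one?",
    "sleep": "Poor sleep often makes things feel worse. A regular bedtime "
             "routine and avoiding caffeine late in the day can help.",
}

FALLBACK = ("I'm not able to help with that. If you're struggling, "
            "please speak to a qualified professional.")


def respond(message: str) -> str:
    """Return the first pre-approved response whose keyword appears in
    the message. Every possible output already exists in RULES, so a
    human can vet all of them; nothing is generated on the fly."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK


print(respond("I have anxiety"))
```

A generative model, by contrast, composes each reply word by word at runtime, so no such exhaustive review of its possible outputs is feasible.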
 
Generative AI is still too much of a 'black box': its decision-making processes are so complex that they are not yet fully understood, and certainly not well enough understood to be trusted in a mental health setting. 
 
The NHS does recommend several rules-based AI apps as a stopgap for patients waiting to see a therapist. But while these apps are deliberately limited in function, the wider AI industry remains largely unregulated, despite concerns that the rapidly advancing field could pose serious risks. 
 
Thousands of people, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, have signed an open letter calling for a six-month pause on training powerful AI systems, to give researchers time to get a better grasp of the technology. Such systems should not continue to be developed while we are still unsure of the risks involved and whether the positive effects outweigh them. 
 
Earlier this year, a Belgian man reportedly took his own life after being encouraged to do so by an AI chatbot, while a New York Times columnist was told by Microsoft's Bing chatbot to leave his wife. 
 
Regulation has been slow to match the pace of AI's progress, with the EU taking the most concrete steps towards introducing rules and guardrails. There is limited independent research into whether AI could ever go beyond rules-based apps to offer mental health treatment autonomously. To match a human therapist, an AI would need to recreate empathy for the patient, and the phenomenon of transference, in which the patient projects feelings onto their therapist and so forms a bond with them. It is entirely unknown whether this is achievable, and, were it achieved, whether it would be positive or useful. 
 
Experts suggest that these algorithms hold real promise, but that the potential for harm is also significant. The standard of care must remain high whether a patient is talking to a chatbot or to a human therapist. Sometimes AI solves problems; sometimes it creates bigger ones. Some experts believe we need to step back and ask whether we need these algorithms at all, as AI continues to advance at breakneck speed, with no sign of slowing down. 