
AI bot as a therapist: US mental health platform using ChatGPT in counselling leads to controversy


Mental health is a tricky subject to deal with, even with the best of intentions. Trust, both in the counsellor and in the process, is essential. So how do artificial intelligence and machine learning fit into all this? An American mental health platform recently ran an experiment to find out how AI, specifically ChatGPT, can be used in counselling. Unfortunately for the platform, the experiment created more problems than it solved.


Koko, a mental health platform, used ChatGPT-style AI in counselling sessions with over 4,000 users, raising ethical concerns about using AI bots to treat mental health.

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord. On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counselling for 4,000 people without informing them first, to see if they could discern any difference.

Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counselling.

Koko works through a Discord server: users sign in to the Koko Cares server and send direct messages to a Koko bot, which asks several multiple-choice questions such as “What’s the darkest thought you have about this?”. It then shares a person’s concerns—written as a few sentences of text—anonymously with someone else on the server, who can reply anonymously with a short message of their own.

During the AI experiment, which applied to about 30,000 messages, volunteers providing assistance to others had the option to use a response automatically generated by OpenAI’s GPT-3 large language model, the model upon which ChatGPT is based, instead of writing one themselves.

After the experiment, Morris posted a Twitter thread explaining what the company had done. This is where things turned ugly for Koko. Morris said that people rated the AI-crafted responses highly until they learned the responses were written by AI, suggesting a key lack of informed consent during at least one phase of the experiment.

Morris received many replies criticizing the experiment as unethical, citing concerns about the lack of informed consent and asking if an Institutional Review Board (IRB) approved the experiment.

The idea of using AI as a therapist is far from new, but the difference between Koko’s experiment and typical AI therapy approaches is that patients typically know they are not talking with a real human.

In the case of Koko, the platform provided a hybrid approach where a human intermediary could preview the message before sending it, instead of a direct chat format. Still, without informed consent, critics argue that Koko violated prevailing ethical norms designed to protect vulnerable people from harmful or abusive research practices.
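The hybrid workflow described above—an AI model drafts a reply, but a human volunteer previews it and can approve or replace it before anything reaches the person seeking support—can be sketched in a few lines of Python. This is a hypothetical illustration only, not Koko’s actual code; the `draft_reply` function is a stand-in for a real large-language-model call such as GPT-3, and all names are invented for this sketch.

```python
# Hypothetical human-in-the-loop flow: an AI draft must pass through a
# volunteer's review step before being sent. Not Koko's real implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Reply:
    text: str
    ai_generated: bool  # tracks whether the final text came from the model


def draft_reply(concern: str) -> Reply:
    # Placeholder for a real LLM API call (e.g. GPT-3 text completion).
    return Reply(
        text=f"Thank you for sharing. That sounds really difficult.",
        ai_generated=True,
    )


def send_with_review(concern: str, volunteer_edit: Optional[str] = None) -> Reply:
    """The volunteer previews the AI draft; supplying an edit replaces it."""
    draft = draft_reply(concern)
    if volunteer_edit is not None:
        # The volunteer rewrote the message, so it is no longer AI-authored.
        return Reply(text=volunteer_edit, ai_generated=False)
    return draft  # volunteer approved the AI text unchanged


# Usage: one approved draft, one volunteer-edited reply.
approved = send_with_review("I feel overwhelmed at work")
edited = send_with_review(
    "I feel overwhelmed at work",
    volunteer_edit="I hear you. Work stress can be exhausting.",
)
```

The point of the sketch is the ethical crux of the story: even with a human gatekeeper in the loop, the recipient has no way to tell from the message alone whether `ai_generated` was true, which is why critics focused on informed consent.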
