The AI chatbot ChatGPT can do a lot. It can respond to tweets, write science fiction, plan this reporter's family Christmas, and it's even slated to act as a lawyer in court. But can a bot provide safe and effective mental health support? A company called Koko decided to find out, using the AI to help craft mental health support for about 4,000 of its users in October. Users on Twitter, if not on Koko itself, were unhappy with the results and with the fact that the experiment happened at all.
“Frankly, this is going to be the future. I really hope this is done right, as I have my own mental health issues,” Koko co-founder Rob Morris said in an interview with Gizmodo.
Morris says the whole fuss was a misunderstanding.
“We shouldn’t have tried to discuss it on Twitter,” he said.
Koko is a peer-to-peer mental health service where people can ask for counseling and support from other users. In a brief experiment, the company let users generate automated responses with “Koko Bot,” powered by OpenAI’s GPT-3, which they could then edit, send, or reject. According to Morris, the 30,000 AI-assisted messages sent during the test received an overwhelmingly positive response, but the company shut the experiment down after a few days.
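The workflow Morris describes is a familiar human-in-the-loop pattern: a language model drafts a reply, and a human helper must approve, edit, or discard it before anything reaches the person seeking support. The sketch below illustrates that pattern only; Koko has not published its implementation, and the model name, prompt, and function names here are placeholders, assuming the openai Python package (v1+) and an API key in the OPENAI_API_KEY environment variable.

```python
# Illustrative human-in-the-loop sketch, not Koko's actual code.
# Assumes the official `openai` Python package (>=1.0) and OPENAI_API_KEY set;
# the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_supportive_reply(post: str) -> str:
    """Ask the model for a draft reply; a human reviews it before anything is sent."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; the article only says "GPT-3"
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a brief, empathetic peer-support reply. "
                    "A human helper will review and edit it before sending."
                ),
            },
            {"role": "user", "content": post},
        ],
        max_tokens=150,
    )
    return response.choices[0].message.content

def review_and_send(post: str) -> str | None:
    """Human gate: the helper can send, edit, or decline the machine-written draft."""
    draft = draft_supportive_reply(post)
    print("Draft reply:\n", draft)
    choice = input("[s]end / [e]dit / [d]ecline? ").strip().lower()
    if choice == "s":
        return draft
    if choice == "e":
        return input("Edited reply: ")
    return None  # declined; the helper writes their own reply instead
```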
“Once you’ve been working with GPT-3 for a while, you start to pick up on things. It’s all very well written, but it’s sort of boilerplate; you read it and you can recognize that it’s purely a bot, with no human nuance added,” Morris said. “Messages somehow felt better on our platform when they felt more human-written.”
Morris posted a thread about the test on Twitter that suggested users didn’t understand that AI was involved in their care. “Once people learned the messages were co-written by a machine, it stopped working,” he tweeted. The thread set off an uproar on Twitter over the ethics of Koko’s research.
“Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own,” Morris tweeted. “Response times went down 50%, to well under a minute.”
Morris said that wording was misleading: the “people” in this context were himself and his team, not unwitting users. Koko’s users knew the messages were co-written by a bot, he said, and they were not chatting directly with the AI.
“It was explained during the onboarding process,” Morris said. He added that when AI was involved, the responses included a disclaimer that the message was “written in collaboration with Koko Bot.”
Still, the experiment raises ethical questions, including about how well Koko informed its users and about the risks of testing an unproven technology in a live healthcare setting, even a peer-to-peer one.
In academic or medical contexts, it is illegal to run scientific or medical experiments on human subjects without their informed consent, which includes giving subjects thorough details about the potential harms and benefits of participating. The Food and Drug Administration requires physicians and scientists to run studies through an Institutional Review Board (IRB), meant to ensure safety, before any trial begins.
However, the explosion of online mental health services run by private companies has created a legal and ethical gray area. A private company offering mental health support outside of a formal medical setting can, for the most part, do whatever it wants with its customers. Koko’s experiment neither required nor received IRB approval.
“From an ethical perspective, any time you use technology outside of what is considered standard care, you need to be extremely cautious and over-disclose what you’re doing,” said John Torous, MD, director of the division of digital psychiatry at Beth Israel Deaconess Medical Center in Boston. “People seeking mental health support are in a vulnerable state, especially when they’re seeking emergency or peer services. It’s a population whose protections we shouldn’t be willing to sacrifice.”
Torous said peer-to-peer mental health support can be very effective when people get the right training. Systems like Koko take a novel approach to mental health care that could bring real benefits, but users don’t receive that training, and these services are essentially untested, Torous said. When AI is involved, those problems are only amplified.
“When you talk to ChatGPT, it tells you, ‘Please don’t use this for medical advice.’ It hasn’t been tested for uses in health care, and it could clearly offer advice that is inappropriate or ineffective.”
The norms and regulations surrounding academic research do more than ensure safety. They also set standards for data sharing and communication, allowing experiments to build on one another and generate an ever-growing body of knowledge. In the digital mental health industry, Torous said, these standards are often ignored: failed experiments tend to go unpublished, and companies can be cagey about their research. That’s a shame, Torous said, because many of the interventions these app companies deploy could be helpful.
Morris acknowledged that operating outside the formal IRB review process involves trade-offs. “Whether this kind of work, outside of academia, should go through an IRB process is an important question, and I shouldn’t have tried to discuss it on Twitter,” Morris said. “This should be a broader discussion within the industry, and one we want to be part of.”
Morris said the controversy is ironic, because he took to Twitter in the first place in an effort to be as transparent as possible. “We were trying to be open about the technology and disclose it as early as possible, to help people think about it more carefully,” he said.