ChatGPT outperforms human doctors in empathy and quality of medical advice

A recent study showed that the AI language model ChatGPT outperforms human physicians in the quality and empathy of its written advice, The Guardian reports.

The study suggests that AI assistants have the potential to play an important role in medicine and could help improve communication between doctors and their patients.

The study, published in the journal JAMA Internal Medicine, looked at data from Reddit’s AskDocs community, where certified healthcare professionals answer medical questions from users.

Researchers randomly sampled 195 AskDocs exchanges in which a verified physician answered a general question. The original questions were then put to ChatGPT, which was tasked with answering them.

A panel of three licensed healthcare professionals, who did not know whether each response came from a real doctor or from ChatGPT, rated the quality and empathy of the responses.

In effect, the researchers conducted a Turing-test-style evaluation of an AI chatbot in the medical field.

Impressive results for ChatGPT

Before looking at the results, it should be noted that OpenAI’s ChatGPT has already undergone similar evaluations. In January, ChatGPT drew attention for earning a B/B- grade on an MBA exam. In February, ChatGPT made headlines in the AI field by successfully passing the initial stages of a job interview for an L3-level software engineer position.

This is a significant achievement, as the L3 position is typically filled by recent higher-education graduates looking to launch their careers in development.

In the same month, another study revealed that OpenAI’s ChatGPT scored at or near the roughly 60% passing threshold on the United States Medical Licensing Examination (USMLE), indicating that it could very nearly pass the exam.

Returning to the test of quality and empathy, the Guardian reports that the panel preferred ChatGPT’s answers to the human doctors’ answers in 79% of cases.

ChatGPT’s responses were also rated good or very good in quality 79% of the time, compared to 22% of clinicians’ responses, and 45% of ChatGPT’s responses were rated empathetic or very empathetic, compared to only 5% of clinicians’ responses.

ChatGPT promises healthy improvements

John Ayers of the University of California, San Diego, one of the study’s authors, said the findings highlight the potential of AI assistants to improve healthcare. “The potential for improving healthcare with AI is enormous,” he said.

Dr. Christopher Longhurst of UC San Diego Health also commented on the findings, saying the study indicates that tools like ChatGPT can effectively draft high-quality, personalized medical advice for clinicians to review. He added that UC San Diego Health has already begun using ChatGPT in this way.
