

ChatGPT can be a useful tool for patients who are seeking medical information and guidance, but the artificial intelligence tool can’t fully replace the value of a human physician – it says so itself.

“While I am a language model that has been trained on a vast amount of information, I am not a licensed medical professional and I am not capable of providing medical diagnoses, treatments, or advice,” the chatbot wrote in response to a question from CNN.

Still, new research published this week suggests that physicians may have some things to learn from the chatbot when it comes to patient communication.

A panel of licensed health care professionals assessed responses to about 200 different medical questions posed to a public online forum, including patient inquiries about medical diagnoses, need for medical attention and more. Responses from ChatGPT were “preferred over physician responses and rated significantly higher for both quality and empathy,” according to a study published Friday.

More than a quarter of responses from physicians were considered to be less than acceptable in quality, compared with less than 3% of those from ChatGPT. Conversely, nearly half of responses from ChatGPT were considered to be empathetic (45%), compared with less than 5% of those from physicians. On average, ChatGPT scored 21% higher than physicians for the quality of responses and was rated 41% more empathetic, according to the study.

In one example provided in the study, a patient posed a question to a social media forum about the risk of going blind after a splash of bleach in the eye. ChatGPT started its response by apologizing for the scare, followed by seven more sentences of advice and encouragement about the “unlikely” result of going blind. Meanwhile, one physician responded with “sounds like you will be fine,” followed by the phone number for Poison Control. All clinicians evaluating these responses preferred ChatGPT’s.

As in this example, experts note that responses from ChatGPT were typically much longer than those from physicians, which could affect perceptions of quality and empathy.

“Without controlling for the length of the response, we cannot know for sure whether the raters judged for style (e.g., verbose and flowery discourse) rather than content,” wrote Mirella Lapata, professor of natural language processing at the University of Edinburgh.

Earlier this month, Dr. David Asch, a professor of medicine and senior vice dean at the University of Pennsylvania, asked ChatGPT how it could be useful in health care. He found the responses to be thorough, but verbose.

“It turns out ChatGPT is sort of chatty,” he said. “It didn’t sound like someone talking to me.”
