ChatGPT is more empathetic than real doctors, experts say

New research suggests that AI chatbots like ChatGPT can answer medical queries better than humans.

  • 78.6% of medical experts preferred ChatGPT’s answers to those of a physician, per a new study.
  • Experts found the chatbot’s responses to patient questions were higher quality and more empathetic. 
  • ChatGPT can still make grave medical errors, but this study suggests AI may improve upon a doctor’s bedside manner.

ChatGPT may be just as good as doctors — if not better — at responding to patients’ medical questions, according to a new study.

Researchers from the University of California, San Diego; Johns Hopkins; and other universities posed 195 questions from Reddit’s AskDocs forum to OpenAI’s ChatGPT, then compared both the quality and the empathy of the chatbot’s responses to the answers given by verified physicians on Reddit. 

A team of healthcare experts, including professionals working in internal medicine, pediatrics, oncology, and infectious disease, scored both the bot and the human answers on a five-point scale, assessing the “quality of information” and “the empathy or bedside manner” provided.

In the study, clinicians preferred the chatbot’s response to the physician’s in 78.6% of the 585 evaluations. The chatbot’s responses were rated 3.6 times higher in quality and 9.8 times higher in empathy than those of the physicians. 

Doctors keep it brief — while ChatGPT responds at length 

A big reason ChatGPT won out in the study is that the bot’s responses were longer and more personable than the physicians’ brief, time-saving answers. 

For example, when asked whether it’s possible to go blind after getting bleach in your eye, ChatGPT responded “I’m sorry to hear that you got bleach splashed in your eye,” and offered four additional sentences of explanation, with specific instructions on how to clean it.

The physician just said, “Sounds like you will be fine,” and briefly advised the patient to “flush the eye” or call poison control.

Another patient’s question about a “hard lump under the skin of the penis” yielded a “not an emergency” response from the doctor, who then advised the patient to make an appointment for the “first available slot” with their physician or urologist. ChatGPT, on the other hand, offered a far more explanatory three-paragraph response detailing the possibility of penile cancer.

“It is important to remember that the vast majority of lumps or bumps on the penis are benign and are not cancerous,” the chatbot said. “However, it is important to have any new or unusual lump or bump evaluated.” 

Similar responses were generated for treating a lingering cough, advice on going to the emergency room following a blow to the head, and concerns about finding traces of blood in feces. 

ChatGPT isn’t really capable of diagnosing on its own

Don’t let ChatGPT’s performance in this study fool you. It’s still not a doctor. 

ChatGPT was only suggesting potential tips and diagnoses for patients posting health questions on Reddit. No physical exams or clinical follow-ups were done to determine how accurate the chatbot’s diagnoses actually were, or whether its suggested remedies worked. 

Still, these findings suggest that doctors could use AI assistants like ChatGPT to “unlock untapped productivity” by freeing up time for more “complex tasks,” the study authors said. AI chatbots might “reduce unnecessary clinical visits” and “foster patient equity” for people who face barriers to healthcare access, the authors added.

Chatbots might also be more transparent about considering and explaining all the possible differential diagnoses for a patient’s symptoms, possibly resulting in fewer misdiagnoses and more patient education. 

Dr. Teva Brender, a medical intern at the University of California San Diego, recently made much the same point in an editorial in JAMA Internal Medicine, explaining how AI chatbots could help doctors save precious time at work by assisting with test-result notifications for patients, discharge instructions, or routine medical charting. 

And Dr. Isaac Kohane, a physician and computer science expert at Harvard who was not involved in this study, recently said that GPT-4 — ChatGPT’s latest language model — can offer tips for doctors on how to talk to patients with clarity and compassion.

“It should be self-evident that one secret of good care for the patient is caring for the patient,” another physician said on Twitter after the new study was released. “On reflection, I could be doing a better job at this.”

But ChatGPT can still make mistakes and misdiagnose, which is why doctors are cautious about turning it loose on patients at home. 

As the authors of the new book “The AI Revolution in Medicine” recently wrote, GPT-4 “is at once both smarter and dumber than any person you’ve ever met.” It may be capable of translating or summarizing medical information for patients, but it won’t work reliably without clinically trained human oversight. 

Read the original article on Business Insider