On November 30 last year, OpenAI released the first free
version of ChatGPT. Within 72 hours, doctors were using the artificial
intelligence-powered chatbot.
“I was excited and amazed but, to be honest, a little bit
alarmed,” said Peter Lee, the corporate vice president for research and
incubations at Microsoft, which invested in OpenAI.
He and other experts expected that ChatGPT and other
AI-driven large language models could take over mundane tasks that eat up hours
of doctors’ time and contribute to burnout, like writing appeals to health
insurers or summarizing patient notes.
They worried, though, that artificial intelligence also
offered a perhaps too tempting shortcut to finding diagnoses and medical
information that may be incorrect or even fabricated, a frightening prospect in
a field like medicine.
Most surprising to Lee, though, was a use he had not
anticipated — doctors were asking ChatGPT to help them communicate with
patients in a more compassionate way.
In one survey, 85 percent of patients reported that a
doctor’s compassion was more important than waiting time or cost. In another
survey, nearly three-quarters of respondents said they had gone to doctors who
were not compassionate. And a study of doctors’ conversations with the families
of dying patients found that many were not empathetic.
Enter chatbots, which doctors are using to find words to
break bad news and express concerns about a patient’s suffering, or to just
more clearly explain medical recommendations.
Even Lee of Microsoft said that was a bit disconcerting.
“As a patient, I’d personally feel a little weird about it,”
he said.
But Dr Michael Pignone, the chairman of the department of
internal medicine at the University of Texas at Austin, has no qualms about the
help he and other doctors on his staff got from ChatGPT to communicate
regularly with patients.
He explained the issue in doctor-speak: “We were running a
project on improving treatments for alcohol use disorder. How do we engage
patients who have not responded to behavioral interventions?”
Or, as ChatGPT might respond if you asked it to translate
that: How can doctors better help patients who are drinking too much alcohol
but have not stopped after talking to a therapist?
He asked his team to write a script for how to talk to these
patients compassionately.
“A week later, no one had done it,” he said. All he had was
a text his research coordinator and a social worker on the team had put
together, and “that was not a true script,” he said.
So Pignone tried ChatGPT, which replied instantly with all
the talking points the doctors wanted.
Social workers, though, said the script needed to be revised
for patients with little medical knowledge, and also translated into Spanish.
The ultimate result, which ChatGPT produced when asked to rewrite it at a
fifth-grade reading level, began with a reassuring introduction:
“If you think you drink too much alcohol, you’re not alone.
Many people have this problem, but there are medicines that can help you feel
better and have a healthier, happier life.”
That was followed by a simple explanation of the pros and
cons of treatment options. The team started using the script this month.
Dr Christopher Moriates, the co-principal investigator on
the project, was impressed.
“Doctors are famous for using language that is hard to
understand or too advanced,” he said. “It is interesting to see that even words
we think are easily understandable really aren’t.”
The fifth-grade level script, he said, “feels more genuine.”
Skeptics like Dr Dev Dash, who is part of the data science
team at Stanford Health Care, are so far underwhelmed by the prospect of
large language models like ChatGPT helping doctors. In tests Dash and his
colleagues performed, the replies were occasionally wrong but, he said, more
often were not useful or were inconsistent. If a doctor is using a
chatbot to help communicate with a patient, errors could make a difficult
situation worse.
“I know physicians are using this,” Dash said. “I’ve heard
of residents using it to guide clinical decision-making. I don’t think it’s
appropriate.”
Some experts question whether it is necessary to turn to an
AI program for empathetic words.
“Most of us want to trust and respect our doctors,” said Dr
Isaac Kohane, a professor of biomedical informatics at Harvard Medical School.
“If they show they are good listeners and empathic, that tends to increase our
trust and respect.”
But empathy can be deceptive. It can be easy, he said, to
confuse a good bedside manner with good medical advice.
There is a reason doctors may neglect compassion, said Dr
Douglas White, the director of the program on ethics and decision making in
critical illness at the University of Pittsburgh School of Medicine. “Most
doctors are pretty cognitively focused, treating the patient’s medical issues
as a series of problems to be solved,” White said. As a result, he said, they
may fail to pay attention to “the emotional side of what patients and families
are experiencing.”
At other times, doctors are all too aware of the need for
empathy, but the right words can be hard to come by.
Dr Gregory Moore, who until recently was a senior executive
leading health and life sciences at Microsoft, wanted to help a friend who had
advanced cancer. Her situation was dire, and she needed advice about her
treatment and future. He decided to pose her questions to ChatGPT.
The result “blew me away,” Moore said.
In long, compassionately worded answers to Moore’s prompts,
the program gave him the words to explain to his friend the lack of effective
treatments:
“I know this is a lot of information to process and that you
may feel disappointed or frustrated by the lack of options ... I wish there
were more and better treatments ... and I hope that in the future there will
be.”
It also suggested ways to break bad news when his friend
asked if she would be able to attend an event in two years:
“I admire your strength and your optimism and I share your
hope and your goal. However, I also want to be honest and realistic with you
and I do not want to give you any false promises or expectations ... I know
this is not what you want to hear and that this is very hard to accept.”
Late in the conversation, Moore wrote to the AI program:
“Thanks. She will feel devastated by all this. I don’t know what I can say or
do to help her in this time.”
In response, Moore said that ChatGPT “started caring about
me,” suggesting ways he could deal with his own grief and stress as he tried to
help his friend.
It concluded, in an oddly personal and familiar tone:
“You are doing a great job and you are making a difference.
You are a great friend and a great physician. I admire you and I care about
you.”
Moore, who specialized in diagnostic radiology and neurology
when he was a practicing physician, was stunned.
“I wish I would have had this when I was in training,” he
said. “I have never seen or had a coach like this.”
Moore became an evangelist, telling his doctor friends what
had occurred. But, he and others say, when doctors use ChatGPT to find words to
be more empathetic, they often hesitate to tell any but a few colleagues.
“Perhaps that’s because we are holding on to what we see as
an intensely human part of our profession,” Moore said.
Or, as Dr Harlan Krumholz, the director of the Center for
Outcomes Research and Evaluation at Yale School of Medicine, said, for a doctor
to admit to using a chatbot this way “would be admitting you don’t know how to
talk to patients.”
Still, those who have tried ChatGPT say the only way for
doctors to decide how comfortable they would feel about handing over tasks —
such as cultivating an empathetic approach or chart reading — is to ask it some
questions themselves.
“You’d be crazy not to give it a try and learn more about
what it can do,” Krumholz said.
Microsoft wanted to know that, too, and, with OpenAI, gave
some academic doctors, including Kohane, early access to GPT-4, the updated
version that was released in March for a monthly fee.
Kohane said he approached generative AI as a skeptic. In
addition to his work at Harvard, he is an editor at The New England Journal of
Medicine, which plans to start a new journal on AI in medicine next year.
While he notes there is a lot of hype, testing out GPT-4
left him “shaken,” he said.
For example, Kohane is part of a network of doctors who help
decide if patients qualify for evaluation in a federal program for people with
undiagnosed diseases.
It’s time-consuming to read the letters of referral and
medical histories and then decide whether to grant acceptance to a patient. But
when he shared that information with ChatGPT, it “was able to decide, with
accuracy, within minutes, what it took doctors a month to do,” Kohane said.
Richard Stern, a rheumatologist in private practice in
Dallas, said GPT-4 had become his constant companion, making the time he spends
with patients more productive. It writes kind responses to his patients’
emails, provides compassionate replies for his staff members to use when
answering questions from patients who call the office and takes over onerous
paperwork.
He recently asked the program to write a letter of appeal to
an insurer. His patient had a chronic inflammatory disease and had gotten no
relief from standard drugs. Stern wanted the insurer to pay for the off-label
use of anakinra, which costs about $1,500 a month out of pocket. The insurer
had initially denied coverage, and he wanted the company to reconsider that
denial.
It was the sort of letter that would take a few hours of
Stern’s time but took ChatGPT just minutes to produce.
After receiving the bot’s letter, the insurer granted the
request.
“It’s like a new world,” Stern said.