
What doctors should know before using ChatGPT

Healthcare professionals are exploring how OpenAI’s chatbot can be used in medicine, but should it?


In a watershed moment for artificial intelligence, an advanced new chatbot dubbed ChatGPT has amazed users with its ability to generate unique, in-depth content across an array of topics — including medicine.

Doctors and medical professionals have taken to social media to demonstrate fascinating use cases for this new technology. Still, there are significant issues with ChatGPT that must be addressed before it can be used safely and effectively in a healthcare setting.

This post will discuss:

1. What is ChatGPT?

2. Potential applications to the medical field

3. What doctors should know before using ChatGPT 

What is ChatGPT?

Developed by San Francisco-based artificial intelligence company OpenAI, ChatGPT is a highly advanced chatbot that can deliver detailed responses to a dizzying range of inquiries and requests.  

As described in a New York Times article on the threat ChatGPT poses to Google’s business model, “it can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics, and vacation plans.”

In other words, ChatGPT can generate its own content in ways we have never seen or experienced before. 

A (brief) history of chatbots 

It may surprise you that chatbots have been in use for decades. In the 1960s, researchers at the Massachusetts Institute of Technology (MIT) designed a chatbot, dubbed ELIZA, that could respond to basic typed questions in full sentences. (Versions of it can still be found on the web.)

Today, web users interact with chatbots for everything from customer service to ordering pizza. Despite the ubiquity of chatbots in our modern world — and their long history — few innovations have generated the immediate, powerful, and potentially disruptive impact that ChatGPT has had since its launch in late November 2022.

Along with Lensa, an image-generating service, and Riiid, a test preparation platform, ChatGPT and other publicly accessible AI-powered tools have captured our imaginations and stoked some significant concerns. 

ChatGPT has generated a lot of buzz because it’s a remarkable advancement from OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), which debuted to intense fanfare in 2020.

Transformer models analyze data at an “unprecedented scale,” according to The New York Times profile of GPT-3, to discern patterns within the data. The patterns detected by the chatbot form a vast mosaic of human communication, which helps guide its range of responses.  

Applications to the medical field

The potential applications of ChatGPT to the medical field are manifold. Since its release, a number of medical professionals have shared their experiences with ChatGPT on social media. (To be clear, these individuals chose to post about ChatGPT on their personal accounts.)

A rheumatologist in Palm Beach Gardens, Florida, posted a viral video showing how his office saved time and effort by using ChatGPT to write a letter asking an insurance company to approve coverage of a medication for one of his patients.

A medical student based in Australia posted a video of ChatGPT arriving at a diagnosis after a refined series of prompts. 

Another user posted a ChatGPT use case involving medical school interviews. 

Other potential applications include summarizing medical histories or analyzing medical research papers, as described in a post by the Medical Futurist.

The applications of AI to the medical profession seem limited only by our imaginations. Still, there are already warning signs that medical professionals should be aware of before engaging with this technology on a professional basis.

What doctors should know before using ChatGPT

1. ChatGPT is only available in a ‘research preview’

ChatGPT is only available to the public in a “research preview” format and has not been approved for use in medical practices by any US regulatory body, such as the Food and Drug Administration (FDA). Therefore, it should not be relied upon for diagnosis or care. 

When a user first signs up for an account with OpenAI, they receive the following disclaimer:

Screenshot courtesy of OpenAI

This message explicitly warns users that ChatGPT should not be relied upon, particularly in a medical setting where accurate information is critical to patient care.  

In fact, the Palm Beach rheumatologist whose video went viral later posted a follow-up advising his followers to be wary of plausible-sounding but entirely fabricated research references generated by ChatGPT.

OpenAI seems to have anticipated this potential over-reliance by including an additional disclaimer in responses to medical prompts.

Here is an example of what happened when we asked ChatGPT to diagnose an imaginary “patient” exhibiting various fictitious symptoms. (Reminder: ChatGPT is not approved for diagnoses by any regulatory body).

Note: AI-generated content prompted by the author, who is not a doctor.

Critically, while ChatGPT does provide possible diagnoses, the system warns that proper diagnosis and treatment require evaluation by a medical professional (that is, a human with medical training). Medical professionals should not rely on ChatGPT for diagnoses.

2. ChatGPT is only as good as the data it was trained on, which contains biases, stereotypes, and other inequities detrimental to patient care

Mutale Nkonde, founder of an algorithmic justice organization, spoke to The Washington Post about a disconcerting feature of Lensa, the AI-based image generation service: “Due to the lack of representation of dark-skinned people both in AI engineering and training images, the models tend to do worse analyzing and reproducing images of dark-skinned people.”

ChatGPT is susceptible to producing harmful content as well. Despite safeguards built into the platform, major news outlets such as Bloomberg and The Daily Beast have reported on racist and sexist content produced by ChatGPT.

Harmful AI-generated content works at cross-purposes with critical goals in patient care, such as health equity.

The Centers for Disease Control and Prevention (CDC) defines health equity as “the state in which everyone has a fair and just opportunity to attain their highest level of health. Achieving this requires ongoing societal efforts to:

— Address historical and contemporary injustices

— Overcome economic, social, and other obstacles to health and health care

— Eliminate preventable health disparities”

While ChatGPT has the potential to improve medical care by streamlining simple tasks, its outputs must be closely monitored as they could miss critical health equity issues. Practitioners should use their knowledge, training, and experience to evaluate whether AI-generated content could be detrimental to patient care. 

3. ChatGPT cannot replace key human behaviors such as empathy, which can be highly beneficial to patient outcomes

Writing in the journal Nature, two researchers considered the promise and pitfalls of GPT-3, ChatGPT’s predecessor.

The authors argued that “interactions with GPT-3 that look (or sound) like interactions with a living, breathing — and empathetic or sympathetic — human being are not.” 

This means that even as practices adopt powerful new technologies capable of turbocharging efficiency (especially at small and mid-size practices), certain things can only be done by humans.

Empathy in particular, and its benefits to patient care, has been explored by many researchers.

Should doctors use ChatGPT? 3 key takeaways

— ChatGPT is a highly advanced chatbot developed by OpenAI that can generate unique, in-depth content across an array of topics. 

— Doctors and other medical professionals have demonstrated potential use cases for ChatGPT, such as writing letters to insurance companies or arriving at diagnoses after a series of prompts.

— Before using it professionally, medical professionals should be aware that ChatGPT is available only as a research preview and has not been approved by any regulatory body; its outputs may contain biases detrimental to patient care; and it cannot replace key human behaviors, such as empathy, that are beneficial to patient outcomes.

For now, as we all contend with the awe-inspiring promise and disruptive potential of ChatGPT, doctors, nurses, administrators, and other medical professionals should know that AI is rapidly evolving and should carefully consider its applications to patient care.

Disclaimer: This information is provided as a courtesy to assist in your understanding of the impact of certain healthcare developments and should not be construed as legal advice.


Written by

Nick Starkman, Senior Legal Counsel at Tebra
