
AI health chatbots are popular but risky.
Tools like ChatGPT are widely used for medical advice, but ECRI warns they can produce inaccurate or harmful information.

Physicians must guide and educate patients.
Doctors should discuss chatbot use, correct misinformation, set care boundaries, and explain privacy limitations (not HIPAA-regulated).

There’s opportunity in the challenge.
AI-driven patient questions can strengthen trust, support better engagement, and create practice growth opportunities.

Many patients are using AI chatbots like ChatGPT for health advice because they’re free and always available — but these tools can generate inaccurate information and aren’t regulated for healthcare use. Physicians should proactively discuss chatbot use, correct misinformation, and educate patients about privacy risks, while also leveraging this trend to strengthen trust and grow their practices.

According to a recent Tebra report, 60% of Americans have used an AI chatbot like ChatGPT, Gemini, or Claude to get health advice. Experts link Americans' growing reliance on AI chatbots for health advice to the fact that they offer something traditional healthcare providers cannot: unlimited access at no cost.

"60% of Americans have used an AI chatbot like ChatGPT, Gemini, or Claude to get health advice."

It’s an offer that’s hard to refuse, and chatbot developers know it. For example, OpenAI recently launched ChatGPT Health, which allows consumers to securely connect medical records and wellness apps to better understand their health information. But unfettered access to AI healthcare advice raises very real safety and privacy concerns that physicians must learn to navigate.

In its Top 10 Health Technology Hazards for 2026 report, ECRI named the misuse of AI chatbots in healthcare as the number one hazard. The report cited risks associated with incorrect responses and hallucinations that could jeopardize patient safety and cause harm. The takeaway is clear: using ChatGPT in healthcare requires guardrails and skepticism, for patients and physicians alike.

Let’s take a closer look at how physicians can help keep patients safe as the prevalence of AI chatbots in healthcare grows. 

Navigating the new world of chatbots

Patients use ChatGPT in healthcare to quickly find information on medical conditions or treatments, including how to use certain medical devices and what medical supplies to buy. But the answers provided don’t take an individual’s health history into account and aren’t vetted by medical professionals. This means physicians must be ready and willing to steer patients toward safer, more informed care decisions. Here are some strategies that can help transform the patient-provider dynamic in productive ways.


1. Ask patients about their use of AI chatbots for health information. Consider incorporating a question about patients using ChatGPT in healthcare as a standard part of the intake process across specialties. If patients do use it, invite them to share how, so you can normalize the behavior and surface any misinformation. If a patient presents options suggested by a chatbot, review the information together and work to interpret, personalize, and validate what may or may not apply to their specific case.

2. Urge patients to scrutinize responses. Common large language models (e.g., ChatGPT, Claude, Copilot, Gemini, and Grok) are not designed or regulated for healthcare purposes, and their responses are based on patterns in large datasets rather than genuine comprehension of medical information. This means patients using ChatGPT in healthcare must exercise caution when reviewing AI-generated information. Physicians should be prepared to explain why AI-generated information may not apply to a patient’s specific circumstances, including comorbidities, medications, or other factors.

3. Set clear boundaries. Emphasize the limitations of using ChatGPT in healthcare. Let patients know that while chatbots may be helpful for general health education, actual medical decisions should be based on their medical history, exam, and labs.

4. Educate on privacy and security concerns. There is currently no federal regulatory authority overseeing the health information that’s shared with AI chatbots. Additionally, ChatGPT delivers technology services that do not fall under the purview of HIPAA regulations. This means that if a data breach occurs, consumers using ChatGPT in healthcare would not be afforded specific protections or rights under HIPAA. Let patients know that in the absence of greater consumer protection, it’s best for them to query chatbots using general information and not data from their medical records verbatim.

5. Create a patient resource. Physicians can demonstrate their expertise and add value by answering common patient questions about ChatGPT, including clear guidance on how to use it. Here’s a sample guide for patients on using ChatGPT in healthcare.

Sample patient guide: Using ChatGPT for health questions

While AI tools like ChatGPT can be helpful for learning, they cannot replace your healthcare team. Use this guide to stay safe and get the most out of your visits.

What ChatGPT can do

  • Explain medical terms in plain language
  • Provide general information about conditions and tests
  • Help you think of questions to ask your doctor
  • Summarize common treatment options (at a high level)

What ChatGPT can’t do

  • Diagnose you
  • Examine you or review your full medical history
  • Replace medical advice from your physician
  • Always provide up-to-date or accurate information

Protect your privacy 

Never enter personal or identifying health information into public AI tools, including:

  • Any information that could identify you
  • Your name, date of birth, address
  • Medical record numbers
  • Photos of test results 

How to safely use AI tools

Safe uses

  • Learning basics about symptoms or conditions
  • Understanding what a test generally measures
  • Preparing questions for your appointment

Not safe uses

  • Self-diagnosing
  • Starting/stopping medications
  • Choosing treatments on your own
  • Delaying care because of AI advice

Bring it to your visit

It helps to share what you read and learn with your physician:
“I looked this up using AI and have questions — can you help me understand what applies to me?”
Your care team can explain what fits your situation and what doesn’t.

When to seek care right away

Get medical help promptly for chest pain, trouble breathing, new weakness or numbness, severe or worsening pain, fainting, high fever in infants, or symptoms that are rapidly getting worse.

5 simple rules

  1. Learn with AI — don’t diagnose or treat with it
  2. Never share personal health details
  3. Understand that information may be incomplete or outdated
  4. Bring questions to your physician
  5. Seek care for urgent or worsening symptoms

Bottom line: AI tools can help you learn about health topics — but only your physician and healthcare team can safely apply that information to you.

Turning chatbot curiosity into growth

As the use of AI in healthcare continues to grow, physicians must become the trusted human layer between AI-generated information and real medical decisions. And while patients' use of ChatGPT in healthcare poses challenges for physicians, it also presents opportunities for practice growth. This is especially true when chatbot answers heighten anxiety and prompt patients to seek care. By offering same-day sick appointments and telehealth options, practices can retain patients who might otherwise turn to urgent care or retail clinics. Additionally, when patients come to appointments with long lists of AI-derived questions, the resulting visits are often more complex and lengthy, which can support higher reimbursement.

Frequently asked questions

Should physicians discourage patients from using ChatGPT for health advice?
Not necessarily. Rather than discouraging patients from using ChatGPT in healthcare, it’s better to guide them on how to use chatbots safely and appropriately. This includes helping them sort through what information does, and doesn’t, apply based on their clinical context.

Is health information shared with ChatGPT protected by HIPAA?
No. HIPAA does not cover information shared with AI chatbots, so patients using ChatGPT in healthcare should not include any personal health information in a query.

How can physicians frame the appropriate role of ChatGPT?
Physicians can emphasize that ChatGPT may be useful for general health information but not individualized medical advice.

Can AI-prompted patient questions support shared decision-making?
Yes, by using these questions to clarify each patient’s goals and values and then anchoring recommendations in evidence.

Can patients’ chatbot use help a practice grow?
Yes, as physicians who become trusted advisors and help patients interpret online information are ultimately better able to build trust, retention, and referrals.

Written by

Lisa Eramo, freelance healthcare writer

Lisa A. Eramo, BA, MA, is a freelance writer specializing in health information management, medical coding, and regulatory topics. She began her healthcare career as a referral specialist for a well-known cancer center and went on to work for several years at a healthcare publishing company. She regularly contributes to healthcare publications, websites, and blogs, including the AHIMA Journal. Her focus areas are medical coding (particularly ICD-10), clinical documentation improvement, and healthcare quality and efficiency.
