With the rise of AI, more and more people are turning to tools like ChatGPT for everyday needs, and according to data from OpenAI, that includes healthcare advice. Over 40 million people worldwide use ChatGPT for healthcare assistance, accounting for 5% of all ChatGPT messages globally.
Americans are increasingly using ChatGPT as an “ally” for navigating the complex U.S. healthcare system. In some cases, this is a positive. However, relying on ChatGPT for healthcare assistance carries the potential for inaccuracies and, sometimes, even harm.
Common Uses of ChatGPT for Healthcare
The American healthcare system isn’t the easiest to understand, and many people turn to ChatGPT for help navigating it. Some of the most common uses include:
- Decoding medical bills and identifying overcharges
- Appealing insurance denials
- Comparing health insurance plans
- Managing claims, billing issues, and coverage questions
Beyond navigating the system, many people also use ChatGPT to self-diagnose or manage their own care, especially when access to doctors is limited. In fact, users in underserved rural communities send nearly 600,000 healthcare messages to ChatGPT every week.
Risks of Using ChatGPT for Healthcare Needs
There are definite benefits to using ChatGPT for healthcare. It can help users gauge the severity and urgency of symptoms, even guiding decisions about waiting for care versus seeking emergency treatment. However, it doesn’t come without risk.
ChatGPT and other AI tools draw their answers from information found on the internet, and that information isn’t always accurate. There’s always the potential for incorrect or even dangerous advice, particularly on mental health topics.
There have been multiple lawsuits against OpenAI (the creator of ChatGPT) alleging harm linked to chatbot interactions, and several states have already restricted AI chatbots from providing mental health advice.
OpenAI notes that accuracy does improve when responses are grounded in patient-specific context, such as when a user uploads insurance documents or clinical instructions. The company has also been working closely with clinicians to reduce harmful responses and improve safeguards. However, it’s difficult to eliminate the risks completely.
The Future of AI in Healthcare
The role of AI in the healthcare industry only seems to be expanding, and with the potential expiration of enhanced ACA subsidies, even more uninsured or underinsured patients will likely turn to AI for health guidance.
Key concerns moving forward include accuracy, liability, and patient data access. Though AI companies like OpenAI are constantly improving their algorithms and tools, risks still remain. That’s why it’s essential to work with a local insurance broker who can help you find the right care to fit your needs.
Human brokers provide personalized, state-specific guidance to interpret plan nuances, advocate during claims and appeals, and ensure that you make the most informed coverage decisions.
If you or someone you know needs assistance choosing the right health insurance plan, don’t rely solely on ChatGPT for healthcare needs. Reach out to a local Health Benefits Associate advisor by calling 775-828-1216 today. We’ll walk you through your options to find the best coverage for your unique situation.