Step aside, WebMD: seeking health advice has become the most common way people use ChatGPT.
ChatGPT’s maker, OpenAI, reported that 40 million people query the chatbot daily to decode convoluted medical bills, appeal unfair insurance denials, or manage their own treatment. According to a February Gallup poll, nearly 16 percent of U.S. adults already use AI or social media to find medical information.
Meanwhile, Americans owe over $220 billion in medical debt, according to 2024 figures. The country’s health workforce currently faces widespread shortages, with high turnover rates for first-year nurses and a need for 114,000 more physicians by 2028 to meet demand. Around half of Americans reported struggling to afford healthcare last year, as the federal government narrowed Affordable Care Act subsidies.
In the eyes of many, the healthcare system is broken.
At the same time, widespread AI adoption has been touted as a solution for an overburdened medical system. Narrowly designed, clinical-grade AI, trained for specific tasks, could potentially revolutionize imaging, patient charting, and insurance processing. But AI developers aren’t stopping there: they want AI in the patient’s hands, too.
In January, OpenAI launched ChatGPT Health, the company’s free, consumer-facing solution for those seeking health guidance — and anyone willing to upload their medical histories for the chatbot to digest.
Digital doctor or privacy nightmare?
ChatGPT Health, which incentivizes users to upload their personal medical records for tailored medical assistance, was announced on Jan. 7, promising to “securely” link your health information with ChatGPT’s brain. In the months since, other tech companies have followed suit, including the recently announced Amazon Health AI assistant and Microsoft Copilot Health.
Not everyone sees Health GPTs and other AI-related health tools as a net positive.
“Generative AI chatbot products starting to spin off into these healthcare-adjacent submarkets is deeply concerning,” Melodi Dinçer, senior staff attorney for the Tech Justice Law Project, told Mashable.
In the hours following ChatGPT Health’s launch, Dinçer published a scathing statement characterizing OpenAI’s release as a strategic business move to access more personal data while jeopardizing the privacy of struggling Americans. The Tech Justice Law Project is currently representing individuals suing OpenAI over mental health concerns with ChatGPT.
Other privacy watchdogs said their alarm bells went off, too.
“We don’t have a comprehensive federal privacy law in the United States,” explained Andrew Crawford, senior policy counsel for the Center for Democracy & Technology’s Data and Privacy Project. At least, he said, none that puts real limits on how companies handle consumer data, especially sensitive data sets.
Tech companies, including Meta and OpenAI, have lobbied to keep robust privacy laws off the books, and government officials like Secretary of State Marco Rubio have pushed for less regulation of American tech companies.
In the absence of federal data regulation, Americans are left with a patchwork of state-level laws and industry-specific rules, including protections under the Health Insurance Portability and Accountability Act, or HIPAA.
Passed in 1996, HIPAA established a federal standard for protecting patient medical data and related identifying information, governing when that data can be shared with or without patient consent. Its Privacy Rule has also become a benchmark for assessing a medical product’s privacy standards.
HIPAA, however, isn’t a failsafe. Its protections aren’t attached to data itself, explained Crawford, but to the institutions that process and store it. Consumer data is shielded only when it’s in the hands of an institution bound by HIPAA laws, not when it exists in other marketplaces or is stored elsewhere online.
Institutions bound by HIPAA laws are known as covered entities. This includes health insurance companies, HMOs, company health plans, and other coverage providers like Medicaid and Medicare; most (but not all) care providers like doctors, dentists, psychologists, nursing homes, and even chiropractors; and, finally, clearinghouses, or businesses that process and transmit health data. Anyone who does business with one of those entities, like a lawyer or billing company, is also under HIPAA’s oversight.
Oura, Apple, Strava: Personal wellness apps and ChatGPT Health
Most popular health apps are not covered by HIPAA, according to the HIPAA Journal. Not your Oura ring, Apple Health app, or running buddy Strava. When you share your data with something like ChatGPT Health, even if you use it to inform your conversations with a covered entity later, that information is not legally bound by anything outside of the company’s privacy policy.
But many companies, OpenAI among them, promise that user data is being treated carefully.
Covered entities are barred by law from using your data without authorization for things like targeted advertising or behavioral profiling. But any other company that gets hold of your medical information can do whatever it pleases, in accordance with its own privacy policy, Crawford said.
Lily Li, a data privacy and AI risk management attorney and founder of Metaverse Law, explained that company privacy policies often include reasonable security protocols and opt-out features, but aren’t required to include HIPAA-style safeguards like specific authorization, time limits on data storage, or disclosure obligations.
Take the case of DNA testing company 23andMe, which, upon filing for bankruptcy, announced it would sell itself, and its library of DNA samples, to a buyer that users had never consented to share their data with. Medical information, Dinçer explained, is among the most valuable commodities for online data brokers.
Many AI companies have erected walls between versions of their products that are compliant with laws like HIPAA and those that aren’t, including the “enterprise-level” products touted by OpenAI and its competitors. These aren’t the same products being marketed to the general public. One day after releasing ChatGPT Health, for example, OpenAI launched ChatGPT for Healthcare, a HIPAA-compliant version for health professionals that is not to be confused with its consumer counterpart. That same week, Anthropic announced HIPAA-compliant Claude for Healthcare.
Much like ChatGPT Health, Microsoft’s Copilot Health is not HIPAA-compliant but is instead governed by internal privacy policies. The company explains, “data in Copilot Health is protected with industry leading safeguards, including encryption at rest and in transit, strict access controls, and the ability to manage and delete your information when you choose.”
Amazon Health AI, on the other hand, is automatically looped into HIPAA compliance as an offering under Amazon One Medical.
The situation starts to get real confusing, real fast for the average consumer.
This muddled privacy grey area is where fitness and wellness apps have thrived, hinging their regulatory clearance on the distinction between a product that claims to provide treatment and one that operates merely as a health “assistant.” It’s why you will almost always see a note emblazoned across the app: Consult with your doctor.
Now enter LLM products, which not only gather data from your chats but also emphatically encourage you to upload your personal medical records and link third-party apps, like MyFitnessPal, Weight Watchers, or Apple Health and its wearables, to get the “best” results from your chatbot. Many of these fitness apps have previously come under fire for tracking users without consent and illegally collecting data.
Copilot Health, for example, is compatible with more than 50 wearable wellness devices, Microsoft says, including Oura rings and Fitbit watches. Amazon initially incentivized Amazon One Medical users to upload their personal medical information by offering early Health AI access to those who consented. “You do not have to allow One Medical to access your health records to use Health AI. However, to ensure the best experience, we are prioritizing early access to Health AI to those who do,” wrote Amazon in early versions of the product’s FAQ.
“You’re creating a larger ecosystem in this non-HIPAA covered space, where health data is being shared and used by lots of companies,” Crawford said. “That’s going to create large troves of sensitive health data that all these companies will be in possession of.”
Opting out vs. opting in
Dinçer also flagged that ChatGPT Health isn’t being rolled out to people in the European Union or the UK, places with more robust consumer data privacy laws and, specifically, requirements that data collection be opt-in.
Most U.S. privacy law operates on an opt-out basis, Dinçer explained, which places the onus on users to be aware of privacy laws and pay attention to the minutiae of a non-HIPAA product’s terms of service. Often, U.S. consumers are also up against intentionally deceptive design, like the confusing language and complicated interfaces known as dark patterns, which make rules on data storage difficult to parse.
“We see these endemic, horrible practices around actually safeguarding our personal information when in the hands of these kinds of companies,” Dinçer said. “There’s no indication to me that that’s suddenly going to change just because the technology looks a little different or you’re disclosing it to something that feels like an intelligent conversation partner.”
Over the years, state laws have started to catch up, Li said. California recently expanded its Confidentiality of Medical Information Act (CMIA), outlining unlawful uses of sensitive data and requiring a patient’s written authorization to disclose medical information. Washington state passed the My Health My Data Act in 2023, considered one of the strongest consumer data privacy laws in the country.
Even so, there are exceptions across state and federal laws.
One day before ChatGPT Health launched, the FDA announced it would be limiting its regulation of wearable technology and associated software designed to foster “healthy lifestyles.” These technologies and others like fitness trackers are considered “low-risk non-medical devices,” and as long as they don’t make any diagnostic or treatment claims, they fall outside the FDA’s strict oversight.
Two weeks after the ChatGPT Health announcement, OpenAI announced it was in the early design stages of its first AI wearable device.
Medical “partners” in the era of AI
A recent report by healthcare research nonprofit ECRI argued that AI chatbots are the “most significant health technology hazard” heading into 2026, citing risks of AI models perpetuating bias and exacerbating existing health disparities.
Similarly, many experts warn that LLMs aren’t yet robust enough to effectively curb misinformation. A recent Guardian investigation found that Google’s AI overviews often spat out inaccurate, gender-biased medical answers and could pose a public health risk. A study published in Nature Medicine in February found that ChatGPT Health failed to effectively triage medical emergencies and make appropriate care recommendations when compared to real-world physicians.
And tech companies’ expansion into the medical sphere raises additional legal concerns. Will companies like OpenAI be subject to law enforcement requests for personal health data or chat logs? What would that mean for people with stigmatized health conditions or precarious legal status, including those seeking reproductive healthcare, abortions, and gender-affirming care?
“We’re already conditioned to think it’s OK or normal to go to the internet with our health inquiries, sharing really intimate information online and with commercial products,” Dinçer said. “We are buying into this idea that something so complex as health can be reduced to numbers on a screen.”
________________________________________________________________________________________________________
The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have about a medical condition or health objectives.
Disclosure: Ziff Davis, Mashable’s parent company, previously filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


