Microsoft has launched Copilot Health, a dedicated, secure space within its Copilot AI assistant that aggregates personal health data from wearables, electronic health records, and laboratory results, then applies AI to surface what the company calls a “coherent story” of a user’s health.
The product opened its waitlist on 12 March 2026 and is rolling out in phases, initially to English-speaking adults in the United States.
The launch marks Microsoft’s most direct entry into consumer health AI and places it alongside OpenAI, which introduced ChatGPT Health in January 2026, and Anthropic, which unveiled Claude for Healthcare the same month.
In the words of Dominic King, VP of Health at Microsoft AI: “2026 feels like an important year for consumer health.”
He told press briefing attendees that Microsoft’s consumer AI products, Copilot and Bing, already field more than 50 million health-related questions a day.
Copilot Health appears as a dedicated tab in the Copilot web interface and mobile app. Users create a health profile by entering basic details such as age and sex, then optionally connect data sources.
From there, the tool can analyse lab results, interpret wearable readings, surface connections across data streams, and help users prepare questions ahead of clinical appointments.
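To make concrete what "analysing lab results" typically involves, here is a minimal sketch of the kind of triage such a tool performs: comparing each value against a reference range and flagging outliers. The analyte names and ranges below are illustrative only and do not reflect Microsoft's implementation.

```python
# Illustrative reference ranges (analyte: low, high, unit) -- not clinical advice.
REFERENCE_RANGES = {
    "hemoglobin_g_dl": (13.5, 17.5, "g/dL"),
    "ldl_mg_dl": (0.0, 100.0, "mg/dL"),
    "tsh_miu_l": (0.4, 4.0, "mIU/L"),
}

def flag_results(results: dict) -> list:
    """Return human-readable flags for values outside their reference range."""
    flags = []
    for analyte, value in results.items():
        low, high, unit = REFERENCE_RANGES[analyte]
        if value < low:
            flags.append(f"{analyte}: {value} {unit} is below the range {low}-{high}")
        elif value > high:
            flags.append(f"{analyte}: {value} {unit} is above the range {low}-{high}")
    return flags

print(flag_results({"hemoglobin_g_dl": 14.2, "ldl_mg_dl": 131.0}))
```

The real product layers AI interpretation on top of this kind of range check, but the underlying comparison step is the same.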
The data plumbing
Three connectors power the platform’s personal health layer. Wearable data, covering activity levels, sleep patterns, and vital signs, flows in from more than 50 devices, with Apple Health, Oura, and Fitbit cited as examples.
Electronic health records come through HealthEx, a US health data infrastructure provider whose network spans more than 52,000 healthcare organisations via direct FHIR-endpoint exchange, as well as TEFCA individual access services across more than 12,000 organisations. Lab results connect through Function, a venture-backed medical testing provider.
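FHIR, the standard underpinning the HealthEx exchange, returns clinical data as JSON "Bundle" resources. The sketch below shows how a searchset Bundle of Observation resources is flattened into readable lines; the payload is a minimal hand-written example in standard FHIR R4 shape, not actual HealthEx output.

```python
import json

# A minimal FHIR R4 searchset Bundle containing one Observation (illustrative).
bundle_json = """
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin A1c"},
                  "valueQuantity": {"value": 5.6, "unit": "%"},
                  "effectiveDateTime": "2026-02-11"}}
  ]
}
"""

def summarise_observations(bundle: dict) -> list:
    """Flatten a FHIR searchset Bundle into 'name: value unit (date)' lines."""
    lines = []
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res.get("resourceType") != "Observation":
            continue
        q = res["valueQuantity"]
        lines.append(f'{res["code"]["text"]}: {q["value"]} {q["unit"]} '
                     f'({res["effectiveDateTime"]})')
    return lines

print(summarise_observations(json.loads(bundle_json)))
```

In practice a connector like HealthEx's handles authentication, consent, and pagination on top of this resource format.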
HealthEx confirmed the partnership in a separate press release issued on the same day. The company’s co-founder and CEO, Priyanka Agarwal, M.D., described the integration as giving users access to their health history “across labs, medications, conditions, clinical notes and more” with the ability to revoke access at any time.
Microsoft itself confirmed that users can disconnect any connector instantly and that health data in Copilot Health is not used for AI model training, a point the company has repeated prominently in all communications about the product.
For general health information, as opposed to personal data, Microsoft says it has elevated content from credible health organisations across 50 countries, with source selection verified by its clinical team using standards set by the National Academy of Medicine.
Responses include citations and source links. The platform also serves expert-written answer cards from Harvard Health and connects to real-time US provider directories, allowing users to search for clinicians by specialty, location, languages spoken, and insurance coverage.
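The provider-directory search described above amounts to filtering on a handful of attributes. The sketch below illustrates that shape; the directory entries and field names are invented for illustration and are not Microsoft's schema.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    specialty: str
    city: str
    languages: frozenset
    insurers: frozenset

# Invented sample directory for illustration.
DIRECTORY = [
    Provider("Dr. Alvarez", "cardiology", "Austin",
             frozenset({"English", "Spanish"}), frozenset({"Aetna"})),
    Provider("Dr. Chen", "cardiology", "Seattle",
             frozenset({"English", "Mandarin"}), frozenset({"Cigna"})),
]

def search(directory, specialty=None, city=None, language=None, insurer=None):
    """Return providers matching every criterion that was supplied."""
    return [p for p in directory
            if (specialty is None or p.specialty == specialty)
            and (city is None or p.city == city)
            and (language is None or language in p.languages)
            and (insurer is None or insurer in p.insurers)]

print([p.name for p in search(DIRECTORY, specialty="cardiology", language="Spanish")])
```

A real-time directory additionally has to keep insurance-acceptance and availability data fresh, which is the harder part of the problem.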
The AI roadmap: towards ‘medical superintelligence’
Microsoft is framing Copilot Health as a step toward a longer-term goal it describes as “medical superintelligence”, a term the company has been using since at least late 2025. The vision is AI that can combine the breadth of a general physician with the depth of a specialist.
The vehicle most cited for this ambition is the Microsoft AI Diagnostic Orchestrator (MAI-DxO), a research-stage system the company says has produced strong results in clinical evaluation environments.
Microsoft says forthcoming publications will detail how MAI-DxO can be applied across a wider range of cases and conditions. The company states that any new AI features drawing on these research capabilities will only be released into Copilot Health after rigorous clinical evaluation and with clear labelling, a commitment that reads as a regulatory buffer as much as a product design principle.
“We truly believe we’re on the path to medical superintelligence that brings together both the wide-ranging knowledge of a family doctor or general physician as well as the deep domain expertise of a specialist,” King said.
Privacy, governance, and the HIPAA question
Microsoft has been careful on data governance. Copilot Health data and conversations are stored separately from general Copilot interactions, encrypted at rest and in transit, subject to stricter access controls, and not used for model training.
The product has achieved ISO/IEC 42001 certification, the international standard for AI management systems, which requires third-party verification of how an organisation builds, governs, and improves its AI services.
The platform has also been developed with an external advisory panel of more than 230 physicians from more than 24 countries, alongside consumer advocacy organisations including AARP, which serves 38 million older Americans, and the National Health Council, which represents over 180 patient advocacy groups.
However, a significant regulatory caveat emerged during press briefings. King confirmed that Copilot Health is not subject to HIPAA, the US federal law governing the privacy and security of patient health data, because it operates as a direct-to-consumer service where users are sharing their own data, rather than as a covered healthcare entity.
King said: “HIPAA is not required for a direct-consumer experience like this when you’re using your own data,” while adding that Microsoft intends to announce updates on its HIPAA controls. He declined to specify what those updates would entail.
This distinction matters. HIPAA compliance obligates healthcare organisations to strict data handling, breach notification, and minimum necessary use standards.
Consumer health platforms that fall outside HIPAA, as Copilot Health does at launch, are not subject to the same enforcement regime. The FDA’s relaxation of rules around wearable clinical decision support at the start of 2026 adds further regulatory complexity: it means more AI-enabled health tools can reach consumers without pre-market FDA review.
The clinical reception
Initial expert reaction has been broadly cautious rather than hostile. Arjun Manrai, assistant professor of biomedical informatics at Harvard Medical School, told Healthcare Brew that the approach makes strategic sense, describing the use of personal context in AI health interactions as likely to become a defining trend in 2026. He called helping people prepare for doctor’s appointments a good target for large language models.
Physicians interviewed by the New York Times acknowledged that AI-assisted health tools could help people access health information at a time when care is becoming increasingly expensive and clinicians increasingly stretched.
But the same physicians flagged concerns about privacy risks from sharing records with large technology companies, and the potential for tools like Copilot Health to prompt unnecessary clinical visits by making users anxious about data patterns that may be clinically insignificant.
Microsoft’s standard disclaimer sits at the bottom of every Copilot Health communication: the product is not intended to diagnose, treat, or prevent diseases, and is not a substitute for professional medical advice.