VOLUME 45
Companies Expand AI Health Offerings, Even as Accuracy Questions Remain
Highlights
Five technology companies have launched dedicated consumer-facing AI health tools so far in 2026, reflecting the demand for what some users see as a convenient source of health information, even as questions about AI’s reliability remain unresolved.
A decades-old World Health Organization classification has been misrepresented online to suggest that hormonal birth control pills were recently found to cause cancer, illustrating how misleading health claims can spread even in the absence of outright falsehoods.
AI & Emerging Technology
More AI Companies Move into Consumer Health
Last month, three major technology companies launched or expanded availability of dedicated consumer-facing AI health applications built on their large language model (LLM) chatbots, allowing users to connect medical records, lab results, and wearable data to receive personalized health guidance. The March launches of Copilot Health and Perplexity Health followed Amazon Health AI, which launched in January for One Medical members before expanding more broadly in March. OpenAI and Anthropic also announced health offerings earlier this year: ChatGPT Health, which lets consumers connect medical records and wellness apps directly to the chatbot, and Claude for Healthcare, which includes offerings for providers and payers as well as a set of personal health integrations for individual subscribers. The companies largely position these features as a complement to, rather than a replacement for, professional care.
Reliability and Accuracy
Users may assume that connecting personal health data allows these tools to offer more accurate and personalized responses than generic AI searches. Even as these tools add personalization features, though, the underlying models they are built on may still struggle with fundamental reliability challenges. A study published earlier this year in Nature Medicine found that participants who used earlier versions of some AI chatbots to identify relevant conditions and determine the appropriate course of action in common medical scenarios performed no better than a control group that relied on their own resources at home, such as online searches, without AI assistance. Researchers observed instances in which users describing the same symptoms received conflicting advice, partly because of how users phrased their questions, but also because the chatbots themselves sometimes misinterpreted prompts and gave inconsistent or incorrect responses. The study authors noted that newer models may score higher on medical benchmarks but said it remained unclear whether those gains would translate into real-world performance improvements. More recently, a study from Mount Sinai found that ChatGPT Health under-triaged more than half of medical emergencies in structured clinical testing, potentially directing patients with serious medical conditions toward routine follow-up rather than emergency care.
Subscription Models and Access
Even as reliability concerns persist, KFF’s March 2026 Tracking Poll on Health Information and Trust showed that about a third (32%) of adults have turned to AI for health information and advice, and four in ten of these users say they have uploaded personal medical information to get personalized advice. Cost and access, though, may shape who is able to rely on these tools. ChatGPT Health is currently available on all membership tiers, including the free plan, while Perplexity Health, Claude for Healthcare’s personal health integrations, and Amazon Health AI require paid subscriptions or memberships. Copilot Health is currently available for free, though Microsoft has indicated it will eventually move to a paid subscription model with pricing not yet announced.
KFF polling has shown that the cost of seeing a provider is a motivator for some turning to AI, with about one in five (19%) saying that a “major reason” they used AI for health advice was that they could not afford the cost of seeing a provider, rising to three in ten (29%) among users ages 18 to 29. The tools offering the most personalized features through direct integration with medical records are increasingly behind a paywall, potentially making them inaccessible to those who are already struggling to afford health care.
Why It Matters
As consumer-facing AI health tools expand, the gap between the personalization that these tools offer and their reliability may shape the quality of health information that people receive, while concerns about cost may further limit the utility of these tools.
What We’re Watching
AI Chatbots Spread a Fictional Disease Diagnosis, Experiment Finds
A team of researchers invented a fictional skin condition called “bixonimania” and uploaded two fake academic papers about it to a preprint server to test whether AI chatbots would treat the fabricated condition as real. Within weeks, major AI systems including Microsoft Copilot, Google Gemini, Perplexity, and ChatGPT were describing the nonexistent condition to users as if it were legitimate, in some cases advising them to see an ophthalmologist, according to a Nature news feature. The fake papers included clear signs of fabrication, with acknowledgements thanking “The Starfleet Academy” and “Professor Sideshow Bob,” along with explicit statements within the text that “this entire paper is made up.” Still, when users asked about it directly or described symptoms matching those described in the fraudulent papers, the chatbots treated the condition as real. Nature reports that the models have since been corrected and no longer reference bixonimania as a real condition.
The problem extended beyond chatbots: at least one peer-reviewed journal published a paper that cited the fake preprints as legitimate research. The paper has since been retracted, but researchers involved in the experiment say that its publication points to a broader issue in which some academics are using AI-generated references without reading the underlying papers.
What To Watch Out For: KFF’s March Tracking Poll on Health Information and Trust found that among adults who used AI for physical health advice (29% of adults), about seven in ten (69%) expressed at least “a fair amount” of trust in these tools to provide reliable health information, though few (6%) said they trust these chatbots “a lot.” As people turn to AI chatbots for health information, how these systems decide what counts as credible health information and how this may impact trust remains an open question.
People Who Use AI and Social Media for Health Information Rate Convenience Higher Than Accuracy, and Many Say It’s Difficult to Judge Which Information to Trust, Polls Show
Health care providers remain the most common and trusted source of health information, according to a new Pew Research Center survey. About two-thirds (65%) of those who get health information from health care providers rate them as “extremely” or “very” accurate, more than any other source, including government health agencies, news organizations, social media, or AI. Users of both AI chatbots and social media rated these sources as more convenient than accurate, pointing to a gap between why people say they use these sources and how much they trust them.
With some adults turning to social media or AI for health information, Pew’s survey also found that many adults struggle to know whether health information they come across is accurate, with half of the public saying it’s at least “somewhat difficult” to judge the accuracy of health information they see. Additionally, most adults (76%) say they hear health information that seems to conflict with other health information they have received, and when they do, just over half (54%) say it’s difficult to know which information to trust.
What To Watch Out For: The findings add context to KFF polling, which similarly finds that health care providers are the most trusted source of health information, even as trust in government health agencies has declined amid changing partisan views. As people turn to AI and social media for health information more out of convenience than trust in their accuracy, their willingness to use sources they don’t fully trust may create openings for false or misleading health claims to spread. At the same time, doctors and other providers remain in a unique position as trusted health messengers among most of the public.
False Claims About Birth Control and Cancer Omit Context to Overstate Risk
A claim that the World Health Organization recently labeled birth control pills as a Group 1 carcinogen has circulated widely online, including in some social media posts viewed more than 2 million times.
This is an example of how accurate information, stripped of context, can fail to provide a complete picture of the risks and benefits of contraception. The classification is real, but it is not new: oral contraceptives were placed in that category in 1999, based on evidence that they can increase the risk of certain cancers, including breast and cervical cancer. Some circulating posts, however, omit what that classification actually means. A Group 1 designation reflects the strength of the evidence that an agent can cause cancer, not the size of the risk or the likelihood that cancer will occur.
A large 2025 Swedish study tracking more than 2 million women found a small, short-term rise in breast cancer diagnoses among current or recent users, though the absolute risk remained low. KFF Health News reported that the study itself was distorted on social media, with some posts citing a 24% higher rate of breast cancer diagnoses without noting that this translated to roughly 13 extra cases per 100,000 women per year. Other research has found that hormonal contraceptives can lower the risk of ovarian and endometrial cancer, a finding that was not included in the online posts. These posts illustrate how decontextualized scientific information and data omissions can spread misleading claims, even without containing outright falsehoods.
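The gap between relative and absolute risk in the Swedish study figures can be made concrete with a quick calculation. This is a minimal sketch using only the two numbers reported above (a 24% relative increase and roughly 13 extra cases per 100,000 women per year); the baseline rate is back-calculated from those figures, not stated directly in the coverage.

```python
# Relative vs. absolute risk: why "24% higher" can still mean few extra cases.
# Figures from the reporting above; the baseline rate is inferred, not reported.

relative_increase = 0.24       # 24% higher rate among current/recent users
extra_cases_per_100k = 13      # absolute difference per 100,000 women per year

# extra = baseline * relative_increase, so the implied baseline is:
baseline_per_100k = extra_cases_per_100k / relative_increase

print(f"Implied baseline rate: {baseline_per_100k:.0f} per 100,000 per year")
print(f"Rate among users:      {baseline_per_100k * (1 + relative_increase):.0f} "
      f"per 100,000 per year")
print(f"Absolute increase:     {extra_cases_per_100k} per 100,000 per year "
      f"({extra_cases_per_100k / 100_000:.3%})")
```

The same 24% relative increase would look very different against a high baseline rate, which is why posts that report only the relative figure can overstate the practical risk.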
What To Watch Out For: KFF’s July 2025 Tracking Poll on Health Information and Trust found that about one in five (22%) adults reported seeing birth control-related content on social media in the past month, including higher shares of adults ages 18-29 (39%). Across platforms, though, fewer than half of social media users said they trusted most or some of the health information and advice they saw. CDC data show that women’s contraceptive options change throughout their reproductive lifespans, with some people opting for longer-term methods like intrauterine devices (IUDs) and implants in their later years. However, oral contraceptive pills continue to be the most commonly used method of reversible contraception in the US. Ongoing social media activity that distorts the risk of hormonal contraceptive methods may affect conversations about contraceptive safety and use, particularly among younger women.
More From KFF
Support for the Health Information and Trust initiative is provided by the Robert Wood Johnson Foundation (RWJF). The views expressed do not necessarily reflect the views of RWJF and KFF maintains full editorial control over all of its policy analysis, polling, and journalism activities. The data shared in the Monitor is sourced through media monitoring research conducted by KFF.