VOLUME 39
Abortion Pill Safety Decisions by FDA Were Science-Based, New JAMA Study Finds
Highlights
A new study found that Food and Drug Administration (FDA) decisions about the abortion pill mifepristone consistently followed scientific evidence, even as misleading claims about the drug’s safety continue to shape public understanding.
Google removed some AI-generated health summaries after a Guardian investigation reported that summaries generated for search results about multiple health topics, including cancer screening, liver disease, and mental health conditions, contained false and potentially dangerous health information. While the full extent of inaccurate health information in these AI-generated summaries is unclear, patient advocacy organizations described the examples as “dangerous” and “alarming.”
What We’re Watching
Claims That the FDA Failed to Properly Evaluate Mifepristone Persist as New Study Finds Decisions Were Science-Based
As FDA leadership initiates a new safety review of mifepristone, following claims by abortion opponents that the drug was not adequately evaluated before approval, a new study published in JAMA examining more than 5,000 pages of internal FDA documents from 2011 to 2023 finds that agency decisions were consistently driven by scientific evidence, not politics. The study found that agency leaders almost always followed the recommendations of career scientists, repeatedly reviewed safety data, and reaffirmed that mifepristone is safe while making cautious changes to access. Despite this detailed analysis documenting the rigor of the FDA’s review process, the Senate Health, Education, Labor, and Pensions (HELP) Committee held a hearing this month framed as an inquiry into the abortion pill’s safety, with statements describing mifepristone as putting women in “serious danger.” Evidence continues to demonstrate that mifepristone is a safe medication. KFF polling shows that while more than twice as many adults say mifepristone is “safe” (42%) as say it is “unsafe” (18%) when taken as directed by a doctor, four in ten express uncertainty about the pill’s safety. Perception of the abortion pill’s safety has declined since 2023 both among the public overall (42% view it as safe now vs. 55% in 2023) and among women ages 18 to 49 (41% now vs. 59% in 2023).
Polling Insights:
KFF’s November 2025 Health Tracking Poll found that the public is closely divided over the intention behind the FDA’s review of mifepristone. Just over half (53%) of adults, and a similar share of women of reproductive age, say that Secretary Kennedy’s decision to have the FDA review the safety of the abortion pill is mostly to “make it more difficult to access abortion pills,” while a somewhat smaller share (46%) says the decision is mostly to “protect the health and safety of women.”
Views on the FDA’s review of mifepristone are largely shaped by partisanship, with most Democrats (81%) saying the decision is mostly about curbing access to abortion pills and most Republicans (73%) saying it is mostly about protecting the health of women.
U.S. Withdraws from International Health Organizations as Trust in Public Health Institutions Declines
The U.S. withdrawal from the World Health Organization (WHO) took effect this month, with WHO Director-General Tedros Adhanom Ghebreyesus warning that the decision “makes the U.S. unsafe” and “makes the rest of the world unsafe” by cutting access to disease surveillance and emergency response systems. The withdrawal is part of a broader U.S. disengagement from international health efforts, including the recent announcement that the U.S. is withdrawing from 31 U.N. entities, such as the U.N. Population Fund, the lead U.N. agency focused on global population and reproductive health. Public opinion data suggest the decision lands amid declining and polarized public confidence in the WHO itself. According to an April 2024 Pew Research Center poll, about six in ten U.S. adults believed the U.S. benefited from its membership in the WHO, fewer than the share who said the same in 2021, including an 8 percentage point decrease in the share who said the U.S. benefited a “great deal.” These concerns about institutional and diplomatic trust intersect with broader trust challenges in health: the withdrawal comes as the U.S. public’s trust in federal health agencies continues to erode. As global health partnerships change, health communicators may benefit from tracking trust in health agencies to better understand where audiences turn for health information.
Fraudulent Ads on Social Media Continue Despite Enforcement Measures
Fraudulent advertising on social media continues to expose users to misleading and dangerous health claims, shaping how people assess and trust health information online. In early January, the Better Business Bureau (BBB) issued a “scam alert” about fraudulent ads that use AI-generated videos of celebrities to promote fake weight-loss products, including unauthorized endorsements for supplements claiming to be GLP-1 medications. The BBB said it received more than 170 reports about one such product, with customers spending hundreds of dollars after seeing the fraudulent ads. The use of celebrity likenesses and medical terminology may increase the perceived credibility of these claims, even when the products are not legitimate treatments. The persistence of these false health advertisements reflects the broader content moderation challenges platforms face, with recent investigations finding thousands of deceptive ads still active despite prior enforcement. A Reuters investigation also reported that Meta allowed a large volume of ads from Chinese partners, including ads for fake health supplements, prioritizing revenue while delaying or pausing some enforcement measures. As misleading health advertising continues, KFF will continue monitoring the types of health information the public reports seeing and trusting on social media to help health communicators determine when to intervene.
AI & Emerging Tech
Google’s AI Overviews in Search Results May Give Harmful Health Information
What’s happening?
An investigation by The Guardian found that the artificial intelligence (AI)-generated summaries that appear at the top of Google search results, called “AI Overviews,” at times provided inaccurate and misleading information about health topics, potentially giving users false reassurance about serious illness. The Guardian found that the overviews wrongly advised people with pancreatic cancer to avoid high-fat foods, provided misleading information about liver blood test results, and incorrectly identified Pap smears as screenings for vaginal cancer. Since the investigation was published, Google has removed some AI health summaries tied to specific search queries, but similar prompts can still trigger AI-generated results, and broader risks from AI-produced health information remain.
How often do people encounter and trust these overviews?
- July 2025 polling from the Annenberg Public Policy Center found that nearly two-thirds of Americans who search for health information online have seen AI-generated responses at the top of search results, and most who see these responses consider them at least somewhat reliable, though just 8% consider them “very reliable.” Among adults who have seen AI-generated responses when searching for health information online, about three in ten said the AI responses “always” or “often” provided the answer they needed. At the same time, most adults who see these AI-generated responses to health inquiries said they always or often continue searching by following links to specific websites or other resources.
- A qualitative study published in the Journal of Medical Internet Research found that participants often skipped these overviews in favor of traditional search results, with some expressing skepticism about them because of a lack of sourcing. Even participants who read the AI-generated summaries continued scrolling to review other results rather than stopping their search, suggesting that some users are adopting a “trust but verify” approach to AI for health information.
Why this matters
The continued prevalence of AI-generated health information, which can contain misleading and harmful advice, suggests a need for both better safeguards from technology companies and clear guidance from health communicators about how to critically evaluate AI-generated health information. Even as research indicates that some users may skip these overviews or try to independently verify their contents, communicators should be aware that patients may be using these AI overviews as starting points for health research.
More From KFF
- KFF Quick Take: Senators’ Questions About Mifepristone Could Further Spread Confusion About the Abortion Pill’s Safety Record
- KFF Policy Brief: State Recommendations for Routine Childhood Vaccines: Increasing Departure from Federal Guidelines
- KFF Quick Take: MAGA and MAHA Parents Are More Likely Than Others To Think the CDC Was Recommending Too Many Childhood Vaccines
- KFF Quick Take: New Federal Guidelines for Alcohol Use Come as Alcohol Deaths Remain Above Pre-Pandemic Levels
- KFF Health News: GOP Promotes MAHA Agenda in Bid To Avert Midterm Losses. Dems Point to Contradictions
Support for the Health Information and Trust initiative is provided by the Robert Wood Johnson Foundation (RWJF). The views expressed do not necessarily reflect the views of RWJF, and KFF maintains full editorial control over all of its policy analysis, polling, and journalism activities. The data shared in the Monitor is sourced through media monitoring research conducted by KFF.