VOLUME 34

Social Media Content Moderation Policies and False Claims that COVID-19 Vaccines Cause Cancer


Summary

This volume examines recent policy changes and proposals related to social media platforms and artificial intelligence (AI) that may affect online health information and explores contradictory studies on COVID-19 vaccines and cancer. Additionally, it shares other updates relevant to health communicators, including ACOG guidance on contraceptive misinformation, an alternative to the CDC’s MMWR, and the Texas AG’s lawsuit against Kenvue and Johnson & Johnson over Tylenol use during pregnancy. Lastly, it highlights a release from a recent KFF Tracking Poll, which finds that use of health care apps or websites to manage health care is widespread, but most adults do not trust apps that use AI chatbots to access their medical records and provide health information.


Recent Developments

Social Media and AI Policy Roundup

KFF periodically investigates regulatory actions and platform changes that may influence how health information is shared online. In September and October, social media platforms, states, and federal regulators implemented or proposed policies that could affect how social media content and AI are regulated, shaping what people see online.

YouTube changes its moderation approach and reinstates access to formerly banned users

  • YouTube announced in September that users previously banned under the platform’s older COVID-19 or election misinformation policies may create new accounts, provided they completed an appeals process by November 9, and may potentially re-upload the content that contributed to their termination if it does not violate current rules. The change reflects recent shifts in YouTube’s moderation policies toward what it characterizes as “free expression,” although the platform will continue to prohibit content that “contradicts local health authority guidance about specific health conditions and substances.”
  • The move follows Republican investigations into whether the Biden administration pressured tech companies to remove content, potentially infringing on First Amendment rights. In 2024, the Supreme Court reviewed these allegations in Murthy v. Missouri, but ruled that the plaintiffs lacked standing without determining whether the content removals in question violated free speech. Many social media platforms have since rolled back their moderation policies, but KFF polling has found that 68% of adults say health misinformation is a bigger problem than “people being prevented from sharing alternative viewpoints” on social media (31%).
  • The Stop Hiding Hate Act went into effect in New York in October, requiring social media companies operating in the state with more than $100 million in revenue to publicly report their content moderation policies and provide users with a way to report violations. These companies must also submit biannual reports to the state Attorney General detailing the number of posts flagged, posts acted upon, and actions taken.

Proposed federal AI legislation aims to preempt state and local regulation

  • Two bills introduced in Congress aim to set national rules for how artificial intelligence (AI) is regulated, replacing the mix of state and local laws that currently exist. Sen. Ted Cruz’s “SANDBOX Act” would give AI companies two-year exemptions from existing federal rules, allowing them to experiment while policymakers assess what new rules are needed. A second bill, from Rep. Michael Baumgartner, would prevent most states and cities from creating their own AI rules for five years in order to keep regulations consistent across the country while Congress develops a unified federal approach. The proposals follow a failed attempt earlier this year to establish a ten-year moratorium that the Senate struck from the One Big Beautiful Bill Act.
  • These bills could provide regulatory clarity, but they could also leave gaps in consumer protections by shielding these companies from liability if their products share harmful health information. States including Tennessee and California have already passed some AI regulations protecting against unauthorized use of voice, image, and likeness, and requiring safety frameworks and transparency reports. These state laws could be curbed or eliminated under the proposed federal legislation, leaving people vulnerable to false AI-generated health information online.

New Research Finds COVID-19 Vaccines May Have Anti-Cancer Effects, But Some Circulate a Contradictory Study

What’s the recent research?

  • A study published in Nature in late October found that mRNA COVID-19 vaccines may prolong the lives of people with cancer receiving immunotherapy. The research builds on decades of work exploring mRNA’s potential cancer-fighting properties. Based on their findings, the researchers theorized that the mRNA in the vaccine helped activate immune cells throughout the body, making them more likely to recognize and attack tumors. The study has been covered widely in mainstream media, and researchers plan to launch a Phase 3 clinical trial to confirm the results.
  • While mainstream news coverage has focused on these potential protective effects, a separate study examining health insurance records for more than 8 million people reported associations between COVID-19 vaccination and increased cancer risks. Epidemiologists say the study contains methodological flaws, including failing to account for differences in health care-seeking behaviors between vaccinated and unvaccinated people. The journal has since added a notice acknowledging concerns.

What’s happening in online conversation?

  • Most discussion about COVID-19 vaccines and cancer centers on research suggesting the vaccines may extend cancer patients’ lives. A smaller group, however, cites the study that reported a positive association to claim the vaccines cause cancer. While these claims come from fewer sources, several of those promoting them hold positions that could influence vaccine policy. For example, Children’s Health Defense, founded by HHS Secretary Robert F. Kennedy Jr., posted an article claiming the study showed that “All COVID Vaccines Increase Cancer Risk.” Multiple members of the CDC’s Advisory Committee on Immunization Practices (ACIP) also shared the study on their social media accounts. Other prominent accounts, including one X account with nearly 2 million followers, shared results of the study and said that it showed a statistically significant higher risk of cancer among those who were vaccinated.
  • Multiple news and fact-checking organizations published detailed analyses identifying concerns with the research methods. However, the study also received uncritical media attention, some of which was later corrected. The Daily Mail initially published an article about the study that claimed researchers said they had found “proof” that COVID-19 vaccines caused cancer, but later clarified that they had only found an association. One political commentator with nearly 5 million YouTube subscribers also featured a guest who discussed the study on his show.

What does the evidence say?

There is no credible evidence that COVID-19 vaccines cause cancer. Cancers typically take several years to develop, but the study that reported a correlation between vaccination and cancer followed people for only one year after vaccination, making it highly unlikely that any cancers observed were caused by the vaccines. The Global Vaccine Data Network has said there is no credible mechanism through which COVID-19 vaccines could cause cancer. As noted above, the Nature study, which builds on decades of research, shows that mRNA vaccines may help immune cells recognize tumors.

Why This Matters

False claims that COVID-19 vaccines cause cancer persist beyond social media. During ACIP’s September meeting, the committee invited speakers to present on potential correlations between COVID-19 vaccines and cancer, giving these concerns an official platform despite the lack of scientific support. The Nature study suggested potential protective benefits of COVID-19 vaccines against cancer, but contradictory claims supported by flawed peer-reviewed research and amplified by federal health officials may continue to spread and gain unwarranted credibility.

What We Are Watching

Several recent developments may influence how people access and engage with health information, without necessarily advancing a specific health narrative. Health communicators and researchers may find it useful to monitor how these changes affect public discourse and trust in health information. KFF will continue to track these developments.

ACOG Releases Guidance on Addressing Contraception Misinformation

The American College of Obstetricians and Gynecologists (ACOG) released updated clinical guidance recommending that clinicians combat birth control misinformation and advocate for contraceptive access. The guidance emphasizes renewed importance following the Dobbs decision and cites online misinformation, Medicaid cuts, and defunding of reproductive health clinics as threats to access. ACOG recommends that physicians oppose actions imposing contraception barriers and address misinformation through shared clinical decision-making and community education.

NEJM and Public Health Group Launch Alternative to CDC’s MMWR

The New England Journal of Medicine (NEJM) and the Center for Infectious Disease Research and Policy (CIDRAP) will begin publishing “Public Health Alerts” on an as-needed basis as an alternative to the CDC’s weekly epidemiology publication, Morbidity and Mortality Weekly Report (MMWR). The MMWR has served as a primary source for public health data and outbreak information, but trust in the CDC continues to decline, the MMWR’s staff was laid off and later reinstated, and communications pauses and the government shutdown have interrupted the journal’s regular schedule for the first time in its 73-year history.

Texas Attorney General Sues Tylenol Makers, Echoing Trump Administration Claims

Texas Attorney General Ken Paxton filed a lawsuit against Kenvue and Johnson & Johnson, alleging that the drugmakers concealed a supposed risk that prenatal Tylenol use causes autism. The lawsuit, filed one month after President Trump warned pregnant people not to take Tylenol despite a lack of conclusive evidence, could contribute to ongoing confusion about safety by claiming that the government “confirmed” prenatal use “likely causes” autism. In late October, HHS Secretary Kennedy softened his warnings, saying there is not sufficient evidence that Tylenol definitively causes autism, though he continued to recommend limiting its use during pregnancy.


AI & Emerging Technology

KFF Poll Shows Most Adults Do Not Trust Health Care Apps That Use AI To Access Medical Records and Provide Health Information

A recent KFF poll explored how adults use health care apps and websites to manage their health care. The poll found that while use of health care apps and websites is widespread, the public is largely uncomfortable with the idea of these tools using AI to provide personalized health information. A majority of the public (56%) say they would have “not much” trust or no trust at all in an online health tool that uses AI to access their medical records and provide personalized health information and advice. Overall, about one in ten adults (8%) say they would have “a great deal” of trust in a health care app that uses AI chatbots to access their medical records and provide personalized information, while about one in four (24%) say they would have a “fair amount” of trust in an app that uses AI for this purpose. The full poll report shares more findings.

Few Adults Express High Levels of Trust in Health Care Apps That Use AI To Provide Personalized Health Information

About the Health Information and Trust Initiative: The Health Information and Trust Initiative is a KFF program aimed at tracking health misinformation in the U.S., analyzing its impact on the American people, and mobilizing media to address the problem. Our goal is to be of service to everyone working on health misinformation, strengthen efforts to counter misinformation, and build trust.



The Monitor is a report from KFF’s Health Information and Trust initiative that focuses on recent developments in health information. It’s free and published twice a month.



Support for the Health Information and Trust initiative is provided by the Robert Wood Johnson Foundation (RWJF). The views expressed do not necessarily reflect the views of RWJF and KFF maintains full editorial control over all of its policy analysis, polling, and journalism activities. The data shared in the Monitor is sourced through media monitoring research conducted by KFF.