This piece is part of Gizmodo’s ongoing effort to make the Facebook Papers available to the public. See the full directory of documents here.

Meta didn’t choose to become a global distributor of medicinal snake oil and dangerous health advice. But it did decide it could tolerate it.

From the onset of the covid-19 pandemic, Facebook understood the outsized role its platform would play in shaping public opinion about the virus and the safeguards that governments would inevitably institute in hopes of containing it. Ten months before the first reported U.S. infection, Facebook’s head of global policy management, Monika Bickert, had laid out in a company blog a plan for “Combatting Vaccine Misinformation.” And while the title alludes to efforts to reduce the spread of misinformation — namely, by curtailing its distribution in the News Feed — what the blog really reveals is that, at some point, Facebook made a conscious decision to continue hosting vaccine misinformation rather than aggressively purge it. It was a missed opportunity, given that, at the time, the groups and pages promoting “anti-vaxxer” sentiment were relatively few in number. Very soon, that would all change.

In our latest drop of the Facebook Papers, Gizmodo is publishing 18 documents that shed light on the internal discussions within Facebook on covid-19. The papers, only a handful of which have ever been shown to the public, include a number of candid conversations among mid- and high-level employees: researchers, managers, and engineers with appreciably different views on the company’s moral obligations. Facebook declined to comment.

In retrospect, Meta’s attitude toward medical misinformation should have evolved months before “coronavirus” became a household name. In September 2019, top infectious disease experts had warned that measles was coming back in New York, an occurrence one long-time advisor to the U.S. Centers for Disease Control and Prevention described as nothing short of “embarrassing.” Dr. Nancy Messonnier, director of the agency’s National Center for Immunization and Respiratory Diseases, said the resurgence of the virus was “incredibly frustrating… because we do have a safe and effective vaccine.” Social media bore the brunt of the blame.

Ironically, in some ways Facebook’s own plan mimicked the “free speech” arguments of the anti-vaxxers. Despite the public health threat, the groups and pages spreading medical hoaxes were to be given carte blanche to continue doing so. Moderation would be limited to “reducing” their ranking, excluding them from recommendations, and not surfacing “content that contains misinformation” in searches. None of these tactics would prove effective. Soon after, global health authorities would begin rejecting offers of free advertising from Meta. Spreading authoritative medical advice on the platform was a waste of time, they said. The comment section of every post promoting vaccines proved to be a magnet for disinformation. The World Health Organization realized offering advice on Facebook was ultimately doing more harm than good.

Documents leaked by former Facebook product manager Frances Haugen have shown that whatever upper hand the company may have had before the death tolls began to skyrocket in 2020 was ultimately squandered. The internal materials tell a familiar story: Relatively low-level researchers at Facebook identify a problem and are gung-ho about solving it. At higher levels, however, the company weighs the consequences of doing the right thing — adopting solutions that might actually save lives — against the possible political ramifications.

Broadly, the documents show Meta employees understood well the staggering levels of health and medical misinformation surfacing in user feeds during the earliest weeks and months of the crisis. They show, definitively, there was an awareness at the company of activity “severely impacting public health attitudes,” that it was widespread, and that misinformation discouraging vaccine acceptance had, to quote one employee, the “potential to cause significant harm to individuals and societies.”

As the number of people in the U.S. who had died from the virus surpassed 100,000 in May 2020, an employee of Facebook’s integrity team acknowledged the site’s role in creating a “big echo chamber,” driving the false narrative that medical experts were purposefully misleading the public. The loudest, most active political groups on the platform had “for weeks,” they said, been those dedicated to opposing quarantine efforts. It was clear that at least some of the groups had swelled in size not because people had sought them out, but because they were artificially grown by a small number of users who employed automated means to invite hundreds or thousands of users every day.

The plan laid out by Bickert the year before, to contain the misinformation rather than eliminate it, was failing. Miserably.

These covid denial groups, one employee noted, were getting “a lot of airtime” in the News Feeds of “tens of millions of Americans who are now members of them.” The question they put to their colleagues: “do we care?”

One internal study dated March 2021 (not included below) detected at least 913 anti-vax groups on the platform comprising 1.7 million users, a million of whom, the study said, had joined via what Facebook calls “gateway groups” — user-created groups Facebook researchers have observed encouraging people to join “harmful and disruptive communities.”

As the 2020 elections approached in the latter half of the year, the company began to consider other factors beyond the health and wellbeing of its users: Namely, its own reputation, as elected officials and candidates for office began predictably wielding the platform’s flailing enforcement efforts as a political bludgeon. Documents show the company pondering what it calls strategic risks — the potential consequences of clamping down too quickly or too hard on misinformation, prompting even more public allegations of “censorship” that, by then, had become reliable catnip for right-wing media audiences.

Facebook had decided that what its users considered “harmful misinformation” was really a matter of opinion, broadly tied to an individual’s political leanings: a “subject of partisan debate.” One document suggests integrity decisions were reached based on this relative truth, rather than on the actual recommendations of infectious disease experts. Political blowback from cracking down on covid-19 misinformation too strenuously — relying on methods that might ensnare some content inaccurately flagged as “misinformation” — was a major factor in integrity enforcement decisions, according to one proposal.

Members of one of Facebook’s “cross-functionality” teams — which are designed to incorporate input from across the company — ultimately recommended that “widely debunked COVID hoaxes” not be removed from the platform, but instead merely demoted in users’ feeds. Demotion would occur automatically when the content was gauged to be at least a 60% match with known hoax-related content. (This approach is “analogous,” it said, to the process used to filter out harmful content in countries at high risk of hate speech and violent incitement.)

While the team suggested “harmful” content be removed from the platform, it recommended against doing so automatically. Any posts deemed harmful enough to be removed from the site should require manual review, either by a full-time employee or a specialized contractor. It’s unclear which of the team’s recommendations, if any, were adopted.
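To make the described flow concrete, here is a minimal sketch in Python of how such a routing rule might look. Only the 60% match threshold and the manual-review-before-removal rule come from the documents; every name, score, and data structure here is hypothetical and not drawn from Facebook’s actual systems.

```python
# Hypothetical sketch of the moderation flow described in the XFN recommendations.
# Only the 60% hoax-match threshold and "manual review before removal" rule come
# from the documents; the names and scoring below are illustrative assumptions.

from dataclasses import dataclass

DEMOTION_THRESHOLD = 0.60  # demote when content is at least a 60% match to a known hoax


@dataclass
class Post:
    post_id: str
    text: str
    hoax_similarity: float   # score from a classifier comparing against debunked claims
    flagged_harmful: bool    # separate signal for content judged potentially harmful


def route_post(post: Post) -> str:
    """Return the action the recommendations would imply for a given post."""
    if post.flagged_harmful:
        # Harmful posts are never removed automatically; a full-time employee
        # or specialized contractor reviews them first.
        return "queue_for_manual_review"
    if post.hoax_similarity >= DEMOTION_THRESHOLD:
        # Widely debunked hoaxes stay on the platform but are demoted in feeds.
        return "demote_in_feed"
    return "no_action"


if __name__ == "__main__":
    # Made-up example posts to show how each branch would fire.
    posts = [
        Post("p1", "miracle cure, doctors hate it", hoax_similarity=0.72, flagged_harmful=False),
        Post("p2", "bleach cures covid", hoax_similarity=0.91, flagged_harmful=True),
        Post("p3", "got my second dose today!", hoax_similarity=0.05, flagged_harmful=False),
    ]
    for p in posts:
        print(p.post_id, route_post(p))
```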

In the week following the 2020 election, roughly a million new cases of covid-19 were reported inside the U.S. By the end of the year, the virus was estimated to have killed more than 318,000 Americans. Since then, nearly 700,000 more have died in the U.S. alone.

October 20, 2022: Covid-19 and Vaccine Misinformation

Vaccine Hesitancy in Comments: C19D Lockdown Update

  • A document outlining Facebook’s shortcomings in clamping down on the “rampant” anti-vax rhetoric happening in the comments of people’s posts across the platform—along with some potential fixes. “We’ve heard that [legitimate health authorities] like UNICEF and WHO will not use the free ad spend we’re providing to help them promote pro-vaccine content, because they don’t want to encourage the anti-vaccine commenters that swarm their pages.”

Identifying and Comparing Pro- and Anti-COVID-19 Vaccine Comments

  • A test that actually tries to quantify how much antivax nonsense is happening in people’s comments as opposed to original posts. On a sample post, pro-vax comments were 20% more likely to be algorithmically flagged as “problematic” than their antivax counterparts. In a random sample of two weeks’ worth of COVID/vaccine-related comments from across the platform, 67% skewed anti-vax. (Tl;dr: The study suggests that “anti-vax sentiment is overrepresented in comments on Facebook relative to the broader population.”)

Vaccine Hesitancy Is Twice as Prevalent in English Vaccine Comments Compared to English Vaccine Posts

  • Another study, similar to the one above.

COVID Containment Week 2: Ideas Pipeline: Global Health Commons

  • A detailed proposal for a Facebook-hosted “Global Health Commons” for anonymized, high-res public health data gleaned from users by the company.

“Harmful Non-Violating Narratives” Is a Problem Archetype In Need of Novel Solutions

  • Signs of QAnon supporters trying to kill people: “Belief in QAnon conspiracies took hold in multiple communities, and we saw multiple cases in which such belief motivated people to kill or conspire to kill perceived enemies.”
  • A proposal for some ways the company can tackle posts that are “harmful,” but that don’t break the platform’s policy rules (most “vaccine hesitant” posts seem to fall under this umbrella). “It is normal to express uncertainty or doubt about the relevant topic, and so we agree that removing individual content objects is not defensible.”
  • The original poster states that content “consistent with vaccine hesitancy” is rampant on-platform, according to past internal studies, which found that between 25% and 50% of the vaccine content users see on the platform is “hesitant”; that 50% or more of the comments users view on that content are “hesitant”; and that “hesitant” content “may comprise as much as ~5%” of all content viewed in-feed (measured by VPVs).
  • OP points out that “we know that COVID vaccine hesitancy has the potential to cause severe societal harm,” but that the company historically approached problems in this vein reactively: taking limited/no action at the start, and only cracking down on that content once the public revolted.

Potential Vaccine Hesitancy Product Solutions

  • Exactly what it says: proposals for ways to tweak the overall product design to make Facebook less “rewarding” for folks posting antivax content. Offers a breakdown of the parts of the platform that make this kind of content so “rewarding” — and mostly unmoderated.

XFN Covid Recommendations

  • A post detailing the “political risks” that inform the company’s approach to handling COVID misinfo. A few examples: “What constitutes ‘harmful misinfo’ is quickly becoming the subject of partisan debate,” even where both sides agree that harmful COVID misinfo should be removed; misinfo about voting that doesn’t qualify as direct “voter suppression”; and the fact that “We automatically [demote] content that likely contains a claim about COVID that’s been widely debunked,” but the company “may be criticized” if that bunk claim ends up coming from a prominent political figure.

A Covid Multi-Language Facebook Post Classifier

i18n Covid Classifier Refresh

  • An announcement of an upcoming product launch for a classifier designed to detect covid-19 content in posts across multiple languages, including French, Arabic, Russian, and Urdu. The second document announces a subsequent update to the classifier aimed at making non-English post detection a bit (a lot) more accurate.

COVID-19 Vaccine Offense HPM 3/10

  • Announcing the launch of new products meant to normalize getting a vaccine; these include vaccine-focused picture frames on Facebook, vaccine-themed stickers on Instagram, and “Covid Info Centers” placed at the top of people’s feeds.

Revamping the Antivax Searchability Query Set with the Signal-Based Method

  • A 2019 post testing out some potential models meant to better detect antivax content.

Health Integrity Feedback Example

  • An internally shared example of an English-language post laden with COVID-vaccine misinfo that slipped past the company’s detection systems (because it was accidentally detected as Romanian).

COVID-19 Vaccine Risks Appear to be Concentrated Among a Few Subpopulation Segments

  • A quick study into whether specific user segments are more likely to heavily post anti-vax content (turns out, the answer is yes!). Also found some heavy overlap between anti-vax posters and pro-QAnon posters (“It may be the case that [antivax] belief in these segments often orients around distrust of elites and institutions”).
  • “In the top [anti-vax] segment, 0.016% of authors earn 50% of [anti-vax]-classified [content views].”
  • Upon finding that a good percentage of anti-vax content could be traced back to a handful of users hyper-posting it, the researchers noted that “users with many followers achieve page-like feed distribution without comparable integrity scrutiny.”
  • Researchers found vaccine hesitancy to be “rampant” in comments on Facebook: They mentioned groups like UNICEF and WHO choosing not to use free ad space because they did not want anti-vaxxers swarming the comment sections of their posts.

Health Integrity Sample Link 1 / Link 2

  • Two links to external research papers on the effects of social media on vaccine misinformation.

Facebook Creating a Big Echo Chamber—Do We Care?

  • A post from one employee wondering aloud whether the company will do anything about the active (albeit non-violating) anti-quarantine groups spreading across the US. As the original poster points out, these are “the most active” politically-classified groups at the time.
  • “At least some of these groups got so large not by people seeking them out and opting to join them, but by a relatively small number of people using API’s to send invites to hundreds or thousands of users per day.”

Ghost Posts—“Remdesiver is a Cure”

  • An internal flag noting that people’s “Facebook Memories” might be surfacing COVID-19 misinfo from the previous year.

COVID Misinfo Discussion

  • An internal post asking whether covid-19 misinformation being shared on Instagram by a prominent Indian celeb can perhaps, maybe, possibly be labeled as misinformation. (Per the comments, this post was later taken down.)

Shoshana Wodinsky contributed reporting.
