Internal Facebook documents detail how misinformation spreads to users

Facebook chose profit over safety, whistleblower says

Ahead of the 2020 election, Facebook implemented safeguards to protect against the spread of misinformation by prioritizing safety over growth and engagement. It rolled back those defenses after the election, allowing right-wing conspiratorial content to fester in the weeks leading up to the January 6 riot at the U.S. Capitol, according to a whistleblower. 

Frances Haugen, a former Facebook employee, filed at least eight separate complaints with the Securities and Exchange Commission, alleging that the social network "misled investors and the public about its role perpetuating misinformation and violent extremism relating to the 2020 election and January 6th insurrection," including removing "safety systems" put in place ahead of the 2020 election. 

"And as soon as the election was over, they turned them back off or they changed the settings back to what they were before, to prioritize growth over safety," Haugen said in an interview with "60 Minutes" correspondent Scott Pelley. 

Facebook disputes that and says it maintained necessary safeguards, adding in a statement that it has "expressly disclosed to investors" that the risk of misinformation and extremism occurring on the platform remains. 

In 2019, a year after Facebook changed its algorithm to encourage engagement, its own researchers identified a problem, according to internal company documents obtained from the whistleblower.

The company set up a fake Facebook account, under the name "Carol," as a test and followed then-President Trump, first lady Melania Trump and Fox News. Within one day, the algorithm recommended polarizing content. The next day, it recommended conspiracy theory content, and in less than a week, the account received a QAnon suggestion, the internal documents said.

By the second week, the fake account's News Feed consisted "by and large" of misleading or false content. In the third week, "the account's News Feed is an intensifying mix of misinformation, misleading and recycled content, polarizing memes, and conspiracy content, interspersed with occasional engagement bait," the internal documents said.

Facebook says it used research like the test account to make safety improvements and in its decision to ban QAnon. The company added that the amount of hate speech users actually encounter has declined in each of the last five quarters.

While speaking to "60 Minutes," Haugen explained how the polarizing content reaches users.

"There were a lotta people who were angry, fearful. So, they spread those groups to more people. And then when they had to choose which content from those groups to put into people's News Feed, they picked the content that was most likely to be engaged with, which happened to be angry, hateful content. And so, imagine you're seein' in your News Feed every day the election was stolen, the election was stolen, the election was stolen. At what point would you storm the Capitol, right?" Haugen said.

"And you can say, 'How did that happen?' Right? Like, 'Why are we taking these incredibly out-there topics? QAnon, right, crazy conspiracies. Why are these the things that Facebook is choosing to show you?' And it's because those things get the highest engagement," Haugen said, comparing it to "gasoline on a fire."

Haugen is testifying to the Senate Commerce Committee on Tuesday. "Facebook, over and over again, has shown it chooses profit over safety," she said. 


Facebook statement on the suggestion that it misled the public and investors: 

"As is evident from the news and our numerous public statements over the past several years, Facebook has confronted issues of misinformation, hate speech, and extremism and continues to aggressively combat it.  Unsurprisingly, we expressly disclose to investors that these risks have and do and may in the future occur on our platform." 

Claim that removing safety systems after the 2020 election allowed divisive content to spread:  

"In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platforms signals and information from our ongoing, regular engagement with law enforcement. When those signals changed, so did the measures. It is wrong to claim that these steps were the reason for January 6th -- the measures we did need remained in place through February, and some like not recommending new, civic, or political groups remain in place to this day. These were all part of a much longer and larger strategy to protect the election on our platform -- and we are proud of that work."  

Carol's journey (the fake Facebook account):   

"While this was a study of one hypothetical user, it is a perfect example of research the company does to improve our systems and helped inform our decision to remove QAnon from the platform."  

Role in January 6th: 

"The notion that the January 6 insurrection would not have happened but for Facebook is absurd. The former President of the United States pushed a narrative that the election was stolen, including in person a short distance from the Capitol building that day. The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them. We have a long track record of effective cooperation with law enforcement, including the agencies responsible for addressing threats of domestic terrorism." – FB spokesperson

Internal FB research that found the platform takes action on only 3-5% of hate speech and less than 1% of violence and incitement content: 

"When combating hate speech on Facebook, bringing down the amount of hate speech is the goal. The prevalence of hate speech on Facebook is now 0.05 percent of content viewed and is down by almost 50 percent in the last three quarters, facts that are regrettably being glossed over. We report these figures publicly four times a year and are even opening up our books to an independent auditor to validate our results. This is the most comprehensive, sophisticated and transparent effort to remove hate speech of any major consumer technology company; and there is not a close second."   
