After being fired by Facebook this month, a data scientist published a 6,600-word memo to the company's internal communication systems breaking down 2.5 years of her experiences on the “fake engagement team.” The resulting stories, largely centered on misinformation campaigns with both subtle and clear links to government staffers and political parties around the world, were shared with BuzzFeed News and reprinted with various redactions on Monday, prompting the reporters to describe the memo as “a damning account of Facebook's failures.”
Former Facebook data scientist Sophie Zhang pointed to activity across the world in nations such as Azerbaijan, Honduras, India, Ukraine, Spain, Bolivia, and Ecuador. Some of these stories include metrics for how many fake accounts Zhang purged; one story in particular, about the potential spread of COVID-19 misinformation to United States users, involved a ring of 672,000 accounts in Spain.
“I was the one who made the decision”
Arguably more egregious than the numbers was the silo in which Zhang allegedly operated: without institutional support, she took sole responsibility for deciding whether particular rings of accounts were moderated. “Individually, the impact was likely small in each [country's] case, but the world is a vast place,” Zhang wrote in her memo. “Although I made the best decision I could based on the knowledge available at the time, ultimately I was the one who made the decision not to push more or prioritize further in each case, and I know that I have blood on my hands by now.”
Part of this issue, Zhang explained, came from internal pressure to focus on security issues that might merit coverage in Western media outlets like The New York Times and The Washington Post. “It's why I've seen priorities of escalations shoot up when others start threatening to go to the press, and why I was informed by a leader in my organization that my civic work was not impactful under the rationale that if the problems were meaningful they would have attracted attention, became a press fire, and convinced the company to devote more attention to the space,” Zhang wrote.
BuzzFeed News points to Zhang's example: in February 2019, a NATO researcher tipped Facebook off about apparent Russian interference in US politics, which Zhang resolved before the researcher followed through on a threat to report it to the US Congress. After noticing that the fix was only temporary, the same NATO staffer chronicled the inauthentic behavior's return, sat on the findings for months, then sent them to the press, “finally causing the PR fire,” Zhang wrote.
One element is missing from BuzzFeed News' otherwise sweeping look at the memo: the issue of Facebook Free Basics and Facebook Discover, a pair of international initiatives designed to provide free or low-cost Internet devices and data plans to citizens of developing countries… with the catch that Facebook services are “zero-rated” in terms of data caps.
Zhang points to the issue of unresolved problems with politicians and governments running wide-scale interference campaigns against news propagation on Facebook—but she doesn't remind her former Facebook colleagues in the memo that such interference may be compounded by users in those nations having even less access to news content outside Facebook and its partners. Conveniently, those very nations, according to Zhang's memo, are apparently less interesting to Facebook's PR-centric approach to “inauthentic” user management.
BuzzFeed News compared the political forces and government employees in Zhang's memo to the Internet Research Agency, a Russian misinformation group that dominated infosec headlines in 2017. In Azerbaijan, “millions of comments” were created by an apparent staff of “dedicated employees” to target opposition viewpoints in all corners of Facebook. As University of Washington researcher Katy Pearce told BuzzFeed:
One of the big tools of authoritarian regimes is to humiliate the opposition in the mind of the public so that they're not viewed as a credible or legitimate alternative. There's a chilling effect. Why would I post something if I know that I'm going to deal with thousands or hundreds of these comments, that I'm going to be targeted?