September 05, 2018

Harris Presses Facebook COO on Company’s Revenue from Russian Propaganda, Hate Speech

WASHINGTON, D.C. – At an open hearing of the Senate Select Committee on Intelligence today, U.S. Senator Kamala D. Harris questioned Facebook COO Sheryl Sandberg on the revenue Facebook earned from Russian propaganda run on its platform to influence the 2016 presidential election. Harris’ exchange with Sandberg followed up on questions she had posed at previous committee hearings to Facebook CEO Mark Zuckerberg and, separately, to other representatives from Facebook, Google, and Twitter.

Harris stated, “I asked Facebook CEO Mark Zuckerberg on April 10, 2018, and he said, ‘The Internet Research Agency, the Russian firm ran about $100,000 worth of ads.’ Following the hearing, I asked Facebook the same question in writing and on June 8, 2018, we received a response that said, ‘We believe the annual revenue that is attributable to inauthentic or false accounts is immaterial.’ So my question is what did you mean by immaterial?”

Sandberg was unable to clarify exactly how much Facebook profited from Russian propaganda campaigns during the 2016 presidential election, but stated that “Any amount is too much.”

Harris also raised the issue that although Facebook has claimed hate speech is not permitted on the platform, a 2017 ProPublica report on Facebook’s training materials revealed that content reviewers were instructed to remove hateful content towards white men, but not Black children.

Harris continued, “So a concern that many have is how you can reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory and hateful. So for example, Lisa-Maria Neudert, a researcher at Oxford Internet Institute, she says, ‘The content that is the most misleading or conspiratorial, that’s what’s generating the most discussion and the most engagement, and that’s what the algorithm is designed to respond to.’ So my concern is that according to Facebook’s community standards, you do not allow hate speech on Facebook, however contrary to what we have seen, on June 28, 2017, a ProPublica report found that Facebook’s training materials instructed reviewers to delete hate speech targeting white men but not against Black children because Black children are not a protected class. Do you know anything about that and can you talk to me about that?”

Sandberg acknowledged the rule was a “bad policy” that Facebook has since changed, but was unable to provide details on when the current policy took effect.

Full transcript of Harris’ questioning below:

Harris: Thank you, Mr. Chairman, for accommodating me, I’m in another hearing as you know. Good morning, and to the invisible witness, good morning to you. So I have a few questions. For Ms. Sandberg, on November 2, 2017, your company’s general counsel testified in front of this Intelligence Committee on Russian interference and I asked a few questions. I asked how much money did you make – and this is of the representative from both Facebook and Twitter, both of your general counsels were here – and I asked how much money did you make from legitimate advertising that ran alongside the Russian propaganda. The Twitter general counsel said, “We haven’t done the analysis but we’ll follow up with you and work on that.” And the Facebook general counsel said the same is true for Facebook. Again, I asked Facebook CEO Mark Zuckerberg on April 10, 2018, and he said, “The Internet Research Agency, the Russian firm ran about $100,000 worth of ads.” Following the hearing, I asked Facebook the same question in writing and on June 8, 2018, we received a response that said, “We believe the annual revenue that is attributable to inauthentic or false accounts is immaterial.” So my question is what did you mean by immaterial? Because I’m a bit confused about the use of that term in this context.


Sandberg: Senator, thank you for the question. So again, we believe the total of the ad spending that we have found is about $100,000. So the question you’re asking is with the inorganic content, I believe, what is the possible revenue we could have made? So here’s the best way I can think of to estimate that, which is that we believe between 2015 and 2017, up to 150 million people may have seen the IRA ads or organic content on our service. And the way our service works is ads don’t run attached to any specific piece of content but they’re scattered throughout the content. This is equivalent to .004% of content in News Feed, and that was why they would say it was immaterial to our earnings. But I really want to say that from our point of view, Senator Harris, any amount is too much.


Harris: But if I may, just so I’m clear about your response, so are you saying that then the revenue generated was .004% of your annual revenue? Because of course, that would not be immaterial.


Sandberg: So again, the ads are not attached to any piece of content, so it’s a difficult –


Harris: So what metric, then, if you can just help me with that. What metric are you using to calculate the revenue that was generated associated with those ads? And what is the dollar amount that is associated with that metric?


Sandberg: So the reason we can’t answer the question to your satisfaction is that ads are not…organic content…ads don’t run with inorganic content on our service so there is actually no way to firmly ascertain how much ads are attached to how much organic content, it’s not how it works. In trying to answer what percentage of the organic content –


Harris: But what percentage of the content on Facebook is inorganic?


Sandberg: I don’t have that specific answer but we can come back to you with that.


Harris: Would you say it’s the majority?


Sandberg: No, no.


Harris: Or an insignificant amount? What percentage? You must know.


Sandberg: If you ask whether inauthentic accounts on Facebook, we believe at any point in time it’s 3-4% of accounts but that’s not the same answer as inorganic content because some accounts generate more content than others.


Harris: I agree. So what percentage of your content is inorganic?


Sandberg: Again, we don’t know, I can follow up with the answer to that.


Harris: Ok, please. That’d be great. And then your company’s business model is obviously – it’s complex but benefits from increased user engagement and that results of course in increased revenue. So simply put, the more people that use your platform, the more they are exposed to third-party ads, the more revenue you generate. Would you agree with that?


Sandberg: Can you repeat? I just want to make sure I got it exactly right.


Harris: So the more user engagement will result – and the more that they are exposed to third-party ads, the more that will increase your revenue. So the more users that are on your platform –


Sandberg: Yes, yes.


Harris: Yes, ok.


Sandberg: But only I think when they are – when they see really authentic content, because I think in the short run and over the long run, it doesn’t benefit us to have anything inauthentic on our platform.


Harris: That makes sense. In fact, in the first quarter of 2018, the number of daily active users on Facebook grew 13%, I’m told, and corresponding ad revenue grew by half to $11.79 billion. Does that sound correct to you?


Sandberg: Sounds correct.


Harris: Then would you agree, I think it’s an obvious point, that the more people that engage on the platform, the more potential there is for revenue generation for Facebook?


Sandberg: Yes, Senator, but again only when the content is authentic.


Harris: I appreciate that point. So a concern that many have is how you can reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory and hateful. So for example, Lisa-Maria Neudert, a researcher at Oxford Internet Institute, she says, “The content that is the most misleading or conspiratorial, that’s what’s generating the most discussion and the most engagement, and that’s what the algorithm is designed to respond to.” So my concern is that according to Facebook’s community standards, you do not allow hate speech on Facebook, however contrary to what we have seen, on June 28, 2017, a ProPublica report found that Facebook’s training materials instructed reviewers to delete hate speech targeting white men but not against Black children because Black children are not a protected class. Do you know anything about that and can you talk to me about that?


Sandberg: I do, and what that was was I think a bad policy that’s been changed, but it wasn’t saying that Black children, it was saying that children, it was saying that different groups weren’t looked at the same way and we fixed it.


Harris: But isn’t that the concern with hate period, that not everyone is looked at the same way.


Sandberg: Well hate speech is against our policies and we take strong measures to take it down. We also publish publicly what our hate speech standards are. We care tremendously about civil rights. We have worked closely with civil rights groups to find hate speech on our platform and take it down.


Harris: So when did you address that policy? I’m glad to hear you have. When was that addressed?


Sandberg: When it came out. And again, that policy was badly written, bad example and not a real policy.


Harris: The report that I’m aware of was from June of 2017. Was the policy changed after that report or before that report from ProPublica?


Sandberg: I can get back to you on the specifics of when that would have happened.


Harris: You’re not aware of when it happened?


Sandberg: I don’t remember the exact date.


Harris: Do you remember the year?


Sandberg: Well you just said it was 2017.


Harris: That was of the – so do you believe it was 2017 that the policy changed?


Sandberg: Sounds like it was.


Harris: Ok. And what is Facebook’s official stance on hate speech regarding so-called and legally-defined unprotected classes, such as children?


Sandberg: So hate speech is not allowed on our platform and hate speech is, you know, important in every way. And we care a lot that our platform is a safe community. When people come to Facebook to share, they’re coming because they want to connect on the issues that matter [to] them.


Harris: So have you removed the requirement that you will only protect with your hate speech policy those classes of people that have been designated as protected classes in a legal context? Is that no longer the policy at Facebook?


Sandberg: So I know that our hate speech policies go beyond the legal classifications and they are all public and we can get back to you on any of that, it’s all publicly available.


Harris: Ok, thank you so much. Thank you, Mr. Chairman.


###