Pushkin from Pushkin Industries. This is Deep Background, the show where we explore the stories behind the stories in the news. I'm Noah Feldman. This week, former President Donald Trump's second impeachment trial is taking place in the Senate. At the core of the trial is Donald Trump's speech. What did
the President say and when did he say it? From the standpoint of the House managers, Donald Trump incited violence on January sixth when he gave the speech on the Ellipse, encouraging his followers to march on the Capitol, which they then occupied. From the standpoint of Donald Trump's defense, there was no intimate connection, no causal connection, no connection at all, they say, between what the President had to say at
the time and what the rioters did. But impeachment was not the only, and possibly not even the most serious, consequence to Donald Trump of his speech on January sixth. In the aftermath of the attack on the Capitol, Twitter suspended Donald Trump permanently and Facebook suspended Trump indefinitely. Those two suspensions effectively blocked Donald Trump from social media, which had been the oxygen for his campaign and the main method that he used to communicate to his public during
his presidency. The consequences for the question of free expression and social media could not be greater. And indeed, after Joe Biden's inauguration, Facebook decided to refer the question of Trump's suspension to its newly created Facebook Oversight Board, a group of twenty plus experts from all over the world who are independent of Facebook and have the authority to decide whether its decisions conform with its stated values
and with Facebook's rules. Right now, the question of Trump's suspension is pending before the Oversight Board, and Facebook has pledged that it will follow the conclusion that the Oversight Board reaches. The case raises profound and difficult questions about what speech should be permitted and what speech should be restrained.
To discuss these pressing questions and the state of play in social media content governance more generally, I'm joined today by the Vice President of Content Policy at Facebook, Monica Bickert. Monica's job is to run all decisions about content policy at Facebook. She was intimately involved in the Trump process, the Trump decision, and the decision to pass along the
Trump case to the Facebook Oversight Board. Before we start this conversation, I want to begin by telling my listeners that when it comes to the Facebook Oversight Board, I am the very opposite of a disinterested observer. I helped come up with the idea for the oversight board in the first place. I worked as a paid consultant to Facebook during the entire three-year process of getting the board going, and I still advise Facebook now on questions
of free expression. Indeed, that's how I met Monica during the process of working on the oversight board, and we became friends and subsequently taught a class together on social media and the law at Harvard Law School. With that relevant background in mind, I'm excited to turn to our
conversation with Monica. Welcome to Deep Background, Monica. Let's start with the biggest ticket issue in the universe of content moderation right up front, which is the suspension of Donald Trump from your platform, as well as from Twitter, in the wake of the January sixth attacks on the Capitol. And I guess I just want to begin by asking you, what did your internal process look like? How, you know, in the whole compliance ecosystem that you're in charge of, of content policy at Facebook, did you make your way towards this really historic decision? The first thing that happened was my attention was brought to a couple of posts by then President Trump. My team flagged for me, hey, a video has been posted by the President. We're reviewing it now to see if it violates any of our content policies, but it's something that you need to look at. One was a video and one was a text post, and
they happened during the attack on the Capitol. We saw in the president's video that he said I love you all and thank you, or words to that effect, which to us constituted praise. So that was a violation of our policies. And then shortly thereafter there was a text post that had some of the same language. It called those who had breached the Capitol great patriots. So they flagged that video. I reviewed it along with some of my colleagues, and what we saw rose to a violation of our policy against celebrations of violence. And this is a policy that you may have heard about before, in the context of us saying we don't allow anybody to praise terror acts or acts of violence. And you can think of that as, if there's a bombing somewhere and somebody says, oh, I'm glad that bombing happened, we would remove that as praise of a terror act, but we remove any praise of violent acts where a person is likely to be injured. And here the Capitol attack we knew was a violent act. And this was sort of a normal part of our process: throughout the run-up to the election and then the run-up to the inauguration, we had a twenty-four-hour operations center where we were flagging and looking at content that potentially violated our policies. And of course that included content from any one of the billions of people using our services, but it also did include looking at content that was
posted by high profile accounts, including the president's account. So, Monica, I want to ask first a question about that twenty-four-hour operations center and how it was functioning in this instance. How was that working? Was it literally that there's someone there in the op center looking to see what the president would do next? Well, there are always three ways that content can get flagged for our attention,
and one is a user report. Two is, as you mentioned, we use technology to try to identify likely violations, and three is we work with partners outside the company, and that could include depending on where you are in the world, safety groups or media groups that might want to flag
something for us. And when it came to the US election, we were working with a number of partners, including election officials and safety groups, voters' rights groups, etc. And then we also had our teams who were looking at high profile accounts, not just because content that violates our policies from those accounts would be really important to be on top of, but also because sometimes you'll see people who have high profile accounts being the subject of attempted hacks
or attempted abuse in comments, and so those are things that our teams do watch. So let me turn now to the remarkable observation that you first reacted to the Trump posts because they were celebrating violence. That's a tiny bit different from what the House of Representatives alleged in its article of impeachment against Trump, not that he was celebrating violence, but that he incited violence. And the difference, I guess, is that celebration happens when the violent thing has already happened. You're celebrating the fact that the violent thing is happening, whereas with incitement, the violent thing hasn't happened yet, and you're doing something that is encouraging it to come about, and that involves a lot of prediction on the part of whoever's judging it, in this instance by the House and then eventually by the Senate. So do you see those things as in some way distinct, or was it just the case that when Trump gave his initial speech to the rally on January sixth, that didn't ring any immediate bells because no violence had happened yet? Because if so, that's kind of interesting really for the question of impeachment, right? If it didn't look like incitement when he said it, and you were all sitting there in your twenty-four-hour op center looking at it, that's sort of not a terrible defense for Trump to raise when he says I didn't incite any violence. Not a defense to raise with you, but a defense for Trump to raise in his impeachment trial. Well, one of the things that I would emphasize is with our celebration
of violence policy that is ultimately about preventing further violence. So, for instance, I mentioned earlier, if somebody says, oh, I'm glad that bomb went off in that city and killed all those people, the reason we remove that is not because it's distasteful, though it certainly is. It's because we think that people praising and celebrating violent acts glorifies that
and can lead to further violence. So, whether or not you want to call that additional incitement or call it, as we do in our policy, celebration of violence, I think the point is the same, which is, we thought there was a risk of additional violence, and we
thought the president's remarks contributed to that. But is it, in fact, the case that your own decision to indefinitely suspend President Trump from the platform was driven not by the theory that his speech to the crowd on January sixth led to the violence, but rather on the basis of comments he made after that violence had already begun? Yes, that is right. What we removed was commentary after the violence had begun. It was a video and a text post. But the reason that we have that celebration of violence policy is because we think that kind of commentary can indeed stoke further violence. And in fact, here not only did we remove the content, but we extended the twenty-four-hour ban that was called for by our policies. We extended that indefinitely because we thought the risk of violence on the ground was still very present and likely would be throughout the transition to power. That's really interesting too, the indefinite ban through the, as it were, transition until Joe Biden was eventually sworn into office, at which point, and we'll come to this later, Facebook turned this issue over to its oversight board. Was that, then, because you were worried about violence, or were you worried that somehow the transition itself was in jeopardy, through democracy itself being in danger, or are those basically the same thing? For us,
it's about the risk of violence. What we're looking at is through the lens of our values around allowing speech but also promoting safety and removing what we think could reasonably contribute to a risk of physical harm to somebody. And here we had actual physical harm happening on the ground.
We thought there was a continued risk of that, and we did not want the president at that time, who had a high number of followers, a really big microphone, and a pattern of celebrating violence, to be able to further stoke violence. The question that a lot of critics of Facebook inevitably are asking, once you did do it, is why not sooner? Right? You say the president was glorifying violence. Well, what was it when, after the Charlottesville violence, which included a death, the president said there were fine people on both sides? You know, why wasn't that a celebration of violence? Why weren't other comments that Trump has made over the course of his presidency comparable violations of policy, such that only now, when there was an attack on the Capitol, was he actually deplatformed? Well, first, I would say this isn't the first time we removed content from the president. Any time that we have a controversial post by any world leader, and this has happened a number of times, including with President Trump and other high profile leaders in the United States, any time we have a post that is close to the line, we have to look at what the most natural reading of that post is. When you look at that video and he's saying I love you and thank you, one could argue that he was addressing the protesters generally and not those who had breached the Capitol. We thought the most natural reading, since he was also saying, okay, go home peacefully, we thought the most natural reading was that he was referring to those who had engaged in the violence. That gets you to the taking down of the specific content.
But this decision was different because this was an indefinite suspension, what at least colloquially one would call a deplatforming, which is a bigger deal than taking down content that occurred in the past. Was there a specific rule that you could point to that merited the deplatforming, the indefinite suspension, rather than the taking down of the content? Yes. So basically, the first time that somebody violates one of our policies, unless it's a really severe violation, for instance, if somebody posts child sexual abuse material, then we would immediately take down the account, but for violations that fall into a general category such as a bullying violation or a celebration of violence violation, the first time is usually just a warning. But if there is a second violation within a period of time, then the consequence is generally a twenty-four-hour ban on that person posting on our services. So your account is still there, you just can't post anything. And if you recall that day, we came out and we said, we've removed the president's content and he is banned from posting on our services for twenty-four hours. So that was just a straight application of the policy. But what we then did the next day, we said, we're going to suspend that privilege to post indefinitely because of the fear of further violence, so that we at least have that in place through the transition of power. So that part, the extension from twenty-four hours to the indefinite suspension, was based on circumstances on the ground and not just routine application of our policies. I will say, of course, that with the consequences, you know, do we take down somebody's account after a certain number of strikes or do we ban them, sometimes we do exercise some judgment. We'll look at somebody's account, for instance, and say, well, in this case, we think this post was very borderline, and this person didn't get a notice of his or her violation, so we're actually not going to remove the page. We'll give them a final warning. That sort of thing is fairly routine. Here it went in the other direction. We said, we think we need to extend the ban at least through the
transition of power, and indefinitely after that. What would you say to a skeptic who said, okay, I accept that you have to exercise judgment. But why is it that through the entirety of Donald Trump's presidency that judgment did not involve his being suspended? And then after Congress stayed up all night and voted, ultimately, that the election was over and that he had lost, then suddenly the exercise of judgment went against Trump. You know, when he posed less of a threat to the company. Well, like I said, we had removed content from the President's account before; he had not hit the threshold that would trigger the twenty-four-hour ban. So that's just the application of our policies. I will say that one of the questions we've gotten in the wake of this decision is, what about other world leaders? What about world leaders who are seen by the international community, or the human rights community, as real bad guys, and
why don't you remove them from your service? And what I can say there is, again, we remove content when it violates our policies. We have removed content from other world leaders, and that includes praise of violence. It also includes sharing misinformation about COVID nineteen. So we do remove that content, but we only impose those additional consequences when it's called for under our policies. You mentioned the other
world leaders. This goes to one of the principles that's in your statement of values, and the first of them does acknowledge that, because there's a preference that you have for freedom of expression, especially on political topics, sometimes elected officials will say or do things that would otherwise violate your policies, and you don't take them down because you think that those things serve positive news value. How does that interact with the fact that somebody is a world leader? I mean, is that basically a reason to be more permissive with respect to statements by world leaders under the policy? The newsworthiness policy is a little bit different than that, and actually rarely do we use it with world leaders or politicians. Basically, in our community standards, we say here's what's prohibited, and then we say, if we think the value for the public in seeing something outweighs the safety risk because of the item's newsworthiness, then we may leave it up even if
it violates our policies. And we do apply that newsworthiness policy regularly, I think probably most often in the context of, say, there's a nude art exhibit, or there's an image of graphic violence in the context of somebody raising awareness about a war, and it shows a nude child or something like that, where we would say this is newsworthy, so we're going to leave it up. We think that the risk to safety is far outweighed by the value of people seeing this content. In a small handful of cases, we have used that policy to leave up content posted by world leaders or politicians, but that's fairly rare. But generally it would include something where we think there's no real safety risk, and we think that people should be able to see that this politician engaged in this speech, which is likely distasteful or there's probably something
about it that's problematic but not unsafe. I want to turn now to the Oversight Board, which I helped advise on and you helped construct and build. And in fact that's how we met, when I came out to Menlo Park, back when people still traveled places, at the very, very early stages to think and talk about potential Oversight Board directions. And sure enough, the baby's all grown up.
And I mean, so far the Oversight Board has decided, I guess, six and a half cases? Yeah, five and a half cases. Five and a half cases, none of them on the scale of this decision. This is a huge decision, huge for the company, huge for the Oversight Board, possibly not insignificant for politics in the United States, given that more than seventy million people voted for Donald Trump and lots of Republicans seem to believe that he still
has a big influence within his party. So I guess the first question I have is, do you think they're ready for it? I do. I do think they are, and I think the decisions they just put out show that. So basically, as you know, but people listening may not know, the Oversight Board was constructed to be an independent check on the decisions that my team is making, that Facebook is making, on removing people's content. And when they issued their most recent decisions, their first slate of decisions, there are five opinions that they put out, and then there was one decision they couldn't make because the post was actually removed by the person who had posted it. But in their five decisions, they really explained their thinking. They demonstrated real seriousness and sophistication, and I'm really excited about how they approached these first cases and the potential for them to decide really important cases in the future, starting with the decision to indefinitely suspend and remove content from President Trump's account. In most of the cases that they heard in this first tranche, the Oversight Board flipped the decision that Facebook had made. How are you going to feel if they flip you on this
one too? Well, we referred it to them because we think they should get to make this decision. So, you know, we're looking forward to that. And I will say, the criteria for us. So the way the board can get a case: a user could appeal to them, or Facebook could say, boy, this is really hard and really significant, and we think that somebody else should be making this decision. And in this case, we decided to refer this, the Trump decision, to the Oversight Board. The criteria we use are, is it a significant decision? It clearly is. And is it a difficult decision? And here, the fact that we have had some people saying, why wasn't it a permanent ban, why didn't you do it sooner, and we've had other people saying, I can't believe Facebook would remove a sitting president's ability to post, really shows how difficult this is. So I think they are the right group to decide it. And we didn't just ask them, tell us whether or not we were right to remove this particular video and impose this indefinite suspension. We said, tell us how we should think about removing content from, or indefinitely suspending, world leaders or those in positions of power in countries around the world. So this is something that really does have a global implication.
We'll be back in a moment. One of the things that's in process right now is that Donald Trump is being tried for impeachment in front of the US Senate, and counting noses, it seems much more probable than not that he will not be convicted by the Senate. And according to a norm that is in place, despite the fact that I don't like it very much, when a president is not convicted by the Senate because there's no two-thirds vote to convict him, that president usually says, I was acquitted by the Senate. I'm not sure I love the word acquitted in that context, because it's nothing like acquittal in front of a jury, as you know as a former prosecutor. But the president is likely to say, if the Senate doesn't convict him, I was acquitted. And whether it's too late for him to say it to the Oversight Board or not, in public, I would think that Trump or his supporters are likely to say, listen, Facebook, who are you to second-guess the Senate of the United States? You know, the impeachment of a president is a bit like an indictment; then he gets a trial. I was tried, I was acquitted, I'm not guilty of incitement, and therefore you should reinstate me. At least, that's what I would say if I were supporting Donald Trump in this effort. Should the Oversight Board care about that? Should it matter at all that there's been a public political process prescribed by the Constitution, and if at the end of that process it turns out that Trump is not removed, does that matter? I think it's really for the Oversight Board. I mean, that's why we have them. So I actually won't give an opinion on that. Because you don't want to unduly influence them, or, it's not because you don't want to take a stand on it? Well, I just don't think, I don't think
it's really my role. I think the reason we have them is because we think they should be able to make that sort of decision. Monica, tell me about how, maybe a little soon to say this, but how does your job and the job of your whole team that works on content policy change in a world where there's now this oversight board to review the decisions that you guys make? How does that affect you when you go
to work in the mornings. Well, I can just tell you my personal reaction to the first slate of decisions, which was, I was very happy and felt like we got clear direction from them. And this is not about reinstating three posts that we had removed, that we ended up reinstating after their decisions. It wasn't about that so much as it was about the other guidance they gave us, about why they thought we had to reinstate these posts. And so, for instance, things like, you need to provide more granular information about your COVID nineteen misinformation policies. Or there was operational advice about what we need to tell people about whether review is automated or done by human beings. And there was process advice about ensuring that people have the ability to be reheard, to appeal our decisions. That's the kind of guidance that can help us know where to invest from an operational standpoint or a product standpoint. And how do you and the company view those recommendations? It's kind of a subtle area, right? I mean, the board is empowered to give you nonbinding recommendations.
But it's also true that if the board makes something necessary to its decision in a case, then arguably that would be binding. So how do you figure out what it is in a given situation? Well, I should say, we're going to take the thirty days we have under the process that we've devised, thirty days to digest the decisions and respond to them publicly,
and we'll respond to them in newsroom posts. So we'll have to look into the specifics of each of their recommendations before we have an answer to give on that. But more generally, the process we have is if they tell us that a specific piece of content should be up or down, we will honor that and we will implement that right away. And we've done that with the
decisions that they gave us. If there is other content that is identical in terms of what it's saying, and basically it's in a parallel context, it's being used the same way, then we will try to find that and make the same decisions. So, for instance, if they tell us, hey, this particular meme that you, and I'm making this up, but if they said this meme that you removed for hate speech does not violate and you should reinstate it, we might look for other instances where we had removed that same meme and say, okay, if it was shared without a caption and it was shared in the same way, we're going to reinstate that right away. So that's part of implementing the binding part of their decisions. The policy guidance stuff, including them saying, for instance, you know, you should look at the comments under a post in your evaluation, or you should have an automatic right of appeal to a human being to re-review content, that is not binding
on us. You mentioned COVID misinformation as a currently very important question, one that the oversight board has already referred to, and obviously it takes up a lot of your own thinking and time. How, broadly speaking, has the company decided to think about COVID misinformation? And I'm thinking now especially about vaccine-related misinformation, as we head into a period where, for the moment, there's still a question of getting enough vaccines to the people who want them. But at some point, with any luck, there'll be a shift and we'll start wondering about what they call vaccine hesitancy, which does seem to me like a major, major, major euphemism for people who don't want to be vaccinated. And if they don't want to be vaccinated, that may be on the basis of a view of the world that from a scientific perspective might be counted as misinformation. So how is the company thinking about that? Oh, this is such a, this is so difficult and so important. And we've been focused on this since last January, I mean since the pandemic first began, and we've been working closely with health authorities, most notably probably the World Health Organization and the CDC in the US, to get their guidance on how we should be thinking about and responding to COVID misinformation. By the way, misinformation is just part of it.
We have a number of COVID specific policies. One that's interesting, that maybe we could talk through some time, is what to do with commerce, offers to sell masks or COVID test kits, especially when there are shortages or when things aren't necessarily reliably certified. So there are a lot of COVID specific policies, but specifically in the area of misinformation,
we've developed really a two-pronged approach. One is removing or down-ranking and labeling content that is misleading or inaccurate, and then the other prong is really aggressively promoting accurate information about the vaccine, about treatments, about the virus itself. And so, I mean, just as an example, I think we put up the COVID Information Center, I think we got that going in March or maybe earlier of twenty twenty, and we have had hundreds of millions of people visit that, which is encouraging, and part of that is because we're blasting notifications trying to direct people to that center. In fact, in just December twenty twenty, we had more than one hundred and thirty million people visit that information center. So
that's one thing we're doing. But as you say, actually responding in the moment to misinformation that's shared on the platform is also really, really important, and so we are removing, and these are criteria that we've worked up with health organizations, we're removing content that falls into a number of categories where, if somebody believed it, it would contribute to the risk that that person would get COVID or would spread the virus. So, for instance, false statements, false claims about the disease being no worse than the flu, or the virus doesn't really exist, or people are immune from it, they can't get it, or five G causes COVID, all of them. What about people who say, what about someone who says, I just don't believe these vaccines will work and I believe they'll do harm and I'm not going to take them? That's something we would allow. If somebody says, I personally, and then they're giving their own experience, we would generally allow that. If they are saying something as a statement, so something like, you know, I've looked at it, the vaccines don't work, or the vaccines cause infertility, or did you see this study or this article, and then they're sharing something that is inaccurate, we would remove that. What gets really tricky, and this is sort of where your
comment goes. What gets tricky is statements of personal questioning or personal testimonials. So let's say somebody says, I just got the first vaccine shot. I've never been this sick. It's really really horrible. If I had it to do all over again, I don't think I would have gotten it. What do you do with that? That's just a person stating his or her own opinion. What about somebody stating facts like my sister got the vaccine on Monday. On Wednesday,
she was diagnosed with pancreatic cancer. With the high number of people getting vaccinated, some people are going to have heart attacks the next day, not because of the vaccine, but because they were always going to have a heart attack on Tuesday. And it goes further than that. There will be some people, I don't know about further, but there will be cases of people who get vaccinated and then the next day have COVID because they were infected before they got the vaccine. I mean, that's going to happen to some people, right? So that's one of the tricky questions for us: how do we deal with this sort of testimonial content? And what's your approach been
with personal testimonials? We generally allow it. I mean, if it looks like, if we see a case where it looks like somebody is intentionally trying to skirt the policies, maybe this is a financially motivated actor, or maybe this is somebody who is generally sharing conspiracy theories and there's something more going on there, we might take a different approach. But if this is just a regular person who is sharing a personal experience, our general policy is to allow it.
There's an in-between approach too. So we remove content where we think it can contribute to the spread of the virus, and there are all different kinds of claims that fall into that, but it's generally stuff about diminishing the seriousness of the virus, or saying that there are cures when there aren't, or discrediting the vaccines.
But then there is also content that we demote, meaning it won't get the same distribution on Facebook, and we put labels on it that direct people to that COVID Information Center, and that will include content like, the vaccine is man-made, this is all a big conspiracy, and there, there's not so much a safety risk, but we do want to make sure that people are actually getting
accurate information about the virus. So just to basically put some numbers on it, since March, and I think these numbers go through maybe October, we removed about twelve million posts for COVID misinformation, and I think since then, in December, we've removed maybe just over four hundred thousand such posts. So that kind of gives you the general idea of scale. And then in terms of demoting and labeling content where there's not a safety risk but it's still widely debunked misinformation, it's more than one hundred and sixty million posts in that same timeframe that we've labeled. So as a proportion, it's more than ten times as much down-ranking or labeling compared to removing content. I want to ask you about the
future of content moderation in that way. I mean, do you see, is that characteristic of where you see your whole field going? I mean, is there going to be more and more down-ranking and labeling rather than removal, or in addition to removal, or do you think those numbers are likely to remain relatively stable going forward in terms of the ratio? When it comes to misinformation, I think labeling will become more and more important. Now, Facebook already does it quite broadly. We've done this since twenty seventeen. We work with more than eighty fact-checking partners, not just on COVID misinformation, this is on misinformation about any topic. If it's going viral and a fact-checking agency that we work with wants to fact check it, then they can label it, and we will down-rank it and we will apply the label to it. That's something that we already do quite broadly. But I think this is an approach that you're starting to see some of the other platforms get into. In fact, you saw it in the run-up to the election, and I wouldn't be surprised if it's something that increases in its importance as a tool. One of the criticisms that I've heard a lot, sometimes directed at the oversight board, but more broadly directed against content moderation, is that in a sense, it's all very
well and good. Everyone says, it's good that you're doing that, it's nice that Facebook wants to do that. But if the biggest, deepest social cost associated with Facebook is people finding themselves in algorithmic bubbles where they mostly hear what is referred to them by their friends, their family, people of like mind, and if that drives polarization, and these are very controversial claims, but I'm ventriloquizing what critics often say, then they say, you know, isn't it just sort of a band-aid to say we're taking down the worst content, or we're down-ranking content that we don't especially like? The strong form of the criticism would say, all of the tools that you've placed in content moderation, or in content policy, those should go to the very fundamental question of what the company allows to be seen in the first place. You know, maybe the news feed that Facebook produces should come under the auspices of content policy, you know, should be similarly not just checked for misinformation, which it is, of course, but more broadly should be part of a process of trying to curate material in a way that minimizes polarization. And obviously that's not the world that we live in now, but it's a normative vision of how things could evolve
or develop in the future. When you hear that kind of criticism, how do you tend to react to it? Well, you know, one of the things I think that points to is the power of us directing people to authoritative information, where, like I said, the numbers actually are very good. We have a COVID information center, we had a voting
information center before the US election. We're building other information centers, and what we're seeing is people actually do visit these when they are directed to them in the moment, and so that's something that I think can be effective against polarization. For instance, in the run up to the election, we were directing people very broadly, not just when there was something false, but when people were discussing election related topics. We were saying, get the facts here and pointing them
to a bipartisan, accurate set of resources. So I do think that's important. The other thing I would say is because of the headlines and because of the understandable focus on the election recently, I think there's a misperception that Facebook is all about news or politics, and in fact, the news content, the percentage of Facebook content that is
related to the news is very, very small. I think it's less than five percent. And so when you think about polarization going up, and I think there are some studies out there that show that polarization has been increasing politically in the United States for decades, and there are many reasons for that. So it will not be enough for social media companies alone to say, well, we're going to take this one approach. This is something that we
have to work on as a broader society. A last question, Monica, and again this comes from skeptics. They'll say, well, look, the oversight board is great, but it's only going to hear a handful of cases. What about all the other cases where every day Facebook is making decisions about content posted by users who get a lot of engagement, including people whose values and views might threaten the content policy standards.
How do you assure, or try to assure, the general public that the company's profit incentive, which goes alongside engagement, is not enough to overcome the counteracting principle of enforcement that you and your team are charged with implementing?
I guess the skeptical way of putting it would be, it's nice that the oversight board will oversee you some of the time, but why should the rest of the world trust you when the oversight board isn't looking I guess I have two answers to that, and one is sort of a personal perspective, which is I've been in this job now for eight years or so, and what it looks like is my team of a couple hundred people coming together with experts on speech and safety from
around the world and crafting a set of standards, which we then apply with thousands of content moderators that use the rules and the guidance that we give them. It is not dictated by concerns about revenue. For instance, when we're designing our policies, we don't talk to people on the sales team about how that would affect revenue. That's not part of what goes into this, and so that's one
personal assurance I would give. But in terms of the Oversight Board's role on this, I think, yes, it's true, they're only going to hear a small number of cases. And even if we doubled the size of the oversight board, you know, we make millions of decisions every week, so the oversight board is not going to be able to hear a significant percentage of those cases. But the decisions that we saw them make have an effect in flagging for us the broader concerns. It's not just about reinstating a piece of content. Like I said, their guidance was much more around what kind of notice has to be provided, what kind of process has to be provided, and that's the sort of guidance that will indeed lead to us thinking about bigger questions that will affect all of our users. Monica, thank you for taking us under the hood.
It's a complicated engine in there. We will look forward to seeing what the oversight board does in the Trump suspension case. I myself will be looking forward to it with bated breath. I mean, I am about as far from the capacity to be objective about the oversight board as it's possible for me to be about anything, apart maybe from my actual children. But on the other hand, the oversight board is in fact totally independent now, and independent not only of Facebook, but certainly independent of me, and so I myself am watching with fascination and not a little terror to see how it all comes out. Well, thank you so much, and thanks for the conversation. I
always learn so much when I'm speaking to Monica. The truth is that we never really ask what happens behind the scenes at the big social media platforms when speech, whether that of an ordinary person or of Donald Trump, is left up or taken down. Where Monica lives professionally is an epicenter of a new form of power. It's the power to decide who is heard. It's also the power to amplify or decelerate the trajectory of information as it heads through the world. This is a crucial historical moment for the governance of content on social media, for what content is allowed to remain and what content is
taken down. We are witnessing a deep interpenetration of how the president's words and speech play out in the realm of government, as in the impeachment, and how they play out in the realm of communication across social media, as we see with respect to Trump's suspensions. The outcomes of each are going to matter for the way we think about free expression in the United States and the world.
And when the Oversight Board reaches its decision about Donald Trump, I will come back to you here on Deep Background with the possibility of further discussion and conversation. In the meantime, I'm watching the impeachment trial as closely as I know how. I'd better be, because I'm on TV almost every night this week trying to offer an opinion about it.
I'm sparing my Deep Background listeners those comments for the moment, but as the trial develops, if important things come up that we think are relevant to our listeners, I promise to come back to them in the very near future. Until the next time I speak to you, be careful, be safe, and be well. Deep Background is brought to you by Pushkin Industries. Our producer is Mo LaBorde, our engineer is Martin Gonzalez, and our showrunner is Sophie Crane McKibben. Editorial support from Noam Osband. Theme music by Luis Guerra at Pushkin. Thanks to Mia Lobell, Julia Barton, Lydia Jean Cott, Heather Fain, Carly Migliori, Maggie Taylor, Eric Sandler, and Jacob Weisberg. You can find me on Twitter at Noah R. Feldman. I also write a column for Bloomberg Opinion, which you can find at Bloomberg dot com slash Feldman. To discover Bloomberg's original slate of podcasts, go to Bloomberg dot com slash podcasts, and if you liked what you
heard today, please write a review, tell a friend. This is Deep Background.