‘AI models are good at picking up nudity, but less good for hate speech’
In just over a year of its existence, Facebook’s Oversight Board has admitted 23 appeals on content moderation decisions and made over 50 policy recommendations to the company. In an interview, Sudhir Krishnaswamy, the only Indian member on the board and the vice chancellor of National Law School of India University (NLSIU), discussed the board’s work so far, its expansion plans and the need for algorithmic moderation of content on large platforms. Edited excerpts:
Now that it has been one year, what do you think of Facebook as a platform?
Facebook is a different kind of platform depending on the jurisdiction you are in. In the likes of Myanmar, or some countries in West Africa, Facebook is a primary media source. In a jurisdiction like India it is mixed: Facebook is big, but other media is also big, and what is ostensibly private messaging but also works as public media is big too. I think that background, an understanding of the media environment in each jurisdiction, is important, because Facebook plays a different role in each of them.
But I think what you’re asking is what Facebook can be. I suppose its promise is that it allows for a disintermediated community, that communities can form irrespective of geography, class, background and so on. That kind of rapid, large-scale community formation is its promise. But as we now know across the entire social media universe, where everyone is both a user and a publisher, the very format of the platform allows a range of other issues to crop up. The idea that an organic community is the automatic result of a peer-to-peer social media network has been severely tested. It has been tested across all platforms, and Facebook is no exception. The challenge of what one has to do about it has not been resolved so far in any jurisdiction.
We often say Facebook’s own policies are evolving, as are global laws. As the board, are you equipped to make recommendations?
The board is an exceptional body in terms of the kind of people on it. We take our work very seriously. If there are questions we have doubts on, say, what is the nature of the Amhara-Tigrayan conflict in Ethiopia, we commission an opinion on that. We will consult a security group that is world renowned and expert in the area, get their feedback in a period of 7-12 days and factor that into the opinion. We have no hubris; whoever knows, we will ask them. We commission our own translations, too, if we’re unsure about a particular language.
We also get public submissions. If we raise an issue about hate speech, top civil society organizations across the world will submit a brief. Those are superbly researched, well-argued briefs, saying you should go in this direction on this matter. So my sense is that our process is really strong.
All big platforms want to use algorithmic moderation, but issues remain. Is artificial intelligence (AI) a viable solution?
It’s an evolving field. The balance between legal code and software code in various areas is being worked out. On content moderation, we find AI models are pretty proficient at dealing with some content. For example, they’re very good at picking up nudity, pornography, banned substances, and arms and ammunition, but less good at hate speech or incitement, because incitement involves subtle use of language. This is where the battle lies. Even within nudity there are difficult cases, say an image featuring female breasts but concerning breast cancer, that the algorithms are not able to pick up very well. In some areas the algorithms are off but are being trained and retrained; in others they are quite off, and I think this is what frustrates a lot of users. For instance, hate speech or counter speech, where somebody said something and you say something back, and it’s your post that’s taken down while the original message stays up. These are difficult questions, and I think people are trying to automate more effectively across the board.
Because at the scale of these platforms, there will be an automation layer. There’s a certain misunderstanding of scale when people ask why don’t you use humans to do everything. Big platforms have to use automation to a certain extent. How much, and how good, are the relevant questions.
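A minimal sketch of what such an automation layer might look like, with low-confidence predictions routed to human reviewers, is below. The classifier, labels and threshold are illustrative assumptions for the example, not a description of Facebook’s actual pipeline.

```python
# Illustrative sketch of an automated moderation layer with a human-review
# fallback. The classifier, labels and threshold are assumptions, not any
# platform's real system.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # "remove", "keep" or "human_review"
    label: str         # predicted policy category
    confidence: float  # model confidence in that label

def moderate(post_text: str, classifier, threshold: float = 0.9) -> Decision:
    """Apply the model, but defer to humans when it is not confident."""
    label, confidence = classifier(post_text)  # hypothetical model call
    if confidence < threshold:
        # Subtle cases (incitement, counter speech, medical nudity) tend to
        # land here, the areas the interview says models handle badly.
        return Decision("human_review", label, confidence)
    if label in {"nudity", "banned_substances", "arms"}:
        return Decision("remove", label, confidence)
    return Decision("keep", label, confidence)
```

The design question the interview raises, how much automation and how good, maps onto the threshold and the quality of the classifier in a sketch like this.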
But do you think Big Tech has the capability to translate the recommendations you make into algorithms?
Some people have this intuition that we know what counter speech is, so just code it. But that’s not how code works, and it’s not as simple as that. The way AI works is that you develop some classifiers and train the machine on a range of training data. It then picks up how to apply the classifiers and applies them to a range of unseen, non-training data. That is the process.
Now, you can’t code for a legal norm in this direct way. You code by application. There can be a significant gap between the articulation of the legal norm and its articulation in code. There’s a trial-and-error process to get there in the end. Legal code and software code are not fungible in a just-turn-on-a-switch kind of way. This, to my mind, is neutral to all platforms and a common problem.
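A toy sketch of the “code by application” point is below: the legal norm is never written into the program; the model only sees labelled examples and generalizes from them. The texts, labels and library choice (scikit-learn) are assumptions made for illustration.

```python
# Toy illustration: the norm ("counter speech", "hate speech") never appears
# as a rule in the code; only labelled examples do. Data is invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training examples.
texts = [
    "you people should be wiped out",       # hate speech
    "that comment was hateful, stop it",    # counter speech
    "great match yesterday",                # benign
    "they do not deserve to live here",     # hate speech
    "calling for violence is never okay",   # counter speech
    "lovely weather in Bengaluru today",    # benign
]
labels = ["hate", "counter", "benign", "hate", "counter", "benign"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)  # train the classifier on the labelled examples

# Apply the trained classifier to unseen, non-training posts.
new_posts = ["saying such things about them is wrong"]
print(model.predict(new_posts))
# The prediction may well be wrong: that gap between the legal norm and its
# coded application is the trial-and-error process described above.
```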
Do you see the board’s role evolving to take instances like the Facebook Papers, where the board said it was misled by the company, into account?
We’re defined by our charter. Changing the charter is a complicated process, and the reason it’s complicated is that we want to stay independent from Facebook. But could Facebook decide to create other entities to deal with problems like this? It could.
There are two whistleblowers who have attracted the most attention. The Oversight Board met Sophie Zhang and had a session with her, going over many questions, including about what we are doing and what more we should be doing. We are also meeting Frances Haugen shortly, and we will have a similar engagement.
My sense is that the board sits between Facebook the company, its users, the general public and various actors or activists in this space. We are not beholden to take the company’s view on anything. We can talk to anyone and we do talk to people who are seriously critical of Facebook. Wherever we think we have something to learn, we reach out to people and ask them to tell us what we need to know.
One of the big issues in the Facebook Papers was the cross-check programme, and we’ve taken a clear view that Facebook’s responses to us on the programme in the Trump case were ambiguous and misleading at best. Now Facebook, under that pressure, has requested us to do a policy advisory opinion on that programme, which is underway. At the end of it, the cross-check programme will not be what it has been.
So, I think that on all these issues the board is already there. We don’t need to revise the charter to take on the cross-check programme; we’re already doing it under the policy advisory opinion format.
When you make a policy recommendation, what happens if Facebook does not comply?
The decision on whether a particular post stays up or is taken down is binding on Facebook. The policy recommendations are not binding on Facebook, but they are compelling. You might ask on what basis we should expect Facebook to follow them. We have made upwards of 50 policy recommendations at this point, and Facebook’s response has been that for about a third it has said it has done it, for another third that it is working on it, and for the last third that it may not be feasible.
The larger structural point is that as the Oversight Board gains legitimacy, and is seen by users and the general public as exerting the right kind of influence and pressure on Facebook, Facebook will come under increasing pressure.
Do you see the board expanding and adding more members in future?
Yes. The expansion is already underway, as is the process of selecting members. I’m not sure of the exact date, but there will be an expansion, maybe by the second quarter of 2022.