
Report identifies problems in big tech content moderation, calls for an end to outsourcing

The NYU Stern Center for Business and Human Rights released a new report on the approach to content moderation taken by major social media companies, including Facebook, YouTube and Twitter. The report, “Who Moderates the Social Media Giants? A Call to End Outsourcing,” presents new findings about how big tech firms regulate the content that appears on their platforms; identifies major problems with the platforms’ current approach; and makes concrete recommendations for reforms.
Facebook, YouTube and Twitter outsource most of their human content moderation, the job of deciding what stays online and what gets taken down, to third-party contractors. This arrangement creates a marginalized class of reviewers who are treated as “second-class citizens.” The NYU report finds that the peripheral status of moderators has contributed to inadequate attention being paid to incendiary content spread in developing countries, sometimes leading to violence offline. Outsourcing has also contributed to subpar mental health support for content moderators who spend their days staring at disturbing material.
Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told Capital that the report finds social media companies lack local expertise in countries like Ethiopia. “They should hire more Ethiopians with broad language skills and a solid grasp of the country’s culture,” Barrett said.
“Online speech foments ethnic and religious violence and hostilities; one thing social media companies can do is to remove content that is false and may lead to physical harm,” he added.
Focusing primarily on Facebook, the report found that three million Facebook posts are flagged for review every day by 15,000 moderators. The company has admitted to an error rate of at least 10 percent in making content decisions, which suggests 300,000 mistakes a day. Facebook’s outsourced content moderation takes place at more than 20 sites worldwide.
“The report finds that Facebook and others rely on unpaid local activists and NGOs to monitor online speech, but often ignore their warnings. There have been communication breakdowns and inattentiveness. Facebook says this is changing and that it is listening more closely to its partners on the ground. Perhaps the company has learned from past experience. But all of these problems have not been fully solved,” the deputy director told Capital via email.
“Social media companies cannot cure all societal ills, including the incitement of violence. These companies need to do more to ensure against the misuse of their platforms. But there is a big role for civil society organizations, journalism outfits, and, in some cases, the government, in fostering civility and diminishing the influence of bad actors. Political leaders must come together and get behind education efforts aimed at enlightening social media users and all citizens,” Barrett said.
The report also calls on big tech to invest more in countries like Ethiopia to prevent violence from spreading virally.
“More attentive content moderation and fact-checking, more social media literacy training, and more support for public-spirited journalism and civil society activity should be done,” Barrett said.
In recent months, the coronavirus pandemic has exacerbated problems with content moderation. As content moderators have sheltered in place at home, without access to secure computers, social media companies, including Facebook, announced that they would rely more on AI to remove harmful content. As a result, the platforms have taken down legitimate content from healthcare professionals and others whose posts were mistakenly flagged for removal.
“The widespread practice of relying on third-party vendors for content review amounts to an outsourcing of responsibility for the safety of major social media platforms and their billions of users,” Barrett said, adding, “The results of this practice range from moderators not receiving the caliber of mental health care they deserve, to real-world violence in some countries where insufficient moderation leads to hateful content remaining online.”
The report draws on dozens of interviews with executives at Facebook and other tech firms; former content moderators working in the U.S. and Europe; scholars and researchers studying the issue; and on-the-ground activists and NGOs.
