Company that moderated and filtered content for Facebook says it regrets working with the platform

Aug 16, 2023 - 19:30

A company hired to monitor Facebook posts in East Africa has admitted that, looking back, it was a mistake to work with the Meta-owned social media platform.

Former employees of Sama, the company that held the moderation contract, say they were deeply affected by exposure to disturbing posts. Some are now pursuing legal action against the company in the Kenyan courts.

Wendy Gonzalez, Sama's CEO, said the company will no longer take on work that involves filtering harmful content.

When moderators couldn’t take it anymore
Several ex-workers have described their distress at encountering videos depicting violent acts such as beheadings and suicides. Sama ran the moderation hub from 2019.

Daniel Motaung, a former moderator, said the first gruesome video he witnessed was a live beheading. He is currently suing both Sama and Meta, the owner of Facebook. Meta says it requires all its partner companies to provide moderators with continuous support; Sama says it always had certified wellness counsellors available.

Reflecting on the situation, Gonzalez remarked, “You might ask, ‘Do I regret it?’ Well, I’d probably phrase it like this: armed with the knowledge I have now, including the toll it took on our main operations, I wouldn’t have agreed to it.”

The company also steers clear of any artificial intelligence work associated with weapons of mass destruction or police surveillance.

Citing the ongoing legal proceedings, Gonzalez declined to say whether she believed the claims of employees who said they had suffered from viewing distressing content. Asked about her general stance on the potential harms of moderation work, she said it is “a novel field that unquestionably requires thorough research and allocation of resources.”

A distinctive outsourcing company
Sama stands out among outsourcing companies and is sought after by many emerging social media platforms. Its selling point was that it recruited people from economically disadvantaged parts of Nairobi, trained them in technical and computing skills, and then employed them.

People from economically disadvantaged areas of Nairobi were earning $9 per day doing “data annotation”: labelling objects in driving videos, such as pedestrians and streetlights, to train artificial intelligence (AI) systems. Workers who were interviewed said this income had been instrumental in helping them break free from poverty. The average daily wage in Kenya is about $5, with most people earning closer to the minimum wage of $3-4.

Gonzalez strongly believes it is vital for Africans to participate in the digital economy and contribute to the advancement of AI systems.

Gonzalez said the decision to take on the moderation work was driven by two key factors: first, the recognition that moderation is a crucial task to safeguard social media users from harm; second, the importance of having African content moderated by African teams.

Moderators' compensation at Sama started at approximately 90,000 Kenyan shillings ($630) per month, a respectable wage in Kenya comparable to that of nurses, firefighters, and bank officers, Gonzalez said. Asked whether she would do such work for that pay, she clarified that moderation was not her role within the company.

Sama’s contribution to ChatGPT
Sama also partnered with OpenAI, the company responsible for ChatGPT.

Richard Mathenge, an employee tasked with reviewing large volumes of the text the chatbot was learning from and flagging any potentially harmful content, said he had been exposed to disturbing material.

Sama said it discontinued the work after its Kenyan staff raised concerns about requests involving image-based material that was not part of the original contract; Gonzalez said the work was stopped promptly.

OpenAI responded that it has its own “ethical and wellness standards” for its data annotators, recognizing that the work is challenging for its researchers and annotation workers worldwide.

Gonzalez views this type of AI work as another form of moderation, a type of work the company will not be engaging in again.

“Our focus lies in non-harmful computer vision applications, such as driver safety, drones, fruit detection, crop disease detection, and similar areas,” she explained.

She concluded with a strong assertion, “Africa must have a voice in AI development. We can’t perpetuate biases. We need input from individuals across the globe to contribute to building this universal technology.”
