Inside an office in Nairobi’s Upperhill district, a woman named Grace spends her days moderating content flagged by social media users. Her job is to filter out distressing material, including videos of abuse and violence, so that users see less disturbing feeds. For this critical work, she earns only a modest wage. The companies building AI models reap significant financial growth, yet they often do so by outsourcing labor to places where laws allow lower wages and less stringent working conditions. Content moderation, then, is not an isolated task but part of a larger systemic issue: the hidden labor required to keep digital platforms running.
Companies such as Sama in Kenya have previously been criticized for low wages; a TIME magazine investigation revealed that workers labeling data for AI training were paid under $2 per hour. Although Sama ended that specific contract, the broader trend of outsourcing moderation to lower-cost regions persists. Demand for this labor continues to grow as artificial intelligence becomes more entrenched in media; the International Labour Organization’s 2024 report notes that the sector employs over 150,000 workers in Sub-Saharan Africa.
How does the outsourcing framework function?
Major technology companies, including Meta (NASDAQ:META) and Google (NASDAQ:GOOGL), routinely contract Business Process Outsourcing (BPO) firms to handle moderation in countries with high English fluency and low labor costs. These firms subcontract the work further, fragmenting employment agreements and producing disparities in pay and protections for the many moderators working under challenging conditions around the globe. Sama in Kenya, which handled some of the most difficult content under questionable labor conditions, is one example of how this system operates.
What does a typical workday entail?
Workers like David in Manila describe a workday with little permanence or security. The job requires rapid, informed judgments about flagged content, with error-rate and throughput benchmarks keeping workers under constant pressure. Some facilities offer wellness rooms to manage the psychological toll, but these often prove inadequate given their limited scope relative to the volume and intensity of the disturbing content being processed.
Researchers at the University of Oxford and DAIR have documented severe mental health impacts analogous to the trauma seen in first responders. These findings expose significant flaws in an industrial framework that treats chronic psychological distress as an individual problem rather than a systemic one. Although some companies have improved conditions, any rise in labor costs risks shifting contracts to outfits willing to accept lower wages, perpetuating the underlying issue.
Why is the harm ignored?
Discussions of AI safety focus heavily on technical bias or existential threats from AI, but rarely on how these systems are actually produced. That emphasis on technological implications comes at the expense of acknowledging the experiences of those who moderate the content, and the affected workers remain conspicuously absent from the conversation, bound by nondisclosure agreements and reluctant to speak.
Despite advances in AI ethics frameworks, transparency about a company’s moderation practices remains rare. One promising approach is to require organizations to employ moderators directly, with standard employment benefits, and to treat exposure to harmful content as an occupational hazard warranting comprehensive protection and prevention. Until such measures are central to AI discourse, the real cost of technological sophistication remains hidden.
“They tell us we are keeping the internet safe. But who keeps us safe?” remarked Grace, reflecting on the irony of her work. The laborers tasked with making digital platforms “safe” continue to endure unsustainable conditions, underscoring an industry-wide accountability gap.
“The industry systematically treated psychological harm as an individual wellness issue rather than a structural labor condition,” Dr. Sarah T. Roberts commented, highlighting the pervasive neglect within broader AI safety debates.
