AI doesn't have to be this bad part 2: Invisible labor and worker exploitation
Real impacts AI is having on society + optimistic thoughts for improvement
Welcome back to AI doesn’t have to be this bad, a discussion of tangible ways AI is negatively impacting our world today, and optimistic thoughts on how to solve them.
Don’t forget to check out part 1 on attribution, compensation, & fair use.
Invisible labor and worker exploitation
If I asked you to close your eyes and describe the people who make AI possible, you’d probably imagine a software engineer in a puffer jacket, a billionaire CEO trying desperately to be masc, and a slightly neurodivergent academic. Or maybe you’d think about me and my cohort of millennial tech employees.
And yes, all of these people make up the AI workforce.
But in addition to white-collar workers, the companies building AI systems rely on people to do data labeling, quality assurance, and monitoring. We rarely hear the stories of these data workers, who are often people in the global south who are underpaid and overworked.
Investigations from Time and CBS reported that companies like Meta and OpenAI relied on data labelers in Kenya who earned less than $2 an hour. It’s not uncommon for these workers to spend all day labeling text and images depicting assault, rape, violence, war, gore, and other horrifying content. In 2023, Wired reported on cases where this data was being labeled by minors as young as 13.
One worker in Kenya described the experience: “Before you go to bed, you'll see a graphic picture of what you are reading during work…And if you get a chance to sleep, you actually had very crazy nightmares. And when you have these nightmares, you can wake up at night and shout during the fact that you are reading text about rape, violently raping people and some who are being killed.”
Someone has to do the janitorial work so that when we chat with an LLM it responds politely.
Over 200 of these workers have filed a lawsuit over the “unreasonable working conditions” that in many cases resulted in psychiatric problems. And in May 2024, workers in Kenya published an open letter to President Biden outlining the exploitative ways US tech companies outsource this critical labor. The letter highlights ScaleAI, a US data labeling company whose services are used by almost every major generative AI company. ScaleAI is valued at nearly $14B and was built on the idea that human-annotated data is critical to AI, yet it faces multiple lawsuits over poor working conditions, union busting, and wage theft, and what The Washington Post describes as its “vast underbelly” of “digital sweatshops.”
Data workers are a critical part of the development of AI. Without this invisible workforce, we would not have the high quality generative AI models we have today. But their work is often undervalued and precarious.
What does a better version of this look like?
A vision I hear shared a lot in the western tech world is the romantic idea of generative AI allowing every child to have access to a personalized tutor. The pursuit of AI is often done in the name of democratization and the potential to uplift people all over the world.
While it’s a nice thought and certainly makes you feel good about working in tech, those kids in Kenya would probably benefit more from their parents having stable work and being paid a living wage.
AI companies can democratize the benefits of AI now by paying people fairly for this critical labor. We don’t have to wait until we’ve created and deployed super-intelligent AI tutors to start moving the needle on inequality and opportunity. Data labeling is critical labor that we all rely on, and AI companies should provide more transparency into where they source their data labeling from and how those workers are treated.
One organization that’s advocating for this workforce is the Data Workers’ Inquiry, a community-based research project sharing the stories of these invisible workers and their working conditions. You can read the personal stories from data workers in several different countries, and donate to directly support these workers.
Let’s remember that for all of the talk of super-human intelligence, AI is created from human labor. So let’s compensate *all* of the humans who make this technology possible, not just the ones in the company HQ.