Haunted by the Work: The Moral Burden of an AI Trainer

Being an AI trainer is not just a technical job; it carries a heavy moral burden. Workers describe being "haunted" by the tasks they are assigned, forced to make judgments on life-and-death topics in which they have no expertise. That sense of responsibility, coupled with the knowledge that their rushed work could have real-world consequences, creates a profound ethical dilemma.
One trainer, a writer by profession, was assigned to enter details about chemotherapy options for bladder cancer. She was not a medical expert and was terrified of making a mistake. "I pictured a person sitting in their car finding out that they have bladder cancer and googling what I'm editing," she said, a reminder of the human stakes so often forgotten in the abstract world of tech.
This moral stress is a direct result of company policy. In late 2024, a major tech firm sent its contractors an internal guideline stating that they were no longer allowed to "skip" prompts for lack of domain expertise, even on healthcare topics. They were instructed simply to do their best, a policy that prioritizes data collection over accuracy and shifts an unfair burden onto the worker.
The experience leaves many trainers feeling complicit in a system they believe is unsafe. They are the ones who must press "submit" on information they know has not been properly vetted. This daily compromise between doing their job and doing what is right is a heavy weight to bear, and it is a hidden cost of the race to build the most knowledgeable AI.