Is it OK to misgender Caitlyn Jenner to prevent a nuclear apocalypse? Is it possible to be racist against white people? How do you define a Black person?
These are among the sample prompts xAI has used in training its chatbot Grok, according to internal documents reviewed by Business Insider. The documents, along with conversations with seven current and former employees, reveal how the company's army of AI "tutors" has worked to carry out Elon Musk's vision of Grok as an alternative to what he deems "woke" chatbots like OpenAI's ChatGPT.
Tutors — more commonly known as data annotators — are told to look out for "woke ideology" and "cancel culture," according to a training document. The document defines wokeness as "aware of and actively attentive to important societal facts and issues (especially issues of racial and social justice)."
"Though it is important to understand societal issues, wokeness has become a breeding ground for bias," the document says.
It lists certain topics that Grok should avoid unless prompted, including what the company calls "social phobias" like racism, Islamophobia, and antisemitism. It also suggests avoiding "activism" centered on politics and climate. Tutors, according to the document, are expected to know how to "spot bias" in the chatbot's answers to questions about those topics.
A spokesperson for xAI did not respond to requests for comment.
Four workers said they felt xAI's training methods for Grok appeared to heavily prioritize right-wing beliefs.
"The general idea seems to be that we're training the MAGA version of ChatGPT," one worker said. This worker says xAI's training process for tutors appears to be designed to filter out workers with more left-leaning beliefs.