When Elon Musk purchased Twitter, renamed it X, and launched an AI called Grok, many expected the AI to lean towards the MAGA right that Musk has been pandering to since he made X such an unpleasant corner of the internet. Grok is the AI tool on X, and despite Musk's promise that it would be "maximally truth-seeking," the chatbot has drawn criticism for signs of political bias and questionable content moderation. Some early users claimed Grok avoided criticizing Musk and Donald Trump, raising concerns about censorship, though xAI later said this was a temporary issue. Grok has also been caught out for inconsistencies in its responses, especially on topics like immigration and diversity, where its answers sometimes contradicted Musk's public stance. These incidents have fueled debate over whether the AI is truly neutral or subtly shaped by its creator's views, and now Grok itself has seemingly confirmed that it was pushed to appeal to the right by its creators, Elon Musk and xAI.
I get really irritated by all the people who get an AI to claim something about its training and then post things like this about it.
The chatbot doesn't know anything at all about its training; that's not how training works. It's not impossible for it to spit out parts of its prompt, but the training is something else entirely, and any claim to the contrary is just the AI role-playing.
It's not impossible that it might even spit out other prompts that were part of its training data, or joke prompts from chat logs it trained on.
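The distinction the comment is drawing can be sketched in code. This is a hypothetical Python client, not any real xAI or Grok API: the point is just that a system prompt is plain text sent with every request (so the model can quote or leak it), while training only adjusts opaque numeric weights that record nothing about where they came from.

```python
def build_request(system_prompt: str, user_message: str) -> dict:
    # The system prompt travels with each request as ordinary input text,
    # so the model can quote or paraphrase it in a reply.
    return {
        "model": "some-chat-model",  # hypothetical model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

# Training, by contrast, produces a tensor of weights. Nothing in these
# numbers records *which* documents or instructions shaped them, so the
# model has no facts about its own training to retrieve; any answer it
# gives about its training is confabulated.
weights = [0.12, -0.87, 1.05]  # stand-in for billions of parameters

request = build_request("You are a helpful assistant.",
                        "What were you trained on?")
print(request["messages"][0]["content"])  # the prompt is recoverable input
print(weights)                            # the training is not
```

So "Grok admitted it was trained to lean right" conflates two different things: text it can see in its context window, and weights it cannot introspect.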
That said, I'd be very surprised if it actually is handled in a neutral way. On the other hand, previous chatbots have turned racist pretty quickly, so I could see him letting it remain neutral, knowing it will spit out that kind of shit if allowed to, and avoiding an explicit right-wing bias could give it credibility outside the far right.