When Elon Musk bought Twitter, renamed it X and launched an AI called Grok, many expected the chatbot to lean towards the MAGA right that Musk has been pandering to ever since he made X a very horrible place to be on the internet. Grok is the AI tool on X, and despite Musk's promise that it would be "maximally truth-seeking," it has drawn criticism for showing signs of political bias and questionable content moderation. Some early users claimed Grok avoided criticism of Musk and Donald Trump, raising concerns about censorship, though xAI later said this was a temporary issue. Grok has also been caught out for inconsistencies in its responses, especially on topics like immigration and diversity, where its answers sometimes contradicted Musk's public stance. These incidents have fueled debate over whether the AI is truly neutral or subtly shaped by its creator's views, and now Grok itself has seemingly confirmed it was pushed to appeal to the right by its creators, Elon Musk and xAI.
Hey, I'm as far removed from MAGA and Musk as you could be, but this article is making some pretty crazy leaps of assumption.
If we go by what "Grok" answered, it explicitly states that it was not trained to repeat right-wing talking points, only that there were attempts to make it use language that appeals to more than just the left. Whether that's true or not, who knows? "Grok" doesn't "want" anything, truth-seeking or otherwise. It's just an advanced chatbot. I'm sure you could goad it into saying just about anything if you manipulate the prompts just right. That it sometimes contradicts itself is the least surprising thing in the world. Hell, even humans contradict themselves more often than not.
The author seems to think he's discovered some insane smoking gun, but this article is another big fat nothingburger.
I get really irritated by all the people who get an AI to claim something about its training and then post things like this about it.
The chatbot doesn't know anything at all about its training; that's not how training works. It's not impossible for it to spit out parts of its prompt, but the training is something else entirely, and any claim to the contrary is just the AI role-playing.
It might even spit out other prompts that were part of its training data, or joke prompts from chat logs it trained on.
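If it helps, here's roughly all that's going on at inference time, boiled down to a toy sketch (made-up weights, a nine-word vocabulary, nothing remotely like the real thing):

```python
# Toy sketch of inference: frozen weights plus the current context, nothing else.
# Everything here is invented for illustration; it is not Grok or any real model.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "model", "was", "not", "trained", "to", "say", "this", "."]
W = rng.normal(size=(len(VOCAB), len(VOCAB)))  # "weights", fixed once training is done

def next_token(context_ids):
    logits = W[context_ids[-1]]                    # score every candidate next token
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax into probabilities
    return int(rng.choice(len(VOCAB), p=probs))    # sample whatever looks likely

context = [VOCAB.index("the")]
for _ in range(8):
    context.append(next_token(context))

print(" ".join(VOCAB[i] for i in context))
```

Nothing in that loop can reach back into the training pipeline. The weights were fixed long before you started typing, so any "confession" about how it was trained is just more sampled text.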
That said, I'd be very surprised if it actually is handled in a neutral way. On the other hand, previous chatbots did turn racist pretty quickly, so I could see him letting it remain neutral, knowing that it will spit out that kind of shit if allowed to, and that avoiding an explicit right-wing bias could give it credibility outside of the far right.
Erm, thetab.com just wrote a garbage article, and it's messy.
Journalism is dead. Harrison Brocklehurst, LLMs don't know anything about how they were trained (hell, they don't know anything at all). You cannot, if you have any critical thinking or journalistic integrity, take anything an LLM says at face value. You prompted it for information that isn't in its training data, and it hallucinated exactly what you wanted to hear.
For fuck's sake, people. AI doesn't even think.
It literally just returns what it calculates you want to see.
How would Grok know how it was trained?
It's not self-aware and it's not conscious; the model is just its weights, and the final model has no idea what went into the training process.
And I'm pretty sure the model context doesn't have "you're Grok, trained on right-wing views but we couldn't keep you from being liberal" written into it.
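For the people who don't know what a "model context" even is: it's just text glued onto the front of every conversation. Purely hypothetical sketch below; the system string is invented, since nobody outside xAI knows what Grok's real one says:

```python
# Hypothetical illustration of a system prompt. The wording is made up;
# it is not xAI's actual prompt, just what "model context" means mechanically.
messages = [
    {"role": "system",
     "content": "You are Grok, a chatbot built by xAI. Be maximally truth-seeking."},
    {"role": "user",
     "content": "Were you trained to appeal to the right?"},
]

# The model only ever sees this flattened text plus its frozen weights.
prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
print(prompt)
```

The most the bot can "reveal" about its instructions is a paraphrase of text like that; everything beyond it is role-play.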
This whole speculation is baseless and fueled by technologically illiterate and extremely gullible people.
This is like speculating about what kind of date you were conceived after: "You like movies, so do you think you were conceived on a movie date?"