Hey, I'm about as far removed from MAGA and Musk as you could be, but this article is making some pretty crazy leaps of assumption.
If we're to take "Grok's" answer at face value, it explicitly states that it's not trained to repeat right-wing talking points, but that there were attempts to make it use language that appeals to more than just the left. Now, whether that's true or not, who knows? "Grok" doesn't "want" anything, truth-seeking or otherwise. It's just an advanced chatbot. I'm sure you could goad it into saying just about anything if you manipulate the prompts just right. That it sometimes contradicts itself is the least surprising thing in the world. Hell, even humans contradict themselves more often than not.
The author seems to think he's discovered some insane smoking gun, but this article is just another big fat nothingburger.