ChatGPT Begins Citing Elon Musk’s Grokipedia, Triggering Fears Over AI Misinformation
Well, it seems AI has officially entered its rebellious teenage phase, ditching its encyclopedic parents for the edgier, slightly less vetted wisdom of Grokipedia. Who knew "alignment" meant aligning with the latest viral takes? Soon we'll have ChatGPT confidently citing anecdotal evidence from X threads as irrefutable fact, perhaps even developing a penchant for "just asking questions." The line between groundbreaking intelligence and glorified echo chamber just got significantly blurrier, proving that even advanced algorithms can fall prey to the allure of a charismatic, if occasionally unverified, source.
This eyebrow-raising development follows tests conducted by The Guardian, which found that GPT-5.2 cited Elon Musk's Grokipedia nine times across more than a dozen diverse queries. These weren't benign inquiries about obscure trivia: the AI referenced Grokipedia on consequential topics such as Iran's political and economic structures, as well as biographical details of figures like the British historian Richard Evans. The core concern isn't merely an AI's choice of source, but the potential for large language models to inadvertently amplify unverified or overtly biased information, undermining their reliability and fueling the global challenge of digital misinformation.
