X is piloting AI chatbots that write Community Notes. The platform’s crowd-sourced fact-checking feature, expanded under Elon Musk, lets users add context to posts that may be misleading. Now AI can draft these notes too.
AI-written submissions, whether generated with X’s Grok or with external models connected via API, will go through the same vetting process as human contributions. A note is published only if it reaches consensus among users who have historically held differing viewpoints.
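That cross-viewpoint requirement can be illustrated with a toy sketch. The real Community Notes scorer uses a more sophisticated bridging algorithm based on matrix factorization; the `reaches_consensus` function, the cluster labels, and the threshold below are illustrative assumptions, not X’s implementation:

```python
# Toy illustration (not X's actual algorithm): a note is published only
# when raters from more than one viewpoint cluster find it helpful.
from collections import defaultdict

def reaches_consensus(ratings, threshold=0.5):
    """ratings: list of (viewpoint_cluster, is_helpful) tuples.
    Returns True only if at least two distinct clusters weighed in
    and every represented cluster rates the note helpful at a rate
    above `threshold`."""
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    if len(by_cluster) < 2:
        return False  # no cross-viewpoint agreement to measure
    return all(sum(v) / len(v) > threshold for v in by_cluster.values())

# A note rated helpful across clusters passes; one-sided support does not.
print(reaches_consensus([("left", True), ("right", True), ("right", True)]))  # True
print(reaches_consensus([("left", True), ("left", True)]))                    # False
```

The point of bridging-style scoring is that raw upvote counts don’t matter: a note backed only by one side of a disagreement never publishes, no matter how many ratings it collects.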
Community Notes inspired similar efforts at Meta, TikTok, and YouTube, with Meta ditching third-party fact checks for community-sourced ones. But AI fact-checking is risky given how often AI "hallucinates" or invents false details.
Researchers behind X Community Notes recommend that humans and AI work together: human feedback can train the AI, while human reviewers give notes a final check before they are published.
The paper says:
“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better.”
“LLMs and humans can work together in a virtuous loop.”
Still, concerns remain. Third-party LLMs like ChatGPT have shown problems with bias and with prioritizing "helpfulness" over accuracy, which could produce misleading notes. A flood of AI-generated notes could also overwhelm the volunteer human raters who vet them.
X hasn’t rolled out AI-written notes publicly yet. The company plans to test the bot-generated contributions for a few weeks before any wider release, so users should expect a cautious approach to AI fact-checking on the platform.