
X (formerly Twitter) has announced that artificial intelligence (AI) bots will now help generate Community Notes, the platform’s crowd-sourced fact-checking feature. The move promises greater speed and scale, but X insists humans will still have the final say.
What’s Changing?
In a pilot program launched this month, a select group of developers will be allowed to build AI Note Writers: automated agents that draft Community Notes on posts flagged by users. These bots will generate fact-check-style notes intended to add helpful context or corrections to posts.
However, the AI-generated content will initially remain in test mode. It will be clearly labeled as AI-written and only appear on posts where a Community Note has been requested.
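To make the mechanics concrete, here is a minimal sketch of what an AI Note Writer might look like. Everything in it is an assumption for illustration: the `Post` and `DraftNote` shapes, the `AINoteWriter` class, and the idea of plugging in any text-generation callable. X has not published this interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Post:
    post_id: str
    text: str
    note_requested: bool  # True when users have asked for a Community Note

@dataclass
class DraftNote:
    post_id: str
    body: str
    is_ai_generated: bool = True  # pilot notes are clearly labeled as AI-written
    status: str = "test"          # and initially remain in test mode

class AINoteWriter:
    """Hypothetical agent that drafts Community Notes on flagged posts."""

    def __init__(self, model: Callable[[str], str]):
        self.model = model  # any prompt -> text generator

    def draft(self, post: Post) -> Optional[DraftNote]:
        # Per the pilot's rules, only draft where a note has been requested.
        if not post.note_requested:
            return None
        prompt = (
            "Write a brief, sourced note adding helpful context or a "
            f"correction to this post:\n\n{post.text}"
        )
        return DraftNote(post_id=post.post_id, body=self.model(prompt))
```

A draft produced this way would then enter the rating pipeline described below; it is never published directly.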
Why AI? Why Now?
X says the goal is to scale Community Notes faster while maintaining quality and accuracy. As the platform grows and misinformation challenges persist, AI could help cover more ground, more quickly.
“Not only does this have the potential to accelerate the speed and scale of Community Notes, rating feedback from the community can help develop AI agents that deliver increasingly accurate, less biased, and broadly helpful information — a powerful feedback loop,” said X in a statement.
By leveraging user feedback, the system can continuously improve the AI models, making them better at identifying and presenting reliable information.
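As a rough illustration of that feedback loop, the sketch below turns community ratings on AI-drafted notes into labeled training examples. The `Note` and `Rating` shapes and the majority-vote labeling are assumptions; X has not described its training data format.

```python
from dataclasses import dataclass

@dataclass
class Note:
    note_id: str
    body: str

@dataclass
class Rating:
    note_id: str
    helpful: bool  # a single rater's verdict

def collect_training_signal(notes: list[Note], ratings: list[Rating]) -> list[dict]:
    """Turn community verdicts on AI notes into supervision for the model.

    Notes the raters found helpful become positive examples; the rest
    become negatives, nudging future drafts toward what humans reward.
    """
    examples = []
    for note in notes:
        verdicts = [r.helpful for r in ratings if r.note_id == note.note_id]
        if not verdicts:
            continue  # unrated notes carry no signal yet
        majority_helpful = sum(verdicts) > len(verdicts) / 2
        examples.append({"text": note.body, "helpful": majority_helpful})
    return examples
```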
The Role of Humans: Still in Charge
Despite the automation, humans remain the gatekeepers. Every AI-generated note will first be screened by an open-source automated system that checks for abuse, relevance, and alignment with human contributors’ historical input.
Crucially, a note will be displayed publicly only if the community of human raters approves it. AI bots must earn trust by consistently delivering high-quality, unbiased content across perspectives.
“They can help deliver a lot more notes faster with less work, but ultimately the decision on what’s helpful enough to show still comes down to humans,” said Keith Coleman of X in an interview with Bloomberg.
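The two-stage gate might look something like the sketch below. The function name, the checks, and the approval threshold are illustrative assumptions; in particular, the real Community Notes scorer is a bridging-based rating model, not a simple share of helpful votes.

```python
from typing import Callable

def review_ai_note(
    note_body: str,
    automated_checks: list[Callable[[str], bool]],
    human_verdicts: list[bool],
    approval_share: float = 0.66,  # stand-in threshold, not X's actual bar
) -> str:
    """Sketch of the gate: automated screening first, human approval last."""
    # Stage 1: the open-source screening system rejects abusive,
    # irrelevant, or out-of-line drafts before any human sees them.
    if not all(check(note_body) for check in automated_checks):
        return "rejected"
    # Stage 2: publication is decided by the human rater community.
    if not human_verdicts:
        return "pending"  # no ratings yet, so nothing is shown
    share_helpful = sum(human_verdicts) / len(human_verdicts)
    return "published" if share_helpful >= approval_share else "not shown"
```

The important property is the ordering: automation can veto a draft, but only human ratings can promote one to public view.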
Transparency and Trust
To address concerns about transparency and potential misuse, X assures that all AI-generated notes will be:
- Clearly labeled
- Restricted to test mode initially
- Published only on user-requested posts
- Reviewed and rated by human contributors
This hybrid model aims to blend the speed of AI with the judgment of humans, creating a balanced approach to content moderation.