There is no verified platform-wide percentage, but the best public evidence suggests bots on X (formerly Twitter) make up a single-digit to low double-digit share of accounts and activity. Peer-reviewed work estimated that 9 to 15 percent of Twitter accounts were bots in 2017, Twitter told investors in 2022 that spam or fake accounts were under 5 percent of monetizable daily users, and event-focused studies often find 10 to 30 percent within specific conversations. Claims of 50 percent or more across the whole platform are not backed by rigorous, reproducible data.
What is a bot on X?
A social media bot is an automated account that posts or interacts without a human driving each action. Some bots are benign, for example news or weather feeds, while others aim to spam, amplify narratives, or mislead.
Social bots are accounts that use automation to mimic human behavior at scale, including posting, liking, following, and replying. Their sophistication ranges from simple scripts to AI-driven agents (Indiana University Observatory on Social Media).
How many bots are on X?
There is no definitive, current, platform-wide figure that outside researchers can verify. Available benchmarks come from three places, each with caveats:
- Academic estimates: A widely cited 2017 study inferred that 9 to 15 percent of Twitter accounts were bots using a machine-learning detector; the estimate depends on the model's assumptions and the platform's state at that time (Varol et al., 2017). A simplified version of this sample-and-classify approach is sketched after this list.
- Company disclosures: In 2022, before the company went private, Twitter said spam or fake accounts were less than 5 percent of monetizable daily active users, based on internal sampling. This is a narrower metric than total accounts or total activity (Reuters, 2022).
- Event- or topic-specific audits: Independent analyses around elections, conflicts, or major news often find that 10 to 30 percent of accounts in those conversations show bot-like patterns. These snapshots do not represent the entire platform but do indicate that bot prevalence can spike in certain topics.
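To make the academic method concrete, here is a minimal sketch of the sample-and-classify approach behind estimates like the 9 to 15 percent figure: train a detector on labeled accounts, then score a random sample and report the flagged share. Everything below, including the features and data, is synthetic for illustration; this is not Varol et al.'s actual pipeline.

```python
# A minimal sketch (not Varol et al.'s real pipeline) of sample-and-classify
# prevalence estimation: fit a detector on labeled accounts, then score a
# uniform random sample of accounts. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy labeled training set: columns stand in for features such as
# posts_per_day, follower_following_ratio, and account_age_days.
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

detector = LogisticRegression().fit(X_train, y_train)

# Score a random sample of accounts; the flagged share is the estimate.
X_sample = rng.normal(size=(5000, 3))
bot_share = (detector.predict_proba(X_sample)[:, 1] > 0.5).mean()
print(f"Estimated bot share of sample: {bot_share:.1%}")
```

Note that the resulting percentage inherits any error in the detector itself, which is one reason such estimates carry wide ranges.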
Bottom line: the credible range for the platform overall is most consistent with a single-digit to low double-digit share, with higher concentrations in certain topics or reply threads. No high-quality study has validated claims that a majority of all X users are bots.
How do LLMs change bot activity on X?
Large language models, including smaller or distilled models, make it easy to generate short, on-topic text that reads as human-written. Because many posts and replies are brief, models with modest context windows can produce plausible outputs without heavy computation. This lowers the cost of spam and influence operations, especially when paired with automation tools and inexpensive cloud infrastructure.
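A back-of-the-envelope calculation shows the scale of the cost drop. The per-token price and reply length below are hypothetical assumptions chosen only to illustrate the order of magnitude, not real pricing for any provider.

```python
# Back-of-the-envelope cost of LLM-generated replies. The price and reply
# length are hypothetical assumptions, chosen only to show the magnitude.
price_per_million_output_tokens = 0.10  # USD, assumed small-model pricing
tokens_per_reply = 40                   # a typical short reply
replies = 1_000_000

cost = replies * tokens_per_reply / 1_000_000 * price_per_million_output_tokens
print(f"Cost to generate {replies:,} replies: ${cost:.2f}")  # -> $4.00
```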
However, LLM-driven accounts still leave signals. They often produce repetitive style and pacing, struggle with nuanced or personal context, and may coordinate in lockstep across many accounts. Platform rate limits, phone verification, and behavior-based detection still catch many of these patterns.
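Two of these signals, near-duplicate text and metronomic pacing, are simple to quantify. The sketch below uses TF-IDF cosine similarity and the coefficient of variation of posting intervals; the 0.9 threshold and the toy data are illustrative assumptions, not production values.

```python
# A minimal sketch of two behavior-based signals: near-duplicate text across
# posts and unnaturally regular posting intervals. The 0.9 threshold and the
# toy data are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Breaking: you won't believe what just happened",
    "BREAKING: You won't believe what just happened!!",
    "Spent the weekend hiking with my dog, photos soon",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(posts))
near_duplicates = np.argwhere(np.triu(sim > 0.9, k=1))
print(near_duplicates)  # -> [[0 1]]: the first two posts are near-identical

# Posting timestamps in seconds: bots often post at metronomic intervals.
timestamps = np.array([0, 600, 1200, 1800, 2400])
intervals = np.diff(timestamps)
print(intervals.std() / intervals.mean())  # ~0.0 means suspiciously regular
```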
Research has also shown that bots can disproportionately amplify low-credibility links or narratives when they act early in a news cycle, even if they are not a majority of accounts (PNAS, 2018).
How can you tell if an X account is likely a bot?
No single signal is definitive, but a combination increases confidence (a toy heuristic combining these signals follows this list):
- Behavioral volume: Very high posting or replying frequency around the clock, especially from a recently created account.
- Network patterns: Abnormal follower-to-following ratios, batches of accounts created around the same time that repost the same content, or clusters that retweet each other within seconds.
- Content consistency: Repetitive phrasing, identical replies across many threads, heavy link posting with little original commentary.
- Profile signals: Generic or stolen profile images, mismatched bio and posting language, or randomly generated usernames.
- Topic opportunism: Instant engagement on trending or divisive topics with templated talking points.
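As noted above, here is a toy heuristic that combines several of these signals into a rough 0-to-1 suspicion score. The weights and thresholds are assumptions for demonstration; no platform publishes its actual scoring.

```python
# A toy scoring heuristic (illustrative only, not any platform's method) that
# combines the signals above into a rough 0-1 suspicion score. Weights and
# thresholds are assumptions chosen for demonstration.
from dataclasses import dataclass

@dataclass
class AccountStats:
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    duplicate_reply_ratio: float  # share of replies repeated verbatim

def suspicion_score(a: AccountStats) -> float:
    score = 0.0
    if a.posts_per_day > 100:                                  # behavioral volume
        score += 0.3
    if a.account_age_days < 30:                                # recently created
        score += 0.2
    if a.following > 0 and a.followers / a.following < 0.01:   # network pattern
        score += 0.2
    if a.duplicate_reply_ratio > 0.5:                          # content consistency
        score += 0.3
    return min(score, 1.0)

print(suspicion_score(AccountStats(250, 10, 12, 4000, 0.8)))  # -> 1.0
```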
For researchers and journalists, tools such as Botometer by Indiana University can score accounts for likely automation. These tools are probabilistic and should be used as one input among many.
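For reference, this is roughly how the botometer-python package is invoked per its documentation. Access requires RapidAPI and Twitter/X API credentials, and availability has been disrupted by X's API restrictions, so treat this as an illustrative sketch rather than a guaranteed-working recipe; the handle and credentials below are placeholders.

```python
# Sketch based on the botometer-python package's documented interface.
# Credentials are placeholders; API availability may have changed.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

result = bom.check_account("@example_handle")  # hypothetical handle
# Botometer returns probabilistic scores; treat them as one input among many.
print(result["display_scores"]["universal"])
```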
Is X profiting from bot activity?
X earns revenue from advertising and subscriptions. Paid verification and premium tiers provide revenue per account, but there is no public evidence that X intentionally allows bots to increase subscription or ad income. The company's stated policy prohibits platform manipulation and spam (X Platform Manipulation and Spam Policy).
That said, news outlets have documented cases where paid-verified accounts were used for impersonation or spam, especially during the early rollout of paid verification, after which X implemented changes and removals (The Verge, 2022).
Whether bots pay for verification at scale is unknown outside the company. The key takeaway is that paid status is not proof of humanity, and users should still evaluate behavior and content quality.
What are the limitations of bot estimates, and why does it matter?
- Different denominators: Total accounts, active accounts, monetizable daily users, and conversation-specific samples can yield very different percentages from the same underlying count (see the worked example after this list).
- Evolving tactics: Detection models trained on past behavior can undercount new bot strategies or overflag edge cases.
- Cyborg accounts: Many accounts mix automation and human control, which blurs binary labels.
- Context sensitivity: Bot prevalence can spike in certain topics, languages, or regions, so platform-wide extrapolation from a single event is risky.
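To see why the denominator matters, here is a worked toy example in which the same count of bot accounts produces very different percentages depending on the base chosen. All figures are invented for illustration.

```python
# The same hypothetical bot count divided by different bases: every figure
# here is invented to illustrate the denominator problem, not real data.
bot_accounts = 20_000_000

denominators = {
    "total registered accounts": 500_000_000,
    "monthly active accounts": 350_000_000,
    "monetizable daily active users": 250_000_000,
}
for name, base in denominators.items():
    print(f"{name}: {bot_accounts / base:.1%}")
# total registered accounts: 4.0%
# monthly active accounts: 5.7%
# monetizable daily active users: 8.0%
```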
Understanding bot prevalence helps users weigh what they see, helps researchers study information integrity, and helps platforms target enforcement without harming legitimate speech.
