X, the Elon Musk-owned social media platform formerly known as Twitter, has a significant fake account problem. Musk himself has acknowledged the proliferation of bots on the social network, citing it as the main reason he originally tried to back out of acquiring the company.
And new research from the Observatory on Social Media at Indiana University Bloomington paints a detailed picture of one such bot network deployed on X. Professor Filippo Menczer, along with researcher Kai-Cheng Yang, recently published a study on a botnet dubbed Fox8, according to Wired, which first reported on the research.
This past May, the researchers discovered a network of at least 1,140 fake Twitter accounts that constantly posted tweets linking to a string of spammy, no-name online “news” websites that simply republished content scraped from legitimate outlets.
The vast majority of posts published by this network of bot accounts were related to cryptocurrency and often included hashtags such as #bitcoin, #crypto, and #web3. The accounts would also frequently retweet or reply to popular crypto users on Twitter, such as @WatcherGuru, @crypto, and @ForbesCrypto.
How did a bot network of more than one thousand accounts post so much? It used AI, specifically ChatGPT, to automate what was posted. The purpose of these AI-generated posts appeared to be to flood Twitter with as many crypto-hyping links as possible, putting them in front of as many legitimate users as possible in hopes that they’d click on the URLs.
According to Wired, the accounts were eventually suspended by X after the research was published in July. Menczer says his research group used to inform Twitter of such botnets but stopped doing so after Musk’s acquisition, as the company was “not really responsive” anymore.
While an AI tool like ChatGPT helped the botnet’s operator pump out content for more than a thousand accounts, it also ended up being the network’s eventual downfall.
According to the published study, the researchers noticed a pattern with these accounts: they would post tweets beginning with the phrase “as an AI language model.” ChatGPT users will be familiar with this disclaimer, which the chatbot often attaches to output it deems potentially problematic because it is, well, simply an AI language model.
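That phrase is distinctive enough that a simple keyword filter can surface it. As a purely illustrative sketch, and not the method the researchers describe in the study, here is how such a check might look in Python; the phrase list and example tweets below are made up for demonstration:

```python
# Hypothetical sketch of a phrase-based filter for self-revealing ChatGPT
# output; not the Fox8 study's actual detection code.

SELF_REVEALING_PHRASES = [
    "as an ai language model",
    "as a language model",  # assumed variant, included for illustration
]

def looks_like_chatgpt_slip(tweet_text: str) -> bool:
    """Return True if the tweet contains a phrase suggesting the text
    was copied straight from an LLM's disclaimer or refusal."""
    lowered = tweet_text.lower()
    return any(phrase in lowered for phrase in SELF_REVEALING_PHRASES)

# Example usage with made-up tweets:
tweets = [
    "As an AI language model, I cannot predict the price of #bitcoin.",
    "Big news for #crypto today! Check out this link.",
]
flagged = [t for t in tweets if looks_like_chatgpt_slip(t)]
print(flagged)  # only the first tweet is flagged
```

A filter this naive would only catch bots that paste LLM output verbatim, which is precisely the sloppiness the researchers observed.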
The researchers pointed out that were it not for this “sloppy” mistake, the botnet could potentially have continued operating undiscovered.