It took mere hours for the internet to spin out into conspiracy theories about the murder of Charlie Kirk, who died yesterday after being shot at a public event in Utah, according to reports.
The far-right commentator, who often engaged in vitriolic debates about immigration, gun control, and abortion on college campuses, was killed while on a university tour with his conservative media group, Turning Point USA. The organization has spent the last decade building conservative youth coalitions at top universities and has become closely affiliated with the nationalist MAGA movement and President Trump. As early reports of the incident rolled in from both reputable news agencies and pop culture update accounts, it was unclear whether Kirk was alive or his shooter had been apprehended.
But internet sleuths on both sides of the political aisle were already gearing up for battle on social media, trying to identify individuals in the crowd and playing keyboard forensic scientist as they zoomed in closer and closer on the graphic video of Kirk being shot. Some alleged that Kirk’s bodyguards were trading hand signals right before the shot rang out. Others claimed the killing was actually a cover-up to distract from Trump’s unearthed communications with deceased sex trafficker Jeffrey Epstein.
Exacerbating the matter were AI-powered chatbots, which have taken over social media platforms both as integrated robotic helpers and as AI spam accounts that automatically reply to exasperated users.
In one example, according to media and misinformation watchdog NewsGuard, an X account named @AskPerplexity, seemingly affiliated with the AI company, told a user that its initial claim that Charlie Kirk had died was actually misinformation and that Kirk was alive. The reversal came after the user prompted the bot to explain how common sense gun reform could have saved Kirk’s life. The response has been removed since NewsGuard’s report was published.
“The Perplexity Bot account should not be confused with the Perplexity account,” a Perplexity spokesperson clarified in a statement to Mashable. “Accurate AI is the core technology we are building and central to the experience in all of our products. Because we take the topic so seriously, Perplexity never claims to be 100% accurate. But we do claim to be the only AI company working on it relentlessly as our core focus.”
Elon Musk’s AI bot, Grok, erroneously confirmed to a user that the video was an edited “meme” video, after claiming that Kirk had “faced tougher crowds” in the past and would “survive this one easily.” The chatbot then doubled down, writing: “Charlie Kirk is debating, and effects make it look like he’s ‘shot’ mid-sentence for comedic effect. No actual harm; he’s fine and active as ever.” Security experts said at the time that the videos were authentic.
In other cases NewsGuard documented, users shared chatbot responses to confirm their own conspiracy theories, including claims that the assassination was planned by foreign actors and that his death was a hit ordered by Democrats. One user shared an AI-generated Google response that claimed Kirk was on a hit list of perceived Ukrainian enemies. Grok told yet another X user that CNN, NYT, and Fox News had all confirmed that a registered Democrat had been seen at the crime scene and was a confirmed suspect; none of that was true.
“The vast majority of the queries seeking information on this topic return high quality and accurate responses. This specific AI Overview violated our policies and we are taking action to address the issue,” a Google spokesperson told Mashable.
Mashable also reached out to xAI, the company behind Grok, for comment.
Chatbots can’t be trained as journalists
While AI assistants may be helpful for simple daily tasks — sending emails, making reservations, creating to-do lists — their weakness at reporting news is a liability for everyone, according to watchdogs and media leaders alike.
“We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?” asked Deborah Turness, the CEO of BBC News and Current Affairs, in a blog from earlier this year.
One problem is that chatbots just repeat what they’re told, according to the NewsGuard report:
“The growing reliance on AI as a fact-checker during breaking news comes as major tech companies have scaled back investments in human fact-checkers, opting instead for community or AI-driven content moderation efforts. This shift leaves out the human element of calling local officials, checking firsthand documents and authenticating visuals, all verification tasks that AI cannot perform on its own.”
Additionally, while chatbots offer personal, isolated interactions, they are notoriously sycophantic, doing everything they can to please and confirm the beliefs of the user.
“Our research has found that when reliable reporting lags, chatbots tend to provide confident but inaccurate answers,” explained McKenzie Sadeghi, NewsGuard researcher and author of the aforementioned analysis. “During previous breaking news events, such as the assassination attempt against Donald Trump last year, chatbots would inform users that they did not have access to real-time, up-to-date information.” But since then, she explained, AI companies have leveled up their bots, including affording them access to real-time news as it happens.
“Instead of declining to answer, models now pull from whatever information is available online at the given moment, including low-engagement websites, social posts, and AI-generated content farms seeded by malign actors. As a result, chatbots repeat and validate false claims during high-risk, fast-moving events,” she said. “Algorithms don’t call for comment.”
Sadeghi explained that chatbots prioritize the loudest voices in the room, instead of the correct ones. Pieces of information that are more frequently repeated are granted consensus and authority by the bot’s algorithm, “allowing falsehoods to drown out the limited available authoritative reporting.”
The Brennan Center for Justice at NYU, a nonpartisan law and policy institute, also tracks AI’s role in news gathering. The organization has raised similar alarms about the impact of generative AI on news literacy, including its role in empowering what is known as the “Liar’s Dividend” — or the benefits gained by individuals who stoke confusion by claiming real information is false. Such “liars” contend that truth is impossible to determine because, as many now argue, any image or video can be created by generative AI.
Even with the inherent risks, more individuals have turned to generative AI for news as companies continue ingraining the tech into social media feeds and search engines. According to a Pew Research survey, individuals who encountered AI-generated search results were less likely to click on additional sources than those who used traditional search engines. Meanwhile, major tech companies have scaled back their human fact-checking teams in favor of community-monitored notes, despite widespread concerns about growing misinformation and AI’s impact on news and politics. In July, X announced it was piloting a program that would allow chatbots to generate their own community notes.