Grok spread misinformation about the Bondi Beach shooting

On the evening of Dec. 14, a large crowd gathered at Australia’s Bondi Beach to celebrate the first night of Hanukkah and was instead met with violence when two gunmen opened fire on the group. As of this writing, 15 people have been killed.

One of the assailants was taken down by bystander Ahmed Al Ahmed, whose brave decision to grapple with the shooter and strip away his weapon was captured on video and shared widely across social media platforms. To viewers primed by an epidemic of gun violence that has turned many bystanders into heroes, it’s clear on camera that the man in the white shirt is potentially saving dozens of lives. The long-barreled gun is in plain view as he wrests it from the hands of a man clad in black, who topples over and then ambles away.

But X’s Grok, the AI chatbot built by Elon Musk’s AI venture xAI, didn’t see it that way.

As users stumbled across the harrowing video of Ahmed the following morning and asked the chatbot to explain it, Grok described the scene as “an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it.” X users have since appended a fact-check to the bot’s reply. In other responses, Grok mislabeled the video as footage from the Oct. 7 Hamas attack and attributed it to Tropical Cyclone Alfred, Gizmodo reported.

X hasn’t yet explained why this glitch occurred, or why Grok has made similar fumbles beyond queries about Bondi Beach.

But watchdogs know why, and it’s very simple: chatbots are bad at breaking news. In the wake of the killing of far-right commentator Charlie Kirk, Grok amplified conspiracy theories about the shooter and Kirk’s own bodyguards, telling some users that a graphic video clearly showing Kirk’s death was just a meme. Other AI-powered search tools, including Google’s AI Overviews, also gave false information in the immediate aftermath of Kirk’s death.

“Instead of declining to answer, models now pull from whatever information is available online at the given moment, including low-engagement websites, social posts, and AI-generated content farms seeded by malign actors. As a result, chatbots repeat and validate false claims during high-risk, fast-moving events,” NewsGuard researcher McKenzie Sadeghi told Mashable at the time.

Social media platforms have also scaled back human fact-checking across the board, and when responding to breaking news, chatbots may prioritize how frequently a claim appears over whether it is accurate.

AI companies know this is a glaring gap for their bots, which is why they’ve courted news publications with larger and larger licensing deals to improve their products. Earlier this month, Meta signed commercial AI agreements with multiple news publishers, including CNN, Fox News, and the international publication Le Monde, adding to its existing partnership with Reuters. Google is running a pilot program with participating news publishers to bring AI-powered features, including article summaries, to Google News.

Hallucinations and inaccuracy also remain a major problem for large language models and AI chatbots in general, which often confidently serve false information to users.
