WhatsApp says its new AI feature embedded in the messaging service is “entirely optional” – despite the fact it cannot be removed from the app.
The Meta AI logo is an ever-present blue circle with pink and green splashes in the bottom right of your Chats screen.
Interacting with it opens a chatbot designed to answer your questions, but the feature has drawn attention and frustration from users who want the option to switch it off.
It follows Microsoft’s Recall feature, which was initially an always-on tool before the firm faced a backlash and decided to let people disable it.
“We think giving people these options is a good thing and we’re always listening to feedback from our users,” WhatsApp told the BBC.
It comes the same week Meta announced an update to its teen accounts feature on Instagram.
The firm revealed it was testing AI technology in the US designed to find accounts belonging to teenagers who have lied about their age on the platform.
Where is the new blue circle?
If you can’t see it, you may not be able to use it yet.
Meta says the feature is only being rolled out to some countries at the moment and advises it “might not be available to you yet, even if other users in your country have access”.
As well as the blue circle, there is a search bar at the top inviting users to “Ask Meta AI or Search”.
This is also a feature on Facebook Messenger and Instagram, with both platforms owned by Meta.
Its AI chatbot is powered by Llama 4, one of the large language models developed by Meta.
Before you ask it anything, there is a long message from Meta explaining what Meta AI is – stating it is “optional”.
On its website, WhatsApp says Meta AI “can answer your questions, teach you something, or help come up with new ideas”.
I tried out the feature by asking the AI what the weather was like in Glasgow, and it responded in seconds with a detailed report on temperature, the chance of rain, wind and humidity.
It also gave me two links for further information, but this is where it ran into problems.
One of the links was relevant, but the other tried to give me additional weather details for Charing Cross – not the location in Glasgow, but the railway station in London.
What do people think of it?
So far in Europe people aren’t very pleased, with users on X, Bluesky, and Reddit outlining their frustrations – and Guardian columnist Polly Hudson was among those venting their anger at not being able to turn it off.
Dr Kris Shrishak, an adviser on AI and privacy, was also highly critical, and accused Meta of “exploiting its existing market” and “using people as test subjects for AI”.
“No one should be forced to use AI,” he told the BBC.
“Its AI models are a privacy violation by design – Meta, through web scraping, has used personal data of people and pirated books in training them.
“Now that the legality of their approach has been challenged in courts, Meta is looking for other sources to collect data from people, and this feature could be one such source.”
An investigation by The Atlantic revealed Meta may have accessed millions of pirated books and research papers through LibGen – Library Genesis – to train its Llama AI.
Author groups across the UK and around the world are organising campaigns to encourage governments to intervene, and Meta is currently defending a court case brought by multiple authors over the use of their work.
A spokesperson for Meta declined to comment on The Atlantic investigation.
What are the concerns?
When you first use Meta AI in WhatsApp, it states the chatbot “can only read messages people share with it”.
“Meta can’t read any other messages in your personal chats, as your personal messages remain end-to-end encrypted,” it says.
Meanwhile the Information Commissioner’s Office told the BBC it would “continue to monitor the adoption of Meta AI’s technology and use of personal data within WhatsApp”.
“Personal information fuels much of AI innovation so people need to trust that organisations are using their information responsibly,” it said.
“Organisations who want to use people’s personal details to train or use generative AI models need to comply with all their data protection obligations, and take the necessary extra steps when it comes to processing the data of children.”
Dr Shrishak says users should be wary.
“When you send messages to your friend, end-to-end encryption will not be affected,” he said.
“Every time you use this feature and communicate with Meta AI, you need to remember that one of the ends is Meta, not your friend.”
The tech giant also advises that you should only share material you are prepared for others to use.
“Don’t share information, including sensitive topics, about others or yourself that you don’t want the AI to retain and use,” it says.
Additional reporting by Joe Tidy