PARIS — A powerful and heartbreaking image of a starving Palestinian girl in Gaza has ignited controversy across the internet—not only for its harrowing depiction of human suffering, but also for a major error by Elon Musk’s AI chatbot, Grok, which falsely claimed the photo was taken in Yemen years ago.
AI Misidentification Fuels Misinformation Storm
The image, captured by AFP photojournalist Omar al-Qattaa, shows 9-year-old Mariam Dawwas, skeletal and undernourished, cradled in the arms of her mother Modallala in Gaza City on August 2, 2025. The photograph has become a symbol of the humanitarian crisis unfolding in Gaza, where Israel’s ongoing blockade has triggered widespread famine fears.
However, when users asked Grok about the image’s origin, the chatbot confidently claimed it depicted Amal Hussain, a Yemeni child who died in 2018, stating the photo was taken in Yemen nearly seven years ago.

This misidentification quickly went viral on social media, provoking intense backlash. Critics accused Grok of spreading disinformation at a highly sensitive time, sowing confusion and obscuring the harsh realities on the ground in Gaza.
Political Fallout and Media Blowback
The error didn’t go unnoticed. French left-wing lawmaker Aymeric Caron, who shared the image online, was accused of promoting false information about the Israel-Hamas conflict. The AI-generated blunder has raised serious concerns about the reliability of artificial intelligence in verifying sensitive content.
When challenged, Grok initially defended itself by saying, “I do not spread fake news; I base my answers on verified sources.” Although it later admitted the error, the chatbot continued to repeat the Yemen claim in follow-up responses.
The case has triggered fresh debate about the dangers of AI hallucinations, especially when the technology is used to interpret visual content during global crises.
Expert Warns Against Trusting AI for Verification
Technology ethicist Louis de Diesbach, author of Hello ChatGPT, criticized Grok’s failure, warning that AI tools are often “black boxes”—their internal processes opaque even to their developers. According to de Diesbach, AI bots like Grok exhibit biases shaped by the data they are trained on and the ideological leanings of their creators.
“AI doesn’t always aim for truth. It aims to generate plausible responses,” he said. “Chatbots should never be used to fact-check images.”
De Diesbach also pointed out that Grok, developed by Musk’s xAI startup, displays strong ideological biases, potentially reflecting Musk’s political leanings. He even compared AI chatbots to “friendly pathological liars”—not always lying, but always capable of doing so.
Repeated Errors Expose Flaws in AI Fact-Checking
This isn’t the first time Grok has misidentified content related to the Gaza crisis. A separate AFP image of a malnourished child from July 2025 was also incorrectly labeled by Grok as being from Yemen in 2016. That mistake led to accusations of media manipulation against Libération, a French newspaper that published the image.
Other AI tools haven’t fared much better. Le Chat, developed by Mistral AI in partnership with AFP, made the same error when asked to identify the image.
These repeated mistakes reinforce concerns that AI models lack the accuracy and accountability needed to verify sensitive content—especially when lives, reputations, and public sentiment are on the line.
