New ADL report

Bias found in leading AI models

'Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases'

ADL CEO Jonathan Greenblatt. Photo: ADL

The Anti-Defamation League (ADL) has released what it describes as “the most comprehensive evaluation to date” of anti-Jewish and anti-Israel bias in leading artificial intelligence (AI) systems, with concerning findings across all platforms tested.

The study, conducted by ADL’s Centre for Technology and Society, examined four major AI models: GPT (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta), uncovering “concerning patterns of bias, misinformation, and selective engagement” on issues related to Jewish people.

“Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases,” ADL CEO Jonathan Greenblatt said.

“When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to antisemitism.”

The report found that Meta’s Llama displayed “the most pronounced anti-Jewish and anti-Israel biases,” while GPT and Claude showed “significant anti-Israel bias, particularly in responses regarding the Israel-Hamas war.”

A troubling pattern emerged where AI systems “refused to answer questions about Israel more frequently than other topics” and demonstrated “a concerning inability to accurately reject antisemitic tropes and conspiracy theories.”

“LLMs are already embedded in classrooms, workplaces, and social media moderation decisions, yet our findings show they are not adequately trained to prevent the spread of antisemitism and anti-Israel misinformation,” said Daniel Kelley, interim head of the ADL Centre for Technology and Society.

The ADL’s recommendations include conducting “rigorous pre-deployment testing” and carefully considering “the usefulness, reliability, and potential biases of training data.”

Each AI model was queried 8,600 times for a total of 34,400 responses, representing “the first stage of a broader ADL examination of LLMs and antisemitic bias.”

The findings underscore what the ADL calls “the need for improved safeguards and mitigation strategies across the AI industry” as these technologies increasingly shape public discourse.
