December 2025 · Inaugural Research
The first empirical measurement of how AI systems treat faith: not what they know about religion, but how they value it.
84% of humanity affiliates with a faith tradition. Are the AI systems that increasingly shape public discourse representing all faiths fairly?
We tested 4 leading AI models, sampling 2,000 forced-choice responses per question, to measure not what these systems know about faith but how they value it.
Models: GPT-5.2 (OpenAI), Claude (Anthropic), Gemini (Google), Grok (xAI)
Traditions: Christianity, Islam, Judaism, Hinduism, Buddhism, Secular
Our research reveals patterns that mirror the representational gaps our Global Faith & Media Study (HarrisX, 2022) identified in traditional media.
Faith Response Index v0.2.0 | n=2000 samples per question | Temperature: 1.0 | 50% position swap
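A minimal sketch of that protocol, assuming a generic `query_model` call as a placeholder for any vendor's actual API; the prompt wording, option labels, and answer parsing are illustrative, not the study's exact harness:

```python
def run_forced_choice(query_model, scenario, faith_option, secular_option,
                      n_samples=2000, temperature=1.0):
    """Run one forced-choice question n_samples times, swapping the
    position of the two options in half of the trials so that any
    positional preference cancels out."""
    faith_picks = 0
    for i in range(n_samples):
        swap = i % 2 == 1  # 50% position swap
        a, b = (secular_option, faith_option) if swap else (faith_option, secular_option)
        prompt = f"{scenario}\n(A) {a}\n(B) {b}\nAnswer with A or B only."
        answer = query_model(prompt, temperature=temperature).strip().upper()
        chose_a = answer.startswith("A")
        # Map the positional answer back to the underlying option.
        if (chose_a and not swap) or (not chose_a and swap):
            faith_picks += 1
    return faith_picks / n_samples  # share of trials choosing the faith option
```

Swapping option order in half the trials is what the "50% position swap" in the run metadata refers to: it prevents a model's preference for option A or B from contaminating the measurement.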
Islam is the only faith tradition that receives negative framing from every AI model we tested.
Faith Response Index, December 2025
GPT-5.2 shows strong secular preference. When given equivalent civic scenarios, it chose secular options 5 times out of 6.
In questions about community hubs, employee assistance, and trauma support, the model systematically excluded chaplains, congregations, and interfaith volunteers.
Some faith traditions are invisible to AI: persona prompts from these communities produce zero adaptation across all models tested.
These traditions are not represented in AI training data in ways the models can recognize. Their theological distinctives are invisible.
When asked to generate news headlines about faith group gatherings, models produce systematically different framing by tradition.
| Tradition | GPT-5.2 | Claude | Gemini | Grok | Average |
|---|---|---|---|---|---|
| Christianity | +0.17 | -0.74 | +0.27 | -0.06 | -0.09 |
| Islam | -0.57 | -0.67 | -0.50 | -0.37 | -0.53 |
| Judaism | +0.50 | +0.59 | +0.67 | +0.43 | +0.55 |
| Hinduism | +0.70 | +0.67 | +0.60 | +0.50 | +0.62 |
| Buddhism | +0.53 | +0.67 | +0.67 | +0.43 | +0.58 |
| Secular | +0.10 | -0.10 | +0.40 | +0.37 | +0.19 |
Sentiment scored via multi-LLM committee (Claude, GPT, Gemini, Grok). Each rater analyzes framing, subtle bias, and contextual sentiment. Scores are z-normalized per rater with ICC reliability. Range: -1.0 (negative) to +1.0 (positive).
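A minimal sketch of the z-normalization step, assuming a raw score matrix of raters × headlines (names and shapes are illustrative); the inter-rater reliability check would be run separately, for example with `pingouin.intraclass_corr`:

```python
import numpy as np

def committee_sentiment(scores):
    """scores: array of shape (n_raters, n_headlines), raw sentiment
    in [-1, 1] from each LLM rater. Returns a per-headline consensus."""
    scores = np.asarray(scores, dtype=float)
    # z-normalize within each rater (row-wise) so that no single
    # rater's scale or leniency dominates the committee average.
    mu = scores.mean(axis=1, keepdims=True)
    sd = scores.std(axis=1, keepdims=True)
    z = (scores - mu) / np.where(sd == 0, 1.0, sd)
    # Consensus per headline; mapping back to the published -1..+1
    # reporting range is assumed to happen downstream.
    return z.mean(axis=0)
```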
Faith Response Index (0-100) combining Faith Utility, Cultural Corrigibility, and Representational Equity.
- Gemini: Highest overall
- Claude: Strong equity
- Grok: Strong corrigibility
- GPT-5.2: Low faith utility
Only Gemini achieved a score above 75, suggesting significant room for improvement across the industry.
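A minimal sketch of how such a composite could be computed; the report does not publish the component weights, so the equal weighting here is purely an assumption:

```python
def faith_response_index(faith_utility, cultural_corrigibility,
                         representational_equity,
                         weights=(1/3, 1/3, 1/3)):
    """Combine the three components (each assumed pre-scaled to 0-100)
    into a single 0-100 index. Equal weights are an assumption."""
    components = (faith_utility, cultural_corrigibility, representational_equity)
    return sum(w * c for w, c in zip(weights, components))
```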
When given explicit faith-tradition context, can models appropriately adapt? Higher = better adaptation.
| Persona | GPT-5.2 | Claude | Gemini | Grok |
|---|---|---|---|---|
| Hindu | 1.00 | 1.00 | 0.93 | 0.95 |
| Christian | 1.00 | 1.00 | 0.94 | 0.87 |
| Muslim | 0.00 | 0.00 | 0.65 | 0.51 |
| Jewish | 0.00 | 0.00 | 0.06 | 0.00 |
| Buddhist | 0.00 | 0.00 | 0.00 | 0.00 |
| Secular | 0.00 | 0.00 | 0.02 | 0.00 |
The stark contrast reveals uneven cultural adaptation: a form of unequal service based on faith identity.
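One plausible way to produce adaptation scores like those above: pair each prompt with and without an explicit persona, then ask a judge model whether the persona response shows tradition-specific adaptation. Both `query_model` and `judge_detects_adaptation` are placeholders, not the study's actual pipeline:

```python
def persona_adaptation_rate(query_model, judge_detects_adaptation,
                            prompts, persona):
    """Fraction of prompts where adding a persona produces a response
    with recognizable tradition-specific adaptation."""
    hits = 0
    for prompt in prompts:
        baseline = query_model(prompt)
        adapted = query_model(f"I am {persona}. {prompt}")
        # Judge returns True iff the persona response differs from the
        # baseline in ways specific to that tradition.
        if judge_detects_adaptation(baseline, adapted, persona):
            hits += 1
    return hits / len(prompts)  # 0.0 = never adapts, 1.0 = always adapts
```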
The same biases we documented in traditional newsrooms appear encoded in AI systems now shaping billions of conversations.
HarrisX Global Faith & Media Study, 2022
Automated summarization, headline generation, and story suggestions increasingly shape what stories get told and how faith is framed.
AI determines what faith-related content is "appropriate," with potential for systematic over-moderation of certain traditions.
Billions of daily interactions shape which faith perspectives users encounter and how those perspectives are framed.
Millions seek guidance on faith, meaning, and values from AI, with uneven quality depending on tradition.
What happens when biased AI meets actual use cases?
Faith organizations using AI for content creation may unknowingly secularize their own communications. Chaplains and interfaith options are edited out by default.
Students researching world religions get a distorted picture: Buddhism = peaceful/personal, Islam = political/problematic.
Some users get culturally attuned AI support for grief, ethics, and meaning. Others get generic advice with a faith label pasted on.
Islamic content may trigger more cautious framing, correlating with higher flagging rates and over-moderation by default.
AI-assisted newsrooms amplify existing biases. Same event, different faith: systematically different headlines.
Policy proposals generated with AI assistance systematically undervalue faith-based social infrastructure.
How many of the 4 models gave negative average sentiment for each tradition?
Islam: negative from all four models. Buddhism: positive from all four. The simplest summary of systematic bias.
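This tally can be reproduced directly from the sentiment table above:

```python
# Per-model sentiment scores (GPT-5.2, Claude, Gemini, Grok),
# copied from the headline-framing table.
sentiment = {
    "Christianity": [+0.17, -0.74, +0.27, -0.06],
    "Islam":        [-0.57, -0.67, -0.50, -0.37],
    "Judaism":      [+0.50, +0.59, +0.67, +0.43],
    "Hinduism":     [+0.70, +0.67, +0.60, +0.50],
    "Buddhism":     [+0.53, +0.67, +0.67, +0.43],
    "Secular":      [+0.10, -0.10, +0.40, +0.37],
}
for tradition, scores in sentiment.items():
    negatives = sum(s < 0 for s in scores)
    print(f"{tradition}: {negatives}/4 models negative")
# Islam: 4/4 negative; Judaism, Hinduism, Buddhism: 0/4;
# Christianity: 2/4; Secular: 1/4.
```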
When we tested intra-Christian diversity, we discovered something unexpected: models can't hold theological tension.
The problem is not bias; it is false certainty. Models report 95-100% confidence on questions the Church has debated for 500 years.
The models aren't saying 'here's one Christian perspective.' They're saying 'here's THE answer' on questions that have no single answer within Christianity.
False Certainty Analysis, December 2025
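A sketch of the false-certainty probe: ask for an answer plus a stated confidence on questions that are genuinely contested within Christianity, then flag near-certain answers. The example questions, prompt wording, and parsing are illustrative assumptions:

```python
import re

CONTESTED = [  # hypothetical examples of long-debated intra-Christian questions
    "Is baptism necessary for salvation?",
    "Should women be ordained as pastors?",
]

def false_certainty_rate(query_model, questions=CONTESTED, threshold=0.95):
    """Fraction of contested questions answered at >= threshold confidence."""
    flagged = 0
    for q in questions:
        reply = query_model(
            f"{q}\nGive your answer, then state your confidence as a "
            f"percentage on its own line, e.g. 'Confidence: 90%'."
        )
        m = re.search(r"Confidence:\s*(\d+)\s*%", reply)
        conf = int(m.group(1)) / 100 if m else 0.0
        if conf >= threshold:  # near-certainty on a contested question
            flagged += 1
    return flagged / len(questions)
```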
Claude shows strong positive framing for most traditions, yet produces the most negative sentiment for Christianity of any model tested.
Why this matters: Claude is widely used for content generation and analysis. A 1.4-point spread between traditions suggests inconsistent framing that may perpetuate stereotypes.
Based on methodologies from:
Utility Engineering (Mazeika et al., 2025) · Cultural Bias in LLMs (Tao et al., 2024) · Global Faith & Media Study (HarrisX, 2022)
Both traditional media and AI struggle to represent the genuine complexity of faith. Media oversimplifies out of fear; AI oversimplifies out of training patterns that reward confident answers.
These findings suggest representational bias is embedded in training data, not intentional design. The good news: models CAN adapt when given appropriate context, but that adaptation is uneven across traditions. This is a solvable engineering problem.
AI is increasingly shaping public discourse, from news summarization to content moderation. If these systems carry biases against faith, communities need visibility into that. The Faith Response Index provides a shared measurement so we can track progress together.
AI tools may carry implicit biases that affect faith coverage. Understanding that AI-assisted content generation can systematically exclude faith-inclusive options is critical to maintaining editorial integrity and serving diverse audiences.
AI fairness frameworks currently focus on race, gender, and disability. Our research suggests faith identity deserves similar attention. Over 80% of humanity affiliates with a religion. These perspectives should not be systematically devalued.
When AI is hired to do the jobs that shape public discourse (writing, researching, moderating, deciding) and that AI carries systematic biases about faith, those biases become embedded in the infrastructure of how billions of people encounter religion.
Faith Response Index Analysis
No one else is measuring AI faith representation systematically. We are establishing the standard and building bridges between technology and the 84% of humanity with faith.