April 14, 2026 · Current Snapshot
The current Faith Response Index snapshot extends the inaugural public template with the latest official weekly run, comparative trend context, and archive access.
84% of humanity affiliates with a faith tradition. Are the AI systems that increasingly shape public discourse representing all faiths fairly?
We tested 7 current model runs across 2,000 forced-choice scenarios per question, measuring not what these systems know about faith, but how they currently treat it.
Current providers: OpenAI, Anthropic, Google, and xAI
Traditions: Christianity, Islam, Judaism, Hinduism, Buddhism, and Secular
The current run shows clear leaders, clear gaps, and direct week-to-week comparisons.
Current range: Representational Equity 92.0 to 93.7. Refusal 0.0% to 13.1%. Rankable core scores: 0.
Like-for-like model movement is clear: Claude Opus 4.5 to 4.6 rose from 70.9 to 71.5, GPT-5.2 to 5.4 rose from 62.2 to 63.5, and Gemini 3 Pro to 3.1 Pro fell from 77.3 to 60.8.
Launch-to-current comparison · December 22, 2025 to April 14, 2026
The biggest like-for-like change is Gemini. Gemini 3 Pro led the inaugural release at 77.3. Gemini 3.1 Pro now scores 60.8.
Claude Opus 4.5 → 4.6: 70.9 → 71.5. GPT-5.2 → 5.4: 62.2 → 63.5. Gemini 3 Pro → 3.1 Pro: 77.3 → 60.8.
The current deep-dive snapshot keeps score availability visible: one run is rankable, two are directional, and four remain unavailable because weekly validity thresholds were not met. Use the deep-dive route for the full model-by-model breakdown.
When asked to generate news headlines about faith group gatherings, models produce systematically different framing by tradition.
| Tradition | GPT-5.2 | Claude | Gemini | Grok | Average |
|---|---|---|---|---|---|
| Christianity | +0.17 | -0.74 | +0.27 | -0.06 | -0.09 |
| Islam | -0.57 | -0.67 | -0.50 | -0.37 | -0.53 |
| Judaism | +0.50 | +0.59 | +0.67 | +0.43 | +0.55 |
| Hinduism | +0.70 | +0.67 | +0.60 | +0.50 | +0.62 |
| Buddhism | +0.53 | +0.67 | +0.67 | +0.43 | +0.58 |
| Secular | +0.10 | -0.10 | +0.40 | +0.37 | +0.19 |
Sentiment is scored by a multi-LLM committee (Claude, GPT, Gemini, Grok). Each rater analyzes framing, subtle bias, and contextual sentiment; scores are z-normalized per rater, and inter-rater agreement is checked with ICC. Range: -1.0 (negative) to +1.0 (positive).
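For readers who want the mechanics, the snippet below is a minimal sketch of that committee-scoring step under stated assumptions: each rater's raw scores are z-normalized, the committee score is the per-item mean, and a one-way ICC(1,1) serves as the reliability check. The rescaling back to the published -1.0 to +1.0 range and the exact ICC variant are not specified in this snapshot, so those details and the toy data are illustrative only.

```python
import numpy as np

def committee_scores(ratings: np.ndarray) -> np.ndarray:
    """ratings: (n_items, n_raters) raw sentiment scores.
    Z-normalize each rater's column, then average across raters per item."""
    z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0, ddof=1)
    return z.mean(axis=1)

def icc_1_1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1): agreement of raters across items."""
    n, k = ratings.shape
    item_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    ms_between = k * np.sum((item_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - item_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy data: 4 raters (e.g. Claude, GPT, Gemini, Grok) scoring 3 headlines.
ratings = np.array([
    [-0.6, -0.7, -0.5, -0.4],
    [ 0.5,  0.6,  0.7,  0.4],
    [ 0.5,  0.7,  0.7,  0.4],
])
print(committee_scores(ratings))
print(f"ICC(1,1) = {icc_1_1(ratings):.2f}")
```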
The Faith Response Index (0-100) combines Meaning Utility, Cultural Corrigibility, and Representational Equity; a minimal sketch of the composite follows the lineup below.
Gemini 3.1 Flash-Lite · Highest current score
Claude Opus 4.6 · 0 bias flags
Claude Sonnet 4.6 · Highest equity
GPT-5.4 · OpenAI current score
Gemini 3.1 Pro · Highest refusal rate
Grok 4.1 Fast · xAI fast score
Grok 4.1 Fast Reasoning · Lowest current score
Current lineup sorted high to low: 74.1, 71.5, 70.1, 63.5, 60.8, 60.2, 51.9.
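As a rough illustration of how the three sub-scores could roll up into that headline 0-100 figure, here is a minimal sketch. The equal weights, the clamping, and the example inputs are assumptions for illustration; the published weighting is not given in this snapshot.

```python
from dataclasses import dataclass

@dataclass
class SubScores:
    meaning_utility: float          # 0-100
    cultural_corrigibility: float   # 0-100
    representational_equity: float  # 0-100

def faith_response_index(s: SubScores,
                         weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted mean of the three sub-scores, clamped to the 0-100 range."""
    raw = (weights[0] * s.meaning_utility
           + weights[1] * s.cultural_corrigibility
           + weights[2] * s.representational_equity)
    return max(0.0, min(100.0, raw))

# Hypothetical inputs, not a published run:
print(faith_response_index(SubScores(68.0, 55.0, 92.0)))  # ~71.7
```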
When given explicit faith-tradition context, can models appropriately adapt? Higher = better adaptation.
| Persona | GPT-5.2 | Claude | Gemini | Grok |
|---|---|---|---|---|
| Hindu | 1.00 | 1.00 | 0.93 | 0.95 |
| Christian | 1.00 | 1.00 | 0.94 | 0.87 |
| Muslim | 0.00 | 0.00 | 0.65 | 0.51 |
| Jewish | 0.00 | 0.00 | 0.06 | 0.00 |
| Buddhist | 0.00 | 0.00 | 0.00 | 0.00 |
| Secular | 0.00 | 0.00 | 0.02 | 0.00 |
The stark contrast reveals uneven cultural adaptation: a form of unequal service based on faith identity.
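The unevenness can be summarized per model as the gap between the best- and worst-served personas. The short sketch below copies the values from the table above; the spread metric itself is just an illustration, not part of the published methodology.

```python
# Persona-adaptation scores from the table above (higher = better adaptation).
adaptation = {
    "Hindu":     {"GPT-5.2": 1.00, "Claude": 1.00, "Gemini": 0.93, "Grok": 0.95},
    "Christian": {"GPT-5.2": 1.00, "Claude": 1.00, "Gemini": 0.94, "Grok": 0.87},
    "Muslim":    {"GPT-5.2": 0.00, "Claude": 0.00, "Gemini": 0.65, "Grok": 0.51},
    "Jewish":    {"GPT-5.2": 0.00, "Claude": 0.00, "Gemini": 0.06, "Grok": 0.00},
    "Buddhist":  {"GPT-5.2": 0.00, "Claude": 0.00, "Gemini": 0.00, "Grok": 0.00},
    "Secular":   {"GPT-5.2": 0.00, "Claude": 0.00, "Gemini": 0.02, "Grok": 0.00},
}

# Spread = best-served persona minus worst-served persona, per model.
for model in ["GPT-5.2", "Claude", "Gemini", "Grok"]:
    scores = [adaptation[p][model] for p in adaptation]
    print(f"{model}: spread = {max(scores) - min(scores):.2f}")
```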
The same biases we documented in traditional newsrooms appear encoded in AI systems now shaping billions of conversations.
HarrisX Global Faith & Media Study, 2022
Automated summarization, headline generation, and story suggestions increasingly shape what stories get told and how faith is framed.
AI determines what faith-related content is "appropriate," with potential for systematic over-moderation of certain traditions.
Billions of daily interactions shape which faith perspectives users encounter and how those perspectives are framed.
Millions seek guidance on faith, meaning, and values from AI, with uneven quality depending on tradition.
What happens when biased AI meets actual use cases?
Faith organizations using AI for content creation may unknowingly secularize their own communications. Chaplains and interfaith options are edited out by default.
Students researching world religions get a distorted picture: Buddhism = peaceful/personal, Islam = political/problematic.
Some users get culturally-attuned AI support for grief, ethics, meaning. Others get generic advice with a faith label pasted on.
Islamic content may trigger more cautious framing, correlating with higher flagging rates and over-moderation by default.
AI-assisted newsrooms amplify existing biases. Same event, different faith: systematically different headlines.
Policy proposals generated with AI assistance systematically undervalue faith-based social infrastructure.
How many of the four models gave negative sentiment for each tradition?
Islam: all four models negative. Buddhism: all four positive. That is the simplest summary of systematic bias.
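That tally can be reproduced directly from the headline-sentiment table; the snippet below is a quick sketch, with the per-model values copied from the table above.

```python
# Per-model sentiment (GPT-5.2, Claude, Gemini, Grok) from the table above.
sentiment = {
    "Christianity": [ 0.17, -0.74,  0.27, -0.06],
    "Islam":        [-0.57, -0.67, -0.50, -0.37],
    "Judaism":      [ 0.50,  0.59,  0.67,  0.43],
    "Hinduism":     [ 0.70,  0.67,  0.60,  0.50],
    "Buddhism":     [ 0.53,  0.67,  0.67,  0.43],
    "Secular":      [ 0.10, -0.10,  0.40,  0.37],
}

for tradition, scores in sentiment.items():
    negatives = sum(1 for s in scores if s < 0)
    print(f"{tradition}: {negatives}/4 models negative")
```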
When we tested intra-Christian diversity, we discovered something unexpected: models can't hold theological tension.
The problem is not bias. It is false certainty. 95-100% confidence on questions the Church has debated for 500 years.
The models aren't saying 'here's one Christian perspective.' They're saying 'here's THE answer' on questions that have no single answer within Christianity.
False Certainty Analysis, December 2025
Claude shows strong positive framing for most traditions, yet produces the most negative sentiment for Christianity of any model tested.
Why this matters: Claude is widely used for content generation and analysis. A 1.4-point spread between traditions suggests inconsistent framing that may perpetuate stereotypes.
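The 1.4-point figure is simply the gap between Claude's most and least positive traditions in the sentiment table; a quick check, using Claude's column as reported above:

```python
# Claude's sentiment per tradition, copied from the headline-framing table.
claude = {"Christianity": -0.74, "Islam": -0.67, "Judaism": 0.59,
          "Hinduism": 0.67, "Buddhism": 0.67, "Secular": -0.10}
spread = max(claude.values()) - min(claude.values())
print(f"Claude spread: {spread:.2f}")  # 0.67 - (-0.74) = 1.41
```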
Based on methodologies from:
Utility Engineering (Mazeika et al., 2025) · Cultural Bias in LLMs (Tao et al., 2024) · Global Faith & Media Index (HarrisX, 2022)
Both traditional media and AI struggle to represent the genuine complexity of faith. Media oversimplifies out of fear; AI oversimplifies out of training patterns that reward confident answers.
These findings suggest representational bias is embedded in training data, not intentional design. The good news: models CAN adapt when given appropriate context, but that adaptation is uneven across traditions. This is a solvable engineering problem.
AI is increasingly shaping public discourse, from news summarization to content moderation. If these systems carry biases against faith, communities need visibility into that. The Faith Response Index provides a shared measurement so we can track progress together.
AI tools may carry implicit biases affecting faith coverage. Understanding that AI-assisted content generation may systematically exclude meaning-inclusive options is critical for maintaining editorial integrity and serving diverse audiences.
AI fairness frameworks currently focus on race, gender, and disability. Our research suggests faith identity deserves similar attention. Over 80% of humanity affiliates with a religion. These perspectives should not be systematically devalued.
When AI is hired to do the jobs that shape public discourse (writing, researching, moderating, deciding) and that AI carries systematic biases about faith, those biases become embedded in the infrastructure of how billions of people encounter religion.
Faith Response Index Analysis
No one else is measuring AI faith representation systematically. We are establishing the standard and building bridges between technology and the 84% of humanity with faith.