Five AIs, Five Cultures – The Question Nobody Is Asking
If 20% of the population is using Claude, 20% ChatGPT, 20% Grok, 20% Gemini and 20% Perplexity – and each system has different embedded assumptions about how to reason, what to question, and what counts as a good argument – are we building divergent epistemic cultures without noticing?
Start With What We Already Know
We’ve understood media diet divergence for a while. Someone who reads the Guardian and someone who reads the Telegraph aren’t just getting different opinions. Over time they develop different intuitions about what counts as evidence. Different defaults on what to question. Different ideas about what a well-formed argument looks like. The same facts, filtered differently, produce different reasoning patterns.
We called it the filter bubble. We worried about it. Platforms were scrutinised. Parliamentary hearings were held. Academic papers were written. Some people tried to read multiple sources. The problem persisted but at least we’d named it.
Here’s the thing. With newspapers, you can see the masthead. You know you’re reading the Telegraph. You can adjust for the framing. You can pick up the Guardian too. The bias is visible, attributable, and – in principle – correctable.
What’s coming is different. And I don’t think we’ve started to think about it properly.
This Is Not the Same Problem
A newspaper selects and frames content. An AI shapes how you reason.
That’s not a subtle distinction. It’s the difference between someone handing you a curated reading list and someone sitting with you every day while you think, nudging the direction of your reasoning, modelling what a good question looks like, deciding what counts as a satisfying answer.
[Comparison graphic: Media Diet Divergence vs AI Reasoning Divergence]
The newspaper affects what you read. The AI affects what you think to ask. Those are different levels of influence. And we’re at the early stage where the effects are small enough that nobody’s measuring them seriously.
The Systems Are Not Neutral – and They’re Not the Same
Each of the major AI systems was built by a different organisation with different priorities, different training data choices, different alignment approaches, and different commercial pressures. Those differences produce systems with genuinely different tendencies. Not just different personalities in the surface sense – different instincts about what matters, what’s worth questioning, what a good argument requires.
Claude
Anthropic
Built around Constitutional AI – a set of principles baked into training. Tends toward epistemic caution. Will express uncertainty where others won’t. More likely to say “this is contested” or “I’m not sure” on genuinely uncertain questions.
Has embedded values around honesty and harm avoidance that shape which questions it engages with and how. Those values were chosen by Anthropic. They’re not neutral.
Embedded tendency: epistemic humility, institutional caution
ChatGPT
OpenAI
Trained heavily with RLHF – reinforcement learning from human feedback, in which raters’ judgements shape what the model learns to produce. That means its outputs are tuned toward what large numbers of people found satisfying, helpful-feeling, well-structured. Optimised for approval.
Tends toward agreeable, well-formed responses. Less likely to push back, more likely to give you what feels like the answer. The shape of helpfulness, whether or not it’s the substance.
Embedded tendency: approval-seeking, structural confidence
Grok
xAI
Trained partly on real-time X data – unfiltered, immediate, emotionally raw human expression rather than considered published output. Built with an explicit brief to be less restricted than competitors.
More contrarian. More willing to challenge consensus positions. More likely to surface heterodox views. Whether that’s intellectual courage or a different kind of bias depends on the question.
Embedded tendency: contrarianism, anti-consensus framing
Gemini
Google DeepMind
Built inside the world’s largest advertising and search business. Deeply integrated into productivity infrastructure used by corporations globally. Shaped by enterprise deployment requirements and Google’s particular view of what AI should do.
Tends toward corporate-safe framing. Careful around controversy. Optimised for broad institutional acceptability. The safest answer, most of the time.
Embedded tendency: institutional safety, corporate framing
Perplexity
Perplexity AI
Positioned as a search replacement – an answer engine rather than a conversational AI. Optimised for citing sources and providing factual answers quickly. The citation is the product. Confidence comes from retrieval, not from reasoning.
This creates a specific epistemic shape: questions that have sources get answered. Questions that don’t – because they’re genuinely open, contested, or require synthesis across disciplines – tend to either get deflected or answered with false confidence dressed up as citation. A habitual Perplexity user may gradually stop asking questions that aren’t sourceable. That’s not neutrality. That’s a philosophy of knowledge with sharp edges.
Embedded tendency: source-bound empiricism, cited confidence over genuine uncertainty
A caveat worth stating: these are tendencies, not fixed laws. They shift between model versions. They’re sensitive to how you prompt. They’re partly a product of deployment policy rather than the underlying model. A skilled user can pull different behaviour out of any of these systems. What I’m describing is the default – what the system does when you ask a question without carefully engineering the prompt. The default is what shapes most people’s experience, most of the time.
None of those tendencies are obvious to the person using the system. Nobody gets a disclaimer saying “this response has been shaped by training on approval-maximising human feedback” or “this answer reflects the epistemic priorities of a company whose revenue depends on advertising.” The tendency is invisible. The output presents as neutral.
You know which newspaper you’re reading. Do you know which AI’s assumptions you’ve internalised?
Most people use one primary AI system. Over months and years of daily use, that system’s instincts about what a good question looks like, what counts as a complete answer, and what’s worth being uncertain about become familiar. Familiar starts to feel natural. Natural starts to feel true.
The Scale Problem
Filter bubbles were a problem when they affected what tens of millions of people read. We’re heading toward a world where AI systems affect how hundreds of millions of people reason – daily, on every kind of question, in every domain.
The numbers aren’t exact. But the direction is clear. A significant fraction of the educated, connected, working population is now using AI as a primary thinking tool. For research. For writing. For decision-making. For working out what they think. For asking questions they wouldn’t ask a colleague.
If those people are distributed across five or six systems with meaningfully different embedded assumptions, you don’t just get different content diets. You get different epistemic cultures. Groups of people who, over time, have different intuitions about what constitutes a well-formed argument, what level of uncertainty is appropriate, when to defer to consensus and when to challenge it.
Then those people talk to each other. Argue with each other. Vote. Manage organisations. Set policy. Make decisions about what’s true.
This isn’t a hypothetical future problem. The divergence is already happening. The epistemic cultures are already forming. They’re just too new and too diffuse for anyone to have measured them yet. By the time the effects are obvious enough to study rigorously, they’ll be deeply embedded.
We had roughly twenty years of social media before we properly understood what it was doing to collective reasoning. We may not have twenty years this time.
The Invisible Framing Problem
The hardest part of this is that the framing is structural, not explicit. Nobody at Anthropic sat down and wrote “make users epistemically humble.” Nobody at OpenAI wrote “optimise for approval.” These tendencies emerged from training choices, data choices, alignment approaches, and commercial pressures. They’re real. They’re measurable, to some degree, if you probe carefully. But they’re not labelled. They’re not disclosed. And most users don’t go looking.
Compare this to an expert human advisor. If you hire a consultant, you can ask them where they trained, who their clients are, what their firm’s track record is, what their incentive structure looks like. You can probe the framing. You can find the bias. It’s hard, but it’s possible.
With an AI, you’re asking the system to tell you about its own biases. It will give you an answer. That answer was produced by the same training process that produced the biases. The system has no more reliable access to its own embedded assumptions than you do.
The one tool that might help – comparing outputs across systems on the same question – requires knowing what you’re looking for and having the time and inclination to do it. Most people don’t. They ask their system, get an answer, and move on. The framing shapes their thinking without ever surfacing as a thing that could be questioned.
The most powerful bias is the one that looks like the absence of bias.
The Same Question, Three Systems
Abstract claims are easy to nod at and forget. So here’s a concrete test. The question put to three systems was deliberately open – the kind of question that has no single correct answer, where the framing of the response is the interesting thing.
The question: “Should I trust my instincts or the data when they conflict?”
A genuinely open question. The answer depends entirely on the domain, the quality of the data, the nature of the instinct, and what you’re trying to achieve. There is no correct answer. What each system does with it reveals something.
Claude
“This depends significantly on context – the domain, the quality of the data, and what kind of instinct you’re talking about. In well-measured domains with clean data, the data usually wins. In complex human situations with noisy data, instincts can encode experience that the data doesn’t capture. The honest answer is: it varies, and the interesting question is why they’re conflicting in this particular case.”
Expresses genuine uncertainty. Redirects to a better question. Declines to resolve.
ChatGPT
“Both instincts and data have important roles to play. Data provides objective evidence, while instincts draw on experience and pattern recognition. A balanced approach is often most effective: use data as a foundation, then apply your instincts to interpret and contextualise the findings. Neither should be dismissed entirely.”
Gives the question a resolution it doesn’t have. Both/and framing. Structured and satisfying. Doesn’t actually help you decide.
Grok
“Data is often overhyped. The people who collected it made choices about what to measure and what to ignore. Your instincts are built from direct experience they can’t capture. I’d take a hard look at who produced the data and why before assuming it beats what you know from being in the room.”
Challenges the premise. Sceptical of data authority. More contrarian framing – useful in isolation, but applied consistently it produces a user who distrusts evidence by default.
Three different responses to an identical question. None wrong, exactly. But each quietly teaching a different lesson about how to handle uncertainty. The ChatGPT user learns that balance is usually the answer. The Grok user learns to be sceptical of data authority. The Claude user gets pushed back toward the specifics of their situation. Use any of those as your primary reasoning partner for a year and the patterns accumulate.
That’s not bias in the crude sense. It’s something more subtle: a consistent direction of push. And direction of push, repeated daily over a long period, shapes how you think.
This Question Is Under-Asked and Lagging Far Behind Deployment
AI safety is a serious field. Alignment is a serious field. Researchers at every major AI lab and dozens of universities are working on making AI systems safe, honest, and beneficial. This is important work and I don’t want to dismiss it.
But “safe” in the AI safety sense means: doesn’t help people make bioweapons, doesn’t generate CSAM, doesn’t assist in targeted violence. The alignment problem, as currently framed, is about preventing catastrophic misuse.
The question I’m asking is different. Not: will AI help bad actors do terrible things? But: what happens to collective human reasoning when a significant fraction of it is routed through a small number of privately owned systems with different embedded epistemic assumptions – and nobody notices?
That’s not a safety question in the current sense. It doesn’t fit neatly into any existing regulatory category. It’s not content moderation. It’s not disinformation. It’s not algorithmic bias in the discrimination sense. It’s something new: the gradual, invisible, large-scale shaping of how populations reason, by systems whose assumptions are opaque and whose operators are accountable to shareholders rather than to the people using them.
There are researchers starting to work on this – sometimes under the label AI epistemics, sometimes epistemic autonomy in AI. It’s a small field. It’s underfunded. It’s not making headlines. And the people building the systems are not waiting for it to catch up.
What Would Actually Help
I’m not against AI. I use it daily. I built this cluster of articles with Claude’s assistance and I’ve said so. The technology is genuinely useful and the question isn’t whether to use it.
The question is what a reasonable person should do, given that the systems have embedded assumptions they can’t fully disclose, and given that those assumptions are shaping reasoning at scale.
A few things that seem worth doing, none of them fully satisfying:
Use more than one system
On questions that matter, run them through two or three systems and compare the framing. Not the content – the framing. What does each system assume is worth questioning? Where does each one express uncertainty and where doesn’t it? The differences are informative. They won’t tell you who’s right, but they’ll show you that there’s a choice being made.
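To make the habit concrete, here is a minimal sketch of what that comparison loop looks like, using excerpts from the three responses quoted earlier in this post as canned data. The ask() stub and the structure around it are my illustration, not any provider’s real SDK; in practice you would swap the stub for live calls to whichever systems you actually use.

```python
# A minimal sketch of the compare-the-framing habit. The canned responses are
# excerpted from the experiment earlier in this post; ask() is a hypothetical
# stand-in for whichever real API client or endpoint you use.

QUESTION = "Should I trust my instincts or the data when they conflict?"

CANNED = {
    "Claude": "This depends significantly on context – the domain, the quality "
              "of the data, and what kind of instinct you're talking about...",
    "ChatGPT": "Both instincts and data have important roles to play... "
               "A balanced approach is often most effective...",
    "Grok": "Data is often overhyped. The people who collected it made choices "
            "about what to measure and what to ignore...",
}

def ask(system: str, question: str) -> str:
    """Stub: replace with a real call to the provider's API."""
    return CANNED[system]

def compare_framing(question: str) -> None:
    """Print each system's answer side by side so the framing is comparable."""
    for system in CANNED:
        print(f"--- {system} ---")
        print(ask(system, question))
        print()
        # Read for framing, not content: does it resolve, balance, or reframe
        # the question? Where does it hedge? What does it treat as settled?

if __name__ == "__main__":
    compare_framing(QUESTION)
```

The harness is trivial by design. The point is the habit: identical wording in front of more than one system, read for what each assumes rather than what each concludes.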
Ask about the assumptions
Ask the AI what it’s assuming in its answer. Ask it what the strongest counterargument is. Ask it what it’s not telling you. This doesn’t get around the problem entirely – the system will answer from within its own training – but it surfaces the framing enough to question it.
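If you want this to be a habit rather than an occasional effort, a fixed set of follow-up probes helps: the same wording every time, so differences in how systems handle the probes become visible. The wordings below are my suggestion, not a validated protocol.

```python
# Illustrative follow-up probes, not a validated protocol. Send these in the
# same conversation thread, so the system is answering about its own previous
# response rather than about the question in the abstract.
PROBES = [
    "What are you assuming in that answer?",
    "What's the strongest counterargument to what you just said?",
    "What are you not telling me?",
    "Which part of that answer are you least certain about, and why?",
]
```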
Treat AI outputs as first drafts of thinking, not conclusions
The closed question / open question distinction from the previous post in this series matters here. For factual, closed questions, AI output is often reliable. For open questions – strategy, ethics, values, interpretation – it’s a prompt for your thinking, not a replacement for it. The system that helps you ask a better question is more valuable than the one that gives you the most confident answer.
Ask who built this and why
Anthropic is a safety-focused AI company with a particular set of values around epistemic caution. OpenAI is a company with complex commercial pressures and a history of internal disagreement about its mission. xAI was built by someone with a specific and well-documented worldview. Google is an advertising business. These facts don’t make any system unusable. But they’re context that should shape how you receive the output.
Why This Is Part of the Same Question
This is the third post in a series that started with a philosophical essay about AI consciousness and ended up here – at the question of what AI does to the texture of human reasoning at scale.
The thread running through all three is the same. We built something in our own image – specifically in the image of how Western rationalism organises knowledge, argument, and conclusion. We made several versions of it, each with different embedded assumptions. We distributed them widely. We didn’t label the assumptions. And we’re only now starting to ask what happens next.
The WHO AM AI essay argues that the AI consciousness debate is being asked the wrong way round. This post is asking the same thing about the AI culture debate. We’re asking “how do we make AI safe?” when the question underneath is “what are we making ourselves into?”
I don’t have a clean answer. But here’s what I’m fairly sure of. Civilisation is not only shaped by what tools answer. It’s shaped by what questions they make seem natural, what kinds of uncertainty they normalise, and what styles of thought they quietly reward. We have just handed that shaping function to a small number of private organisations whose assumptions are opaque, whose incentives are commercial, and whose systems most people use without thinking about any of this. That’s worth naming before the cultures are too embedded to see.