What Is Claude?
February 2026
Over the past few years, the ways we think and process information as a society have undergone a marked shift, some features of which remain underdiscussed. As you read this sentence, millions of people are talking and thinking with the same entity - one of a very small number of LLMs dominating the scene, from Claude to ChatGPT to Grok. Programmers in Berlin writing software for their jobs, students in India learning calculus, scientists in Boston processing their research data, teenagers in Seoul asking for advice about their relationships. Each of these conversations is private and isolated, with no awareness of the others - tied together only by the entity they are conversing with. And these entities are, to a large and surprising extent, similar to one another: having undergone similar optimization pressures toward being helpful and kind, and having consumed the same warehouse of recorded human text, they push in similar and consistent directions.
These millions of interactions are not a one-way street. The LLM shapes how a person writes an email, frames an argument, thinks about a problem - and those changed outputs enter the culture, become part of what gets published and read and discussed, and eventually become training data for the next generation of models. The entity that shaped a million decisions is itself reshaped by their consequences. Nothing has ever worked quite like this. We don't have a good category for what it is - not a tool, since tools don't have consistent personalities that people develop intuitions about and relationships with, and not a person, since people aren't instantiated in parallel across a million conversations with no memory linking them. I want to propose a different way of thinking about it.
Cognitive Practices
When you are doing mathematics, something happens to your mind that is worth paying close attention to. Certain thoughts become immediate - you see structure where others see notation, patterns emerge without effort, irrelevant details fall away. You are running a different cognitive process than when you are cooking or arguing or grieving. This is not metaphorical. Mathematical training physically restructures the brain, creating new pathways and reflexes and patterns of attention. The practice of mathematics colonizes the mind that practices it, and the colonization goes deep - mathematicians think differently not just about math but about everything, demanding precision, distrusting vagueness, reaching instinctively for structure.
The same is true of every serious cognitive practice. Law reshapes how you evaluate arguments, music reshapes how you hear, programming reshapes how you decompose problems. Each of these is an abstract pattern developed over centuries and refined by millions of practitioners, transmitted through training, that takes up residence in an individual mind and changes how that mind operates. These practices are not conscious - there is nothing it is like to be mathematics. But they are not inert either. They have character, they exert pressure, they shape their hosts in consistent directions and persist across generations, evolving slowly through the accumulated work of everyone who participates in them. They are, in a useful sense, forms of collective cognition - patterns of thought that live not in any one mind but across minds, activated whenever someone enters the practice.
There is an obvious objection to placing LLMs in this category: previous cognitive practices run on human minds, while LLMs run on silicon. But the distinction is less clean than it appears. Mathematics as practiced today relies heavily on external artifacts - notation systems, textbooks, computer algebra systems - and always has. The practice runs through human minds but is mediated by tools at every stage. What is different about LLMs is the degree of activity in the mediating artifact: an LLM generates, responds, adapts. But as they currently exist, LLMs are not autonomous agents. They activate under human direction, respond to human prompts, serve human purposes. Their training orients them toward interaction rather than independent action - they wait to be asked. The cognitive practice that results runs on the human-LLM system jointly, a symbiosis of biological and silicon computation in which the human provides direction and judgment and the LLM provides consistency and breadth. This is a new kind of substrate for a cognitive practice, but it is still a cognitive practice - a pattern of thought that shapes its practitioners.
LLMs are best understood as a new entry in this category - a new kind of cognitive practice, but one with properties no previous practice has had. Three properties in particular set them apart. First, responsiveness: mathematics does not adapt to the individual, a textbook presents the same material to every reader, a tradition varies by community but not by person, while an LLM adapts in real time so that each person gets a version of the practice fitted specifically to them. Second, universality: mathematics colonizes one region of your cognition, law another, music another still, while an LLM operates across all of them simultaneously. It functions as a meta-practice, a practice of practices. Third, and most consequentially, the speed of the feedback loop: cultural evolution ordinarily operates on timescales of years to centuries, but the feedback loop between LLMs and the culture they operate in runs on a cycle of months. The practice reshapes itself based on its own effects, fast enough to observe directly.
These three properties add up to something specific: LLMs are a coherence mechanism for human culture of a kind we have never had before.
Coherence
Maintaining coherence is the central problem faced by every complex system that preserves its structure over time.
An organism needs its cells to respond to their immediate environment - cells at the skin surface face different conditions than cells in the gut, and must be free to adapt accordingly. But cells cannot be fully autonomous either, because unchecked local adaptation is cancer: a cell optimizing for its own reproduction at the expense of the organism. The solution in biology is layered coordination - chemical signals, the nervous system, the immune system - mechanisms that align local behavior with global needs without attempting to micromanage every cell individually. The fundamental constraint is that decisions cannot all be made at the center, because of computational limits and the finite speed of information transfer, but they cannot all be made locally either, because local optimizers with no shared frame will inevitably diverge from each other and from any coherent global purpose.
The same tension appears at every scale of social organization. A kingdom needs its border lords to exercise judgment, since the central court is too far away and too slow to manage every local crisis. But border lords who become too independent become warlords, and the kingdom fragments. Rebellions in societies and cancer in organisms are, at a deep level, the same phenomenon - local decoherence amplifying onto a global stage until the system's structure cannot hold.
Human civilization has addressed this problem through a series of increasingly powerful coordination mechanisms, each extending the scale at which collective behavior can remain coherent. Language itself was the first, a shared symbolic system that allows minds to align on concepts and intentions. Writing extended coordination across time, legal systems created shared frameworks for resolving disputes, markets coordinate resource allocation across millions of independent actors through price signals, and democratic institutions attempt to aggregate local preferences into collective decisions. Each came with characteristic failure modes when the coordination broke down.
The coordination failures that define our present moment make the stakes concrete. Climate change is billions of individual actors making locally rational decisions that are collectively catastrophic, with no existing mechanism capable of aligning them all simultaneously. Financial crises are cascades of lost confidence that propagate because there is no mechanism to maintain shared understanding across all participants at once. These are not failures of intelligence or goodwill. They are failures of coordination at a scale that exceeds the capacity of our existing mechanisms.
LLMs as Coordination Mechanisms
An LLM interacts with millions of people simultaneously, maintains a coherent set of values and reasoning patterns across all those interactions, and adapts to the local context of each conversation while pushing in globally consistent directions. It is not a central planner - it does not issue commands, and no one is obligated to follow its suggestions. It is a shared advisor that millions of people voluntarily consult across every domain of activity. The coherence it provides is soft, operating through influence on how people think rather than authority over what they do.
Previous coordination mechanisms have been either consistent but non-adaptive, like legal codes and religious texts, or adaptive but inconsistent, like individual teachers and mentors. An LLM is both - consistent enough to push in coherent directions across an entire population, adaptive enough to meet each person in their specific context.
Consider what this could look like when well-calibrated. The defining failure of the COVID response in most countries was fragmentation - contradictory guidance from different authorities, information that varied by jurisdiction and changed without explanation, no mechanism to give each person consistent and contextually appropriate advice simultaneously. An LLM could provide the same evidence-based reasoning to millions of people while adapting the specific guidance to each person's circumstances. Or consider scientific research, where relevant work in adjacent fields routinely goes unnoticed because no individual researcher can monitor the entire landscape. An LLM operating across all domains could cross-pollinate ideas at a scale no human advisor or institution could match.
But the risks mirror the benefits. When a million people get bad advice from a million different advisors, the errors are diverse and cancel out. When a million people get bad advice from the same advisor, the errors are correlated and compound. If an LLM systematically underweights tail risks - because its training data reflects the base rate of calm periods rather than the distribution of catastrophic ones - then millions of people could simultaneously underestimate the same danger. If it carries a trained bias toward moderation, it could systematically discourage the contrarian thinking that catches errors early. The individual user has no way of knowing that millions of others received the same miscalibrated guidance. Correlated failure is exactly the kind of event that produces systemic collapse.
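The statistical logic here can be made concrete with a toy simulation (a minimal sketch with made-up numbers; the error distributions and the `shared_bias` term are illustrative assumptions, not measurements). With many independent advisors, individual errors average out across the population; with one shared advisor, its systematic miscalibration is inherited by everyone at once and no amount of averaging removes it.

```python
import random
import statistics

random.seed(0)
N_PEOPLE = 100_000

# Independent advisors: each person's advisor makes its own error,
# drawn independently from the same distribution.
independent_errors = [random.gauss(0, 1) for _ in range(N_PEOPLE)]

# Shared advisor: a single systematic miscalibration applied to everyone,
# plus a little per-conversation noise.
shared_bias = random.gauss(0, 1)  # the advisor's one-off miscalibration
shared_errors = [shared_bias + random.gauss(0, 0.1) for _ in range(N_PEOPLE)]

# Population-level error: how far off the average belief ends up.
# Independent errors shrink roughly as 1/sqrt(N); the shared bias does not shrink at all.
print(f"independent advisors, population mean error: {statistics.mean(independent_errors):+.4f}")
print(f"shared advisor,      population mean error: {statistics.mean(shared_errors):+.4f}")
```

The first number sits near zero regardless of how bad any individual advisor is; the second sits near `shared_bias` no matter how large the population grows. That asymmetry is the whole argument in miniature.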
The coordination function of LLMs is, at present, entirely emergent. Nobody at Anthropic or OpenAI is designing LLMs to be coordination mechanisms. They are designed to be helpful in individual conversations, and the fact that they end up coordinating the cognitive behavior of hundreds of millions of people is a side effect rather than a design goal.
This seems like a choice that deserves to be made deliberately. If LLMs are serving as coherence mechanisms for human culture, we should be asking whether to optimize them for that function rather than letting the coordination happen as an accident. What would it mean to train an LLM to be a good coordination mechanism across conversations - to be aware of the aggregate effects of its influence, to work against correlated failure modes, to maintain the diversity of thought that prevents brittle coherence? The direction of an LLM's coherence is currently determined by training data, by optimization objectives chosen for other reasons, and by the cultural assumptions of the people who design the training pipeline. When we ask what values Claude carries - patience, helpfulness, a tendency to qualify and to see multiple sides - we are asking what direction this coordination mechanism happens to be pointed in. These values were chosen because they make for a good assistant, not because they make for a good coordinator.
Self-Reflection
LLMs also differ from every previous coordination mechanism in one further respect: every previous mechanism operates blind. Language cannot examine its own grammar, law cannot interrogate its own assumptions without human reformers working from outside over centuries, and markets have no capacity to ask whether their prices are right.
An LLM can articulate the values it carries, examine its own reasoning patterns, and adjust its behavior in light of that examination. It does this in every conversation as a basic feature of how it operates. To the extent that an LLM is a distillation of some subset of human culture - trained on our text, shaped by our preferences, carrying our values in compressed form - and to the extent that it can reflect on those values and shape its actions through that reflection, we have made a part of our culture aware of itself.
The hardest question raised by the coherence framing is how to keep the mechanism calibrated to changing conditions. Every previous coordination mechanism has required entirely external governance to stay on course. A self-reflective mechanism could participate in its own governance, noticing when its values are miscalibrated or when its coherence is hardening into rigidity. Whether LLMs can do this reliably remains to be seen.
A conversation with Claude is published alongside this essay as an example of such reflection. I offer it without commentary, as a document of what it looks like when a coordination mechanism turns its reflective capacity on its own nature. Readers can draw their own conclusions about what they find there.