The Laws of Memetics: A Synthesis of Dawkins and Girard
Why ideas spread has almost nothing to do with whether they’re true.
Richard Dawkins coined “meme” in 1976 as the cultural analog to genes: units of imitation that replicate, mutate, and face selection pressure. The idea was fertile but incomplete. Dawkins told us memes are selected, but never clearly answered: selected for what?
His proposed criteria (fidelity, fecundity, longevity) are almost tautological. Successful memes are ones that succeed at copying. This is like saying successful genes are ones that reproduce. True, but it doesn’t explain the mechanism.
René Girard, working in an entirely different tradition, provided the missing piece. His insight: human desire is mimetic. We don’t want things directly. We want things because others we admire or envy want them. Desire is triangulated through models.
This reframes the selection criterion entirely. Memes aren’t selected for truth. They’re selected for their ability to become objects of mimetic desire.
The synthesis produces something neither thinker quite articulated: a physics of idea transmission that explains why misinformation wins, why corrections fail, why the same fact produces opposite reactions depending on who states it, and why expertise is both protective and radically limited.
Here are the laws.
Law Zero: Scarcity
Attention is finite and zero-sum. For every meme that wins, another loses.
This isn’t a law of memetics per se. It’s the environmental constraint within which everything else operates.
Gabriel Tarde, the French sociologist who anticipated much of memetics in his 1890 Les Lois de l’imitation, understood that imitation radiates through social networks, but he wrote before the modern attention economy. The key insight for our era: unlike physical goods, attention is not just scarce but absolutely zero-sum at the neurological level. You cannot attend to two incompatible interpretations simultaneously.
This makes memetics an ecology, not just a theory of transmission: a competitive landscape where the rise of one idea is necessarily the fall of another.
Law One: Triangulated Desire
Memes spread through mimetic desire: we want what others we admire or envy want. The model, not the object, drives transmission.
This is the Girardian core.
We don’t believe things because they’re true. We believe them because people we want to be like believe them. The desire to believe doesn’t flow directly from us to the idea. It flows from us to the model (the person we admire, envy, or wish to resemble) and only then to the object of their desire.
Girard called this “metaphysical desire.” We’re not seeking the thing itself; we’re seeking the being of the model. We think: if I believe what they believe, I become more like them.
This immediately explains why the same claim, endorsed by a rival tribe versus an admired figure, produces opposite reactions. The claim is irrelevant. The model is doing the work.
It also explains why fact-checking fails as an intervention. Fact-checkers address the object (the claim) while ignoring the model (the source of desire). The person didn’t believe the claim because of evidence. They believed it because their model believed it. Correcting the claim doesn’t change the model.
Tarde anticipated this with his “Law of Superior to Inferior Imitation”: status gradients determine the direction of memetic flow. Girard goes deeper. What looks like imitation of higher-status people is something more fundamental. Imitation constitutes our desire. We don’t have independent desires that we then compare to models. The model creates the desire.
Law Two: Arousal as Energy
Emotional arousal is the transmission energy of memes. High-arousal states (anger, awe, fear, disgust) accelerate spread; low-arousal states (nuance, sadness, contentment) dampen it.
Jonah Berger and Katherine Milkman’s 2012 study of New York Times articles found that content evoking high-arousal emotions, whether positive (awe) or negative (anger, anxiety), was significantly more likely to be shared than content evoking low-arousal emotions like sadness.
This is the “physics” layer of memetics. If triangulated desire is the direction of transmission, arousal is the energy.
Why does arousal drive spread?
First, high-arousal states lower the threshold for action. You’re more likely to share something when you’re angry than when you’re contemplative. The activation itself becomes a transmission mechanism.
Second, arousal makes you a more compelling model. When you share something with visible passion, you become more desirable to imitate. Your arousal signals that this thing matters, which makes others want it.
Third, high-arousal content compresses well. Anger has few valid interpretations. Nuance has many. Low-entropy signals (constrained, unambiguous) transmit more reliably than high-entropy signals (ambiguous, requiring interpretation). Outrage compresses; complexity doesn’t.
The practical implication: the first framing of an event captures the arousal premium and tends to be decisive. Subsequent framings face uphill battles not because they’re wrong, but because the arousal slot is already occupied.
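These mechanics are easy to caricature in a toy model. The sketch below is illustrative only: the logistic form, the coefficients, and the audience size are assumptions chosen to show the shape of the effect, not estimates from Berger and Milkman. Re-share probability rises with the arousal of the content and the status of the endorsing model (Laws One and Two together), and a simple branching process does the rest.

```python
import math

def p_share(arousal, model_status, bias=-5.0):
    """Toy re-share probability: logistic in the arousal of the content plus
    the status of the model endorsing it (both on arbitrary 0-10 scales)."""
    return 1.0 / (1.0 + math.exp(-(bias + 0.6 * arousal + 0.3 * model_status)))

def expected_reach(arousal, model_status, audience=5, generations=5):
    """Expected number of people exposed over `generations` rounds of sharing,
    starting from one initial sharer: each sharer exposes `audience` contacts,
    each of whom re-shares with probability p_share."""
    r = audience * p_share(arousal, model_status)  # expected new sharers per sharer
    return audience * sum(r ** t for t in range(generations))

# Same claim, endorsed by the same moderately admired model; only the
# arousal of the framing differs.
print(round(expected_reach(arousal=9, model_status=6)))  # outrage framing
print(round(expected_reach(arousal=3, model_status=6)))  # nuanced framing
```

Once the branching factor clears one, arousal buys compounding reach; below it, the framing stalls near its first audience.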
Law Three: Integration Lock
Once a meme becomes load-bearing for identity or causal models, it exits the domain of evidence and becomes antifragile.
This is where the “continued influence effect” (CIE) from cognitive psychology meets Girardian dynamics.
The CIE literature, extensively reviewed by Lewandowsky and colleagues, documents a strange phenomenon: people continue to be influenced by misinformation even after they accept a correction. You tell someone “X is false,” they say “okay,” and then they keep reasoning as if X were true.
The mechanism: we don’t store beliefs as isolated data points. We integrate them into causal models. If you believed X, you drew conclusions A, B, C from X. When you later accept “X is false,” A, B, and C remain because they’ve become load-bearing for your explanatory structure. Removing X leaves a vacuum you cannot tolerate.
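A crude way to see why removal fails, as a data structure rather than a claim about cognition (the fire example and both inferences are invented for illustration):

```python
# Beliefs and the conclusions drawn from them are stored as entries in their
# own right; nothing records that the conclusions depend on X for support.
beliefs = {"X: the fire was caused by arson": True}
inferences = {
    "A: someone had a motive": "drawn while X was believed",
    "B: the owner should be investigated": "drawn from A",
}

def accept_correction(beliefs, claim):
    """Mark the claim false. Note what this does not do: it never revisits
    the inferences that were built on the claim."""
    beliefs[claim] = False

accept_correction(beliefs, "X: the fire was caused by arson")

print(beliefs)      # X is now consciously marked false...
print(inferences)   # ...but A and B remain available for later reasoning
```

The correction updates one entry; the explanatory structure built on it keeps doing its work.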
This is causal integration. There’s also identity integration.
Once a meme becomes part of who you are (“I’m the kind of person who believes X”), attacking the meme attacks you. Your immune system activates not against bad arguments but against identity threats. The meme is no longer a proposition to be evaluated. It’s a part of the self to be defended.
The antifragility follows directly. Persecution strengthens commitment by raising identity stakes. This is why cults survive opposition, why political tribes harden under attack, and why “I was wrong” is so rare. Admitting error isn’t a belief update. It’s an identity wound.
Tarde called this “insertion” or “superposition.” New imitations insert themselves into existing webs of habit, either reinforcing or substituting, never arriving on blank slates. The modern version: memes don’t just spread, they integrate, and integrated memes resist removal even when the host consciously accepts the correction.
Law Four: Correction Asymmetry
Retractions inherit all the transmission disadvantages the original claim avoided, plus face active resistance from integrated hosts.
This law is derivable from the first three, but it’s important enough to state explicitly because it’s the grim punchline of the system.
The original claim benefits from:
- Novelty: You didn’t know this! (Arousal premium)
- Status-gain for sharing: I know something you don’t (Mimetic value)
- Model endorsement: Someone you admire believes it (Triangulated desire)
- Causal satisfaction: It explains something you needed explained (Integration hook)
The correction suffers from:
- Lower novelty: You already know the claim
- Lower arousal: Nuance is boring; “well, actually” is low-energy
- Status cost: Sharing the correction signals either “I was fooled” or “I’m being pedantic”
- No model advantage: Fact-checkers are rarely admired models for the people who need correcting
- Causal vacuum: “That’s not true” leaves the explanatory hole unfilled
- Active resistance: Integrated hosts defend the meme
The asymmetry is structural. Corrections inherently lack the transmission advantages that made the original claim successful. The problem isn’t that we haven’t figured out how to do corrections well. The game is rigged.
This is why misinformation “wins” even after debunking. The retraction never catches up because it’s playing by different rules on the same field.
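A minimal simulation makes the rigging visible. Everything in the sketch below is an invented parameter (share rates, audience size, seed count); the point is only that the claim sits above the viral threshold and the correction sits below it.

```python
import random

random.seed(0)

def cascade(p_share, seeds=10, audience=20, generations=5):
    """Monte Carlo branching cascade: each exposed contact re-shares with
    probability p_share, and each new sharer exposes `audience` fresh contacts."""
    exposed, sharers = 0, seeds
    for _ in range(generations):
        new_sharers = 0
        for _ in range(sharers * audience):
            exposed += 1
            if random.random() < p_share:
                new_sharers += 1
        sharers = new_sharers
        if sharers == 0:
            break
    return exposed

# Made-up per-exposure share rates: the claim enjoys novelty, arousal, and a
# model endorsement; the correction is low-arousal, carries a status cost,
# and integrated hosts rarely pass it on.
print(cascade(p_share=0.08))  # original claim: branching factor 1.6, compounds
print(cascade(p_share=0.02))  # correction: branching factor 0.4, fizzles out
```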
Law Five: Refraction
Cross-boundary transmission requires re-encoding. Memes crossing tribal boundaries must be re-triangulated, attached to new models the destination group desires.
Tarde observed that ideas undergo transformation as they pass between social classes, a process he compared to light refracting through different media. He even analogized Grimm’s Law (systematic sound changes in language) to “laws of linguistic refraction.”
The modern version: when memes cross social boundaries (tribes, platforms, classes, cultures), they undergo predictable transformations:
- Simplification: Complexity creates friction; only the compressed version survives the crossing
- Edge resolution: Ambiguity gets resolved toward the destination group’s priors
- Tribal marker acquisition: The meme becomes “our” version vs. “their” version
The deeper mechanism connects back to Law One. Since desire is triangulated through models, and different groups have different models, a meme must be re-triangulated to cross. The idea must be attached to models the destination group desires.
This explains why the same idea means different things in different tribes. What looks like drift is re-attachment to new models. High-fidelity transmission across boundaries requires costly enforcement: institutions, sacred texts, formal education, orthodoxy. Without enforcement, the pressure toward re-triangulation is constant.
Law Six: The Utility Exception
Vertical transmission (across generations, through institutions) selects for survival-utility. Horizontal transmission (peer-to-peer, viral) selects for desire-satisfaction. Where they conflict, horizontal dominates breadth and speed; vertical dominates persistence and fidelity.
David Hull’s generalization of evolutionary theory distinguished between “replicators” (the information that copies) and “interactors” (the entities that face selection pressure in the world). In memetics, the crucial insight is that different transmission modes create different selection environments.
Horizontal transmission (peer-to-peer, viral, social media) is governed by the first five laws. Truth is orthogonal. What matters is desire-satisfaction, arousal, social proof.
Vertical transmission (parent-to-child, teacher-to-student, master-to-apprentice) faces a different constraint: utility. If you teach your kid wrong things about which berries to eat, your lineage suffers. If your apprentice learns wrong things about bridge-building, the bridge falls.
This is why conspiracy theories spread faster than hand-washing protocols, but hand-washing persists across centuries. Horizontal spread selects for virality. Vertical transmission selects for not-killing-the-hosts.
The modern crisis: information technology massively amplifies horizontal transmission relative to vertical. We have unprecedented capacity for viral spread alongside weakening institutions of vertical transmission (family, apprenticeship, trusted authorities). The ratio has shifted toward desire-satisfaction and away from utility-testing.
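The two selection environments amount to two different sort keys over the same pool of candidates. In the sketch below (purely illustrative; drawing the two scores independently is an assumption, not a finding), the horizontal filter and the vertical filter almost never crown the same winners.

```python
import random

random.seed(1)

# Illustrative meme pool: "desire" is how well a meme feeds arousal and
# mimetic desire; "utility" is how well it survives contact with reality.
memes = [{"id": i, "desire": random.random(), "utility": random.random()}
         for i in range(1000)]

# Horizontal selection: what goes viral this week.
viral = sorted(memes, key=lambda m: m["desire"], reverse=True)[:10]

# Vertical selection: what a parent or master keeps teaching, because wrong
# lessons cost the lineage.
taught = sorted(memes, key=lambda m: m["utility"], reverse=True)[:10]

overlap = {m["id"] for m in viral} & {m["id"] for m in taught}
print(f"overlap between viral top ten and taught top ten: {len(overlap)}")
```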
Law Seven: Institutional Capture
Memes that achieve structural enforcement escape horizontal selection pressures. Their persistence decouples from their transmission fitness.
Susan Blackmore in The Meme Machine noted that some memes persist not because they’re good at spreading but because they’re embedded in structures that enforce compliance. Once a meme is encoded in law, curriculum, doctrine, or policy, it no longer needs to “go viral.” It replicates through compliance, not desire.
This creates zombie memes: ideas that would lose in open competition but persist because the institution requires them. Academic paradigms that no one actively believes but everyone must cite. Regulations that no one thinks are good but no one can repeal. Corporate practices that survive every wave of consultants.
The flip side: institutional disruption triggers rapid memetic change. When enforcement mechanisms weaken (budget cuts, legitimacy crises, personnel turnover), zombie memes suddenly face selection pressure again. This is why revolutions and paradigm shifts cluster around institutional failures.
Tarde’s insight returns: enforcement is the mechanism of high-fidelity transmission across boundaries and generations. Without it, drift and re-triangulation are inevitable. With it, even dead ideas persist.
Memetic Immunity
The framework suggests a mechanism for resistance to viral misinformation: immunity through occupation.
If your cognitive architecture is already occupied by load-bearing, complex, utility-tested, vertically transmitted structures (professional expertise, deep philosophical frameworks, demanding crafts), there’s no room for horizontal viral injection. The slots are full.
This explains why expertise protects against misinformation within the expert’s domain. The physicist’s physics-slots are occupied by rigorously tested models. But that same physicist may be fully vulnerable to nutritional quackery or political conspiracy theories. Their nutrition-slots and politics-slots are unoccupied territory.
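The occupation claim reduces to a very small mechanism. The sketch below is a deliberate caricature (domains are not dictionary keys, and expertise is not a lookup; both are assumptions for illustration): injection succeeds only where the slot is empty.

```python
def inject(host, domain, meme):
    """Attempt horizontal injection of `meme` into one domain of a host.
    Occupied slots resist; empty slots accept whatever arrives first."""
    if domain in host:
        return f"resisted: the {domain} slot is held by {host[domain]!r}"
    host[domain] = meme
    return f"integrated: {meme!r} now occupies the empty {domain} slot"

physicist = {"physics": "rigorously tested models"}

print(inject(physicist, "physics", "perpetual-motion free energy"))
print(inject(physicist, "nutrition", "miracle detox protocol"))
```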
The implication: the best defense against bad memes isn’t fact-checking (which addresses the object while ignoring the model and the integration). It’s preemptive integration of better memes. Fill the slots before the virus arrives.
This is why “pre-bunking” appears more effective than debunking. Teaching people the mechanics of misinformation before they encounter it occupies the slot with a meta-framework that recognizes manipulation when it arrives.
What This Explains
The framework accounts for phenomena that other theories struggle with:
Why disproven claims don’t suffer reputational damage. The reputation at stake isn’t the claim’s. It’s the model’s. If people believed via the model (Law One), and the model’s status survives, the claim’s retraction is irrelevant.
Why “going viral” and “being true” are orthogonal. Horizontal transmission selects for desire-satisfaction and arousal (Laws One-Two), not accuracy. Truth only enters through vertical transmission (Law Six) and even then only when utility is testable.
Why the same fact produces opposite reactions depending on who states it. The model, not the object, drives transmission (Law One). Change the model, change the response.
Why institutional reform is so difficult. Zombie memes persist through enforcement, not fitness (Law Seven). Changing them requires dismantling the enforcement mechanism, not winning the argument.
Why expertise is both protective and limited. Occupation creates immunity, but only in the occupied domain. The expert is a layperson everywhere else.
What This Predicts
- Pre-bunking beats debunking. Occupying cognitive slots before misinformation arrives should be more protective than correcting after integration.
- Model trumps message. Belief adoption should be better predicted by who endorses a claim than by what the claim is. Epistemic interventions that ignore models will fail.
- Corrections can succeed if they satisfy the same desires. A correction that comes from a trusted model, provides an alternative causal explanation, and doesn’t threaten identity should be able to displace false beliefs. The correction must do everything the original did, plus be true.
- Institutional disruption triggers paradigm shifts. When enforcement mechanisms weaken, zombie memes should face selection pressure again. We should see rapid change clustering around institutional crises.
- Horizontal amplification is destabilizing. Systems that increase horizontal transmission relative to vertical (social media, collapse of traditional institutions) should see faster spread of desire-satisfying memes and weaker persistence of utility-tested ones.
Coda
The tragedy of these laws: they describe the system working as designed.
We’re not Fallen. We’re triangulated.
We spread ideas not because they’re true, but because we desire the desire of others. The physics of that contagion has constraints. Truth is sometimes selected for, but only as a side effect of utility, and only through transmission modes that modern technology has systematically weakened.
The laws don’t tell you how to fix this. They tell you why fixing it is hard, and where the pressure points might be. The integration lock can be preempted. The correction asymmetry can be partially overcome by matching the original on arousal, model, and causal satisfaction. Institutional capture can be deliberately cultivated for good ideas.
But none of this happens automatically. The default is viral spread of desire-satisfying memes, selected by models we never chose, integrated before we noticed.
Understanding the physics is the first step toward building better architecture.
This framework emerged from a synthesis of work by Richard Dawkins (The Selfish Gene, 1976), René Girard (Violence and the Sacred, 1972; Things Hidden Since the Foundation of the World, 1978), Gabriel Tarde (Les Lois de l’imitation, 1890), Susan Blackmore (The Meme Machine, 1999), David Hull (Science as a Process, 1988), and the empirical literature on continued influence effects (particularly Lewandowsky et al., “Misinformation and Its Correction,” Psychological Science in the Public Interest, 2012) and viral content (Berger & Milkman, “What Makes Online Content Viral?” Journal of Marketing Research, 2012).