
On Systems, Granularity, and the Weird Affordances of Abstraction

Published: at 08:15 AM

Obsessed is not the right word. I have seen people obsessed, and whenever I get deeply interested in a thing, it’s not that. I can still see things in my peripheral vision. My inability to devote myself completely to one thing might be related to not being prone to addiction (at least that’s how it looks based on the people I know and have known). The thing I have been deeply interested in for the last few days is how systems can be incredibly complex, and yet there still are super simple analogies for explaining what they do and how they do it, analogies that are actually useful.

My first example here was the cell. I thought I had a pretty good understanding of what goes on within a cell, but it turns out I hadn’t gone deep enough and had just been satisfied with what a person might learn in intro bio at university. The cell is pretty small, but going from the cell down to the individual atoms that make it up adds a lot of complexity. The cell was one example I thought of, and one of the original ones that set me on this particular path. Another was society and companies, especially large companies. It’s entirely fair to describe McDonald’s as a place that sells burgers, makes a small profit on each transaction, and does that at a crazy scale. We can, however, take almost any part of their operation and try to understand it deeply, and we quickly find that things get rather complex once we decide to look closer.

The point here is mostly just that it’s cool that we have this ability. The affordances in our languages allow for this, and we can select the correct granularity to look at the thing based on the task at hand.

The Snap Example (or: Strategic Simplicity → Optimization Hell)

Another example I just thought of on the morning drive was tech, Snap in particular. I’ve never used it, but my older daughter does. I’m deep enough in tech that I was familiar with the business model despite never having been a user myself. I asked some questions about the user experience, the prevalence of ads, and so on.

Now to the actual example. At some point in the company’s history, a huge, important decision was made to do this and not that. A decision whose reasoning could easily be expressed in a couple of sentences. At the time of the decision, there were other alternatives (also easily expressed in a couple of sentences) that had wildly different dynamics. The decision to use this particular monetization model was made with very little data.

Once the decision had been made, tens, potentially hundreds, of people in a growth team or teams made it their life’s work to slice the user experience into tiny, tiny parts that they or their team would then optimize. Success might look like getting their particular KPI to move a couple of percentage points per quarter if they are lucky, and that KPI can itself be super arbitrary: something like being responsible for Streak Resurrection Rate (48h) on Android, tier-2 markets.

This is what I find fascinating: the strategic decision exists at one level of description—simple, expressible in sentences, made with incomplete information. But then it unfolds into this entire universe of complexity where people spend years optimizing metrics that didn’t even exist before the decision was made. And someone from the outside could describe the whole thing simply again: “Snap shows ads between Stories.” Both descriptions are accurate. Both are useful for different purposes.

There’s a cognitive scientist named David Marr who had a framework for this that fits perfectly. He said you can understand any information-processing system at three levels: the computational level (what does it do and why?), the algorithmic level (how does it do it?), and the implementation level (what’s the physical stuff that makes it happen?). The strategic decision at Snap—“we’re going with this monetization model”—is computational level. The growth teams figuring out how to optimize user engagement? That’s algorithmic. The actual code running on servers, the pixels on screens, the specific A/B tests? That’s implementation. You can understand Snap at any of these levels, and you need different levels for different questions.
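To make the levels concrete, here is a minimal Python sketch built around a made-up “rank stories for a user” task (the task, the data, and every function name are my invention, not anything Snap actually runs): the computational level is the contract described at the top, the algorithmic level is one of many interchangeable ways to satisfy it, and the implementation level is everything the final comment gestures at.

```python
# Marr's three levels, illustrated with a toy "rank stories for a user" task.
# (The task and all names here are illustrative, not Snap's actual system.)

# Computational level: WHAT is computed and WHY.
# "Given candidate stories and a user's interests, return the stories ordered
#  so that the most relevant ones come first."

def rank_stories(stories: list[dict], interests: set[str]) -> list[dict]:
    """Algorithmic level: HOW it's computed, one choice among many.

    Score each story by how many of its tags overlap with the user's
    interests, then sort by that score. A learned ranking model would be a
    different algorithm satisfying the same computational-level contract.
    """
    def score(story: dict) -> int:
        return len(interests & set(story["tags"]))
    return sorted(stories, key=score, reverse=True)

# Implementation level: the bytecode, the Timsort routine inside sorted(),
# the CPU caches. Real and necessary, but invisible at the two levels above.

if __name__ == "__main__":
    stories = [
        {"id": 1, "tags": ["sports", "news"]},
        {"id": 2, "tags": ["music"]},
        {"id": 3, "tags": ["music", "sports"]},
    ]
    print([s["id"] for s in rank_stories(stories, {"music", "sports"})])
    # -> [3, 1, 2]  (3 matches two interests; 1 and 2 tie and keep their order)
```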

What’s Actually Going On Here

I think what I’m noticing is that we can describe the same system at wildly different levels of granularity, and crucially, both descriptions can be accurate and useful depending on what we’re trying to do. It’s not that the simple version is “wrong” and the complex version is “right.” They’re both right, just right for different purposes.

Herbert Simon wrote about this in the 1960s (which, honestly, makes me feel less clever for only noticing this now, but whatever). He observed that complex systems tend to be “nearly decomposable.” You can break them into parts that mostly do their own thing, connected by relatively weak interactions. This is why hierarchies work. You don’t need to understand every protein interaction to understand what a cell does. You don’t need to know about Streak Resurrection Rate optimization to understand Snap’s business model.

What’s interesting about Simon’s idea is the time dimension: in the short term, each subsystem behaves pretty independently. In the long term, they all affect each other. So Snap’s growth team can optimize their metrics day-to-day without thinking about what the ad sales team is doing, but over months and years, their decisions compound and interact in ways that shape the whole company’s trajectory.
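Here is a toy simulation of that time-scale point (all the numbers are made up): two two-variable subsystems with strong internal couplings and weak cross-couplings, updated as a simple averaging process. After a few steps each block has settled internally while the blocks still look independent of each other; run it long enough and the weak links pull everything to a common value.

```python
import numpy as np

# A nearly decomposable system as a linear averaging process x(t+1) = A @ x(t):
# strong couplings inside each 2-variable block, weak couplings between blocks.
strong, weak = 0.45, 0.01
A = np.array([
    [0.50,   strong, weak,   weak  ],
    [strong, 0.50,   weak,   weak  ],
    [weak,   weak,   0.50,   strong],
    [weak,   weak,   strong, 0.50  ],
])
A = A / A.sum(axis=1, keepdims=True)   # normalize rows: each state becomes a weighted average

x = np.array([1.0, 0.0, 10.0, 9.0])    # block 1 starts low, block 2 starts high

for t in range(1, 201):
    x = A @ x
    if t in (3, 200):
        print(f"t={t:3d}  block1={x[:2].round(2)}  block2={x[2:].round(2)}")

# Typical output (values approximate):
#   t=  3  block1=[1.03 1.03]  block2=[8.97 8.97]  <- each block has equilibrated internally,
#                                                     but the two blocks are still far apart
#   t=200  block1=[5. 5.]      block2=[5. 5.]      <- the weak links eventually couple them
```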

The cell thing is a perfect example of this. There’s a biologist named Dennis Bray who points out that cells perform computation. They take inputs, process information, make decisions. That’s one level of description. But if you go down to the molecular level, it’s thousands of proteins bumping into each other, binding, unbinding, changing shape, triggering cascades. Both descriptions are true. The “cell as computer” abstraction is useful for thinking about what cells do. The molecular mess is what you need if you’re trying to actually intervene in the system. (Bray also points out that we probably can’t fully model these systems at the molecular level. There’s just too much complexity. Which is fine, because we don’t need to for most purposes.)
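Here is the same idea as a sketch, with the biology simplified to the point of caricature and every number invented: the “cell as computer” description is a one-line threshold function, the molecular description is thousands of receptors binding at random, and the clean threshold only exists in aggregate.

```python
import random

# Level 1: the "cell as computer" description, a clean threshold function.
def cell_decides(ligand_conc: float, threshold: float = 0.5) -> bool:
    """Computational-level story: activate if the signal is strong enough."""
    return ligand_conc > threshold

# Level 2: a (very) simplified molecular story. Each of n_receptors binds ligand
# independently with probability equal to the concentration, and the downstream
# pathway fires if more than half the receptors are bound at once.
# (n_receptors, the binding model, and the 50% cutoff are all illustrative.)
def cell_decides_molecular(ligand_conc: float, n_receptors: int = 10_000,
                           seed: int = 0) -> bool:
    rng = random.Random(seed)
    bound = sum(rng.random() < ligand_conc for _ in range(n_receptors))
    return bound / n_receptors > 0.5

if __name__ == "__main__":
    for conc in (0.2, 0.4, 0.6, 0.8):
        print(conc, cell_decides(conc), cell_decides_molecular(conc))
    # With 10,000 receptors the two descriptions agree at every concentration here.
    # Push the concentration close to the threshold or shrink n_receptors and the
    # molecular version starts flickering, which is exactly where the high-level
    # abstraction leaks.
```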

There’s also this guy Geoffrey West who studies scaling laws in complex systems—cities, companies, organisms. Turns out there are these surprisingly simple mathematical relationships that govern incredibly complex systems. A city twice the size of another city doesn’t need twice the gas stations—it needs about 85% more. This works across cities, across time periods, across cultures. Simple law, complex system. It’s the same thing again: the scaling law is useful for certain predictions even though it completely ignores all the details of how cities actually work.

What’s wild about West’s work is that cities and companies scale differently. Cities get more innovative and creative as they grow. Double the size, you get more than double the patents, wealth creation, etc. But companies scale in the opposite direction. They get more efficient but less innovative. A company twice the size is maybe 85% as innovative per employee. Which makes you wonder about Snap, or any large tech company. They’re fighting against this scaling law, trying to stay innovative while getting bigger. The growth teams optimizing metrics are maybe the symptom of that sublinear scaling.
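The scaling relationships themselves are just power laws, Y ≈ c·N^β, so the gas-station claim is a one-line calculation. Here is a quick sketch using the exponents usually quoted for West’s work (roughly 0.85 for infrastructure, roughly 1.15 for socioeconomic outputs like patents); the baseline of 100 is made up.

```python
# West-style scaling: a quantity Y grows with system size N as Y = c * N**beta.
# beta < 1 -> sublinear (economies of scale: infrastructure, much of company life)
# beta > 1 -> superlinear (increasing returns: patents, wages, wealth in cities)

def scaled(y_base: float, size_ratio: float, beta: float) -> float:
    """Y for a system size_ratio times bigger than a baseline producing y_base."""
    return y_base * size_ratio ** beta

# Doubling a city that has 100 gas stations and produces 100 patents (baseline
# numbers are invented; only the exponents carry the point):
gas_stations = scaled(100, 2, 0.85)   # ~180: roughly 80-85% more, not double
patents      = scaled(100, 2, 1.15)   # ~222: more than double

print(f"gas stations: {gas_stations:.0f}, patents: {patents:.0f}")
# gas stations: 180, patents: 222
```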

The Boundary Problem

What I keep wondering about is what happens at the boundaries between these levels. With the Snap example, who decided that “Streak Resurrection Rate (48h) on Android, tier-2 markets” was a thing someone should optimize? There’s this weird moment where someone at a higher level of abstraction creates a lower level of abstraction for someone else to work within. The strategic decision creates the optimization space. The optimization space didn’t exist before the decision.

And here’s where things get messy: translating between these levels loses fidelity. “We want to maximize user engagement” (strategic level) becomes “optimize for daily active users” becomes “increase streak resurrection rate” becomes “specifically focus on the 48-hour window on Android in tier-2 markets.” Each translation is a compression. Each compression loses context. By the time you’re at the bottom, the person doing the optimization might have no idea why this particular metric matters, or whether it even still aligns with the original strategic intent.

Plus, once you’ve created that metric, you’ve created an incentive structure. The person optimizing Streak Resurrection Rate gets evaluated on that metric. They get compensated based on it. So now they’re incentivized to optimize that specific thing, even if the broader strategic context has shifted. Even if “user engagement” now means something different than it did when the metric was created. The metric takes on a life of its own.

This happens in biology too. Evolution doesn’t “care” about individual protein interactions. It cares about whether the organism reproduces. But the protein interactions are what determine whether the organism reproduces. There are these levels that are sort of autonomous but also deeply connected. The higher levels constrain what the lower levels can do, but the lower levels determine whether the higher level strategy actually works.

In companies, the classic thing is that someone makes a strategic decision to “enter the Asian market” or whatever, and then that creates entire departments with their own subgoals and metrics that may or may not actually serve the original strategic intent. The levels can drift apart. The abstraction can stop being useful but everyone’s still optimizing within it.

Why This Matters (Maybe)

I think this matters for a few reasons, though I’m still working this out:

One: It explains why expertise is so domain-specific. Being really good at optimizing Streak Resurrection Rates doesn’t mean you’d be good at deciding whether to use that monetization model in the first place. They’re different levels of the system, requiring different types of thinking. The skills don’t transfer as much as we might expect.

Two: It might explain why interdisciplinary work is hard. Different fields often operate at different levels of granularity. An economist might model a company as a profit-maximizing agent (high level, simple). An organizational theorist might want to understand the internal decision-making processes (lower level, complex). They’re both studying “the company” but they’re not even really talking about the same thing.

Three: It suggests something about how we should make decisions. If you’re at the strategic level, you don’t need to know everything about the lower levels, but you need to know something about the constraints they impose. You can’t make good strategy if you have no sense of what’s actually implementable. And if you’re at the optimization level, sometimes you need to zoom out and ask whether the thing you’re optimizing even matters anymore.

There’s a thing that happens in organizations (and probably in other systems) where people at lower levels of abstraction lose sight of the higher level purpose. They’re optimizing their metric, but the metric was only ever a proxy for something that mattered strategically, and contexts change, but the metric keeps getting optimized because that’s what’s measured. The levels can become decoupled. That seems bad.

Where This Breaks Down

I don’t think this works for everything. Some systems might not be “nearly decomposable” in Simon’s sense. Maybe highly integrated systems where everything affects everything else? I’m not sure. Ecosystems maybe? Though even there we do use different levels of description (food webs vs. population dynamics vs. biochemistry).

Or maybe the issue is that some systems are so tightly coupled that the abstractions aren’t actually useful. Like, you can describe the financial system at a high level (“banks lend money, make profit on interest”) but maybe during a financial crisis this abstraction is actively misleading because everything is connected in ways the simple description hides.

I don’t know. Still thinking about this.

Wrapping Up

Maybe my inability to get completely obsessed with one thing is actually useful here. To see this pattern (that the same system can be described at multiple levels and both descriptions can be useful), you need to be able to zoom in and zoom out. You need to not be so deep in the protein interactions that you forget there’s a cell doing a job. You need to not be so focused on Streak Resurrection Rates that you forget there was a strategic decision that created that optimization space in the first place.

Or maybe I’m just rationalizing my scattered attention span. Could very well be just cope. Hard to say.

Either way, I think there’s something here about how we navigate complexity. Not by understanding everything at once, but by picking the right level of description for the question we’re asking. And having a sense that there are other levels, even if we’re not looking at them right now. The map is not the territory, but we have a whole stack of maps at different scales, and knowing which one to pull out seems like a useful skill.


Further Reading

If this interests you, here are some sources that explore these ideas more rigorously:

Systems Theory / Complexity:
Herbert A. Simon, “The Architecture of Complexity” (1962), later reprinted as a chapter in The Sciences of the Artificial (the nearly decomposable systems idea).

Biology:
Dennis Bray, Wetware: A Computer in Every Living Cell (2009) (cells as information-processing systems).

Organizations/Economics:
Geoffrey West, Scale (2017), especially the chapters on how cities and companies scale.

Cognitive Science/Philosophy:
David Marr, Vision (1982), especially the opening discussion of the three levels of analysis.

