Dedicated to Jeffrey Goldstein, Peter Allen and Paul Cilliers — friends and academic mentors whose thinking runs through every page of this series, and whose influence on the world continues long after them.
Every configuration that has ever existed or ever will exist was implicit in the initial conditions. You are reading this because it was always going to happen.
At the foundation of everything: simple rules. Applied uniformly. Forever. A cellular automaton divides space into discrete cells, each holding a finite state. A rule function maps each cell's current neighborhood to that cell's next state. No exceptions. No negotiation. No gaps.
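As a concrete illustration, here is a minimal sketch of that update loop in Python. The function name and the periodic boundary are illustrative choices, not part of the definition.

```python
def step(cells: list[int], rule: dict[tuple[int, int, int], int]) -> list[int]:
    """Advance the automaton one generation. Deterministic:
    same input, same output, no exceptions."""
    n = len(cells)
    # Periodic boundary: the grid wraps, so every cell has a full neighborhood.
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```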
Simple rules can produce universal computation. Rule 110 — eight transitions, two states, three-cell neighborhoods — is Turing-complete, a result proven by mathematician Matthew Cook. The broader insight that trivial local rules generate unbounded complexity traces back to von Neumann, Ulam, and Conway long before it became fashionable. The universe's ruleset need not be elaborate to produce everything we observe. The complexity is in the initial conditions and the length of the run.
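Those eight transitions can be written out explicitly. The sketch below unpacks them from the standard Wolfram rule number, in which bit k of the number gives the next state for the neighborhood whose three bits encode the integer k; the short run at the end reuses the illustrative `step` function above.

```python
# The eight transitions of Rule 110, unpacked from the Wolfram rule number:
# bit k of 110 is the next state for the neighborhood whose bits encode k.
RULE_110 = {
    ((k >> 2) & 1, (k >> 1) & 1, k & 1): (110 >> k) & 1
    for k in range(8)
}

# A single live cell, evolved for ten generations with step() from above.
cells = [0] * 40
cells[20] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, RULE_110)
```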
No causality escapes the grid. No cell acts of its own accord. Every state at time T+1 is a deterministic function of the state at time T. The entire future trajectory of the system is written in the moment the first cell is set.
This is not a metaphor. It is a hypothesis about the actual nature of physical reality: one consistent with every experimental result we have, and one that gains explanatory power precisely from what it implies about observers inside the system.
Consider a pattern within the automaton — not a static shape, but a dynamic, self-referential structure. A localized region that updates in ways that model the broader environment. That responds to inputs. That, in some functional sense, represents.
This quasi-entity exists entirely within the automaton. Every thought it has, every choice it appears to make, is a configuration of cells transitioning according to the same rules that govern everything else. It cannot step outside the grid. It cannot access the global state. It sees only what its immediate neighborhood provides.
From within, the future feels open. The past feels fixed but imperfectly recalled. The rules feel contingent — as though they could have been otherwise. This is not an illusion exactly. It is precisely what determinism looks like from the inside.
The question "do I have free will?" is not asked by an agent contemplating the automaton. It is asked by a pattern within it — using computational resources that are themselves cells in the grid, following rules they cannot transcend and cannot fully see.
The structure of the automaton is not entirely opaque to those within it. Occasionally the scaffolding shows through — moments when the underlying regularity becomes visible, when the rules leave unmistakable fingerprints on the phenomenal world.
The cellular automaton contains no chemistry, no biology, no mind. It contains only cells, states, and rules. Yet chemistry emerges. Biology emerges. Mind emerges. Each level is real — but only visible through the appropriate filter.
Each emergent level requires a filter — a measurement apparatus, a conceptual vocabulary, a theoretical framework that makes the level visible. Without the filter, the level does not appear in the data. The filter is not optional. It is constitutive. It is what makes the level real at all.
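One way to make "filter" operational, offered as a sketch: a coarse-graining map that projects many micro-configurations onto a single macro-description. The majority rule and the block size below are illustrative assumptions, not the only possible filter.

```python
def coarse_grain(cells: list[int], block: int = 4) -> list[int]:
    """Report, for each block of cells, only whether it is mostly alive.
    Distinct micro-configurations collapse onto the same macro-description."""
    return [int(2 * sum(cells[i:i + block]) >= block)
            for i in range(0, len(cells), block)]
```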
But here is the critical problem: the levels are not cleanly delineated. They blur into each other. Consciousness influences biology. Culture shapes cognition. The organism modifies its chemistry, which modifies the physics. The arrows do not run only upward.
Reductionism promises that we can decompose the system into levels and solve each independently. This is approximately true — and the approximation breaks down precisely where things become interesting.
Newton's laws are not wrong. They are local. Every scientific law holds within a jurisdiction defined by scale, energy, velocity, and complexity. Outside that jurisdiction the law fails — not because it is false, but because it was never universal. It was always an approximation to something deeper, valid within a particular regime of the grid.
The proliferation of laws is not evidence of scientific progress toward a final theory. It is evidence of our position: inside the system, unable to see the grid directly, constructing approximate jurisdictional maps from bounded local observation.
Reductionism teaches: to understand a complex system, decompose it into parts, understand the parts, and reconstruct the whole. This works — approximately. But the reconstruction step is the hard part, and it is where the method silently fails.
Each level of description introduces a vocabulary, a set of variables, a framework. These frameworks are not isomorphic. You cannot translate "temperature" back into the molecular positions and momenta of 10²³ particles. The information has been compressed. The compression is lossy. The bars below show the information retained at each level of description, relative to the complete CA state.
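To put a toy number on the loss, one can enumerate every micro-state of a small grid and count how many distinct macro-states survive the filter. This reuses the illustrative `coarse_grain` sketch above and is one assumed way to measure "information retained", not necessarily the chart's method.

```python
from itertools import product
from math import log2

N, BLOCK = 8, 4
micro = list(product([0, 1], repeat=N))                  # 2^8 = 256 micro-states
macro = {tuple(coarse_grain(list(m), BLOCK)) for m in micro}

print(f"micro-level: {log2(len(micro)):.0f} bits")       # 8 bits
print(f"macro-level: {log2(len(macro)):.0f} bits")       # 2 bits
print(f"discarded:   {log2(len(micro) / len(macro)):.0f} bits")
```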
The levels we observe do not come with sharp edges. The boundaries are our constructions, not the automaton's. We draw them where they are useful, not where they are real.
This does not mean science is wrong. It means science is a collection of useful fictions — maps that are not the territory, jurisdictional laws that are not the deep rules, emergent descriptions that are not the underlying computation. The maps are extraordinarily good. They are not the ground.
The automaton below is perfectly deterministic. Given these rules and your initial configuration, every future generation is already implicit in the present state. Paint the grid. Press play. Watch the future that was always going to happen, happen.
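For readers who would rather check the claim in code than in the widget, here is a sketch that runs one initial configuration twice and compares the trajectories bit for bit. It reuses the illustrative `step` and `RULE_110` sketches above.

```python
import random

rng = random.Random(42)                      # any fixed seed works
initial = [rng.randint(0, 1) for _ in range(64)]

def trajectory(cells: list[int], generations: int = 100) -> list[tuple[int, ...]]:
    """Record every generation of a run, starting from the given state."""
    history = [tuple(cells)]
    for _ in range(generations):
        cells = step(cells, RULE_110)
        history.append(tuple(cells))
    return history

# Two runs from the same configuration: not merely similar, identical.
assert trajectory(list(initial)) == trajectory(list(initial))
print("Two runs, one future.")
```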