After a couple of years in the making, it seemed a happy coincidence that my paper “Emergence of the Fused Spacetime from a Continuum Computing Construct of Reality” was published in Foundations of Physics on International Women’s Day last month.
The paper proposes a new explanation for why space and time are fused, based on the laws of computation. In this post, I’ll try to offer an insight into the making and the meaning of the paper.
Einstein produced the theory of relativity over a century ago, explaining how space and time are fused. Time slows down for objects moving close to the speed of light, and it also dilates in gravitational fields. But why does our universe have these puzzling behaviours?
It was that phase of the first year of a PhD where your life consists of reading, more reading, states of confusion, and then more reading. My PhD topic in computational physics meant I was in the dark depths of a textbook chapter on the numerical stability analysis of algorithms for partial differential equations (PDEs). That day, as I left the Cavendish Laboratory for my leafy cycle home, I began listening to a podcast in which Nick Bostrom was discussing his seminal paper: “Are You Living in a Computer Simulation?”.
Bostrom’s so-called Simulation Argument has been gaining attention for some time in philosophy, and more recently in physics and computer science. To be quite reductive, the central idea is this: if humans reach a level of technological maturity where we have the computing resources (and intentions) to simulate our own existence, and those simulations are accurate enough that they can generate their own simulations, then we obtain a fractal structure of reality. In that case, we are statistically more likely to be living in one of the many simulated realities than to be the single, original simulating reality. All very sci-fi.
In more tangible terms, since the advent of computing as a mode of scientific investigation, computational approaches have revolutionised many fields of inquiry. Whether or not we are simulated per se, there is a more foundational question worth exploring: could computational laws and information theory serve as a deeper, more fundamental building block of all of reality?
Some ideas had come to mind on the cycle, and I arrived home to sketch out the basic concept on the kitchen whiteboard. Fortunately, I had an astrophysicist housemate who heard me out. What if we assume a continuum computing construct, predicated on a core set of principles (discretisation, stability and optimisation), and apply this construct to the most fundamental physical laws: the (non-relativistic) conservation equations which underpin our governing physics?
The first step in a computational approach to solving this type of (hyperbolic) PDE is to make space discrete. Imagine a mesh composed of computational cells, where each cell contains a data set. It is just like how a photograph is discretised into pixels, and each pixel has one data attribute: colour. With all of space made up of discrete data points (a data set that describes everything at that point in space), the data must then also update in time, in discrete ‘iterates’. Imagine an old-fashioned projection of a series of pictures that, to the naked eye, makes a motion film.
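As a purely illustrative sketch (the particular cell fields below are my own example attributes, not taken from the paper), you can picture the construct as a list of cells, each carrying its own little data set, advanced snapshot by snapshot:

```python
from dataclasses import dataclass

# A toy picture of a discretised 1D space: each cell holds a small data set.
# The fields (density, velocity) are my own example attributes.
@dataclass
class Cell:
    density: float
    velocity: float

# Space as a mesh of cells, like pixels in a photograph.
mesh = [Cell(density=1.0, velocity=0.0) for _ in range(100)]

# Time advances in discrete 'iterates': each pass produces a new snapshot of
# the whole mesh, like one frame in a projected film. The update rule here is
# only a placeholder; the real one would come from the conservation equations.
def iterate(cells):
    return [Cell(c.density, c.velocity) for c in cells]

for _ in range(10):
    mesh = iterate(mesh)
```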
Now, there is a central law in computing - as derived on the textbook page I had earmarked earlier that day - which is a necessary condition for a continuum-type simulation to be stable. It combines three names - the Courant–Friedrichs–Lewy (CFL) condition - but this joint effort puts forward what is actually a rather simple concept. An algorithm that updates a data point in time calculates the updated state from the data of its neighbouring points in space. This pattern of neighbouring points is called a computational stencil. Imagine a ball rolling through one such stencil:
At discrete TIME_0, the left cell contains the information about the ball: its position and its velocity. During the discrete update step, the cell in the centre of the stencil receives this information from the adjoining cell: you could say it ‘knows about the ball’ because these cells communicate. If we update by a discrete step in time Δt to arrive at TIME_1, the new position and properties of the ball are calculated. If Δt is small, the physical ball moves to a position contained within the distance the information about the ball propagates. If Δt is large, the ball rolls beyond the extent of the numerical stencil. Since the ball has moved faster than the rate of information transfer between the cells, the next cells to be updated simply do not ‘know’ the ball is there. The information is lost, so to speak. Properties are not conserved, the physics breaks down, and the whole simulation becomes unstable.
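Here is a minimal numerical sketch of that breakdown, using 1D linear advection as a stand-in for the conservation equations (my own toy example, not the scheme from the paper). Each cell is updated from itself and its left-hand neighbour, and the simulation is run once with a small Δt and once with a Δt large enough for the ‘ball’ to outrun the stencil:

```python
import numpy as np

# 1D linear advection with an upwind stencil: each cell's new value is computed
# from itself and its left-hand neighbour (the computational stencil).
def advect(u, speed, dt, dx, steps):
    for _ in range(steps):
        u = u - speed * dt / dx * (u - np.roll(u, 1))  # periodic boundaries
    return u

dx = 0.01
speed = 1.0                                  # fastest signal speed in this toy problem
x = np.arange(0.0, 1.0, dx)
ball = np.exp(-((x - 0.5) / 0.05) ** 2)      # a smooth bump standing in for the ball

ok  = advect(ball, speed, dt=0.8 * dx / speed, steps=200)  # information keeps up with the ball
bad = advect(ball, speed, dt=1.5 * dx / speed, steps=200)  # the ball outruns the stencil

print(np.max(np.abs(ok)))   # stays of order one: stable
print(np.max(np.abs(bad)))  # grows enormously: the simulation has become unstable
```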
Therefore, for dynamical systems, the CFL condition states that there is a maximum time step for the discrete update of cell data which ensures a simulation is stable. When applied to our core set of conservation equations, this yields a central dependency between time, the distance between cells in space, and the fastest speed at which any information propagates: the speed of light.
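In its standard one-dimensional form (my notation here, not quoted from the paper), the condition reads:

$$ c \, \frac{\Delta t}{\Delta x} \;\le\; C_{\max} $$

where Δt is the discrete time step, Δx the spacing between neighbouring cells, c the fastest signal speed, and C_max a constant of order one set by the particular numerical scheme.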
Now, applying the computational optimisation principle, a slight inversion of traditional simulations is proposed: rather than forcing time to update uniformly across cells (by finding the smallest stable time-step globally and restricting all cells to it), global time-step equivalence is not enforced, so that every cell in space evolves at its own local maximum stable time-step. Every individual cell is therefore computationally optimised.
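The contrast is easy to see in a toy calculation (my own illustration, assuming each cell’s maximum stable step follows the CFL relation dt = C·dx/c):

```python
import numpy as np

# Global versus local time-stepping on a toy mesh (my own illustration).
# Each cell's maximum stable step is taken from the CFL relation dt = C * dx / c.
C, c = 0.9, 1.0
dx = np.array([1.0, 1.0, 0.25, 0.25, 1.0])   # a mesh with a refined region in the middle

local_dt = C * dx / c        # every cell advances at its own maximum stable step
global_dt = local_dt.min()   # the traditional choice: all cells wait for the most restrictive one

print(local_dt)              # [0.9   0.9   0.225 0.225 0.9  ]
print(global_dt)             # 0.225 -- coarse cells are held far below their own stable limit
```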
Evolving every cell simultaneously through discrete time iterates, under this discrete, stable and locally optimised computational construct, we observe the following:
- Across every boundary between computational cells, the stability constraint enforces a local coupling of space, time and the speed of light: this produces a fused spacetime on the macroscale
- In ‘flat spacetime’ (uniform computational cell size), fast-moving reference frames cause a relative slowing down of time (dilation) between frames at the macroscale: a special relativistic effect
- If the computational mesh contains regions of refinement (smaller cells) and regions of coarsening (larger cells), then a dilation of time inherently emerges across regions in space: a general relativistic effect (a small numerical sketch of this follows the list)
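To picture the last point, here is a toy calculation under one reading of the construct (my own interpretation: each cell advances by its local maximum stable CFL step, and one iterate is one local ‘tick’):

```python
# Toy illustration of time dilation across a non-uniform mesh (my own reading:
# each cell advances by its local maximum stable CFL step, dt = C * dx / c).
C, c = 0.9, 1.0
iterates = 1000

coarse_dx, refined_dx = 1.0, 0.1
coarse_elapsed  = iterates * C * coarse_dx / c    # 900.0 units of local time
refined_elapsed = iterates * C * refined_dx / c   #  90.0 units of local time

# After the same number of discrete iterates, far less time has elapsed in the
# refined region than in the coarse one: dilation emerges across regions of space.
print(coarse_elapsed, refined_elapsed)
```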
Some further computational deductions and discussions are also included in the paper:
- Why the speed of light must be finite (and our governing physics hyperbolic) in order for reality to be computable, based on theory from the complexity analysis of algorithms
- How gravity relates to computational mesh refinement based on an optimisation argument for information density and precision
- Why, in multiple spatial dimensions, light bends around regions of non-uniform mesh refinement
- How this theory supports the concept of disjunctive governing physics between scales: discrete computations at the lowest level which produce emergent continuous and relativistic behaviour at the macroscale. The implication is that a universal theory should perhaps be reconsidered in terms of a universal computation.
The paper therefore proposes a new explanation of why space and time may be fused, by demonstrating that, beginning only with the most fundamental (non-relativistic) physical laws, relativistic physics "falls out" naturally from the computational construct. Since this work demonstrates consistency with our known reality, rather than proposing new and testable hypotheses, the argument is ultimately a philosophical one.
It should be mentioned that I only continued working on this “crazy physics project” (tucked away in my home office, outside of PhD hours) because of the encouragement of some friends in the field who expressed fascination with the idea (and whose support I really appreciated).
And now, when discussing the paper, I am often asked the question: “So, are we living in a simulation?”
Honestly, who knows.
I think the importance of the paper is not the expanded speculation about simulated worlds, but rather, the demonstration of how new theories based on laws of computation could be a powerful alternative basis for exploring fundamental physics. It suggests ways in which the patterns and structures formed from computing and information theory could serve as useful models for deeper behaviours foundational to our reality.