James's Blog

Sharing random thoughts, stories and ideas.

Simulation of Reality

Posted: Nov 22, 2019
◷ 8 minute read

This isn’t going to be about whether or not we live in a simulation. Rather, I want to take a look at the limitations of simulating reality, from somewhat of a computational complexity angle. Some basic familiarity with theoretical computer science, such as what Turing machines are, would be helpful. Given the nature of this topic, pretty much everything below will be speculative, and highly likely to be wrong in ways beyond my imagination. The definitions and assumptions can be arbitrary and loose. The goal here is not to produce fully rigorous proofs and arguments, but really to get some healthy intellectual stimulation.


The main question to examine here is the following:

Can any reasonably sized, arbitrary subset of reality be perfectly simulated by some computational process with a positive speed-up?

Why “reasonably sized, arbitrary subset”? Well, without this constraint, the answer is trivially no, since reality is everything that exists, and if we were to simulate the entirety of reality, there would be no room left for any computational process to take place in. Even if we only tried to simulate a large percentage of reality (say 99%), we might not have enough room left for the computational process itself. On the other hand, if we restricted our simulation to an extremely small part of reality (say one cubic Planck length of volume), then the computation may be trivial, as not much can happen in that amount of space. Hence this vague condition; we are really looking for some small, finite chunk of reality that is still interesting enough to simulate.

By “perfectly simulated”, I mean able to produce perfect predictions of how reality behaves. This means accounting for the tiniest interactions of the smallest particles or forces, in order to produce the exact predictions of how the subset of the universe being simulated will change over time. The important thing here is that we are really looking for perfect predictions, as that’s the main purpose of simulations. So if there are ways to yield perfect predictions of reality without fully simulating all the interactions, that’s fine. We care about the end, not the means.

“Positive speed-up” is more or less straightforward: it means to have the computational process yield the predictions faster than the real speed of the unfolding of reality itself. In a sense, this computational process would be a fast-forward function (however slight) for the subset of reality that we are simulating.
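
To pin this down a bit (my own notation, nothing standard): let t_real be the wall-clock time the chosen subset of reality takes to evolve from one state to another, and t_sim be the time the computational process takes to output the corresponding perfect prediction:

```latex
% My own notation for the speed-up of a simulation:
%   t_real = time for the chosen subset of reality to evolve
%   t_sim  = time for the simulator to output the perfect prediction
\sigma = \frac{t_{\text{real}}}{t_{\text{sim}}},
\qquad \text{positive speed-up} \iff \sigma > 1
```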

So overall, this question is essentially asking whether some form of Laplace’s demon can predict the future for some limited part of the universe. Many people believe that the answer is trivially and obviously no, and take it for granted almost as an axiom. But I think there is more here to think about.


With the formulation above, and with one more condition, an easy argument for a negative answer can be made. It is a proof by contradiction, in a similar vein to one of the proofs for the undecidability of the halting problem. It roughly goes like this (I’ll make the key step a bit more explicit right after the list):

  1. Suppose computational processes that produce a positive speed-up in reality simulation exist, and that there is a finite, maximum speed-up that can be achieved (i.e. infinite speed-up is not possible)
  2. Let S be the optimal, fastest simulation process
  3. S is part of reality. Suppose S itself is “reasonably sized”
  4. Then we can use S to simulate itself. From 1, we should get a positive speed-up from this simulation
  5. But then, by nesting S inside its own simulation, we could predict things faster than the supposed maximum speed-up. This contradicts 1 and 2, as S is already the optimal simulation process
  6. Therefore S cannot exist
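
To make the contradiction in step 5 a bit more explicit, here is one way to write the argument down using the speed-up σ defined earlier. The formalization is mine, and it leans on one extra assumption: that speed-ups compose multiplicatively when one simulation runs inside another.

```latex
% sigma(P, R): speed-up that process P achieves when simulating region R.
% Step 1: a finite maximum speed-up exists:
\sigma_{\max} := \max_{P,\,R} \sigma(P, R) < \infty
% Steps 2-3: S attains it on some region R, and S is itself reasonably sized,
% so S is also a valid region to simulate, with a positive speed-up:
\sigma(S, R) = \sigma_{\max}, \qquad \sigma(S, S) > 1
% Steps 4-5: run S on a copy of S that is itself simulating R. If speed-ups
% compose multiplicatively, the nested process predicts R with speed-up
\sigma(S, S) \cdot \sigma(S, R) > \sigma_{\max}
% which contradicts the maximality of sigma_max.
```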

The additional, critical assumption made here is in point 3, that S itself is “reasonably sized”. If the computational process that can yield a speed-up over reality is somehow no longer reasonably sized, then the argument breaks down. So the above proof shows us that for such a faster-than-reality simulation to be possible, the computational process that performs the simulation must be larger than the maximum size of the volume of the universe that it can simulate. This seems obvious: a computer that can simulate a cubic meter of the universe perfectly, with some positive speed-up, is most likely larger than 1 cubic meter itself.

However, this causes a problem. How can something that is strictly larger generate predictions faster than something smaller? We all know that, at least as far as we understand today, there is a maximum speed of causation in our universe: the speed of light, c. Suppose such a computer, S, as described above, exists, and that it is capable of simulating any cubic meter of reality at some speed greater than 1.0x. Then, as we’ve shown above, S is necessarily larger than 1 cubic meter. We can then fill the cubic meter we want to simulate with nothing but photons, so that all interactions occur at the maximum speed of c. Handing this pathologically constructed “reality cube” to S as the subject to simulate would spell trouble, because if the computation requires a larger volume, how can it produce predictions faster than what is already transpiring at the maximum possible speed? We would need some faster-than-light interaction mechanism, which does not exist.
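
Here is a rough way to put numbers on that intuition, assuming the region to simulate is a cube of side d filled with photons, and assuming that producing a prediction requires at least one signal to cross the simulator’s own extent D > d; both simplifications are mine.

```latex
% The events we are trying to beat: a photon crossing the cube of side d takes
t_{\text{real}} \approx \frac{d}{c}
% The simulator: at least one internal signal must cross its extent D > d, so
t_{\text{sim}} \ge \frac{D}{c}
% Under these rough assumptions, the achievable speed-up is bounded by
\sigma = \frac{t_{\text{real}}}{t_{\text{sim}}} \le \frac{d}{D} < 1
```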

This argument is similar to the reasoning used in some computational complexity proofs relating time and space. In general, the run time of a Turing machine (or algorithm) is bounded below by the amount of tape it uses: each step can visit at most one new cell, so a computation that has to work through more configurations necessarily takes more steps.
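
The standard statement behind this is that space is bounded by time, since a Turing machine can visit at most one new tape cell per step:

```latex
% For any Turing machine M and input x:
\mathrm{space}_M(x) \le \mathrm{time}_M(x)
% or, at the level of complexity classes,
\mathrm{DTIME}\big(f(n)\big) \subseteq \mathrm{DSPACE}\big(f(n)\big)
```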


But are there other ways to get a computational speed-up for our simulator S? Without faster-than-light interactions, one other potential source of speed-up is state compression. Basically, if we can losslessly reduce the number of interactions in the simulated volume, we can perhaps perform the simulation and yield predictions faster. Suppose we know that 100 interactions between photons can be mapped to just 10 different but isomorphic (i.e. equivalent) interactions; then even with a machine that is 5x bigger in volume, we could theoretically achieve a 2x speed-up in our simulation.
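
To make the arithmetic of that hypothetical concrete, here is a toy calculation. The numbers (100 interactions, 10 equivalence classes, 5x volume) are the ones from the paragraph above; the assumption that simulation work scales linearly with the number of interactions, and that a larger machine pays a proportional cost for its extra volume, is my simplification.

```python
def compression_speedup(raw_interactions: int,
                        compressed_interactions: int,
                        volume_penalty: float) -> float:
    """Net speed-up, assuming work scales with the number of interactions
    simulated and a bigger machine pays a proportional cost for its volume."""
    compression_ratio = raw_interactions / compressed_interactions  # 100 -> 10, i.e. 10x less work
    return compression_ratio / volume_penalty                       # offset by the 5x larger machine

print(compression_speedup(100, 10, 5.0))  # -> 2.0, the 2x speed-up from the example
```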

Sadly, such a speed-up would be of little practical use for research. In order to perform the state compressions required for the simulation, we would have to know all the isomorphic compressive state mappings beforehand. This means that whatever predictions our simulator, S, can generate would be things we already knew. Imagine a cue ball hitting a group of billiard balls. It will take a few seconds for the end state to materialize in reality. If we were to simulate this, we could speed up the process by skipping directly to that end state (a form of compression), thereby eliminating all the intermediary interactions and states of balls hitting each other and the walls. But this requires us to know what the end state is prior to the simulation, making the simulation useless. So for S to be a faster-than-reality simulator, it cannot produce any new knowledge.
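
As a caricature of the billiards example: a simulator that skips straight to end states is really just a lookup into outcomes that were already known when the table was built. The table entries below are invented purely for illustration.

```python
# Precomputed mapping from an initial configuration to its end state. Building
# this table required already knowing (or having observed) each outcome.
known_outcomes = {
    "cue hits rack dead center": "balls scattered, 3-ball in the corner pocket",
    "cue grazes rack on the left": "rack barely moves, cue ball rolls to the rail",
}

def fast_forward(initial_state: str) -> str:
    """'Simulates' the break instantly, but only for configurations whose
    end state was already known beforehand."""
    if initial_state not in known_outcomes:
        raise KeyError("No precomputed end state: we would have to watch reality "
                       "(or simulate every intermediate collision) to find out.")
    return known_outcomes[initial_state]

print(fast_forward("cue hits rack dead center"))
```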

Regardless of its utility, the possibility of this type of state-compression-based speed-up seems to suggest that bigger volumes of space have some form of additional computational power that cannot be fully captured by the sum of separate, smaller volumes of space. Perhaps this power is a type of emergent property. The compressed states would require more space to operate in, but we would gain a time advantage. If time can be thought of as the 4th dimension, then this emergence can be seen as a way to “trade more 3D volume cost for less 4th-dimension cost” when performing computations/simulations.


So it doesn’t seem like a universal faster-than-reality simulator that can produce new information can exist. But what about a non-universal machine, one that can only simulate a subset of interactions, but can do so faster than reality and yield new, useful knowledge? This restriction opens up more potential sources of speed-up, and I think it makes such a simulator theoretically possible.

One potential source of simulation speed-up is interaction translation. Suppose we want to simulate a subset of reality whose interactions contain a mix of faster and slower processes (i.e. not everything happens at the speed of light). If we can find an isomorphic mapping, or translation, between the slower interactions and some other, faster interactions, our simulator can translate the slower parts of reality into the faster ones, and use those to generate predictions. No compressive state mapping is required, since this is a simpler 1-to-1 mapping. And even though the simulator would need to perform the interactions in a larger volume of space, as long as the speed gained from the translation outweighs that cost, we get an overall speed-up.
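
Here is a schematic of what such a translation-based simulator might look like as a computation. It is not a physical model: the domains, the 1-to-1 mapping, and the rates are invented placeholders, and treating the volume overhead as a simple multiplicative penalty is my simplification. The point is the shape of the computation: translate into a faster domain, evolve there, translate back.

```python
SLOW_RATE = 1.0   # state updates per unit time for the slow interactions
FAST_RATE = 10.0  # updates per unit time in the faster (photon-like) domain
OVERHEAD = 3.0    # penalty for the larger volume / translation machinery

def to_fast_domain(slow_state: int) -> int:
    return slow_state * 2          # invertible toy encoding (the "isomorphism")

def from_fast_domain(fast_state: int) -> int:
    return fast_state // 2

def evolve_fast(fast_state: int, steps: int) -> int:
    return fast_state + 2 * steps  # stand-in for the faster interactions doing the work

def simulate(slow_state: int, steps: int) -> tuple[int, float]:
    """Predict the slow system after `steps` updates by working in the fast
    domain. Returns (predicted_state, assumed_net_speedup)."""
    predicted = from_fast_domain(evolve_fast(to_fast_domain(slow_state), steps))
    net_speedup = (FAST_RATE / SLOW_RATE) / OVERHEAD  # > 1 means faster than reality
    return predicted, net_speedup

print(simulate(5, 4))  # -> (9, 3.33...): the prediction plus the assumed net speed-up
```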

Since the ultimate speed limit is c, we can imagine a theoretical simulator where all other slower interactions of reality are translated to some equivalent interactions between photons. This simulator would then be able to fast-forward reality, as long as the content of the simulation contains enough slower-than-light interactions.


So why bother with such a hypothetical question and line of reasoning? Because the implications could be profound. In my mind, the answer to this question can help determine whether there is a theoretical limit to the progress of the natural sciences, similar to how the undecidability of the halting problem gave us a theoretical limit on what algorithms can do.

As far as we know today, to progress in the natural sciences, we need to perform experiments in reality and observe the results. We can gain some speed-up via parallelization by performing many experiments at once, but for each individual experiment, we are still limited by the speed of the passage of time. A faster-than-reality simulator would be a way for us to bypass that limit, and it seems like it is at least theoretically possible for some subset of the interactions that exist in the universe. Unfortunately, based on my current thinking, a simulator that can produce a speed-up for all interactions of reality, including worst cases like the all-photon cube, cannot exist.