This question implicitly assumes a temporal ordering and an external origin, but both time and causality are emergent properties of observers within the universe (as established in Paper 1). In the underlying informational substrate, the universe is abstract and timeless. It exists because there are no constraints forbidding it.
&&
Asking why there is something rather than nothing is like asking why “heads” rather than “tails” — both possibilities exist, but only one can be observed. The observer necessarily finds itself in the branch that exists.
&&
Does the theory support the QBism view that our observations shape quantum phenomena?
Yes. In \(IaM^e\), the observer is an informational structure and is most likely to find itself in configurations that preserve its reasoning and decision-making capabilities. Observers capable of reasoning have a higher probability of persistence than those without it. As a result, reasoning statistically biases which configurations an observer experiences next. Quantum mechanics emerges from these informational rules, so it may appear that observation or decisions “change” the quantum state, even though all evolution is fully contained within the underlying substrate.
&&
Gödel’s incompleteness theorems constrain formal systems capable of arithmetic, but a ToE aims for ontological closure, not formal deductive completeness. Observer reasoning may encounter Gödel-style limits internally, but these are limitations on embedded observers’ formal reasoning, not on the underlying informational substrate itself.
&&
In the popular formulation, the simulation hypothesis posits an external simulator operating in a higher-level physical reality. Recent work has argued that such hypotheses are not fully consistent with observed features of our universe, placing strong constraints on externally run simulations (Faizal et al. 2025). Vazza (2025) approaches the simulation hypothesis from a very different perspective, deriving astrophysical constraints. Regardless of the technical details, this approach merely shifts the explanatory burden: one must then explain the origin and laws of the simulator itself. In this framework, such meta-theories are rejected. A genuine Theory of Everything cannot rely on an external system, as a “theory of everything minus the simulator” is not a theory of everything. Instead, we assume informational ontological equivalence: simulated and non-simulated realities are not ontologically distinct. The universe is not running on something else; rather, all structures, including observers and apparent physical laws, arise within a single abstract informational substrate. In this sense, the question of whether we are “in a simulation” is ill-posed.
&&
Yes, but only in a restricted and precisely defined sense.
Within the theory, observers are informational structures. Any such structure capable of reasoning can, in principle, construct formal systems, Turing machines, and simulations of itself. This has already occurred: humans have defined universal computation and have begun simulating simplified versions of biological, cognitive, and physical processes. Since biological information (including DNA) can be encoded digitally without loss, simulated humans can be axiomatic copies of real humans at the informational level.
These sub-simulations do not require a separate ontological substrate. They correspond simply to different configurations within the same underlying informational space (e.g., subsets of \(2^n\)). In this limited sense, recursive simulation is supported.
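As a minimal sketch of the lossless-encoding claim (the particular 2-bit mapping below is an illustrative assumption, not part of the framework), DNA bases can be packed into, and recovered exactly from, a single bit configuration:

```python
# Toy sketch: lossless 2-bit encoding of a DNA sequence.
# The specific base-to-bits mapping is an arbitrary illustrative assumption.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def dna_to_bits(seq: str) -> int:
    """Pack a DNA string into an integer, 2 bits per base."""
    value = 0
    for base in seq:
        value = (value << 2) | ENCODE[base]
    return value

def bits_to_dna(value: int, length: int) -> str:
    """Recover the original sequence from its 2-bit packing."""
    bases = []
    for _ in range(length):
        bases.append(DECODE[value & 0b11])
        value >>= 2
    return "".join(reversed(bases))

seq = "GATTACA"
packed = dna_to_bits(seq)
assert bits_to_dna(packed, len(seq)) == seq   # round trip is exact (lossless)
print(f"{seq} -> {packed:0{2 * len(seq)}b}")  # one point in the 2^n space
```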
However, the theory places a fundamental constraint on unbounded recursive simulation: entropy.
While it is logically possible to define a simulation of observers, it is informationally expensive to realize one. Extracting, encoding, and maintaining the information required to faithfully describe even a single human genome is already nontrivial. Scaling this to billions of humans, each composed of billions of cells, embedded in a universe containing astronomical numbers of particles and interactions, rapidly exhausts available informational resources.
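A rough order-of-magnitude sketch makes the scaling visible; the genome length, population figure, and naive 2-bit encoding are assumed for illustration only:

```python
# Back-of-the-envelope cost of naively encoding genomes at 2 bits per base.
# Every figure below is an assumed order of magnitude, not a measurement.
BASE_PAIRS_PER_GENOME = 3.2e9   # approximate human genome length
BITS_PER_BASE = 2               # four bases -> 2 bits, uncompressed
HUMANS = 8e9                    # rough current population

bytes_one_genome = BASE_PAIRS_PER_GENOME * BITS_PER_BASE / 8
bytes_all_genomes = bytes_one_genome * HUMANS

print(f"one genome : ~{bytes_one_genome / 1e9:.1f} GB")     # ~0.8 GB
print(f"all genomes: ~{bytes_all_genomes / 1e18:.1f} EB")   # ~6.4 EB
# Genomes are only a sliver of an organism's state, which is itself a
# sliver of the particle-level state of its environment.
```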
As a result, recursive simulations necessarily degrade. They must either:
omit detail,
compress aggressively (thereby losing fidelity),
restrict scope, or
terminate.
While recursive simulation is internally consistent with the theory, its informational cost is high. Consequently, deep, high-fidelity nested simulations are unlikely to persist. The theory favors shallow or short-lived recursions, yet this contradicts our current empirical evidence.
&&
The phenomenon of death can be understood in terms of the information content of an observer. As time progresses, an observer accumulates information in the form of memories, knowledge, and physical structure. During early life, this accumulation typically increases the probability of persistence: improved reasoning, learned skills, and adaptive behaviors make it more likely that the observer continues to exist. However, as the information content grows, the number of future configurations compatible with the continued existence of that observer decreases. Eventually, the set of configurations becomes exhausted, and the observer ceases to exist. In this sense, death arises naturally from the finite combinatorial possibilities available to complex informational structures.
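A toy combinatorial model, offered only as an illustration (it captures the late-stage shrinkage of possibilities, not the early-life benefit of reasoning), treats the substrate as an n-bit configuration space and each accumulated memory or structural constraint as fixing one bit; the set of compatible futures then shrinks geometrically:

```python
# Toy model: the substrate is an n-bit configuration space and each
# accumulated memory or structural constraint is assumed to fix one bit.
# The number of future configurations still compatible with the observer
# is 2**(n - k); it shrinks geometrically and is eventually exhausted.
n = 64   # assumed size of the toy configuration space
for fixed_bits in range(0, n + 1, 8):
    compatible = 2 ** (n - fixed_bits)
    print(f"constraints: {fixed_bits:3d}   compatible configurations: {compatible}")
```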
&&
Conditional on entropy increasing and on typical emergence, observers are overwhelmingly likely to find themselves at the entropy value where microstructure count is maximal — which corresponds to a near-critical expansion regime.
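A standard combinatorial sketch, assuming a toy substrate of \(n\) binary degrees of freedom with \(k\) as the entropy-like index, shows why the maximal-multiplicity value dominates:

\[
\Omega(k) = \binom{n}{k}, \qquad
\max_{k}\,\Omega(k) = \binom{n}{\lfloor n/2 \rfloor} \sim \frac{2^{n}}{\sqrt{\pi n/2}} ,
\]

and, by concentration of the binomial measure, all but an exponentially small fraction of configurations lie within \(O(\sqrt{n})\) of \(k = n/2\). A typical observer therefore almost certainly samples the regime of maximal microstructure count.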
&&
The theory adopts Cartesian Certainty—the observer’s existence—as its sole a priori datum. However, it does not default to Solipsism; instead, the number of observers is treated as a probabilistic variable.
Statistically, it is difficult to justify a "singular" outcome in a system of high complexity, unless it can be mathematically demonstrated that the probability of many observers is lower than the probability of one. The theory follows the Highest Probability Rule: "Others" exist if they are a natural, high-likelihood consequence of a reality emerging without predefined constraints.
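One way to make the comparison concrete is a toy probability model; the independence assumption and the numbers \(N\) and \(p\) are illustrative, not outputs of the theory. If each of \(N\) independent regions hosts an observer-supporting structure with probability \(p\), then, conditioned on at least one observer existing, a single observer is exponentially unlikely once \(Np\) is appreciable:

```python
# Toy comparison of "exactly one observer" vs "many observers", assuming
# N independent regions, each hosting an observer with probability p.
# N and p are purely illustrative assumptions.
from math import comb

N, p = 10**6, 1e-4                      # expected number of observers: N*p = 100
p_none = (1 - p) ** N
p_one = comb(N, 1) * p * (1 - p) ** (N - 1)
p_many = 1.0 - p_none - p_one

at_least_one = 1.0 - p_none             # condition on "I exist"
print(f"P(exactly one | at least one) = {p_one / at_least_one:.3e}")
print(f"P(many        | at least one) = {p_many / at_least_one:.3e}")
```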
&&
No. The theory explicitly rejects Kolmogorov complexity as a foundational quantity. Kolmogorov complexity is neither continuous nor computable and therefore cannot serve as a physical observable.
In the \(IaM^e\) framework, description length is instead defined through the spectral content of the wavefunction. A physical state is characterized by a well-defined set of frequencies and phases, and its informational cost is given by the minimal spectral length required to represent it under finite resolution. This quantity is continuous, measurable, and can be mapped to a discrete bit representation.
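A minimal numerical sketch of "minimal spectral length under finite resolution" (the test signals, tolerance, and cost measure are illustrative assumptions; NumPy's FFT is used only as a convenient tool): count the fewest Fourier modes needed to reconstruct a state within a fixed relative error.

```python
# Sketch: spectral description length = number of Fourier coefficients
# needed to reconstruct a state within a finite resolution `tol`.
# The test signals and tolerance are illustrative assumptions.
import numpy as np

def spectral_length(x: np.ndarray, tol: float = 1e-2) -> int:
    """Smallest number of largest-magnitude Fourier modes whose
    reconstruction lies within relative L2 error `tol` of x."""
    X = np.fft.fft(x)
    order = np.argsort(np.abs(X))[::-1]      # modes sorted by magnitude
    kept = np.zeros_like(X)
    for count, idx in enumerate(order, start=1):
        kept[idx] = X[idx]
        x_hat = np.fft.ifft(kept).real
        if np.linalg.norm(x_hat - x) <= tol * np.linalg.norm(x):
            return count
    return len(x)

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
smooth = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = np.random.default_rng(0).normal(size=t.size)

print("wave-like state:", spectral_length(smooth), "modes")
print("random state   :", spectral_length(noisy), "modes")
# Wave-like states admit far shorter spectral descriptions than generic ones.
```

In this toy measure, few-frequency states return tiny mode counts while generic random states require essentially the full spectrum, which is the sense in which compressible configurations carry a shorter description.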
The apparent wave-like behavior of the microphysical world is not postulated but selected: among all admissible informational configurations, those admitting maximal spectral compression dominate the measure. Observers are therefore overwhelmingly likely to be embedded in universes whose dynamics admit compact spectral representations. In this sense, we do not observe a compressed universe because it is assumed, but because compressed configurations are overwhelmingly more probable within the space of physically realizable descriptions.