2001 – 2025
&&
This question implicitly assumes a temporal ordering and an external origin, but both time and causality are emergent properties of observers within the universe (as established in Paper 1). In the underlying informational substrate, the universe is abstract and timeless. It exists because there are no constraints forbidding it.
&&
Asking why there is something rather than nothing is like asking why “heads” rather than “tails”. Both states are equally abstract; neither is privileged. Observers necessarily find themselves in realizable informational configurations.
&&
The wavefunction is not a physical object in the classical sense; it is the optimal compression algorithm for the underlying informational structure of reality. We perceive the universe as "waving" because we are observing compressed information.
This process follows a deterministic logical funnel:
\(\boxed{\text{Maximal Compression}} \rightarrow \boxed{\text{Maximal Probability}} \rightarrow \boxed{\text{Maximal Predictability}}\)
According to the Gibbs measure \(P(\gamma | O) \propto \exp(-\lambda C_O[\gamma])\), configurations with the lowest complexity (\(C_O\)) are the most statistically likely to be observed. The "laws" of physics are simply the most probable survivors of this informational selection.
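As a toy numerical illustration (not part of the formal framework), the Gibbs weighting can be evaluated directly. The configuration labels, their complexity values \(C_O\), and \(\lambda = 1\) below are arbitrary assumptions:

```python
import math

# Toy evaluation of P(gamma | O) ∝ exp(-lambda * C_O[gamma]).
# Configuration names and complexity values are illustrative placeholders.
lam = 1.0
complexities = {"smooth_wave": 2.0, "mild_noise": 5.0, "random_junk": 12.0}

weights = {g: math.exp(-lam * c) for g, c in complexities.items()}
Z = sum(weights.values())                      # normalization (partition function)
probs = {g: w / Z for g, w in weights.items()}

for g, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{g}: {p:.4f}")
```

Even with these modest complexity gaps, the lowest-complexity configuration absorbs the bulk of the measure, mirroring the claim that the most compressible “laws” are the statistical survivors.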
&&
The universe expands because its entropy increases; space and entropy are two sides of the same coin. The trajectory of the universal informational configuration toward higher entropy is formally analogous to the Inflationary Epoch followed by gradual expansion. Space is the geometric projection of total entropy: as the underlying information becomes more disordered, the corresponding geometry stretches, producing the observed expansion of the universe.
&&
The universe we observe can be associated with a finite informational description of \(n\) bits, corresponding to a configuration space of size \(2^n\). Different arrangements and interpretations of this information correspond to distinct possible universes within the same abstract informational substrate.
Only a very small subset of these configurations may admit stable, self-referential informational structures capable of functioning as observers. We necessarily find ourselves in one such configuration, which—when interpreted geometrically—appears to originate from a remarkably ordered, low-entropy state.
Other informational arrangements compatible with intelligent observers may also exist and need not share identical geometric or entropic histories. Whether all observer-compatible universes must exhibit an apparent low-entropy “origin” remains an open question.
Importantly, a zero-entropy beginning in this framework does not represent a temporal origin in an external time. Instead, it functions as a boundary condition selected by observer-conditioned probability: low-entropy configurations are maximally compressible and therefore dominate the measure over observer-compatible histories.
The observer-conditioned measure overwhelmingly favors **histories that minimize total complexity**:
\[\mathbb{P}(\gamma \mid O) \propto \exp[-\lambda \, \mathcal{C}_O[\gamma]].\]
A zero-entropy initial state is **maximally compressible** and hence most probable.
\[0 \;\rightarrow\; 1 \;\rightarrow\; \dots \;\rightarrow\; \text{current state}.\]
Starting from high entropy or decreasing entropy requires extra information to embed the observer, increasing \(\mathcal{C}_O[\gamma]\) and suppressing probability:
\[\mathbb{P}(\gamma_{\rm high-entropy} \mid O) \ll \mathbb{P}(\gamma_{\rm zero-entropy} \mid O).\]
Thus, the **arrow of time** and increasing total entropy naturally emerge:
\[\text{Zero entropy} \;\Rightarrow\; \text{Maximal compressibility} \;\Rightarrow\; \text{Observer-compatible universe}.\]
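The suppression of high-entropy starting points can be sketched numerically, using zlib-compressed length as a crude, assumed proxy for \(\mathcal{C}_O[\gamma]\); the two toy “histories” and \(\lambda = 0.01\) are likewise illustrative:

```python
import math
import random
import zlib

# Proxy: zlib-compressed length stands in for the complexity functional C_O.
random.seed(0)
lam = 0.01

# A history that starts from an ordered, low-entropy pattern...
ordered = bytes([i % 8 for i in range(512)])
# ...versus one that starts already at high entropy (random bytes).
disordered = bytes(random.randrange(256) for _ in range(512))

C_ordered = len(zlib.compress(ordered))
C_disordered = len(zlib.compress(disordered))

# Relative Gibbs weight strongly favors the compressible history.
ratio = math.exp(-lam * C_ordered) / math.exp(-lam * C_disordered)
print(C_ordered, C_disordered, ratio)
```

Under this proxy the ordered history compresses to a fraction of the random one, so its observer-conditioned weight dominates by orders of magnitude, in the sense of \(\mathbb{P}(\gamma_{\rm high-entropy} \mid O) \ll \mathbb{P}(\gamma_{\rm zero-entropy} \mid O)\).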
&&
The universe we “see” is not transmitted to us; it is already part of the informational set that defines us.
The observer’s wavefunction can be split as:
\[\Psi_\gamma = \Psi_{\rm self} \otimes \Psi_{\rm env},\]
where \(\Psi_{\rm self}\) is the geometrically persistent part defining the observer, and \(\Psi_{\rm env}\) is the remaining “outside” world.
Temporal extrapolation of \(\Psi_{\rm self}\) has a **limited predictive horizon**:
\[\Psi_{\rm self}(t+1) \sim \arg\min_{\gamma_{t+1}} \mathcal{C}_O[\gamma_{0:t+1}],\]
so the observer cannot see far into the future. Spatial extrapolation generates the compressible environment:
\[\Psi_{\rm env} \sim \text{extensions of } \Psi_{\rm self} \text{ into surrounding bits}.\]
The perceived universe emerges because **low-complexity histories dominate the measure**:
\[\mathbb{P}(\gamma \mid O) \propto \exp[-\lambda \, \mathcal{C}_O[\gamma]].\]
In short: the observer does not receive signals from outside; the “world” is a compressible extrapolation of the self-wavefunction, and apparent laws and structures emerge statistically.
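The \(\arg\min\) step can be sketched as a greedy search over candidate continuations, again with zlib length standing in (by assumption) for the complexity functional:

```python
import zlib

# Greedy sketch of Psi_self(t+1): pick the continuation that minimizes the
# compressed length of the whole history. zlib is only a stand-in for C_O.
history = b"ab" * 100
candidates = [b"a", b"b", b"z"]

def cost(h):
    return len(zlib.compress(h))

best = min(candidates, key=lambda c: cost(history + c))
print(best)
```

The pattern-continuing symbol wins because it adds almost nothing to the description length, while an out-of-pattern symbol forces a new literal into the encoding.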
&&
Does the theory support the QBism view that our observations shape quantum phenomena?
Yes. In \(IaM^e\), the observer is an informational structure and is most likely to find itself in configurations that preserve its reasoning and decision-making capabilities. Observers with reasoning have a higher probability of persistence than those without. As a result, reasoning statistically biases which configurations an observer experiences next. Quantum mechanics emerges from these informational rules, so it may appear that observation or decisions “change” the quantum state, even though all evolution is fully contained within the underlying substrate.
&&
Highly respected authors, including Stephen Hawking, have noted that a Theory of Everything may be forever constrained by Gödelian limits, and we approach this question with the necessary humility. Gödel’s incompleteness theorems constrain formal systems capable of arithmetic, but our framework suggests a shift in focus: a ToE aims for ontological closure rather than formal deductive completeness. While an embedded observer will inevitably encounter Gödel-style limits in their internal formal reasoning, these may be constraints on the compression and representation of the data, not necessarily limitations on the underlying informational substrate itself.
&&
No. If singularities are taken to be points of infinite curvature, their informational correspondents would compress poorly and thus be suppressed in the statistical measure. If they are instead taken to be geometric points, a point maps to a zero-entropy state, which possesses no degrees of freedom to encode microstructure (such as particles). In neither case, therefore, does the theory admit singularities as points of infinite curvature.
&&
In the popular formulation, the simulation hypothesis posits an external simulator operating in a higher-level physical reality. Recent work has argued that such hypotheses are not fully consistent with observed features of our universe, placing strong constraints on externally run simulations (Faizal et al. 2025). Vazza (2025) approaches the simulation hypothesis from a very different perspective, deriving astrophysical constraints. Regardless of the technical details, this approach merely shifts the explanatory burden: one must then explain the origin and laws of the simulator itself. In this framework, such meta-theories are rejected. A genuine Theory of Everything cannot rely on an external system, as a “theory of everything minus the simulator” is not a theory of everything. Instead, we assume informational ontological equivalence: simulated and non-simulated realities are not ontologically distinct. The universe is not running on something else; rather, all structures, including observers and apparent physical laws, arise within a single abstract informational substrate. In this sense, the question of whether we are “in a simulation” is ill-posed.
&&
Yes, but only in a restricted and precisely defined sense.
Within the theory, observers are informational structures. Any such structure capable of reasoning can, in principle, construct formal systems, Turing machines, and simulations of itself. This has already occurred: humans have defined universal computation and have begun simulating simplified versions of biological, cognitive, and physical processes. Since biological information (including DNA) can be encoded digitally without loss, simulated humans can be axiomatic copies of real humans at the informational level.
These sub-simulations do not require a separate ontological substrate. They correspond simply to different configurations within the same underlying informational space (e.g., subsets of \(2^n\)). In this limited sense, recursive simulation is supported.
However, the theory places a fundamental constraint on unbounded recursive simulation: entropy. Realizing a faithful sub-simulation is informationally expensive. Extracting, encoding, and maintaining the information required to faithfully describe even a single human genome is already nontrivial. Scaling this to billions of humans, each composed of trillions of cells, embedded in a universe containing astronomical numbers of particles and interactions, rapidly exhausts available informational resources.
As a result, recursive simulations necessarily degrade. They must either omit detail, compress aggressively (thereby losing fidelity), restrict scope, or terminate.
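A minimal sketch of this degradation, modeling each nesting level as a lossy re-encoding at coarser resolution (the signal, the quantization model, and the factor-of-4 resource loss per level are all assumptions):

```python
# Fidelity loss under nested simulation, modeled as repeated coarse quantization.
def quantize(signal, step):
    return [round(x / step) * step for x in signal]

signal = [i / 100 for i in range(100)]   # "reality" at full resolution
levels = []
step = 0.01
for depth in range(4):                   # each nested simulation coarsens further
    step *= 4                            # fewer informational resources per level
    signal = quantize(signal, step)
    levels.append(len(set(signal)))      # surviving distinct degrees of freedom
print(levels)
```

Each level retains fewer distinct degrees of freedom, so a sufficiently deep chain of sub-simulations collapses to triviality.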
&&
Death can be understood in terms of the spectral complexity of an observer’s wavefunction. Let \(\psi_O\) denote the linear encoding of all correlations defining observer \(O\). The observer’s probability of continued existence along a future history \(\gamma \in \Gamma_O\) is given by the Spectral Selection Principle: \[\mathbb{P}(\gamma \mid O) \propto \exp\big(-\alpha \,\Sigma[\gamma]\big),\] where \(\Sigma[\gamma]\) is the number of independent frequency–phase components required to encode \(\gamma\) at the observer’s resolution, and \(\alpha\) is a scale parameter determined by observer bandwidth.
As the observer accumulates information—memories, learned skills, and internal structure—the effective spectral complexity \(\Sigma[\gamma]\) of future continuations necessarily increases. Each additional independent component in \(\psi_O\) reduces the exponential weight of compatible histories. Formally, if the observer requires \(N\) additional frequencies to represent future states, the probability of survival along those histories scales as \[\mathbb{P}_\text{future} \sim \exp(-\alpha N),\] leading to a rapid, exponential suppression of persistence.
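A quick numeric check of this suppression, with an arbitrary assumed scale \(\alpha = 0.1\):

```python
import math

# Survival weight exp(-alpha * N) versus the number N of additional
# independent spectral components; alpha is an arbitrary scale choice.
alpha = 0.1
survival = {N: math.exp(-alpha * N) for N in (0, 10, 50, 100)}
for N, p in survival.items():
    print(N, p)
```

A hundred extra components already pushes the weight below \(10^{-4}\), which is the exponential exhaustion described above.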
Wrinkles and aging are the geometric manifestation of increasing spectral complexity in the observer wavefunction.
Death is the inevitable consequence of combinatorial exhaustion: as the observer’s informational content grows, the set of future histories that preserve identity becomes vanishingly small. Survival probability approaches zero, corresponding to the cessation of the observer’s experiential existence. Hence, death emerges naturally from the finite and exponentially constrained structure of complex informational systems.
&&
Conditional on entropy increasing and on typical emergence, observers are overwhelmingly likely to find themselves at the entropy value where the microstructure count is maximal, i.e., where the lognormal distribution peaks. The theory suggests (although this has not yet been formally derived) that this might correspond to a near-critical expansion regime.
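As a toy stand-in for the microstructure count (not the lognormal measure itself), the number of \(n\)-bit configurations with \(k\) set bits, \(\binom{n}{k}\), peaks at \(k = n/2\); \(n = 20\) is arbitrary:

```python
from math import comb

# Count n-bit configurations by number of set bits; the count peaks at k = n/2,
# a toy analogue of "the entropy value where microstructure count is maximal".
n = 20
counts = [comb(n, k) for k in range(n + 1)]
peak = counts.index(max(counts))
print(peak, counts[peak])
```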
&&
The theory adopts Cartesian Certainty—the observer’s existence—as its sole a priori datum. However, it does not default to Solipsism; instead, the number of observers is treated as a probabilistic variable. Absent a proof that a single observer is statistically favored over many, shared observer structures dominate the measure.
&&
No. We explicitly reject Kolmogorov complexity as a foundational quantity. Kolmogorov complexity is neither continuous nor computable and therefore cannot serve as a physical observable.
In the \(IaM^e\) framework, description length is instead defined through the spectral content of the wavefunction. A physical state is characterized by a well-defined set of frequencies and phases, and its informational cost is given by the minimal spectral length required to represent it under finite resolution. This quantity is continuous, measurable, and can be mapped to a discrete bit representation.
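A sketch of such a spectral description length, counting the frequency components of a signal that survive a finite resolution threshold (the signals, the threshold, and this particular definition are illustrative assumptions):

```python
import numpy as np

# Spectral description length: number of Fourier components above a finite
# resolution threshold. Signals and threshold are illustrative choices.
t = np.linspace(0, 1, 256, endpoint=False)
simple = np.sin(2 * np.pi * 3 * t)            # one dominant frequency
rng = np.random.default_rng(0)
noisy = rng.standard_normal(256)              # broadband noise

def spectral_length(x, eps=0.05):
    amps = np.abs(np.fft.rfft(x)) / len(x)
    return int(np.sum(amps > eps * amps.max()))   # components above resolution

print(spectral_length(simple), spectral_length(noisy))
```

A single-tone signal needs one component while broadband noise needs many more, which is the sense in which wave-like states are maximally compressible.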
The apparent wave-like behavior of the microphysical world is not postulated but selected: among all admissible informational configurations, those admitting maximal spectral compression dominate the measure. Observers are therefore overwhelmingly likely to be embedded in universes whose dynamics admit compact spectral representations. In this sense, we do not observe a compressed universe because it is assumed, but because compressed configurations are overwhelmingly more probable within the space of physically realizable descriptions.