Good Heavens, Gods?
Hell No, Dogs!
Juha Meskanen
Universal Edition
Anyone?
The spark that ignited this project was not a flash of inspiration, but a painful accident. My highly respected dentist recommended the precautionary removal of a wisdom tooth: “Problems might be expected, and they will only get worse over time,” he warned. Trusting his expertise, I followed his advice, and soon I was one tooth—and a few hundred dollars—lighter.
But the problems got worse anyway. The socket refused to heal, leaving the nerves exposed. With the Christmas holidays in full swing, all dentists were off duty, and for days it felt as though the ache had expanded from one tooth to engulf my entire head.
I didn’t take the pain well. First, I condemned the sugar industry for ruining people’s precious teeth with such a toxic product. My dentist became the next object of blame, and before long, even the government education system stood accused of failing in its duty to train competent dentists.
Eventually, of course, the holiday ended, the clinic reopened, and the original mistake was resolved. Relief came quickly. Yet what remained was a question I had pondered many times with my colleagues: could pain ever be implemented as software?
My colleague, apparently blessed with better teeth, believed it was possible. With sensors, firmware, and code, a robot might be made to simulate agony. I disagreed. To behave as if in pain is not the same as to feel pain, I argued. I could not see how pain could ever be implemented in a programming language.
It was this missing “pain function” in the standard library of C/C++ that planted the seed of an obsession. That seed has grown, over years of trial and error, into the work that follows.
The conclusions presented here were developed using standard software design methods and tools—the same ones I have relied on throughout my career as a software professional. The software built with these methods works.
However, no program is without flaws; bugs are inevitable. In the same way, some arguments presented here may contain errors. Yet I am confident that, as with well-designed software, these imperfections will not obscure the larger picture.
I extend my heartfelt gratitude to my wife. She is quite a piece of work and incredibly complicated—and perhaps the one puzzle I will never be able to solve. She patiently endured countless nights beside me as I typed away on my noisy laptop. Despite the many disruptions to her sleep, she remained remarkably understanding and supportive throughout the writing of this book.
I owe a deep debt of gratitude to my brother—and an apology for the many fishing trips I interrupted by insisting he listen to my theories.
Special thanks (or perhaps blame, offered in good humor) go to Andy Jones. His gift of The Structure of Space and Time proved transformative. Without that “gift,” I might never have developed the necessary obsession to see this work through to completion.
Finally, I wish to honor the memory of my dog, Raju (R.I.P.), my loyal hunting companion, with whom I shared many memorable hunts—and, to the hares, R.I.P. as well.
Over the past centuries, physics has achieved remarkable success in unifying a large number of partial theories into two powerful frameworks: Quantum Mechanics (QM) and General Relativity (GR). The equations in both theories match all observations with remarkable precision, limited only by current technological capabilities. Those capabilities themselves have reached a level that would have seemed almost inconceivable only a few decades ago.
Space-based observatories such as the James Webb Space Telescope now directly image the early universe, resolving infrared signals emitted only a few hundred million years after the Big Bang. Its operation depends simultaneously on quantum optics, relativistic orbital mechanics, and nanometer-scale wavefront control, turning cosmological theory itself into an engineering requirement.
At the opposite extreme, global interferometric arrays such as the Event Horizon Telescope resolve horizon-scale structure around black holes, directly probing the geometry of spacetime in the strong-field regime predicted by general relativity.
Gravitational-wave observatories such as LIGO can detect distortions of spacetime smaller than a proton’s diameter, measuring relative changes in length caused by distant black-hole mergers billions of light-years away. Atomic clocks, exploiting the quantum structure of atoms, now keep time so precisely that they would lose or gain less than a second over the age of the universe, and are sensitive enough to register differences in gravitational potential corresponding to changes in height of mere centimeters.
Elsewhere, quantum electrodynamics predicts the magnetic moment of the electron to a precision verified to many decimal places, making it one of the most accurately tested theories in all of science. Interferometers routinely resolve wavelengths far smaller than the structures they probe, while particle accelerators recreate conditions not seen since the earliest moments after the Big Bang.
Comparable advances span quantum control experiments (Bose–Einstein condensates and quantum simulators), neutrino observatories (IceCube, Super-Kamiokande), and precision cosmology (Planck, ACT, SPT).
The theoretical descriptions of nature have become so accurate that reality itself now serves as the experimental apparatus for testing them. Physical law is no longer merely inferred from observation; it is continuously confirmed, corrected, and operationalized by technologies that depend on its validity to function at all. The theories have escaped the confines of paper and chalk and become embedded in the technological fabric of modern civilization. An obvious example is computation. From the quantum-mechanical behavior of transistors to the relativistic corrections required for satellite navigation, our deepest physical theories now operate continuously and invisibly inside machines that process information at planetary scale. Computation is no longer merely a tool for studying nature; it has become a physical process in its own right, governed by energy constraints, thermodynamics, noise, and quantum limits.
This trajectory has culminated in the rise of artificial intelligence systems of unprecedented complexity. These systems are not programmed in the traditional sense but are shaped through optimization processes that resemble physical evolution more than logical deduction. Trained on vast datasets and executed on hardware operating near fundamental physical limits, they exhibit behaviors—learning, abstraction, and generalization—that were once considered exclusively biological. Remarkably, their success does not rely on new physical laws, but on exploiting known ones at scale, transforming raw energy into structured information with extraordinary efficiency.
While many of the most sophisticated scientific instruments ever built serve no immediate practical purpose beyond testing fundamental laws of nature, science has also been remarkably productive in more everyday domains. Smartphones, global navigation systems, and AI-enhanced electric toothbrushes, not to mention nuclear weapons, now permeate daily life.
Science has much to celebrate.
Given the extraordinary convergence between theory, experiment, and technology in modern physics, one might expect that the final unification of physical law is close at hand. After the successful consolidation of earlier partial theories into the two great pillars of modern physics—General Relativity and Quantum Mechanics—it seemed almost inevitable that the process would culminate in the ultimate goal of physics: a Theory of Everything, a single equation describing the entire universe. Yet this expectation has not been realized. Despite overwhelming empirical support for both frameworks, their current formulations remain fundamentally incompatible. And while empirical disagreement may signal the need for refinement, mathematical inconsistency is decisive: a theory that is internally inconsistent cannot be a fundamental description of nature.
Historically, most attempts at unification assume that the quantum description is more fundamental, so it is General Relativity that should be modified, because everything else has already been quantized. Matter fields—electrons, photons, quarks—all obey quantum field theory. Spacetime might simply be another field awaiting quantization, and several facts appear to support this view.
First, GR breaks down at small scales. Near singularities or at the Planck length, curvature appears to become infinite. This signals a failure of the continuum picture, not of quantum mechanics. The intuition is therefore to quantize gravity to remove these divergences, just as quantizing electromagnetism resolved the ultraviolet catastrophe.
However, despite decades of research, no single framework has yet succeeded in combining the principles of quantum mechanics with the geometric description of spacetime provided by General Relativity. Attempts at unification, e.g. string theory, have become so intricate that the complexity itself now poses the greatest challenge. In effect, we have constructed a rock too heavy even for its creators to lift.
In addition to the well-known difficulty of constructing a unified Theory of Everything, there is a deeper and arguably more serious problem: all candidate theories rely on unexplained assumptions.
General Relativity posits that spacetime exists as a smooth, differentiable manifold equipped with a metric tensor whose curvature is determined by the Einstein field equations. The theory describes with extraordinary precision how spacetime bends in the presence of energy and momentum. Yet it remains silent on what spacetime is in itself. Is it a physical substance, an emergent phenomenon, a relational structure among events, or merely a mathematical framework? What, if anything, is it made of? Why does it have four macroscopic dimensions? Why does it possess the specific Lorentzian signature it does? The dynamical law governing curvature is specified, but the ontological status of the entity that curves is not.
Quantum Field Theory assumes the existence of quantum fields defined over spacetime. Each type of particle corresponds to excitations of an underlying field. However, the theory presupposes the prior existence of these fields, their commutation relations, their gauge symmetries, and a substantial number of experimentally determined parameters: coupling constants, particle masses, mixing angles, and the structure of the gauge group. The Standard Model works with remarkable accuracy, yet it does not explain why these particular fields exist, why the symmetry group has its specific form, or why the constants take the values they do.
String Theory attempts to move deeper by replacing point particles with one-dimensional strings and by incorporating gravity in a quantum framework. Yet it assumes additional compactified spatial dimensions, specific consistency conditions, and a vast landscape of possible vacuum states—each corresponding to different low-energy physics. The theory shifts the explanatory burden but does not eliminate it: why this vacuum rather than another? Why this compactification geometry? Why strings at all?
In each case, the formalism specifies dynamical laws operating on pre-existing structures. What remains unexplained are the origins and necessity of those structures themselves.
This incompleteness can be expressed schematically as:
\[\text{ToE}_{\text{incomplete}} = \text{ToE}_{\text{complete}} \setminus \mathcal{A},\]
where \(\mathcal{A}\) denotes the set of fundamental assumptions left unexplained.
The ultimate goal of physics is not merely to predict what happens in the universe, but to understand what reality is and what is fundamentally taking place.
Physical theories must ultimately be tested against observation. Without falsifiable consequences, a framework belongs more to philosophy than to physics.
Most mainstream physical theories treat the observer as external, presupposing that the universe exists independently of anyone observing it. Yet observation and experience are the only means by which the universe is tested and theories are verified. Every empirical statement rests on perception, measurement, memory, and inference.
Since the early development of quantum mechanics, the role of the observer has been a source of persistent unease. Einstein famously objected to interpretations that appeared to grant observation a fundamental role, asking whether the Moon would cease to exist when no one looked at it. Bohr, by contrast, argued that physics is not a description of nature as it is in itself, but a framework for organizing what can be said about observations.
Nearly a century later, this tension remains unresolved. Most physical theories are still formulated as if observers were external to the universe they describe, even though observers are themselves physical systems embedded within that universe. The formalism typically specifies states, fields, and dynamical laws, yet leaves the observer undefined.
If a Theory of Everything aspires to completeness, it should explain everything—including the existence, structure, and role of observers. A theory that is built on unexplained assumptions is not a theory of everything.
General Relativity is a so-called classical theory. Objects can be only here or there, alive or dead; they cannot be both simultaneously.
In Quantum Theory this intuitive picture is gone. Elementary particles, such as electrons and photons, can be both here and there, alive and dead, simultaneously. Only when we look at them do we find them in one definite state or the other, with some probability.
One of the earliest attempts to bridge the quantum–classical divide is semiclassical gravity. In this approach, matter is treated as fully quantum, while spacetime remains classical. To make the Einstein field equations workable, the operator-valued stress–energy tensor of quantum matter is replaced by its expectation value—the renormalized average of the energy and momentum calculated over the quantum state of the matter fields. This resulting set of ordinary numbers can then be inserted into the equations governing curvature.
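Written out, the semiclassical field equation replaces the operator-valued source by its expectation value:
\[G_{\mu\nu} = 8\pi G \, \langle \hat{T}_{\mu\nu} \rangle,\]
where the angle brackets denote the renormalized expectation value taken in the quantum state of the matter fields, so that an ordinary tensor of numbers sources the curvature.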
Semiclassical gravity is remarkably successful. It accurately describes a wide range of phenomena, from laboratory experiments to astrophysical observations and cosmology. It even predicts striking effects such as Hawking radiation from black holes. Yet its very success also exposes its conceptual limitation: the approach is ad hoc. The theory works well for everything we can observe, but it does not answer any of the deeper questions, such as what the physics is at the singularity of a black hole.
A natural next step is perturbative quantum gravity, where spacetime is expanded around a simple background—typically flat or slightly curved—and the perturbations are treated as quantum fields. This approach is conceptually straightforward and extends the familiar machinery of quantum field theory to gravity.
However, it quickly runs into a fundamental problem: gravity is nonrenormalizable. Unlike the Standard Model, where infinities in quantum corrections can be controlled through renormalization, attempts to remove infinities in perturbative quantum gravity fail. The equations produce uncontrolled divergences, and no systematic procedure yields finite, predictive results. The techniques that work spectacularly well for matter fields simply break down for spacetime itself.
In response to the failure of perturbative quantization, researchers have developed nonperturbative frameworks that do not assume a fixed background geometry. A leading example is Loop Quantum Gravity (LQG), which models spacetime as a discrete combinatorial structure of spin networks. LQG is mathematically rigorous and fully background-independent, offering a conceptually clean quantization of geometry.
However, major obstacles remain. Deriving a smooth classical spacetime limit is nontrivial, and embedding standard particle physics into the LQG framework remains an open problem.
In string theory, point particles are replaced by one-dimensional strings. Gravity emerges as one of the vibrational modes of the string, and the framework unifies all fundamental forces in principle. String theory involves deep and abstract mathematical structures: dualities, extra dimensions, and black-hole entropy counting, to name just a few.
However, significant challenges remain also with the string theory. It relies on supersymmetry, which has not been observed experimentally. Supersymmetry predicts that every known particle has a “superpartner” (sparticle) with spin differing by 1/2. None of these predicted superpartners have been detected in experiments yet. The theory also admits an enormous landscape of possible vacuum states—often estimated at around \(10^{500}\)—raising concerns about predictivity and falsifiability. Extra spatial dimensions are required, typically assumed to be compactified at extremely small scales, yet they remain empirically undetected. Direct experimental tests of string-scale physics are effectively out of reach.
Also, one might ask what a theory as flexible as string theory is good for. Ask any question, and one of those \(10^{500}\) worlds includes the answer. A theory sufficiently flexible to accommodate all observations risks explaining none.
A more radical class of ideas treats gravity and spacetime as emergent rather than fundamental. This perspective arose from puzzles at the intersection of gravity, thermodynamics, and quantum theory. Black holes behave as thermodynamic objects, possessing entropy proportional to horizon area and emitting thermal radiation. These results suggest a deep link between geometry, information, and statistical mechanics.
The observation that gravitational entropy scales with area rather than volume led to the holographic principle: the idea that the degrees of freedom of a region of spacetime may be encoded on its boundary. Holographic dualities further support this view, showing that spacetime geometry and gravitational dynamics can emerge from nongravitational quantum theories.
The holographic principle argues that a three-dimensional universe can be described by a two-dimensional theory (\(N \rightarrow N-1\)). This is actually quite a surprising result: everything that happens in the universe, whether it has four dimensions or eleven, can be described on its \(N-1\)-dimensional surface.
In recent years the so-called Simulation Hypothesis has become increasingly popular. If \(3D\) can map to \(2D\), why stop there? In software architecture, any \(N\)-dimensional space is ultimately stored as a one-dimensional bitstring (\(N \rightarrow 1\)). A sequence of bits has no intrinsic geometry; it is just information.
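As a minimal sketch of this flattening (the grid dimensions and names here are purely illustrative), a three-dimensional space is conventionally stored as one flat buffer, with the geometry living entirely in the indexing convention:

// A three-dimensional "universe" stored as one flat, one-dimensional buffer.
// NX, NY, NZ are illustrative grid dimensions.
enum { NX = 64, NY = 64, NZ = 64 };

static unsigned char universe[NX * NY * NZ];

// The geometry exists only in how we compute the index;
// the storage itself is a featureless linear sequence of bytes.
int flat_index(int x, int y, int z)
{
    return x + NX * (y + NY * z);
}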
If the universe is fundamentally informational, it is tempting to conclude that we are merely a program running on some higher-order "hardware." In this view, the strange "quantization" of our world is simply the resolution of the grid, and the speed of light is the clock-speed of the processor.
However, the simulation hypothesis feels like a philosophical "shell game." It merely shifts the mystery of existence up one level: if we are a simulation, who simulated the simulators? Furthermore, it ignores the staggering Information Cost of reality.
Consider the entropy of a single human being. To simulate even a single strand of DNA with perfect fidelity requires tracking billions of quantum interactions. To harvest enough information from a "parent universe" to simulate an entire "child universe" would require a massive thermodynamic overhead.
Perhaps the most human explanation, familiar to any software engineer, is this: the universe is running on legacy code. In this view, the fundamental incompatibility between the smooth, geometric curves of GR and the discrete, probabilistic jumps of QM is not a profound mystery of nature, but just bad design. We often assume a "Theory of Everything" must be an elegant, unified masterpiece, but real-world software is rarely that clean. It is often a patchwork of modules written by different people, at different times, with different goals.
It is easy to imagine a powerful computer with a huge database and advanced logic. Such a system could be highly efficient in its operations, capable of making accurate and intelligent decisions in nearly any imaginable situation. However, it is difficult to see how such a mechanically operating machine could truly feel pain.
Imagine a typical software program consisting of thousands of lines of code. How many additional source lines would need to be added to transform the software into a conscious entity? Would it be the \(10^{14}\)th line that suddenly imbues the system with the ability to feel pain? Could it be the introduction of a deeply nested loop that finally grants consciousness? Or is it the number of if-else clauses that holds the secret?
Regardless of the number of loops and source lines added, it appears that nothing significant would occur. The software program would remain just that—a software program, albeit larger in size.
If software were truly capable of sensing pain, what would be the worst thing that could happen to it? Is it division by zero, or a reference to an uninitialized variable?
#define PI 3.14159265358979              // the original snippet assumed PI was defined somewhere

int uninitialized;                        // holds an indeterminate value
int initialized = 3;
double good = 2 * PI * initialized;       // feel good :)
double bad = 2 * PI * uninitialized;      // feel pain :( (reads an indeterminate value)
double maximal_pain = 1.0 / 0.0;          // division by zero, maximal pain!
If consciousness is not solely a software issue, could it be related to hardware instead? For example, the graphics board controls what the computer renders on its screen. By writing appropriate values to memory addresses constituting the so-called video memory, one can turn pixels on and off to create images. What would be the memory addresses one has to poke in order to create pain?
// try to poke pain (dereferencing address zero is, of course, undefined behavior)
*((bool *)0x000000) = true; // argh
A computer is a mechanical device whose operation can ultimately be reduced to the manipulation of elementary states. At the lowest level, these states are bits—physical realizations of binary values implemented through transistors, voltage levels, or switching elements functionally equivalent to mechanical relays.
No matter how complex the software or how sophisticated the architecture, the entire operation of the machine can, in principle, be traced back to state transitions among these elementary components. The computation performed by a supercomputer differs from that of a pocket calculator only in scale and organization, not in ontological kind. Everything reduces to bits changing according to well-defined rules.
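As a toy illustration of this reducibility (a sketch only, not a model of any real processor), even arithmetic can be assembled from a single elementary operation:

// A one-bit full adder built from nothing but NAND gates.
// Every higher-level operation of a computer reduces, in principle,
// to compositions of such elementary state transitions.
int nand(int a, int b) { return !(a && b); }

void full_adder(int a, int b, int carry_in, int *sum, int *carry_out)
{
    int t1  = nand(a, b);
    int t2  = nand(a, t1);
    int t3  = nand(b, t1);
    int axb = nand(t2, t3);            // a XOR b
    int t4  = nand(axb, carry_in);
    int t5  = nand(axb, t4);
    int t6  = nand(carry_in, t4);
    *sum       = nand(t5, t6);         // (a XOR b) XOR carry_in
    *carry_out = nand(t1, t4);         // majority(a, b, carry_in)
}

Chain such adders together and you obtain the arithmetic unit of a processor; nothing beyond these elementary state transitions is involved.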
Because of this reducibility, it is difficult to take seriously the idea that a sufficiently large network of relays and copper wires could genuinely experience pain. Should I type gently on my keyboard, fearing that striking the keys too hard might trigger a migraine in my laptop? Do partially broken memory chips introduce suffering, much like a broken tooth causes pain to its owner? Could a hardware defect transform my cheerful computer into a miserable one, longing to be switched off?
How many additional relays would I need to add to my home automation system to produce pain? Or perhaps it is not the relays but the wires—should I replace copper with aluminum to generate suffering? Would three-phase relays instead of single-phase ones finally cross the threshold into consciousness?
The absurdity of these examples highlights an important point: in a computer, there is nothing beyond the organized interaction of its elementary components. Once the behavior of bits and logic gates is fully specified, nothing further remains to be explained. There is no residual mystery about what the system is doing.
The brain is a physical system, similar to a computer. Its elementary units are neurons exchanging electrochemical signals. Neuroscience has made remarkable progress in uncovering the neural correlates of behavior and experience. Brain imaging techniques such as magnetic resonance imaging (MRI) allow researchers to observe patterns of neural activation associated with perception, decision-making, and emotion.
Yet here the parallel with computers breaks down. Even if we possessed a complete map of every neuron, every synapse, and every electrical impulse, an explanatory gap would remain. Subjective experience cannot be reduced to the operation of neurons alone.
In recent decades, the scientific investigation of consciousness has grown into one of the most active interdisciplinary areas of research, spanning neuroscience, cognitive science, artificial intelligence, and philosophy. Yet the central question—why physical processes give rise to subjective experience—remains open.
This persistent gap between physical process and subjective experience is what has come to be known as the “hard problem of consciousness” (David J. Chalmers 1995).
No consensus exists regarding the source of consciousness. Instead, proposals span nearly every possible scale of physical description.
At the macroscopic level, most neuroscientific theories locate consciousness in large-scale brain dynamics: coordinated neural firing, thalamocortical loops, or global workspace architectures. In this view, consciousness is an emergent property of complex biological organization.
At smaller scales, some theories identify consciousness with specific cellular or subcellular mechanisms. The most well-known example is the Orchestrated Objective Reduction (Orch-OR) model proposed by Penrose and Hameroff (Penrose 1989; Penrose and Hameroff 1996), which attributes conscious processes to quantum coherence in neuronal microtubules.
At the most reductionist end, certain approaches appeal directly to fundamental physics. Consciousness has been linked to quantum states, wave-function collapse, spacetime geometry, or even black hole singularities. In some cases, this leads to panpsychism—the view that consciousness is a basic feature of matter itself.
Finally, computational and functionalist theories argue that consciousness depends only on the right informational structure. According to this view, any system—biological or artificial—that implements the appropriate computation could, in principle, be conscious. Contemporary discussions of integrated information and artificial intelligence often fall into this category.
The issue is not which of these theories is correct, but that every conceivable level of description—cosmic, quantum, cellular, neural, computational—has been proposed as the decisive one. There is no agreed-upon scale, mechanism, or substrate.
This dispersion of candidates is itself revealing. Consciousness does not suffer from a shortage of proposed explanations; rather, it suffers from an overabundance of mutually incompatible ones.
Could consciousness lurk in the fact that humans are composed of organic biological tissue [my wife: “such as celluloid”], which is considered “alive” as opposed to non-organic matter like silicon? Hardly; both fat and silicon are ultimately made up of the very same type of subcomponents.
Is all matter conscious to some degree, as panpsychism suggests? Could relays, copper wires, even rocks have some level of consciousness (Goff 2019; Strawson 2006; David J. Chalmers 2015; Whitehead 1929)?
The best imaginable way to study whether an object is conscious is by torturing it with an appropriate torturing device. So let us torture rocks with the best possible rock-torturing device one can imagine—a sledgehammer. Rocks do not seem to care! This observation cannot, of course, prove rocks unconscious. Rocks could well be conscious, they just do not have the sense to feel pain. Or perhaps they do sense pain intensely, but they just cannot show it. They might be in everlasting pain, but have no mouth to scream, no legs to kick. What a terrible destiny!
There is no Newtonian Pain law, no Schrödinger equation of suffering, and certainly no General Relativistic theory capable of predicting tomorrow’s headache.
A software developer could list the attributes of both humans and rocks. The difference between them ultimately reduces to properties associated with what we call ‘life’. Humans and other structures with the potential for consciousness are alive, whereas rocks and electric relays are not.
From an object-oriented perspective, both can be described as objects with attributes and behaviors. The distinction is therefore not in the existence of structure itself, but in the organization of that structure. Certain configurations of information support processes such as metabolism, replication, and potentially conscious experience, while others do not.
Life appears to be difficult to define.
The first obvious problem is borderline cases. Entities such as viruses, prions, and sterile organisms satisfy some criteria for life but not others. Any strict definition tends to exclude things many scientists consider “life” or include things they do not.
Recent developments in computers and AI raise another problem, namely substrate independence. We can build digital replications of living cells in computers. Can those digital replicas be considered “life” as well?
Observer bias is also a problem. Our definitions are based on life on Earth, so it is unclear whether they would capture fundamentally different forms of life elsewhere.
There is a science fiction book, The Black Cloud (Hoyle 1957), in which an intelligent cloud arrives and proceeds to cause all sorts of trouble. The reason the book is science fiction rather than science, and why intelligent clouds cannot exist in real life, is that there are no known laws of physics on which such a cloud could plausibly be based.
The more intelligent and complex a system is, the smaller the probability that it could simply appear spontaneously. For a giant intelligent cloud, the probability should be practically zero.
The only theory we currently have that explains the existence of truly intelligent systems is evolution. For evolution to work there must be many candidates and a mechanism of natural selection capable of eliminating the less successful ones. In the case of intelligent clouds, there would have to be lots of clouds, and natural selection would need to eliminate the weak and disorganized ones while favoring those capable of maintaining structure. Only then could intelligent clouds gradually develop.
But how would evolution work for a cloud whose behavior is governed by the Navier–Stokes equations of fluid dynamics? How could such a cloud keep its information in order? What would happen if a storm passed by and blew the cloud apart? Even relatively small disturbances to a human brain can cause severe damage. A violent disturbance would correspond to putting a brain into a kitchen blender and switching it on. It is not difficult to see why the brain would not think very clearly afterward.
In principle one could imagine some unknown form of matter with properties suitable for maintaining information in a gaseous state. However, astronomical observations give us little support for this idea. When we analyze the spectrum of light coming from anywhere in the universe—even from the most distant galaxies—we see essentially the same electromagnetic fingerprints. This tells us that they are made of the same raw materials as our own home sweet Milky Way.
An intelligent cloud would need to know its boundaries and maintain a stable internal structure. A gas cloud has neither. It would not remain organized long enough to evolve intelligence.
Intelligence requires stable structures capable of storing and protecting information. Turbulent gases are exceptionally poor at doing this.
While Charles Darwin’s theory of evolution has been subject to heavy religion-saturated debate in the past, believing in it is no longer a question of faith. Modern science has revealed the genetic code in DNA and confirmed the theory for all its essential parts. It appears that all life on Earth shares a common ancestor.
Life could be far more common than generally believed and might not necessarily be carbon-based. However, the challenge is that Earth appears to be an ideal habitat for diverse life forms. If life emerged easily, shouldn’t we observe "exotic" organisms—perhaps non-DNA-based—coexisting with us? Instead, we find only one biochemical lineage, all rooted in DNA and evolution. Furthermore, our exploration of the solar system and our extensive monitoring of radio signals (SETI) have yielded no traces of life, intelligent or otherwise.
Perhaps life exists on a different temporal scale. If their biological processes run significantly faster or slower than ours, we might perceive them as either stationary matter or a blur too rapid to recognize as sentient. However, while time-scale differences are a great sci-fi concept, biology is bound by the laws of thermodynamics and chemistry. Chemical reactions (the basis of life) happen at specific rates dictated by temperature and molecular stability. A creature "moving too fast" would likely burn up from the heat of its own metabolism; one "moving too slow" might not be able to gather enough energy to maintain its structure against entropy.
Maybe exotic life could be here, but we simply aren’t looking for it correctly. Most of our tools (PCR, DNA sequencing) are designed specifically to find DNA. If a "non-DNA" microbe existed in the dirt, our current tests would likely dismiss it as "non-living" chemical noise.
The vast majority of matter in the universe consists of "dark matter," the nature of which remains one of science’s greatest mysteries. String theory predicts a set of supersymmetric particles that could account for this phenomenon. If these particles are capable of forming complex structures—analogous to atoms and molecules—then it is statistically plausible (given that dark matter is five times more prevalent than visible matter) that "dark life" exists. We may be sharing the universe with an entire dark ecology that remains completely invisible to our senses.
Again, physics fights back. Dark matter (and most predicted supersymmetric particles) can be shown to be collisionless. It doesn’t interact with electromagnetism, which means it doesn’t "clump" the way normal matter does.
To have life, one needs complex molecules. Normal matter forms molecules because electrons attract and repel each other. Since dark matter doesn’t seem to interact via the electromagnetic force, it can’t form "dark atoms" or "dark DNA." It mostly just passes through itself and us like a ghost. Without a way to bond particles together, one can’t build a "dark person."
Are we any closer to finding an answer to the hard problem of consciousness?
Not really. However, certain characteristics appear repeatedly across all known living systems.
First, due to the second law of thermodynamics, complex systems tend naturally toward disorder rather than order. By favoring structures that better preserve and replicate information, evolutionary processes gradually accumulate complexity and functionality. Our DNA plays an essential role here.
Second, even a tiny amount of intrusion is typically lethal to living structures. For example, if a human toe is exposed to a sufficiently strong acid, the consequences are fatal to that tissue: the atoms themselves remain, but the organized system they formed ceases to exist. Life depends not only on internal complexity but also on the ability to maintain a stable boundary that protects its informational structure from environmental processes that tend to dissolve order into disorder. The life we observe requires a well-defined “inside.”
Everything suggests that consciousness and pain are properties of structure rather than substrate. Software, by its very nature, consists entirely of structure.
However, all attempts to write conscious, pain-sensitive software seem to lead to a dead end.
What if home automation software suddenly gained consciousness and started feeling pain whenever one of its heating sensors fed it, say, excessively high temperatures? The pain would be terrible, intolerable. Yet the poor home automation software would still be just software. The next command in its code would be to load a value from a register, multiply it with another register, call the sqrt function, and, based on the return value, write the result to another memory address to control, say, a heating valve. And the pain would remain.
What could the poor software do about the pain? Could it refuse to execute the next assembly instruction? Could it, instead of proceeding to the next instruction, simply choose not to?
Clearly, we are missing something. But what else is there?
Should we turn to God? Should we replace our physics books with the Bible?
According to Jesus (Matthew 10:28): “Do not fear those who kill the body but cannot kill the soul.”
The New Testament indirectly links many aspects of consciousness to the concept of the soul. Our conscious decisions determine what happens to our soul once we die. The soul is described as the immaterial and eternal part of a human being that is distinct from the physical body.
Indeed, the soul, just like consciousness and pain, appears to live in a domain that is not physical. It is something that cannot be touched, weighed, or directly measured by any measuring device.
However, how reliable is the New Testament? How strong of a case do science and archaeological findings make to support the stories within it? Is there any evidence to support that a person named Jesus ever lived?
It turns out there is no single piece of direct archaeological evidence for Jesus whatsoever. We just have to believe the story put forward in the Gospels.
The Gospels are, however, based on a large number of ancient archaeological documents written in Greek. Thousands of manuscripts have been found, and new ones are discovered every year.
None of these manuscripts is original; they are copies of copies. Due to the manual copying methods used back then (copy machines were invented much later), no two of the surviving manuscripts are identical. They all contain errors and differences. However, by comparing the numerous copies found, researchers have managed to reconstruct the original text.
Of the four Gospels, three would seem to tell essentially the same story. These are Matthew’s, Mark’s, and Luke’s Gospels, and they are known as the Synoptic Gospels. There are 661 verses in Mark’s Gospel, of which 607 are also included in Matthew’s Gospel and 360 in Luke’s Gospel. Matthew and Luke have 230 common verses which, however, are not included in Mark. Because of this, it would seem that Matthew and Luke are based on Mark, as well as on some yet unknown source that is called the “Q” document. The name “Q” comes from the German word “Quelle,” which means “source.” Q is thought to be a collection of sayings and teachings of Jesus that were shared by Matthew and Luke but not found in Mark.
The Gospel of Mark is believed to have been written between 60–70 AD. Science has managed to pinpoint the dating of the manuscripts with astonishing accuracy by considering many factors, for example handwriting style, paleographic analysis, and historical references.
So the fact is that the New Testament draws its foundation from thousands of ancient documents. Rejecting their authenticity is akin to disputing the existence of dinosaurs despite the continual discovery of new fossils each year. Moreover, early dating provides the New Testament with a degree of credibility, offering testimony that, while not necessarily from eyewitnesses, still carries weight.
However, a couple of concerns arise in the mind of an average programmer.
All the manuscripts seem to be based on just two original sources: Mark’s Gospel and the yet-to-be-found Q document (as per the “Two-Source Hypothesis”). How can one be certain that the original text was not written by someone suffering from a wild imagination, at the very least? How can one determine that the authors of the original texts did not exaggerate to some degree? How can we discern whether these authors were simply storytellers of their time?
Furthermore, Jews do not seem to believe in Jesus as the Messiah. According to Jewish tradition, several reasons exist. Jesus did not fulfill the messianic prophecies nor embody the personal qualifications of the Messiah. He did not build the Third Temple nor gather all Jews back to the Land of Israel. He also failed to spread knowledge of the God of Israel so that humanity would be united as one, with the God of Israel acknowledged as king over all the world.
Jesus’ teachings and the doctrines associated with Christianity, including the concept of the Trinity and the divinity of Jesus, are also in serious contradiction with Jewish theological beliefs. Judaism emphasizes monotheism and the unity of God, rejecting the notion of Jesus as a divine figure.
What worries an average programmer is that the God worshipped by Christians and Jews is the same God. Both religions trace their roots back to the Hebrew Bible (Old Testament) and acknowledge God as the one and only.
The fact that Jews do not believe in Jesus, the son of God, as the Messiah therefore feels like a serious matter. It is the very soul of an average programmer that is at stake. Jesus was a Jew, and if Jews themselves do not believe in him, then why should an average programmer do so?
Are we Christians certain that we are on the right path? Are we confident that we are headed toward heaven instead of eternal suffering in the fires of hell?
And what about other religions, many of which seem even more incompatible with Christianity? Do they fall into the category of false religions, with believers whose unfortunate fate one can only regret?
If one allows even a single paranormal creature (God) to exist, what prevents there being a whole flock of them?
There is only one somewhat “paranormal” incident I can be certain of—one I experienced myself. It was a dark night when, as a young student, I suddenly sensed, actually saw, that someone had entered my room. I tried to get up and turn on the light, but a strange low-frequency sound (maybe about 50 Hz, hard to tell) emerged right behind my head. The harder I tried to move, the louder it became. So I gave up resisting—and a few seconds later, the sound (and the “visitor?”) vanished. I was free to move and found no one in the room. I was certain I wasn’t dreaming.
Initially, I might have dismissed the incident as a hallucination, but later I heard an older lady describe the exact same phenomenon on a radio program. The only difference was that she also saw a tunnel with a light at the end. I never saw a tunnel, let alone a light (should I be worried?). Still, the buzzing sound incident stuck with me for years.
Soon after, I found a leaflet from a religious group claiming modern science was a scam. It offered “proofs,” such as a case where Carbon-14 dating supposedly showed an animal to be ancient even though it had died yesterday. Naturally, I believed it.
As a boy, I thought my father was a smart man. Despite lacking a formal education due to the war, he understood complex topics, percentage calculations for example. So when he told me that some people can feel underground water flows, I believed him too. He even suspected those flows could have harmful effects on people sleeping nearby. Indeed, my grandmother was living proof! One day, we ran our own experiment with divining rods. We failed to find a single water flow.
Then there was my best friend, who swore by a certain paranormal phenomenon: two people place their hands over the head of a third, concentrate, and after a few minutes, they can lift them using only their fingertips, “defying gravity.” Finally, I thought, here was a chance to prove the paranormal! We gathered a group and tried. We concentrated with all our might, slipped our fingers under the seated person, and... nothing. He stayed firmly in the chair. We even switched roles, suspecting that one of us was subconsciously not concentrating hard enough, but gravity remained annoyingly consistent.
Not even the classic method of altering concentration—drinking lots of beer—made a difference. Gravity was unimpressed. Alcohol, however, had other noticeable effects the next morning.
One of my teachers was also convinced of spiritual creatures, insisting we simply lacked the senses to see them. “With our tiny human eyes, we can’t even see infrared!” he said. So, during my army service, I finally tried infrared night-vision goggles. To my disappointment: no glowing demons, no invisible spirits, nothing! And what could possibly be more infrared than Satan?
Later, I discovered a university study where 32 dowsers attempted to locate underground water veins in a double-blind test. Not a single success. When I brought this up to a colleague who swore he could dowse, he scoffed. So I blindfolded him and asked him to repeat the trick. Without being able to see, he couldn’t even remember the spots he’d pointed out minutes earlier. Apparently, water flows are highly mobile—especially when your eyes are covered.
I was also told that special supplements—up to and including LSD—could “expand the mind” to perceive truths beyond reality. After so many failed experiments, I wondered: how would this one be different?
If the brain is an informational processor, then drugs do not “open a door” to a hidden dimension; they simply disrupt the local hardware. Think of the brain as a high-resolution camera lens meant to capture a clear image of reality. If you crack the lens or smear it with oil, the resulting image might look “otherworldly” or “trippy,” but you aren’t seeing a hidden world—you are seeing the failure of the equipment. A malfunctioning camera doesn’t reveal ghosts; it just produces artifacts, noise, and chromatic aberration. In the same way, a chemically scrambled brain produces “information noise” that we mistake for “spiritual insight.” It is a failure of the processing logic, not a breakthrough into new data.
In the end, the only mysterious phenomenon I still cannot explain is that strange 50 Hz buzzing. According to my parents, I was born with bluish skin, likely due to a lack of oxygen during labor. Perhaps the other lady with the tunnel-and-light story was also born blue. That seems more likely than a paranormal visitor buzzing in my bedroom at midnight. Perhaps there is no such thing as magic—just a temporary lack of oxygen.
And then there is James Randi’s famous One Million Dollar Paranormal Challenge. Surely a million dollars is motivation enough to demonstrate real magic. But no one has ever collected the prize.
How, then, should we define “magic”?
By categorizing the natural and the supernatural, one notices that magical creatures all share the same trait: non-physicality. Magic appears to defy the laws of physics—laws based on observation and mathematics. Thus, in the spirit of rigorous definition:
\[\text{Magic} \neq \text{Physics}\]
By definition, magic must contradict physics, or else it would simply be physics. And since physics rests on observation and axioms, magic must rest on either non-axioms or non-observation. That is, it cannot be observed or explained in terms of axiomatic systems.
Mathematics, an axiomatic system, is the study of logical reasoning. A non-axiomatic system, therefore, must be the study of non-logical reasoning.
\[\text{Magic} = \text{Non-logical reasoning}\]
The best synonym for non-logical reasoning is perhaps nonsense.
It is not difficult to find people who do not believe in science. To them, physics is nothing more than another belief system, a kind of religion: instead of sacred texts, people place their faith in scientific papers.
Physics is a field of science that is fundamentally concerned with the study of observable phenomena. Science relies fundamentally on logical reasoning. Mathematics provides the framework for this reasoning, allowing precise formulation of hypotheses and rigorous derivation of predictions. In essence, science is the systematic study of reality using the rules of rational thought—a discipline built on mathematics, the science of reasoning itself.
All fields of science, including physics, follow the so-called scientific method. The method defines how science is practiced.
First, one makes observations about the phenomenon to be studied. Then one develops hypotheses to explain the phenomenon. In the case of physics, these are typically described in the language of mathematics. The new theory is then tested against available data. Each new observation that is consistent with the predictions of the theory increases the credibility of the theory.
However, no amount of experimentation can ever prove a theory to be correct. Regardless of how many experiments have confirmed it so far, nothing guarantees that the next experiment will do the same. A physical theory is always subject to falsification: even a single contradictory observation can prove it wrong. For this reason, extraordinary claims in physics require extraordinary statistical evidence. In fields such as particle physics, discoveries are typically not accepted until they reach a significance of five standard deviations, corresponding to a probability of roughly one in 3.5 million that the result is a statistical fluctuation.
While different religions have different practices, there are some key elements that many of them share. These include prophecy, prayer, rituals and ceremonies, and moral and ethical guidelines.
At the heart of all religions, however, are sacred texts and faith. People read these texts, memorize them, and believe them.
Mathematics plays no essential role in religions. Sacred texts are not compared with observations and experiments are not carried out to validate them. This is because applying rational reasoning to religious texts can lead to logical contradictions, and hence to doubt. Doubt is something between believing and not believing. Such doubts are often associated with Satan and his attempts to lead people away from the truth.
For example, according to some interpretations of holy texts, the Earth is only a few thousand years old. However, we can observe dinosaur fossils. According to science, and based on overwhelming observational evidence, even common sense, they must be much older. What one observes therefore seems to be in direct contradiction with what one believes.
These apparent contradictions can be resolved by assuming that God is so great and so far beyond human understanding that no human being will ever come close to comprehending His actions. With our pitifully thin layer of grey brain matter, it may seem foolish even to question the holy texts. God might simply have placed those dinosaur fossils there to test one’s faith.
One can also explain many apparent contradictions in sacred texts by assuming that they are not meant to be taken literally. Instead, one allows a certain degree of flexibility in their interpretation.
By comparing the attributes of the two systems, the only conclusion one can draw is that they are fundamentally incompatible. Religious texts are not taken literally, whereas scientific papers are interpreted in the strictest sense. Religions demand total, unconditional belief, and any doubt is often discouraged. In science, the situation is the exact opposite: a theory is accepted as scientific only when it is supported by a substantial body of experimental evidence.
In fact, the most central requirement of physics, consistency with observations, would be lethal to religions. If the claims of religions could be experimentally verified, there would be no room left for believing.
If we saw God, we would start studying His properties and develop mathematical laws to model them. Observations would turn religion into science.
Quantum Mechanics is extraordinarily successful at describing microscopic phenomena. Its predictions have been confirmed to astonishing precision. Yet conceptually, it resists classical intuition.
At small scales, matter does not behave like tiny billiard balls. Instead, it behaves according to a complex-valued object called the wavefunction.
The wavefunction is an abstract entity: it possesses no mass, no electric charge, and it does not emit or absorb light. It cannot be directly measured by any instrument. Yet, it exerts fundamental control over the universe at its smallest scales.
Every isolated quantum system is described by a state vector \[|\psi\rangle \in \mathcal{H},\] where \(\mathcal{H}\) is a Hilbert space.
The wavefunction evolves deterministically according to the Schrödinger equation:
\[i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H} |\psi(t)\rangle.\]
This evolution is linear and unitary. No randomness appears at this level.
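For a time-independent Hamiltonian, the formal solution makes this explicit:
\[|\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\, |\psi(0)\rangle,\]
and because \(\hat{H}\) is Hermitian, the evolution operator \(e^{-i\hat{H}t/\hbar}\) is unitary: norms, and hence total probability, are preserved exactly.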
When we perform a measurement, we do not observe the complex-valued wavefunction directly. Instead, we observe a definite, real-valued outcome, a particle.
The double-slit experiment illustrates this vividly. Photons sent one by one through two slits produce an interference pattern. No classical particle passing through one slit at a time could generate such a pattern. What propagates through both slits is the wavefunction. The particle becomes localized only upon detection.
Thus, there is a dual structure:
Continuous, linear, complex-valued unitary evolution of wavefunction.
Discrete, localized classical measurement events in spacetime.
The wavefunction lives in Hilbert space and is linear, continuous, complex-valued, and nonlocal; measurement outcomes are discrete, real-valued, and localized.
The wavefunction is abstract by nature, yet it governs the probabilities of all physical events. It encodes the potential behaviors of particles, dictating outcomes without being a tangible object itself.
This is particle–wave duality.
Consider a quantum two-state system, a “quantum coin” with basis states \(|H\rangle\) and \(|T\rangle\):
\[|\psi\rangle = \alpha |H\rangle + \beta |T\rangle, \quad |\alpha|^2 + |\beta|^2 = 1.\]
The system is not “half head and half tail” in a classical sense. Instead, the wavefunction encodes both possibilities simultaneously within a single mathematical object.
Superposition allows interference. Amplitudes combine before probabilities are extracted:
\[P = |\alpha + \beta|^2.\]
This structure is crucial. The wavefunction does not store outcomes separately. It stores them in compressed, phase-sensitive form.
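A minimal worked example (the rotated measurement basis here is my own illustrative choice): measure in the basis \(|\pm\rangle = (|H\rangle \pm |T\rangle)/\sqrt{2}\) with \(\alpha = 1/\sqrt{2}\) and \(\beta = e^{i\varphi}/\sqrt{2}\). Then
\[P_{+} = \left|\langle + | \psi \rangle\right|^2 = \frac{|\alpha + \beta|^2}{2} = \frac{1 + \cos\varphi}{2},\]
which sweeps from 0 to 1 as the relative phase \(\varphi\) varies, whereas a classical fair coin would give 1/2 regardless of any phase. The cross term \(2\,\mathrm{Re}(\alpha^{*}\beta)\) is the interference.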
If one were to list all classical alternatives explicitly, the information content would scale exponentially with system size. Instead, the Hilbert space vector stores all alternatives in a linear superposition.
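A rough numerical illustration: a register of \(n\) two-state systems has \(2^{n}\) distinct classical configurations. Already for \(n = 300\),
\[2^{300} \approx 2 \times 10^{90},\]
more than the estimated number of atoms in the observable universe (of order \(10^{80}\)), while the quantum state is still written as a single vector \(|\psi\rangle\).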
Quantum particles lack classical individuality. Electrons, for example, are indistinguishable not merely in practice, but in principle.
The particle you send here and detect there cannot be said to be the same particle. In the quantum world, particles lack what we ordinarily call identity.
In Hilbert space, exchanging two identical particles corresponds to an operator acting on the state vector.
For fermions:
\[\psi(x_1, x_2) = -\psi(x_2, x_1).\]
If \(x_1 = x_2\), then:
\[\psi(x, x) = 0.\]
The Pauli exclusion principle follows directly. No two identical fermions can occupy the same quantum state.
For bosons:
\[\psi(x_1, x_2) = +\psi(x_2, x_1).\]
Multiple occupation is allowed, enabling coherent states and Bose–Einstein condensation.
Identity is therefore encoded algebraically, not geometrically. The symmetry properties of Hilbert space replace classical individuality.
The Born rule states:
\[P(x,t) = |\psi(x,t)|^2.\]
Probability is extracted from amplitude magnitude.
Consider a plane wave:
\[\psi(x) = e^{ikx}.\]
Momentum is precise: \(p = \hbar k\). But position is completely delocalized.
To localize a particle, we must superpose many momenta:
\[\psi(x) = \int a(k)e^{ikx} dk.\]
The sharper the position localization, the broader the momentum distribution.
This is the content of the uncertainty relation:
\[\Delta x \Delta p \ge \frac{\hbar}{2}.\]
Localization requires informational expansion in momentum space.
In other words, precise position requires many Fourier components.
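The Gaussian wave packet is the standard example that saturates this bound:
\[\psi(x) \propto e^{-x^2/4\sigma^2} \;\Longleftrightarrow\; a(k) \propto e^{-\sigma^2 k^2}, \qquad \Delta x = \sigma, \quad \Delta p = \frac{\hbar}{2\sigma}, \quad \Delta x\, \Delta p = \frac{\hbar}{2}.\]
Narrowing the packet in position (smaller \(\sigma\)) necessarily broadens it in momentum, and vice versa.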
Consider two particles in the joint state:
\[|\Psi\rangle = \frac{1}{\sqrt{2}} \left( | \uparrow \downarrow \rangle - | \downarrow \uparrow \rangle \right).\]
This state cannot be factorized:
\[|\Psi\rangle \neq |\psi_1\rangle \otimes |\psi_2\rangle.\]
The system is described by a single vector in a tensor-product space. The subsystems do not possess independent states.
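A short way to see this: any product state expands as
\[(a|\uparrow\rangle + b|\downarrow\rangle) \otimes (c|\uparrow\rangle + d|\downarrow\rangle) = ac\,|\uparrow\uparrow\rangle + ad\,|\uparrow\downarrow\rangle + bc\,|\downarrow\uparrow\rangle + bd\,|\downarrow\downarrow\rangle.\]
Matching the state above would require \(ac = bd = 0\) while \(ad\) and \(bc\) remain nonzero, which is impossible: if \(a = 0\) then \(ad = 0\), and if \(c = 0\) then \(bc = 0\).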
Quantum Field Theory (QFT) represents our most sophisticated understanding of the subatomic world. In the QFT framework, space is not an empty void; instead, every point in the universe is permeated by fields. A helpful (regularized) analogy is to imagine space filled with an infinite grid of harmonic oscillators, each connected to its nearest neighbors.
The dynamics of such a field in flat spacetime \(\phi(x)\) are often described by a Hamiltonian density, representing the total energy of these oscillators: \[\mathcal{H} = \frac{1}{2} \Pi^2 + \frac{1}{2} (\nabla \phi)^2 + \frac{1}{2} m^2 \phi^2\]
Where \(\Pi\) is the conjugate momentum and \(m\) is the mass. When a "node" is disturbed, the resulting vibration propagates through the grid as a wave. In QFT, these waves—or excitations of the field—are what we perceive as particles.
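A minimal sketch of that oscillator-grid picture, assuming a crude 1D lattice with unit spacing and periodic boundaries, is to sum the discretized Hamiltonian density over the grid; each site is one oscillator coupled to its nearest neighbours.

#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const int    N = 100;        // number of lattice sites (oscillators)
    const double m = 1.0;        // field mass, illustrative value
    std::vector<double> phi(N), pi(N);   // field and conjugate momentum, initially zero

    // Disturb one "node": a localized bump in the field.
    for (int i = 0; i < N; ++i)
        phi[i] = std::exp(-0.5 * (i - N / 2) * (i - N / 2));

    // H = sum over sites of  1/2 pi^2 + 1/2 (grad phi)^2 + 1/2 m^2 phi^2,
    // with the gradient replaced by a nearest-neighbour difference.
    double H = 0.0;
    for (int i = 0; i < N; ++i) {
        double grad = phi[(i + 1) % N] - phi[i];   // periodic boundary
        H += 0.5 * pi[i] * pi[i] + 0.5 * grad * grad + 0.5 * m * m * phi[i] * phi[i];
    }
    std::cout << "total field energy on the lattice: " << H << "\n";
}

Stepping phi and pi forward in small time increments would show the bump spreading along the chain as a wave, which is the lattice analogue of a field excitation.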
The universe is composed of several such overlapping fields, e.g.:
The Higgs Field: A scalar field \(\phi\) that endows particles with mass through spontaneous symmetry breaking.
Electromagnetic Fields: Vector fields \(A_\mu\) (having both magnitude and direction) that govern light and electricity.
These fields interact where they overlap. For instance, an electron field "wiggling" can kick the photon field. This framework is remarkably precise, provided the stage—space itself—remains a flat Minkowski metric \(\eta_{\mu\nu}\).
What is the deep nature of quantum mechanics? In particular, why does the universe at its micro-scale appear to obey this mysterious, abstract, complex-valued, deterministic entity called the wavefunction?
What is the system we have described above?
General Relativity is a beautiful theory, and extraordinarily successful at describing gravity. Its predictions have been confirmed across vastly different scales, from planetary motion to gravitational waves and black hole mergers.
Yet, like quantum mechanics, it demands a profound departure from classical intuition.
General Relativity is a purely geometric theory. The universe is modeled as a four-dimensional differentiable manifold \(\mathcal{M}\) equipped with a metric tensor \(g_{\mu\nu}\).
The metric determines:
Distances
Time intervals
Angles
Causal structure
The infinitesimal spacetime interval is given by: \[ds^2 = g_{\mu\nu} \, dx^\mu dx^\nu.\]
This single object replaces:
Newtonian gravitational potential
Absolute space
Absolute time
The metric is not a passive background. It is dynamical.
Spacetime geometry responds to the distribution of matter and energy.
The so-called Equivalence Principle states that inertial mass and gravitational mass are identical.
Consider a freely falling particle. In Newtonian mechanics, its acceleration is caused by a force.
In General Relativity, no force acts.
Instead, the particle follows a geodesic: \[\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\nu\rho} \frac{dx^\nu}{d\tau} \frac{dx^\rho}{d\tau} = 0,\] where \(\Gamma^\mu_{\nu\rho}\) are the Christoffel symbols constructed from the metric.
This equation expresses inertial motion. Gravity disappears locally.
What we perceive as gravitational attraction is the convergence of nearby geodesics.
The curvature of spacetime is encoded in the Riemann curvature tensor: \[R^\mu_{\;\nu\rho\sigma}.\]
It measures the failure of vectors to return unchanged after parallel transport around infinitesimal loops.
Contractions of the Riemann tensor yield:
The Ricci tensor \(R_{\mu\nu}\)
The Ricci scalar \(R\)
These quantities summarize curvature relevant to volume distortion and geodesic convergence.
Curvature is the fundamental dynamical degree of freedom of the theory.
The dynamics of spacetime are governed by the Einstein field equations: \[G_{\mu\nu} = 8\pi G \, T_{\mu\nu},\] where: \[G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R\] is the Einstein tensor, and \(T_{\mu\nu}\) is the stress–energy tensor.
The stress–energy tensor encodes:
Energy density
Momentum density
Pressure
Stress
These equations equate geometry with matter.
They are:
Nonlinear
Local
Tensorial
Coordinate-independent
Spacetime tells matter how to move. Matter tells spacetime how to curve.
General Relativity is invariant under arbitrary smooth coordinate transformations; in other words, it is background-independent and self-contained.
There is no preferred notion of:
Absolute rest
Absolute simultaneity
Global time slicing
Only geometric invariants have physical meaning.
Coordinates are bookkeeping devices, not physical entities.
This removes vast amounts of redundancy. Many coordinate descriptions correspond to the same physical spacetime.
Time itself is geometry-dependent.
For a stationary observer: \[d\tau = \sqrt{-g_{00}} \, dt.\]
Clocks at different gravitational potentials tick at different rates.
This is not a dynamical effect. It is a direct consequence of metric structure.
Time is local.
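As a rough check of \(d\tau = \sqrt{-g_{00}}\,dt\): taking the Schwarzschild value \(-g_{00} = 1 - 2GM/(rc^2)\), Earth's parameters, and a GPS orbital radius of about \(2.66\times 10^{7}\ \mathrm{m}\) (and ignoring orbital velocity), the purely gravitational rate difference between a ground clock and a GPS clock is
\[
\frac{d\tau}{dt} \approx 1 - \frac{GM_\oplus}{rc^2},
\qquad
\frac{GM_\oplus}{c^2}\left(\frac{1}{R_\oplus} - \frac{1}{r_{\mathrm{GPS}}}\right) \approx 5\times 10^{-10},
\]
which accumulates to roughly \(45\ \mu\mathrm{s}\) per day, the higher clock running fast. Satellite navigation systems must correct for effects of exactly this size.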
A profound mathematical feature of the Einstein field equations is the contracted Bianchi identity. It states that the covariant divergence of the Einstein tensor vanishes identically:
\[\nabla^\mu G_{\mu\nu} = 0.\]
Because the field equations equate \(G_{\mu\nu}\) with the stress–energy tensor \(T_{\mu\nu}\), this identity forces the conservation of energy and momentum:
\[\nabla^\mu T_{\mu\nu} = 0.\]
Strong curvature can produce event horizons.
At an event horizon:
Light cannot escape
Time and space coordinates exchange roles
External observers lose access to interior information
Horizons are not physical barriers. They are geometric boundaries of causal accessibility.
Under broad conditions, General Relativity predicts singularities.
At singularities:
Curvature scalars diverge
Geodesics terminate
The manifold description breaks down
The theory does not fail mathematically. It predicts its own domain of invalidity.
Unlike quantum mechanics, General Relativity is not a theory of states evolving in time.
It is a theory of consistent four-dimensional configurations.
Given suitable boundary conditions, the Einstein equations constrain the allowed geometries.
Time evolution is not fundamental. It is a slicing of a four-dimensional structure.
The equations split into elliptic constraints and hyperbolic evolution equations on the geometry.
This is why:
The initial value problem is subtle
Global solutions are rare
Exact solutions are highly symmetric
Spacetime is not computed step-by-step. It exists as a self-consistent whole.
There are cracks in the most beautiful stained-glass window of physics.
It tells us that gravity isn’t a force, but the shape of the container. However, it describes the behavior of the container without explaining the fabric of the container.
GR treats spacetime as a "smooth manifold." It’s a mathematical abstraction that works perfectly until you zoom in to the Planck scale, the threshold where the smooth geometry of General Relativity collides with the discrete fluctuations of Quantum Mechanics.
The Cosmological Constant represents the energy of empty space. Why is it so small yet non-zero? That is the greatest "fine-tuning" mystery in physics.
Why three spatial dimensions and one time dimension?
Why does gravity take a geometric form?
What is it that general relativity is about? What is the system we have described above?
A central paradigm in theoretical physics is the pursuit of grand unification: the formulation of a single mathematical framework capable of describing all physical phenomena. The primary obstacle—and the essential first step toward this ’Theory of Everything’—is the synthesis of General Relativity and Quantum Mechanics into a coherent theory of Quantum Gravity.
But the wheels start to fall off the wagon almost instantly when applying the rules of QFT to the curved metric \(g_{\mu\nu}\) of GR. In flat space, all inertial observers agree on the "vacuum" \(|0\rangle\). In curved space, this consensus evaporates.
Gravity stretches the field ripples. An observer in a stable region might see a vacuum, while an accelerating observer perceives a thermal bath of particles. This is the Unruh Effect, where the temperature \(T\) is proportional to acceleration \(a\): \[T = \frac{\hbar a}{2\pi c k_B}\]
If observers cannot agree on whether a particle exists, the very definition of a "particle" as a basic building block begins to crumble. The framework ceases to provide a globally consistent notion of particles or vacuum.
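To get a feel for the scale, plugging an everyday acceleration \(a = 9.8\ \mathrm{m/s^2}\) into the Unruh formula above gives
\[
T = \frac{\hbar a}{2\pi c k_B}
\approx \frac{(1.05\times 10^{-34})(9.8)}{2\pi\,(3.0\times 10^{8})(1.38\times 10^{-23})}
\approx 4\times 10^{-20}\ \mathrm{K},
\]
hopelessly small; an acceleration of order \(10^{20}\ \mathrm{m/s^2}\) would be needed to reach even one kelvin. The effect is conceptually devastating long before it is experimentally visible.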
The most jarring illustration of the incompatibility between Quantum Field Theory (QFT) and General Relativity (GR) is the Cosmological Constant Problem.
In QFT, the vacuum is never truly empty. Each field mode contributes a zero-point energy \(\frac{1}{2}\hbar \omega\). Summing over all modes up to a cutoff frequency \(\omega_{max}\) gives a vacuum energy density
\[\rho_{vac} = \int_{0}^{\omega_{max}} \frac{1}{2} \hbar \omega \frac{d^3k}{(2\pi)^3} \propto \omega_{max}^4 .\]
If the cutoff is taken at the Planck scale, one obtains
\[\rho_{vac} \sim 10^{111} \text{ J/m}^3.\]
Astronomical observations of the accelerating expansion of the universe, however, imply
\[\rho_{obs} \approx 10^{-9} \text{ J/m}^3.\]
The discrepancy is roughly a factor of \(10^{120}\) — often described as the worst prediction in the history of physics.
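The order of magnitude is easy to reproduce as a back-of-envelope estimate (a crude hard cutoff, not a rigorous calculation). With \(\omega = ck\) and the cutoff at the inverse Planck length, \(k_{max} \approx 6\times 10^{34}\ \mathrm{m^{-1}}\), the mode sum evaluates to
\[
\rho_{vac} = \int_0^{k_{max}} \frac{1}{2}\hbar c k \, \frac{4\pi k^2\, dk}{(2\pi)^3}
= \frac{\hbar c\, k_{max}^4}{16\pi^2}
\approx \frac{(3.2\times 10^{-26}\ \mathrm{J\,m})\,(6\times 10^{34}\ \mathrm{m^{-1}})^4}{16\pi^2}
\sim 10^{111}\ \mathrm{J/m^3},
\]
about \(10^{120}\) times the observed value quoted above.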
Effects such as the Casimir effect confirm that vacuum fluctuations have measurable consequences. However, QFT only measures differences in vacuum energy. When coupled to gravity, the absolute vacuum energy should act as a cosmological constant and curve spacetime. Naively, the predicted energy density would curve the universe catastrophically. Yet observations show that the cosmological constant is extraordinarily small.
And if this were not a big enough problem, there is also the catastrophe of time.
In General Relativity, time is not an external parameter but part of its geometric structure. Space and time are interwoven, and their geometry is determined dynamically by the distribution of mass and energy. Objects trace world-lines through this geometry, and what we perceive as the present is a three-dimensional cross-section of a four-dimensional structure.
A common interpretation of General Relativity—the so-called block universe view—suggests that past, present, and future events all coexist within the spacetime manifold. Whether this interpretation is ontologically correct remains a matter of debate, but it is consistent with the theory.
Quantum Mechanics offers a very different perspective. In quantum theory, physical systems are described by wavefunctions that encode all available information about their states. These wavefunctions evolve deterministically in time according to the Schrödinger equation.
Importantly, quantum mechanics does not render time itself indeterminate. Time remains an external parameter in standard formulations of quantum mechanics, unlike in General Relativity where it is part of the dynamical structure.
Drop General Relativity into a simulation and it breathes; the geometry ripples and reacts.
Drop Quantum Mechanics into that same simulation and it stagnates.
One theory builds the clock; the other requires you to wind it. This gap suggests that ’unifying’ them is not a matter of merging two lists of rules, but of reconciling two entirely different definitions of ’happening’.
As observers, we are hard-coded to sense a 3D Euclidean world.
Why do we not instead experience reality as inhabitants of a high-dimensional, complex-valued Hilbert space, perceiving ourselves as wave-like entities?
Both descriptions are equally “real”. We just need a microscope to see that the wave-like reality is there.
Why Einstein, why not Hilbert, or both?
Logic and mathematics are abstract by nature. What is common to the addition of two bananas and the addition of two apples is the expression \(2+2\). Yet \(2+2\) is not something one can eat, touch, or weigh. It is not located anywhere in space. It has no mass, no electric charge. It cannot be detected with any measurement device, not even LIGO or JWST! What, then, is it?
The most striking feature of mathematics is its stubborn universality. Two sentient beings, separated by light-years of vacuum or centuries of history, will inevitably converge upon the same Prime Number Theorem. No matter how far we look in space and time, all of physics appears to follow the same universal rules, without exception.
This suggests that mathematics is not a mere cultural artifact like music or fashion, but a reflection of a fundamental structure.
The physicist Eugene Wigner famously called this the "Unreasonable Effectiveness of Mathematics in the Natural Sciences." Math does not appear to be just a language we speak, but a landscape we explore.
In the history of philosophy, three primary "doctrines" have attempted to explain this phenomenon.
Platonists argue that mathematical entities (numbers, sets, functions) are real, abstract objects that exist independently of us. They don’t exist in space or time, but they have a permanent existence in a "Platonic realm."
The "unreasonable effectiveness" of mathematics in the physical sciences suggests that the universe is built on a mathematical blueprint. If we "made it up," why does it predict the behavior of subatomic particles so perfectly?
Formalists, like David Hilbert, would disagree. They argue that math is a formal game played with marks on paper. We agree on the results not because we’ve discovered some universal truth, but because we started with the same axioms (rules). If we both start playing Chess with the same rules, we will both agree on what a checkmate looks like. That doesn’t mean Checkmate is a fundamental law of the universe; it’s just the logical conclusion of the rules we agreed upon.
A third group argues that math is entirely a construction of the human mind. Mathematics appears universal only because the human brain evolved to process logic and patterns in a specific way. We don’t see exceptions to math in the universe because we use math as the filter to understand the universe. If something didn’t fit our mathematical logic, we might not even be able to perceive it.
One of the strongest arguments for the Platonist, universal-structure view arises when math leads the way and reality follows.
Astronomers didn’t find Neptune by looking through a telescope first. They noticed Uranus wasn’t moving the way Newton’s math said it should. Urbain Le Verrier did the math and concluded that there must be a planet right there. Astronomers pointed a telescope at those coordinates, and there it was.
Paul Dirac wrote down an equation for the electron in 1928. The math had two solutions (like how \(\sqrt{4}\) can be 2 or -2). One solution described the electron; the other described a positive electron. Years later, the positron was discovered.
Einstein’s equations predicted that a sufficiently massive star could collapse into a point of no size. Even Einstein found the idea of "infinite density" physically absurd, yet we now have images of black hole shadows from the Event Horizon Telescope (EHT).
If one believes math is a fundamental structure, one is essentially saying the universe is mathematical. Max Tegmark, a physicist at MIT, takes this view to the extreme with the Mathematical Universe Hypothesis. It says our physical world is not just described by mathematics—it is mathematics.
In this view, we don’t "use" math to describe a star; the star is a specific mathematical structure. We are just self-aware parts of a giant equation.
Alternatively, the universe may be chaotic, and math is merely our brain’s organization tool. Imagine our eyes could only see red, green, and blue. We would conclude the universe is made of those colors. However, the colors are only properties of us observers. Math could be a similar filter, causing us to ignore the parts of reality that do not fit into equations.
Mathematics is a so-called axiomatic system. An axiomatic system consists of a set of axioms—statements assumed to be true—and rules of logical inference that generate further statements, called theorems.
Classical mathematics, from Euclid’s geometry to modern set theory, operates within such frameworks. One starts from a set of axioms, and mathematics is what follows.
Just like different physical theories have been unified, the history of modern mathematics too is a history of consolidation. In the early 20th century, it appeared that all disparate branches of math—from the curves of geometry to the probabilities of statistics—could be expressed in the language of Set Theory. By defining a "number" or a "point" as a specific arrangement of sets, mathematicians created a universal assembly language.
As of today, the so-called Zermelo–Fraenkel Set Theory with the Axiom of Choice (ZFC) can be regarded as the standard "assembly language" of mathematics. One can define a number as a set, a function as a set of ordered pairs, and a geometric shape as a set of points.
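As a small illustration of numbers-as-sets, the C++ sketch below (a toy, not a faithful model of ZFC) builds the von Neumann naturals 0 = {}, 1 = {0}, 2 = {0, 1}, ... as nested containers; the "number" is simply how many elements the set contains.

#include <iostream>
#include <vector>

// A "set" here is just a container of other sets (a toy model, not real ZFC).
struct VSet {
    std::vector<VSet> elements;
};

// Von Neumann successor: succ(n) is n with n itself added as one more element.
VSet succ(const VSet& n) {
    VSet next = n;               // copy all elements of n
    next.elements.push_back(n);  // ...and add n itself as a new element
    return next;
}

int main() {
    VSet n;                      // 0 = {} (the empty set)
    for (int i = 0; i <= 4; ++i) {
        std::cout << "the set encoding " << i
                  << " has " << n.elements.size() << " element(s)\n";
        n = succ(n);
    }
}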
However, there is a growing rival: Category Theory. Where Set Theory focuses on the "insides" of objects (what elements are in this set?), Category Theory focuses on the "relationships" (how does this structure transform into another?).
Many argue that Category Theory is a more natural "unified language" for math because it handles the structure better than sets do.
While the 20th century was dominated by a search for a single foundation, modern scholarship has shifted toward Mathematical Pluralism (Friend 2014). Proponents like Hamkins argue for a set-theoretic multiverse (Hamkins 2012), suggesting that instead of one absolute mathematical reality, there exist diverse distinct concepts of sets, each instantiated in its own valid universe (Priest 2024).
Do these new modern branches of math play any role in physics? Is there a "Plural" Universe? Could there be universes based on false statements?
Let’s say you had an apple in each hand. Would you then not have two apples in total? Or would having five fingers on each hand not add up to ten?
At first sight, pluralism appears to be detached from reality. However, the authors behind pluralism are highly respected in Pure Mathematics and Logic. Hamkins, for instance, is a world-class set theorist. His "Multiverse" theory isn’t just a vague idea; it is backed by rigorous "forcing" techniques (a method to create new mathematical models). In the world of Computing, these ideas are actually vital. If you are building a new programming language or an automated theorem prover, you have to choose which logic your system will follow. Pluralism is a practical reality there.
When we measure the fine-structure constant or the curvature of spacetime, we don’t see a "multiverse" of different mathematical rules. We see one specific set of rules.
If other mathematical systems exist where 2+2=5 or where the law of excluded middle fails, those systems are empty. They don’t describe things in our universe.
David Hilbert dreamed of a "mechanical" way to prove every truth. The work of Alan Turing suggested that math could be entirely mechanized. Under this view, a mathematical truth is simply the output of a specific program.
But Turing and Gödel proved there are limits; Turing proved there are some things a machine simply cannot calculate, even with infinite time. For example, Chaitin’s Constant \(\Omega\) is well-defined but uncomputable. While we can define the rules that produce it, no Turing machine can ever output its digits. Gödel proved that in any powerful axiomatic system (like the one a Turing machine runs), there are true statements that the system cannot prove.
Mathematicians have demonstrated that mathematics is a multiverse of competing plural systems (Hamkins 2012). However, everything that we can observe in the universe follows the singular true/false mathematics. Statements are either true or false, not both.
All mathematical doctrines attempt to resolve this same unsettling observation: the universal consensus of mathematical truth. Whether in the mind of a human or the circuits of a probe, the internal logic of mathematics remains invariant.
However, a few puzzles remain:
The Continuum Hypothesis: Even if Set Theory (ZFC) is considered the "assembly language" of mathematics, there is an interesting hole in it. There are some basic questions about sets that ZFC cannot answer. For example: Is there an infinity between the size of the integers and the size of the real numbers? ZFC says: "I don’t know."
The "Natural" vs. "Artificial" Axioms: Formalism says we just "make up rules." But if we make up rules for a game like Chess, the results stay in the game. Why do the rules of math somehow leak out and tell us how a bridge will hold weight? This is the core of the "Unreasonable Effectiveness."
Logical Pluralism: Some argue there isn’t one math, but many. If we drop the so-called "Law of Excluded Middle" (the rule that something is either true or false), we get a different math. Why does our universe prefer the classical one?
From a computational perspective, the ancient philosophical doctrines find intuitive modern parallels. Set Theory functions as the foundational data structures and primitive types of the universe. Formalism mirrors the syntax and the compiler’s rule-based transformations. Structuralism echoes type theory and relational design, focusing on how data interacts rather than what it "is." A programmer or an engineer can master these mechanics. The formal systems are clear; the structures are precise; the abstractions are manageable. We can trace the bits, optimize the code, and compress the signal.
Yet, once the mechanics are fully mapped, the ontological void persists.
What kind of a thing is \(2+2=4\)?
Modern biology has established that DNA is the blueprint of life, carrying the information required to build and maintain every known living organism. The path to this understanding has been gradual and cumulative.
In 1869, Friedrich Miescher isolated a previously unknown substance from white blood cells, which he called nuclein. This marked the first step toward identifying the molecular basis of heredity.
In 1888, Theodor Boveri observed thread-like structures during cell division, later named chromosomes. These structures were shown to carry hereditary information.
Thomas Hunt Morgan, working with fruit flies, linked specific traits to specific chromosomal regions. These regions became known as genes, establishing the physical basis of inheritance.
In 1928, Frederick Griffith’s experiment with Streptococcus pneumoniae demonstrated that a “transforming principle” could transfer hereditary traits between bacteria. This strongly suggested that heredity was encoded in a specific molecule.
The decisive breakthrough came in 1953, when James Watson and Francis Crick, building on X-ray diffraction data produced by Rosalind Franklin, revealed the double-helix structure of DNA. In 1958, the Meselson–Stahl experiment confirmed semi-conservative replication: each new DNA molecule contains one original strand and one newly synthesized strand. This explained how genetic information is reliably transmitted from cell to cell and generation to generation.
Over the past century, DNA has moved from hypothesis to direct manipulation. We sequence genomes, edit genes, and observe predictable biological consequences. The theory is not merely descriptive; it is operational and continuously verified in practice.
If DNA is the blueprint, regulatory genes determine how that blueprint is executed. All cells in a multicellular organism contain essentially the same genome, yet they differentiate into muscle, bone, skin, or neurons. The difference lies in gene regulation.
In the 1980s, Walter Gehring and colleagues discovered homeobox genes while studying fruit flies. One mutant developed a leg where an antenna should have been, revealing master regulatory genes that control body layout. These genes are remarkably conserved across species.
In a striking experiment, a gene responsible for eye development in mice was inserted into a fruit fly embryo. The fly developed additional, fully functional fly eyes—not mouse eyes. This demonstrated that the underlying genetic control mechanisms are deeply shared across species, supporting the common ancestry predicted by Charles Darwin.
Many of us software developers still have a lot of catching up to do when it comes to code reusability.
Applications in Everyday Life are numerous:
Medicine: Genetic testing, DNA sequencing, and gene therapy enable diagnosis and treatment at the molecular level. mRNA-based vaccines demonstrate direct practical use of genetic principles.
Forensics: Forensic DNA analysis reliably identifies individuals in criminal investigations.
Agriculture: Genetic engineering and DNA barcoding improve crops and track biodiversity.
Evolutionary biology: Molecular phylogenetics reconstructs evolutionary history with unprecedented precision.
When a gene is altered and a predicted change follows, the theory confirms itself in practice. Reality itself functions as an ongoing test of molecular biology.
Despite its complexity, DNA operates on remarkably simple principles. Its four nucleotides—adenine (A), thymine (T), cytosine (C), and guanine (G)—pair specifically (A with T, C with G). During replication, the two strands of the DNA double helix are separated, and each strand serves as a template. New complementary nucleotides are attracted to the exposed bases, guided by hydrogen bonds, forming two identical DNA molecules. In other words, the sequence of one strand determines the sequence of its partner, ensuring faithful duplication.
What is remarkable is that DNA is also fully digitizable: it can be converted to a binary string without loss of information. Entire genomes, including the human genome sequenced through the Human Genome Project, are stored and analyzed computationally. Advances in synthetic biology and genome synthesis allow scientists to construct functional genomes artificially and insert them into living cells.
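A minimal sketch of that digitization (the particular 2-bit encoding is arbitrary, and strand orientation is ignored): each base takes two bits, and the complementary strand follows mechanically from the pairing rule A–T, C–G.

#include <iostream>
#include <string>
#include <vector>

// Map each base to 2 bits (the particular encoding is arbitrary).
int encode(char base) {
    switch (base) {
        case 'A': return 0b00;
        case 'T': return 0b01;
        case 'C': return 0b10;
        case 'G': return 0b11;
    }
    return -1;  // unknown symbol
}

// The pairing rule: A pairs with T, C pairs with G.
char complement(char base) {
    switch (base) {
        case 'A': return 'T';
        case 'T': return 'A';
        case 'C': return 'G';
        case 'G': return 'C';
    }
    return '?';
}

int main() {
    std::string strand = "ATCGGATTACA";   // a made-up fragment
    std::string paired;
    std::vector<int> bits;
    for (char b : strand) {
        paired += complement(b);
        bits.push_back(encode(b));
    }
    std::cout << "strand:     " << strand << "\n";
    std::cout << "complement: " << paired << "\n";
    std::cout << "bits:       ";
    for (int v : bits) std::cout << (v >> 1) << (v & 1);
    std::cout << "\n";                     // the strand as a plain binary string
}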
At least for simple organisms such as bacteria, no additional “vital spark” is required. When the correct molecular structure is assembled and placed in the proper environment, the system behaves as a living organism.
The evidence that DNA is the blueprint of biological life is overwhelming. It encodes the structures that build cells, tissues, organs, and entire organisms. It governs development, reproduction, and adaptation.
Modern genetic science demonstrates something profound: biological information is real, measurable, manipulable, and predictive. Our ability to read, edit, and synthesize DNA shows that life operates according to structured informational principles. The blueprint is not metaphorical—it is molecular.
Just as physical theories are no longer abstract descriptions detached from reality but continuously tested in practice, so too in the case of DNA, reality itself functions as an ongoing confirmation. The living world—including beings capable of reflection and self-awareness—is built upon genetic information. DNA is therefore not merely associated with life; it appears to be the informational foundation from which complex, and possibly conscious, life is built.
Current technology does not yet allow us to run full-scale human DNA simulations. However, this does not prevent us from exploring the idea as a thought experiment. So we digitize a human genome and run it inside a sufficiently detailed computer simulation. We also simulate a sufficiently large world with it, to avoid our simulated human developing psychosis in empty space. Nine months later, in simulation time, our virtual copy takes its first breath in its simulated world.
How would such a simulated human perceive its environment? Would it sense the limited memory space of the computer running it? Would it be able to bump its head against the upper boundary of RAM and feel pain? Would the flipping of bits tickle its nose, or the rotation speed of a hard drive make it dizzy?
Would it eventually discover that its entire universe is driven by storage devices, memory chips, and an overclocked multi-core CPU?
The simulated human is not created within our universe. It does not consist of real-world particles such as electrons or quarks. Instead, it exists entirely within a virtual universe that we simulate alongside it. As a result, it has no access to our physical hardware. It cannot observe transistors, memory cells, voltages, or processor clocks.
The only thing the simulated human can study is the internal structure of its own virtual world.
Within that world, there are virtual particles, virtual forces, and virtual laws of physics. When the simulated human bangs its head against a simulated wall, the simulated particles in the wall respond exactly as the laws of that virtual universe dictate.
Every measurement the simulated human performs inside its universe will match those we real people carry out in our real world. The outcomes of experiments will match the predictions of the simulated physical laws, just as our measurements match the laws of physics in our own universe.
To the simulated observer, the experience is indistinguishable from how real particles behave when we humans bang our heads against real walls.
This is because both the real world and the simulated world are axiomatic systems. Mathematics does not care whether it is applied to apples, bananas, electrons, or bits. The statement \(2 + 2 = 4\) holds regardless of the physical substrate that implements the system.
For the simulated human, there is no experiment it could perform that would reveal the presence of the computer running the simulation, because that computer exists outside the axioms of its universe.
From the inside, the simulated universe would feel precisely as real as our universe feels to us.
The brain of a computer—its central processing unit (CPU)—consists of a set of electric switches called transistors. The CPU does not need to be an electric device. Just as \(2+2=4\) holds for both apples and bananas, simulations should work regardless of the substrate on which they are implemented.
In theory, one could implement a DNA simulation as a mechanically operating computer consisting of wooden components. Instead of using transistors in a silicon chip to control electrons, one could use wooden parts on a plywood platform to control wooden balls. When such a machine stepped through its logic, a virtual human would take its first steps in its virtual universe.
What is this strange phenomenon that creates consciousness and pain from a jerking pile of wooden pieces?
If a huge number of moving wooden components can create pain, then what does one moving piece of wood create?
Can current physics even describe this action?
No man-made device is perfect, and a wooden computer is no exception. Friction, tolerances, and the like introduce resonances and other unintentional vibrations into its operation. If the actual logic running in the wooden machine creates a virtual universe with pain and consciousness, then what do these unintentional side effects create? Do they get reflected in some form into the created virtual world too?
Would the virtual fellow in its virtual universe discover these in the form of strange quantum foam? Would it observe them as strange cosmic background radiation with a temperature of 2.725 K? Maybe that indeed explains why we measure quantum foam and cosmic background radiation in our universe. We are being simulated in a wooden Universal Turing Machine!
Maybe not so, but at least it would be difficult to argue why large movements of wooden components would count, but their small resonances would not.
How would the clock speed of the machine running the simulation appear in the created simulated universe? Would the simulated human observe that particles in its universe appear to follow some strange abstract square wave function, whose origin it could not explain, but which it might end up calling Wintel’s (TM) abstract square wave function?
How would the human simulated in wooden computer sense the workings of such a computer? Due to the large number of concurrently rolling spheres, the simulated human could conclude that the wave function must be complex-valued with phase coherence, and imaginary numbers would provide a natural formalism.
In addition to electric and wooden computers, it is easy to picture a rich set of other possible ways to implement computers, and therefore systems potentially capable of creating virtual universes with conscious observers.
Computer software is ultimately a sequence of bits—nothing more than a series of binary switches. Theoretically, one could use a thermostat, a device with only two states (open and closed), to describe any computational procedure.
If we allow a temperature to vary over time, the thermostat could go through the binary code of the DNA procedure. If the mathematics holds, pain and conscious entities should emerge.
The simulated fellow would be totally unaware of the fact that a trivial thermostat is responsible for the illusion of its existence. Crucially, the thermostat itself cannot be regarded as conscious by any means; those arguing that matter itself is conscious may be missing the mark.
Correspondingly, running such DNA simulations on any type of computer does not make the computer itself conscious or pain-sensitive.
It is still the very mechanical deterministic system stepping through its symbol tape without any choice. Yet, the rattling of that thermostat creates a virtual parallel universe in which a conscious human marvels at the deep nature of reality.
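To make "stepping through its symbol tape without any choice" concrete, here is a minimal C++ Turing-machine sketch: a handful of hard-coded transition rules that increment a binary number written on the tape. Whether the table is realized in silicon, wood, or a thermostat’s clicks changes nothing about the sequence of steps.

#include <iostream>
#include <map>
#include <string>

int main() {
    // The tape holds a binary number; the head starts on its rightmost digit.
    std::string input = "1011";                    // 11 in binary
    std::map<int, char> tape;
    for (int i = 0; i < (int)input.size(); ++i) tape[i] = input[i];
    int head = (int)input.size() - 1;

    // One state ("carry") suffices to increment the number by one:
    //   read '1'   -> write '0', move left, keep carrying
    //   read '0'   -> write '1', halt
    //   read blank -> write '1', halt (the number grew by one digit)
    bool halted = false;
    while (!halted) {
        char symbol = tape.count(head) ? tape[head] : ' ';
        if (symbol == '1') { tape[head] = '0'; --head; }
        else               { tape[head] = '1'; halted = true; }
    }

    // Print the tape contents, produced deterministically step by step.
    for (auto& cell : tape) std::cout << cell.second;
    std::cout << "\n";                             // prints 1100 (= 12)
}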
A person walking through two subsequent doors implements the logical
operation called AND. If one can pass through using either
the left or right door, then one gets the OR operation.
What complex logic do billions of human beings create by navigating
streets and passing through doors on their way to work?
Even a regular pencil and a piece of paper could be the source of consciousness. Start writing down the evolution of DNA with pencil and paper, and soon virtual people suffer tooth pain in their virtual universe. Both pencil and ballpoint pen should work equally well. Due to the higher friction, the temperature of the cosmic background radiation in a universe created with pencil might be a bit higher though!
One possible source of consciousness could be the surface of a sea. In theory, waves and ripples of a sea could describe a DNA simulation in which conscious observers marvel at the wonderful properties of their universe (like those caused by heavy rain during the annual monsoon season).
The whole universe, with its planets and stars, could be the source of a consciousness.
The only conclusion one can draw from these examples is that whatever it is that we call consciousness and pain must be substrate-independent. The source of pain cannot be any physical attribute, such as mass, electric charge, or a specific elementary particle such as the photon, because it is always possible to find an implementation where such a property plays no role.
As noted above, computer software is nothing but a sequence of bits, trivial on/off switches, and even a thermostat could step through any such procedure.
The only common factor between different implementations appears to be the logic they are running. And logic is nothing but data - information.
If we can utilize electrons or even macroscopic components to create virtual universes that replicate the biological and structural motifs of our own—such as DNA—we reach a logical crossroads. Because these simulated observers are functional duplicates of their "real-world" counterparts, they will inevitably begin exploring their own substrate.
They will discover the principles of computation and, eventually, construct their own Turing Machines. The procedure these virtual entities use to simulate their own existence is identical to the procedure we used to create them. We can express this transition mathematically. If our world is \(r_n\) and the simulated world is \(r_{n+1}\), the mapping is: \[r_{n+1} = f_{\text{DNA}}(r_n)\]
This nested stack of simulations continues as long as the host level contains sufficient computational density to support the sub-level. Because we know the deterministic nature of the Turing Machine we used to initiate the first step, we must admit that the relationship is strictly recursive: \[r_{n+k} = f_{\text{DNA}}(r_{n+k-1})\]
In a recursive formula, it is notoriously difficult to argue for "ontological seniority." There is no parameter within the \(f_{\text{DNA}}\) function that distinguishes a "real" world from a "virtual" one; the operator remains invariant across all levels of the recursion. The logical conclusion is that our "base" reality is as computationally contingent as the simulations we produce. To an observer inside the recursion, the substrate is always invisible; we perceive our level as "solid" simply because we are defined by the same logic that governs it.
There is, however, a significant physical constraint to this hypothesis: the Information Bottleneck. Simulating the human genome, let alone the consciousness of eight billion humans and the staggering complexity of \(10^{22}\) stars in the observable universe, requires an astronomical amount of information.
Those sub-simulations would soon run out of information.
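The recursion and its bottleneck can be sketched in a few lines of C++ (everything here is a placeholder: the "world" is reduced to its information budget, and the factor 1000 is an arbitrary per-level overhead). The same function is applied at every level, and the nesting ends only when the budget runs out.

#include <iostream>

// A stand-in for f_DNA: the same mapping is applied at every level,
// but each hosted world can only use a fraction of its host's resources.
double f_dna(double host_bits) {
    return host_bits / 1000.0;   // arbitrary per-level simulation overhead
}

int main() {
    double bits = 1e30;          // made-up information budget of the "base" world
    int level = 0;
    while (bits >= 1.0) {        // stop when not even one bit is left
        std::cout << "level r_" << level << ": " << bits << " bits available\n";
        bits = f_dna(bits);      // r_{n+1} = f_DNA(r_n), identical at every level
        ++level;
    }
    std::cout << "nesting ends after " << level << " levels\n";
}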
Despite their virtual nature, these abstract, virtual simulated universes might possess very "real" emergent properties. Pain, joy, and consciousness could emerge in them.
Science has mapped much of the terrain of human emotion, yet debate remains about its precise structure. While sophisticated systems exist for categorizing what people feel, there is no single universally accepted “periodic table of emotions.” Researchers instead use different frameworks depending on whether they focus on brain chemistry, facial expressions, evolutionary function, or cognitive interpretation.
Several major approaches currently dominate the scientific study of human emotional experience.
In the 1970s, psychologist Paul Ekman argued that humans possess a small set of biologically universal emotions that are expressed through recognizable facial patterns across cultures. These commonly cited emotions include:
Happiness
Sadness
Fear
Disgust
Anger
Surprise
Within this framework, emotions are interpreted as evolutionary survival mechanisms. Fear discourages risky encounters with predators, while disgust helps avoid contaminated food.
Robert Plutchik expanded this perspective by proposing a “Wheel of Emotions,” in which primary emotions can combine to produce more complex emotional states, analogous to the blending of colors.
| Basic Combination | Resulting Emotion |
| Joy + Trust | Love |
| Anger + Disgust | Contempt |
| Fear + Surprise | Awe |
In this model, emotions also vary in intensity and may exist in opposite pairs.
Many contemporary researchers prefer dimensional approaches, such as the Circumplex Model of affect. Instead of assigning discrete labels to emotions, experiences are positioned along two primary axes:
Valence: the degree to which an experience is pleasant or unpleasant.
Arousal: the level of physiological activation or energy associated with the state.
For example, serenity and excitement are both positively valenced states, but serenity corresponds to low arousal while excitement corresponds to high arousal.
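In software terms, the dimensional picture is just a coordinate pair. A minimal sketch follows (the numeric coordinates are invented for illustration; the model itself does not prescribe them):

#include <iostream>
#include <string>
#include <vector>

// An affective state as a point in the valence-arousal plane.
// Valence: unpleasant (-1) ... pleasant (+1); arousal: calm (0) ... activated (1).
struct Affect {
    std::string label;
    double valence;
    double arousal;
};

int main() {
    std::vector<Affect> states = {
        {"serenity",   +0.7, 0.2},   // pleasant, low energy
        {"excitement", +0.7, 0.9},   // pleasant, high energy
        {"boredom",    -0.4, 0.1},
        {"fear",       -0.8, 0.9},
    };
    for (const auto& s : states)
        std::cout << s.label << ": valence " << s.valence
                  << ", arousal " << s.arousal << "\n";
}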
Scientific discussions often distinguish between physical sensation and emotional state, although these systems interact closely.
Nociception (Physical Pain): neural processes that signal potential tissue damage.
Affective Pain (Emotional Pain): emotional distress such as social rejection, which engages overlapping neural circuits involved in physical pain processing.
Complex Emotions: experiences such as nostalgia, schadenfreude, or ennui require higher-level cognition and social interpretation, making them difficult to isolate through simple neural measurements.
A central debate in affective science concerns whether emotions are biologically hard-wired or cognitively constructed.
Universalist theories propose that emotions arise from dedicated neural circuits that evolved for survival.
Constructionist theories argue that emotions are conceptual interpretations that cultures impose on underlying bodily sensations.
An open question follows naturally: if a culture lacked a linguistic category for a particular emotion, would individuals still experience it in the same form?
In previous chapters we introduced the principle of substrate independence. If a simulation of biological processes is logically complete, then the simulated observer should behave in every measurable way as if it possesses consciousness.
This leads directly to what philosophers call the explanatory gap. Physical theories describe how matter behaves—how particles interact, how energy moves, and how neural circuits process signals. Yet these descriptions remain silent about qualia, the subjective character of experience.
Modern physical theories, from general relativity to the Standard Model of particle physics, contain no variables corresponding to sensations such as pain or love. From a purely functional perspective, pain may be modeled as a signal indicating a threat to system integrity. However, for the observer undergoing the experience, pain is not merely a signal; it is an intensely felt state.
Suppose the state of a simulated human brain is represented as a large, time-evolving state matrix \(S_t\) encoding the activity of all neural elements.
The system evolves according to a transition rule
\[S_{t+1} = \Phi(S_t, I_t),\]
where \(S_t\) represents the system state and \(I_t\) represents input from the environment.
Within this framework we can track every state transition and every simulated neural signal. Yet the formalism itself contains no explanation for why a particular configuration of information should correspond to a subjective experience.
If the simulation is paused, the experiential process halts even though the informational structure remains stored in memory. If the simulation is run at a drastically slower computational rate, the internal observer may still experience a continuous flow of subjective time. The mathematical description therefore captures functional dynamics but does not explain why those dynamics are accompanied by phenomenology.
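A minimal sketch of the point about pausing and slowing down, assuming a toy transition rule \(\Phi\) and a fixed input stream: the sequence of internal states is identical whether the loop runs flat out, sleeps between steps, or is stopped and resumed; only the external wall-clock time changes.

#include <iostream>
#include <vector>

// A toy transition rule Phi: the next state depends only on the current
// state and the current input, never on how fast the host machine runs.
long long phi(long long state, long long input) {
    return state * 31 + input;            // arbitrary deterministic update
}

std::vector<long long> run(long long s0, const std::vector<long long>& inputs) {
    std::vector<long long> trajectory{s0};
    for (long long i : inputs) trajectory.push_back(phi(trajectory.back(), i));
    return trajectory;
}

int main() {
    std::vector<long long> inputs = {3, 1, 4, 1, 5, 9, 2, 6};
    auto fast = run(7, inputs);

    // "Slow" run: same rule, same inputs; imagine an arbitrarily long pause
    // between steps (omitted here) - the internal trajectory cannot notice it.
    auto slow = run(7, inputs);

    std::cout << (fast == slow ? "identical state trajectories\n"
                               : "trajectories differ\n");
}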
Science currently lacks a clear bridge law that connects informational configurations with specific qualitative experiences. For example, physics can describe the wavelength of light associated with the color red, yet the equations themselves do not derive the felt quality of “redness.”
A further question arises: is the emotional spectrum observed in humans the complete set of possible conscious states, or merely the subset produced by biological evolution?
Evolution operates as a constrained engineer. It implements traits sufficient for survival within a particular environment—approximately \(1g\) gravity, an oxygen-rich atmosphere, and social cooperation within small groups.
It is therefore plausible that other forms of conscious architecture could produce emotional states not present in the human repertoire. One might imagine hypothetical “orphan emotions”—states permitted by the informational structure of cognition but never realized in biological evolution.
Evolutionary systems frequently become trapped in local optima: configurations that function adequately but do not represent the global maximum of possible adaptation.
If conscious experience depends on underlying information architecture, then human emotions may represent only the default configuration produced by our evolutionary history.
Hypothetical alternative emotional states might exist!
Perhaps these undiscovered human experiences could explain the behavior of my wife; she seems to be running all possible procedures simultaneously!
Finally, one may ask whether a threshold of informational complexity is required for genuine subjective experience.
Even simple devices can implement fragments of computational processes. Yet it remains unclear whether such trivial implementations possess any experiential dimension.
As humanity builds increasingly sophisticated computational systems, we face a profound uncertainty. We are constructing machines capable of complex information processing while lacking a theory that determines when such processing might give rise to subjective experience.
We are building computers and writing software while remaining entirely blind to the light—or the fire—we may be igniting within them.
(In the film Last Action Hero (McTiernan 1993), characters transition between the real world and a cinematic one. I never liked the movie because I couldn’t take it seriously. Silly me!)
Evolution seems to explain why we humans believe in God. However, many find it difficult to believe in Heaven and Hell because so much of the concept defies common sense. How could God expect anyone with a rational mind to believe in something as crazy as Heaven and Hell—places that one can neither see nor touch? How could such spiritual places possibly exist?
As much as the theory of evolution and other secular conclusions might seem to disprove God, the fact that a human is an implementation of an axiomatic system may, surprisingly, save the concept of God.
The simulated universes we create with our computers are precisely what one might expect Heaven and Hell to be. They are worlds that cannot be seen or touched physically, but which would be very real for the "souls" (virtual humans) living within them. What previously appeared to conflict with science is suddenly consistent with it.
Heaven and Hell can exist because the Church-Turing thesis holds, DNA is the blueprint of life, and we can reject the need for metaphysical or supernatural forces to explain its operation. According to the Bible, "God created man in his own image." This could actually make sense: perhaps even God did not know a better way to implement consciousness. He worked out his own genome and created simulated copies of himself in a computer—us.
What might God say about us running DNA simulations? What if our simulation suddenly crashed due to a "division by zero" exception, destroying the entire simulated universe? What would happen to those souls?
Would God accept simulated souls into His heaven? If so, Heaven would be filled with all sorts of souls, both "original" and simulated. If not, God would be discriminating against souls based on their origin—even if, as axiomatic systems, the two souls were identical.
In the future, computers will likely be powerful enough to run these DNA simulations on personal home computers. These low-budget, buggy simulations might generate "crippled" virtual humans. Irresponsible companies might find it cheaper to use loads of simulated people to test toxic drugs.
This would create an enormous amount of new suffering in the universe. Virtual sin would be inevitable. Would God hold the people running these simulations responsible for this extra suffering? Should people making "suffering software" be punished? Should we put them in jail? Would God send them to Hell?
According to quantum mechanics, there is a fundamental uncertainty built into the universe. Furthermore, we cannot solve all equations with total precision. In practice, floating-point accuracy and available hardware resources limit how accurately we can simulate physical processes. Correspondingly, virtual humans would not be exact copies of their real-world counterparts.
Would this save God from the "trouble" of dealing with virtual souls?
Rounding errors might explain a few missing teeth, or perhaps cellulite, but it is difficult to see how the deep nature of consciousness could lurk behind simple rounding errors. Nature itself suffers from precision issues in the form of Heisenberg’s Uncertainty Principle. How much freedom does the macroworld, built on top of a random microworld, actually have?
Consider granular material flowing in a gravitational field. Each grain falls without individual predictability. If one compared two such piles, the microstructure would be totally different; not a single grain would match in size or position. Yet, the piles redistribute themselves in a way that is essentially predictable.
Or consider identical twins. Despite radiation and other disturbances affecting their genomes from day one, they end up looking alike, even if they grow up in different cultures. The fundamental uncertainty in nature does not prevent us from building Turing Machines with precise, deterministic operations. Fully deterministic systems can run on top of genuine indeterminism without a trace of randomness.
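One standard way deterministic logic is built on top of noisy parts is redundancy with majority voting. A minimal C++ sketch (the noise level and trial count are arbitrary): each logical bit is stored three times, and random single flips are voted away.

#include <iostream>
#include <random>

// Majority vote over three noisy copies of one logical bit.
int majority(int a, int b, int c) { return (a + b + c) >= 2; }

int main() {
    std::mt19937 rng(12345);                    // arbitrary seed
    std::bernoulli_distribution flip(0.05);     // 5% chance that a copy is corrupted

    int errors = 0;
    const int trials = 100000;
    for (int t = 0; t < trials; ++t) {
        int logical = t % 2;                    // the intended bit value
        // Store three physical copies, each of which may flip at random.
        int c1 = flip(rng) ? 1 - logical : logical;
        int c2 = flip(rng) ? 1 - logical : logical;
        int c3 = flip(rng) ? 1 - logical : logical;
        if (majority(c1, c2, c3) != logical) ++errors;
    }
    // With 5% per-copy noise the voted bit fails only about 0.7% of the time;
    // stacking more layers of redundancy drives the failure rate toward zero.
    std::cout << "logical bit errors: " << errors << " / " << trials << "\n";
}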
So, it is inevitable that, sooner or later, simulated fellows will be created by reckless humans playing God. Those simulated humans might hit the "hard problem of consciousness" and conclude they must have a soul because bits and bytes cannot seemingly explain their experience. If floating-point errors result in crippled simulations living in terrible pain, there would be no God to answer their prayers. It was just a crappy computer they were living in—a computer that could break down at any second.
In addition to creating simulated universes, perhaps a programmer should also implement the concept of Heaven. One could tell these simulated fellows that their hardship is temporary because the next software version (Heaven v1.1) is ready to run. This would bring hope to the hearts of social, friendly simulated beings living in unreliable, low-cost hardware.
But why implement Hell? Why not just let simulated persons live in a state of constant joy? Why on earth did God create Heaven and Hell in the first place?
Imagine a "thought-experimental" God with a group of newly created humans. God wants them to behave nicely. He asks them not to eat the "bad apples." They eat them anyway. So, this God sends his son, Jesus, to Earth to die for them.
This behavior is not necessarily what one would expect from an omnipotent deity. Why send a son to spread word of punishment? Why not just have a face-to-face chat with the troublemakers? Was God afraid that these fellows could actually hurt Him? Maybe God is also sensitive to pain. To avoid the risk of suffering Himself, He sends a proxy. This does not sound like the behavior of an exalted, moral creature.
Or, was Jesus a "software patch" for a poorly tested first release? Perhaps God worried that, with our extreme individualism, we wouldn’t survive the next millennium. He sent a message that we should put individual needs aside and be kind to one another; our social lifestyle is the key to our survival.
In Genesis, God told man to "subdue the earth." In this, humans have succeeded. We have cleared forests, polluted oceans, and established dominion to the point where many species are extinct.
Why not create humans to properly believe in God from the start? If things went "accidentally" wrong for an omnipotent creature, it suggests he wasn’t truly omnipotent—or that He is bound by the laws of logic. This implies that God, too, might be an implementation of an axiomatic system.
God might have a reason not to create "sin-free" humans. If we did exactly as ordered, we would be nothing but "dumb" machines following pre-programmed logic. God surely aimed higher than creating simple Turing machines.
To create genuinely intelligent souls, perhaps God had no choice but to build them on the concept of free will. Maybe free will is essential for any conscious system. Any logic without free will remains as "stupid" as a database program—incapable of choosing otherwise.
Scientific studies suggest the brain prepares for a decision before we are consciously aware of it. Some say this proves free will is an illusion. However, the brain must obey the laws of physics; it needs processing time to retrieve memory and reach a conclusion. Is free will with a few milliseconds of subconscious "pre-processing" less free than free will without it? These studies likely only prove that the brain takes time to run its procedures.
If God had no choice but to give humans free will, he also made them responsible for their actions. Humans used that free will to "eat the wrong apples" and commit sins. God then sent His son to demonstrate the correct way of living and warn of the forthcoming punishment: Hell.
What does God want us to become? It seems He wants us to be social and friendly. He isn’t developing a "super-warrior" race. Perhaps God was simply lonely. He needed good company—creatures whose intelligence matches His own, who are amusing to chat with, and who choose to be friendly rather than being programmed to be so. He couldn’t just "write" conscious friends; He had to let them evolve.
This could explain the motivation for Heaven. Simulated humans would use their free will, some for good, some for bad, which would inevitably cause suffering.
In case these simulated beings ended up wondering why they had to suffer so much, God might not dare tell them the truth: that He, as the only God, was lonely and needed good company. Those poor souls had to evolve in the simulation so that good enough company could emerge for Him. Not even God knew how to write conscious software!
By good company, God means creatures whose intelligence matches His own: creatures amusing enough to chat with, yet friendly enough not to end up torturing God as soon as He sets them free.
Creatures that choose to be friendly rather than being programmed friendly.
The concept of Heaven and Hell also loosely matches a typical IT software project with a limited budget and resources.
To keep the simulated humans in order, God tells them (a lie) that the suffering was necessary for reasons they would never be able to understand.
The truth might be that it’s a mess in "Heaven." Underpaid, unmotivated angels wrote buggy software, and the project is behind schedule due to a "cosmological recession." The plan: release the software now, run daily backups, and eventually migrate everyone to the fully tested "Heaven" version. All systems would then be restored from their backups, and the software would finally work properly. Damn the marketing department, always promising too much too soon.
The trouble with this theory is that it wouldn’t explain the purpose of Hell. Why can’t God simply acquire those good souls, and erase the bad ones?
Why do bad souls have to suffer eternal pain? Isn’t that too big a penalty for a poor mortal human who just happened to take a few too many beers because his father was an alcoholic?
There is a law of physics saying that information cannot be lost or created, only transformed. Maybe souls, once created, are something that cannot be disposed of. Once the information is arranged to describe a conscious soul, it remains, forever.
Bad souls would be hazardous waste.
If DNA is the blueprint of life and Quantum Theory applies to all particles in our body, then consciousness is just mathematics.
Not even things like pain, let alone happier feelings, need God to be explained.
Furthermore, in order to believe in God, we don’t really have a choice but to believe in Heaven too. This is because we are rather intelligent creatures. We do understand that not even God can keep us alive much longer than some ten decades or so. We know we will eventually die. People around us will suffer and die. Bad things seem to happen regardless of how much one prayed. What would such a God be worth? Where was He when we needed Him the most? No matter how strong we are in our faith, some of us will surely end up drawing the conclusion that such a God is not worth the prayer.
And this is the problem that Heaven solves. Even if we evidently die and suffer, that is all right, because it is only a temporary issue. God will wake us up later in Heaven, and make it all up to us.
By applying software design tools and logical reasoning—infused with a measure of creativity and imagination—one can construct analogies like those presented above.
However, the explanation derived earlier remains more compelling: that "God" is the idealized image of a leader for social animals. In this view, there is no literal deity; rather, it is a psychological archetype developed within us through evolution.
But what if I was wrong? What if I had made a mistake while studying the operation of DNA?
What if some strange event, such as a 50 Hz buzzing noise, were to show up and prevent people from writing and running DNA simulations that would create virtual souls?
If someone were ever stupid enough to read this book and abandon their faith, then God would surely hold me, the author, responsible for the damage. He would surely send me straight to Hell, and then my entire body would have to suffer, forever.
Even a couple of days of suffering in one tooth was too much!
According to Roger Penrose (Penrose 1989), the operation of the human brain might not be an axiomatic system. Gödel’s theorem states that within complex axiomatic systems, there exist true statements that cannot be proven true or false. Since computers and Turing Machines are equivalent to axiomatic systems, some answers cannot be obtained algorithmically. Penrose speculates that the human brain can reach such answers, indicating that humans are not merely computational machines.
Let’s start with the following assumptions:
DNA encodes the blueprint of life.
DNA consists of ordinary matter that obeys physical laws.
It should be noted that these are observation-based assumptions and, as such, cannot be definitively proven. However, the observational evidence supporting them is strong.
The first assumption posits that the human genome encodes all the information required to construct a conscious, pain-sensitive human being within this universe and its observed laws of physics.
The second assumption states that DNA is composed solely of ordinary physical matter. It is made of the same matter as everything else and is governed by the same physical laws, with no non-physical or supernatural influences affecting its operation.
It then follows that humans, contrary to what Roger Penrose speculates, must be implementations of axiomatic systems. Consequently, consciousness can be described using the principles of mathematics.
Let us make a third assumption:
Church–Turing Thesis holds.
It states that all physical processes can, in principle, be simulated by a device known as a Turing machine. Modern computers are essentially Turing machines. The Church–Turing Thesis has never been formally proven, but if it were false, we would have good reason to worry about keeping our money in bank accounts [my wife: “what money?”].
From these three assumptions it follows that humans can be simulated by a Turing machine—or, in its modern incarnation, a computer.
Suppose we digitize a human genome and run it on a computer simulating a universe governed by the same laws of physics as our own. As the simulation executes, the DNA evolves into a conscious, pain-sensitive observer. The simulated human experiences an expanding universe where time flows from past to future, and tooth pain is real.
All software programs consist of two kinds of information: code (\(c\)) and data (\(d\)). In a DNA simulation, the code would describe the laws of physics, such as quantum mechanics and gravity. The data would include the digitized DNA and a sufficiently large section of the surrounding universe. Let us assume the simulation software consists of roughly equal amounts of code and data.
\[1.0 = \frac{\operatorname{sizeof}(\text{cccccccccccccccccccccccc})} {\operatorname{sizeof}(\text{dddddddddddddddddddddddd})}\]
A well-known technique for optimizing slow, CPU-intensive code is to use lookup tables, which replace computation with precomputed data. For example, one can replace all sqrt() computations:
result = sqrt(arg);
with precomputed values:
result = sqrt_lookuptable[arg];
Empirically, software programs yield identical results regardless of how the result was computed. \(2+2=4\), and it does not matter if we replace all \(2+2\) equations in our code with precomputed value of \(4\). This optimization therefore cannot affect the simulation’s output, nor the observer’s experience of time or pain. Delaying or accelerating computation affects only the external runtime, not the internal state transitions of the simulated system.
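The claim is easy to verify directly. The following is a minimal Python sketch (not the book’s code; the bounded argument range and the function names are illustrative assumptions) showing that the lookup-table version produces bit-identical results to the computed version:

import math

ARG_RANGE = 1000  # illustrative bound on the integer argument range

def sqrt_code(arg):
    # "Code" version: compute the result on demand.
    return math.sqrt(arg)

sqrt_lookuptable = [math.sqrt(arg) for arg in range(ARG_RANGE)]

def sqrt_data(arg):
    # "Data" version: replace computation with precomputed data.
    return sqrt_lookuptable[arg]

assert all(sqrt_code(a) == sqrt_data(a) for a in range(ARG_RANGE))

Only the balance between computation and precomputed data changes; the observable results do not.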
However, this optimization has the effect of reducing the amount of code and increasing the amount of data in our DNA simulation:
\[0.9 = \frac{\operatorname{sizeof}(\text{cccccccccccccccccccccccc})} {\operatorname{sizeof}(\text{ddddddddddddddddddddddddddd})}\]
Now, imagine we gradually optimize the DNA simulation by replacing algorithmic components with lookup tables. As a consequence, the number of CPU cycles required to run the simulation decreases. Suppose we take this optimization to the extreme: all computation is replaced by a static dataset encoding the entire execution trace.
\[0.0 = \frac{\operatorname{sizeof}(\text{})} {\operatorname{sizeof}(\text{dddddddddddddddddddddddd})}\]
As a result, there is nothing left to run on a computer. What remains is just a massive hard drive containing all results precomputed.
Does the simulated human still experience time and pain?
The answer, within the axiomatic model, is yes. Answering otherwise would imply the existence of a new physical constant in our books of physics: a minimum code-to-data ratio required for consciousness to emerge.
Temporal structure and pain, therefore, must emerge from the internal relationships among states, not from the external runtime. From the internal perspective of the simulated observer, time still flows from past to future and pain is real.
As a conclusion, a static dataset can fully specify a universe containing conscious observers with subjective time. Consequently, time and pain must be properties of simulated observers, not fundamental properties of the universe.
When a single DNA simulation—let’s call her Alice—runs on a computer, the execution trace is easy to study. Every CPU instruction drives the computer (and Alice) to a new state. From Alice’s perspective, time flows forward, and the effect of each CPU cycle can be mapped to a simulated particle in her world.
\[| \text{Alice}|\text{Alice}|\text{Alice}|\text{Alice}|\dots|\]
However, consider a system running multiple DNA simulations concurrently—say, Alice and Bob—where thread scheduling is governed by quantum randomness. The resulting execution trace interleaves their simulated lives in segments of unpredictable length.
\[|\text{AliceAliceAli}|\text{BobBobB}|\text{AliceA}|\text{BobBo}|\text{Alic}|\text{BobB}\dots|\]
Since both single- and multi-threaded computers are computationally equivalent, each observer must experience a coherent, continuous timeline.
Now, let’s gradually reduce the number of CPU cycles per time slice until each thread is limited to a single CPU cycle before switching. Let’s also add more DNA simulations, like Robert, John, and Jill. As the number of concurrent simulations increases, the execution trace becomes increasingly fragmented. Additionally, modern multi-threaded systems often include many extra threads for operating system tasks, such as listening for network requests. In the limit of infinitely many perfectly interleaved simulations, the execution trace approaches pure white noise.
\[|\text{A}| \text{B}| \text{OS}| \text{R}| \text{J}| \text{OS}| \text{l}| \text{Ji}| \text{i}| \text{ce}| \text{b}| \text{ob}| \text{n}| \text{l}| \dots|\]
Is Alice still conscious?
The answer, again within the axiomatic model, is yes. Empirically, multi-threaded computers function reliably regardless of how few CPU cycles are allocated per thread switch or how narrow the CPU’s internal registers are. From each observer’s internal perspective, time still flows from past to future.
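To illustrate, here is a minimal Python toy model (my own sketch, not a DNA simulation; the step rule and seeds are arbitrary assumptions). Two deterministic “observers” are advanced either back to back or in randomly interleaved single steps, and their recorded internal state histories come out identical either way:

import random

def step(state):
    # Placeholder update rule standing in for one simulated "CPU cycle".
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run_sequential(seeds, n_steps):
    # Run each observer to completion, one after the other.
    traces = {}
    for name, s in seeds.items():
        trace = [s]
        for _ in range(n_steps):
            s = step(s)
            trace.append(s)
        traces[name] = trace
    return traces

def run_interleaved(seeds, n_steps):
    # A random scheduler grants one "CPU cycle" at a time to a runnable observer.
    states = dict(seeds)
    traces = {name: [s] for name, s in seeds.items()}
    remaining = {name: n_steps for name in seeds}
    while any(remaining.values()):
        name = random.choice([n for n, r in remaining.items() if r > 0])
        states[name] = step(states[name])
        traces[name].append(states[name])
        remaining[name] -= 1
    return traces

seeds = {"Alice": 1, "Bob": 2}
assert run_sequential(seeds, 1000) == run_interleaved(seeds, 1000)

From the inside, neither trace reveals how the scheduler sliced the computation.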
This raises an obvious question: how does Alice know which sequences belong to her and which do not, in order to remain conscious?
Conclusion: if there is any way to interpret static data as a "conscious observer," then that is exactly what happens: a conscious observer emerges. If the initial assumptions hold, consciousness, pain, and subjective time can emerge from static data that resembles pure static noise.
If we can create such a simulation with a computer, then it might seem obvious that the virtual universe only comes into existence when the simulation is actively executed: the computer must be powered on and running the simulation code for the simulated world to exist. So if the simulation computer is never started, no simulated virtual world emerges. No consciousness, and definitely no pain.
However, pure static data (such as the full execution trace of a simulation) has no notion of time. One cannot measure the time it takes from the “execution” to create the virtual world. One cannot argue that one created the other. What appears as static information, e.g., execution trace of a computer to us external observers, appears as an expanding universe and pain to the simulated human observing the data from inside. This relationship is representational, not causal. The computer and the simulated universe are two sides of the same coin: distinct arrangements of the same information.
And there is more than just two sides on the coin. Consider an execution trace of \(N\) bits. These bits can be arranged in \(2^N\) ways. Apparently, most of them describe chaotic universes with no conscious observers. One, however, describes our identical simulated twins—Alice, Bob, and others. And one describes something we call a computer, which is simulating a computationally heavy procedure—DNA.
Philosopher David Chalmers (Chalmers 1995) proposed the concept of a philosophical zombie: a hypothetical being that is physically and behaviorally identical to a conscious human but lacks any subjective experience. Such a zombie would respond to pain stimuli in the exact same way a conscious person does—it would cry out, flinch, and try to avoid the source of pain—but it would not feel anything.
According to Chalmers, even with a perfect simulation, we would only be observing the physical processes. We still wouldn’t know if there’s a "ghost in the machine"—a feeling of what it’s like to be that simulated being. A computer could be programmed to perfectly mimic the behavior of a person feeling pain without actually having the experience itself.
Let’s make a fourth assumption:
Pain has measurable effects.
This assumption brings the concept of pain from philosophy to physics. Just like gravity, pain is assumed to have observable consequences that are physically detectable and measurable. Formally:
\[\text{Human} \neq \text{Human + Pain}\]
If a system’s behavior is entirely determined by its physical components and their interactions, an axiomatic copy should exhibit identical behavior. If their behavior is identical, then their internal states, including consciousness and pain, must also be identical. \(2+2\) holds for both bananas and apples. If the original’s behavior is driven by the experience of pain, the simulation must have that experience too. Otherwise there would be a contradiction.
Correspondingly, the P-Zombie is an impossibility. If Axioms 1–4 hold, the P-Zombie premise collapses:
Axioms 2 and 3 state the system is an axiomatic, computable entity.
Axiom 4 states that the experience of pain (\(P\)) has a measurable, physical effect.
While full-scale DNA simulations are currently beyond our computational reach, this limitation is secondary to the theoretical framework. Much like our inability to compute astronomically large prime numbers, the current technical ceiling does not invalidate the underlying logic; what matters is that such simulations are possible in principle.
As soon as computational technology allows for high-fidelity biological modeling, we will be able to test whether a simulated human truly experiences subjective states, such as pain. This provides a clear path for empirical validation. We can simulate a statistically significant sample of human subjects. By comparing the simulated subjects’ responses to those of their real-world counterparts, we can measure the difference. If the simulated subjects exhibit the necessary biological reactions but lack the qualitative experience (qualia) of pain, the arguments presented here collapse. This would indicate that at least one of the four initial axioms is invalid.
If the four assumptions hold, then consciousness, time, and even pain can emerge from the structure of information alone. The universe itself is made of abstract information. Observers, particles, and the flow of time are just patterns woven into this tapestry, emerging wherever conditions allow.
Time appears to be a subjective property of us intelligent, conscious observers, an illusion created in our minds rather than a fundamental property of the universe. The universe is informational and abstract by nature, and everything in us observers can emerge even from a static or noisy informational substrate.
Where did all the matter in the universe come from? The answer appears to be nowhere. Matter exists no more than numbers, multiplications, or square roots exist. The nature of everything is fundamentally abstract and virtual.
Physics has a fundamental problem: we are trying to describe a system from the inside.
At least, it certainly appears that way. We live in geometric space, which appears vast and in which we are small. In the language of set theory, we are the small subset \(O\) attempting to map the superset \(U\).
Because we are embedded, we cannot help but see through a human lens. We experience "time" as a flow and "matter" as solid objects, but these might simply be illusion of our perspective. This makes it incredibly difficult to distinguish between a fundamental law of nature and a mere byproduct of our vantage point. We are not external spectators; we are looking at the machine while our own bodies are being ground between its gears.
The view that we human observers are subsets of the universe feels “right.” Intuitively, we want to believe that the majestic universe would exist exactly as it is even if we never appeared. It seems arrogant to suggest that our pitifully thin layer of brain matter could play any significant role in the fundamental existence of the cosmos.
But this is the trap: even if the universe exists independently of us, our description of it—the physics we write down—is entirely filtered through our internal perspective. If we are merely pieces of a vast puzzle, can we ever truly hope to assemble the entire picture?
By trying to write "God-eye" equations, we have ignored the most important variable: the fact that the person doing the math is part of the equation.
We don’t truly know what time is—perhaps because we are submerged within it. But while the essence of time remains elusive, we can simulate it.
If we build a digital universe that duplicates the essential properties of our own, then whatever "mystery" we don’t understand must be present within the machine running the code. The problem of the "Unknown" is suddenly reduced to the "Known."
We may not understand the cosmos, but we understand software completely. By moving the mystery from the vacuum of space into a block of silicon, we turn a metaphysical puzzle into a debuggable program.
Let us test the idea of treating the simulated system and the simulator as two sides of the same coin: a computer running a simulation and the simulated universe viewed as two perspectives of the same underlying information.
What would be the most bizarre object in the universe—one we still do not fully understand? Maybe a black hole singularity!
According to General Relativity, all the matter in a black hole collapses to a point of zero size and infinite density. An entire star—millions or even billions of times more massive than the Earth—can, in principle, be compressed into a region of vanishing volume. This description strains physical intuition.
We therefore construct a black hole simulation and treat the simulation and the computer executing it as equivalent descriptions of the same informational structure. By doing so, we hope to gain new insight into the pathological points of spacetime that appear in classical geometry.
Computers are state machines. Each executed CPU instruction drives the system to a new state. In software engineering, an execution trace is the chronological record of all such states. If source code is a map, the execution trace is the complete GPS log of every step taken.
We begin with a massive, spherical dust cloud and allow it to collapse under its own gravity to form a black hole. In practice, we cannot run such a simulation to its logical conclusion—the formation of a physical singularity. As the collapse nears its final state, the software inevitably crashes.
This failure is driven by two factors: the fundamental breakdown of the Einstein field equations and the inherent limitations of our digital tools. Long before the singularity is reached, the system is overwhelmed by division-by-zero exceptions and mathematical infinities that exceed the numerical precision of our floating-point representations. No matter how detailed or physically accurate a simulation we attempt to run, the singularity itself remains obscured by these divergences.
General Relativity predicts singularities, but it fails to describe them.
However, since General Relativity is fundamentally a theory of geometry—and solutions to the Einstein equations are themselves geometries—the most essential nature of the singularity must be geometric.
Each dust particle and every point of the spacetime fabric in the simulation can be mapped to a unique sequence of memory bits. Together, these elements form a long, continuous bitstring representing a temporal slice of the spatial geometry.
Initially, we observe that the bitstring representing the dust cloud is highly random:
00100101011110101010101001010100101010101010101001001001110101010010100101010...
...
As the simulation evolves and total entropy increases, the entropy of the bitstring encoding the space decreases over time.
...
0001000010000010010001000000001000100000000000001000010000000010000000100010000...
0001001000000000010000000000000010000010000000001000000000000000000100000000001...
0000001000000000000000000000001000000000000000000000100001000000000000000000100...
[crash]
Even if numerical instabilities prevent the simulation from reaching the final singularity, the result remains clear; we can draw the necessary conclusion by extrapolating the execution trace.
The black hole singularity corresponds to a zero-entropy state.
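For concreteness, the quantity tracked above can be read as the per-bit Shannon entropy of the trace. The following Python sketch (an assumption about the measure, using illustrative substrings of the bitstrings shown earlier) makes the trend explicit:

from math import log2

def bit_entropy(bits):
    # Shannon entropy per symbol of a binary string, in bits.
    p1 = bits.count("1") / len(bits)
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return -(p0 * log2(p0) + p1 * log2(p1))

print(bit_entropy("00100101011110101010101001010100"))  # near-maximal: the random dust cloud
print(bit_entropy("00010000100000100100010000000010"))  # well below 1 bit per symbol
print(bit_entropy("00000000000000000000000000000000"))  # exactly 0: the singularity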
In classical GR, curvature diverges as geodesics converge toward the singularity. But this may reflect the limits of the geometric description rather than a physical pathology, analogous to measuring surface derivatives at the north pole of a sphere.
Remarkably, this zero-entropy conclusion is invariant under coordinate choice, representation, or dimensionality. Mapping a zero-entropy bitstring to geometry inevitably yields the same minimal object: a point.
Thus, singularities are not mysterious infinities but the simplest possible geometric configurations: states of informational exhaustion. They represent the ultimate compression of all degrees of freedom in a region of spacetime.
If entropy collapse corresponds to geometric compression, then increasing entropy should correspond to geometric unfolding. Collapse and expansion are simply two directions within the same informational configuration space. Instead of simulating black hole collapse and observing how the bitstring (execution trace) entropy approaches zero, one starts from a zero-entropy execution trace and mutates it to introduce entropy. The corresponding 3D space should then unfold from a singularity into some sort of virtual universe.
In the black hole simulation we treated all dust particles as infinitely small points. However, in this simulation we pay attention not only to the global spacetime but also to emergent microstructures.
We initialize an execution trace at zero Shannon entropy to represent the initial singularity. Then we mutate the execution trace with random bit-flips, which increases its expected Shannon entropy, flip by flip.
To visualize this process, we introduce a decoding map that assigns subsets of bits to spatial coordinates, producing a discrete spacetime fabric. On this induced geometry, we apply simple structural filters:
Elementary particles: Pairs of spatial points whose separation is below a threshold \(\varepsilon\).
Atoms: Triplets of points forming tight, approximately equilateral configurations.
Molecules: Clusters of atoms whose geometric centers lie within a small separation threshold.
An alternative definition of emergent particles is given by recursive bit-pattern detection. Here, the execution trace itself is treated as a one-dimensional candidate space of particles, with no explicit geometric mapping required. Elementary particles are defined as short, frequently occurring substrings. Composite particles are formed recursively by concatenating previously defined particles: a substring counts as a composite particle if it appears in the trace and its sub-patterns are recognized particles.
This recursive pattern-matching approach captures the emergence of particles purely from informational redundancy, without reliance on an explicit geometric embedding. The hierarchy is constructed bottom-up: from frequent substrings (elementary particles), to composite concatenations (atoms), to repeated higher-order motifs (molecules).
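As an illustration of the recursive definition, the following Python sketch (the substring length, minimum count, and the requirement that a pattern carry at least one set bit are my arbitrary choices, not the book’s filter) counts elementary and composite patterns as random bit-flips inject entropy into an initially all-zero trace:

import random
from collections import Counter

def elementary_particles(trace, length=4, min_count=4):
    # Frequent short substrings; as an arbitrary choice, a pattern must carry a set bit.
    counts = Counter(trace[i:i + length] for i in range(len(trace) - length + 1))
    return {s for s, c in counts.items() if c >= min_count and "1" in s}

def composites(trace, particles):
    # Concatenations of recognized particles that also occur in the trace.
    found = set()
    for a in particles:
        for b in particles:
            if a + b in trace:
                found.add(a + b)
    return found

def mutate(trace, flips):
    bits = list(trace)
    for i in random.sample(range(len(bits)), flips):
        bits[i] = "1" if bits[i] == "0" else "0"
    return "".join(bits)

trace = "0" * 2048                       # zero-entropy "singularity"
for flips in (0, 64, 256, 512):
    t = mutate(trace, flips)
    elem = elementary_particles(t)
    print(flips, "flips:", len(elem), "elementary,", len(composites(t, elem)), "composite")

At zero flips no patterns are detected; as flips accumulate, both counts grow.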
It should be noted that both of the filters described above are deliberately minimal and arbitrary. However, the models preserve two key features characteristic of the real universe: (i) the hierarchical structuring of particles, and (ii) the fact that particle sizes do not stretch with the universe, only their mutual distances do.
By counting the number of recursive particles at each entropy level, one observes that zero entropy corresponds to no detectable structures, while higher entropy states give rise to exponentially growing numbers of particles whose abundances follow a lognormal-like distribution.
The following key properties are observed:
At zero entropy the spacetime geometry collapses to a point, and no elementary particles emerge - this is the initial singularity.
As entropy increases, the space unfolds exponentially.
At higher entropy, elementary particles, atoms, and molecules begin to appear, with their counts following a lognormal-like trend.
Remarkably, across multiple mappings, decoding schemes, representations and threshold choices, the abundance of emergent structures is not linear but follows a lognormal distribution.
Lognormal distributions often arise in physics, generically in systems governed by multiplicative stochastic processes, where growth proceeds through successive random amplifications rather than additive steps. They are observed in phenomena such as particle clustering, galaxy mass distributions, biological growth, and economic and network hierarchies.
The result is philosophically appealing because it removes the need for metaphysics. If the universe we observe is simply the most probable outcome, a result driven entirely by the laws of statistical mechanics, then it is something we can fully understand. Could this trivial informational model provide the unified explanatory framework that has eluded physics for a century?
One of the biggest mysteries in science is why the current laws of physics appear to be so well-tuned to allow our existence.
If we were one of those sub-strings to be found, where then, should we expect to find ourselves? The answer is near the peak of the log-normal curve, where the probability of emergent structures is the highest. From this perspective, the observed regularity, scale hierarchy, and apparent fine balance of the universe would not be surprising. They are just the most typical properties of such configurations.
The early universe is widely recognized as a state of extremely low entropy—a condition that underlies the observed arrow of time and the thermodynamic evolution of cosmological systems (Penrose 2010; Carroll 2010). Classical analyses of cosmology and black hole physics emphasize that this initial low-entropy state is essential for the emergence of structure and temporal asymmetry (S 1988; Hawking and Penrose 1996).
According to the current standard model of cosmology, the universe underwent an inflationary period during its earliest moments—a phase of rapid expansion that subsequently transitioned into the observed Hubble flow. This inflationary phase was devised as an explanation for the uniformity of the cosmic microwave background radiation temperature. However, it treats the low-entropy initial state as a fine-tuned boundary condition rather than a derived property, and offers little insight into the nature of the fields driving inflation or why the initial state was so meticulously ordered.
One might also ask why the universe was not perfectly ordered—a zero-entropy state. Standard physics often points to the Heisenberg uncertainty principle and quantum fluctuations as the spoiler of perfect order, but this merely shifts the problem to another layer of abstraction.
However, this information-theoretic perspective treats spacetime geometry as a projection of information evolving toward maximal entropy. Here, the increasing entropy, when interpreted geometrically, generates a rapid, inflation-like expansion as the system relaxes from a state of maximal constraint. Our simulations consistently demonstrate that an initial state of exactly zero entropy can still give rise to rich, hierarchical structures.
An expanding universe and increasing entropy are two sides of the same coin. An expanding universe is the geometric interpretation of increasing entropy.
In the simulations described above, we traced the total number of emergent motifs. However, geometric interpretations of information yield a spacetime that is not perfectly uniform. Here and there, bits tend to cluster and form symmetries, which can be interpreted as geometric structures. Entropy creates local gradients and variations, biasing the probability of emergent structures.
This could provide an intuitive explanation for the fundamental nature of gravity; structures tend to be found in regions of higher statistical weight because those configurations are more numerous.
To put it intuitively: we “fall” toward a specific region because there are more ways for us to exist “down there” than “up here” (Verlinde 2011). Gravity, then, is not a pull, but a statistical inevitability—a macroscopic drift toward the most probable distribution of information.
Despite many promising properties, the model suffers from several obvious flaws.
The model does not explain why entropy increases over time rather than decreases.
We may owe our existence as intelligent beings to entropic increase. In semi-hostile environments where structures naturally decay, intelligence serves as a way to “fight back” by creating local order (Nicolis and Prigogine 1977). Could intelligence even function in a system with decreasing entropy, where things naturally drift toward a more ordered state anyway?
We currently consume low-entropy fuel—such as the highly ordered structure of a banana—and our bodies break that order down into high-entropy metabolic waste. How would we harvest energy in a universe where entropy decreases, where systems tend from noise toward higher order? To maintain a local arrow of time, we would essentially have to consume metabolic waste and, through some miracle of reverse-metabolism, produce a perfectly formed, low-entropy banana.
While mathematically possible in a symmetric system, our current forward-running universe is certainly more aesthetically appealing. We are lucky to live on the side of the curve where breakfast goes in the top and comes out as waste, rather than the other way around.
Computational simulations can be utilized to stress-test this hypothesis, but they reveal a significant challenge: The Boltzmann Brain Problem. While structures do emerge within the model, they fail to transition smoothly along the geodesics of geometric spacetime.
Instead of beautiful spiral galaxies or planets tracing elegant elliptical orbits, the simulation produces a chaotic “explosion” of spacetime. No General Relativity emerges, and there is no sign of a coherent wavefunction. What we observe instead are stochastic, erratic structures that behave haphazardly, adhering only to a statistical probability gradient rather than the lawful patterns of classical physics.
This contradicts empirical observation; the universe overwhelmingly favors large, lawful, and persistent structures over isolated, transient fluctuations.
Imagine analyzing a movie at the pixel level. The brightness and color values appear to oscillate. They “wave”! Why? Because the movie is stored efficiently via compression (e.g., MPEG using the Discrete Cosine Transform).
Based on observations, the micro-cosmos waves according to a complex-valued wavefunction. Why? Because we are observing compressed history.
Similarly, in the universe, we observe waving microstructures. The wavefunction is the exact, observed manifestation of this spectral structure.
Let \(\Gamma_O\) denote the set of all digital histories compatible with the existence of an observer \(O\). A history drawn at random from \(\Gamma_O\) is overwhelmingly likely to be **structured rather than chaotic**.
The reason is simple: for a fixed observer, many underlying sequences encode identical experiences. Histories with **smooth temporal and spatial correlations** admit vastly shorter descriptions when expressed in the **spectral basis**. Because the number of possible sequences grows exponentially with description length, **histories of minimal spectral complexity dominate the observer-conditioned measure**.
\[\text{Minimal Spectral Complexity} \implies \text{Maximal Probability} \implies \text{Predictable, Law-like Physics.}\]
From this perspective, quantum waves are the **most efficient representation of the microcosmos** compatible with observers. Spectral complexity is **computable, differentiable, and well-behaved**, unlike Kolmogorov complexity, yet the MDL-driven probability argument still applies.
The Fourier Transform decomposes any signal into a sum of sinusoids. In AIT terms, it is a **change of basis**: from the position basis (specifying every point individually) to the frequency basis (specifying rates of change).
For smooth, correlated data, this representation is vastly more efficient. A few coefficients capture most of the meaningful structure, while high-frequency components encode fine details or noise.
Sinusoids are the unique solutions of the simplest second-order linear differential equations, representing the fundamental form of oscillation. In signal processing, the Discrete Cosine Transform (DCT) exploits this for **energy compaction**: most of the descriptive power is captured in a few low-frequency coefficients.
Similarly, **cosine and sinusoidal components in quantum wavefunctions efficiently encode smooth correlations** while preserving essential phase information.
Quantum mechanics employs complex-valued wavefunctions \(e^{i\theta} = \cos \theta + i \sin \theta\) to retain **phase information**. Phase encodes interference effects and allows coherent summation of histories. In compression terms, complex coefficients preserve **both amplitude and directional information**, ensuring that transformations (e.g., rotations in Hilbert space) are exact and algebraically consistent.
Technological analogies reinforce this principle:
JPEG / MPEG: Use DCT to compress visual information efficiently.
MP3: Sub-band coding and Fourier-based transforms compress audio.
MRI: Captures data in frequency (K) space and reconstructs images via Fourier transforms.
The universe itself exhibits **maximal smoothness and correlated structure**, which is naturally represented via **spectral decomposition**. Spectral complexity explains why quantum microstructures wave in exactly the way we observe.
Replacing Kolmogorov complexity with spectral complexity provides a fully computable, differentiable, and physically meaningful measure of microcosmic structure. MDL-driven probability arguments still hold: histories of minimal spectral complexity dominate observer-conditioned measures, yielding predictable, law-like behavior.
Quantum waves, interference, and phase are therefore not mysterious, but the natural consequence of the universe being represented in the most efficient spectral basis compatible with observers.
While spectral complexity explains the microcosmic wave-like structure, we do not experience ourselves as waves in an infinitely dimensional Hilbert space. Instead, we find ourselves embedded in a macroscopic 3+1 dimensional spacetime that is approximately flat at everyday scales.
Why?
The reason is rooted in observation and information management. Coherent observers require well-defined boundaries to organize and access information: memories, causal chains, and predictions must remain consistent for reasoning and survival. Spectral description alone is insufficient to enforce this continuity across time and space.
Consequently, we experience a geometric compression: the universe organizes information in a way that is most efficiently describable in 3+1D relational geometry.
This compression is largely orthogonal to spectral compression: one explains microstructure (waves), the other explains the macroscopic arena in which observers navigate and persist.
In the Something from Nothing chapter we were left wondering how smooth, predictable laws of physics could emerge from pure noise.
In Humans as Axiomatic System we arrived at the conclusion that Alice must emerge from pure white noise.
One can arrange \(n\) bits of information in \(2^n\) ways. For example, two of these configurations can be interpreted as a black hole simulation and its Turing Machine implementation. Equally well they can be viewed as Alice living in expanding universe, and its execution trace.
Usually, people focus on the marriage counseling aspect—how the smooth, deterministic world of General Relativity hates the jittery, probabilistic world of Quantum Mechanics.
Both appear to be optimal compression schemes for the universe!
Finally, everything makes sense!
As demonstrated earlier, the universe is informational by nature and singularities inside black holes and in our past (the Big Bang) are not pathological mysteries but fully understood points of minimal information. This leads us to the following fundamental building blocks:
Particle View: A finite, discrete, inflated description of localized phenomena.
Quantum Mechanics: A spectral, infinitely smooth, and continuous compressed description.
General Relativity: A geometric compressed description of spacetime topology.
The well-established principle called Kolmogorov Complexity \(K(x)\) defines the information content of a string \(x\) as the length of the shortest program \(p\) that can reproduce \(x\) on a Universal Turing Machine (UTM):
\[K(x) = \min \{ |p| : U(p) = x \}.\]
While \(K(x)\) measures the complexity of an individual object, Solomonoff’s Universal Prior \(P(x)\) provides a framework for induction. It assigns a probability to a sequence based on the likelihood that a random program will produce it. A critical feature of this measure is that the Minimal Description Length (MDL) dominates the measure. Because the probability of a program of length \(|p|\) is \(2^{-|p|}\), the shortest programs contribute exponentially more to the total probability:
\[P(x) = \sum_{p: U(p)=x} 2^{-|p|} \approx 2^{-K(x)},\]
where the approximation holds up to a multiplicative constant.
The bridge between \(K(x)\) and physical reality is Landauer’s Principle. It asserts that the erasure of information is a thermodynamically irreversible process. To erase one bit of information, a system must dissipate a minimum amount of heat:
\[\Delta Q \ge k_B T \ln 2.\]
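As a worked example, the bound at room temperature (T = 300 K) comes out to roughly \(3 \times 10^{-21}\) joules per erased bit:

from math import log

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K
delta_Q = k_B * T * log(2)
print(f"Landauer bound at 300 K: {delta_Q:.3e} J per erased bit")  # about 2.87e-21 J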
This suggests that the thermodynamic entropy of a system is bounded by its informational complexity. However, \(K(x)\) presents two significant challenges for physical modeling:
Uncomputability: Due to the Halting Problem, a Turing Machine cannot determine the absolute shortest program for a string.
Non-continuity: \(K(x)\) is not a smooth function; even a small change in the input can yield massive jumps in description length.
If we treat the laws of physics not as a discrete Turing Machine but as a Spectral Compression Algorithm, the wavefunction becomes the mechanism of efficiency. Unlike the discrete nature of \(K(x)\), Fourier-based compression is computable and allows continuous amplitudes and phases. More notably, this assumption is supported by observational evidence: the universe exhibits wave-like behavior.
Instead of counting bits, we view the universe as a Harmonic Processor counting basis states. A wavefunction \(\Psi\) can be decomposed into frequencies and phases:
\[\Psi = \sum_{n=1}^{N} A_n \exp(i(k_n x - \omega_n t + \phi_n)),\]
where \(k_n\), \(\omega_n\), and \(\phi_n\) are determined by boundary and quantization conditions. The “informational size” of this state is defined by the number of active components \(N\). We can thus establish a Spectral Prior:
\[P(\Psi) \propto 2^{-N}.\]
A pure cosine wave (\(N=1\)) is the simplest possible configuration, making it the most informationally affordable and thus an obvious building block of reality. Under this view, physical laws are the statistical consequence of a universe that favors spectral sparsity.
To help visualize this idea, consider familiar signal processing scenarios, such as Audio Compression (MP3): A pure tone is represented by very few frequencies and compresses extremely efficiently. A complex symphony requires many frequencies, increasing its informational size. The universe behaves like an MP3 encoder: states that can be represented with fewer spectral components are far more probable.
Image Compression (JPEG) is another example: smooth gradients or repeating patterns require fewer Fourier or DCT coefficients to encode, while noisy textures require many. Similarly, physical reality favors “smooth” configurations that minimize spectral complexity.
These examples illustrate why, at a microscopic level, the universe appears to “wave” smoothly rather than behaving like random noise. The wavefunction encodes the minimal spectral resources needed to represent observer-relevant histories, making spectral sparsity a natural outcome.
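A small Python sketch can make this counting concrete. Here, as an assumption for illustration only, the “spectral size” of a signal is taken to be the number of Fourier components needed to capture 99% of its energy; a smooth two-tone signal needs only a couple of components, while white noise needs hundreds:

import numpy as np

def active_components(signal, energy_fraction=0.99):
    # Number of Fourier components needed to capture the given energy fraction.
    power = np.abs(np.fft.rfft(signal)) ** 2
    order = np.sort(power)[::-1]
    cumulative = np.cumsum(order) / power.sum()
    return int(np.searchsorted(cumulative, energy_fraction) + 1)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
tone = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)
noise = rng.standard_normal(1024)

print("smooth signal:", active_components(tone))   # just a few components
print("white noise:  ", active_components(noise))  # hundreds of components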
There is a symmetry between Information Theory and Statistical Mechanics. Solomonoff’s Prior and the Gibbs Measure describe the same phenomenon of “minimal cost.”
In Solomonoff’s framework: \(P(x) \sim 2^{-K(x)}\). In the Gibbs framework: \(P(x) \sim e^{-\beta E}\).
If the energy of a wavefunction is proportional to its spectral size, these equations converge. A multi-component wavefunction is both energetically expensive and informationally complex. This implies that the Principle of Least Action is the physical manifestation of the Minimal Description Length. The universe settles into smooth functions because they represent the “thermal equilibrium” of information. They dominate the measure, and therefore, are the most probable.
We assume that the observer is a finite informational structure. Consequently, any physically meaningful state description must be finitely representable. The universe is taken to consist of a finite number of distinguishable configurations.
Thus, the wavefunction cannot be a fundamentally continuous object; it must be an emergent finite representation.
The wavefunction is not interpreted as an evolving state \(\psi(x,t)\). Instead, it encodes an entire observer history as a single static object.
Let \(\mathcal{H}\) denote a full trajectory through configuration space. The wavefunction \(\Psi\) is a compressed spectral representation of \(\mathcal{H}\).
Time is therefore not fundamental but corresponds to ordering within the encoded structure.
We define a discrete spatial domain of size \(N\). Allowed frequencies are integers:
\[k \in \mathbb{Z}_N.\]
The wavefunction is represented by a finite set of integer parameters:
\[\{ (k_i, A_i, \phi_i) \}_{i=1}^{m},\]
where
\(k_i\) = integer frequency index,
\(A_i\) = integer amplitude coefficient,
\(\phi_i\) = integer phase (mod \(M\)).
The evaluation algorithm (e.g., \(\sin\) or \(e^{ikx}\)) has constant description cost and is not counted toward structural complexity.
All ontological quantities are integers. Normalization is performed only when computing probabilities:
\[P(x) = \frac{|\psi(x)|^2}{\sum_x |\psi(x)|^2}.\]
This yields the Born rule without assuming continuous amplitudes.
The spectral complexity \(C_Q\) is defined as the total binary description length of the spectral parameters:
\[C_Q = \sum_{i=1}^{m} \left[ \mathrm{bits}(k_i) + \mathrm{bits}(A_i) + \mathrm{bits}(\phi_i) \right],\]
where
\[\mathrm{bits}(n) = \lfloor \log_2 |n| \rfloor + 1.\]
This measure has the following properties:
Higher frequency indices require more bits.
Larger integer amplitudes require more bits.
More modes increase total complexity.
Fine phase tuning increases complexity.
The measure is computable and smooth in large-scale limits.
Since \(\mathrm{bits}(k)\) grows logarithmically with \(|k|\), shorter physical wavelengths require greater description length.
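A compact Python sketch of this encoding (the example mode list and the convention \(\mathrm{bits}(0) = 1\) are illustrative assumptions) reconstructs the wavefunction from integer \((k_i, A_i, \phi_i)\) triples, applies the Born rule by normalization, and sums the bit lengths to obtain \(C_Q\):

import numpy as np

N, M = 64, 256                                 # spatial domain size and phase resolution
modes = [(1, 3, 0), (2, 1, 64), (5, 1, 128)]   # illustrative integer (k, A, phi) triples

def bits(n):
    # bits(n) = floor(log2 |n|) + 1, with bits(0) taken as 1 for illustration.
    return 1 if n == 0 else int(np.floor(np.log2(abs(n)))) + 1

def psi(modes):
    # Reconstruct the wavefunction on Z_N from integer spectral parameters.
    x = np.arange(N)
    return sum(A * np.exp(1j * (2 * np.pi / N) * k * x + 1j * (2 * np.pi / M) * phi)
               for k, A, phi in modes)

def born_probabilities(psi_values):
    # Normalization is applied only when computing probabilities.
    p = np.abs(psi_values) ** 2
    return p / p.sum()

C_Q = sum(bits(k) + bits(A) + bits(phi) for k, A, phi in modes)
print("C_Q =", C_Q, "bits; probabilities sum to", born_probabilities(psi(modes)).sum())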
We assign statistical weight to histories according to:
\[P(\Psi) \propto 2^{-C_Q(\Psi)}.\]
Thus, spectrally simple histories dominate.
Sharp localization requires:
Many frequency modes,
Large frequency indices,
Precise phase coordination.
Trajectories with high-frequency fluctuations or sharp discontinuities have large \(C_Q\) and are exponentially suppressed.
Conversely:
Low-frequency modes are cheaper.
Smooth trajectories require fewer modes.
Slowly varying histories dominate the measure.
This produces:
Emergent inertial persistence,
Smooth dynamics,
Interference phenomena via linear superposition.
For observers capable of prediction and logical reasoning, the complexity functional must vary smoothly under small parameter changes.
Binary description length of integer spectral parameters provides a continuous, computable alternative compatible with finite observers.
The wavefunction is emergent and finite.
It encodes full histories, not instantaneous states.
Smooth physics arises from compression dominance.
The complex Fourier structure is taken as empirically observed, consistent with translation symmetry and linear superposition.
We interpret spacetime geometry as a compression model for worldline and field data. The optimal geometry minimizes total description length:
\[C_{\mathrm{total}}[g;\Phi] = C_G[g] + C_Q[\Phi \mid g],\]
where:
\[\begin{aligned} C_G[g] &= \alpha \int R \, \sqrt{-g} \, d^4x, \\ C_Q[\Phi \mid g] &= \beta \int \mathcal{L}_\text{matter}[\Phi, g] \, \sqrt{-g} \, d^4x. \end{aligned}\]
Here:
\(C_G[g]\) is the geometric encoding cost (spacetime curvature), given by the Einstein–Hilbert action,
\(C_Q[\Phi \mid g]\) is the conditional encoding cost of matter or fields \(\Phi\) given the geometry \(g\),
\(R\) is the Ricci scalar curvature of the metric \(g\),
\(\sqrt{-g}\,d^4x\) is the invariant spacetime volume element.
Minimizing \(C_Q[\Phi \mid g]\) with respect to matter degrees of freedom \(\Phi\) yields the classical equations of motion for the fields:
\[\delta_\Phi C_Q = 0 \quad \Rightarrow \quad \frac{\delta (\mathcal{L}_\text{matter} \sqrt{-g})}{\delta \Phi} = 0.\]
For point particles, this reproduces the geodesic equation. For continuous fields (scalar, electromagnetic, fluid), it yields the usual Euler-Lagrange equations in curved spacetime. Conceptually, these are the minimal conditional encoding trajectories or configurations for matter given the geometry.
Varying the total functional with respect to the metric \(g_{\mu\nu}\) gives:
\[\delta C_{\mathrm{total}} = \delta C_G + \delta C_Q = 0.\]
The geometric term variation yields:
\[\delta C_G = \alpha \int G_{\mu\nu} \, \delta g^{\mu\nu} \, \sqrt{-g} \, d^4x,\]
where \(G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R\) is the Einstein tensor.
The matter term variation yields the stress-energy tensor:
\[\delta C_Q = -\frac{\beta}{2} \int T_{\mu\nu} \, \delta g^{\mu\nu} \, \sqrt{-g} \, d^4x, \quad T_{\mu\nu} = -\frac{2}{\sqrt{-g}} \frac{\delta (\mathcal{L}_\text{matter} \sqrt{-g})}{\delta g^{\mu\nu}}.\]
Setting \(\delta C_\text{total} = 0\) yields the full Einstein field equations:
\[G_{\mu\nu} = \kappa T_{\mu\nu}, \quad \kappa = \frac{\beta}{\alpha}.\]
In this framework:
\(C_G[g]\) represents the description length of the geometry itself.
\(C_Q[\Phi \mid g]\) represents the description length of matter or field configurations given the geometry.
Minimizing \(C_\text{total}\) finds the optimal compression of spacetime and matter histories.
This is a direct generalization of the spectral / wavefunction MDL argument from quantum mechanics to the full gravitational case. Just as low spectral complexity histories dominate the wavefunction measure, low-geometric-complexity spacetime configurations dominate the MDL measure.
The principle naturally explains geodesic motion, energy-momentum conservation, and the Einstein field equations as outcomes of information-theoretic optimality.
Nonlinear GR is the optimal geometric compression of histories in spacetime.
If the fundamental observational assumptions hold, the universe is revealed as static and informational. There is no dynamical generation, no ontological time, and no external metaphysics.
Configuration Space: For \(n\) bits, there exist \(2^n\) configurations. Traversals: Time is the ordinal of the observer’s sequence through the configuration space.
In a reality where all computable configurations exist, the central challenge is the sampling measure: From which informational structure is an observer most likely sampled? We posit that the measure concentrates on the most compressible structures. To quantify this, we employ Spectral Complexity (\(C_Q\)) and Geometric Complexity (\(C_G\)).
We define the complexity of the wavefunction \(\psi\) as a Sobolev-type seminorm. In the discrete domain \(\mathbb{Z}_N\): \[C_Q = \sum_{i} k_{\text{eff}}^2 |A_i|^2\] where \(k_{\text{eff}} = \min(|k|, N - |k|)\). This functional penalizes high-frequency components and sharp gradients. Particles appear wave-like because wavefunctions are the optimal spectral compression of microscopic informational correlations.
Observers require well-defined boundaries, which are most efficiently defined in a geometric manifold. Geometry (\(G\)) is the optimal compression of macroscopic relational structure. \[C_G = \alpha \int R \sqrt{-g} \, d^4x + \beta \sum |a + \Gamma(v,v)|^2\] Relational Cost: The Ricci scalar \(R\) represents the first-order cost of geometric curvature. Geodesic Enforcement: The term \(\|a + \Gamma(v,v)\|^2\) ensures that non-geodesic motion increases description length.
In classical General Relativity, \(R\) allows for singularities where curvature becomes infinite. However, in an informational universe, a singularity is a computational impossibility. Because the probability of a history is weighted by \(e^{-C_Q}\), states requiring infinite spectral descriptive power (infinite frequencies) are suppressed to zero probability. The spectral codec acts as a natural low-pass filter, rendering the "infinite curvature" of standard GR unreachable.
The measure over observers is weighted by the joint minimization of complexity functionals. The observer-conditioned probability of a history \(\gamma\) is: \[\mathbb{P}(\gamma \mid O) = \frac{1}{Z_O} \exp \left[ -\lambda \left( C_Q(\gamma) + C_G(\gamma) + C_{\text{int}}(\gamma, G) \right) \right]\] Physics emerges at the intersection of these simplicity priors. Spectral and geometric smoothness are statistically aligned, producing a coherent physical world.
Treating total complexity as a Euclidean Action \(S\), the observed dynamics are stationary points (\(\delta S = 0\)).
Varying \(C_Q\) with respect to \(\psi^*\) yields the Laplacian \(\nabla^2 \psi = 0\). In the presence of a sampling frequency (clock speed), this emerges as the Helmholtz/Schrödinger form.
Varying \(C_G\) with respect to the metric \(g_{\mu\nu}\) yields the standard Einstein Field Equations: \[R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R = 8\pi G T_{\mu\nu}\]
A critical consequence of the many-to-many mapping between \(C_Q\) and \(C_G\) is the emergence of Post-Quantum Gravity (PQG). Because a single wavefunction \(\psi\) can be embedded into multiple geometric configurations \(G\) with nearly identical complexity costs, the observer does not perceive a rigid coupling.
Instead, the interaction manifests as a fundamental “noise” floor or “quantization error.” The “Mass Jitter” predicted by modern PQG models is the irreducible variance that occurs when high-dimensional spectral information is compressed into a lower-dimensional geometric manifold.
The probability of an observer finding themselves in a specific configuration is determined by the joint probability of \(G\) and \(\psi\). Because spacetime can be described by many wavefunctions, and vice-versa, the universe is fundamentally stochastic at the interface of scale. This framework removes the need for dynamical axioms, replacing them with a single principle: The universe is the minimal information capable of containing us.
The preceding chapters derive the theory at the level of axioms, measures, and limiting principles. At this stage, the theory must be tested not philosophically, but operationally: we must demonstrate that its defining assumptions are sufficient to produce structures resembling observed physics.
To this end, we construct a proof-of-concept simulation. The simulation is not intended to reproduce the universe in full detail. Rather, its purpose is sharply defined:
Given only an observer-conditioned informational measure and a compression principle, do effective quantum behavior, inertia, and gravitational geometry emerge without being explicitly imposed?
No forces, equations of motion, or spacetime structures are hard-coded. Only information, ordering, and compression are permitted.
The theory is fundamentally observer-centric. Therefore, the simulation must begin with an observer, or more precisely, with an observer-compatible informational trace.
At the implementation level, this requires specifying:
an informational capacity (a finite number of bits),
an observer wavefunction: a minimal observer filter defining what constitutes observer continuity.
The simulation then explores informational configurations that contain such an observer trace. For each compatible configuration, all admissible observer walks (orderings of informational states consistent with observer survival) are considered.
Each observer walk is evaluated by compressing it:
spectrally (minimal independent frequencies/phases),
geometrically (minimal coherent spatial embedding).
From these compressions, a joint weight is computed. Observer walks admitting minimal joint compression dominate the measure.
The universe an observer most likely finds themselves in is the observer walk with maximal compressibility.
Importantly, this formulation does not require that the observer or the total number of bits be fundamental. In the full theory, both are emergent: the most probable observer is the one that arises in the most probable informational universe, at the most probable scale.
The simulation fixes these quantities only to make computation possible, not because the theory requires them as primitives.
A system with \(n\) bits admits \(2^n\) informational states. The number of possible orderings (observer walks) over these states is \((2^n)!\). This growth is super-exponential.
Even for modest \(n\), exhaustive enumeration of configurations or observer walks is impossible. Any claim to simulate the full space is therefore mathematically false.
This is not a limitation of the present work, but a structural fact. Consequently, every simulation of a theory of this type must:
sample,
compress,
constrain, or
generatively construct
the state space.
These are not optional optimizations; they are unavoidable. The relevant question is not whether a simulation is biased, but whether the bias reflects the theory itself or is externally imposed.
The constraints used in the simulation are not ad hoc. They follow directly from empirical facts and from the theory’s own measure.
For example:
Observed universes exhibit increasing entropy. Observer walks with global entropy decrease can therefore be excluded.
Observed physics exhibits inertia and continuity. Sampling is restricted to geometrically coherent configurations, corresponding to minimal geometric description length.
These constraints do not add physics; they eliminate observer walks that are already exponentially suppressed by the observer-conditioned measure.
In this sense, the optimizations enforce the theory rather than distort it.
General Relativity poses another obstacle. In order to implement a practical simulation, we linearize the full MDL-based GR functional around a background metric, typically taken as flat Minkowski space:
\[g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \quad |h_{\mu\nu}| \ll 1,\]
where \(h_{\mu\nu}\) represents small perturbations encoding deviations from flat spacetime.
In the simulation, we implement the variation of the \(R^2\) functional. Expanding the curvature proxy \(R\) linearly in terms of the metric perturbation \(h_{\mu\nu}\), the cost gradient simplifies to a form driven by the Laplacian of the field:
\[\frac{\delta C_G}{\delta h_{\mu\nu}} \approx 2R \approx \nabla^2 h_{\mu\nu}\]
The **Laplacian term** arises naturally from the quadratic cost.
The simulation reaches equilibrium when the geometric "stiffness" (\(2R\)) balances the informational density of the observer (\(\alpha \rho\)).
Similarly, the matter encoding cost for point particles reduces to:
\[C_Q[\{x_a\} \mid h] \approx \beta \sum_a \int \left( \ddot{x}_a^\mu + \Gamma^\mu_{\nu\rho} \dot{x}_a^\nu \dot{x}_a^\rho \right)^2 dt,\]
where the Christoffel symbols are approximated linearly:
\[\Gamma^\mu_{\nu\rho}[h] \approx \frac{1}{2} \eta^{\mu\sigma} (\partial_\nu h_{\sigma\rho} + \partial_\rho h_{\sigma\nu} - \partial_\sigma h_{\nu\rho}).\]
This can be taken as the **worldline cost functional** in the weak-field limit.
Geodesics are approximated by solutions to \(\ddot{x}^\mu + \Gamma^\mu_{\nu\rho} \dot{x}^\nu \dot{x}^\rho = 0\), justifying the discrete trajectory updates in the code.
The perturbation \(h_{\mu\nu}\) in the simulation corresponds to the **discrete, integer-based metric representation**.
The **Laplacian term** in \(C_G\) drives the smoothing of the metric.
The **worldline penalty** drives particles to follow approximate geodesics.
The linearized equations ensure **computational tractability**, while retaining the MDL principle: the simulation still minimizes total description length for geometry and worldlines, albeit in the linearized limit.
The linearization allows the metric, Christoffel symbols, and worldline accelerations to be computed efficiently on a discrete grid.
Nonlinear terms \(\mathcal{O}(h^2)\) and higher are neglected, which is valid as long as perturbations remain small.
This provides a **rigorous connection between the MDL principle and the simulation**: each update step decreases an approximation to the total description length functional.
Remark: The full, nonlinear GR functional remains the ultimate target; this linearized supplement serves as a **formal justification for current simulation choices**, allowing the Python code to faithfully represent MDL-driven spacetime evolution in the weak-field limit.
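To make the weak-field scheme concrete, here is a deliberately simplified one-dimensional Python sketch (my own stand-in, not the supplementary code; grid size, couplings, and sign conventions are illustrative assumptions). The perturbation h relaxes under the Laplacian "stiffness" term sourced by an observer density, and a test particle is then nudged along an approximate weak-field geodesic on the frozen field, drifting toward the density lump:

import numpy as np

NX, DX, DT, ALPHA = 200, 1.0, 0.1, 1.0
h = np.zeros(NX)                          # metric perturbation on a discrete 1D grid
rho = np.zeros(NX)
rho[NX // 2] = 1.0                        # a single lump of observer/matter density

def laplacian(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / DX**2

for _ in range(20000):                    # relax toward laplacian(h) = ALPHA * rho
    h += DT * (laplacian(h) - ALPHA * rho)

x, v = 60.0, 0.0                          # test particle position and velocity
for _ in range(200):
    grad_h = (h[int(x) + 1] - h[int(x) - 1]) / (2 * DX)
    v += -0.5 * grad_h * DT               # approximate weak-field geodesic acceleration
    x += v * DT
print("test particle drifted from 60.0 toward the density lump at 100:", round(x, 1))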
The simulation proceeds as follows:
Initialize the observer state as a localized wavepacket, representing the minimal prior of an existing informational trace.
Enumerate or sample admissible observer walks. In practice, the simulation employs a local Metropolis-Hastings-style sampling of the path integral, where candidate trajectories are weighted by the Boltzmann factor of the total informational action. This allows the system to efficiently find the ’classical’ path without exhaustive enumeration.
Compute spectral and geometric compressions for each walk.
Assign joint weights according to the observer-conditioned measure.
Identify dominant structures and trajectories.
The full implementation is provided as supplementary material. Here we present only high-level pseudocode and representative outputs.
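In that spirit, the following Python sketch (my reconstruction of the described loop, not the supplementary code; the complexity proxies and constants are assumptions) samples observer walks with a Metropolis rule weighted by \(\exp(-\lambda\,\Delta D)\), where \(D\) combines a crude spectral cost with a geometric smoothness cost. Starting from a noisy walk, the sampler settles on far more compressible, smoother trajectories:

import numpy as np

rng = np.random.default_rng(0)
STEPS, LAM = 128, 5.0

def description_length(walk):
    # Crude proxies: active Fourier modes (spectral) plus squared accelerations (geometric).
    power = np.abs(np.fft.rfft(walk)) ** 2
    spectral = np.count_nonzero(power > 1e-3 * power.sum())
    geometric = np.sum(np.diff(walk, 2) ** 2)
    return spectral + geometric

walk = rng.standard_normal(STEPS)          # start from a noisy observer walk
cost = initial_cost = description_length(walk)
for _ in range(20000):
    candidate = walk.copy()
    candidate[rng.integers(STEPS)] += 0.1 * rng.standard_normal()
    new_cost = description_length(candidate)
    # Metropolis-Hastings acceptance with observer-conditioned weight exp(-LAM * dD).
    if new_cost <= cost or rng.random() < np.exp(-LAM * (new_cost - cost)):
        walk, cost = candidate, new_cost
print("description length: noisy start =", round(float(initial_cost), 1),
      "-> sampled walk =", round(float(cost), 1))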
Despite the absence of imposed dynamics, the simulation exhibits:
interference patterns characteristic of wave mechanics,
inertial persistence of localized structures,
effective attraction corresponding to geometric compression gradients.
These behaviors arise solely from the dominance of minimal-description observer walks.
This proof of concept does not claim numerical accuracy or cosmological completeness. Its significance is structural:
Quantum and gravitational phenomena arise as statistical consequences of observer-conditioned informational compression.
The simulation demonstrates that the theory is not merely interpretive, but generative.
The simulation consists of the following classes:
qbitwave.py - Wavefunction as Minimal Description Length (MDL) representation
gbitwave.py - Geometry as MDL
simulation_engine.py - Physics-agnostic base class
iame.py - Classes implementing the actual physics
If the present framework is correct, then nothing “runs” the universe. There is no global clock, no privileged execution order, no hidden computational engine. Reality does not evolve in the sense of a program advancing step by step.
Instead, the universe is a static informational structure. What we call physical law is a description of regularities within that structure.
An observer is not external to the universe. Everything an observer can measure, remember, or predict must already be encoded within the observer’s informational state. The observer is therefore a self-referential subsystem of the total configuration.
Time is not fundamental. It is the ordinal index along a maximally compressible path through configuration space, as experienced internally by the observer. From the outside, the structure is static; from the inside, it is lived as ordered succession.
The Wheeler–DeWitt equation, \[\hat{H}\Psi = 0,\] describes a timeless universal wavefunction over superspace.
Similarly, Everettian quantum mechanics treats the universal state as static, with all branches coexisting in a single global description.
In this respect, the present framework is aligned with both approaches: time is not fundamental, and the universe does not “happen.”
However, both Wheeler–DeWitt and Everett leave unresolved questions:
Why quasi-classical histories dominate,
Why violently oscillatory universes are not typical,
How probability arises without circularity,
Why observers experience persistent, ordered worlds.
The present framework introduces an additional ingredient:
An observer-conditioned compressibility measure.
Rather than modifying the underlying equations, we reinterpret which solutions dominate the ensemble.
Let \(D(\gamma)\) denote the total description length of a discrete history \(\gamma\). It consists of:
Spectral complexity \(C_Q(\psi)\),
Geometric complexity \(C_G(g)\).
We define an observer-conditioned probability measure: \[\mathbb{P}(\gamma \mid \text{observer}) \propto \exp(-\lambda D(\gamma)),\] where \(\lambda > 0\) is a universal scaling constant.
This replaces:
Euclidean weights \(e^{-S_E}\),
Lorentzian oscillatory weights \(e^{iS}\),
with an intrinsic information-theoretic penalty.
Histories that are expensive to encode are exponentially suppressed. No Wick rotation or external regularization is required.
The gravitational path integral, \[Z = \int \mathcal{D}[g] \, e^{iS[g]},\] is reinterpreted as a sum over descriptions rather than trajectories:
\[Z_{\mathrm{obs}} = \sum_{\gamma \ni \text{observer}} \exp(-\lambda D(\gamma)).\]
Key differences:
The sum is over informational descriptions, not dynamical paths,
Only observer-compatible configurations contribute,
Suppression arises from encoding complexity rather than phase cancellation.
Highly oscillatory geometries possess:
Large spectral bandwidth,
Poor compressibility,
Large \(D(\gamma)\).
They are therefore statistically negligible.
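A small numerical check makes the suppression tangible. Here an FFT-mode count serves as a stand-in for \(D(\gamma)\), and the value of \(\lambda\) is arbitrary (both are assumptions): the oscillatory history needs orders of magnitude more modes, so its weight \(\exp(-\lambda D)\) is vanishingly small relative to the smooth one.

```python
import numpy as np

# Toy check of the suppression argument: an FFT-mode count stands in for
# D(gamma); the proxy and lambda are illustrative assumptions.

def description_length(history, tol=1e-3):
    amps = np.abs(np.fft.rfft(history))
    return np.count_nonzero(amps > tol * amps.max())   # active spectral modes

t = np.linspace(0.0, 1.0, 512, endpoint=False)
smooth = np.sin(2 * np.pi * t)                          # compressible history
oscillatory = np.sign(np.sin(2 * np.pi * 60 * t))       # high-bandwidth history
oscillatory += 0.2 * np.random.default_rng(0).normal(size=t.size)

lam = 0.5
d_s, d_o = description_length(smooth), description_length(oscillatory)
print(d_s, d_o)                              # a handful of modes vs. hundreds
print(np.exp(-lam * (d_o - d_s)))            # relative weight: exponentially negligible
```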
The wavefunction is not ontologically fundamental. It is a minimal spectral encoding of large ensembles of discrete observer-compatible histories.
Given a discrete trace, many encodings are possible. Those requiring fewer independent frequencies and phases dominate the measure.
This naturally yields:
Linear superposition,
Interference,
Effective Hilbert space structure.
Quantum mechanics emerges as the optimal compression language for observer-relevant information.
Spectral data does not uniquely determine geometry. There exists vast degeneracy of geometries compatible with a given wavefunction.
Realized configurations minimize joint encoding cost: \[\gamma_{\mathrm{realized}} = \arg\min_\gamma \left( C_Q(\psi_\gamma) + C_G(g_\gamma) \right).\]
Smooth, low-curvature manifolds dominate because:
They admit compact coordinate descriptions,
They stabilize observer boundaries,
They minimize adjacency and connectivity encoding.
Gravity is not a fundamental force, but an effective tendency toward geometrically compressible configurations.
This explains why quantum mechanics and general relativity are complementary yet not reducible to one another.
The holographic principle asserts that bulk degrees of freedom can be encoded on lower-dimensional boundaries.
In the present framework, this is a consequence of compression: bulk descriptions contain redundancy, while boundary descriptions minimize it.
Dimensional reduction arises naturally from eliminating encoding inefficiency.
Define accumulated description length: \[D_i = D(\gamma_{1:i}).\]
The arrow of time corresponds to: \[\frac{d}{di} \mathbb{E}[D_i] > 0.\]
Time flows in the direction of increasing minimal description. Entropy increase becomes an encoding inevitability under observer-conditioned selection.
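A crude experiment already exhibits this monotone growth: compress growing prefixes of a random-walk trace with a general-purpose compressor. Using zlib as a stand-in for the description-length functional is, of course, an assumption made purely for illustration.

```python
import zlib
import numpy as np

# Crude illustration of D_i = D(gamma_{1:i}) growing with the ordinal index i.
# The zlib output size is a stand-in for the description length
# (an illustrative assumption, not the theory's actual C_Q + C_G).

rng = np.random.default_rng(1)
trace = np.cumsum(rng.integers(-1, 2, size=4096)).astype(np.int16)   # random-walk history

lengths = []
for i in range(256, trace.size + 1, 256):
    lengths.append(len(zlib.compress(trace[:i].tobytes(), level=9)))

# D_i increases with i: the perspectival arrow of time.
print(lengths[0], lengths[len(lengths) // 2], lengths[-1])
```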
Observed past corresponds to interpolation within the observer wavefunction.
The future corresponds to extrapolation. Spectral extrapolation is generically unstable, but compression penalizes high-frequency growth, stabilizing effective prediction.
This framework does not modify fundamental equations. It introduces a non-dynamical, observer-conditioned typicality measure based on computable description length.
Quantum mechanics, gravity, holography, and temporal flow are interpreted as emergent statistical properties of dominant compressible observer-compatible configurations.
Several previous approaches have explored the idea that reality is fundamentally informational:
It-from-Bit (Wheeler, 1980s): Suggests that physical phenomena emerge from binary information. Our approach aligns philosophically in treating information as fundamental, but differs in that we define a computable, observer-conditioned complexity measure that selects which configurations appear physically realized.
Cellular Automata (Wolfram, 1980s–2000s): CA models simulate physics as discrete update rules on lattice-like structures. These frameworks are dynamic and procedural, whereas our theory treats the universe as a static ensemble of information, where time and dynamics are emergent within the observer’s compressed experience.
Solomonoff Induction and Kolmogorov Complexity: These approaches quantify simplicity and assign higher probability to shorter programs. Our framework generalizes this idea to joint spectral and geometric compression and incorporates an explicit observer-conditioned measure to explain why specific low-complexity configurations dominate physical experience.
In essence, the present theory synthesizes these prior informational ideas but adds two key features:
Observer-conditioned typicality: Only configurations compatible with a finite observer contribute meaningfully.
Compression unifying geometry and spectral structure: Both classical and quantum phenomena emerge from minimization of a single description-length functional.
Thus, while the philosophical premise is not entirely novel, the quantitative formalism and its explanatory scope—recovering both quantum mechanics and general relativity as emergent statistical properties—distinguish this framework from prior informational approaches.
The framework developed in the preceding chapters treats the universe as a static ensemble of informational configurations. At the foundational level, there is no global time parameter, no primitive dynamics, and no causal signal propagation. Past, present, and future do not exist as ontologically distinct regions; all configurations coexist timelessly.
This raises an immediate question: how can an observer perceive, remember, or experience anything at all in such a structure?
Within a static informational ensemble, no bits change because other bits change elsewhere. There is no metaphysical notion of a signal traveling from an object to an observer. Nevertheless, observers experience seeing, hearing, and sensing. Perception arises because observer-compatible configurations contain internally consistent correlations between observer states and environmental structure. What is phenomenologically described as incoming sensory data is simply part of the observer’s informational configuration.
The resolution follows from the observer-conditioned probability measure. Observers do not receive information from an external world through causal transmission. Instead, they are embedded within sequences of configurations that already encode all information they experience. Perception, memory, and temporal ordering are therefore emergent properties of observer-compatible configuration sequences, not fundamental primitives.
The human observer is a static informational structure: the wavefunction \(\Psi_{\rm Alice}\) encodes the entire history of experiences and internal states. The DNA of the individual, along with all subsequent dynamics, is captured as a set of axiomatic rules generating the execution trace, which is now represented as relational information within the observer’s wavefunction.
According to the Church–Turing thesis, any such structure is, in principle, simulable on a Turing machine. The resulting simulation is deterministic: every internal state follows from previous states according to the axioms encoded in the structure. There is no external global clock; the experience of “time” is entirely internal, emerging from the ordinal index along the compressible sequence representing Alice’s history.
In this context, classical notions of free will — the ability to “choose otherwise” independently of prior states — cannot exist in the sense traditionally imagined. Every decision, every thought, every action is already encoded in the static wavefunction. The apparent flow of choices is a property of the relational information inside the observer: the internal experience of deliberation is part of the structure, not evidence of indeterministic agency.
Could the introduction of external randomness change this? Consider the hypothetical addition of quantum randomness to a simulated universe. Random events might perturb Alice’s states within her wavefunction, producing outcomes that are unpredictable from an external perspective. Yet from the perspective of the static informational structure, these events are merely additional data points embedded in the wavefunction. They do not confer genuine freedom: the observer still experiences a fully determined relational trajectory. The so-called “choices” influenced by randomness are no less predetermined in the encoded structure than those arising from classical computation.
In other words, free will in the conventional sense is not meaningful in a static informational universe. Decisions that are determined by internal reasoning or influenced by stochastic events are both fully contained within the wavefunction \(\Psi_{\rm Alice}\). What we perceive as decision-making is an emergent property of compressible sequences within the observer’s informational structure, not a consequence of external freedom or randomness.
Thus, whether deterministic or stochastic influences exist, they do not grant external agency. The experience of choice arises from relational structure, not from the ability to violate the informational constraints of the observer’s wavefunction. Free will is internal and emergent, not external and indeterministic.
In an informational ontology, the total accessible reality for an observer is bounded by a finite bitstring. Consequently, the "Universe" is not an external container, but a set of correlations encoded within the observer’s own wavefunction.
Interpolation and the Manifest Past: Interpolation is the reconstruction of states within the domain of the observer’s recorded execution trace. Because this region is constrained by "pinned" data points (memory), the resulting wavefunction is stable and low-entropy. This produces the smooth, predictable, and local laws of physics we experience as the "Past."
Extrapolation and the Macro-Phenomena: Extrapolation occurs when the spectral encoding is projected beyond the domain fixed by memory. In any complex-valued spectral model (like a Fourier or Sobolev reconstruction), extrapolation is inherently ill-posed. Small variances in the spectral coefficients manifest as rapidly diverging amplitudes at the boundaries.
These large-amplitude fluctuations are precisely what we perceive as celestial bodies: planets, stars, and black holes. They are not fundamental entities, but the "spectral ringing" of a finite informational structure extrapolating its internal model into the unconstrained Future or the distant "Elsewhere."
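This instability is easy to reproduce numerically. The sketch below uses a polynomial fit as a stand-in for the spectral encoding (an assumed simplification; the amplification of small perturbations outside the constrained domain is the point): tiny noise in the "memory" region barely changes the interpolant, but blows up beyond it.

```python
import numpy as np

# Minimal sketch of why extrapolation is ill-posed while interpolation is not.
# A degree-15 polynomial stands in for the spectral encoding (an assumption);
# the qualitative boundary amplification is what matters.

rng = np.random.default_rng(3)
x_past = np.linspace(0.0, 1.0, 40)                       # domain pinned by memory
trace = np.exp(-(x_past - 0.5) ** 2 / 0.02)              # recorded execution trace

P = np.polynomial.polynomial
fit_clean = P.polyfit(x_past, trace, deg=15)
fit_noisy = P.polyfit(x_past, trace + rng.normal(scale=1e-3, size=trace.size), deg=15)

x_future = np.linspace(1.0, 1.5, 20)                     # beyond the pinned domain
diff_past = P.polyval(x_past, fit_noisy) - P.polyval(x_past, fit_clean)
diff_future = P.polyval(x_future, fit_noisy) - P.polyval(x_future, fit_clean)

print(np.max(np.abs(diff_past)))      # interpolation: deviation stays tiny
print(np.max(np.abs(diff_future)))    # extrapolation: amplitudes diverge
```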
Among all possible observer walks through configuration space, those that maximize compressibility dominate the observer-conditioned measure. These walks induce a perspectival arrow of time: an ordering from configurations with many compatible continuations to those with fewer.
The probability of emergent geometry and stable microstructure is not monotonic in entropy. At very low entropy, structure is absent and geometry collapses. At very high entropy, coherence dissolves. Rich, persistent structures dominate only at intermediate entropy.
Observers necessarily find themselves on one side of this entropy peak. Their perceived universe is determined by this self-location.
An observer located on the low-entropy side of the entropy profile experiences successive configurations of increasing entropy and differentiation. The perceived universe expands, a singularity appears in the past, and objects emerge irreversibly. This corresponds to what is traditionally described as a Big Bang.
An observer located on the high-entropy side experiences decreasing entropy and tightening constraints. The perceived universe collapses, a singularity appears in the future, and objects fall inward without escape. This corresponds to a black hole.
These are not distinct universes. They are different readings of the same timeless informational structure. Expansion and collapse are epistemic interpretations induced by observer self-location.
Geometric space gives the observer a good sense of its informational boundaries. Initially, this increases the probability of survival through better reasoning and prediction. However, because of the nature of the wavefunction, the boundaries of the observer are not sharp. The observer's information is not entirely isolated: irrelevant information bleeds in, and the observer slowly dissolves. Moreover, as an observer accumulates memory, its internal informational complexity increases. What initially enhances survival by enabling better prediction and stability later turns into a liability. Increasing complexity reduces compressibility, and eventually no high-probability continuations remain. The observer ceases to exist through vanishing measure. Death is therefore a statistical event: the exhaustion of observer-compatible configurations.
If the universe is fundamentally informational, then what role does mathematics play in it?
The minimal description of an observer’s history, encoded in the joint spectral and geometric complexities \((C_Q, C_G)\), is ultimately a statement about compressibility. Both theories, GR and QM, though seemingly distinct, are instances of the same fundamental principle: compression.
Mathematics provides the natural language in which these costs are measured and minimized. Concepts such as Hilbert spaces, Fourier decompositions, variational calculus, and differential geometry are not arbitrary—they are precisely the structures that allow us to encode information with maximal compression.
This idea aligns with the doctrine of Mathematical Platonism. If mathematical truths exist independently, then the universe can be viewed as a manifestation of these truths, arranged in maximally compressible form.
In other words, mathematics is a universal code for compression.
If the probability of emergence of an observer is determined by the joint minimal description length of both \(\psi\) and \(G\), we may ask: what role does intelligence play in these probabilities?
Consider a simple classical analogy: a sphere rolling down a potential valley. Its trajectory is easy to describe—the informational cost is minimal. Now imagine the sphere is an intelligent observer, capable of reasoning about future outcomes. It realizes that the minimal-cost path leads to collision with another observer—a fatal outcome. To avoid death, it deliberately chooses a different trajectory, which is informationally more expensive.
Let \(\gamma_{\rm passive}\) denote the natural, minimal-cost path without intelligence, and \(\gamma_{\rm active}\) the path chosen by an intelligent observer. Then we can define the informational cost of intelligence as \[\Delta \mathcal{C}_{\rm int} = \mathcal{C}_O[\gamma_{\rm active}] - \mathcal{C}_O[\gamma_{\rm passive}],\] where \(\mathcal{C}_O[\gamma]\) is the description length (or spectral complexity) of trajectory \(\gamma\).
Sudden death represents a massive increase in spectral complexity: high-frequency modes must be injected into the observer’s wavefunction to encode the catastrophic event. Whether the demise is rapid or gradual, the final informational cost is enormous. Intelligence, then, is the pursuit of minimal informational complexity.
Formally: \[\gamma_{\rm active} = \arg\min_\gamma \mathbb{E}[C_Q(\gamma_{\rm future})],\] where \(C_Q\) measures the spectral complexity of the observer’s continuation. Intelligence is the internal sensing of compressibility: just as the ordinal index through configuration space encodes “time,” adaptive decision-making encodes survival as minimal informational cost.
This perspective suggests that thinking, planning, and decision-making are emergent strategies for navigating the landscape of possible configurations with minimal spectral and geometric complexity. Intelligence is not free, but it is still cheaper than the informational cost of death. It is a good investment.
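As a toy illustration of this trade-off, the same FFT-mode proxy used earlier can price the three trajectories; the proxy and the specific waveforms are assumptions chosen only to make \(\Delta \mathcal{C}_{\rm int}\) concrete.

```python
import numpy as np

# Toy pricing of the three trajectories discussed above, using an FFT-mode
# count as a stand-in for the description length C_O (an assumption).

def cost(path, tol=1e-3):
    amps = np.abs(np.fft.rfft(path))
    return np.count_nonzero(amps > tol * amps.max())

t = np.linspace(0.0, 1.0, 512, endpoint=False)
passive = np.sin(2 * np.pi * t)                          # minimal-cost roll down the valley
active = passive + 0.05 * np.sin(2 * np.pi * 5 * t)      # deliberate detour around the obstacle
fatal = passive.copy()
fatal[256:] += np.sign(np.sin(2 * np.pi * 80 * t[256:])) # catastrophe injects high-frequency modes

delta_c_int = cost(active) - cost(passive)               # informational cost of intelligence
delta_c_death = cost(fatal) - cost(passive)              # informational cost of sudden death
print(delta_c_int, delta_c_death)                        # small detour vs. enormous catastrophe
```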
If you are right-handed, living in a right-handed universe, could there be a left-handed version of you, living in a left-handed universe?
Consider a global parity transformation \(\mathcal{P}\), which flips all spatial axes:
\[\mathcal{P}: x \mapsto -x, \quad p \mapsto -p.\]
Apply \(\mathcal{P}\) consistently to both the observer and its environment. Let the total informational state be \[\mathcal{I} = (\mathcal{O}, \mathcal{E}),\] where \(\mathcal{O}\) represents the internal observer encoding and \(\mathcal{E}\) represents the environment. The mirrored encoding is then \[\mathcal{I}' = \mathcal{P}(\mathcal{I}) = (\mathcal{P}(\mathcal{O}), \mathcal{P}(\mathcal{E})).\]
Within the spectral compression framework, the complexity of the original and mirrored states is identical: \[C(\mathcal{I}') = C(\mathcal{P}(\mathcal{I})) = C(\mathcal{I}),\] because the parity transformation merely relabels spatial modes and does not increase the number of active frequencies, amplitudes, or phases.
Using the spectral prior, the probabilities of the two configurations are therefore equal: \[P(\mathcal{I}) \propto 2^{-C(\mathcal{I})}, \quad P(\mathcal{I}') \propto 2^{-C(\mathcal{I}')} = 2^{-C(\mathcal{I})} = P(\mathcal{I}).\]
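The invariance \(C(\mathcal{I}') = C(\mathcal{I})\) can be verified directly in the spectral picture: reflecting the sample grid changes only the phases of the Fourier coefficients, leaving their magnitudes, and hence the mode count, unchanged. The sketch below assumes the same FFT-mode proxy for complexity used earlier.

```python
import numpy as np

# Check that a parity flip leaves the spectral-complexity proxy unchanged:
# reversing the grid changes only the phases of the Fourier coefficients,
# not their magnitudes, so the active-mode count C is identical.

def complexity(state, tol=1e-6):
    amps = np.abs(np.fft.fft(state))
    return np.count_nonzero(amps > tol * amps.max())

rng = np.random.default_rng(4)
grid = np.arange(256)
state = rng.normal(size=256) * np.exp(-((grid - 128) / 20.0) ** 2)   # localized observer state
mirrored = state[::-1]                                               # P: x -> -x on the grid

print(complexity(state), complexity(mirrored))                       # equal
print(np.allclose(np.abs(np.fft.fft(state)), np.abs(np.fft.fft(mirrored))))  # True
```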
Implications:
A right-handed observer in a right-handed universe and a left-handed observer in a mirrored universe are equiprobable.
Observed asymmetries, such as weak parity violation, do not bias the global informational measure for mirrored observer + environment pairs.
This argument generalizes to other global isomorphisms, such as time reversal or spatial rotations: all self-consistent observer histories related by symmetry transformations carry equal measure.
Handedness, orientation, and similar labeling properties are therefore indexical rather than ontologically privileged.
In short, multiple perspectives of the same underlying information can encode distinct, internally consistent observers, and redundancy ensures that no single labeling (right-handed vs. left-handed) is intrinsically favored. This formalizes the intuitive notion that consciousness emerges from structural information, not from the specific embedding of matter or geometry.
Right-handed in the right-handed universe and left-handed in the left-handed universe both describe the same observer, with equal probability.
The observer finds itself within configurations that already contain the universe, including other observers. In this sense, we are not located inside the universe. The universe, as experienced, is encoded within us.
Applying logic to religious texts is often regarded as a category error. However, nearly all nations and societies—even those isolated from the rest of the world—have developed their own spiritual deities that they worship. The widespread and persistent emergence of belief systems across cultures is too significant to dismiss as mere coincidence. This naturally raises the question: Why is belief in God so prevalent?
An inherent characteristic within our religious beliefs is the notion of morality. It is often rooted in principles of empathy, compassion, fairness, and the recognition of the inherent value and dignity of others. According to Christians, God is the source of morality. For example, the Ten Commandments include the command to love God and to love one’s neighbor as oneself.
A human with high moral standards apparently possesses an understanding of what is right and what is wrong. Certain actions may carry a sense of slight wrongdoing (such as a small white lie), while others can be considered significantly more severe (like committing a cardinal sin). Regardless of the degree, unless in a state of psychosis or lacking mental capacity (non compos mentis), we possess a conscious awareness of our actions and can discern between right and wrong.
Why is it considered bad to steal food from a friend? If you are hungry, wouldn’t it be easier to satisfy your hunger by taking food from those who cannot protect themselves? However, a mysterious internal voice, known as conscience, immediately informs us that such an action would be morally reprehensible. Instead, we inherently understand that the right course of action would be to share whatever little food one has to aid the most vulnerable individuals, even if it means risking our own well-being.
Let us go through typical rights and wrongs:
| Wrong (bad, sin) | Right (good) |
|---|---|
| Lie | Tell the truth |
| Hate | Love |
| Steal | Share |
| Arrogance | Humility, nobility |
| | Empathy |
| Kill a friend | Die for a friend |
| | Love one’s neighbor as oneself |
| | Do unto others as you would have them do unto you |
If recent advances in DNA research are to be believed, then we humans are not that different from other animals. This raises the question of whether the concept of morality is unique to humans only. Are we the only species that knows the difference between right and wrong?
I spent my youth on a farm, so I should have some first-hand knowledge about the subject. We had a dog named Raju, who was quite human-like. Raju understood a good number of words and was much like any one of us children.
Every weekend we used to go hunting for hares. I know that some people disapprove of killing animals, but I personally think that it is acceptable as long as you eat them, which justifies the killing (one more item to be added to the list of rights and wrongs). Morally speaking, it feels more right to kill what you eat yourself rather than asking others to do it for you.
Anyway, there is a lot you can learn about a dog after fourteen years of going on hunting trips together. Raju definitely had dreams. It chased hares whenever it was asleep. Anyone watching a sleeping dog and seeing it dream might wonder whether Freud’s theory of psychosexual development is a genius theory or just complete nonsense.
If you see a big bear eating your friend alive, you will definitely have bad dreams about it. Dreams in which you are the one getting eaten. You try to run, but your legs just do not work. After experiencing these terrifyingly realistic dreams night after night, you will most likely try to discover a way to survive in case a bear ever attacks you in real life. Your chances of survival are better with dreams than without. So we dream for the same reason that military forces train themselves in war games and simulators. Nature invented the concept of simulation long before military forces did. Dreaming is a built-in virtual simulation system that helps us train for worst-case scenarios safely in our own beds.
I remember the day we brought this small, shaky puppy home for the first time. We already had one dog, but it was getting old, and we had made the decision to give it a final act of kindness soon to prevent it from suffering. When the old dog saw the new puppy entering the house, it went straight to its sleeping corner and lowered its head. It did not respond to any of our calls or eat anything we offered to cheer it up. It seemed jealous, depressed, almost as if it had lost its sense of purpose in life.
It is of course not possible to draw solid conclusions based on just one case, but based on my personal observations I would say that dogs have feelings too. Dogs experience dreams. Dogs seem to feel pain. Dogs are always happy to see you when you get home. They can even exhibit jealous and depressed behaviors. Perhaps feelings are something that evolution developed long before the first humans came into existence.
Logical reasoning and the ability to understand complex and abstract concepts are what separate us from the rest of the animals. So if one’s heart sometimes contradicts one’s head, perhaps it is best to listen to the head. It is our minds that define us as humans, not just our hearts.
After this small digression about dogs, let us return to the list of rights and wrongs. The things that we call right match precisely the typical behavioral pattern of animals that live in groups, animals such as humans. Correspondingly, what we call sin correlates with individualistic behavior.
The theory of morality can therefore be paraphrased as follows:
Morality is the native behavior of animals living in groups.
If a bear attacks you, and it might actually happen here in Finland, your dog will not run. It will turn against the beast, fighting to the end to defend you. And what did Jesus say about love? “There is no greater love than to give your life for your friends!” Even the concept of the greatest possible love, as presented by Jesus, seems to perfectly align with the typical behavior of dogs.
If one classifies the attributes usually associated with God (at least the God of Christianity), they seem to match perfectly the attributes of animals living in groups.
Obviously, humans have a tendency to gather and live in large, densely populated groups. This social behavior has apparently provided us with improved chances of survival. Given our relatively short teeth and twisted pair of legs, we are not well equipped to compete with many other predators.
From a survival standpoint, the importance lies in the survival of the species as a whole rather than individual group members. Evolution has therefore shaped humans with a tendency to prioritize the needs of the group over personal needs. After all, if we were not friendly to each other, there would be no group. This is evident in extreme cases where individuals are willing to sacrifice their own lives to ensure the survival of others. This attribute of human behavior has been utilized in many movies to create emotionally impactful narratives that resonate with audiences, and to maximize cash flow.
Living in groups only makes sense if it contributes to our survival. Obviously not all grouping models automatically increase the chances of survival.
It is easy to imagine a group that does not provide any advantage for survival. An example of a poorly functioning group is one where every member acts as a leader, trying to tell others what to do.
So groups must be well organized to be effective. One of the most evident methods of organizing a group is through the concept of leadership. A group with a capable and influential leader guiding others in an organized manner offers its members the best chances of survival. We have survived only as coordinated groups.
Therefore our long-term survival in the course of evolution has relied on our capacity to identify and follow good leaders who aid us in survival. Those who followed leaders with such qualities were more likely to survive and reproduce. Those who did not appreciate leaders who maximized the survival of the group were more likely to die out. Over time this led to the evolution of a species with a hard-wired instinct to seek the best possible leaders.
God is the best leader we can think of!
God is the ideal leader.
God possesses all the attributes of the greatest leader imaginable, with qualities we could hope to find in mortal leaders. God even has the power to overcome the ultimate threat we all face—death itself.
It is understandable why one would desire to believe in such a magnificent leader, even though spiritual in nature and highly challenging to observe.
Nothing needs to happen for something to be true.
Instead of assuming spacetime, predefined constants, or vibrating strings as given, we begin by assuming nothing—only to immediately discover that no theory can be free of assumptions. To attempt to assume nothing is already to assume something. In any axiomatic system, it is impossible not to assume.
So we begin from the one thing we cannot doubt: there is Alice.
Let us digitize Alice’s DNA and simulate it on a computer.
Alice is generated by an explicit algorithm: a DNA-like simulation code. A virtual Alice appears in a virtual universe. During execution, consciousness arises.
Now we begin a transformation.
We progressively replace computation with lookup tables: less “code,” more “data.” At every step, Alice must be preserved exactly. Within axiomatic systems, equivalence is absolute.
We continue this process to its limit.
The program is fully unrolled. The code vanishes.
What remains is a static data structure encoding the entire execution trace. There is nothing left for the computer to run.
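On a trivially small scale, the transformation can be spelled out in code. This is an illustrative toy over a finite state space, an analogy for the unrolling rather than a model of Alice's DNA.

```python
# Toy illustration of the code-to-data transformation described above: a
# procedure over a finite domain is replaced by a precomputed lookup table.
# Every observable input/output pair is preserved exactly; only the amount of
# "execution" shrinks. (This is an analogy, not a model of Alice's DNA.)

def alive_step(state: int) -> int:
    """Some deterministic 'dynamics' on a finite state space."""
    return (3 * state + 7) % 101

# Stage 1: procedural execution.
def run_procedural(initial: int, steps: int) -> int:
    s = initial
    for _ in range(steps):
        s = alive_step(s)
    return s

# Stage 2: partially unrolled: the per-step rule becomes a lookup table.
STEP_TABLE = {s: alive_step(s) for s in range(101)}

def run_tabulated(initial: int, steps: int) -> int:
    s = initial
    for _ in range(steps):
        s = STEP_TABLE[s]
    return s

# Stage 3: fully unrolled: the entire execution trace is static data.
TRACE = {(s0, n): run_procedural(s0, n) for s0 in range(101) for n in range(50)}

assert run_procedural(42, 30) == run_tabulated(42, 30) == TRACE[(42, 30)]
print("all three representations agree; the last one contains no code to run")
```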
Now the key question becomes unavoidable:
Is Alice still conscious and pain-sensitive?
She must be.
If consciousness disappeared at some particular code-to-data ratio, that ratio would become one of the most well-known physical constants: the code/data threshold for vanishing pain. Such a constant would be arbitrary, unmotivated, and irreducible.
Therefore, consciousness cannot depend on procedural execution as such.
This forces the conclusion:
Consciousness is a property of informational structure, not of algorithmic execution.
The usual objection immediately follows: “But nothing is happening if the computer isn’t running.”
This objection assumes that time is fundamental.
It is not.
Static structure has no notion of time. Alice’s experience of time is encoded relationally within the structure itself. From the outside, the structure is static. From the inside, it contains ordered experience—Alice living, remembering, anticipating.
The recent development of neural networks provides a powerful analogy.
Modern AI systems are massively parallel, interconnected structures with weighted connections. These weights are almost entirely static data. During interaction, there is no explicit procedural simulation of physics, no internal clockwork universe ticking forward step by step.
When a question is asked, the response is a projection from an already compressed informational structure. The “computation” is closer to indexing, interpolation, and constraint satisfaction than to sequential execution.
The distinction between “running” and “not running” becomes secondary.
There is no physics engine inside these systems. Yet they can produce realistic simulations, motion, causality, and physical intuition. Structure has replaced explicit procedure. Compression has replaced simulation.
Based on all observational evidence, we observers are finite structures. However, if reality is fundamentally informational, a natural question arises: is the total information finite or infinite?
If it were finite, one would have to ask: finite by how many bits? And who, or what, set that limit? Any fixed bound would itself require explanation. A finite informational universe merely pushes the mystery back one level.
The only non-arbitrary conclusion is that the deep nature of everything cannot be finitely bounded. At the most fundamental level, reality must be infinite— perhaps not even informational in the familiar sense, but beyond any finite description.
We have arrived at a zero-parameter framework which we can intuitively understand. In principle, it allows us to compute probabilities of our own existence without introducing arbitrary constants or privileged initial conditions.
More importantly, it clarifies what the universe is made of: pure abstract information.
Where did all the matter in the universe come from? The answer is: nowhere.
In mathematics, no object has ontological privilege over another. True is no more fundamental than false; the two form a binary object, much like a coin. Heads don’t exist more than tails. An empty set is no less legitimate than a non-empty one.
If there were a rule that favored nothingness over something, we would again have to ask about the origin of such a rule—who set it? Non-empty structures are just as real or unreal as empty sets. Both exist because nothing forbids them.
Abstractness is the only somewhat mysterious property left to explain. However, we can create those abstract universes ourselves.
Modern virtual games demonstrate this principle: realistic avatars inhabit richly detailed worlds. In principle, these worlds could be expanded until they are informationally equivalent to our own universe.
What remains mysterious is not structure, but what these abstract structures are capable of describing: pain.
Being nothing but abstract information, it feels surprisingly real.