The Universe of Codes
welcome to the physics website of
Andrea Gregori
A short introduction
The starting point of my research was the aim of finding, within a theory providing a quantization of gravity, the configuration that describes the physics of the universe we observe. I had string theory in mind. In that case the physical content depends on the geometry of the compactification; by "configuration" I mean precisely a choice of compactification, and possibly a set of constraints on the dynamics. More generally, by configuration I mean the set "geometry plus the conditions one has to impose in order to produce a certain physical content and dynamics".
The question I was asking myself was: suppose one finds the solution; is there a rationale for it, some reason allowing us to say "a priori" that this had to be the right configuration, beyond the simple fact that "it works"? The idea was to avoid "ad hoc" terms and to impose the smallest possible number of conditions. Starting from the observation that in the physical world all symmetries appear to be broken, I asked myself whether this could follow from some entropy principle. Moreover, I wondered whether such an entropy principle had to be understood not as a selection principle, but as something that itself works statistically, so that no physical law, whether of symmetry or of conservation, would be "exact": each would simply be the most entropic situation in some phase space of the configurations of the universe.
In this way I arrived at my proposal that the universe is the superposition of all the geometric configurations one can think of. Inspired by general relativity, I started from the idea that any physical object corresponds to some geometry of space, which in turn is equivalent to a certain distribution of energy in space. On this basis I formulated the problem of the configurations of the universe in terms of distributions of units of energy in a space of finite dimension. I chose a discrete space, because the set of all energy distributions can then also be seen in a more abstract way, as the set of all possible binary codes of information, i.e. strings of 0s and 1s, to be interpreted as the set of maps from a space of energies into a space of positions. From this point of view, the universe would be nothing other than the set of all possible logical codes, and the physical world pure information - indeed the whole of information, which we interpret and organize in our mind in terms of energy and space, and from there fields, particles, etc. In such a universe, time ordering corresponds to the natural ordering that the set of codes possesses, namely the one given by inclusion of sets. The set of codes corresponding to a given total energy contains the set of codes corresponding to a lower energy, in the sense that for any code belonging to the second set there is a code in the set of higher energy which contains it as a subset. This may appear rather abstract, but this formulation of time ordering corresponds to the common perception of time flowing "from past to future": the present has memory of the past, in that it "contains" it, but it cannot have memory of the future, because the future is not contained in it.
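As a purely illustrative sketch (my own toy model, not the author's actual construction), one can picture a code as a set of occupied cells in a discrete space of N positions, with one unit of energy per occupied cell, and read the time ordering directly off set inclusion:

```python
# Toy illustration: a "code" is a set of occupied cells in a discrete space of
# N positions, i.e. a binary string with one 1 per unit of energy.
# "Time ordering" is read off from set inclusion: every code of total energy
# E-1 is contained in at least one code of total energy E.

from itertools import combinations

N = 6  # number of discrete positions (toy value, my assumption)

def codes(total_energy, n_cells=N):
    """All distributions of `total_energy` indistinguishable units
    over `n_cells` cells, at most one unit per cell (toy simplification)."""
    return [frozenset(c) for c in combinations(range(n_cells), total_energy)]

def contained_in_some(past_code, future_codes):
    """A past code is 'remembered' by the future if some future code contains it."""
    return any(past_code <= f for f in future_codes)

for E in range(1, N):
    past, future = codes(E - 1), codes(E)
    assert all(contained_in_some(p, future) for p in past)
    print(f"E={E}: all {len(past)} codes of energy {E-1} are subsets of codes of energy {E}")
```

In this toy version the inclusion property holds trivially; the point it is meant to visualize is that this ordering by inclusion, and not an external clock, is what plays the role of time in the construction.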
Although all this may appear a purely formal game, the universe defined in this way seems to possess all the properties of the real physical universe. The various configurations can in fact be classified according to their entropy, i.e. according to the volume they occupy in the phase space corresponding to a certain "time", or total energy, of the universe. One then discovers that the configurations do not all have the same weight; indeed, one can show that no two configurations have the same weight. The universe resulting from their superposition therefore shows rich structure. A configuration occurs the more often, the more symmetric the geometric space it corresponds to. The most entropic configuration corresponds to the geometry of the three-dimensional sphere, whose curvature can be seen to correspond to what one currently expects from the contribution of the cosmological constant, plus the contributions of matter and radiation of our universe. In this way one also obtains a natural explanation of the fact that we live in three dimensions.
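A minimal sketch of what "weight" means here, under assumed simplifications of my own (distinguishable cells, indistinguishable energy units, no geometry): the weight of a configuration is taken as the number of microscopic placements realizing it, i.e. the volume it occupies in the phase space at fixed total energy.

```python
# Toy illustration (my own, with assumed simplifications): the "weight" of a
# configuration is the number of microscopic placements that realize it,
# i.e. the volume it occupies in the phase space at fixed total energy E.

from itertools import product
from collections import Counter

N, E = 4, 6  # toy numbers of cells and energy units (my assumption)

weights = Counter()
for placement in product(range(N), repeat=E):           # each unit chooses a cell
    occupation = tuple(sorted(Counter(placement).values(), reverse=True))
    weights[occupation] += 1                             # same "shape" -> same configuration

for shape, w in weights.most_common():
    print(f"occupation pattern {shape}: weight {w}")
```

In this toy counting, the more evenly spread (loosely, more "symmetric") occupation patterns indeed carry the largest weights, which mirrors the statement above only qualitatively; the actual classification in terms of geometries of three-dimensional spaces is of course far more involved.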
Once observables are appropriately introduced through the interpretation of codes of information in terms of geometries, and therefore of wave packets etc. in three-dimensional space, the assumptions made so far imply a certain "fuzziness", an "unsharpness", in the value of any quantity. This unsharpness is precisely of the order of the Heisenberg uncertainty at the base of quantum mechanics. In this framework, the latter is therefore viewed as the uncertainty due to the fact that not only all observables, but also the three-dimensional space in which they are defined, are only average concepts.
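For reference, the uncertainty relations alluded to here are the standard Heisenberg ones, Δx·Δp ≳ ħ/2 and ΔE·Δt ≳ ħ/2; the claim of this framework is that the combinatoric "fuzziness" of the averaged three-dimensional space reproduces an unsharpness of precisely this order.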
What one obtains is a universe in which everything, starting from the dynamics and the time evolution, is ruled by a kind of "entropic principle", which replaces both classical causality and the probabilistic evolution of quantum mechanics. Special and general relativity, and quantum mechanics, are approximations that one recovers in appropriate limits. Masses and couplings are also related to a volume of occupation in the phase space. Once the spectrum of the theory is identified, through the passage to an approximate description in terms of string theory, it is possible to compute the masses of the elementary particles and their couplings from a principle of geometric probability.
In this scenario there are no free parameters to adjust: everything is determined in terms of the only "variable" of the theory, the age of the universe. In this way, not only the average curvature of the universe (and therefore the so-called cosmological constant), but also the masses of particles and their couplings, turn out to depend on time. Once, through comparison with experimental data, one solves for just one of these quantities, or equivalently for the age of the universe, all the others are uniquely determined. It is therefore quite remarkable that the values one predicts turn out to agree with experimental observations (for instance, already at the first orders of approximation one obtains a value of the fine structure constant α that differs from the experimental one by only one part in 10^6). Moreover, the time dependence of masses and couplings is in agreement with some deviations observed in the spectra of ancient quasars. Among its various properties, this scenario is characterized by the absence of low-energy supersymmetry (SUSY is broken at the Planck scale) and of Higgs bosons (masses receive a different explanation here, and the description of interactions in terms of gauge symmetries is only an approximation). Even the 125 GeV resonance detected at the LHC, and attributed to the presence of a Higgs field, is here not only explained differently, but comes out as a prediction of the theory.
Besides giving predictions in qualitative and quantitative agreement with experimental data as far as the physics of elementary particles is concerned, one of the most interesting aspects of this approach is that it allows one to obtain a true theory of quantum gravity, in the sense that it allows one to quantize the geometry of space. In this scenario it is possible to study quantum systems with a non-trivial geometry. An example is high-temperature superconductors. In this theoretical framework one can see the relation between lattice complexity and critical temperature, namely how the quantum fluctuations of the geometry reflect on the delocalization of the electronic wave functions, thereby correcting the predictions of BCS theory so as to account also for the geometry of the material. In this way it is possible to predict the critical temperature of a material from its geometry, understood not simply as morphology, but in the relativistic sense of a distribution of energy in space.
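For orientation, the textbook estimate that the text says gets corrected is the weak-coupling BCS formula T_c ≈ 1.13 Θ_D exp(−1/(N(0)V)). The sketch below only evaluates this standard baseline with assumed, order-of-magnitude parameters; the geometric correction itself is not spelled out here, so it is not implemented.

```python
# The standard weak-coupling BCS estimate of the critical temperature,
# T_c ≈ 1.13 * Theta_D * exp(-1 / (N0 * V)).
# In the framework described above this prediction is said to be corrected by
# the geometry (energy distribution) of the material; only the textbook
# baseline is sketched here, with illustrative parameter values.

import math

def bcs_tc(theta_debye_K: float, n0_times_v: float) -> float:
    """Weak-coupling BCS critical temperature in kelvin."""
    return 1.13 * theta_debye_K * math.exp(-1.0 / n0_times_v)

# Example with assumed, order-of-magnitude parameters for a conventional superconductor:
print(f"T_c ≈ {bcs_tc(theta_debye_K=300.0, n0_times_v=0.25):.1f} K")
```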
the role of prime numbers
Prime numbers can be seen as the "building blocks" of the natural numbers. It is reasonable to expect that, in a physical scenario in which the values of masses, couplings, and amplitudes are related to discrete combinatoric weights, prime numbers play a fundamental role. Indeed, their distribution among the integers turns out to say something about certain similarities of physical structures at different scales, and about certain scaling properties, such as, among others, the scaling of the couplings of the theory. But there is more: once the relation between the distribution of primes and the geometric structures of the universe is identified, one has a geometric method for investigating the distribution of prime numbers. The latter can then be derived by inspecting the combinatoric probability of forming certain discrete geometries of the universe. In this way one recovers the well-known expression of the prime-counting function. Furthermore, through a closer investigation of the way this class of configurations (the ones building up the long-range interactions) is constructed, one derives a bound on the regularity of their distribution at a generic size of the universe. This is obtained by direct construction, and in principle it does not need to pass through a physical interpretation; the latter is not required, and serves only as a guideline. Focussing on objects admitting a concrete physical interpretation is of help, but as a matter of fact no external physical assumption is required, as the problem is a purely geometric-combinatoric one. The bound on the regularity of the distribution of the geometries corresponding to the prime numbers is derived by constructing them inductively, around a generic discrete value of the radius of the universe, or equivalently around a generic number size N. It therefore does not seem to require any assumption on the distribution of prime numbers, and it can be seen to correspond to a necessary and sufficient condition for the validity of the Riemann hypothesis on the zeros of the zeta function. This means that, through the geometric representation, we get a recipe for constructing a representation of the prime numbers at any size, with an approximation of their distribution equivalent to the one implied by the Riemann hypothesis. In principle, if the derivation of the geometric structures of the universe of codes can be proved to have no loopholes, and to not inadvertently make use of the Riemann hypothesis, this could also be used as a proof of the Riemann hypothesis itself. But the main point here is that, for our purposes, we do not need to pass through a formulation on the continuum (as the analytic continuation of the zeta function also is). What seems to me the most interesting point is that we obtain an approach to these problems, considered in either their mathematical or their physical aspects, which is based on a formulation on the discrete. In this perspective, for the purpose of obtaining certain properties of prime numbers, even the connections to the properties of the Riemann zeta function are somehow "accessory".
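As a numerical point of reference (standard number theory, not the geometric construction described above): the prime-counting function π(N) is approximated by the logarithmic integral Li(N), and under the Riemann hypothesis the error is of order √N·log N, which is the kind of regularity bound referred to. A short check, assuming the sympy library is available:

```python
# Compare the prime-counting function pi(N) with the logarithmic integral Li(N).
# Under the Riemann hypothesis the difference is O(sqrt(N) * log N), which is
# the scale printed in the last column for comparison.

import math
from sympy import primepi, li   # assumes sympy is available

for N in (10**3, 10**4, 10**5, 10**6):
    pi_N = int(primepi(N))
    li_N = float(li(N))
    bound = math.sqrt(N) * math.log(N)
    print(f"N={N:>8}: pi(N)={pi_N:>7}  Li(N)≈{li_N:10.1f}  "
          f"|diff|={abs(pi_N - li_N):8.1f}  sqrt(N)*log(N)≈{bound:9.1f}")
```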
This leads to a question, which is also a fundamental question of the entire physical construction introduced here: do we really need to think in terms of physics on the continuum? Is the continuum the natural environment in which physical problems have to be framed in their most fundamental formulation, or is it just a sometimes useful large-scale approximation of something which is basically (and best) defined on the discrete? In the first case, the discrete is ancillary to the continuum, and is used as an approximation enabling certain computations. In the second case, the perspective is reversed. This reversal of perspective is not just a formal trick. I will try to give here a hint of why I am persuaded of this.
Intuitively, one would think that the continuum "contains more" than the discrete: the latter represents only isolated points in the continuum. However, real (and therefore also complex) numbers are introduced through limiting procedures applied to the natural numbers. In these passages (limits, regularizations, etc.), information is lost. Think for instance of the fact that, when applied to symmetries, these processes increase entropy: a continuous transformation group has more points of symmetry than a discrete one (indeed, it has infinitely many). Coming back from the continuum to the discrete, and not just to a subset of it but, as in the case of the whole set of prime numbers, to the "generators" (by multiplication) of the whole set of natural numbers, then looks like an operation that is not uniquely determined, like pulling back a projection. If the passage to the continuum is performed keeping track of what is lost in the process, the continuum can be fruitfully used as a tool enabling certain computations (as we also did). If instead one starts from a problem already defined on the continuum, there is no recipe for selecting what to retain as useful to the solution, and what to discard because it no longer models a possible full formulation on the discrete. One may therefore ask whether it is the idea of viewing the continuum as fundamental that is inappropriate, and at the origin of some "impasses" of modern physics. In particular, this would also mean that the physical quantum models inspired by the Hilbert-Polya conjecture, constructed and investigated in the hope of providing insight into a proof of the Riemann hypothesis, suffer from a lack of predictive power because they are defined in a cut-off environment.