1998 was said to be “a very good year for cosmology”, because of the unexpected discovery that the expansion of our universe is accelerating. It also became clear that our universe contains about 4% ordinary matter, 23% Dark Matter (DM) and 73% Dark Energy (DE). However, the nature of DM and DE, as well as the cause of the accelerated expansion of our universe, are still unknown. We show here that these problems are related to one another and to elementary particle physics. The semi-empirical Standard Model itself calls for explanations and has to be completed, at least to account for DM particles.
This is possible by generalizing Relativistic Quantum Mechanics. The value a of the smallest measurable length is unknown, but the resulting theory of Space-Time Quantization (STQ) is logically consistent. Moreover, it is sufficient to justify the Standard Model and to complete it in regard to DM particles. Predicted properties of the cosmic DM gas are confirmed by astrophysical observations. Here, we examine other consequences, especially for the accelerated expansion of space and Big Bang processes.
The idea of an expanding universe arose from realizing that our universe should fatally collapse, since all masses attract one another. Einstein’s theory of General Relativity (GR) revealed that space itself would contract because of gravity. This resulted from the fact that a homogeneous and isotropic universe implies that all measured distances are proportional to the same scale factor. It could be a function of universal cosmic time. Since our universe seems to be stable, Einstein assumed that gravitational collapse is prevented by another force. He characterized its strength by the cosmological constant Λ. Lemaître realized that Einstein’s equation for the scale factor allowed for an expansion of our universe that starts at R = 0. Although it was then possible to set Λ = 0, Lemaître considered that this would be arbitrary. He treated Λ as a parameter of unknown value and showed that the expansion of space would eventually become accelerated when Λ > 0. This has now been established and suggests that the cosmological constant should be related to DE.
Since the underlying physics is not yet known, the discovery of the accelerated expansion of our universe was very important. It led in 2011 to the Nobel Prize in physics, but also to great perplexity. Where could the required energy come from? Does space contain energy or is it provided by some yet unknown substance? It would then have to be present everywhere in the whole universe and has been called quintessence. This refers to the ancient concept of a fifth element, but requires the existence of some hypothetical field and corresponding particles. Actually, there are various models and scenarios, providing equivalent descriptions, since any fluid that is evenly spread out in the whole universe leads to a cosmological constant. The pressure p of this fluid depends on its mass-energy density ρ, according to the equation of state p = wρ. Since accelerated expansion requires that ρ + 3p < 0, it would be necessary that w < −1/3. The “cosmological constant problem” seems then to be reduced to imagining a substance that has a great negative pressure. Actually, it is necessary to determine the real value of Λ by means of measurements and to explain it in a physically coherent way.
The cosmologist Michael Turner, who coined the term “dark energy” in 1998, stated in 2002 that it is “the causative agent of the current epoch of accelerating expansion”. Since p = 0 for matter and p = ρ/3 for photons, DE is “more energy-like than matter-like… The challenge is to understand it”. He considered that this problem is essential for future developments: “Dark energy is just possibly the most important problem in all physics… As a New Standard Cosmology emerges, a new set of questions arises: What is physics underlying inflation? What is the dark-matter particle? How was the baryon asymmetry produced?… What is the nature of the Dark Energy?… The big challenge for the New Cosmology is making sense of dark energy.”
The standard Lambda-Cold Dark Matter (ΛCDM) model assumes non-relativistic, collision-less DM particles. This yields p = 0. However, DM particles do interact with one another. To determine the nature and properties of DM particles and to explain how it is related to DE seems to be of fundamental importance. Actually, “both dark matter and dark energy require extensions of our current understanding of particle physics… The existence of nonbaryonic dark matter implies that there must be new physics beyond the standard model of particle physics.” It has even been stated that the scene may be set for a Kuhnian paradigm shift. The historical context suggests, indeed, that “a new theory will emerge, sooner or later” and that it “will radically change our vision of the world”. Since STQ generalizes present-day theories and determines essential properties of DM particles, it is necessary to explore its possible consequences in regard to DE and the accelerated expansion of space. We try to do that by expressing relevant ideas as simply as possible.
In Section 2, we begin with a review of the evolution of ideas concerning the expansion of space, to locate the basic problem before proposing a new solution. It replaces the conventional model of DM and DE by another one, based on the theory of STQ and the resulting properties of DM particles. They allow for fusion and fission processes. This accounts for the production of DE and the accelerated expansion of space. Section 3 recalls some basic results of STQ, to prepare Section 4. It relates the Big Bang to particle physics. Section 5 summarizes results and raises some new questions.
2. The Accelerated Expansion of Space
2.1. The Cosmological Constant
The basic problem results from the fact that masses can only attract one another. Newton postulated that these forces are everywhere identical in the whole universe. Their strength is determined by the constant G and they vanish for great separations of the interacting masses, but they are additive. The astronomer Hugo von Seeliger noted in 1895 that this leads to an inconsistency. It is reasonable, indeed, to assume that the average mass density ρ is everywhere identical in the whole universe. A very large sphere of radius r would thus contain a mass M = ρV, where V = (4π/3)r³. Any object of mass m that is situated on the surface of this sphere is then attracted towards its center by the force F = GmM/r² = (4π/3)Gρmr.
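Seeliger’s objection can be checked numerically. The following sketch (with an illustrative mean density; the numbers are assumptions, not measurements) shows that the force on a surface test mass grows linearly with the radius of the sphere, so it has no finite limit:

```python
import math

# Seeliger's inconsistency: with a uniform density rho, the Newtonian force on
# a test mass at the surface of a sphere of radius r is
#   F = G m M / r^2 = (4 pi / 3) G rho m r,
# which grows linearly with r.  The density value is illustrative.
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
rho = 1.0e-26      # assumed mean cosmic density (kg/m^3)
m = 1.0            # test mass (kg)

def force(r):
    M = 4.0 / 3.0 * math.pi * r**3 * rho   # mass enclosed in the sphere
    return G * m * M / r**2

# Doubling the radius doubles the force: no finite limit as r -> infinity.
assert abs(force(2.0e20) / force(1.0e20) - 2.0) < 1e-9
```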
Because of the large-scale homogeneity of our universe, the center of this sphere can be arbitrarily chosen. The whole universe should thus collapse. According to classical mechanics, this would even happen with respect to absolute space. However, Einstein realized in 1905 that space and time can only be defined in terms of possible results of measurement, since the velocity c is a universal constant. His theory of Special Relativity (SR) disclosed that mass and energy are equivalent. From now on, we thus consider that ρ defines a mass-energy density (c = 1). Einstein became also aware of the fact that gravitational forces can be replaced by accelerations of the chosen reference frame. His theory of General Relativity (GR) thus related gravitational forces to the local metric of space and time.
This theory was published in 1915 and in 1917, Einstein applied the new concept of gravity to the whole universe. Because of the cosmological principle, stating that our universe is everywhere identical at sufficiently large scales, the coupled equations of GR were reduced to a single one. It can be established in a more direct and conceptually simpler way, by considering again a very large sphere of volume V = (4π/3)r³. Its mass-energy content is M = ρV. According to SR, the test-mass, situated on the surface of this sphere, has some rest mass mo and because of Newtonian gravity, its energy is

E = mo + (mo/2)(dr/dt)² − GmoM/r (1)
We consider the case where the kinetic energy is small compared to the rest energy mo, since this is possible and will be sufficient. The negative potential energy accounts for gravitational attraction towards the center of the sphere. We set E = constant for every test-mass, to avoid a blow-up of the total energy of many test-masses. Although (1) implies Euclidean geometry, all distances r could be proportional to the same scale factor R(t). This can be justified by assuming that our 3-D space is the surface of an immense hypersphere in 4-D space. Any value of r is then proportional to the radius R of this sphere. It could even be a function of cosmic time t, which is everywhere identical in the whole universe. Since r ∝ R(t), we then get a differential equation for the scale factor. The value of mo is irrelevant, but M = (4π/3)ρR³. We thus get two equivalent relations:

(dR/dt)² = 2GM/R − K = (8πG/3)ρR² − K (2)
K is a constant. The theory of GR yields the same result, but K depends then on the curvature of space. Einstein considered a closed space of constant curvature, as for the surface of a hypersphere. K is then positive, but it would be negative for an open, hyperbolic space. The intermediate “flat” space yields Euclidean geometry and K = 0. Anyway, derivation of Equation (2) with respect to t will eliminate the constant K, but the resulting equation is then

d²R/dt² = −(4πG/3)(ρ + 3p)R (3)
Considering only ordinary matter (p = 0), distributed with the same mass-energy density ρ in the whole universe, we get M = (4π/3)ρR³. Since this mass remains constant when R(t) varies, Equation (3) is reduced to a simpler one, which resulted also from Einstein’s theory of GR:

d²R/dt² = −GM/R² (4)
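The step from Equation (2) to Equation (4) can be written out explicitly, assuming that M = (4π/3)ρR³ remains constant:

```latex
% Friedmann-type relation with constant M, and its time derivative:
\left(\frac{\mathrm{d}R}{\mathrm{d}t}\right)^{\!2} = \frac{2GM}{R} - K
\;\;\Longrightarrow\;\;
2\dot{R}\,\ddot{R} = -\frac{2GM}{R^{2}}\,\dot{R}
\;\;\Longrightarrow\;\;
\ddot{R} = -\frac{GM}{R^{2}}
```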
This means that even the new theory of gravity cannot prevent gravitational collapse. The acceleration is greater for small values of R, because of stronger forces. However, it is now attributed to variations of the scale factor instead of motions in absolute space. Since our universe seems to be stable, Einstein thought that something is missing. His conjecture was that

d²R/dt² = −GM/R² + (Λ/3)R (5)
The so-called “cosmological constant” Λ can be viewed as defining the strength of a repulsive force that would be opposed to gravity at cosmological scales. It should be noted that (5) results from (3), when we assume that ρ = ρm + ρv and that pv = −ρv, but Λ = 8πGρv. That would be very strange, since vacuum would not only correspond to some ether-like substance. Its mass-energy density ρv would even remain constant when space is expanding. Nevertheless, this assumption was implicit in Equation (5). Einstein circumvented the resulting physical problems, by assuming that dR/dt = 0 and d²R/dt² = 0, but this does not necessarily imply that our universe is stable (the equilibrium is actually unstable).
Willem de Sitter noted already in 1917 that variations of R(t) are not excluded, since Equation (5) would even allow for an exponential increase of R(t), when we assume that our universe is empty (M = 0) and that Λ is finite. Georges Lemaître recognized that the real universe allows also for an increasing function R(t) when we assume that its expansion did start at R = 0. It is then sufficient to assume some initial speed, to account for the present finite value of R. Since the first term on the right side of Equation (5) is predominant for small values of R, the initial expansion would be decelerated. However, the last term becomes predominant for large values of R. While Einstein assumed that both terms compensate one another, to get a static universe, it would now mean that the expansion of space will eventually get accelerated.
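Lemaître’s scenario can be illustrated with a short numerical integration of the expansion law d²R/dt² = −GM/R² + (Λ/3)R. The constants and initial conditions below are purely illustrative:

```python
# Numerical sketch of Lemaitre's expansion history, in dimensionless units:
#   d^2R/dt^2 = -GM/R^2 + (Lambda/3) R
# The first term dominates for small R (deceleration), the last one for
# large R (acceleration).  All constants are illustrative.
GM, Lam = 1.0, 0.3

def acc(R):
    return -GM / R**2 + (Lam / 3.0) * R

R, V, dt = 0.05, 8.0, 1e-4      # small initial radius, some initial speed
early = acc(R)                  # strong deceleration just after the start
for _ in range(200_000):        # semi-implicit Euler integration to t = 20
    V += acc(R) * dt
    R += V * dt
late = acc(R)                   # acceleration once R has become large

assert early < 0 < late         # decelerated first, accelerated later
```

The sign change of the acceleration occurs where GM/R² = (Λ/3)R, i.e. at R = (3GM/Λ)^(1/3).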
Lemaître published this theory in 1927 and republished it in 1931. He translated the text himself from French to English, but dropped some minor parts and added another text. Lemaître always insisted on the assumption that our universe is homogeneous, but his most brilliant idea was that our universe had a beginning, because of the “arrow of time”. Entropy is increasing, since disorder is more probable than order, and structuring yields even more degrees of freedom. Extrapolating backwards, there should have been “fewer and fewer quanta, until we find all the energy of the universe packed in a few or even a unique quantum”. This was a logical, but revolutionary deduction from observable facts.
In 1927, Lemaître had transmitted the initial paper to Einstein, before they met at the fifth Solvay Conference in Brussels. Einstein accepted the mathematical treatment of Equation (5), but rejected the idea of an expanding universe. He thought that such a bold interpretation of (5) is not plausible. He told Lemaître about similar work of Alexander Friedmann, who had studied mathematical physics and published in German. Friedmann also considered a function R(t), but was mainly interested in the mathematical consequences of GR for different curvatures of space. He solved Equation (2) in 1922 for a closed universe and in 1924, he considered the evolution of an open one.
Lemaître was unaware of this work in 1927. Since he had studied engineering, he was concerned with the real world and considered the problem of a possible variation of R(t) also in the context of thermodynamics. The initial state of our universe had then to be the simplest possible one. Lemaître could thus justify his assumption that the expansion started at R = 0, but he wanted also to know if our universe is really expanding. He knew that the speed of recession of stellar objects can be determined by measuring the red-shift of spectral lines. Vesto Slipher did this already in 1913 for the Andromeda nebula. Edwin Hubble evaluated the distances of neighboring nebulas by means of Cepheid variable stars and established in 1929 that the recession speed of 22 nebulas is proportional to their distance. Lemaître had predicted this relation two years earlier, but the observational confirmation was also essential. The discovery of the expansion of our universe resulted thus from independent, but complementary scientific research.
The term “Big Bang” was introduced by Hoyle in 1949, to ridicule this idea. He preferred a “steady state” cosmology to the concept of a universe that emerged from a single point and could even blow up when Λ > 0. Those who accepted the idea of an expansion of our universe considered that it was sufficient to set Λ = 0. Equation (4) and the same initial conditions would then yield a function R(t) that passes through a maximum and decreases until R = 0. Einstein thought also that the cosmological constant is not needed anymore, while Lemaître considered that instead of postulating that Λ = 0, we should treat the value of Λ as being unknown.
Today, we know that the Big Bang occurred about 13.8 billion years ago and that the accelerated expansion of space began about 5 billion years ago. Thus Λ > 0, but an accelerated expansion of space is baffling. It requires a yet unknown energy source. It may be related to the existence of DM and DE, but it is necessary to clarify what these terms really mean and how this might be possible. In such a situation, the first rational step is to describe what is known in terms of usual concepts.
2.2. The Conventional DM and DE Model
We can assume that DM and DE are substances that together have a mass-energy density ρx. It has thus to be added to the mass-energy density ρm of ordinary, baryonic matter. This implies that the mass M in Equation (3) is composed of two parts:

M = Mm + Mx, where Mm = (4π/3)ρmR³ and Mx = (4π/3)ρxR³ (6)
We have thus to determine the value of px. This seems to be simple, since energy conservation requires that dQ = dU + p dV. Added thermal energy dQ increases the internal energy U of any substance, which is enclosed in the volume V, but it could also do work by means of its pressure p. This relation applies to usual gases. The total number of molecules remains then constant, but their average kinetic energy (3kT/2) could be modified. Cosmic DM and DE, contained in a huge volume V, are not thermally isolated, but the inflow and the outflow of heat are balanced. Since dQ = 0, we expect that d(ρxV) = −px dV. It follows that dρx/dt = −3(ρx + px)(dR/dt)/R, where V ∝ R³. Equation (3) leads then to

d²R/dt² = −GMm/R² − (4πG/3)(ρx + 3px)R (7)
This is equivalent to (5) when the cosmological constant

Λ = −4πG(ρx + 3px) (8)
Since Λ > 0, it would be necessary that cosmic DM and DE have everywhere a sufficiently great negative pressure, so that ρx + 3px < 0. This leaves room for many speculative propositions, but we want to find out if the concept of DM particles, which results from STQ, could be helpful to understand the enigmatic accelerated expansion of space.
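The reasoning of this subsection can be condensed into one chain of relations, where ρx and px denote the mass-energy density and pressure attributed to DM and DE together:

```latex
% Energy conservation for the DM-DE fluid in a comoving volume V \propto R^3,
% and the resulting effective cosmological constant:
\mathrm{d}(\rho_x V) = -\,p_x\,\mathrm{d}V
\;\;\Longrightarrow\;\;
\dot{\rho}_x = -\,3\,\frac{\dot{R}}{R}\left(\rho_x + p_x\right),
\qquad
\Lambda = -\,4\pi G\left(\rho_x + 3p_x\right) > 0
\;\;\Longleftrightarrow\;\;
\rho_x + 3p_x < 0
```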
We showed that DM particles interact with one another by exchanging N2 bosons and that this usually leads to elastic scattering. The cosmic DM gas behaves then like a usual molecular gas. The actual nature and mass of DM particles are irrelevant here. Their average kinetic energy would be 3kT/2 (since classical Maxwell-Boltzmann statistics apply also to fermions and bosons at low densities). The cosmic DM gas has thus a pressure p = nkT, where n is the average density of DM particles. The mass-energy density ρ = nm, where m is the average mass of DM particles. It follows that p = wρ, where w = kT/m. These concepts are confirmed by astrophysical observations, which proved that the cosmic DM gas is cold. We mentioned in the introduction that the ΛCDM model implies that p = 0, but DM particles interact with one another. Recent cosmological measurements implied that px ≈ −ρx in (8). The local value of the Hubble constant, measured with improved precision, revealed also that the universe is expanding 5% to 9% faster than expected.
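As a sanity check, the equation-of-state parameter w = kT/m (restoring c, w = kT/mc²) is indeed minute for any plausible cold gas. The particle mass and temperature below are hypothetical placeholders:

```python
# Order-of-magnitude check: w = kT/(m c^2) for a cold DM gas obeying
# p = n k T and rho = n m.  Mass and temperature are hypothetical placeholders.
k = 1.380649e-23          # Boltzmann constant (J/K)
c = 2.99792458e8          # speed of light (m/s)
m = 1.67e-27              # assumed DM particle mass (kg), proton-like scale
T = 10.0                  # assumed gas temperature (K)

w = k * T / (m * c**2)    # dimensionless equation-of-state parameter
assert w < 1e-10          # pressure is negligible: the gas behaves as "cold"
```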
2.3. The Model of Adaptive DM and DE
Since w = kT/m is extremely small for the cold DM gas, the previous relation could be replaced for the combined DM-DE fluid by

px = −ρx (9)
This may seem to be unbelievable, since it requires that DM and DE have the capacity to keep their mass-energy density ρx at a constant level, even when space is expanding. Nevertheless, this possibility has to be considered. It would imply that dρx/dt = 0. Replacing px by −ρx in (8), we get

Λ = 8πGρx (10)
This yields Λ > 0 even when space is expanding, but is it possible to prove that ρx remains constant, which seems to be an extraordinary claim? It follows from STQ that DM particles also allow for fusion and fission processes. Fusion liberates energy, while fission requires energy. We have thus to examine the properties of the common mass-energy density ρx. To focus our attention on the essential mechanism, we consider the particular case where the cosmic DM gas contains only two types of particles. Those of mass m are present with a density n, but they can be fused together to constitute compound DM particles of mass m'. The mass defect results from the binding energy Eb, but we set m' = 2m − Eb. It is convenient to define a dimensionless binding parameter b = Eb/2m. Thus m' = 2m(1 − b). The density of these compound particles is n', but they can be split, to yield again particles of mass m. Fusion requires the encounter of two particles. The probability is then proportional to n², while fission of the particles of mass m' can be spontaneous. This yields the rate equation

dn'/dt = αn² − βn' (11)
The density n would remain constant when αn² = βn'. To determine the actual values of n and n', we need a second relation. It results from the fact that the total number of DM particles in the volume V is N = (n + n')V and their total mass is M = (nm + n'm')V. The ratio M/N defines the average mass, which is independent of V. Constancy of the average mass implies that the quotient q = n'/n is also constant. It follows that n' = qn. Equilibrium is thus possible for this system. Figure 1 represents it by means of transitions between energy states, defined by the masses m and m' of the two types of particles.
To verify if this equilibrium is stable, we consider a local perturbation for constant values of the parameters α and β, as well as the total density of constituent particles. Thus,

n(t) = n1 + δ(t) (12)
The rate Equation (11) is then reduced to

dδ/dt = −(4αn1 + β)δ (13)
Setting 1/τ = 4αn1 + β, we get dδ/dt = −δ/τ and δ(t) = Ae^(−t/τ). Thus,

n(t) = n1 + Ae^(−t/τ) (14)
The constant A is determined by the initial value n(0), which could be greater or smaller than n1, but equilibrium would always be restored. It will be reached more rapidly when the fission rate β is great. It could never be reached, of course, if fusion were irreversible (β = 0). The cosmic DM gas is in a state of homeostasis. It constitutes an adaptive system, where DM particles produce DE in such a way that the density n of unfused particles and the density n' of fused particles remain constant. The common mass-energy density ρx is invariant when space is expanding. Even the mass-energy density ρDM = nm[1 + 2q(1 − b)] of DM alone and the mass-energy density ρDE = 2qbnm of the total liberated DE remain constant. The ratio
Figure 1. Fusion and fission processes of DM particles yield an equilibrium that accounts for the accelerated expansion of space.
and the total mass-energy density are also constants:

ρDE/ρDM = 2qb/[1 + 2q(1 − b)] (15) and ρx = ρDM + ρDE = nm(1 + 2q) (16)
According to reported values of ρDE and ρDM, the remarkable ratio ρDE/ρDM ≈ 73/23 ≈ 3.2. This would yield values between 2.7 and 3.4. Setting ρDE/ρDM ≈ 3.2 in (15), it follows from (16) that ρDM/ρx = [1 + 2q(1 − b)]/(1 + 2q), while ρDE/ρx = 2qb/(1 + 2q). Since q and b have to be positive and b ≤ 1, it is necessary that q ≥ 1.6. When q = 2, for instance, we get b ≈ 0.95. It follows that ρDM/ρx ≈ 0.24 and ρDE/ρx ≈ 0.76. The 2015 results of the Planck measurements of the cosmic background would imply that ρDE/ρDM ≈ 2.7. This allows for q = 2, which implies that b ≈ 0.91 and ρDM/ρx ≈ 0.27. According to the proposed theory and cosmological observations, there are thus more fused than unfused DM particles. The quotient q is of the order of 2 or 3.
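The arithmetic of this paragraph can be verified directly. The relations below are the ones written above for the ratio ρDE/ρDM; the chosen value q = 2 is the same assumption as in the text:

```python
# Check of the relations between the DE/DM ratio, the fused fraction
# q = n'/n and the binding parameter b (the value q = 2 is an assumption
# used in the text, not a measured quantity).
def ratio(q, b):
    """rho_DE / rho_DM = 2 q b / (1 + 2 q (1 - b))."""
    return 2.0 * q * b / (1.0 + 2.0 * q * (1.0 - b))

def b_from(r, q):
    """Invert ratio(q, b) = r for b:  b = r (1 + 2 q) / (2 q (1 + r))."""
    return r * (1.0 + 2.0 * q) / (2.0 * q * (1.0 + r))

b = b_from(73.0 / 23.0, 2.0)             # observed ratio ~ 3.2, assumed q = 2
assert abs(ratio(2.0, b) - 73.0 / 23.0) < 1e-12
assert 0.9 < b < 1.0                     # high binding parameter, as stated
# DM fraction of the total:  (1 + 2q(1 - b)) / (1 + 2q)
f_dm = (1.0 + 4.0 * (1.0 - b)) / 5.0
assert 0.2 < f_dm < 0.3                  # close to the observed ~23%
```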
It also appears that fusion of DM particles would be characterized by a high binding energy, so that b ≈ 1. This is unaccustomed, but not impossible. Great values of b facilitate the liberation of the enormous amount of energy that is required to allow for the accelerated expansion of space. Equilibrium would even be reestablished more rapidly if it were disturbed somewhere. Nevertheless, it is necessary that fused DM particles can be broken up, spontaneously or by collisions. This could result from the excitation of collective oscillations of neutralons inside the compound DM particles.
The accelerated expansion of space is due to the adaptability of the cosmic DM gas. When the volume V increases, fusion and fission processes continue to equilibrate one another by producing more DM and more DE. Since DM particles are electrically neutral, they cannot produce photons and they are not heated or cooled by contact with ordinary matter. The invisible cosmic DM gas is thus isothermal in the whole universe. Even when space is expanding, its density and temperature are regulated everywhere by mutually controlled fusion and fission processes. Thermal agitation leads to pressure effects. They are important for the constitution of DM atmospheres, but nearly negligible for the accelerated expansion of space. By measuring Λ, we could determine the average mass-energy density ρx.
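The claimed homeostasis can be illustrated numerically. The sketch below integrates a rate equation of the form dn'/dt = αn² − βn' with a locally conserved number of constituents; all rate constants are illustrative, not fitted:

```python
# Illustration of the homeostasis mechanism: fused (compound) particles have
# density n_c, unfused ones have density n, and the number of constituents
# n + 2 n_c is locally conserved.  All constants are illustrative.
alpha, beta, c = 1.0, 2.0, 10.0

def relax(n0, dt=1e-3, steps=50_000):
    """Integrate dn/dt = -2 alpha n^2 + 2 beta n_c with n_c = (c - n)/2."""
    n = n0
    for _ in range(steps):
        n_c = (c - n) / 2.0
        n += (-2.0 * alpha * n**2 + 2.0 * beta * n_c) * dt
    return n

# Equilibrium: alpha n^2 = beta (c - n)/2, i.e. n^2 + n - 10 = 0 here.
n_eq = (-1.0 + (1.0 + 4.0 * 10.0) ** 0.5) / 2.0
assert abs(relax(1.0) - n_eq) < 1e-6   # perturbed below equilibrium
assert abs(relax(5.0) - n_eq) < 1e-6   # perturbed above equilibrium
```

Whatever the sign of the initial perturbation, the densities relax back to the same equilibrium, which is the stability property derived above.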
Einstein’s conjecture (5) was very remarkable, since the existence of DM and DE was totally unknown. The conjecture that the scale factor had to be finite for the present universe was correct, but it is not constant. It has been stated and often repeated that Einstein told Gamow that the introduction of the cosmological constant was his “biggest blunder”. This is not sure anymore and may result from a misunderstanding. Indeed, Einstein could only regret that he did not realize himself that our universe might be expanding, even when Λ = 0. However, this required the additional idea that cosmic expansion could start at R = 0. Maybe, Einstein did not consider this possibility because of (5). Anyway, it is interesting to know that Lemaître was fully aware of the physical meaning of the cosmological constant. He wrote: “Everything happens as though the energy in vacuo would be different from zero”. He could not explain how apparently empty space might produce energy, but he did not simply believe that this is impossible.
3. Space-Time Quantization
3.1. The Positive Energy Content of Our Universe
To prepare the following chapter, we present some essential consequences of the theory of STQ in a short and different way. The basic idea was that Nature could impose a third restriction, in addition to those which led to the development of relativity and quantum mechanics. In a nutshell, they are summarized by Einstein’s energy-momentum relation and de Broglie’s redefinition of p and E:

E² = p² + mo² (c = 1) and p = h/λ, E = hν (17)
The function E(p) applies to free particles in any inertial reference frame. It depends on the rest-mass mo of these particles. Louis de Broglie discovered that every particle has an “associated wave”. It is its wave function, which allows us to express knowledge. It defines the probability distribution for possible positions in space, but provides also information about motions in terms of possible values of p and E. Relativistic Quantum Mechanics (RQM) combines the relations (17) and accounts thus for c and h, but it is assumed that the wavelength λ could be infinitely small. This is equivalent to believing that the energy E and the momentum p could have arbitrarily high values. If there did exist a finite limit a for the smallest measurable distance, we would have to accept that λ ≥ 2a.
The value of a is thus determined by the highest possible momentum p = h/2a. Because of (17), it would be obtained when λ = 2a and when the energy E has the highest possible value. This requires a photon whose energy cannot be increased anymore. It would thus have to be equal to the total (positive) energy content Eu of the whole universe. Its value would be Eu = hc/2a. Although it is gigantic, it could be finite. Einstein’s energy-momentum relation (17) has to be generalized when λ approaches 2a, but it yields also E = Eu when p = h/2a. STQ confirms that energies E ≤ Eu, even for material particles.
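The argument can be stated compactly, assuming as above that 2a is the smallest measurable wavelength:

```latex
% Smallest wavelength => largest momentum and a finite quantum of length:
\lambda \ge 2a
\;\;\Longrightarrow\;\;
p = \frac{h}{\lambda} \le \frac{h}{2a},
\qquad
E_u = \frac{hc}{2a}
\;\;\Longrightarrow\;\;
a = \frac{hc}{2E_u}
```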
3.2. All Possible Elementary Particles
They can be distinguished from one another by means of their wave functions when the quantum of length a is finite. Indeed, there are two sets of possible results when the coordinate x is precisely measured along a given reference axis: x = 0, ±a, ±2a, … and x = ±a/2, ±3a/2, … The “normal lattice” contains the origin, but a symmetrically intercalated lattice is also possible, since the orientation of the x-axis is arbitrary. The wave function ψ has to be defined for all values of x, but it can have the same or opposite signs on the intercalated lattice with respect to the normal lattice. Although the functions are different, the probability distribution will be unaffected. There are even more degrees of freedom, since ψ can be multiplied everywhere on the intercalated lattice by a “sign-function”. It corresponds to a vector of magnitude 1 that can be rotated by an integer number of half turns towards the left or the right, to yield the values ±1 for the sign-function.
Such a modulation of wave functions is possible for any reference frame in our four-dimensional space-time. This yields four quantum numbers. Each one of them can be a positive or negative integer number, but they are everywhere identical for particles of a given type. Every pattern of possible modulations at the smallest possible scale in space and time defines a “particle state”, while large-scale variations define possible “states of motion”. Both are compatible, by multiplexing: small and large-scale variations can be combined.
The electric charge is always determined (in units e) by the u-quantum numbers. The Standard Model of elementary particles accounts for three generations of quarks and leptons. They display the same family structure, in terms of triplets of u-quantum numbers. States of one type correspond to up-quarks, while states of another type correspond to down-quarks. In both cases, there are 3 possible permutations, defining different color states (R, G or B). The electron is an elementary particle that corresponds to a state of yet another type. Antiparticles are characterized by opposite signs for all quantum numbers.
The Standard Model is not complete, since the u-quantum numbers are not only equal to 0 or ±1, but can also be equal to ±2, for instance. Moreover, there are states with 6 possible permutations and two more states. This octet defines particles and antiparticles of charge 0. They are elementary DM particles. Since they behave like neutral quarks, we called them “narks”. They are the supersymmetric partners of gluons. Supersymmetry results from the fact that the z-component of the spin vector along a given z-axis is defined by large-scale angular variations of wave functions around this axis. These variations are independent of small-scale variations, defined by u-quantum numbers. Every particle state for fermions thus corresponds to a state for bosons and vice-versa.
3.3. Conservation Laws for Possible Transformations
Elementary particles can be transformed into one another by means of annihilation and creation processes. However, the sum of the u-quantum numbers has to be conserved for every one of the four space-time axes, for bosons as well as for fermions. This accounts for the fact that a quark can change its color by creating or annihilating a gluon. Narks can also create or annihilate gluons. Narks and quarks are thus particles that are subject to strong interactions. These yield attractive forces that can lead to scattering or binding.
All compound particles have to be “color neutral”. This means that the three spatial reference axes have to be involved with the same probability. Nucleons are thus constituted of 3 quarks in R, G and B color states. Narks can constitute a greater variety of compound particles. We called them “neutralons”, since they are electrically neutral. Nucleons interact with one another by exchanging π mesons, while neutralons interact with one another by exchanging N2 bosons. The cosmic DM gas is composed of neutralons and compound neutral particles. They interact most frequently by elastic scattering, which leads to pressure effects and explains astrophysical observations. DM particles also allow for fusion and fission processes. They are important for cosmology, but STQ has also other consequences.
4. Big Bang Processes
4.1. The Primeval Photon
Georges Lemaître justified the idea of an expansion of space, starting at R = 0, by considering the thermodynamic “arrow of time”. Since the initial state of our universe should be the simplest possible one, it would correspond to a unique quantum. He called it the “primeval atom”, which meant only that it should be an elementary particle. Is it one among those which are possible according to STQ? The best candidate would then be a photon. Its energy-momentum relation is reduced to E = p, but a unique “primeval photon” had the highest possible (positive) energy. Since STQ requires only that the quantum of length a is finite, to account for all possible elementary particles, its value could be a function of cosmic time. Its initial value would thus determine the energy of the primeval photon. According to quantum mechanics, it is not possible to specify its position with absolute precision in our 3-D space. However, the probability distribution could be uniformly distributed over the surface of the smallest possible hypersphere. This means that its radius Ro had the smallest possible value.
The primeval photon had even to be in a quantum mechanical state where all orientations of its momentum vector were equally probable. Their magnitude was defined by p = h/λ and E = p, where the wavelength λ was determined by periodic boundary conditions. The available 3-D space was reduced, indeed, to the surface of the hypersphere of radius Ro. The wavelength λ was thus equal to the length 2πRo of any great circle of the hypersphere. All possible waves were propagating there with equal amplitude in any direction and the average value of all momenta p was zero, although E > 0. The energy E of the primeval photon was equivalent to a mass M, since

E = M (c = 1) (18)
The primeval photon was confined by gravity on the surface of the hypersphere of radius Ro. Indeed, the distributed mass M was attracted by the equivalent mass M, situated at the center of the hypersphere. The total energy of the primeval photon was thus

E − GM²/Ro = 0 (19)
By combining (18) and (19), we see that the initial radius of the hypersphere was equal to the Planck length, according to its usual definition:

Ro = √(ħG/c³) ≈ 1.6 × 10⁻³⁵ m (20)
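Equation (20) is easy to reproduce from CODATA values of the constants, together with the associated Planck mass and energy:

```python
import math

# Reproducing Equation (20): the Planck length from CODATA constants.
hbar = 1.054571817e-34   # reduced Planck constant (J s)
G = 6.67430e-11          # gravitational constant (m^3 kg^-1 s^-2)
c = 2.99792458e8         # speed of light (m/s)

l_P = math.sqrt(hbar * G / c**3)           # Planck length (m)
m_P = math.sqrt(hbar * c / G)              # Planck mass (kg)
E_P_GeV = m_P * c**2 / 1.602176634e-10     # Planck energy in GeV

assert abs(l_P - 1.616e-35) < 0.01e-35     # ~1.6 x 10^-35 m
assert abs(m_P - 2.18e-8) < 0.01e-8        # ~2.2 x 10^-8 kg
assert 1.1e19 < E_P_GeV < 1.3e19           # ~1.2 x 10^19 GeV
```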
This yields M = √(ħc/G) ≈ 2.2 × 10⁻⁸ kg and E ≈ 1.2 × 10¹⁹ GeV. Since our universe was in a zero-energy state, it is reasonable to assume that it arose from vacuum fluctuations. The resulting state was stabilized, but gave then rise to an amazing sequence of transformations and a highly astonishing evolution at different levels of complexification. It began with the conversion of the primeval photon into many material elementary particles, according to the conservation law for u-quantum numbers. These particles interacted with one another, which led to the formation of compound particles in negative energy states. The total amount of negative energy increased, as well as the total amount of positive energy, but energy conservation implies that our universe is still in a zero-energy state.
4.2. The Matter-Antimatter Asymmetry
According to STQ, any particle state for a fermion implies the existence of an antiparticle state. Nevertheless, there are no (or nearly no) antiparticles in our universe. It is customary to assume that the Big Bang created particles and antiparticles in equal proportions, but that all (or nearly all) antiparticles were annihilated at some early stage. We propose another explanation. Pair production of particles and antiparticles is not required, since the conservation law for u-quantum numbers requires merely that their sum is unchanged for every one of the four space-time axes. For the first generation of elementary particles, we could thus get, for instance, the creation of (udd) and (uud) quark triplets, accompanied by leptons.
This would correspond, respectively, to the production of a neutron, combining (udd) quarks in R, G and B states, and the production of a proton, where (uud) quarks are in R, G and B states. We add an electron and an electron neutrino, to preserve the electric charge and to account for weak interactions. There are many other possible primary or secondary transformations, but they had to create compatible fermions. This means that their fields could “coexist” in the extremely compact initial 3-D space. Particles did not appear where they could annihilate one another. Pair production is possible today, since the fields of the resulting particle and its antiparticle can rapidly be separated from one another, but this was not possible on the initial hypersphere.
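The charge bookkeeping for this transformation can be checked directly. The u-quantum numbers themselves are not reproduced in this excerpt, so the sketch below (our own illustrative consistency check, not the STQ conservation law) only verifies that the ordinary electric charges of the (udd) neutron, the (uud) proton, the electron and the neutrino sum to zero, as required if they all arose from the neutral primeval photon:

```python
from fractions import Fraction as F

# Electric charges in units of e (standard quark and lepton values)
charge = {"u": F(2, 3), "d": F(-1, 3), "e-": F(-1), "nu_e": F(0)}

neutron = ["u", "d", "d"]   # (udd), one quark per R, G, B color state
proton = ["u", "u", "d"]    # (uud), one quark per R, G, B color state

def total_charge(particles):
    return sum(charge[p] for p in particles)

# Neutron + proton + electron + neutrino: the net electric charge
# must vanish, like that of the neutral primeval photon.
Q = total_charge(neutron) + total_charge(proton) + charge["e-"] + charge["nu_e"]
print("net electric charge:", Q)   # -> 0
```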
Elementary particles of the first generation were most frequently produced, since they have lower energies. They had also the greatest chance to survive. Moreover, quarks in R, G and B color states had to be created with equal probabilities, since the 3 spatial reference axes are physically equivalent. DM particles could not be directly created by the primeval photon, but any quark can create a gluon, by changing color. This gluon is annihilated by other quarks or creates two narks in other color states. This allowed for the creation of a compatible ensemble of narks, participating in the general particle-antiparticle asymmetry  .
We can also explain why the Big Bang “opted” for particles instead of antiparticles. Their production would have been equally probable, but that does not imply simultaneity. Once the primeval photon started to be transformed into so-called “particles”, this process was immediately amplified by stimulated emission. Einstein discovered this process in 1917, when he tried to explain Planck’s law for the frequency distribution of black-body radiation. Emission of photons can be spontaneous, but it can also be stimulated by the presence of other photons of the appropriate type. In quantum mechanics, this is justified by means of great intensities of quantized fields and would also apply to material particles.
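The amplification argument can be illustrated with a toy rate model (our own sketch, not part of the original theory): both species are produced spontaneously at the same rate s, while stimulated production adds a term proportional to the population already present (rate constant b). A head start of a single quantum for “particles” is then amplified into a large asymmetry:

```python
# Toy rate model: dN/dt = s + b*N for each species, integrated with
# forward Euler. Parameter values are arbitrary and only illustrative.
s, b, dt = 0.01, 5.0, 1e-3
N, Nbar = 1.0, 0.0           # "particles" got a head start of one quantum

for _ in range(2000):        # integrate to t = 2 (arbitrary units)
    N += (s + b * N) * dt
    Nbar += (s + b * Nbar) * dt

print(f"N/Nbar = {N / Nbar:.0f}")   # particles dominate by a factor of ~500
```

Both populations grow exponentially, but the ratio N/Nbar saturates near (N₀ + s/b)/(s/b), so the initial fluctuation, not the production rates, decides which species dominates.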
4.3. The Initial Inflation
The Big Bang was triggered by the transformation of the primeval photon into quarks and narks. Since they are spin-1/2 fermions, they could not coexist close to one another in identical states. They darted away, but continued to interact by exchanging gluons. Even very brief binding implied negative energy states. The initial “soup” of interacting particles thus got very hot, and the radius of the hypersphere grew extremely fast, in gigantic proportions. Eventually, all available quarks were definitively bound to one another inside nucleons and all narks inside neutralons. Because of the enormous frequency of these fusion processes and the very great binding energy for strong interactions, the resulting “inflation” of space was gigantic and extremely rapid. However, it had to stop when all available elementary particles were definitively bound to one another. Fusion processes became irreversible, but during a very short period, the positive energy content of our universe and the scale factor grew extremely fast.
Science progresses by successive approximations. Lemaître solved Equation (5) and developed the concept of an expanding universe by adopting the simplest possible hypothesis. He also assumed that the expansion began in a linear way, which defines an initial speed of expansion. Alan Guth improved this theory, since he strongly felt that the homogeneity and isotropy of today’s extremely vast universe had to be explained. A very rapid and enormous increase of the scale factor during the Big Bang would be sufficient. This inflation can now be justified in terms of Big Bang processes.
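Equation (5) is not reproduced in this excerpt; in standard notation, the expansion equation for a flat universe with matter and a cosmological constant is the Friedmann equation (Ṙ/R)² = (8πG/3)ρ_m + Λc²/3. A minimal numerical sketch (in units where 8πG/3 = c = 1; parameter values are arbitrary) shows the generic behavior the text describes: deceleration while matter dominates, acceleration once the Λ term takes over:

```python
import math

# Flat Friedmann model in units with 8*pi*G/3 = c = 1:
#   (Rdot/R)^2 = rho_m0 / R**3 + Lam / 3
rho_m0, Lam = 1.0, 0.7       # arbitrary illustrative values
dt = 1e-3
R = 0.05                     # small initial scale factor
history = [R]
for _ in range(20000):
    H = math.sqrt(rho_m0 / R**3 + Lam / 3.0)   # Hubble rate H = Rdot/R
    R += H * R * dt                            # forward-Euler step
    history.append(R)

# The sign of the second finite difference tracks the sign of Rddot
def accel(i):
    return history[i + 1] - 2 * history[i] + history[i - 1]

print("early curvature:", accel(10))     # negative while matter dominates
print("late curvature:", accel(19000))   # positive once Lambda dominates
```

The crossover occurs when Λ/3 exceeds half the diluting matter density ρ_m0/R³, after which the scale factor grows exponentially.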
Initially, we only wanted to find out whether space and time are continuous or not. It appeared that Space-Time Quantization (STQ) is possible and accounts for elementary particle physics. This theory also applies to DM particles, and we showed here that they account for the accelerated expansion of space. This results from fusion and fission of DM particles, which liberate and require energy in such a way that the common mass-energy density remains constant, even when space is expanding.
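For comparison, the conventional argument connecting a constant energy density to Λ runs through the first law of thermodynamics; it is this negative-pressure bookkeeping that the present theory replaces with fusion and fission of DM particles as the energy supply:

```latex
dE = -p\,dV, \qquad E = \rho c^2 V, \quad \rho = \text{const}
\;\Longrightarrow\; \rho c^2\,dV = -p\,dV
\;\Longrightarrow\; p = -\rho c^2 .
```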
The conventional model called for a strong negative pressure. It cannot be justified by elastic collisions of DM particles and is not necessary to account for the cosmological constant Λ and the resulting accelerated expansion of space. It is also possible to account for Big Bang processes and to prove that the initial value of the quantum of length a was determined by the initial radius of the 4-D hypersphere. It appeared that a is the Planck length and that the smallest possible wavelength was λ = 2πa.
It should be noted that the reality of STQ is justified by many remarkable empirical observations, summarized by the Standard Model of elementary particle physics. STQ also leads to the concept of a pressure for the cosmic DM gas, which is confirmed by astrophysical observations. STQ even accounts for cosmological processes, but many new questions are emerging. It is necessary, for instance, to clarify the mechanism of fission and fusion processes. Observations of the evolution of Λ and of the masses of DM particles in galactic halos may be helpful. This will eventually lead to the development of DM physics, partially similar to nuclear physics. At least, DM-electron interactions have already allowed for direct detection of DM particles, but we are only beginning to unravel the mysteries of the dark sector of our universe.
Instead of worrying about the ultimate fate of our universe, we have to realize that its accelerated expansion cannot continue forever. This would imply an unphysical divergence, which can only result from an approximation. Actually, the accelerated expansion cannot exceed the capacity of DM to generate more DM. It depends on the transition probabilities α and β, as well as the parameter q. Since fusion of DM particles constitutes an energy source, at least in the form of creating more DM particles, we may wonder whether it can be harnessed. At first sight, this seems to be impossible, since they are electrically neutral, but the creation of hybrid particles is not excluded. Could much older ET civilizations be using this energy source for interstellar space travel? Anyway, the essential conclusion of this article is that the validity of STQ is confirmed by applying it to cosmology. The intimate connection between cosmology and elementary particle physics is strengthened and invites challenging research.