The ultimate objective of physics is to describe the world in which we live, and this is carried out through the construction of models (theories) which codify the phenomena under investigation. Despite the complexity of the subject matter, its structures are captured by a set of (relatively) simple formulae, thereby confirming that the conceptual scaffolding of physics is self-consistent. A model is, moreover, accompanied by an interpretation, which explains the physical meaning of a theory and establishes the correspondence between mathematical symbolism and reality. More often than not, however, the success of a particular model has been confused with ultimate reality, thus rendering the scientific process illogical. It is for this reason that we have learned that each theory must be coupled with a domain of validity. In this sense, the development of a model is a process of refinement, which enlarges and sharpens its horizon.
In the history of physics, a clear separation is drawn between classical physics (the Newton-Maxwell formalism) and modern physics (relativity and quantum mechanics). However, no separation has been drawn between physics before and after the quantisation (discretisation) introduced by the Planck constant h, and after the formulation of zero-point energy: the irreducible energy that every physical system retains at absolute zero, i.e. when it is thought that all fluctuation at the microscopic level ceases. According to modern physics, the zero-point energy is not an intrinsic property of the universe; rather, it is a consequence of the Heisenberg uncertainty principle. W. Nernst was of a different mind: for him, the zero-point energy is an intrinsic peculiarity of the universe. Therefore, although quantum mechanics "is the best theory we have" and has already been around for a century, we cannot overlook the physical nature of zero-point energy.
In the middle of the last century, some bold physicists (among others T.W. Marshall and T.H. Boyer) developed Stochastic Electrodynamics, which takes Maxwell's electrodynamics and adds zero-point energy as a boundary condition. This theory, which has shown interesting developments, obtaining major results previously held only by Quantum Electrodynamics, is especially remarkable for its change in perspective. The inclusion of zero-point energy opens up new scenarios for classical physics, expanding its horizons.
2. Thermodynamic Notions
2.1. Entropy
Entropy is a state function that measures the capacity of a physical system to exchange energy. Every system subject to transformations tends to change so as to increase its entropy or, at least, not to decrease it (maximum entropy principle).
The first definition of entropy is linked to thermodynamics and was introduced by Clausius in 1864. Suppose a physical system exchanges a quantity of energy $\delta Q$ at (absolute) temperature T; the variation of entropy is then defined as the ratio between the exchanged energy and the temperature at which the exchange takes place, referred to the unit of mass or to one mole:

$$ \mathrm{d}S \geq \frac{\delta Q}{T} \qquad (1) $$

where, classically, the equality sign is valid for reversible processes.
Similarly to energy, entropy cannot be classically calculated in absolute terms, because it is defined up to an additive constant, which we cannot calculate. This, however, will not cause major difficulties, as the main interest lies in the variation of entropy during changes of state of a thermodynamic system.
2.2. State Equation
In 1834, Clapeyron formulated the state equation of ideal gases, synthesising the empirical laws of Avogadro, Boyle, Charles and Gay-Lussac. In its simplest form, the equation is written as

$$ pV = nRT \qquad (2) $$
where p is pressure, V is volume, n is the number of moles, R is the gas constant and T the absolute temperature. This equation refers to a hypothetical ideal gas, which consists of non-interacting point particles, where the only form of energy which is present is the kinetic one of those same particles that constitute the gas.
The gas constant can be expressed in the form

$$ R = N_A\, k_B \qquad (3) $$

where $k_B$ is the Boltzmann constant and the Avogadro number $N_A$ stands for the number of entities contained in one mole, and the following relation holds:

$$ N = n\, N_A \qquad (4) $$

where N is the total number of entities in the actual system.
3. Medium Planck
In the previous article it was shown how the Planck particle is characterised by the natural units formulated by Planck, and how it constitutes the medium Planck. Consider the following relations for the medium Planck:

$$ k_B\, T_P \qquad (5) $$

where $k_B$ is the Boltzmann constant and $T_P$ is the Planck temperature;

$$ m_P\, c^2 \qquad (6) $$

where $m_P$ is the Planck mass and c is the speed of electromagnetic waves in vacuum;

$$ \hbar\, \omega_P = h\, \nu_P \qquad (7) $$

where $\hbar$ is the reduced Planck constant ($\hbar = h/2\pi$), $\omega_P$ is the Planck angular frequency and $\nu_P$ is the Planck frequency.
As shown, the expressions (5), (6) and (7) are equal to one another and equal to the Planck energy

$$ E_P = \sqrt{\frac{\hbar\, c^5}{G}} \qquad (8) $$
Therefore, they can be unified in a single relation, both quantitatively and qualitatively,

$$ k_B\, T_P = m_P\, c^2 = \hbar\, \omega_P = h\, \nu_P = E_P \qquad (9) $$

thus bearing witness to the energetic unification of the medium Planck.
The three aspects in (9) are clearly evident. The vacuum, the domain of the medium Planck, contains all the energetic forms, which are only distinguishable through the application of the particular theory that captures each specific character (thermal, particle, wave).
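The energetic unification expressed by (9) can be checked numerically. The sketch below is illustrative: it assumes CODATA values for c, G, ħ and $k_B$, derives the Planck quantities from the three fundamental constants, and verifies that the thermal, particle and wave forms all give the same Planck energy.

```python
# Numerical check of the energetic unification (9): k_B*T_P, m_P*c^2,
# hbar*omega_P and h*nu_P should all equal the Planck energy E_P.
import math

c    = 2.99792458e8        # speed of light, m/s
G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34     # reduced Planck constant, J s
h    = 2 * math.pi * hbar  # Planck constant, J s
k_B  = 1.380649e-23        # Boltzmann constant, J/K

m_P     = math.sqrt(hbar * c / G)     # Planck mass, kg
l_P     = math.sqrt(hbar * G / c**3)  # Planck length, m
T_P     = m_P * c**2 / k_B            # Planck temperature, K
omega_P = c / l_P                     # Planck angular frequency, rad/s
nu_P    = omega_P / (2 * math.pi)     # Planck frequency, Hz

E_forms = [k_B * T_P, m_P * c**2, hbar * omega_P, h * nu_P]
for E in E_forms:
    assert math.isclose(E, E_forms[0], rel_tol=1e-12)
print(f"E_P = {E_forms[0]:.6e} J")
```

All four expressions agree to machine precision, at a value of about $1.96 \times 10^9$ J.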
Considering the different equalities between the terms in (9), the following relations result:

$$ k_B\, T_P = E_P, \qquad h\, \nu_P = E_P, \qquad m_P\, c^2 = E_P \qquad (10) $$

from which it can be obtained:

$$ k_B = \frac{E_P}{T_P} \qquad (11) $$

$$ h = \frac{E_P}{\nu_P} \qquad (12) $$

$$ c^2 = \frac{E_P}{m_P} \qquad (13) $$

The relation (11) says that $k_B$ is an energetic constant linked to the thermal-like behaviour of the Planck particle; the relation (12) says that h is an energetic constant linked to the wave-like behaviour of the Planck particle; finally, the relation (13) says that $c^2$ is an energetic constant linked to the particle-like behaviour of the Planck particle.
Against this backdrop, and especially considering the physical meaning of (11), the entropy of the Planck particle, or entropy quantum, or zero-point entropy, is defined as follows:

$$ S_0 = \frac{E_P}{T_P} = k_B \qquad (14) $$
Moreover, the following relations can be established:
which will prove important later on in the development of the theory.
If what has been obtained so far, i.e. that $S_0 = k_B$ is a quantum of entropy and that $N_A$ is the number of particles contained in a mole, is applied to (3), which expresses the gas constant, it follows that the gas constant $R = N_A\, k_B$ expresses the entropy of one mole of the medium Planck.
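This reading of R is straightforward to illustrate numerically; the sketch below assumes only the exact SI values of $k_B$ and $N_A$.

```python
# With S_0 = k_B read as the quantum of entropy, the gas constant
# R = N_A * k_B is then the entropy of one mole of Planck particles.
k_B = 1.380649e-23    # J/K (exact SI value), the proposed quantum of entropy
N_A = 6.02214076e23   # 1/mol (exact SI value), Avogadro number
R = N_A * k_B
print(f"R = {R:.9f} J/(mol K)")
```

The product reproduces the familiar value $R \approx 8.314$ J/(mol K).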
To verify this statement, the state equation of ideal gases is applied to the medium Planck:

$$ p_P\, V_P = n R\, T_P \qquad (18) $$

It is important to bear in mind that:

$$ p_P = \frac{c^7}{\hbar\, G^2} \quad \text{(Planck pressure)} \qquad (19) $$

$$ V_P = l_P^3 \quad \text{(Planck volume)} \qquad (20) $$

$$ T_P = \frac{m_P\, c^2}{k_B} \quad \text{(Planck temperature)} \qquad (21) $$

where $l_P$ is the Planck length. It is to be noted, as it should be, that the product $p_P V_P$ is an energy, exactly like the Planck energy:

$$ p_P\, V_P = \frac{c^7}{\hbar\, G^2}\, l_P^3 = \sqrt{\frac{\hbar\, c^5}{G}} = E_P \qquad (22) $$
From (18) it follows that:

$$ \frac{p_P\, V_P}{T_P} = nR \qquad (23) $$

and from this equation, using (22):

$$ nR = \frac{E_P}{T_P} \qquad (24) $$

Applying equation (11) to (24), it results:

$$ nR = k_B \qquad (25) $$

and taking (3) into consideration:

$$ n\, N_A\, k_B = N\, k_B = k_B \qquad (26) $$

In order to verify the equality, it must be that $N = 1$; that is, $k_B$ is regarded as the entropy of a single Planck particle.
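The chain of equalities above can be cross-checked numerically; the sketch below assumes CODATA constants and verifies that the product $p_P V_P$ is the Planck energy and that the resulting entropy of a single Planck particle is $k_B$.

```python
# Check that p_P * V_P = E_P (an energy) and p_P * V_P / T_P = k_B.
import math

c, G, hbar = 2.99792458e8, 6.67430e-11, 1.054571817e-34
k_B = 1.380649e-23

l_P = math.sqrt(hbar * G / c**3)   # Planck length
E_P = math.sqrt(hbar * c**5 / G)   # Planck energy
T_P = E_P / k_B                    # Planck temperature
p_P = c**7 / (hbar * G**2)         # Planck pressure
V_P = l_P**3                       # Planck volume

assert math.isclose(p_P * V_P, E_P, rel_tol=1e-12)  # the product pV is E_P
S_1 = p_P * V_P / T_P                               # entropy of one particle
assert math.isclose(S_1, k_B, rel_tol=1e-12)
```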
In fact, it is not a new concept that $k_B$ can express the quantum of entropy. Albeit indirectly, it was regarded as such by Leo Szilard (1929) in his famous problem of Maxwell's demon. From a classical point of view, the term $k_B T$ identifies the average energy scale of a gas (equipartition principle), with a continuous distribution. However, the quantum of entropy introduces a discretisation of thermal energy, on a par with the Planck constant h with respect to wave energy. Therefore, thermal energy can assume discrete values with minimal separation $k_B T$.
4. Is the Ideal Gas the Medium Planck?
Consider now an ideal gas. It is a simple thermodynamic model, characterised by non-interacting point-like particles. Following experimental evidence, Boyle and Mariotte found a relation between the pressure and the volume of a gas at constant temperature:

$$ pV = p_0\, V_0 \qquad (27) $$

whereas Gay-Lussac hypothesised the dependence of the volume of a gas on temperature at constant pressure:

$$ \frac{V}{T} = \frac{V_0}{T_0} \qquad (28) $$
The quantities $p_0$, $V_0$, $T_0$ are respectively the pressure, volume and temperature of an arbitrary and unknown, but fixed, thermodynamic state. It is interesting to know whether this thermodynamic state can assume any configuration, or whether it is uniquely specific, and thus an absolute intrinsic reference state. For this purpose, one wonders what kind of relation holds between pressure, volume and temperature if one were to move from the initial state, characterised by the quantities $(p_0, V_0, T_0)$, to the final state $(p, V, T)$; what follows can be found in any thermodynamics textbook.
First of all, let us change the pressure at constant temperature $T_0$, until pressure p is reached, obtaining the intermediate volume $V_x$:

$$ p_0\, V_0 = p\, V_x \qquad (29) $$

Now, let us change the temperature at constant pressure, thereby obtaining:

$$ \frac{V_x}{T_0} = \frac{V}{T} \qquad (30) $$

By replacing the intermediate volume of (29) in (30), one obtains:

$$ \frac{p_0\, V_0}{T_0} = \frac{pV}{T} \qquad (31) $$

One can note that $pV/T$ is a ratio between energy and temperature, and therefore an entropy. Furthermore, given that entropy is an extensive quantity, it has to increase proportionally to the number of gas particles. Thus, the entropy must be equal to the number of particles times the quantum of entropy,

$$ \frac{pV}{T} = N\, k_B \qquad (32) $$

and hence, for the reference state,

$$ \frac{p_0\, V_0}{T_0} = N_0\, k_B \qquad (33) $$
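The combined gas law derived above can be illustrated with ordinary laboratory numbers: for one mole of a nearly ideal gas at standard conditions, $pV/T$ indeed returns $N_A$ times $k_B$. The values below are illustrative.

```python
# One mole of (approximately ideal) gas at standard conditions:
# p*V/T should equal N_A * k_B = R, i.e. the number of particles
# times the quantum of entropy in the present reading.
p = 101325.0       # Pa
V = 0.0224140      # m^3, measured molar volume at 0 C, 1 atm (approximate)
T = 273.15         # K
k_B, N_A = 1.380649e-23, 6.02214076e23
S = p * V / T      # a ratio energy/temperature, i.e. an entropy, J/K
assert abs(S - N_A * k_B) / (N_A * k_B) < 1e-3
```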
Now we should ask ourselves: how many ways of choosing the triad $(p_0, V_0, T_0)$ do we have? They would surely be infinite if one were content with fixed values devoid of physical meaning. However, we are not looking for a triad linked to just any fixed value of the ratio; rather, it must be linked to the (minimum) quantum of entropy $k_B$. This greatly narrows the quest. The strategy we will follow is to look for the minimum volume $V_0$ and the maximum temperature $T_0$, from which, through interpolation, a precise pressure $p_0$ will follow.
4.1. Estimate of Vo
Einstein's general relativity stems from the necessity of reconciling Newtonian gravitation with special relativity, and it excludes any quantum effect. Conversely, quantum theory excludes any gravitational effect. Besides differing in their foundational paradigms, and therefore in their perspectives on how the world is, these two theories face another obstacle preventing them from being linked together: general relativity is a deterministic theory, whereas quantum theory is a probabilistic one.
Suppose we would like to develop a theory that strives to reconcile general relativity and quantum theory (this being the actual aim of quantum gravity); it is then reasonable to expect that the three fundamental constants c, G and $\hbar$, which are present in both theories, will be involved.
In 1899, Planck, overturning the anthropocentric approach to the search for universal units of measurement, started from these three fundamental constants in order to define what would come to be called the Planck units. These units hold their meaning in every place and time. Yet the limits they posit are insurmountable: crossing them results in a complete loss of meaning for physics as we know it.
In quantum theory, every mass m is associated with the Compton wavelength $\lambda_C$, which sets the distance scale relevant for understanding the behaviour of a particle of a given mass:

$$ \lambda_C = \frac{\hbar}{m\, c} \qquad (34) $$

It is also true that in general relativity every mass m is associated with the Schwarzschild radius $r_S$, which establishes the distance scale relevant for understanding the behaviour of an object of a given mass:

$$ r_S = \frac{2\, G\, m}{c^2} \qquad (35) $$

In general relativity, information is inferred through observations of very heavy objects (with $r_S \gg \lambda_C$), while in quantum theory information is inferred through observations of very light objects (with $\lambda_C \gg r_S$). This makes it impossible for us to reconcile these two theories, unless there exists a "place" where $\lambda_C$ and $r_S$ are the same, thereby identifying a scale of lengths that allows for their co-existence.
Analysing the two scales of distances (34) and (35)―respectively intrinsic to quantum theory and general relativity―one notes that both are characterised only by mass, because the rest of the terms are constants. One thus wonders: is it possible that this “place” is characterised by a particular mass?
The medium Planck is characterised by the Planck length and the Planck mass:

$$ l_P = \sqrt{\frac{\hbar\, G}{c^3}} \qquad (36) $$

$$ m_P = \sqrt{\frac{\hbar\, c}{G}} \qquad (37) $$

Consider now the Planck mass and insert it in (34), obtaining:

$$ \lambda_C = \frac{\hbar}{m_P\, c} = \sqrt{\frac{\hbar\, G}{c^3}} $$

which is equal to the Planck length, $\lambda_C = l_P$. In the same fashion, insert the Planck mass in (35), obtaining:

$$ r_S = \frac{2\, G\, m_P}{c^2} = 2\sqrt{\frac{\hbar\, G}{c^3}} $$

which is equal to the Planck length up to the factor 2, $r_S = 2\, l_P$.
We can therefore conclude that these characteristic lengths, $\lambda_C$ and $r_S$, are unified (to within a factor of order unity) when the mass m is the Planck mass $m_P$, and they are then of the order of the Planck length $l_P$.
This result hints at the fact that at the Planck scale general relativity and quantum theory could finally be unified. We have thus reached the conclusion we were looking for: the Planck length is the smallest length of interest to us, and consequently the Planck volume $V_P = l_P^3$ is the value $V_0$ we were looking for.
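The meeting of the two length scales at the Planck mass can be verified directly. The sketch below uses the standard conventions, the reduced Compton wavelength $\hbar/(mc)$ and the Schwarzschild radius $2Gm/c^2$; with these conventions the two scales agree up to a factor 2, i.e. both are of the order of $l_P$.

```python
# At m = m_P the quantum and gravitational length scales meet at ~l_P.
import math

c, G, hbar = 2.99792458e8, 6.67430e-11, 1.054571817e-34
m_P = math.sqrt(hbar * c / G)     # Planck mass
l_P = math.sqrt(hbar * G / c**3)  # Planck length

lam_C = hbar / (m_P * c)          # reduced Compton wavelength at m_P
r_S = 2 * G * m_P / c**2          # Schwarzschild radius at m_P

assert math.isclose(lam_C, l_P, rel_tol=1e-12)    # exactly l_P
assert math.isclose(r_S, 2 * l_P, rel_tol=1e-12)  # l_P up to the factor 2
```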
4.2. Estimate of To
In this case, the task is straightforward. Quite simply, the Planck temperature is an insuperable limit in quantum theory. Furthermore, according to the standard model of cosmology, the Planck temperature is thought to be the initial temperature at which the Big Bang happened. Our choice is therefore almost forced on us: the Planck temperature $T_P$ is the temperature $T_0$ we are looking for.
4.3. Estimate of po
Finally, after obtaining the Planck volume $V_P$ and the Planck temperature $T_P$, we will calculate the pressure through interpolation from (33), assuming

$$ N_0 = 1, \qquad V_0 = V_P, \qquad T_0 = T_P \qquad (39) $$

Given that $V_P = l_P^3$ and $T_P = E_P / k_B$, and considering (11) in the form

$$ E_P = k_B\, T_P \qquad (40) $$

from (39) we obtain

$$ p_0 = \frac{k_B\, T_P}{V_P} = \frac{E_P}{l_P^3} = p_P \qquad (41) $$

With this result, we have completed the triad we were looking for. To summarise, we estimate $p_0 = p_P$, $V_0 = V_P$ and $T_0 = T_P$.
As shown for a single particle, from (11) and (33) we can deduce:

$$ \frac{p_P\, V_P}{T_P} = \frac{E_P}{T_P} = k_B \qquad (42) $$

This result identifies the initial state with the medium Planck, in turn characterised by the triad $(p_P, V_P, T_P)$.
As for an ideal gas, equation (2), enriched by (25), becomes:

$$ pV = N\, k_B\, T \qquad (43) $$

$$ \frac{pV}{T} = N\, k_B \qquad (44) $$

The relations (43)-(44) corroborate what was already maintained in the previous article: our measurements are referred to the medium Planck, with respect to which our results are evaluated.
5. Quantum or Proportionality Constants?
In an effort to obtain an interpolation between the Wien spectrum and the Rayleigh-Jeans spectrum capable of describing the black-body radiation spectrum in a unitary way, Planck identified two constants. One is the Planck constant h, associated with the total spectrum of the black body. The other is the Boltzmann constant $k_B$ (to which Planck gave this name in Boltzmann's honour), which associates energy with temperature, $E = k_B T$. However, the first to use the Planck constant as a proportionality factor was Einstein in 1905, who introduced the concept of the light particle (photon), with an energy $E = h\nu$ connected to the frequency $\nu$, thereby explaining the photoelectric effect.
Theoretical physicists do not take the Boltzmann constant to be a universal constant, unlike the Planck constant or the speed of light in vacuum, because they consider it a mere scale factor between energy and temperature. However, h and c are also scale factors between certain parameters and energy, as in $E = h\nu$ and $E = mc^2$. This position derives from the conviction that there does not seem to exist a physical domain in which $k_B$ could assume a limit value. Against this attitude, one could argue that, to this day, there exists no paradigm which allows one to elevate this point of view to a principle serving to discriminate between a simple proportionality constant and a universal constant.
From the classical point of view, one could assume that when $h \to 0$ or $k_B \to 0$, these limits change physics in a fundamental fashion. But the assumption $h \to 0$ or $k_B \to 0$ has a purely qualitative meaning, not a quantitative one, because it can never be that $h = 0$ or $k_B = 0$.
6. Planck’s Oscillator
The harmonic oscillator is a very important theoretical model, especially in statistical mechanics, in which it is assumed that the interactions between oscillators are weak, so that they can exchange energy and reach an equilibrium distribution. In the five articles published by Planck during 1897-1899, in an effort to explain the origin of the universality of thermal radiation (Kirchhoff), he took a Hertzian oscillator into consideration: a dipole which vibrates in the presence of an electromagnetic field. In the next article (already in preparation) I will analyse in detail Planck's law of black-body radiation.
Consider now the medium Planck as formed by infinitely many 1D harmonic oscillators in equilibrium at temperature $T_P$. The energy of a single oscillator will consist of the kinetic energy and the potential energy,

$$ E = E_{kin} + E_{pot} = \frac{1}{2} m v^2 + \frac{1}{2} k x^2 \qquad (45) $$

where k is the spring constant. It is known that k is linked to the angular frequency $\omega$ by the relation

$$ k = m\, \omega^2 \qquad (46) $$

Hypothesising that the oscillator of the medium Planck oscillates over the Planck characteristic length, that is $x = l_P$, with the Planck speed $v = c$, (45) becomes

$$ E = \frac{1}{2} m c^2 + \frac{1}{2} k\, l_P^2 \qquad (47) $$
Because the Planck mass is

$$ m_P = \frac{\hbar}{l_P\, c} \qquad (48) $$

from (48) we obtain for the kinetic term

$$ E_{kin} = \frac{1}{2} m_P\, c^2 = \frac{1}{2} \frac{\hbar\, c}{l_P} \qquad (49) $$
Moreover, we know that

$$ c = \lambda_P\, \nu_P \qquad (50) $$

with $\lambda_P = 2\pi\, l_P$ being the Planck wavelength, therefore

$$ E_{kin} = \frac{1}{2} \frac{\hbar\, c}{l_P} = \frac{1}{2}\, h\, \frac{c}{\lambda_P} \qquad (51) $$

with $\nu_P = c / \lambda_P$ being the Planck frequency, therefore

$$ E_{kin} = \frac{1}{2}\, h\, \nu_P \qquad (52) $$
Finally, given that the Planck angular frequency is

$$ \omega_P = 2\pi\, \nu_P \qquad (53) $$

it follows that

$$ E_{kin} = \frac{1}{2}\, \hbar\, \omega_P \qquad (54) $$

Using what we know from (8), this results in

$$ E_{kin} = \frac{1}{2}\, \hbar\, \omega_P = \frac{1}{2} E_P \qquad (55) $$

Analogously, from (46) and (47), the potential term is

$$ E_{pot} = \frac{1}{2} k\, l_P^2 = \frac{1}{2} m_P\, \omega_P^2\, l_P^2 = \frac{1}{2} m_P\, c^2 \qquad (56) $$

that is,

$$ E_{pot} = \frac{1}{2} E_P \qquad (57) $$

From the relations (55) and (57) we can see that the kinetic energy of the Planck particle is equal to the potential energy,

$$ E_{kin} = E_{pot} = \frac{1}{2} E_P \qquad (58) $$

as is expected for a classical harmonic oscillator.
Finally, the entropy can be easily calculated from the definition:

$$ S = \frac{E_{kin} + E_{pot}}{T_P} = \frac{E_P}{T_P} = k_B \qquad (59) $$
Before going further into the development of the theory, it is interesting to show a peculiarity. We have seen in (46) that the relation $k = m\, \omega^2$ holds. We would like to obtain the value of the elastic constant k, for which we will use (49) and the second part of (55), together with the equality $E_{pot} = E_{kin}$:

$$ k = \frac{2\, E_{pot}}{l_P^2} = \frac{\hbar\, \omega_P}{l_P^2} = \frac{E_P}{l_P^2} \qquad (60) $$

Given that the Planck force is

$$ F_P = \frac{c^4}{G} = \frac{E_P}{l_P} \qquad (61) $$

we obtain

$$ k = \frac{F_P}{l_P} = m_P\, \omega_P^2 \qquad (62) $$

that is, k expresses the Planck force per unit of length, and it is equivalent to the classical result (46) for a spring constant.
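The value of k can be cross-checked numerically: with CODATA constants (assumed values below), $m_P\, \omega_P^2$ coincides with $F_P / l_P$, and the corresponding potential energy at displacement $l_P$ is half the Planck energy.

```python
# Spring constant of the Planck oscillator: k = m_P * omega_P^2 = F_P / l_P.
import math

c, G, hbar = 2.99792458e8, 6.67430e-11, 1.054571817e-34
l_P = math.sqrt(hbar * G / c**3)  # Planck length
m_P = math.sqrt(hbar * c / G)     # Planck mass
E_P = math.sqrt(hbar * c**5 / G)  # Planck energy
omega_P = c / l_P                 # Planck angular frequency
F_P = c**4 / G                    # Planck force

k = m_P * omega_P**2              # classical spring constant at Planck values
assert math.isclose(k, F_P / l_P, rel_tol=1e-12)                 # k = F_P / l_P
assert math.isclose(0.5 * k * l_P**2, 0.5 * E_P, rel_tol=1e-12)  # E_pot = E_P / 2
```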
Turning back to the theory, it is appropriate to first recall some concepts that are necessary for a better understanding of the development.
7. Historical Results
7.1. Kinetic Theory of Gases
Classical statistical mechanics was developed by applying statistical concepts to classical mechanics. The theory is a consequence of the classical kinetic theory, which is based on the hypothesis that thermal equilibrium is reached through the exchange of energy between point-like masses by means of collisions. Under these conditions, potential energy can be disregarded compared to kinetic energy. A single harmonic oscillator can be considered in thermal equilibrium with a heat bath if we imagine that the oscillator exchanges energy with the particles of the heat bath. At equilibrium, the kinetic energy of the oscillator will correspond (on average) to the kinetic energy of the particles of the heat bath. In this way, the average energy of the oscillator immersed in the heat bath, provided by collisions with the particles of the heat bath itself, is directly connected to the average kinetic energy of those particles.
Classical electrodynamics developed in the same period as classical statistical mechanics. However, in marked contrast to classical mechanics, in classical electrodynamics there is no such thing as energy transfer through collisions among point-like objects: abrupt collisions among charged particles determine a great loss of energy via radiation. Classical electrodynamics involves long-range Coulomb forces, which do not fit the assumptions of classical statistical mechanics. Energy transfer within electromagnetic systems involves the forces of the electromagnetic fields associated with charged particles or with electromagnetic waves. As opposed to what happens in classical statistical mechanics, oscillating electric dipoles at different frequencies, interacting through electromagnetic fields, are not obliged to have the same average energy in steady-state behaviour.
7.2. Planck-Einstein’s Relation
In 1900 Planck successfully introduced the hypothesis of the quantisation of energy: the hypothesis that energy, both for mechanical systems and for electromagnetic radiation, cannot be possessed or exchanged in arbitrary quantities, but only through elementary, further indivisible "packets", the so-called quanta. The idea stems from the necessity of obtaining the average energy of the oscillators through which the electromagnetic radiation reaches equilibrium with the walls of a cavity (the black-body problem). The implications of this hypothesis were brought to their logical consequences through its interpretation by Einstein and Debye, the first to attribute physical reality to the quanta. In the final formulation, the possible energies of any oscillator, even a mechanical one, with angular frequency $\omega$, must be discrete and equispaced by a quantity $\hbar\omega$, and therefore given by the relation $E_n = n\, \hbar\omega$, with n a non-negative integer.
8. Model of the Medium Planck―Cosmic Background Medium Planck (CBMP)
Hypothesise that the medium Planck represents a massive heat bath formed by an ideal gas, in which a system S is immersed. The system S works like a thermometer and is allowed to absorb or release energy through discrete jumps of constant intensity. This could be an atom executing a transition between two energy states, or any more generic case for that matter. We will call "oscillator" a single component of the system S, which will be able to execute an energetic jump when it reaches the empirical temperature $T_1$, thus absorbing the energy $h\nu_1$. Moreover, the system S, immersed in the heat bath at temperature T, can absorb energy without altering the temperature of the heat bath.
Because nothing is known about the number of particles forming the system S, we will limit ourselves to dealing with a single Planck particle and a single oscillator. To signify this, we will add the subscript "1" to each measured quantity.
The whole system will have total energy $E_{tot}$, consisting of the sole kinetic energy of the particle of the medium Planck, $E_{MP}$, and of the energy of the oscillator of S, from which:

$$ E_{tot} = E_{MP} + \left( n + \frac{1}{2} \right) h\, \nu_1 $$

with $\nu_1$ as the characteristic frequency of the oscillator of the system S.
Following the standard development, we extract the partition function necessary to obtain the information, keeping in mind that $\beta = \frac{1}{k_B T}$:

$$ Z = \sum_{n=0}^{\infty} e^{-\beta E_{tot}} = e^{-\beta E_{MP}}\, e^{-\beta h\nu_1 / 2} \sum_{n=0}^{\infty} e^{-\beta n h \nu_1} \qquad (65) $$
We know that the expression

$$ \sum_{n=0}^{\infty} e^{-\beta n h\nu_1} = \sum_{n=0}^{\infty} \left( e^{-\beta h\nu_1} \right)^n $$

is the infinite sum of a geometric series with common ratio less than 1, therefore

$$ \sum_{n=0}^{\infty} e^{-\beta n h\nu_1} = \frac{1}{1 - e^{-\beta h\nu_1}} $$

Hence, the partition function assumes the form

$$ Z = e^{-\beta E_{MP}}\, \frac{e^{-\beta h\nu_1 / 2}}{1 - e^{-\beta h\nu_1}} $$
To calculate the average energy we resort to the thermodynamic relation

$$ \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta} $$

Calculate the logarithm first,

$$ \ln Z = -\beta E_{MP} - \frac{\beta h\nu_1}{2} - \ln\left( 1 - e^{-\beta h\nu_1} \right) \qquad (69) $$

and compute the derivative with respect to $\beta$:

$$ \langle E \rangle = E_{MP} + \frac{h\nu_1}{2} + \frac{h\nu_1}{e^{\beta h\nu_1} - 1} $$
This expression is analogous to Planck's second formula, where the zero-point energy appears.
Coming back to standard notation, we obtain

$$ \langle E \rangle = E_{MP} + \frac{h\nu_1}{2} + \frac{h\nu_1}{e^{h\nu_1 / k_B T} - 1} \qquad (75) $$
Computing the limit $T \to 0$ signifies that the oscillator of S cannot receive the energy necessary to execute the energetic jump, because it is in the condition $k_B T \ll h\nu_1$, which does not allow it to absorb the energy $h\nu_1$. Therefore, for $T \to 0$, the term $\frac{h\nu_1}{e^{h\nu_1 / k_B T} - 1} \to 0$, and (75) becomes

$$ \langle E \rangle = E_{MP} + \frac{h\nu_1}{2} $$

At low temperatures, close to absolute zero, there thus exists a zero-point energy for the whole system.
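The closed form for the mean oscillator energy can be cross-checked against a direct numerical sum over the levels $E_n = (n + \frac{1}{2}) h\nu_1$. The frequency and temperature below are arbitrary illustrative values.

```python
# Mean energy of one oscillator: numerical sum over levels vs closed form
# h*nu/2 + h*nu/(exp(h*nu/(k_B*T)) - 1); the zero-point term survives T -> 0.
import math

h, k_B = 6.62607015e-34, 1.380649e-23
nu, T = 1.0e13, 300.0              # illustrative frequency (Hz), temperature (K)
beta = 1.0 / (k_B * T)
eps = h * nu

levels = range(200)                # truncated sum; converges fast here
Z = sum(math.exp(-beta * (n + 0.5) * eps) for n in levels)
E_num = sum((n + 0.5) * eps * math.exp(-beta * (n + 0.5) * eps)
            for n in levels) / Z
E_closed = eps / 2 + eps / (math.exp(beta * eps) - 1)

assert math.isclose(E_num, E_closed, rel_tol=1e-9)
```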
To calculate the entropy of the whole system we resort to additivity, that is $S_{tot} = S_{MP} + S_S$. As regards the heat bath, it contributes the entropy $S_{MP} = k_B$, obtainable directly from the first term of (65), whereas for the oscillator of the system S we use the relation between the Helmholtz free energy F and the entropy at constant volume:

$$ S = -\left( \frac{\partial F}{\partial T} \right)_V $$
Use the Helmholtz free energy

$$ F = -k_B\, T \ln Z $$

and (69) without the term in $E_{MP}$:

$$ F_S = \frac{h\nu_1}{2} + k_B\, T \ln\left( 1 - e^{-h\nu_1 / k_B T} \right) $$

Computing the derivative with respect to T one has:

$$ \frac{\partial F_S}{\partial T} = k_B \ln\left( 1 - e^{-h\nu_1 / k_B T} \right) - \frac{h\nu_1}{T}\, \frac{1}{e^{h\nu_1 / k_B T} - 1} $$

from which we calculate the entropy

$$ S_S = -\frac{\partial F_S}{\partial T} = \frac{h\nu_1}{T}\, \frac{1}{e^{h\nu_1 / k_B T} - 1} - k_B \ln\left( 1 - e^{-h\nu_1 / k_B T} \right) $$

The entropy of the whole system has the value

$$ S_{tot} = k_B + \frac{h\nu_1}{T}\, \frac{1}{e^{h\nu_1 / k_B T} - 1} - k_B \ln\left( 1 - e^{-h\nu_1 / k_B T} \right) $$
To study the behaviour of the entropy, substitute $x = \frac{h\nu_1}{k_B T}$:

$$ S_{tot} = k_B \left\{ 1 + \left[ \frac{x}{e^x - 1} - \ln\left( 1 - e^{-x} \right) \right] \right\} \qquad (85) $$

In the limit $T \to 0$ (that is, $x \to \infty$), as already pointed out for the energy, the oscillator can no longer exchange energy. For the first term in square brackets, this is a ratio between infinities whose denominator is of greater order than the numerator, so we will have:

$$ \lim_{x \to \infty} \frac{x}{e^x - 1} = 0 $$

For the second term in square brackets, rewritten as $\ln \frac{e^x}{e^x - 1}$, the argument of the logarithm is also a ratio between infinities, which are, however, of the same order, therefore

$$ \lim_{x \to \infty} \ln \frac{e^x}{e^x - 1} = \ln 1 = 0 $$

Therefore in (85) the square bracket is dismissed and

$$ \lim_{T \to 0} S_{tot} = k_B $$
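The limiting behaviour can be checked numerically: the bracketed oscillator contribution vanishes for large $x = h\nu_1 / (k_B T)$, so the total entropy tends to the heat-bath term $k_B$ alone.

```python
# Oscillator entropy S_osc(x) = k_B * [x/(e^x - 1) - ln(1 - e^(-x))]
# vanishes as x -> infinity (T -> 0); the total S_tot -> k_B.
import math

k_B = 1.380649e-23

def S_osc(x):
    return k_B * (x / (math.exp(x) - 1) - math.log(1 - math.exp(-x)))

assert S_osc(1.0) > S_osc(10.0) > S_osc(50.0)  # decreases toward zero
assert S_osc(50.0) < 1e-19 * k_B               # effectively gone at large x
S_tot_low_T = k_B + S_osc(50.0)
assert math.isclose(S_tot_low_T, k_B, rel_tol=1e-9)
```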
This result may seem to be at odds with the third law of thermodynamics (Nernst heat theorem).
In the scientific literature there exist three formulations of the third law of thermodynamics: two attributable to Nernst, the "father" of the third law, and one attributable to Planck, engendering a heated debate, still alive to this day, about the merits of each. Planck's formulation states that when the temperature tends to zero, $T \to 0$, the entropy of any system tends to zero, $S \to 0$. This is the currently accepted formulation, because the concept of entropy allows an immediate connection with Boltzmann's work and the results of statistical thermodynamics. In contrast, Nernst in his first enunciation, which serves to compute the work produced in a reaction, maintains that the entropy of a system at absolute zero, $T = 0$, is a universal constant, which may be assumed to be zero. In his second enunciation, which could seem different from the first, Nernst maintains that it is impossible to reach the absolute zero of temperature with a finite number of transformations. Nernst's second formulation is more restrictive and more fitting to thermodynamic theory, in that it follows from the second law of thermodynamics, especially when entropy is conceived of as degradation of energy, better known as Kelvin's principle.
Suppose we want to cool a thermodynamic system below room temperature. To achieve this, we clearly have to isolate the system from its surroundings; otherwise heat would naturally flow from the environment toward the system. It is therefore necessary to carry out an adiabatic transformation. Yet this type of (reversible) transformation is also isentropic, which means that the entropy has to remain constant during the transformation. Because the entropy is not zero at the beginning, it cannot be zero at the end of the transformation, thus showing that absolute zero is unattainable. Furthermore, a thermal machine with its lower isotherm at zero temperature would transform all the heat into work, becoming a perpetual-motion machine of the second kind, which is impossible.
In conclusion, the two enunciations by Nernst do not exclude a non-null entropy at absolute zero for the whole system, and they imply the impossibility for the oscillator of the system S to reach absolute zero.
9.1. The Quantum
Modern physics does not tend to regard the Planck constant as a classical proportionality constant, and an even more restrictive treatment has been reserved for the Boltzmann constant. Even less researched is the physical meaning of the quantum, although it is the "seed" that modern physics has utilised to carry out a cultural revolution. If energy, in all its forms, derives from a common source, it is not understood why in the macrocosm it is a continuous quantity while in the microcosm it is a discrete one.
Contemporary physics proposes a world that suffers from this dichotomy between classical behaviour (macro-scales) and quantum behaviour (micro-scales), thus failing to unify nature. Current physical thought tends to regard this as a structural fact, determined by the diversity between deterministic and probabilistic theories and, connected to this, between continuous and discrete physical quantities. The introduction of the Boltzmann constant as a quantum of entropy extends discretisation to the thermal realm, and thus to classical physics, adding itself to the undulatory discretisation operated by the Planck constant h. Both can be interpreted, in the context of classical physics, as multiplicative scale factors associated with the medium Planck and, as such, as entities which express the minimum reference energetic content in absolute terms, with respect to which energy can vary through discrete jumps of a precise intensity.
9.2. Continuity or Discontinuity?
Another aspect, no less important, is the change of paradigm from continuity to discontinuity.
The existence of a quantum of entropy sheds new light on the third law of thermodynamics in its enunciation by Nernst, implying that absolute zero is unattainable. From the point of view of the foundations of physics, the presence of two apparently contrasting versions of the third law finds its explanation in the different mathematical approaches used. Historically, both chemistry and thermodynamics have resorted to simple mathematics and a direct contact between notions and empirical data, introducing concepts in an operative fashion and thus refusing the abstract apparatus of classical mathematics. One could suggest that the formulation of physical theories should move toward constructive mathematics, more adherent to physical methodology. The third law stems from the necessity of reformulating the principles of thermodynamics through differential equations (the Gibbs-Helmholtz equation), and it thus represents more of a methodological principle. For this reason, it should precede the other laws, as it determines the type of mathematics best fitted to express the concepts and contents of thermodynamics itself.
The vacuum is much more than we know. It is a very crowded space, characterised by zero-point energy, an active entity that has earned its role on the physical stage in an incontrovertible way. It is a universal phenomenon, which is uniform and isotropic; it pervades all, penetrating every structure in the whole universe.
Through notions originating from classical mechanics, we have analysed the energetic relations of the medium Planck, showing that at this level energy is unified by the relation (9). This allowed us to express Boltzmann constant as the quantum of entropy, and has provided us with an explanation of the gas constant R as the entropy of one mole of the medium Planck.
Through notions of classical thermodynamics we have shown that the medium Planck could be the ideal (perfect) gas invoked in every thermodynamics treatise as the reference gas. This opens new perspectives on the study of thermodynamic systems and corroborates the idea, stated at the beginning, that our measurements always refer to the medium Planck as the reference system of the absolute zero point. This fundamental backdrop, provided by the medium Planck, will form the Cosmic Background Medium Planck (CBMP).
The introduction of the quantum of entropy confirms, in addition, the third law of thermodynamics as enunciated by Nernst, and thus poses the question of an operational mathematics for the study of physics, that is, the choice between classical continuous mathematics and constructive discrete mathematics. This problem becomes evident when, arbitrarily, we posit $h \to 0$ or $k_B \to 0$: these expressions have a purely symbolic meaning; they are formal relations without logical-physical content.
Consequently, we have hypothesised a model consisting of the medium Planck as a heat bath in which a classical harmonic oscillator is immersed, and we have developed, through notions of statistical thermodynamics, the study of the model itself. The conclusions drawn were the same as Planck's in his study of the black body. This suggests that the medium Planck, contributing to the system only through its kinetic energy, as hypothesised in the kinetic theory, is an intrinsic entity of the universe and not the result of the uncertainty principle of quantum theory. At the Planck level, no indeterminacies exist; rather, there are fluctuations of the system around the average value, determined by the statistical nature of the theory in consideration.
A current of thought developed in the 1970s, which uses this approach with the zero-point field, was named Stochastic Electrodynamics (SED), in contrast to the standard Quantum Electrodynamics (QED). It is possible to summarise the difference between the two as follows: "From the philosophical point of view, if the universe were filled―for unknown reasons―with a zero-point field and only a group of physical laws (classical physics consisting in mechanics and electrodynamics) it would appear just the same as a universe governed―for unknown reasons―by two distinct physical laws (classical and quantum) without zero-point field". In physical terms, SED based on an intrinsic cosmological zero-point energy is able to compute and classically interpret the black-body spectrum, the Heisenberg uncertainty relation, the Schroedinger equation, and to explain the wave-like nature of matter. The same concepts, interpreted without zero-point energy, lead to the concepts of QED. It is then possible to explain some phenomena with QED and some with SED, and the choice between explanations comes down to aesthetics, because the theories provide the same answers. The best possible result for SED would be to demonstrate that classical physics plus a classical electromagnetic zero-point field can replicate all quantum phenomena.
Before drawing to a conclusion, we want to take into consideration Nernst's speculative hypothesis of 1916, on what he used to refer to as the "thermodynamic approach" to the study of the universe, regarding the problem of "thermal death". The key hypothesis was an active ether in constant interaction with matter, characterising both material objects and the radiation permeating the ether: "Even without the existence of radiating matter, that is matter heated above the absolute zero or somehow excited, the empty space is filled with radiation". According to Nernst, the law of conservation of energy only had statistical validity, "indeed just as the second law of thermodynamics". Conservation of energy is not necessary for a single atom or molecule, given that the material object exchanges energy with the energetic reservoir hidden in the vacuum. An important part of Nernst's hypothesis was the computation of the zero-point energy following the ordinary theory of statistical mechanics, insofar as the quantity $k_B T$ would be replaced by the average oscillator energy including the zero-point term, $\frac{h\nu}{e^{h\nu / k_B T} - 1} + \frac{h\nu}{2}$. This implied that to each degree of freedom, to which the classical theory assigned the energy $\frac{1}{2} k_B T$, there corresponds a zero-point energy. For example, the fundamental state of a one-dimensional oscillator becomes $h\nu$ and not, as it was in Planck's theory, $\frac{h\nu}{2}$. He commented: "Every atom, and every agglomerate of atoms, able to oscillate at frequency $\nu$ in its mechanical conditions, will have for every degree of freedom a kinetic energy $\frac{1}{2} h\nu$, and this even at the absolute zero." As opposed to the usual thermal motion, but in agreement with thermodynamics, zero-point energy is for Nernst, like every type of energy at absolute zero, a "free energy". In the cosmological context, Nernst's hypotheses reappeared in the 1960s, when T. Boyer proposed the SED theory. As Boyer himself points out in a 1969 article, some of the features of his theory had been anticipated by Nernst some fifty years before.
We conclude this article by maintaining our idea that there exists one single world, and that this world always behaves in one single manner. The non-reducibility (non-unification) of current theories is a consequence of the exclusion of the primary ingredient, the zero-point field. The existence of this field, with its zero-point energy, has been demonstrated in an unambiguous fashion by various effects (the Casimir effect, the Lamb shift). This field assumes reality at the absolute zero of temperature, and its minimal expression is the zero-point energy.
I believe that in order to express oneself, one needs not only a good amount of knowledge, but also the mastery of a language (English, in this case) to convey that knowledge. I therefore thank my nephew, Guido, for his patience and time.
 Drago, A. (1991) The Alternative Content of Thermodynamics: Constructive Mathematics and the Problematic Organization of the Theory. In: Martinas, K., et al., Eds., Thermodynamics. History and Philosophy, World Scientific, Singapore, 329-345.
Haisch, B., Rueda, A. and Puthoff, H.E. (1998) Advances in the Proposed Electromagnetic Zero-Point Field Theory of Inertia. 34th Conference of the American Institute of Aeronautics and Astronautics, Cleveland, Ohio, 13-15 July 1998, AIAA Paper 98-3143. arXiv:physics/9807023v2 [physics.gen-ph]