On the nature of electromagnetic radiation (ER) and its unique properties in relation to blackbody spectral emissions, Planck’s work introduced the notion of quantized energy packets that led to a better understanding of light. He postulated that ER adhered to a strict quantal rule in absorption and emission, with photon energies given by E = nhν, where n is the number of packets, h is Planck’s constant, and ν is frequency. The constant h was later understood as the smallest action that could exist in nature and with it, Planck developed expressions for the fundamental units of length, mass, and time:
$l_p = \sqrt{\hbar G / c^3}$, (1)
$m_p = \sqrt{\hbar c / G}$, (2)
$t_p = \sqrt{\hbar G / c^5}$. (3)
Planck’s formulation is built on a nondimensionalized framework of fundamental units of measure and succeeds in revealing important fundamental relationships, but does not establish a grounded understanding of their significance. The values lp, mp and tp are derived from measures of ħ, G and c. Because the values are interpreted to represent a smallest measure, we may infer that they can never be directly measured; to do so would imply an ability to measure a value by means of physical elements that were equal to or larger than the target. There has been no significant progress addressing this issue since Planck’s initial publication.
In this paper, we will present experimental data that demonstrate the physical significance of these measures. The approach departs from modern theory with the view that the underlying reference frame adheres to rules which are more discrete than quantum. This opposing perspective, a model based on a background independent framework of discrete indivisible units of length, mass and time, is what separates this approach. Referred to as Informativity, this model is based on the idea that phenomena are described as discrete units, that is, integer values of a fixed amount of measure. Observations of light are considered in geometric terms that may be used to describe gravity, dark energy, visible and observable mass, inflation, the big bang and the cosmic microwave background (CMB) as a whole-unit interpretation of physical phenomena.
In design, the model parallels modern theory and is similar in principle to the approach taken by Albert Einstein. Einstein’s model for special relativity (SR) arises from an understanding that the speed of light lp/tp is a physically significant bound. This model recognizes that mp/tp is also a physically significant bound.
With the model, we present a fundamental expression that relates length, mass and time. For each measure, the expression is at best a composite of the other two. Self-referencing measures (i.e. measures defined in terms of other measures) provide a framework for developing expressions of observed phenomena. However, when working with dark energy, a difficulty arises in understanding phenomena that are properties of the universe. This research recognizes that where phenomena “within” the system are understood with respect to the self-referencing measurement framework, phenomena that are properties “of” the system are understood with respect to a self-defining measurement framework.
We will present expressions for measures defined relative to the system. The new framework considers what phenomena look like when the definitions of measure are presented as self-defining. Using the invariance of measure with respect to a moment in time to constrain this approach, expressions are presented as a demonstration of the approach, i.e., a quantum expression for gravity, evidentiary support for the physical significance of fundamental units, a demonstration of the relationship between Planck’s units and the fundamental units of measure, a calculation of Hubble’s constant, the age, size, mass and density of the universe, how much matter is visible, observable and what can never be observed. Expressions describing expansion, dark energy and dark matter are also presented and explained.
In the process of resolving an understanding of mass, we also present new expressions describing the birth of the universe, what starts and stops inflation (which we will distinguish from the modern understanding of inflation with the term, quantum inflation), what causes the big bang and the underlying mechanics that cause expansion. To validate the approach, among other outcomes, the model is used to calculate the age, energy, density and temperature of the CMB and present an understanding of the processes that constrain those values. A study of CMB measurements confirms the results to four significant digits, our best measurement data available.
In addition to cosmological phenomena, Informativity also allows for the development of expressions in quantum physics. For one, using a Bell state model presented by Shwartz and Harris  , expressions are presented that describe the presence of a distorting effect at work in the measure of G and ħ. Understanding this effect leads to the resolution of contradictions in the evaluation of certain expressions and provides a foundation with which to develop expressions that describe phenomena both very large and very small.
The approach is based on the idea that three fundamental measures may be relatively defined: length, mass, and time. The measures are identified by symbols lf, mf, and tf, but at this stage are not assigned values. The subscript f is used to distinguish the fundamental measures of Informativity from Planck’s units, Equations (1)-(3).
Although the values of lf and tf are unspecified, their ratio may be understood and constrained by the elapsed time on an atomic clock relative to a distance traveled by a pulsed laser beam in vacuo, where c = lf/tf with respect to any inertial frame. It is recognized that this ratio is fixed given the experimental support for the invariance of the value of c. Where nLflf = nTftfc, a count of lf, i.e., nLf, will equal a count of tf, i.e., nTf. Note that lf or tf may take any value and as such an arbitrary value may be chosen such that it is the largest value for which no smaller value can be observed. Other physical quantities, in turn, are obtained as counts of the fundamental measures.
The model does not provide a description of mass as a discrete unit of measure. Hence, the phenomenon of mass will be treated as being either a whole or fractional count of mf. Apart from this modified definition for mass, each of the measures is taken as relative to a fundamental unit, which is only meant to say that they are countable.
Note that time has been subtly tied to distance and for that reason our definition is not only inclusive of the three spatial dimensions but extends to the temporal dimension. Without time, there exists no means to define space.
We summarize these two statements, which with two others formalize our model:
O1: Quantities exist which are whole-unit counts of the fundamental measures lf and tf.
O2: Mass may be a whole or fractional count of mf.
O3: Any remainder of a whole-unit calculation of lf or tf describes an action.
O4: For distances to which the Pythagorean Theorem applies, the shortest side a is fixed at a count of one lf, against which counts of lf along sides b and c are made.
An underlying premise of the model is that in O1 and O2 all measures are defined relatively. With a unit system applied, there exists an agreed upon framework by which phenomena may be described by counting fundamental measures. O3 recognizes the possibility that some expressions may solve for fractional counts of fundamental measures. Fractional counts violate the premise by which lf and tf are established. Therefore, any expression that describes a change in distance equal to the remainder of a measure must also describe an action.
O4 presents a tool, the Pythagorean Theorem, for estimating length. Where the measure is relatively defined, the theorem incorporates a reference which must be equal to a unit count of one; that is, a = 1 unit is the reference. The other short side b is any given unit count of distance measures whereas the long side c (hypotenuse) is the distance measure of unknown unit count. Each side is a count of the reference measure defined by a. This establishes a foundation for a background independent framework that acknowledges the need for a reference within the definition. The expression $1^2 + b^2 = c^2$ describes an unknown distance c relative to b in terms of a count of a. As desired, solutions for c result in values that are fractional, leaving us to test the hypothesis that gravity may be described as the lost excess over and above the whole-unit count of distance measures.
While we recognize that the measurement of quantities is limited, this does not mean they are not significant. Validating their significance against existing data is one goal of the model. Furthermore, although we use the Pythagorean Theorem to understand distance, there is no specific argument to suggest that another geometric expression would not serve the same purpose. The theorem provides insight only so long as it allows the presentation of expressions that cannot be reduced by another means. Finally, the term fundamental was chosen as a general term with the connotation that a measure has the characteristic of being countable and that measurement may be characterized as a count of fundamental measures. The term also serves to distinguish the units of measure adopted by Informativity from those given in Planck’s base system.
Note that Planck’s formulations are referenced for context, but are considered only as a guide in the development of our model. The model adopts values for the fundamental measures that are resolved entirely within Informativity.
3.1. Length Measurement and Gravitational Acceleration
For long side c and short sides a = 1 and b of any chosen integer count of a right-angle triangle (Figure 1), we may resolve a count for the length measure representing the uncertain distance,

$c = \sqrt{a^2 + b_{Lf}^2} = \sqrt{1 + b_{Lf}^2}$. (4)
Any non-whole-unit count relates to a change in distance and may be described by rounding up (repulsion) or down (attraction). The remainder lost to rounding will be denoted by QLf. For all solutions, QLf is less than half and thus attractive; an example of repulsion will be explored in Section 3.13. The model provides a count of distance measures that is closer by

$Q_{Lf} = \sqrt{1 + b_{Lf}^2} - r_{Lf}$ (5)

at every instant in time. For example, if $b_{Lf} = 4$, then $c = \sqrt{17} \approx 4.123$, giving $r_{Lf} = 4$ and $Q_{Lf} \approx 0.123$. Because side c always rounds down, we find that rLf always equals bLf. In the following, we shall always refer to the “observed measure count” as rLf. Moreover, note that the reference measure against which all counts are measured is defined by aLf = 1. With this we have composed an expression for gravity such that the loss of the remainder relative to the whole-unit count is QLf/rLf.
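As a numerical sketch of this counting rule (the function name is ours, not from the paper), the whole-unit count and remainder for any side b can be computed directly:

```python
import math

def length_counts(b_lf):
    """Whole-unit count r_Lf and remainder Q_Lf for sides a = 1, b = b_lf,
    per the rule 1^2 + b^2 = c^2 with side c rounded down (attraction)."""
    c = math.sqrt(1.0 + b_lf ** 2)
    r_lf = math.floor(c)   # observed (whole-unit) count; equals b_lf
    q_lf = c - r_lf        # remainder lost to rounding
    return r_lf, q_lf

r_lf, q_lf = length_counts(4)  # the Figure 1 configuration, b_Lf = 4
print(r_lf, q_lf)              # 4 and about 0.1231, i.e. sqrt(17) - 4
```

Because √(1 + b²) < b + 1 for all b ≥ 1, the rounded-down count always equals b, matching the claim that rLf = bLf.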
Together QLf and rLf are conjectured to represent an important dimensionless ratio that describes gravity. We proceed with that hypothesis by presenting the ratio in meters per second squared (m/s²), where we multiply by lf for meters and divide by $t_f^2$, together describing the distance loss at the maximum sampling rate of one sampling every tf seconds, per second,

$\dfrac{Q_{Lf}}{r_{Lf}} \dfrac{l_f}{t_f^2}$. (6)
Figure 1. Count of distance measures between an observer and target where bLf = 4.
We now note that this quantity is scaled and hence requires a scaling constant; we multiply by the speed of light c and divide by a scaling constant S. Setting r = rLflf and c = lf/tf, Equation (6) reduces to

$\dfrac{Q_{Lf} c^3}{r_{Lf} l_f S}$. (7)
As the ratio c/S may be understood as 1/kg, or a maximum count of mf per kilogram, it may also be thought of as the corresponding mass frequency associated with gravity. Where S = 3.26239, this expression is equivalent to G/r² to five significant digits for all distances greater than 10³lf. Where quantum differences are not a consideration, we may set Equation (7) equal to G/r² and therefore

$G = \dfrac{Q_{Lf} r_{Lf} l_f c^3}{S}$. (8)
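The quoted value S = 3.26239 can be checked numerically under the hedged assumption that lf may be approximated by the CODATA Planck length and that S takes the form lf c³/2G stated later in this section:

```python
import math

# CODATA-style constants (SI); l_f is approximated by the Planck length here
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

l_f = math.sqrt(hbar * G / c ** 3)   # ~1.616e-35 m
S   = l_f * c ** 3 / (2 * G)         # scaling constant, kg m / s
print(S)                             # close to 3.26239
```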
We may interpret S as momentum; hence the units for these expressions will match accordingly. Nevertheless, recognizing that S first enters as a dimensionless scalar is an important and critical detour that shall be central to the discussion below. Two applicable interpretations will be shown: we first investigate S as a momentum, and then perform a similar analysis with S as an angular measure. Solving Equation (8) for S gives

$S = Q_{Lf} r_{Lf} \dfrac{l_f c^3}{G}$. (9)
Consider Equation (9); after rearranging and reducing the term on the right with r = rLflf, we use $\lim_{r_{Lf}\to\infty} Q_{Lf} r_{Lf} = 1/2$ as noted in Appendix A. In passing, the term QLfrLf, referred to as the Informativity differential, plays a key role in describing how fractional values less than the theoretical limit describe a distorting effect in measurement. Taking the Informativity differential at its limit is a matter of convenience; to maintain a precise expression, values for QLf and rLf should always be entered specific to the phenomenon being observed. The values determined can cover the entire physical regime from one lf to infinity. From Equation (9), we have

$S = \dfrac{l_f c^3}{2G}$. (10)
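The limit cited from Appendix A, QLf·rLf → 1/2, is easy to see numerically; a minimal sketch, assuming QLf = √(1 + b²) − b with rLf = b:

```python
import math

def informativity_differential(b_lf):
    """Q_Lf * r_Lf for integer side b_lf (r_Lf = b_lf, since c rounds down)."""
    q_lf = math.sqrt(1.0 + b_lf ** 2) - b_lf
    return q_lf * b_lf

for b in (1, 10, 100, 1000):
    print(b, informativity_differential(b))   # approaches 1/2 as b grows
```

At b = 1 the product is √2 − 1 ≈ 0.414, and by b = 1000 it is within about 10⁻⁷ of 1/2, illustrating why the limiting value is a reasonable macroscopic convenience but not a substitute for the exact quantum-scale values.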
Multiply both sides by tf and reduce to obtain a mass,

$m_f = \dfrac{2 S t_f}{l_f} = \dfrac{l_f c^2}{G}$. (11)
Hence the momentum of a fundamental measure of mass (light) moving at c may be expressed as

$m_f c = 2S$. (12)
We understand S as being half the momentum of a fundamental measure of mass. This may also be written as

$S = \dfrac{m_f}{2} \dfrac{l_f}{t_f}$. (13)
Any count of lf must equal the count of tf, hence requiring that S must correspond to mf being fractional. There exists no prerequisite that Informativity expressions be composed of whole-unit counts of mf; see O2 of Section 2. With this resolved, we now consider S as an angular measure. With the Pythagorean Theorem supporting an understanding of S as momentum, then a circle supports an understanding of S as an angle.
Consider Equation (1) organized such that $\hbar = l_f^2 c^3 / G$. Take Equation (10) and replace $c^3/G$ with $\hbar / l_f^2$. Hence

$S = \dfrac{\hbar}{2 l_f}$, (14)
where S = ħ/2lf, then the arc length of a circle of radius lf and angle S is

$s = l_f S = \dfrac{\hbar}{2}$. (15)
In Figure 2, we find that an arc-length with θ = 2S radians is precisely the value of ħ as meters. Each of the terms has a suitable geometric description:
・ lf radius of a fundamental circle in meters,
・ 2S angle in radians that subtends a segment with an arc length of ħ meters,
・ ħ arc length of a segment corresponding to the momentum of a fundamental measure of mass.
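The geometric reading above can be verified directly: with radius lf and θ = 2S = ħ/lf, the arc length rθ returns ħ identically. A small check, approximating lf by the CODATA Planck length:

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

l_f   = math.sqrt(hbar * G / c ** 3)  # radius of the fundamental circle, m
theta = 2 * (hbar / (2 * l_f))        # theta = 2S with S = hbar / 2 l_f, rad
arc   = l_f * theta                   # arc length s = r * theta
print(arc)                            # recovers hbar by construction
```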
Applicability to an existing geometric expression is just the first of several tests. Next, we consider support for the equivalence of these two interpretations. We begin by resolving S in terms of our initial description of gravity from Equation (10),

$S = \dfrac{l_f c^3}{2G}$. (16)
Next consider S = ħ/2lf as resolved in Equation (14). With lfc³/2G a momentum and ħ/2lf an angular measure, we set them equal giving

$\dfrac{l_f c^3}{2G} = \dfrac{\hbar}{2 l_f}$. (17)
Figure 2. Arc length of a circle of radius lf and subtending angle θ = S radians.
Isolating lf on the left-hand side,

$l_f = \sqrt{\dfrac{\hbar G}{c^3}}$. (18)
The expression clarifies our conjecture of the equivalence between the two interpretations: momentum and angular measure. The expression also demonstrates that the comparison is in fact a modified form of Planck’s universally recognized formulation for length.
Finally, to verify this interpretation, we now seek a quantity where S is:
・ An invariant characteristic of light at a threshold,
・ Described as lfc3/G with respect to momentum,
・ Described as ħ/2lf with respect to angular measure.
A quantity was measured by Shwartz and Harris in 2011 regarding the quantum entanglement of light at the degenerate state  . Using polarization entangled photons in pure Bell states at X-ray wavelengths, they were able to take advantage of the intersections of the component curve (as a function of the square of the current density) to resolve pump angles θp where the magnitudes of the components of each Bell state are equal. With this, solutions to the phase matching and current density equations were resolved to determine the sign of the components at the intersection. Then solving the phase matching equations for the signal θs and idler θi with respect to the atomic planes, substituting the related electric fields, the current density is a function of just the pump angle. With these conditions in place, the momentum of a fundamental measure of mass is then equal in value to the angle of the signal and idler lfc3/2G with respect to the atomic planes where the pump is at its maximum angle.
There are five pump angles representing two of the Bell states that can generate entangled photons, and lfc³/2G is uniquely distinguished where θp is at its maximum. Shwartz and Harris recognize these Bell states in terms of two polarizations of the electric field of the X-ray: one lying in the scattering plane, which contains the incident k vector and the lattice k vector G, and one orthogonal to that plane. Subscripts p, s, and i, respectively, denote the pump, signal and idler.
The expressions in Table 1 arise from Equation (16), each describing an equidistant angle either side of 0, π or 2π, and are precisely identical in value to the Shwartz and Harris measurements. Using the most recent CODATA as a guide for the value of lf, we find that
But note that we have made use of Planck’s relation in Equation (14). We conclude that our understanding of lf as expressed by
Table 1. Angle setting in radians of the k vectors of the pump, signal and idler for maximally entangled states at the degenerate frequency with corresponding Shwartz and Harris values (Reference  ).
Informativity precisely matches the values presented in the Shwartz and Harris model, but our understanding of ħ when applying Planck’s expression is incomplete. The issue that affects Planck’s reduced constant will be resolved in Section 4.
The correlation between S and the angular measures of the Shwartz and Harris Bell state is not unexpected. Where the signal and idler are resolved specifically to obtain the polarization angles necessary for entanglement, seeking the pump angle follows naturally, thereby resolving each of the conditions where entanglement may occur. Informativity is not a coincidental alignment of one of these values, but a means of resolving the maximum angular measure corresponding to light in terms of the fundamental measure mf. With that, resolving the angular measures for each of the limits described by the Bell state follows in a straight-forward manner.
Where the expression 2S describes the momentum of a fundamental measure of mass, the term S describes an angle. Both interpretations are valid. The juxtaposition of units describes a relationship that is conflicting, similar to Einstein’s relation E = mc2, which expresses energy in joules as a form of mass in kilograms, and vice versa. This presentation demonstrates that momentum and angular measure are one and the same.
With this understanding we consider replacing the scalar term S with θsi. The term alludes to recognizing the angular measure of the signal and idler under some conditions and momentum under others. Although both interpretations are applicable, θsi is retained emphasizing that we are not working with a theoretical value, but an invariant macroscopic measure. Additional research regarding the measure of θsi has been reported  , where the error in angular measurement is estimated to be less than 2 micro-radians.
In reflecting on Planck’s work, one might argue that this approach is a basic presentation of a derivation of Planck units. One must take note that the role of fundamental measures at this point is a mathematical construct, a proposed interpretation of the existing argument. The measures exist only in their expression until formally resolved in the next section. Whereas CODATA estimates may be used to guide our understanding of S, up to this point no theoretical values are assumed. Our confidence in correlating S to θsi rests in the correctness of the two interpretations of S and their correlation accounts for Planck’s expression for length.
One might also view this approach as an innovative alternative to Newtonian vector calculus thus side-stepping what might be an otherwise traditional understanding of gravitation. However, this argument would also work against an underlying premise of this paper, that higher-order operators (other than the four basic arithmetic operators) disguise the fundamental constructs. Where a treatment using vector calculus would resolve the traditional presentation, the quantum relationship to θsi would be lost or at least well-disguised.
3.2. Fundamental Measures
With the θsi correlation, we may now resolve the fundamental measures lf, mf, and tf, not as theoretical constructs, but with physical expressions constrained by the characteristics of light and gravity, consisting entirely of macroscopic measures. We start with Equation (10) by solving for lf,

$l_f = \dfrac{2 \theta_{si} G}{c^3}$, (20)
where time follows from the definition tf = lf/c. Replacing lf with Equation (20) gives

$t_f = \dfrac{2 \theta_{si} G}{c^4}$. (21)
Finally, reordering time to resolve mass,

$m_f = \dfrac{2 \theta_{si} t_f}{l_f} = \dfrac{2 \theta_{si}}{c}$, (22)
where interpreting S = θsi as a momentum yields the appropriate units. Most importantly, whereas the value for θsi is obtained from a macroscopic measurement, Planck’s approach is achieved as a theoretical construct. Establishing that these values are the same provides a new foundation with which to build a model based entirely on physical measurements. Also note, whereas lf and tf are proposed to be the smallest significant measures, mf is not; mf does play a central role in many expressions because it is a product of lf and tf, i.e. mf = 2θsitf/lf, and for that reason the term is retained.
The Informativity formulation parallels Planck’s, although the expressions are presented in quantized terms and are entirely formulated under the background independent framework of Informativity. An additional characteristic of the fundamental measures is that they are not a product of Planck’s formulations, Planck’s constant or any quantum term. Rather, values are derived using only the geometric relation between QLf, rLf and θsi. The expression depends on c, r, G and θsi, and all are resolved macroscopically. Where the Informativity differential approaches its limit of one half from Appendix A, the relation is rearranged to give lf = 2θsiG/c³. If the expression were not derived by observations regarding light and gravity consisting entirely of macroscopic measures, the fundamental measures would be unconstrained.
Planck’s approach provides a valid way to constrain the fundamental measures, but the solution provides no mechanism to confirm the approach through measurement, indirectly confirming the significance of the measures. The Informativity approach recognizes the significance of θsi and uses that to resolve lf, mf, and tf. Whereas both approaches arrive at the same conclusion, the ability to resolve the fundamental measures as a product of macroscopic phenomena is an important and decisive difference.
It should finally be noted that the fundamental measures lf and tf can never be directly resolved with identical or greater precision. This would defy their definition. The fundamental measures are the references against which everything is defined. It would be neither possible nor meaningful to measure a reference where the most appropriate reference is the reference itself. Fundamental measures can only be inferred as a characteristic of nature that indicates their significance.
3.3. Newton’s Constant
Using Equation (8) at the quantum scale to calculate G will show a difference with Newton’s presentation. Is G variable? No. The difference reflects the difference in precision between the geometric model of Informativity and Newton’s presentation: Newton’s expression does not include the geometric distortion effect inherent in the Informativity differential QLfrLf, as numerically assessed in Table 2.
For clarity, we work through a calculation. A distance of r = 1 meter is intentionally selected. Using Equation (20) for lf, the inverse gives us a count in 1 meter such that rLf = 1/lf; that is, rLf ≈ 6.187 × 10³⁴.
With only our understanding as prescribed by the Pythagorean Theorem and expressed in Equation (7), this is G/r² at a distance of 1 meter, and therefore numerically equal to G. The formulation depends on an understanding of lf, which may be resolved from the expression presented in Equation (20). The value is also identical to the most recent CODATA estimate of Planck’s formulation of length. The CODATA estimates of Planck units have changed over recent years, but those estimates continue to give support to the expressions
Table 2. Informativity difference in G/r2.
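The claim that the geometric formulation reproduces G at a distance of 1 meter can be sketched numerically. At r = 1 m the differential QLf·rLf is within roughly 10⁻⁷⁰ of its limit of 1/2, far below floating-point resolution, so the limiting form G = lf c³/2θsi is used; θsi = 3.26239 kg·m/s is the value quoted in the text, and lf is approximated by the CODATA Planck length:

```python
import math

theta_si = 3.26239        # kg m/s, quoted Shwartz-Harris correlation value
G_codata = 6.67430e-11    # m^3 kg^-1 s^-2
hbar     = 1.054571817e-34
c        = 2.99792458e8

l_f = math.sqrt(hbar * G_codata / c ** 3)        # l_f as the Planck length
G_informativity = l_f * c ** 3 / (2 * theta_si)  # limiting geometric form
print(G_informativity)    # ~6.6743e-11, matching CODATA to five digits
```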
While Informativity does provide a concise geometric expression for G, the term may also be understood as a physically significant product of two limits.
To understand, we will need more formal definitions for the fundamental limits. Begin by considering time as a convenient measure with which to better understand length and mass. We may then ask, what is the upper limiting relation of lf and mf with respect to time, tf? The two limiting expressions are $l_f/t_f$ and $m_f/t_f$.
The first expression embodies the number of meters traversed by light in a second. Similarly, the second expression embodies the number of kilograms that may be traversed in a second. By implication, each describes the maximum rate of change that may be observed in one measure relative to the other measure.
As such, we may interpret lf/tf as an upper bound to speed; such an interpretation is quite valid. Nevertheless, the focus of the expression is that the value is an upper bound to this observation. In a system with a fixed rate of change 1/tf and an upper bound to the observation of units of lf per tf, once that bound is reached, the observer can no longer distinguish a greater number of events. To do so would violate our understanding of the fundamental measures of length and time implying that we would be able to observe measures smaller than the fundamental measures. Moreover, it is not that physical phenomena cannot overlap in space-time, but that an observer has a specific upper bound to the number of events that may be distinguished.
Before we begin, note that up to this point we have distinguished counts of Planck units (i.e. nLp) from the fundamental units of Informativity (i.e. nLf). Moving forward, all counts that reference fundamental units will not carry the subscript f following the designated measure. The only exception to this rule will be the components of the Informativity differential, QLfrLf.
To express a count of lf, mf and tf, we would divide the rate by the respective unit measure.
Thus, observe that

$n_L = \dfrac{l_f/t_f}{l_f} = \dfrac{1}{t_f}, \qquad n_M = \dfrac{m_f/t_f}{m_f} = \dfrac{1}{t_f}, \qquad n_T = \dfrac{1}{t_f}$,

and hence
O5: A count of each of the fundamental measures with respect to any shared measure is the same.
Where one may describe matter as constrained to traverse no more than 2.99792458 × 10⁸ meters per second (i.e. the speed of light), we should say that observation is constrained to no more than 1.85492 × 10⁴³ units of measure per second. The comparison brings to our attention that change is constrained. For all measures, there exists a maximum frequency such that a target may have:
・ A maximum length frequency of lf/tf,
・ A maximum mass frequency of mf/tf and,
・ A maximum count frequency of 1/tf.
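The maximum count frequency quoted above can be reproduced; approximating tf by the Planck time built from current CODATA values gives ~1.855 × 10⁴³ counts per second (the paper's 1.85492 × 10⁴³ reflects an earlier CODATA revision, so the fifth digit differs slightly):

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

t_f = math.sqrt(hbar * G / c ** 5)   # ~5.39e-44 s (Planck time as proxy)
count_frequency = 1.0 / t_f          # maximum counts per second, 1 / t_f
print(count_frequency)               # ~1.855e43, cf. 1.85492e43 in the text
```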
Where G is expressed in terms of maximums such that

$G = \dfrac{(l_f/t_f)^3}{m_f/t_f}$,
we now recognize that Newton’s expression is a formal description of the maximum rate of change in space (i.e. the three dimensions) with respect to (i.e. divided by) the maximum rate of change in mass. In that there are no other combinations of the fundamental measures with respect to space, we find gravity a unique and singular phenomenon for which there are no other examples.
To further our understanding of gravitation, consider a cube with sides measured in terms of lf equal to the distance that light travels per second. We find that this cube contains a count of c3 units of daughter cubes, each having sides equal to lf. The parent cube provides a grid-like understanding of an inertial frame describing the maximum frequency of lf relative to tf.
Next, consider mf/tf as a scalar quantity defining a count of mass units―the maximum mass frequency―that may exist along the edge of the parent cube. Thus, dividing the cubic length frequency (lf/tf)3 by the mass frequency mf/tf gives a fixed relation for mass relative to an observer in space; in other words, this is the most appropriate understanding of the gravitational field relative to an inertial frame.
We expand on this approach by also noting that, where G expresses a property of gravity, the speed of gravity, sgravity, may be resolved by a process of factoring out known components. First, multiply G by the mass frequency, thereby removing the mass component. Next, reduce space-time by dividing out the cubic length frequency in two of the three dimensions, c², such that the linear speed of gravity is

$s_{gravity} = \dfrac{G\,(m_f/t_f)}{c^2} = \dfrac{l_f}{t_f} = c$.
Some might argue that this is merely a means of resolving the speed of light. In some sense this is true, but not in the context of expressions concerning the relative relation between length, mass, and time. In that context, we are seeking the maximum rate of change of mass with respect to space. To succeed in this endeavor, we must first factor out the mass frequency and then reduce space to a measure in one dimension. In this context, we find the rate of change for the phenomenon of gravity relatively in one dimension is c. This can already be seen from Equation (30) where a count of length measures lf will equal a count of mass measures mf such that nL = nM.
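The factoring argument can be replayed numerically: multiplying G by the mass frequency mf/tf and dividing by c² returns exactly lf/tf = c. This sketch assumes mf = lf c²/G, as in Equation (11), with lf approximated by the Planck length:

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

l_f = math.sqrt(hbar * G / c ** 3)
t_f = l_f / c
m_f = l_f * c ** 2 / G                 # fundamental mass measure, kg

s_gravity = G * (m_f / t_f) / c ** 2   # factor out mass, reduce two dimensions
print(s_gravity)                       # equals c
```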
3.4. Planck’s Constant
To build on our understanding of G presented in Section 3.3, we investigate how the use of macroscopic and quantum terms affects the calculation of physical measures. We begin by formulating a known Informativity expression that may serve as a reference. Dividing Equation (9) by G, substituting r = rLflf, and factoring c³/G, we obtain
The measure for θsi matches the angular measurements made by Shwartz and Harris. We derived lf because of the correlation of S and θsi. Comparing Planck’s formulation in Equation (1), where $\hbar = l_p^2 c^3 / G$, and substituting it into Equation (33) yields
where G and r² are macroscopic factors whereas ħ has a quantum origin; values of fundamental units, such as lf, are neither macroscopic nor quantum, but are treated as constants. For convenience, the macroscopic limit of the Informativity differential is taken, not its quantum limit. This is acceptable when working with macroscopic terms, but when working with quantum terms, including the Planck constant, that limit produces inaccurate results. The Informativity differential needs to be retained and expressions properly calculated with respect to the actual distance of the measured interaction. With respect to the conditions that lead to the measure of ħ, distance as a count of lf may be resolved by solving for rLf using its expression in Equation (C7) as derived in Appendix C,
Note that rLf must be a whole-unit count, 85lf. Resolving a quantum distance provides an understanding as to why Planck’s constant as it is currently defined is appropriate in expressions such as the fine-structure constant  . Hence, use of Planck’s reduced constant at an Informativity differential distance of 85lf produces the correct result. However, the expression is not appropriate in expressions consisting entirely of macroscopic measures. With G, both factors vary depending on the Informativity differential.
We understand this effect better by solving for ħ using Equation (34) and then comparing that to the currently recognized value. For this purpose, we see in Table 3 that the variation in ħ changes quickest within the first few lf.
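Assuming, as the section suggests, that the geometric distortion enters through the factor 2QLf·rLf (whose macroscopic limit is 1), its distance dependence can be tabulated. The factor moves fastest over the first few lf and is already within about 0.004% of 1 at the 85 lf separation cited above:

```python
import math

def distortion_factor(r_lf):
    """2 * Q_Lf * r_Lf at a separation of r_lf fundamental lengths."""
    q_lf = math.sqrt(1.0 + r_lf ** 2) - r_lf
    return 2.0 * q_lf * r_lf

for r in (1, 2, 5, 10, 85, 1000):
    print(r, distortion_factor(r))   # rises toward the macroscopic limit of 1
```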
If we wish to use ħ with macroscopic terms, then we need to resolve the value of ħ at a macroscopic distance. A good example would be Planck’s theoretical relations for length, mass, and time, which involve the macroscopic terms G and c. Solving Equation (34) for ħ at a macroscopic distance (rLf → ∞), then
With this value, the distance-adjusted Informativity and Planck formulations are now mathematically equivalent and the variation in G and ħ cancel out:
Each expression may be reduced to
where uncertainty exists in the derivation of lf, mf, and tf, we may express the fundamental measures in terms of θsi instead of ħ. This expression confirms that any geometric distortion in ħ is proportionally compensated with the same in G. We are also more aware of the important role played by the Informativity differential and have confirmed that these two very different approaches arrive at precisely the same result. Note further that this is a well-grounded physical expression that may be used to resolve each of the fundamental measures, thus providing significance to each. Finally, with a distance adjusted value for ħ, we can return to the Shwartz and Harris results as presented in Table 1 and cast them in terms of the ratio of arc length and diameter of a circle.
With from Equation (36) and from Equation (20) (where their ratio corresponds to units in radians as resolved in Equation (15)), these expressions precisely match the Shwartz and Harris values  . Whether presented in macroscopic or quantum terms, we find the angular measures presented in Table 4 may be resolved.
Table 3. Informativity difference in Planck’s reduced constant ħ.
Table 4. Angles in radians for the k vectors of the pump, signal, and idler for the maximally entangled states at the degenerate frequency with corresponding Shwartz and Harris values (Reference  ).
3.5. Fundamental Measures Correlated
In Equations (20)-(22), solutions to the fundamental measures are resolved and while they are appropriate for use in macroscopic terms, they are not representative of a distance-sensitive formulation. Here, we resolve distance-sensitive expressions and demonstrate their use in several well-known expressions. We begin with Equation (11) and expand the right-hand term, restoring the Informativity differential to mass that was factored out in Equation (10) with the limiting process (Appendix A),
We may now translate mass to a length and time by applying the fundamental transforms (Appendix B) where (B2) and where (B3) to obtain
We may further reduce length with (B3), time with (B2), and mass with (B1).
Thus, we have each of the fundamental measures in their most robust form. Each expression may then be reduced to
This expression provides the simplest understanding of length, mass and time and their relation. The correlation is used often and will hereafter be referred to as the fundamental expression.
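As a numerical consistency check, a minimal sketch: assuming lf, mf and tf coincide numerically with the CODATA Planck length, mass and time, and taking θsi from ħ = 2θsilf as in Equation (36), the fundamental expression holds to the precision of the constants. Note that, given these definitions, both sides equal ħ/c, so the check demonstrates internal consistency of the definitions rather than an independent fact.

```python
# Numerical check of the fundamental expression lf*mf = 2*theta_si*tf,
# assuming lf, mf, tf coincide with the CODATA Planck length, mass, time.
hbar = 1.054571817e-34   # J s
l_f  = 1.616255e-35      # m
m_f  = 2.176434e-8       # kg
t_f  = 5.391247e-44      # s

theta_si = hbar / (2 * l_f)     # from hbar = 2*theta_si*lf, ~ 3.2624

lhs = l_f * m_f                 # left side of the fundamental expression
rhs = 2 * theta_si * t_f        # right side
print(theta_si, lhs / rhs)      # ratio ~ 1.0000
```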
Note that we may also resolve expressions for other measures, such as energy. Using mass from Equation (41) and Einstein’s equation where nM is a count of the fundamental mass measure, then
Here and ; we may then write
In comparison, if we reduce as expressed in Equation (14), then Planck’s formulation is
The energy of one fundamental measure of mass is from Equation (49), and the energy of one photon is . Substituting from Equation (14) and resolving for Em, we have
Whereas Planck associated the energy of quantum states with harmonic oscillators that modeled the atoms lining the cavity, the correlation of energy between a fundamental measure of mass and Planck’s blackbody spectrum is precisely a product of 2π. While comparing the two is not a precise contextual match, the correlation does reinforce our prior observation that angular measure and momentum are one and the same and only as such can we fully appreciate a value of n = 1/2π.
3.6. Quantum Uncertainty
With the fundamental measures at hand, we turn our attention to Heisenberg’s uncertainty principle  , first presented in 1927. The principle may be described as a suite of mathematical inequalities that prescribe a fundamental limit to the precision with which pairs of physical properties of a particle can be known. These pairs are known as complementary variables. Pertaining to the position and momentum of a particle, the uncertainty principle states that the more precisely the position is determined, the less precisely its momentum is known. A more formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp, σxσp ≥ ħ/2,
was derived by Kennard  later that year and Weyl  in 1928.
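Before examining the Informativity refinement, the Kennard bound can be illustrated numerically: a Gaussian wavepacket saturates the inequality, giving σxσp = ħ/2 exactly. A minimal sketch in natural units (ħ = 1; the grid extent and the width σ = 1.3 are arbitrary choices for illustration):

```python
import numpy as np

hbar = 1.0     # natural units
sigma = 1.3    # arbitrary wavepacket width

# position grid
n = 4096
x = np.linspace(-40.0, 40.0, n)
dx = x[1] - x[0]

# normalized Gaussian wavefunction
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# standard deviation of position
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# momentum-space wavefunction via FFT (p = hbar*k)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
dk = 2.0 * np.pi / (n * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2.0 * np.pi)
prob_p = np.abs(phi)**2
p = hbar * k
mean_p = np.sum(p * prob_p) * dk
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dk)

print(sigma_x * sigma_p)   # ~ 0.5, i.e. hbar/2: the Gaussian saturates the bound
```

Any non-Gaussian state computed this way yields a strictly larger product, consistent with the inequality.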
With respect to Informativity, we find that our understanding of this product remains unchanged, but the components may be further refined. With the standard deviations in position and momentum, denoted by f(rLflf) and f(mfv), respectively, we may use mass as expressed in Equation (41) and replace the arc length ħ/2 with Equation (36). With mass incorporating the Informativity differential QLfrLf, we introduce the same in position,
Here the Informativity differential in position and momentum cancel out; the differences are the individual uncertainties. This observation is predicated on our modified understanding of mass.
Another issue concerns what happens when we reduce this formulation. First, we cancel out the Informativity differential, θsi and lf. With c = lf/tf and v a count of lf traversed per tf (denoted as nL), nM is a count of the mass measure and rLf a count of lf between the observer and target, then
With this, we see that uncertainty is threefold: mass, position and velocity. There are several notable outcomes. First, where v = c, the uncertainty is reduced to just mass. Second, note that time is not a term associated with uncertainty. Third, the boundary for these three terms is lf, which until now seemed only a convenient theoretical unit of measure. Therefore, where we find physical support for the Heisenberg uncertainty principle, we must also find lf to be of physical significance, defining the threshold.
3.7. Relativistic Dilation
Measurement quantization may be applied to Einstein’s dilation expressions, both Special Relativity (SR) and General Relativity (GR), by recognizing the dilation metric and substituting the respective fundamental measures. Where nLc is the count of lf traveled by light in a second and nL the count of lf respective of the velocity v between the observer and target, then
Where subscripts o identify the local frame and l the observed frame, then the corresponding quantized dilation expressions for SR are
Likewise, where escape velocity and where from Equation (31), then
where , Equation (18), is a subset of v2, Equation (12), then
Notably, 2, nLc, nM and nLr are fixed system values; the count nLc of lf traveled by light in a second, the count nM of mf representing the system mass and the count nLr of lf between an inertial frame and the center of gravity each contribute to describe a change in position nLlf per second in SI units (i.e. when multiplied by lf/tf).
Where v2/c2 from Equation (58) includes  in its domain, we do not invoke the equivalence principle. Rather, the dimensionless ratio (2nM/nLr)1/2 establishes the relationship between nL and the upper bound nLc with respect to nM, Equation (66). Thus, we describe gravitational dilation (i.e. GR) by replacing the SR term with the value-equivalent system ratio.
While Einstein disliked the concept of relativistic mass  , measurement quantization skirts the issue, describing measurement in a gravitational field without undefined values across the entire measurement domain.
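The count substitution for SR can be sketched numerically. This is a hedged illustration assuming the quantized dilation keeps the standard Lorentz form with v/c replaced by the count ratio nL/nLc, consistent with the definitions above; the velocity 0.6c and the Planck-scale values for lf and tf are illustrative choices.

```python
import math

# Illustrative values for lf and tf, assumed numerically equal to the
# Planck length and time.
l_f = 1.616255e-35   # m
t_f = 5.391247e-44   # s
c = l_f / t_f        # speed of light as lf/tf

n_Lc = c / l_f       # count of lf traveled by light in one second
v = 0.6 * c          # arbitrary relative velocity
n_L = v / l_f        # count of lf traversed per second at velocity v

# dilation factor from counts vs. the classical Lorentz factor
gamma_counts  = 1.0 / math.sqrt(1.0 - (n_L / n_Lc) ** 2)
gamma_classic = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(gamma_counts, gamma_classic)   # both ~ 1.25 at v = 0.6c
```

The two factors agree because nL/nLc reduces to v/c; the quantized form simply recasts the ratio as whole-unit counts.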
3.8. Hubble’s Constant
We will now take the principles of fundamental measure and look not at the very small, but the very large: the cosmological properties of our universe, such as the expansion of space. The exploration will begin with a more defined understanding of expansion in the traditional terms presented by Hubble and then build on that foundation to explore dark energy, the inflation prior to expansion, and then assemble everything that transpires from the birth of the universe to what we see today. Notably, the expressions of Informativity do not suffer from infinities or limits that restrict our understanding of phenomena.
We may resolve Hubble’s constant using the principles of Informativity by considering the simplest relation between length, mass, and time, i.e., the fundamental expression, lfmf = 2θsitf. But first, we will need to expand the expression to include counts of the fundamental measures, nL units of length, nM units of mass and nT units of time such that
If the system is the observable universe, we may propose that the elapsed time (an increasing count in nT) must correspond to an equivalent increase in counts for either length or mass. Note that these are system properties and are not necessarily applied in a scalar fashion from the point of view of an inertial frame. System properties are applicable only with respect to the system as defined in the above expression. We will go over the process of applying system properties to derive values in subsequent sections. For now though, with respect to the fundamental expression, note that:
O6: The values of lf, mf, and tf are invariant. Given that the component measures resolved in Equations (1)-(3) are known to be invariant, we find support for invariance of the fundamental measures in the local frame. Where c = lf/tf is invariant, it follows that the ratio of lf to tf must also be invariant, each constrained by the other.
O7: The measure lf is physically significant. Support for the physical significance of lf may be found in the example of momentum and velocity as applied to the uncertainty principle. Using Informativity, the product may be reduced such that nMrLfnL ≥ lf, Equations (53)-(57), thus demonstrating the significance of lf.
O8: Any count of lf must equal a count of tf. As c = nLlf/nTtf, support for an invariant value for the speed of light c cannot be maintained unless nL = nT.
O9: The count nM must equal an invariant count of 2θsi (in this case, nM = 1). Any variation in the count of nM is in conflict with supporting the conservation of momentum.
Where these constraints are strongly supported, we conclude that the elapsed time (an increasing count of tf) must correspond with a universe that also has increasing length (an increasing count of lf). A better description of space is not as a process of stretching, but as a geometric relation corresponding to an increasing count of length measures equal to the same count in time measures. New units of lf are added to the reference system uniformly and in a discrete manner. The process is best understood as a reference system that increases in volume in proportion to an increase in time, the two measures being defined against one another such that the ratio of counts of lf with respect to tf is fixed.
Let us take this moment to reaffirm our understanding of space. Specifically, any inertial frame that presents no net force on an observer defines the origin of a reference frame for that observer that is at rest with respect to the measure of space. When we say that space expands, we say that static points of reference in relation to the inertial frame experience an increasing relative distance without experiencing a net force.
With this, let us also take this moment to differentiate the expansion of space (universal expansion) from the expansion of matter within space (stellar expansion). There is no specific correlation between the two. At this point, we can only interpret the expressions above in such a way that space expands and that matter rests within space moving relatively by whatever means depending on its initial conditions.
It follows that a background independent system that has aged by a time AU = nTtf must expand correspondingly by an equal count of lf such that lf/tf = c for all inertial frames within the system. This may occur specifically when nL = nT. Thus, with respect to an inertial frame, the expansion must occur at the rate H = 1/AU. To place H in the proper form of km/s per megaparsec, we multiply the inverse of the age of the universe in seconds by the number of kilometers in a megaparsec.
Given the age of the universe is 13.799 × 10^9 years, where there are 3.15576 × 10^7 seconds in a Julian year and where there are 3.0857 × 10^19 kilometers per megaparsec  , then space expands at a rate of
We denote the expansion of space, the universal expansion, by H to distinguish the value from Hubble’s descriptor H0, which describes the rate of expansion of the universe obtained from the recession of galaxies from one another in space, i.e., the stellar expansion. Converting H to SI units, we may also present the universal expansion as a frequency where
Given the general expression as applied to stellar expansion is typically calculated using Hubble’s law, expressed as v = H0D, we find that this law may be understood in terms of Informativity as a factoring of the fundamental expression presented in Equation (47). As such, the value for H may be resolved for any moment in time as a necessary outcome in preserving the relation between length and time such that lf/tf = c.
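The conversion described above can be sketched numerically using the constants quoted in the text; the result is the Informativity value of H in km/s per megaparsec.

```python
# Hubble rate as the inverse age of the universe, converted to km/s/Mpc
# using the constants quoted in the text.
age_years    = 13.799e9      # age of the universe, years
sec_per_year = 3.15576e7     # seconds per Julian year
km_per_mpc   = 3.0857e19     # kilometers per megaparsec

A_U   = age_years * sec_per_year    # age in seconds, ~ 4.3546e17 s
H_si  = 1.0 / A_U                   # expansion rate, 1/s (~ 2.2964e-18)
H_kms_mpc = H_si * km_per_mpc       # ~ 70.86 km/s per Mpc
print(H_si, H_kms_mpc)
```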
We also find that the value for H (but not necessarily H0) decreases as the universe ages. Although each of the prior Informativity expressions describes the expansion of space and not the expansion of matter, both measures H and H0 demonstrate a significant correlation as demonstrated in several studies. An analysis of the Wilkinson Microwave Anisotropy Probe (WMAP) data obtained over a seven-year period combined with other cosmological data using the simplest version of the ΛCDM model has produced a complementary value of  . Another study using time delays between multiple Hubble space telescope images of distant variable sources produced by strong gravitational lensing, resolved a value of  .
Whereas the rate at which galaxies are moving away from one another may be faster or slower than the expansion of space, most studies interestingly show a stellar expansion that nearly coincides with the Informativity calculation of H, the universal expansion. This correlation provides support for a model where matter has had almost no motion relative to space since the Big Bang and has been carried along with the expansion, stationary with respect to space, minus the effects of gravitational attraction.
Lastly, it should be noted that these calculations do not take into account the period of expansion referred to as inflation; we may note, however, that the duration of the inflationary epoch is smaller than the precision to which the age of the universe is currently known. This will be revisited in Section 3.14.
3.9. Self-Referencing and Self-Defining Measures
The universe as a self-defining system of measures is an important frame of reference when developing expressions that describe the universe. Our current model of measurement is premised on a framework of self-referencing measures. That is, we define each measure as an understanding of other measures. When the frame of reference is the universe, that methodology presents a problem. The universe has no framework with which to define measure. Specifically, the universe is that which has no relation to any other thing. The issue pushes us towards considering the measurement of the universe with measures defined relative to the universe.
In this section, we consider the idea that a framework of self-defining measures can describe characteristics of the universe (i.e. dark energy). Phenomena then consist of both self-referencing and self-defining terms in respect to two frames of reference. To provide a grounded understanding of these differences, we present measures for both, starting with the self-referencing expressions, Equation (47):
To resolve self-defining expressions, we then expand these expressions in terms of fundamental expressions, set the target measure equal to a value of one (that is, a measure defined against itself) and solve for counts of the remaining two measures.
To avoid confusion, we denote the self-defining measures as well as their counts with the subscript u. As an example, for length lu = 1, for mass mu = 1 and for time tu = 1. This approach provides physically significant expressions that describe properties of our universe with respect to the universe. In the interest of brevity, corresponding derivations for length and time are carried out in Appendix D.
To understand the counts of length nLu and time nTu we start with the self-referencing expression for mass from Equation (75) and then
where c = nLlf/nTtf, then nL = nT. To resolve this expression for mass, we set the value of mf equal to one. Substituting the representative term, mu = 1 for mf, reducing and then substituting the self-referencing term for mass back into the expression (i.e., mf = 2θsi/c), then
The approach presents an expression that is no longer self-referencing, but a self-defining count ratio, keeping in mind that nLu/nTu is also dimension-free. Hence, whereas the ratio is equivalent in value to mf, the expression has no units. Nondimensionalization is a physically significant feature of Informativity that is a product of the self-defining properties of a system.
From this ratio multiplied by the speed of light, it follows that the value for Hubble’s constant HU using self-defining terms is
m/s per universe. (80)
To distinguish the value from H, which is measured per megaparsec, we use here a subscript U to indicate that we are using the self-defining reference, the universe. Likewise, where mfc = 2θsi from Equation (75), we may write the value-equivalent expression
Although equivalent in value, the expression differs in units. Expressing the Hubble constant in this way may be convenient, but presents a confusing mix of terms that are both self-referencing and self-defining. When expressions have mixed terms that derive from different frames of reference, unit analysis fails. The observation differs dramatically from an error in calculation. Errors are associated with a difference in value and units. Informativity expressions that mix differing frames of reference differ only in units. This aspect is further explored in Appendix E.
3.10. Size of the Universe
Having resolved that the time frequency must correspond to an expanding space, as described in Equation (71), we may now substitute and group the counts of the fundamental units to resolve expressions for the diameter and age of the universe. First, multiply both sides of the fundamental expression, Equation (47), by (nTtfnLlf), and then regroup terms. Next substitute the self-defining expressions for the diameter of the universe for DU = nLlf (billion light-years) and its age AU = nTtf (billion years); we then have
Next, we move DU to the right and break down the right portion to determine its value. With AU = nTtf, DU = nLlf, and mf = 2θsi/c from Equation (75), then
The result is a self-referencing expression. We may formally recognize the frame of reference as the system by replacing the count terms with their respective system terms, nLu and nTu. The expression may then be reduced with the self-defining expression mf = nLu/nTu from Equation (79),
Thus, where 2θsiAU/DU = 1, and mfc = 2θsi from Equation (75), then Equation (84) may be reduced to
where the second expression follows directly from the first, then 2θsiAU/DU = 1 may be substituted into Equation (84) and reduced to produce the latter. These expressions also confirm that the system constant between diameter and age is precisely 2θsi, as resolved in the self-defining expression, Equation (81). The expansion of space advances at (1/2θsi) × 100 = 15.326% of the reference. Notably, without the introduction of self-defining measures, the expressions are mere extrapolations of measures in the local frame. Only by setting our frame of reference to the universe can we produce valid descriptions of the universe from our perspective.
And finally, of notable importance, in addition to our analysis of the Heisenberg uncertainty principle (i.e. Equation (57)), the latter expression demonstrates the physical significance of mf.
In the physics literature, measurements by Riess et al.  show a stellar expansion of 10% - 15%. DU and AU are measured at 91 billion light-years and 13.799 ± 0.021 billion years, respectively  . In 2011, Barrow and Shaw  worked out formulations comparing the cosmological constant and the age of the universe, predicting a corresponding relationship. In 2015, analysis of WMAP data by Gasanalizade and Hasanalizade  also confirmed the correlation between the age of the universe and its expansion.
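The diameter relation can be checked against the quoted measurement. This is a sketch taking 2θsi from the stated expansion fraction of 15.326%, and noting that c·AU expressed in light-years is numerically the age of the universe in years.

```python
# Diameter of the universe from DU = 2*theta_si*c*AU, with 2*theta_si
# recovered from the stated expansion fraction 1/(2*theta_si) = 15.326%.
two_theta_si = 1.0 / 0.15326    # ~ 6.5249, dimensionless count ratio
c_AU_ly      = 13.799e9         # c * AU in light-years (age in years)

D_U = two_theta_si * c_AU_ly    # diameter in light-years
print(D_U / 1e9)                # ~ 90.0 billion light-years
```

The result, roughly 90 billion light-years, sits close to the 91 billion light-years cited above.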
It is important not to confuse these expressions, which describe a geometric expansion of space, with the expansion of galaxies in space. Without knowing the ongoing conditions between matter and space, there exists no means of correlating the two and providing a means of measurement.
Note that, whereas each expression describes a property of the universe, the terms and their associated units can be misleading. As noted previously, system expressions are appropriately expressed in self-defining terms. Mass has already been resolved in Equation (79) as mf = nLu/nTu. From Equation (81), (nLu/nTu)c = 2θsi with DU and AU in meters and seconds, then the self-defining presentation of Equation (87) is
The expression confirms itself and our understanding of self-defining measures. For DU = nLulf, AU = nTutf, and c = lf/tf, the expression simplifies to 1 = 1. Like self-referencing measures, self-defining measures are also measured against themselves. Note also that removing the ratio nLu/nTu will give the scalar expression for an expanding volume with respect to an inertial frame. Finally, the expression in Equation (81) for Hubble’s constant (m/s with respect to the universe) combined with Equation (89) are, depending on the frame of reference, value-equivalent to each of the following three equalities,
Most notably, we now see that the repeated appearance of 2θsi is in fact also a form of Hubble’s constant defined with respect to the universe. The term shows up not only in Equation (87) as the system constant that incorporates the co-moving element of universal expansion, but in many other expressions, such as the fundamental expression lfmf = 2θsitf, which relates length, mass, and time in their simplest form. The term shows up in the definition of Planck’s reduced constant ħ = 2θsilf from Equation (36) and Newton’s gravitational constant G = lfc3/2θsi from Equation (16). In short, the system constant is less a constant of the universe and more a descriptive count ratio that serves as a conversion metric between the local frame and the universe. With this broader understanding, the physical constants may be understood as a collection of measurement ratios, each a variant of the system constant.
3.11. Fundamental Properties of the Universe
The fundamental expression lfmf = 2θsitf is a system definition that not only provides the foundation for how length, mass, and time interrelate within the system, but also defines both upper bounds and properties for the system. In this section, we present expressions that enable the calculation of some of those upper bounds to measurement. Upper bounds are important as they will allow us to calculate the visible, observable and total characteristics of our universe.
In the same way that length frequency is an upper bound, we may resolve system properties by multiplying the age of the universe AU and the corresponding radial system constant θsi (where AU corresponds to half of 2θsi) by the two system frequencies and corresponding system count. With seconds  , i.e., the current epoch, then
Each expression is best understood as an observational bound, but may also be understood as a physical property where θsi is the radial system constant of a co-moving reference in the expansion beyond which measurement information cannot be distinguished for greater frequencies. For Mf, the fundamental mass of the universe is a scalar bound only in the present and does not reflect the size of the observable universe. Where the fundamental mass would theoretically define what mass can and cannot be observed, we must apply specific geometric principles to account for the expansion, for limitations in the transmission of light and adjustments related to a skewed view of measurement from our self-referencing perspective to properly resolve visible and observable mass.
Also note, where Mf is a function of mass frequency, the fundamental mass mf differs from the other measures in that it is not the smallest measurable mass. Rather, mf is a composite of our understanding of lf and tf and therefore an important countable measure relative to the other measures. The ability to measure phenomena smaller than mf does not violate the mass frequency bound, but understanding how it constrains observable mass requires additional steps, discussed in the next section.
Conversely, the values nT and RU are scalar bounds; nT, for instance, is a count of time units elapsed, whereas RU is the co-moving radius that corresponds to that count. The radius RU = AUθsic and the dimensionless nature of the radial system constant θsi can be verified by starting with Equation (89), substituting in Equation (81), dividing by two to obtain the radius RU, and then multiplying AU by c to convert to SI units.
One may have noticed the arbitrary introduction of θsi, the radial system constant. This is an important system parameter that applies to most Informativity expressions and may be resolved from Equation (87). Without the radial system constant, the expression would represent a calculation applicable to a volume expanding at the speed of light, but would not express the expansion we see in the universe, which requires a self-defining frame of reference. One might also consider applying the radial system constant to Equation (91) such that nT = AUθsi/tf. As 1/tf is a count and not a measure, the radial system constant is not applicable.
Interpreting the units for θsi can be challenging as well. This aspect is discussed in Appendix E, but for now this consideration may be factored out. Given AU = RUtf/θsilf from Equation (92), then the fundamental mass is
Moreover, mf/lf is the last unaccounted for frequency bound. Reducing the fundamental mass expression into more familiar physical measures where the age of the universe is AU = nTtf in seconds and mf/tf = c3/G from Equation (31), then
The four ratios 1/tf, lf/tf, mf/tf, and mf/lf each describe an important property of our universe. Furthermore, Equation (96) is valid only because the measure of G is made macroscopically and as such corresponds to the proper value reflective of the Informativity differential.
We may next resolve the volume of the universe VU using its radius RU from Equation (92),
To resolve the corresponding fundamental mass density with respect to the universe using the expression ρf = Mf/VU, we substitute in the expression for volume and reduce with RU = AUθsic from Equation (92), where Mf = RUmf/lf from Equation (94), h = 4πθsilf from Equation (14) and from Equation (47) such that
With Planck’s constant adjusted for the Informativity differential from Equation (47) then the corresponding fundamental mass density is
Unlike traditional density calculations, the fundamental mass density is not a value that corresponds to a mass density we may observe. Note, we are taking a fraction of the total mass that exists in the universe and then dividing that by the total volume of the universe. While the result is inapplicable to experimental measure, the ratio fills an important void in some calculations.
When working with self-referencing and self-defining phenomena we will find that knowing the mass, size and age of one or another framework can tell us nothing about the relationship between these two frameworks. Having expressions that decisively describe that relation can be instrumental when we wish to translate measures between frameworks (i.e. calculate the volume of the universe where we can only measure the mass that can be seen).
Lastly, we have not fully explored how units should be resolved in Informativity. Unit analysis is a property of the reference frame being used and can dramatically differ from the traditional assumptions one makes where all units are resolved as self-referencing measures. For a greater understanding of unit assignment where two frames of reference are at work in tandem, please refer to Appendix E where an example expression from this section is used to clarify the process.
3.12. Dark Energy and Dark Mass
On the topic of dark energy, several values for properties of the universe have been calculated and presented. The same may be accomplished to resolve the distributions of visible, observable and dark mass (that which is not observable) and an understanding of dark energy. These calculations have been possible with the ΛCDM model, but such calculations have been met with several shortcomings, such as the “coincidence” problem and the cosmological constant. At the same time, arguments presented by Karl Popper have also brought to our attention the possibility that ΛCDM as currently understood is built upon a foundation of conventionalist stratagems, rendering it unfalsifiable  . This is not to say that ΛCDM does not provide physically significant insight into several observed phenomena, but it does suggest that there is opportunity for new discoveries.
To provide direction where several extended theories of gravity have had success, Christian Corda presents a paper concisely demonstrating that the field of possible models may be reduced in relation to an understanding of gravitation with respect to GR. In his paper  , this may be achieved as better measurement data in the study of gravitational waves are obtained. Through a concise analysis of interferometer response functions, classes of gravitational theories may be constrained and even removed from consideration.
Informativity provides a new opportunity that appears to avoid many of the aforementioned shortcomings and directly addresses several of the foundational issues mentioned. Notably, Informativity offers the opportunity to describe and resolve values for properties of phenomena we currently rely on ΛCDM to understand. For example, where mass distributions lack a physical marker defining their relation (i.e. a fundamental mass), a distinct resolution of mass distributions is now possible. As well, Informativity is quantum in its presentation, providing expressions that are not bound in scope, valid for all measures in the physical regime.
Before proceeding, we introduce a new term, dark mass Mdkm, to distinguish physical characteristics that cross two domains. The term dark energy is sometimes used in the context of a mass/energy representing a part of the total mass/energy of the universe. In other situations, dark energy is used to discuss the energy associated with the expansion of the universe. These two properties are related, but they are best understood separately using their traditional definitions.
With Equation (93) we have introduced the notion of mass frequency mf/tf as an important ratio that describes a measurement bound beyond which fundamental mass events can no longer be distinguished. The idea of mass frequency is that there is a maximum count of mass events that may be distinguished relative to a count of time events. As expressed in Equation (28), that count is 1.85492 × 10^43 units/s.
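This count corresponds to 1/tf. A sketch, assuming tf coincides numerically with the Planck time computed from CODATA-style constants:

```python
import math

# Mass-event rate 1/tf, assuming tf coincides numerically with the
# Planck time computed from CODATA constants.
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

t_f  = math.sqrt(hbar * G / c**5)   # ~ 5.391e-44 s
rate = 1.0 / t_f                    # ~ 1.855e43 events per second
print(rate)
```

The result agrees with the quoted 1.85492 × 10^43 units/s to the precision of the constants used.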
Unlike the speed of light lf/tf, we may not use the scalar interpretation of mass frequency to resolve a specific boundary (i.e. visible, observable or dark mass). The relationship is complicated by our point of view and requires a translation between the self-referencing and self-defining measurement frameworks.
We may overcome this challenge by approximating the mass density ρm of the universe as the product of the expression for critical density ρc and the mass distribution Mobs associated with observable mass. Although mass density is contingent on the spatial curvature of the universe, based on observations of the CMB from the WMAP data, the curvature of space is measured to be close to zero. Hence, observable mass MO may be expressed as
We may now calculate the dark mass distribution Mdkm. Where we know the observable mass relative to the critical mass and we know the fundamental mass, we may take advantage of their relation to resolve the relative distribution of observable mass above the fundamental mass. The relation (Mobs − Mf)/Mf describes the relative distributions with respect to fundamental mass.
Where , where G = c3tf/mf from Equation (22), where RU/AU = θsic from Equation (92), and where the critical density of the universe ρc = Hf2/8πG is a function of the Hubble frequency from Equation (72), then
Next, given that the sum of the observable Mobs and dark Mdkm mass distributions must equal one, i.e. the two measures account for all mass in the universe, then
The respective distributions are then
To complete our understanding of these distributions we will further refine our understanding of Mobs. Presenting vU as the self-defining rate of expansion between the end points of DU (twice the radial velocity) we may write Equation (87) in SI units as DU = 2θsicAU such that
Note that Hubble's constant, which is also understood as a measure of expansion, is defined with respect to a self-referencing, locally defined fixed distance per unit of elapsed time. Defined as such, the rate of expansion is not constant, but decreases with the passage of time. Conversely, universal expansion with respect to the self-defining frame is fixed.
Where vU = 2θsic m/s is the velocity of twice the radial expansion, this velocity may be expressed as a percentage of a total. We construct the ratio by first noting that the self-defining total distribution is Mtot = 100%, that is, Mtot = 1. Similarly, the velocity of twice the radial expansion vU corresponds to the ratio of the observable to visible distributions Mobs/Mvis with respect to the total. Thus, where the locally defined speed of the expansion at the outer edge of the universe is the speed of light, vU = c, the visible mass distribution is
This is the mass distribution that we may measure in the present (i.e. what we can see). Likewise, the visible mass MV in kilograms as a function of the observable MO and universal MU mass is
Recognizing that an inertial frame may observe only the expansion-adjusted percentage of the observable mass with respect to the total, we have taken the distribution ratio and applied the speed of light constraint to resolve the visible mass distribution as a property of the self-defining frame.
Finally, we may subtract the visible from the observable mass to resolve the unobserved mass Muobs that will be observed but has not yet reached the observer.
As such, we have resolved that mass will fall into one of the following distributions: 68.3624% dark mass and 31.6376% observable, the latter comprising 4.84884% visible and 26.78876% unobserved (each with respect to the whole). The values match modern calculations, where mass/energy distributions of 68.3% dark energy and 31.6% observable (visible + dark matter + 0.1% neutrinos), including 4.8% visible and 26.8% dark matter, have been resolved.
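These percentages can be reproduced from the observable fraction and θsi alone. A minimal sketch, taking θsi = 3.26239 and the observable fraction 0.316376 from the text as inputs, and using the relation that the observable distribution divided by the visible distribution equals 2θsi:

```python
# Sketch: checking the quoted mass distributions against the 2*theta_si ratio.
# Inputs are taken from the paper rather than re-derived here.
theta_si = 3.26239
M_obs = 0.316376               # observable fraction of total mass (from the text)

M_vis  = M_obs / (2 * theta_si)  # observable/visible ratio is 2*theta_si
M_dark = 1.0 - M_obs             # distributions must sum to one
M_uobs = M_obs - M_vis           # observable but not yet seen

print(M_vis, M_dark, M_uobs)   # ~0.0484884, 0.683624, 0.2678876
```

The computed fractions match the quoted 4.84884%, 68.3624% and 26.78876% figures.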
Naturally we are compelled to ask why the dark energy distribution would match the dark mass distribution, and why the dark matter distribution would match the difference between observable and visible. The terms identifying these phenomena have been attributed characteristics that are not all related to a single common phenomenon. For example, properties of gravitational attraction within galaxies, with respect to rotational characteristics, have been associated with the term dark matter. The energy properties of these distributions, however, are tied to the energy associated with each phenomenon. While modern theory has identified the distribution values, it is a deeper understanding of their geometric origins that needs assessment.
Also relevant is a study of the CMB published in 2015 which presented compelling data that dark matter is fine dust and can be measured by studying its gravitational effects on galaxies. To integrate the results of this research, we combine dark matter with visible matter and find a mass percentage of 31.6% associated with the observable distribution, which matches our expectation.
For historical interest, an expression that relates visible, observable and total mass may be organized in the form
where Mtot = 1 (100% of the mass), we find that dividing the observable distribution by the visible distribution gives 2θsi. It is notable in how many ways the system constant makes itself accessible: a straightforward ratio of two measures sits at the center of a long search to understand dark energy.
Of particular interest is the invariant nature of these distributions. To demonstrate, first consider the fundamental expression from Equation (47). Presented in the form θsi = mfc/2, we recognize invariance in θsi where it is also agreed that the speed of light is a constant and our understanding of fundamental mass does not change. A variant measure of mf is possible only if balanced by an equal change in θsi, but such a model presents serious conflicts. For one, if we look to Equation (31), which expresses gravity as , then any change in mf would also present a corresponding change in gravity. Similarly, where E = 2θsic from Equation (49), we see that any change in θsi would also result in a change of energy in a system. Thus far, there is no support for a change in gravity over time or a violation of conservation of momentum. Current observations support the conclusion that θsi and mf are invariant. Where θsi is the only non-integer value in the expressions that describe mass distribution (i.e. Equations ((109), (110))), we may state that the distributions are also invariant.
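The numeric consistency of θsi = mfc/2 can be checked directly. A minimal sketch, using the CODATA Planck mass and length as stand-ins for the fundamental measures (an assumption of this sketch), which also checks the equivalent form θsi = ħ/2lf:

```python
# Sketch: numeric check that theta_si = m_f*c/2 and theta_si = hbar/(2*l_f)
# both reproduce the quoted value 3.26239. CODATA Planck units are used as
# stand-ins for the fundamental measures (an assumption of this sketch).
c    = 2.99792458e8      # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J s
m_f  = 2.176434e-8       # Planck mass, kg (stand-in for m_f)
l_f  = 1.616255e-35      # Planck length, m (stand-in for l_f)

theta_from_mass   = m_f * c / 2        # kg m/s
theta_from_length = hbar / (2 * l_f)   # kg m/s

print(theta_from_mass, theta_from_length)  # both ~3.2624
```

Both forms agree with each other and with the quoted value to about five significant digits.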
The relation between the two frameworks may now be resolved with the following equality, each an expression for dark mass,
As presented in Figure 3, we may demonstrate the relation and thereby gain a better understanding of dark mass. For instance, if we consider the case where the total mass equals the fundamental mass, then the expression reduces to
As such, the visible, observable and total mass are value equivalent and there is no dark mass. Notably, the expression demonstrates that there is no affinity for different forms of mass. The distributions are a geometric consequence of the total in relation to the fundamental. Where a total greater than the fundamental will present two mass distributions, one observable and one not, we find no support for a physical difference between the two. Thus, dark mass is also mass that cannot be seen because it is in excess of the mass frequency constraint with respect to the self-referencing frame.
Figure 3. Relative measure of mass.
We may resolve the critical density ρc of the universe by taking the Friedmann equations (the 00 component of Einstein's field equations) and setting the normalized spatial curvature, k, equal to zero such that
to estimate the mass density associated with each distribution:
Solving for observable mass MO and dark mass MD we find
To place oneself contextually, note that these values are intrinsic properties of our universe, outcomes of the logical relationships established and described in this paper as expressed by the fundamental expression. When compared to measurement data, these values correspond to what we see.
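For orientation, the conventional k = 0 critical density can be sketched numerically. The value of H0 below is an assumed representative measurement, and the standard Friedmann form ρc = 3H^2/8πG is used rather than the paper's Hubble-frequency formulation:

```python
import math

# Sketch: conventional critical density from the k = 0 Friedmann equation,
# rho_c = 3 H^2 / (8 pi G). H0 is an assumed representative value; the
# paper instead works with its own "Hubble frequency" H_f.
G  = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.8e3 / 3.0857e22   # ~67.8 km/s/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_c:.3e} kg/m^3")  # ~8.6e-27 kg/m^3
```

This is the conventional benchmark against which the paper's density estimates may be compared.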
3.13. Mass Accretion
Thus far, the expressions of Informativity have been consistent with modern theory. But, drawn between the expressions is an unexpected result, a universe with increasing mass. In this section we will describe the issue. In the next section we will describe how the result presents a complete story of inflation, the resulting CMB and the expansion that follows. And finally in Section 3.15, we will calculate the age, energy, density and temperature of the CMB, those calculations matching our best observations to four significant digits.
As noted in Equations ((109), (110)) the observable and dark mass distributions are invariant. Specifically, the dark mass distribution as presented in Equation (117) is
But, we also know from Equation (95) that the fundamental mass Mf increases with time.
In short, where the dark mass distribution is invariant, the fundamental mass cannot increase unless the observable and thus the total mass of the universe are also increasing. The conflict may be avoided if either θsi or mf offset the increase in the count of fundamental units of time nTu, but as has been argued following Equation (116) such a proposition presents a world where gravity changes over time and momentum is not conserved. There is no support for either of these effects. The mass of the universe must be increasing.
To gain a greater understanding of mass accretion, let us solve for the total mass using the observable mass as resolved from Equation (127),
Using Equation (118) to solve for total mass then
where the total mass of the universe may be expressed such that Mtot = nMumf, then a formal definition for system mass accretion Macr may be expressed as
Finally, where mass accretion occurs at a constant rate then the volume of the universe is accelerating at a volumetric flow rate of
This tells us that mass accretion does not demonstrate a linear relation to space.
3.14. Quantum Inflation
Inflation is a predicted consequence of a referential constraint that is unique to the early universe. The conditions that lead to inflation are specific to limitations in distance measurement as defined by the Pythagorean Theorem. Where a frame of reference is confined to an initial size equal to one fundamental unit of length (i.e. where rounds to a whole-unit value of 1), the ability to reference a point in space becomes an instrumental factor in its expansion, constraining the spatial frame while accreting mass within. As such, an inflationary epoch characterized by a quantum decelerative process transpires until points of reference may occur outside of the existing referential framework (i.e. where rounds up to 2) at which time the inflationary epoch ceases and the universe expands at the speed of light releasing the mass accreted within as CMB.
Informativity describes an inflationary period that is distinctly different from that of modern theory as well as from previously entertained models such as steady state theory. For example, whereas the steady state model has mass accreting with a linear relation to space, Informativity does not. Most importantly, Informativity allows for verifiable calculations of the age, size, density and temperature of the CMB. Informativity also differs significantly from modern theory: notably, inflation precedes the Big Bang, the rate of expansion during inflation is very slow, and at the conclusion of inflation mass/energy initially remains isotropic with respect to the expansion of space. For these reasons, we will distinguish this period with the term quantum inflation.
Before we present expressions describing quantum inflation, let us first gain a better understanding of the mass accretion that is occurring during the quantum inflationary epoch. To do so, we will need a formal understanding of the upper bound to mass density in terms of fundamental units. Using escape velocity ve where
we substitute the speed of light c for ve, distance as a count of lf, r = nLlf, mass as a count of mf, M = nMmf, and the gravitational constant as defined in Equation (31), G = tfc^3/mf.
Ignoring spatial curvature, we may then say that the upper bound to mass density may be characterized as a region of linear measure where there are at most two units of mf for each unit of lf. In terms of three dimensions then there is a bound of units of mf per cubic unit of lf such that mass has an upper density bound of
As defined by the fundamental expression, lfmf = 2θsitf, a greater count of mf with respect to lf would present a relationship between lf and tf that is greater than the speed of light. This is not observed and as such ordinary baryonic matter may not exist with a greater density.
Using Equation (134) for total mass where nT represents a count of tf corresponding to time elapsed since t0, for volume and Equation (92) RU = θsiAUc for the radius of the universe in SI units, then the mass/energy density of the universe may be expressed as
The next natural step might be to solve for time elapsed until the energy density of the universe falls below the critical density of mass. Setting ρU equal to the upper bound for mass density ρ and solving for the time elapsed from t0, then
The calculation is a novelty in that it ignores an important principle of self-defining referential systems. To this point there exists no frame of reference external to the outer edge of the universe. Where spatial referencing is defined in terms of the Pythagorean Theorem, the ability to reference a radial point outside of a whole-unit count of lf is constrained. We will need to modify the expression to account for this constraint. As an example, we may calculate the rate of expansion at one second.
We begin with the fundamental expression in expanded form as described in Equation (70). Note that we have added the prefix u to denote that the expression is a self-defining representation of the universe as defined with respect to itself. We then organize the expression into a form that more closely corresponds to Equation (85).
As such, the expression describes a universe that expands at the speed of light. To appropriately modify the expression, we must note a differing count of lf other than that afforded by the relation defined by c = lf/tf. To avoid confusion we will use the term ~nLu to represent the count of length units during the quantum inflationary epoch.
where G = lfc^3/2θsi from Equation (31), where RU = AUθsic from Equation (92), where from Equation (122) and where we have elected to resolve the rate of quantum inflationary expansion vi at AU = 1 second, then
As AU increases in the denominator the expansion decelerates. But, as we will demonstrate, that is not what brings quantum inflation to an end. For that, we must return to the definition of fundamental length.
3.15. Cosmic Microwave Background
Taking the integral of the velocity expression describing quantum inflation, we may obtain a corresponding expression for the radius of the universe,
Note that the radius is negative until AU is equal to or greater than one second. A better term for the radial distance during this period might be "undefined", but the result is best understood as a less refined description of "uncertainty". Also note that for 1.09833 years the radius is less than the fundamental measure for length. That said, mass accretion occurs irrespective of the lack of a measurable spatial frame (i.e. a time when all length measure is defined against the reference). The quantum inflationary epoch is characterized as occurring no earlier than t = 1 second and ceases precisely when RU reaches the square root of three units of lf. It should also be noted that t = 1 second is not physically significant; the value is an artifact of our established measurement nomenclature.
Let us now investigate the importance of measurement counts and why the square root of three is so instrumental in quantum inflation. Notably, the value is significant because the phenomenon of length is a relatively defined quantized geometric construct. Until the referential system can define a count of lf greater than the base reference, there exists no means for mass to accrete outside of unity. Where relative distance may be expressed by the Pythagorean Theorem, it follows that a system with a base side a equal to 1 and a distance side b equal to 1 describes a relative distance defined against the base. In other words, the net distance (i.e. the hypotenuse) of 1.41421 units of lf rounds down to one lf, a distance phenomenon defined against itself.
Where the system may be characterized with a base side a equal to 1 and a greater distance for side b where the Pythagorean Theorem resolves a hypotenuse equal to the square root of three, 1.73205 units of lf (2.79934 × 10^−35 m), then the count of lf rounds up to 2. At this precise moment, the referential system allows for the accretion of matter outside of unity, causing the quantum inflationary period to end and allowing the universe to expand at the speed of light. The universe is born!
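The rounding behavior at the heart of this argument can be sketched in a few lines:

```python
import math

# Sketch: whole-unit rounding of the Pythagorean hypotenuse, as used in the
# quantum-inflation argument. Side a is the fixed base of 1 l_f; side b grows.
def rounded_count(b, a=1.0):
    """Nearest whole-unit count of l_f for the hypotenuse of sides a and b."""
    return round(math.hypot(a, b))

print(rounded_count(1))             # sqrt(2) = 1.41421 -> rounds down to 1
print(rounded_count(math.sqrt(2)))  # sqrt(3) = 1.73205 -> rounds up to 2
```

The first case leaves the distance defined against the base; the second is the moment the count first exceeds unity.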
As a side note, all measures of relative distance greater than the square root of three will round to the closest whole-unit count of lf, whether up or down. The variable outcome of distance measurement precludes a black hole from being truly black. With the black hole surface always changing, we find that there will always be some loss of radiation at the surface. The effect is typically understood as Hawking radiation, which is built on the foundation of the trans-Planckian problem. Some have argued that the argument lacks physical support, which is now provided in Equation (57). In the calculations that follow, Informativity further supports the argument with observationally supported values for the age, quantity, density and temperature of the CMB as a consequence of the quantized nature of length measurement.
We calculate the self-defining fundamental age of the universe at this particular moment by solving for AU from the expression above such that
Thus, at AU = 363,309 years, the universe reached a state where the spatial reference frame was no longer defined against itself. Most interestingly, this age is also distinguished in modern theory, whereby recombination appears to have occurred at approximately 378,000 years.
In the same way that the fundamental mass serves in the capacity of a conversion metric, the self-referencing age must also be resolved. As described in Equation (92) where the radius of the universe is expanding at RU = AUθsic, then the difference between the self-referencing age As-ref and the self-defining age AU is a function of the spatial frame in three dimensions (i.e. volume where V = (4/3)πR3) such that
This corresponds to a referential age of 678,889 years. The expression is representative of time in the local frame at the conclusion of quantum inflation and the birth of our universe’s spatial reference frame.
The corresponding mass/energy of the universe at As-ref follows such that
Given the constraints of Equation (142) where accreted mass may not have a density of more than , we find that mass which is confined to unity, i.e. , may not exist in ordinary baryonic form during quantum inflation. At nT = 1 mass density begins at 10^145 kg/m^3 and then increases as the rate of mass accretion is greater than the rate of quantum expansion. The most likely candidate form for mass during this period is ER. The radiation is isotropic at the conclusion of quantum inflation. We may solve for the resulting CMB energy density by dividing the total mass-equivalent of the CMB Mtot by the volume of the universe today VU (converted to joules per cubic meter by multiplying by c^2) such that
Before resolving the corresponding temperature, it should be noted that the radiation constant "a" should be resolved using Planck's adjusted constant from Equation (36) and Boltzmann's constant where . Using the present-day volume of the universe as presented in Equation (97), the total CMB energy radiated as described with respect to blackbody radiation (i.e. the Stefan-Boltzmann law) is
It follows that the temperature T of the CMB is then
A study of CMB temperature measurements in the literature was published by D.J. Fixsen in November 2009. He found that the best measure of temperature corresponded to a value of 2.72548 ± 0.00057 K. The study supports the Informativity expression to four significant digits.
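The temperature step can be sketched with the standard blackbody relation u = aT^4, where a = 4σ/c. The energy density u below is a hypothetical input chosen to be consistent with the measured temperature; the paper derives its own value from the accreted mass and the present-day volume:

```python
# Sketch: recovering a blackbody temperature from an energy density via the
# radiation constant a = 4*sigma/c (Stefan-Boltzmann). The energy density u
# is a hypothetical input, not the paper's derived value.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
c     = 2.99792458e8     # speed of light, m/s
a     = 4 * sigma / c    # radiation constant, J m^-3 K^-4 (~7.566e-16)

u = 4.17e-14             # assumed CMB energy density, J/m^3
T = (u / a) ** 0.25      # blackbody temperature
print(f"{T:.4f} K")      # close to the measured 2.72548 +/- 0.00057 K
```

Inverting the relation in this way shows how a derived energy density maps directly onto a testable temperature.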
It should also be noted that the Hubble constant as presented in Equations ((71), (72)) should be adjusted for quantum inflation in order to accurately reflect the observed expansion. Because the age of the universe is known only to a precision of 10^6 years and the time elapsed during the quantum inflationary epoch is a value of 10^5 years, the difference is still less than the least significant digit for the age estimate. As such, the Hubble value presented in Equation (71) is not affected.
3.16. Support for Mass Accretion
While the process of expansion differs from modern thought, there are no conflicts with observational data. Rather, where modern theory has had limited success, Informativity has resolved a straightforward understanding of several phenomena:
・ Mass distribution that is both isotropic and homogeneous
・ No center where mass appears to originate
・ Significant correspondence between stellar and universal expansion
・ An understanding of the expansion of the universe (dark energy)
・ Calculation of the respective visible, observable and dark mass distributions
・ An understanding of how quantum inflation began and why it ended
・ Calculation of the age, quantity, density and temperature of the CMB
Perhaps the most notable discipline not yet explored is nucleosynthesis, a model that presents an explanation of the relative distributions of observed hydrogen and helium. While Informativity is consistent with and supported by several measurement studies, a study of nucleosynthesis will not be explored at this time. That said, there are physically significant expressions in Informativity that present evidence that the distributions on which modern nucleosynthesis is built need reevaluation.
Firstly, dark mass is also matter that cannot be observed as a consequence of the speed of light, with the mass distributions a function of mass frequency. Where support for a universal mass of 3.05211 × 10^54 kg (the sum of Equations ((125) and (126))) is significantly greater than current estimates, the difference may affect some nucleosynthesis calculations.
More importantly, Equation (118) argues that there is no affinity for a specific division of observable and unobserved mass. The implication is straightforward: modern calculations of the relative distributions derive from an initial mass/energy of the universe present at t0, but do not consider that the initial mass also includes hydrogen and helium in the region recognized as dark mass, nor that mass has accreted evenly over time. The expected mass distributions and respective quantities will not match.
Lastly, with respect to quantum theory, where it is interpreted that a vacuum is not empty space, there is now data showing that quantum fluctuations do generate particles which may then decay. In 2015 a paper detailing the detection of quantum fluctuations in a vacuum was presented, describing the first measure of this prediction. While the experiment was designed to detect the decay of particles in a vacuum, there was no specific information indicating that all particles decay. Informativity supports the possibility that quantum fluctuations represent a phenomenon that may also describe the underlying processes that give rise to mass accretion.
3.17. Reducing a Physical Expression Back to the Fundamental Expression
The Informativity conjecture is that every physical law is an expression of, and therefore can be reduced to, the fundamental expression. We have used this principle as a basis to explain dark energy and to resolve several properties of our universe. In this example, we demonstrate the principle by taking the fundamental mass, which we may use to resolve the observable and non-observable mass distributions, and reducing it back to the fundamental expression.
Where Mf is the fundamental mass of the universe in kilograms, VU the volume in cubic meters, RU the radius in meters, and ρf the fundamental mass density (an estimate based on the product of observed mass as a percentage and the critical density of the universe), then
We may then reduce the expression where , where ħ = h/2π, where ħ = 2lfθsi from Equation (36) and where expressions from Equations ((92), (93) and (100)) are
substituting for Mf and cancelling terms yields
Hence, the expression Mf = VUρf is a derivative representation of the fundamental expression.
The lack of a formal expression that determines system laws and properties leads to questions that are in hindsight meaningless, such as what exists outside of the universe. In the context of a logical construct, such questions can now be more clearly defined. In the same way that the maximum speed is c = lf/tf and the system volume is , when applying expressions that describe physical phenomena we find that each is an outcome of the fundamental expression. To ask what is outside of the reference system is meaningless because there is no means to define phenomena outside of a background independent system of relatively defined measures. Such limits, though, may differ with respect to a self-defining framework and/or offer the possibility that the fundamental measures are inherited from a multi-verse.
Investigations of the scalar constant S, that is, θsi, are central to understanding Informativity. With a physical correlation to θsi, we may resolve a distance-specific expression for Planck's constant, build fundamental expressions for length, mass, and time and equate Informativity to Planck's formulations to realize a new unifying expression. The foundations of Planck's formulations and Informativity have little in common, but each model may be used to resolve the fundamental measures. The defining difference and advantage the Informativity approach offers is that each term carries a grounded physical correlation. The fundamental measures of Informativity, as such, are not a theoretical construct.
Much of what has been presented focuses strongly on the idea of fundamental measures as a means of expressing descriptions of nature. It is conjectured that fundamental measures are the means best suited to understanding nature, but it may equally be noted that the translation of measurement to such a unit system is no more significant than factoring or scaling an expression. The underlying structure that makes such a scaling uniformly convenient is expressly neither a phenomenon which may be directly measured nor a requirement for understanding the expressions of Informativity.
A likewise notable construct of the presentation is the incorporation of non-dimensionalization. While an important step in exposing an understanding of θsi, non-dimensionalization in itself is not a prerequisite to Informativity; other approaches may be used to achieve the same results. The approach taken is intentional as a precursor to breaking the bonds of a straightforward refactoring of the Planck expressions. The approach also exposes a novel way of understanding gravity from an alternative geometric perspective. While no physical evidence is specifically cited, if evidence were found pointing to an underlying spatial fabric that is both logical and quantized, then this approach would dictate that gravity is an inevitable byproduct of whole-unit quantization.
To all of this, we build a foundation with geometric interpretations that describe light and matter in simple terms of the radius and circumference of a circle. With QLf, r, c^3, and G used to describe gravity, we find that the scalar constant S is QLfrc^3/G, which is also ħ/2lf. Moreover, we find that compared with the arc length set off by θsi precisely describes the minor arc of that circle in terms of angular measure or momentum. Where E = mc^2 and E = nhv, we also find that a fundamental measure of mass and a photon are separated in energy precisely by a factor of 2π. With this, in the analysis of Heisenberg's uncertainty principle, we resolve that certainty is defined relative to one significant measure, lf.
Informativity has also allowed us to explore how certain properties of our universe are expressed and constrained by the relations they obey, such as the speed of light (the length frequency), mass and time frequency, and how G is a space-time composite of these frequencies. We have explored how these relations prescribe frames of reference that in turn prescribe an understanding of space as expanding in a precise and consistent fashion with elapsed time. That expansion is not only required to be outward but isotropic and homogeneous.
Also noteworthy is the observation that physical constants are variations of the fundamental expression, lfmf = 2θsitf. Whether that is the gravitational constant, Planck’s reduced constant or Hubble’s constant, there exists a foundation of understanding that these values are convenient arrangements of the fundamental measures as defined by the fundamental expression relative to the system constant 2θsi. The observation is expanded to include more general expressions by taking the expression for the fundamental mass of the universe and reducing it back to the fundamental expression.
Perhaps one of the most significant discoveries is not that our expanding universe is a phenomenon that can be expressed as a measurement bound, but that phenomena have measurement bounds. Keeping in mind that length, mass, and time are related and relatively defined, to understand observable mass as a function of mass frequency is as valid as to say that the age of the universe is an increasing upper bound constrained by being related to length frequency. The conjecture is a mathematical equivalence that offers no frequency affinity. On such a foundation, it is possible that the apparent age of the universe is a frequency bound, that there exists a multi-verse that extends indefinitely in time, and that the phenomena we observe are products of frequency bounds of the multi-verse, not the universe. Nevertheless, a proof of this conjecture is an open problem for future investigation.
In conclusion, note the following tests of Informativity.
4.1. Measurements of Predicted Values
The measure for the signal and idler θsi is so prevalent in Informativity that it may have been taken to be a fundamental constant of nature. This is not a role that is necessarily established; θsi is a prediction of this model that does not arise until after Equation (19) where it is introduced. As a matter of clarification, θsi is a derived value based on existing expressions that describe gravity and supporting evidence such as Planck’s understanding of fundamental measures. In this light, θsi is a predicted radian measure of particular importance to our understanding of light. The reverse argument may also be made, that our knowledge of θsi allows us to derive an expression for gravity. However, both arguments cannot be made simultaneously. Where one phenomenon is understood, the other must be an outcome. For this reason, an argument is put forward that leads to verifiable expressions of physical phenomena.
Shwartz and Harris reported angular measures needed to entangle photons in pure Bell states based on their measure of exactly equal to that predicted by Informativity. Their model conforms to their observational data from nonlinear X-ray optics experiments, which provides measures of relative angular precision to 10^−5 radians. Measurements with accuracies of up to 10^−6 radians will be possible in 2017 at the European X-ray Free Electron Laser facilities (XFEL) in Hamburg, Germany.
4.2. Measurements of Gravitational Lensing
There are several good examples of gravitational lensing within the universe, but for the purposes of Informativity, the best measure of this effect is relative to the Sun. The issue with other targets is the considerable uncertainty in distance in relation to the Informativity differential effect. In general, if accurate measures are to be resolved, our Sun would most likely be the backdrop to such measurements.
To provide context, we present the effect as a difference from GR in the deflection of light grazing our Sun. With θ the angle of deflection, r and M the radius and mass, G the gravitational constant, and c the speed of light, then
We see that measuring the effects of Informativity only requires that we are able to detect the difference between Newton's expression G/r^2 and the Informativity expression QLfc^3/rθsi and then use that to solve for the radian difference between GR and Informativity,
The effect resolves to six orders of magnitude less than the effects of GR. A search through existing data does not show precision that would reveal this effect, but with future efforts the difference may be resolved.
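For context, the GR baseline for this test can be sketched numerically; the Informativity correction itself (roughly six orders of magnitude smaller) is not computed here, as it depends on the model's QLf term:

```python
# Sketch: the GR baseline against which the Informativity difference would
# be measured: deflection of light grazing the Sun, theta = 4*G*M/(c^2 * r).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8   # speed of light, m/s
M = 1.989e30       # solar mass, kg
r = 6.957e8        # solar radius, m

theta  = 4 * G * M / (c**2 * r)                   # radians
arcsec = theta * 180 / 3.141592653589793 * 3600   # convert to arcseconds
print(f"{theta:.3e} rad = {arcsec:.2f} arcsec")   # ~8.49e-6 rad, ~1.75"
```

Any test of the predicted difference must therefore resolve angles near 10^−12 radians against this ~8.5 × 10^−6 radian baseline.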
4.3. Measurements of Universal Expansion
A measure of an expanding space has particular value as it can greatly assist in understanding the difference between the expansion of space and the expansion of galaxies away from one another with respect to space, two distinct effects that do not have a specific correlation. Moreover, such an experiment would confirm that the expansion is a phenomenon that also occurs in the local frame and not a quality that appears only on a cosmological scale. In addition, where such measurements show no effects related to Special Relativity, the experiment supports the idea that this is a geometric property of space and not a property of the inertial frame.
Specifically, space is not a tangible, measurable phenomenon. Rather, the process of measurement is geometric in origin. Furthermore, the reference system against which everything is defined, the fundamental expression, consists of measurement counts that change with elapsed time and therefore change our understanding of length.
The expansion of the universe in the local frame is not as small as one might anticipate. Using Equation (72) as a starting point, we may resolve the expansion between Earth and a satellite in the same Earth orbit on the other side of the Sun. With the expansion of space, the trip distance, twice the average distance d between Earth and the Sun, increases at a rate Hd. The displacement D is then
In other words, excluding the effects of gravity, the distance between the Earth and the satellite increases by 69 nm as a result of universal expansion during the trip.
4.4. The System Constant and Its Effect on Mass Distribution
At the center of Informativity is the observation that θsi = 3.26239 radians from Equation (33) is a measure correlated with the polarization of an electric field with respect to the scattering plane needed to create quantum entanglement of X-rays in specific Bell states. The measure may also be correlated with the momentum of half a fundamental measure of mass. Evidence has been presented that shows θsi conforms to the Informativity interpretation, whereas estimates based on the standard-model interpretation of Planck's reduced constant suggest a value of θsi = 3.26250 kg m/s from Equation (34).
Table 5. Distribution of mass in the universe with respect to the mass frequency found.
Fortunately, there are several tests that bring a greater understanding of θsi. As resolved in Equations (109) and (110) and presented in Table 5, the value of θsi has a significant effect on mass distribution. An analysis of the distributions at two orders of magnitude greater precision than current measurements would decidedly favor only one of the interpretations.
We thank Edanz Group (http://www.edanzediting.com/ac) for editing a draft of this manuscript.
Appendix A. Numerical Limits to QLfrLf
Throughout the paper, we repeatedly encounter the term QLfrLf. This term is referred to as the Informativity differential in recognition of the central role it plays in describing how fractional values less than the theoretical limit reflect a distortion effect in distance measurement. Knowing the limits of QLfrLf is also essential in resolving the fundamental measures.
The product QLfrLf is Equation (5) multiplied by b.
Note that what is measured always equals a whole-unit count of a fundamental measure, and with a = 1 we find that b = rLf for all values. This is easily verified in that the highest value of QLf is obtained for b = 1, where c = √(1² + b²) = √2, and the “observed” distance c presented as a count rLf is always rounded down to the whole-unit count b, with QLf = 0.414 at its highest and quickly approaching 0 with increasing b. Therefore, QLf = √(1 + b²) − b.
The lower limit, where rLf = 1, is easily produced: QLfrLf = √2 − 1 ≈ 0.414. Conversely, if we divide by rLf, then add rLf, square, subtract rLf², and divide by 2, we find that QLfrLf = (1 − QLf²)/2.
QLf decreases with increasing rLf until the QLf² term drops out. Distance does not need to be significant for the Informativity differential to reach 0.5. At just 10⁴lf, QLfrLf rounds to 0.5 to nine significant digits.
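The convergence described above can be checked numerically. This is a minimal sketch assuming, per the description in this appendix, that for a = 1 the count rLf equals b and QLf is the fractional remainder of c = √(1 + b²) after rounding down; the function name is illustrative, not from the paper.

```python
import math

def informativity_differential(b: int) -> float:
    """Q_Lf * r_Lf for a unit side a = 1 and whole-unit count b."""
    c = math.sqrt(1 + b * b)      # "observed" distance c = sqrt(a^2 + b^2)
    q = c - math.floor(c)         # fractional remainder Q_Lf (floor(c) = b)
    return q * b                  # Q_Lf * r_Lf, with r_Lf = b

for b in (1, 10, 100, 10_000):
    print(b, informativity_differential(b))
# b = 1 gives sqrt(2) - 1 ≈ 0.41421; by b = 10^4 the product is
# within ~1e-9 of the limiting value 0.5.
```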
Appendix B. Fundamental Transforms
On occasion, we find the need to translate from one measure to another. For instance, we may have an expression given in terms of time, but want to create an expression in terms of length. This may be accomplished by multiplying by c, the speed of light. In this paper, this process is referred to as applying a fundamental transform. Each of the transforms may be derived from the definitions of the fundamental measures presented in Equations (20)-(22).
To transform length to time, compare Equations (20) and (21); dividing by c gives

lf/c = tf.

This transform is not typically mentioned, as it is a definition of the model. To transform length to mass, compare Equations (20)-(22); multiplying by c²/G gives

lfc²/G = mf.

To transform time to mass, comparing Equations (21) and (22) gives

tfc³/G = mf.
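The three transforms can be verified numerically. As a sketch, the standard CODATA Planck values are used as stand-ins for the model's fundamental measures lf, tf, and mf, since both sets are built from the same constants c and G; the assumption here is only that the dimensional relationships carry over.

```python
# Verify the fundamental transforms using the standard Planck
# values (CODATA 2018) as stand-ins for lf, tf, mf.
c = 2.99792458e8       # speed of light, m/s
G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2

l_P = 1.616255e-35     # Planck length, m
t_P = 5.391247e-44     # Planck time, s
m_P = 2.176434e-8      # Planck mass, kg

print(l_P / c)          # length -> time:  l/c      matches t_P
print(l_P * c**2 / G)   # length -> mass:  l c^2/G  matches m_P
print(t_P * c**3 / G)   # time   -> mass:  t c^3/G  matches m_P
```

Each printed value agrees with the corresponding tabulated constant to within the precision of the inputs.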
Appendix C. Effective Count of lf in the Measure of ħ
The measure of Planck’s constant requires a physical interaction at a specific relative distance. That distance may be resolved as a count of lf using Equation (5), where bLf rounds to rLf, and Equation (33), where we have substituted from Planck’s relation in Equation (1). We have
Appendix D. Resolving System Counts of Self-Defining Measures
We may also resolve the respective counts of length and time by expanding each expression and setting the target measure equal to one. We distinguish system measures, as well as system counts, by a subscript u. Hence, where lu = 1, we may use the equality lfmf = 2θsitf to reduce the expression of the count ratio to its simplest form,
The same approach may be taken with time:
Appendix E. Unit Analysis as a Function of the Frame of Reference
As with many Informativity expressions, unit analysis is complicated by the mixing of self-referencing and self-defining terms. This happens wherever an expression incorporates a system characteristic, such as the system constant. The issue differs significantly from a calculation error, in which both the value and the units of an expression are incorrect; properly resolved Informativity expressions give the correct value. That is, unit issues arise from expressions that mix two frames of reference with respect to a common phenomenon.
Herein, we use Equation (100) as an example to demonstrate the context and methods involved in resolving units for Informativity expressions. We begin by demonstrating that the resulting value is the same as would be resolved had we solved the initial expression. With Mf = 5.7353 × 10⁵³ kg from Equation (93) and VU = 3.2360 × 10⁸⁰ m³ from Equation (97), then
The values are the same. The unit issue, however, began with the introduction of the radial system constant θsi.
The self-defining, dimensionless nature of θsi is important when we make substitutions such as mf = 2θsi/c, as we did in the final reduction. In consideration of this variation of the fundamental expression, which from Equations (78) and (79) has units of kilograms, and where from Equation (81) (nLu/nTu) = 2θsi/c has units of seconds/meter, kg = s/m is the “conversion metric” between the self-defining and self-referencing value 2θsi/c.
Moreover, where the frame of reference is a circle with an angle of θsi, h = 4πlfθsi from Equation (14) has a similar “conversion metric”; its units are meters. Substituting s/m for kg, and meters for the units of h, in Equation (100) resolves the conflict in units,
The practice of mixing self-defining and self-referencing terms is difficult, but it can be carried out consistently so long as one is aware of the frame of reference under consideration. Units may always be verified by checking agreement of the calculated value and its associated units at an early point in the derivation.
 Smolin, L. (2006) The Case for Background Independence. In: Rickles, D., French, S. and Saatsi, J.T., Eds., The Structural Foundations of Quantum Gravity, Clarendon Press, Oxford, 196.
 Anderson, J.D., Schubert, G., Trimble, V. and Feldman, M.R. (2015) Measurements of Newton’s Gravitational Constant and the Length of Day. Europhysics Letters, 110, 1.
 Aoyama, T., Hayakawa, M., Kinoshita, T. and Nio, M. (2012) Tenth-Order QED Contributions to the Electron G-2 and an Improved Value of the Fine Structure Constant. Physical Review Letters, 109, 111807. arXiv:hep-ph/1205.5368v2
 Jarosik, N., et al. (2011) Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results. The Astrophysical Journal Supplement Series, 192, 14.
 Bonvin, V., Courbin, F., Suyu, S.H., et al. (2016) H0LiCOW-V. New COSMOGRAIL Time Delays of HE 0435-1223: H0 to 3.8 Percent Precision from Strong Lensing in a Flat ΛCDM Model. MNRAS, 465, 4914-4930.
 Gasanalizade, A. and Hasanalizade, R. (2015) Determination of Dark Energy and Dark Matter from the Values of Redshift for the Present Time, Planck and Trans-Planck Epochs of the Big-Bang Model. arXiv:1111.2936v3
 Merritt, D. (2017) Cosmology and Convention. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 57, 41-52.
 Corda, C. (2009) Interferometric Detection of Gravitational Waves: The Definitive Test for General Relativity. International Journal of Modern Physics D, 18, 2275.
 Moskalenko, A.S., Riek, C., Seletskiy, D.V., et al. (2015) Paraxial Theory of Direct Electro-Optic Sampling of the Quantum Vacuum. arXiv: 1508.06953