Latest revision 13-04-2020


Paper by Hoyle

$ \def \MET {\quad \mbox{with} \quad} \def \SP {\quad \mbox{;} \quad} \def \hieruit {\quad \Longrightarrow \quad} \def \EN {\quad \mbox{and} \quad} $ Essential readings: Unfortunately, F. Hoyle's contribution is no exception to what seems to be the rule in today's cosmology: it is often more fiction than science. Let's begin with the fiction, as lucidly displayed in Fig.2 of the paper: Spacetime is divided into a number of four-dimensional volumes which make plus and minus contributions to the mass field. A plus aggregate is bordered by minus aggregates, and vice versa. [ ... ] Cosmological distances, as ordinarily understood, fit into a single aggregate. Our experience in astronomy is therefore confined to one sign for the contributions to the mass field. Yes, the plus sign only. There are quite a few interesting - scientific - issues in the write-up as well, though.

It is important to notice that provided light moves through a vacuum and provided electromagnetic radiation from the same atomic transition is used to measure both time and space intervals, then inevitably the speed of light is found to be unity, i.e. constant, in concordance with INTRODUCTION 5 in the book Relativity Reexamined by Léon Brillouin: We witnessed the invention of atomic clocks of incredible accuracy, whose physical properties differ very much from the clocks Einstein imagined. This will be discussed in some detail in Chapter 3. Let us mention here a real difficulty resulting from internationally adopted definitions. The unit of length is based on the wavelength of a spectral line of krypton-86 under carefully specified conditions with accuracy $10^8$ and the unit of time is based on the frequency of a spectral line of cesium with accuracy $10^{12}$. Hence, the same physical phenomenon, a spectral line, is used for two different definitions: length and time, and the velocity c of light remains undefined and looks arbitrary. It should be stated, once and for all, whether a spectral line should be used to define a frequency or a wavelength, but not both! Actually, the speed of light does not look "arbitrary", rather it is a constant ("unity") now by definition.

But the electronic charge $e$ always occurs in quantum mechanics in the fine-structure combination, $e^2/\hbar$, and the particle mass $m$ occurs either in the ratio $m/\hbar$ or as a ratio with respect to the mass of another particle, like $m_e/m_p$ for the electron and proton masses. [ ... ] all particle masses have the dimensionality of an inverse length - the Compton wavelength of a particle being just the reciprocal of its mass. [ ... ] Taken in a sensible way, both Planck’s constant and the speed of light are unity, and they are so everywhere. Is that true? In UAC it is only true with atomic time, that is: as long as there is no (Newtonian) gravity involved.

Intervals of time and space can therefore be considered to be measured with respect to a unit determined by $m_e^{-1}$. Especially time. Hoyle doesn't seem to be aware of the fact that his hypothetical other (minus) side of the zero surface can never be observed, because the time ticks $\sim m_e^{-1}$ become infinite as $m_e \to 0$, while approaching the zero surface from the positive side. And what can never be observed, does that exist anyway? Moreover, the dimensionalities of all physical quantities can be expressed as some power of $m_e , m_e^*$ say. As examples, pressure and energy density have $n = 4$; current density and surface tension have $n = 3$; luminosity, force, and the electromagnetic field have $n = 2$; energy, mass, and frequency have $n = 1$; length has $n = -1$. With our own notation conventions, we subsequently have: $$ \frac{q}{q_0} = \left(\frac{m}{m_0}\right)^n \MET q = \mbox{quantity} \EN n = 4,3,2,1,-1 $$ Especially the energy density $u$ with power law $n=4$ is worth remembering. It follows that (star) temperature $T$ has power law $n=1$ because of the Stefan-Boltzmann law: $$ \frac{u}{u_0} = \left(\frac{m}{m_0}\right)^4 = \frac{\sigma T^4}{\sigma T_0^4} \hieruit \frac{T}{T_0} = \frac{m}{m_0} $$ Another way to derive the same is with the Kinetic theory of gases: $$ \frac{1}{2} m \overline{v^2} = \frac{3}{2} k_B T \hieruit \frac{T}{T_0} = \frac{m}{m_0} $$ Every experiment consists, when its procedures are analyzed, in the counting of a dimensionless number, which is always made up as a product of physical quantities and their inverses in such a way that the sum of the dimensionalities add to zero. No physical quantity with $n \ne 0$ is ever measured, except as a ratio to another quantity of the same dimensionality.
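As a minimal numeric sketch of the power laws just listed (the mass ratio $m/m_0 = 0.5$ is an arbitrary illustrative value, not a measurement):

```python
# Power-law scaling q/q0 = (m/m0)^n, as listed above.
# The mass ratio below is an arbitrary illustration, not a measurement.

def scaled(q0, n, m_ratio):
    """Scale quantity q0 with dimensionality power n under mass ratio m/m0."""
    return q0 * m_ratio**n

m_ratio = 0.5    # hypothetical m/m0

# Stefan-Boltzmann consistency: u has n = 4 and u = sigma*T^4, so T has n = 1.
u_ratio = m_ratio**4          # energy density ratio (n = 4)
T_ratio = u_ratio**0.25       # temperature ratio from u = sigma*T^4
assert abs(T_ratio - m_ratio) < 1e-12

T0 = 5778.0                    # K, solar surface temperature
print(scaled(T0, 1, m_ratio))  # 2889.0 K for a half-mass electron
```

The same check works for any of the listed powers $n = 4, 3, 2, 1, -1$.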
Hence it follows that, so long as $m_e(x)$ is only slowly variable with respect to the spacetime position $x$, as would be the case if $m_e$ were to vary only on a cosmological time scale, no local laboratory experiment can detect the variation. I wouldn't be so sure about this: see the bottom lines of the Narlikar's Law page.

The failure to distinguish between the geometry (6) associated with $m_e$ and the geometry (7) associated with $m_e^*$ is total. No distinction is possible through any observation or through any experiment. We now take the view that to have attempted to distinguish between (6) and (7) was an irrelevant problem. What can never be observed does not exist.

Although the region over which the Einstein-de-Sitter model applies is only a small element of the whole universe, it nevertheless encompasses everything which the astronomer observes, even with the largest telescope. Here we disagree: apart from redundancies such as the cosmological constant and other General Relativity machinery, the Einstein-de-Sitter model applies over the whole universe, as has been motivated in our Van Flandern section.

In this Minkowski conformal frame the electron mass $m_e^*$ is a function of $\tau$. Sufficiently near the zero surface, $m_e^*$ can be expanded in powers of $\tau$, $$ m_e^* = A \tau + B \tau^2 + \cdots \qquad (29) $$ The gravitational equations based on (4) turn out to require $A = 0$. Hence the leading term in the expression for $m_e^*$ is quadratic in $\tau$ [ ... ] The coefficient $B$ [ ... ] is positive. In other words, Narlikar's Law has been recovered, indeed for flat Minkowski spacetime.
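For reference, Narlikar's Law states that mass grows quadratically with time; with $A = 0$ and $B > 0$, the expansion (29) reproduces exactly that behaviour near the zero surface:
$$ m_e^* = B \tau^2 + \cdots \hieruit m_e^* \propto \tau^2 \quad (\tau \to 0 \,,\; B > 0) $$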

Near $\tau = 0$, such radiation is strongly absorbed and reemitted, and so becomes thermalized, because $e^2/m_e$ becomes large as $m_e$ decreases, so the Thomson cross section becomes large. Indeed, as $m_e \to 0$ absorption processes are formally divergent. The formula for the Thomson cross section is, according to Wikipedia - perhaps of later use: $$ \sigma _{t}=\frac{8\pi}{3}\left(\frac{q^2}{4\pi\varepsilon_{0} mc^2}\right)^2 = \frac{8\pi}{3}\left(\frac{\alpha\lambda_c}{2\pi}\right)^2 $$ The latter when expressed in terms of the Compton wavelength $\lambda_c$ and the Fine structure constant $\alpha$.
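The divergence as $m_e \to 0$ can be made concrete by evaluating the quoted formula with present-day SI constants; a small sketch, with the $m^{-2}$ scaling being the point:

```python
import math

# Thomson cross section, evaluated from the formula quoted above
# with present-day (CODATA 2018) constants in SI units.
q    = 1.602176634e-19     # C, elementary charge
eps0 = 8.8541878128e-12    # F/m, vacuum permittivity
m_e  = 9.1093837015e-31    # kg, electron rest mass
c    = 2.99792458e8        # m/s, speed of light

def sigma_t(m):
    """Thomson cross section for an electron of (rest) mass m."""
    return (8 * math.pi / 3) * (q**2 / (4 * math.pi * eps0 * m * c**2))**2

print(sigma_t(m_e))   # ~6.65e-29 m^2, the accepted present-day value

# sigma_t scales as m^-2, so it blows up approaching the zero surface:
print(sigma_t(m_e / 10) / sigma_t(m_e))   # ≈ 100: a tenth of the mass,
                                          # a hundred times the cross section
```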

[ ... ] gravitation behaves peculiarly in the Minkowski frame, not only because the particle masses change with time, but because the gravitational "constant" $G$ also changes. In the Einstein frame $G$ is indeed constant; but being of dimensionality $n=-2$, $G$ varies like $m_e^{-2}$. This is in concordance with the following.
The Planck mass is defined by: $$ m = \sqrt{\frac{\hbar\,c}{G}} \approx 21.7651\,\mu g $$ From this it follows that: $$ G = \frac{\hbar\,c}{m^2} $$ Let's assume that Planck particles have some physical significance. My own interpretation is as follows. Tiny black holes do not exist, for the simple reason that singularities never really exist in physics. But it may be that they form a sort of upper/lower bound on the existence of real elementary particles, meaning for example that particles heavier than Planck particles cannot come into existence. With subscript $0$ for nowadays values and assuming a variable Planck mass, we then derive that the Gravitational Constant may be variable as well: $$ G_0 = \frac{\hbar\,c}{m_0^2} \quad \Longrightarrow \quad \frac{G}{G_0} = \left(\frac{m_0}{m}\right)^2 $$ Conclusion: the Gravitational Constant may be inversely proportional to the square of the (varying rest) mass of elementary particles, when measured in atomic time.
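A quick numeric check of $G = \hbar c / m^2$ with present-day values, plus the hypothetical scaling $G/G_0 = (m_0/m)^2$ (the variability is the assumption under discussion, not an established fact):

```python
# Recover G from the present-day Planck mass, then illustrate the
# hypothetical scaling G/G0 = (m0/m)^2 for a varying Planck mass.
hbar = 1.054571817e-34   # J s, reduced Planck constant
c    = 2.99792458e8      # m/s, speed of light
m0   = 2.176434e-8       # kg, present-day Planck mass (~21.76 ug)

G0 = hbar * c / m0**2
print(G0)                # ~6.674e-11 m^3 kg^-1 s^-2, the accepted G

def G_ratio(m_over_m0):
    """G/G0 for a Planck mass scaled by m/m0 (hypothetical scaling)."""
    return (1.0 / m_over_m0)**2

print(G_ratio(0.5))      # 4.0: halving the mass quadruples G
```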

[ ... ] the galaxies were larger in proportion to their spacings than they are at present. Indeed for $|\tau|$ given by $$ \left(\frac{\tau_0}{\tau}\right)^2 \ge \left(\frac{\mbox{Spacing}}{\mbox{Radius}}\right)_\mbox{Present day} \qquad (38) $$ the galaxies [ on the other side ] were overlapping each other. The right-hand side of equation (38) has an average value of about 300. Let's check that number with help of Wikipedia: Most of the galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light years) and separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 30,000 parsecs (100,000 ly) and is separated from the Andromeda Galaxy, its nearest large neighbor, by 780,000 parsecs (2.5 million ly). Sticking to the latter we have $780,000 / 30,000 = 26$, which is quite different from Hoyle's estimate of $300$. This makes the whole exercise somewhat unreliable.
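The arithmetic of the check above, spelled out (both quantities as quoted from Wikipedia, in parsecs):

```python
# Check Hoyle's average of ~300 against the Milky Way / Andromeda
# numbers quoted from Wikipedia (all distances in parsecs).
spacing = 780_000   # pc, Milky Way - Andromeda separation
radius  = 30_000    # pc, Milky Way diameter, standing in for "Radius"

ratio = spacing / radius
print(ratio)        # 26.0, far from Hoyle's average of ~300
```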

But wait! From Wikipedia we quote: The largest-observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about z = 1089. This would mean that (see Length Contraction): $$ \frac{L}{L_0} = \left(\frac{\mbox{Spacing}}{\mbox{Radius}}\right)_\mbox{Present day} = \frac{m_0}{m} = 1+z = 1090 $$ According to Van Flandern this redshift is associated with a Hubble volume which is greater than the observable universe of the Big Bang theory. This does not represent a problem, though, for a cosmos that is infinite in space and time: $$ D = \ln(1090)\;c_0/H_0 \approx 7 \times \mbox{ Hubble radius} $$ And, according to the above, we find for the CMBR temperature $T$, given that the surface temperature of a main sequence star (e.g. our sun) is about $T_0 = 5,778$ K: $$ \frac{T}{T_0} = \frac{m}{m_0} = \frac{1}{1+z} = \frac{1}{1090} \hieruit T = \frac{5,778}{1090} \approx 5.3 \; \mbox{K} $$ The accepted experimental value for the CMBR is $T = 2.725$ K, which is of the same order of magnitude. No fictional "other side". And no adjustable Big Bang parameters either.
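The two estimates above in one small computation (using $z = 1089$ and the solar surface temperature as $T_0$):

```python
import math

# CMBR temperature estimate from T/T0 = 1/(1+z), and the distance
# D = ln(1+z) * c0/H0 expressed in units of the Hubble radius.
z  = 1089
T0 = 5778.0                 # K, solar surface temperature

T = T0 / (1 + z)
print(T)                    # ~5.3 K, same order as the measured 2.725 K

D_hubble_radii = math.log(1 + z)
print(D_hubble_radii)       # ~7 Hubble radii
```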
But we are not finished yet. According to Origins of the CMB: photons had much shorter wavelengths with high associated energy, corresponding to a temperature of about 3,000 K. And with Wikipedia we actually see very much the same formula as the one employed above. Using it the other way around we get, indeed: $$ T_r = 2.725\cdot(1+z) \hieruit T_r = 2.725 \times 1090 = 2,970.25 \approx 3,000 \; \mbox{K} $$ Putting this in the context of UAC, it has to be defended that "stars" in galaxies, full of young "atoms" with ultralight electrons, filling all empty space, indeed have this mean "surface" temperature. Perhaps some wisdom from standard Big Bang theory can be borrowed for our purpose.
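And the reverse computation, recovering the recombination temperature from the measured CMBR value:

```python
# Back-calculate the recombination temperature from the measured
# CMBR temperature, using T_r = T_CMBR * (1 + z) as quoted above.
z      = 1089
T_cmbr = 2.725              # K, measured CMBR temperature

T_r = T_cmbr * (1 + z)
print(T_r)                  # ≈ 2970 K, i.e. about 3,000 K
```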