The “Hubble tension”: A growing crisis in cosmology

Credit: NASA Goddard Space Flight Center


The standard model of mathematical physics

The standard model, namely the framework of laws at the foundation of modern mathematical physics, has reigned supreme since the 1970s, having been confirmed to great precision in a vast array of experimental tests. Among other things, the standard model predicted the existence of the Higgs boson, which was experimentally discovered in 2012, nearly 50 years after it was first predicted.

Yet physicists have recognized for many years that the standard model cannot be the final answer. For example, quantum theory and general relativity are known to be mathematically incompatible. String theory and loop quantum gravity are being explored as potential frameworks to resolve this incompatibility, but neither is remotely well-developed enough to qualify as a new “theory of everything.” Other difficulties exist as well.

But there is only so far that mathematical analysis can go in the absence of solid experimental results. As Sabine Hossenfelder has emphasized, beautiful mathematics pursued in a vacuum of experimental data can lead physics astray.

The “Hubble tension”

One significant experimental anomaly that does not appear to be going away, and which may point to a fundamental weakness in either the standard model or Big Bang cosmology, is the discrepancy in values of the Hubble constant based on different experimental approaches. This discrepancy is now known as the Hubble tension.

The Hubble constant $H_0$ is a measure of the rate of expansion of the universe, and is directly connected to estimates of the age $A$ of the universe via the relation $A = 1 / H_0$. Units must be converted here, since the age of the universe is normally cited in billions of years, whereas the Hubble constant is usually given in kilometers per second per megaparsec (a parsec is $3.0857 \times 10^{13}$ km, or roughly 3.26 light-years). Also, an adjustment factor is normally applied to this formula to be in full conformance with the Big Bang model.
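As a concrete illustration of this conversion, here is a minimal Python sketch (the constant names and the two sample values of $H_0$ are merely illustrative, and the adjustment factor mentioned above is omitted) that computes the naive age estimate $A = 1/H_0$ in billions of years:

    # Naive age of the universe A = 1/H0, converting H0 from km/s/Mpc to 1/s.
    # (The adjustment factor required by the full Big Bang model is omitted.)

    KM_PER_MPC = 3.0857e19        # one megaparsec in kilometers
    SECONDS_PER_YEAR = 3.1558e7   # one Julian year in seconds

    def naive_age_gyr(h0_km_s_mpc):
        """Return 1/H0 in billions of years, for H0 given in km/s/Mpc."""
        h0_per_second = h0_km_s_mpc / KM_PER_MPC    # H0 in units of 1/s
        return 1.0 / h0_per_second / SECONDS_PER_YEAR / 1e9

    for h0 in (67.4, 73.24):
        print(f"H0 = {h0:6.2f} km/s/Mpc  ->  1/H0 = {naive_age_gyr(h0):.2f} billion years")

For $H_0 = 67.4$ km/s/Mpc this naive estimate gives roughly 14.5 billion years, while $H_0 = 73.24$ gives roughly 13.4 billion years.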

The trouble is that the best current experimental results for the Hubble constant give conflicting values. See this previous Math Scholar article for an overview of the problem, as of August 2020.

One method to determine $H_0$ is based on the Lambda cold dark matter (Lambda-CDM) model of Big Bang cosmology, combined with careful measurements of the cosmic microwave background (CMB) from the Planck satellite or similar facilities. The latest (2020) result from the Planck team yielded $H_0 = 67.4 \pm 0.5$ km/s/Mpc. Another approach is to employ more traditional astronomical techniques, typically based on observations of supernovas, Cepheid variable stars or other phenomena, combined with redshift measurements to determine the rate of recession. For example, in 2016, a team of astronomers using the Wide Field Camera 3 (WFC3) of the Hubble Space Telescope obtained the value $H_0 = 73.24 \pm 1.74$ km/s/Mpc.

Clearly, these two sets of values differ by significantly more than the error bars of the two measurements. What is going on?
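To quantify the discrepancy, here is a minimal Python sketch, using only the two values and error bars just quoted, that expresses the gap in units of the combined (in-quadrature) uncertainty, under the usual assumption of independent Gaussian errors:

    from math import sqrt

    # The two measurements quoted above, in km/s/Mpc.
    planck, planck_err = 67.4, 0.5     # Planck CMB analysis (2020)
    wfc3, wfc3_err = 73.24, 1.74       # Hubble Space Telescope WFC3 study (2016)

    gap = wfc3 - planck
    combined_err = sqrt(planck_err**2 + wfc3_err**2)   # errors added in quadrature
    print(f"gap          = {gap:.2f} km/s/Mpc")
    print(f"combined err = {combined_err:.2f} km/s/Mpc")
    print(f"tension      = {gap / combined_err:.1f} sigma")

The gap works out to roughly three standard deviations, already well beyond what chance fluctuations would suggest.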

New results for the Hubble constant

Over the past few years, numerous large international research teams, using a variety of approaches, have launched studies in an attempt to resolve the “Hubble tension.” But rather than settling the issue, their latest results have only deepened the controversy.

The table below shows the most recent Hubble constant figures and standard error bars (either the one-sigma statistical error or the “systematic” error, whichever is larger). The table also shows the date of the study and a link to the actual technical paper documenting the result. Below the table is a chart with the same data.

Study                                       Ref     Year   H0 (km/s/Mpc)   Std err

Distance/redshift studies:
HST/SH0ES collaboration                     REF1    2022   72.53           0.99
Type Ia supernovas                          REF2    2023   73.29           0.90
Near-infrared Type Ia supernovas            REF3    2022   72.3            1.3
Two-rung cosmic distance ladder             REF4    2022   73.1            2.6
Tip of the Red Giant Branch (TRGB)          REF5    2021   69.8            1.6
Another TRGB study                          REF6    2022   71.5            1.8
TRGB and Type Ia supernovas                 REF7    2023   72.94           1.98
Mira variables in Type Ia supernova hosts   REF8    2019   73.3            4.0
Megamaser studies                           REF9    2020   73.9            3.0
Megamaser studies                           REF10   2020   71.8            2.9
Type II supernovas                          REF11   2022   75.4            3.8
Infrared surface brightness                 REF12   2023   74.6            2.7
Infrared surface brightness                 REF13   2023   73.31           0.99
Infrared surface brightness                 REF14   2021   73.3            2.4
Ionized gas of H II galaxies                REF15   2017   71.0            2.8
Infrared Tully-Fisher relations             REF16   2020   76.0            2.3
Baryonic Tully-Fisher relations             REF17   2020   75.1            2.3

Cosmic microwave background studies:
WMAP data                                   REF18   2013   70.0            2.2
South Pole Telescope data                   REF19   2023   68.3            1.5
Atacama Cosmology Telescope                 REF20   2020   67.9            1.5
WMAP + SPT                                  REF21   2023   68.2            1.1
WMAP + ACT                                  REF22   2023   67.6            1.1
Planck data                                 REF23   2020   67.4            0.5

As can be seen easily from either the table or the chart, the first 17 measurements, which are mostly based on astronomical measurements of the distances and redshifts of supernovas and other objects, are noticeably distinct from the last six measurements, which are based on analyses of the cosmic microwave background (CMB). In particular, the distance/redshift studies give an average value of 73.0, with a standard error of 1.0; the CMB-based studies give an average value of 67.5, with a standard error of 0.5. Needless to say, these two values are utterly incompatible: each lies far outside the error bars of the other. If we focus on the study with the smallest error bar in each group, namely the Type Ia supernova study (REF2) and the Planck data study (REF23), the contrast is even more stark: their difference is 6.5 standard deviations in terms of the Type Ia study standard error, and 11.7 standard deviations in terms of the Planck study standard error.
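The figures in the previous paragraph can be checked directly from the table. Here is a brief Python sketch reproducing the comparison of the two most precise studies:

    # The most precise study from each group in the table above (km/s/Mpc).
    type_ia, type_ia_err = 73.29, 0.90   # Type Ia supernovas (REF2, 2023)
    planck, planck_err = 67.4, 0.5       # Planck CMB data (REF23, 2020)

    gap = type_ia - planck
    print(f"gap = {gap:.2f} km/s/Mpc")
    print(f"{gap / type_ia_err:.2f} standard deviations, relative to the Type Ia error")
    print(f"{gap / planck_err:.2f} standard deviations, relative to the Planck error")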

Additional recent data and background are given in this November 2023 arXiv preprint, this November 2024 Scientific American article and this Wikipedia article. The chart above is analogous to a chart in the Scientific American article, but was independently constructed based on data from the original papers.

Are the physical models wrong?

While each of these teams is hard at work scrutinizing its methods and refining its results, researchers are increasingly considering the unsettling possibility that one or more of the underlying physical theories is just plain wrong, at least on the length and time scales involved.

Key among these theories is the Lambda-CDM model of Big Bang cosmology. Yet physicists and cosmologists are loath to discard this model, because it explains so much so well:

  • The cosmic microwave background radiation and its properties.
  • The large-scale structure and distribution of galaxies.
  • The present-day observed abundances of the light elements (hydrogen, deuterium, helium and lithium).
  • The accelerating expansion of the universe, as observed in measurements of distant galaxies and supernovas.

As Lloyd Knox, a cosmologist at the University of California, Davis, explains,

The Lambda-CDM model has been amazingly successful. … If there’s a major overhaul of the model, it’s hard to see how it wouldn’t look like a conspiracy. Somehow this ‘wrong’ model got it all right.

Various modifications to the Lambda-CDM model have been proposed, but while some of these changes partially alleviate the Hubble tension, others make it worse. None is taken very seriously in the community at the present time.

Adam Riess, an astronomer at Johns Hopkins University in Baltimore, Maryland, who shared the 2011 Nobel Prize in physics for the discovery of the accelerating expansion of the universe, is hopeful that additional experimental results will close the gap between the competing values. Nonetheless, he ventures, “My gut feeling is that there’s something interesting going on.”

Early dark energy?

More recently, Riess, together with Johns Hopkins physicist Marc Kamionkowski, has proposed that dark energy, long thought to be uniform throughout the universe and constant over time, might have been stronger in the early universe. In particular, they propose that there existed some unknown component of matter-energy, called “early dark energy,” contributing roughly 10 percent of the total energy density of the universe at early times, which later decayed away. This would bring theory into closer agreement with the data.

As Kamionkowski explains,

The most obvious form for early dark energy to take is a field, similar to an electromagnetic field, that fills space. This field would have added a negative-pressure energy density to space when the universe was young, with the effect of pushing against gravity and propelling space toward a faster expansion. There are two types of fields that could fit the bill. The simplest option is what’s called a slowly rolling scalar field. This field would start off with its energy density in the form of potential energy — picture it resting on top of a hill. Over time the field would roll down the hill, and its potential energy would be converted to kinetic energy. Kinetic energy wouldn’t affect the universe’s expansion the way the potential energy did, so its effects wouldn’t be observable as time went on.
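To make the “slowly rolling scalar field” picture a bit more concrete, here is a toy numerical sketch. It is emphatically not the actual early dark energy model studied in the literature: the quadratic potential, the matter-dominated background, and all parameter values below are illustrative assumptions, chosen only to show the qualitative behavior described in the quote, namely a field that is frozen (acting like dark energy, with equation of state $w \approx -1$) while the expansion rate is large, and whose energy converts to kinetic form and dilutes away once it begins to roll:

    # Toy sketch of a slowly rolling scalar field phi with potential
    # V = (1/2) m^2 phi^2 (units with m = 1), evolving in a matter-dominated
    # background with Hubble rate H(t) = 2/(3t). Illustrative only; this is
    # not the early dark energy model of the technical literature.

    def hubble(t):
        return 2.0 / (3.0 * t)               # matter-dominated expansion rate

    def derivs(t, phi, v):
        # Equation of motion: phi'' + 3 H phi' + m^2 phi = 0  (with m = 1)
        return v, -3.0 * hubble(t) * v - phi

    def rk4_step(t, phi, v, dt):
        # One fourth-order Runge-Kutta step for the system (phi, phi').
        k1p, k1v = derivs(t, phi, v)
        k2p, k2v = derivs(t + dt / 2, phi + dt / 2 * k1p, v + dt / 2 * k1v)
        k3p, k3v = derivs(t + dt / 2, phi + dt / 2 * k2p, v + dt / 2 * k2v)
        k4p, k4v = derivs(t + dt, phi + dt * k3p, v + dt * k3v)
        return (phi + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p),
                v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

    t, phi, v, dt = 0.01, 1.0, 0.0, 0.001    # start with the field at rest
    checkpoints = [0.1, 1.0, 10.0, 50.0]
    while checkpoints:
        phi, v = rk4_step(t, phi, v, dt)
        t += dt
        if t >= checkpoints[0]:
            kinetic, potential = 0.5 * v ** 2, 0.5 * phi ** 2
            w = (kinetic - potential) / (kinetic + potential)   # equation of state
            print(f"t = {t:6.2f}   energy = {kinetic + potential:.3e}   w = {w:+.3f}")
            checkpoints.pop(0)

Running the sketch shows $w$ near $-1$ at early times (while Hubble friction keeps the field frozen on its potential hill), followed by oscillations between kinetic and potential energy and a steadily shrinking energy density once the field rolls down.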

For additional details and discussion, see this 2024 Scientific American article, this 2023 arXiv preprint, this 2019 Scientific American article, this 2020 Quanta article, this 2020 Nature article and this Wikipedia article.

Caution

In spite of the temptation to throw out or substantially revise the currently accepted standard model or the Lambda-CDM Big Bang cosmology, considerable caution is in order. After all, in most cases anomalies such as this are eventually resolved, usually by tracing them to some defect in the experimental process or to a faulty application of the underlying theory.

A good example of an experimental defect is the 2011 announcement by Italian scientists that neutrinos emitted at CERN (near Geneva, Switzerland) had arrived at the Gran Sasso laboratory (in the Apennine Mountains of central Italy) 60 nanoseconds sooner than if they had traveled at the speed of light. If upheld, this finding would have constituted a violation of Einstein’s special theory of relativity. As it turned out, the experimental team subsequently discovered that the discrepancy was due to a loose fiber optic cable that had introduced an error in the timing system.

A good example of misapplication of underlying theory is the solar neutrino anomaly, namely a discrepancy between the number of observed neutrinos emanating from the interior of the sun and what had been predicted (incorrectly, as it turned out) based on the standard model. In 1998, researchers discovered that the anomaly could be resolved if neutrinos have a very small but nonzero mass; then, by a straightforward application of the standard model, the flavor of some of these neutrinos could change en route from the sun to the earth, thus resolving the discrepancy. Takaaki Kajita and Arthur McDonald received the 2015 Nobel Prize in physics for this discovery.
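The flavor-change mechanism can be illustrated with the standard two-flavor vacuum oscillation formula. This is a simplified stand-in for the full three-flavor treatment used in the actual analyses, and the mixing angle, mass splitting, baseline and energy below are round illustrative numbers, not fitted experimental values:

    from math import sin, radians

    def flavor_change_probability(theta_deg, dm2_ev2, length_km, energy_gev):
        """Two-flavor vacuum oscillation probability.

        P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
        """
        return (sin(2.0 * radians(theta_deg)) ** 2
                * sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2)

    # Round illustrative numbers: a 1 GeV neutrino traveling 10,000 km.
    # If dm2_ev2 were zero (massless neutrinos), the probability would be zero.
    print(flavor_change_probability(theta_deg=45.0, dm2_ev2=2.5e-3,
                                    length_km=10000.0, energy_gev=1.0))

Note that the probability vanishes if the mass splitting is zero, which is why a small but nonzero neutrino mass is essential to the resolution.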

In any event, sooner or later some experimental result may be found that fundamentally and irrevocably upends some currently accepted theory, either a specific framework such as Lambda-CDM Big Bang cosmology, or even the foundational standard model of physics. Will the “Hubble tension” anomaly ultimately be the straw that breaks the camel’s back? Only time will tell.
