The Hubble Constant - has some 'splainin to do

INTRO - THE ELUSIVE VALUE OF THE HUBBLE CONSTANT
The evidence that our universe is expanding at an accelerating rate is irrefutable. But no one yet knows precisely how fast that expansion is happening; the rate is represented by a term called the Hubble constant.

Not knowing the precise value of the Hubble constant is vexing for cosmologists, because it's one of the most important numbers in all of cosmology. For example, knowing its value can lead us to a better understanding of the physical origins of the very small irregularities (anisotropies) in the Cosmic Microwave Background (CMB), the fingerprint of the early universe.

Knowing the Hubble constant would also allow cosmologists to constrain the ratio of dark energy's pressure to its energy density; dark energy being an unknown energy that is believed to permeate spacetime and drive its expansion.

It seems every year new papers are published using ever more accurate measurements to arrive at more precise values, only to find those values are at odds with the values posited by subsequent studies using different, yet equally accurate, measurements. And these discrepancies show a pattern: values of the Hubble constant derived from "local" measurements are consistently higher than those inferred from the most distant measurement of all, the cosmic microwave background (CMB). The CMB is the fingerprint of the early universe at a time before stars existed.

The farther into space we look, the deeper into the past we're seeing. This is because the light from distant celestial objects takes time to reach us. The light from a star a million light years away takes a million years to arrive here on Earth. Therefore, when we see that star, we're actually seeing the star as it was a million years ago.

This proves useful when determining values of the Hubble constant because it allows us to see expansion rates through time. And since we expect the Hubble constant to be, well, constant, we'd expect to see the same expansion rate throughout time. The farthest back we can look is the CMB, which represents the universe about 378,000 years after the big bang. Astronomers aren't able to directly measure the Hubble constant from the CMB, but they can infer it indirectly.

And what they're finding is CMB-inferred values are consistently lower than values derived from 'local' distance indicators like Cepheid variable stars, and type Ia supernovae.

(For those interested, I discuss more thoroughly how astronomers use "local" distance indicators and the CMB to determine expansion rates at the end of this post as really exciting additional reading.)

That discrepancy between measurements from the distant past (CMB-inferred values) and more 'current' values from 'local' distance indicators is calling into question just how constant the Hubble constant really is. It could be that this discrepancy, or "tension" as it is often called, is the manifestation of a combination of small computational errors and/or unaccounted-for local effects.

But just this year, two more papers were published that utilized improved measurement techniques, and their results only serve to reinforce this tension. One paper, using an improved method of calibrating local distance indicators, concluded with a Hubble constant value of 73.24 (± 1.74) km/sec/Mpc (Riess et al., 2018).

That means a galaxy one megaparsec (Mpc) away from us is receding at roughly 73.24 kilometers per second, and for every additional megaparsec of distance, the recession speed increases by another 73.24 km/sec.
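To make that concrete, here's a minimal back-of-the-envelope sketch in Python of Hubble's law, v = H0 × d. The H0 value is the Riess et al. (2018) figure quoted above; the distances and the helper name recession_velocity are just mine for illustration.

```python
# A back-of-the-envelope sketch of Hubble's law, v = H0 * d.
# The H0 value is the one from Riess et al. (2018); distances are illustrative.

H0 = 73.24  # km/s per megaparsec (Mpc)

def recession_velocity(distance_mpc, h0=H0):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return h0 * distance_mpc

for d in (1, 10, 100):
    print(f"A galaxy {d:>3} Mpc away recedes at ~{recession_velocity(d):,.0f} km/s")
```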

The other paper's model-dependent, CMB-inferred value was, as with CMB papers before it, much lower: 67.4 (± 0.5) km/sec/Mpc (The Planck Collaboration, 2018). Again matching previous studies, where the higher rates consistently come from local measurements of the Hubble constant, and the lower ones from CMB-inferred values.

Paradoxically, the tension only grows more significant as the accuracy of the measurements improves. Taken at face value, the discrepancy, or 'tension', between these two results works out to more than 3 standard deviations (> 3σ).
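For the curious, the "sigma" quoted for a tension like this is usually just the gap between the two central values divided by their combined uncertainty (the individual uncertainties added in quadrature). A quick sketch using the two numbers above; the variable names are mine:

```python
from math import sqrt

# The two Hubble constant values quoted above, with their 1-sigma uncertainties
h0_local, err_local = 73.24, 1.74  # Riess et al. (2018), local distance ladder
h0_cmb, err_cmb = 67.4, 0.5        # Planck Collaboration (2018), CMB-inferred

difference = h0_local - h0_cmb                    # 5.84 km/s/Mpc
combined_error = sqrt(err_local**2 + err_cmb**2)  # uncertainties added in quadrature

print(f"Tension: {difference / combined_error:.1f} sigma")  # roughly 3.2 sigma
```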

If this tension isn't due to computational error, or some overlooked anomaly, then this discrepancy may be an actual feature of the universe. If so, it calls into question our understanding of the universe. Because as of now, the accepted standard model of cosmology that describes the evolution of the universe from inflation to the present assumes the dark energy driving the acceleration is constant in time.

The tension described above suggests it may not be. It's telling us the expansion rate inferred from the early universe doesn't match the one we measure locally today. The stuff we believe is driving the expansion would then not be static in nature, but dynamic. That stuff is dark energy.

Dark energy is kind of at the heart of the currently-accepted standard model of cosmology. That model is called, and don't let this fancy name throw you, the "Lambda-Cold Dark Matter" model (aka ΛCDM). It's basically a model of the universe that, as mentioned above, describes very accurately what we see across the universe. It describes a universe that has evolved since the earliest moments of inflation to now via bottom-up, hierarchical structure formation. This means all the structure we see in the universe began with "small" stuff (galaxies), which evolved into larger stuff (galaxy clusters), which evolved into yet bigger stuff (superclusters), and so on... I think walls or filaments are next in line, but don't quote me on that. Anyway, all of this is described by the "Cold Dark Matter" part. The CDM of ΛCDM.

The Lambda part, the Λ in ΛCDM, describes a universe whose accelerating expansion is driven by a constant. Let me repeat that... a constant. Doh!

This is a bit of a tangent, but one probably worth taking at this point. That lambda term has its roots in Einstein's field equations; the equations at the very heart of his General Theory of Relativity. When he completed his field equations, he realized they described a universe that would eventually collapse in on itself into a singularity; a point of infinite density containing all of spacetime. Actually it isn't even a point, any more than an electron is a "point". It's weird, but there you go.

Anyway, Einstein didn't like the idea of a collapsing universe, and neither did the scientific community at the time. So he added the lambda term (Λ) to his equations to counter the effect of gravity and stabilize the universe so that it was static (neither collapsing nor expanding). It was dubbed the cosmological constant, with constant being the key term here.

Since those days, we've learned the universe is expanding, and doing so at an accelerating rate. Today's lambda term (Λ), which is essentially the cosmological constant, still serves as a counter to gravity, but rather than balancing it to hold the universe static, it overwhelms gravity and drives expansion. The simplest candidate for dark energy is this cosmological constant; the energy of space itself. We'll come back to this shortly.

Since the cosmological constant is a fixed value, it implies that dark energy always has the same density and the same repulsive strength. And since it's the energy of space itself (a vacuum energy, as we'll discuss), more space means more dark energy: as space expands, the total amount of dark energy in the universe grows, causing the expansion to accelerate.
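A rough way to see why a constant energy density produces runaway expansion: if dark energy never dilutes, the expansion rate settles toward a constant and the scale factor grows exponentially. A minimal sketch, with H and t in made-up units purely for illustration:

```python
import math

# If dark energy's density never dilutes, the expansion rate H settles to a
# constant, and the scale factor grows exponentially: a(t) = exp(H * t).
# H and t here are in arbitrary, purely illustrative units.
H = 1.0

for t in range(6):
    a = math.exp(H * t)   # how much all distances have stretched by time t
    volume = a ** 3       # a comoving box's physical volume grows as a cubed
    print(f"t={t}: distances x{a:9.1f}, total dark energy in the box x{volume:13.1f}")
```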

Assuming the cosmological constant is indeed constant, as the name implies, spacetime will keep expanding at an accelerating rate, and ever more distant galaxies will end up receding faster than light (which should more accurately be called the speed of causality) can cross the growing gap, slipping permanently beyond our view. And if dark energy turned out to be even more repulsive than a true constant (the so-called "phantom" case), the expansion would eventually tear apart galaxies, solar systems, atoms, and even separate elementary particles like quarks from one another in what has been dubbed "The Big Rip" (no association to baked beans).

That's a scary thought (the Big Rip, not the beans), especially the whole separating-quarks scenario, since the force binding quarks together doesn't weaken as you pull them apart; if anything, it grows stronger. That would be an unstoppable force meeting an immovable object if ever there was one. But I digress...

Ricky has grown a little suspicious of Lambda, er, Lucy.

But as discussed earlier, the discrepancy between values of the Hubble constant suggests the cosmological constant may not be constant after all, and that calls into question a great many things we thought we understood about our universe.

While the CDM in the standard ΛCDM model seems to be doing a fine job of explaining what we observe across the cosmos, that Λ term has some serious 'splainin to do.

Lambda, which as we've said is essentially the cosmological constant, represents the energy density of the vacuum of space. It is the "anti-gravity" driving expansion; a vacuum energy. And that vacuum energy is the simplest candidate for what dark energy actually is. But don't ask me to describe its fundamentals, because I can't. Fortunately for my non-existent ego, no one can.

What we (as in they) do know is that its repulsive strength is determined by the ratio of its pressure to its energy density. This is called dark energy's Equation of State, or EoS for short. The lower-case Greek letter ω (often written w) has been adopted to represent the EoS parameter, and for dark energy its value is -1, or ω = -1. That negative number basically means dark energy has a strong negative pressure, which translates into a repulsive effect that counters gravity. And dark energy dominates the energy budget of the standard model of cosmology (ΛCDM).
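One way to see why ω matters so much: it controls how a component's energy density dilutes as space expands, via ρ ∝ a^(-3(1+ω)). Here's a small sketch comparing matter (ω = 0), radiation (ω = +1/3), and dark energy (ω = -1); the function name density_scaling is just mine:

```python
# How a component's energy density dilutes as space expands, given its
# equation-of-state parameter w:  rho(a) = rho_today * a ** (-3 * (1 + w)),
# where a is the scale factor (a = 1 today, a = 2 when distances have doubled).

def density_scaling(a, w):
    """Energy density at scale factor a, relative to today's density."""
    return a ** (-3 * (1 + w))

for name, w in [("matter", 0.0), ("radiation", 1 / 3), ("dark energy", -1.0)]:
    print(f"{name:>12} (w = {w:+.2f}): density is {density_scaling(2.0, w):.3f} "
          f"of today's once distances have doubled")
```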

And that's a problem if the Hubble constant values coming in over recent years are in fact revealing an actual feature of the universe, and aren't the result of computational errors and the like. Because it would mean that ω = -1 isn't a fixed value. It would mean ω changes with time; it would mean dark energy is dynamic, not static. And that would mean the standard model needs to be reassessed.

Energy components could be modified or added to the standard model, but such tinkering could cause more problems than it solves. For example, cosmologists could change the number of neutrino species in the model, but doing so would be difficult to justify physically. Not to mention, from my amateur point of view, manipulating a model to fit an outcome, rather than devising a model that predicts one, comes off a bit spurious. But I'm probably wrong about that assumption.

Though on this point, it should be mentioned that while the accelerated expansion of spacetime is an observed fact, the dark energy that explains it is nothing more than our interpretation of what we observe. Just some food for thought.

At any rate, there doesn't seem to be any kind of singularly satisfactory astrophysical solution to the discrepancy between measures of the Hubble constant. So maybe it means dark energy (if it exists) isn't vacuum energy at all, but rather some kind of dynamic field instead.

Perhaps something like the dynamic scalar field described in what's called the quintessence model, which I discuss in more detail in my October 2017 post, The Nature of Dark Energy. Such a model would replace the cosmological constant in the currently-accepted standard model.

So is it time to toss the standard model in favor of something new?

Not quite yet, my friends. Just last month (July 2018), the Planck Collaboration released a new and improved version of the data acquired by the Planck satellite. This new data set has been dubbed the Planck Legacy Archive, and it's available to the public. Not that I'd know what I'm looking at anyway.

Anyway, it's the 3rd and final rendering of the Planck data first released in 2013 and revamped in 2015. For this 3rd round, the data have been completely reprocessed and cross-checked multiple times and in multiple ways. As it turns out, it strongly supports the standard model, and is even consistent with simple models of inflation.

In fact, it supports the standard model not only when the temperature and polarization of the CMB are analyzed together, but when temperature and polarization are each analyzed individually. For those interested, more can be read on polarization in my 3-part series on gravitational waves, starting with this post, Ripples in Spacetime - Part I.

As mentioned, the Legacy Archive reveals robust support for ΛCDM, and makes the notion of injecting new physics, or of writing off CMB-inferred values of the Hubble constant as error-prone, far more challenging for cosmologists.

But there are still some glaring issues with dark energy. For one, quantum field theory predicts a vacuum energy some 10^120 times stronger than what is inferred from observations. That kind of discrepancy clearly tells us that we're missing something fundamental. If dark energy's repulsive force were that strong, the universe would have been ripped apart eons ago.

Even astrophysicist Adam Riess, who earned the Nobel Prize for discovering the accelerating expansion of the Universe through observations of distant supernovae, is starting to wonder if there isn't something fundamentally misunderstood about dark energy. He's even open to the idea that dark energy may not exist, but isn't convinced of that. As the old xenophobe Aristotle once said, "It is the mark of an educated mind to be able to entertain a thought without accepting it."

The jury is still out, but there's good news on the horizon. The Dark Energy Spectroscopic Instrument (DESI) is slated to begin observations in 2019, and it will create an extraordinarily detailed 3-D cosmic map that may reveal things we've overlooked, or have simply been unaware of. Things that might solve this great conundrum.

I originally intended to continue this blog post by presenting some alternative hypotheses to the standard model of cosmology, as well as some possible solutions to the Hubble constant tension. However, I have already touched upon these points in previous posts, and when I posted to my Farcebook page my intention to write this blog, a good friend of mine who makes documentaries on important topics in cosmology mentioned he had just wrapped up a new one discussing the Hubble constant tension. So rather than continue here, I plan to simply share a link to his new documentary when it goes live online. I'll share it here, on Farcebook, and on my Google+ profile for those interested.

For now, I want to leave my 2 readers with this very important consideration. EVERYTHING we've discussed here is built on the Friedmann-Lemaître-Robertson-Walker metric (FLRW metric), which assumes the universe is homogeneous and isotropic, and, as applied in the standard model, that the universe is spatially flat; the curvature of space is zero, zip, nada, flat as a pancake.
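To put a little flesh on what "flat" means in practice: in the Friedmann equation that comes out of the FLRW metric, spatial curvature appears as its own density term (Omega_k), and assuming flatness amounts to setting it to zero, so the other density parameters must sum to one. A hedged sketch, using rounded Planck 2018 best-fit densities purely for illustration:

```python
from math import sqrt

# Friedmann equation (radiation ignored; it only matters very early on):
#   H(a)^2 = H0^2 * (Omega_m / a**3 + Omega_k / a**2 + Omega_lambda)
# "Flat" means Omega_k = 0, so the other densities must add up to exactly 1.
# Density values below are the Planck 2018 best-fit numbers, rounded.

H0 = 67.4             # km/s/Mpc (Planck Collaboration, 2018)
Omega_m = 0.315       # matter (ordinary + dark)
Omega_lambda = 0.685  # dark energy
Omega_k = 1.0 - Omega_m - Omega_lambda  # curvature term; zero if flat

def hubble_rate(a):
    """Expansion rate in km/s/Mpc when the scale factor was a (a = 1 today)."""
    return H0 * sqrt(Omega_m / a**3 + Omega_k / a**2 + Omega_lambda)

print(f"Omega_k = {Omega_k:.3f}  ->  flat")
print(f"H today: {hubble_rate(1.0):.1f} km/s/Mpc")
print(f"H when the universe was half its current size: {hubble_rate(0.5):.1f} km/s/Mpc")
```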

As I mention in my blog post, The Multiverse from the Many-Worlds Perspective, I'm not convinced we live in a flat universe. But take that with a grain of salt, because I'm just a schmo with no credentials.

Having said that, and if the idea of a hyperbolic universe doesn't interest you, then consider this... who says spacetime expansion has to be spatiotemporally uniform? And to give that some oomph, consider this too... averages, by their very nature, erase details. While averages are important for conceptualizing something otherwise overwhelming, once they're established, it behooves us to go back and look at the details as fundamental parts of the big picture we've painted with such broad strokes.

As always, thanks for reading...
