Climate Change - Carbon Dioxide has always been a Primary Driver

(Originally posted March 29, 2017 on Blogger)

"Current carbon dioxide levels, around 400 parts per million are still very small compared to the averages over geological history, when thousands of parts per million prevailed, and when life flourished on land and in the oceans."

This was one of several bullet points outlined in a letter with an attached petition sent to the POTUS recently by a retired atmospheric physics professor from MIT. It may be an entirely true statement; cunningly so. I intend to address it here, because entirely true statements can be entirely misleading if not given in context.

Providing proper context is going to require we come up to speed on a few concepts; concepts this retired MIT professor, Richard Lindzen, is either purposely ignoring or does not understand. My hope is that by the end of this blog we'll all be able to see just how incredibly complicated Earth dynamics are, and come to realize that simple statements by Lindzen, Al Gore, Monckton, and others are almost embarrassing in their naivete.

I believe it may be best to start with conceptualizing geologic time, then move into some specifics of Earth's climate.

From an experiential standpoint, our cognitive limitation as a relatively short-lived species precludes us from truly grasping geologic time. Consider that we tend to base our perspective, both as a society and as individuals, on experience, and because of this we are ipso facto ill-equipped to fully comprehend large timescales... or anything that deals in large numbers, for that matter. But that isn't to say we can't appreciate them.

Our ability to appreciate large timescales might seem irrelevant, but I believe it has meaningful philosophical application: such appreciation lends itself to an enhanced perspective of where we stand in the grand scheme of our planet's environment, and it helps us recognize that though the environmental changes we experience may arguably be insignificant over geologic time, that doesn't necessarily undermine their significance on a human timescale. I don't want to burst any bubbles out there, but we are not likely a species that will survive over geologic time.

Do I know this? Of course not. Not having all the answers is axiomatically human. So long as we relentlessly pursue the answers to the things we do not know, then we're doing science. To ignore what we do not know, or worse still, claim we know what we do not know, would be intellectually vacuous.

As Einstein once wrote with such blunt eloquence: "As our circle of knowledge expands, so too does the circumference of darkness surrounding it."

THOUGHTFUL CONSIDERATIONS OF GEOLOGIC TIME
Long-term Processes
First-year chemistry students are often taught that carbonates are insoluble. The solubility rules chart (right), which students are required to know, lists carbonates in the insoluble column. Yet the dissolution of carbonate-bearing minerals and rocks such as dolomite and limestone readily occurs in nature on large timescales. In fact, the solubility of these carbonate-rich minerals and rocks has led to the formation of the spectacular features found in karst topographies across the planet. If you've ever seen the stunning stalagmites, cave curtains, columns, stalactites, and other features inside caves, then you've seen some of the beautiful structures formed in karst topography.

So even though carbonates may seem insoluble in the lab, on a grander timescale they're not insoluble at all. In fact, a grander timescale isn't even necessary; by geologic standards, the timescale of karst formation is relatively short. All of it has formed within the last million years or so; 'modern times' in geologic terms. But even modern times are difficult for us to truly get a cognitive handle on.
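For the numerically inclined, here's a minimal back-of-the-envelope sketch in Python of just how "insoluble" calcite really is. I'm assuming the standard textbook solubility product (Ksp of roughly 3.3e-9 at 25°C) and ignoring the CO2-charged groundwater chemistry that makes real karst water far more aggressive, so treat the numbers as illustrative only:

import math

# How "insoluble" is calcite (CaCO3), really?
# Assumes the textbook solubility product at 25 C and ignores
# CO2-enhanced dissolution; a sketch, not a geochemical model.
KSP_CALCITE = 3.3e-9          # (mol/L)^2, approximate
MOLAR_MASS_CACO3 = 100.09     # g/mol

# Pure-water equilibrium: [Ca2+] = [CO3 2-] = sqrt(Ksp)
ca = math.sqrt(KSP_CALCITE)                   # mol/L
mg_per_liter = ca * MOLAR_MASS_CACO3 * 1000   # mg of CaCO3 per liter
print(f"Dissolved CaCO3 at equilibrium: ~{mg_per_liter:.1f} mg/L")

# A few mg/L times geologic volumes of groundwater times geologic time
# removes a lot of rock. Hypothetical throughput: 1 billion L/yr.
liters_per_year = 1e9
years = 500_000
tonnes = mg_per_liter * liters_per_year * years / 1e9   # mg -> tonnes
print(f"Rock removed over {years:,} years: ~{tonnes:,.0f} tonnes")

A few milligrams per liter sounds like nothing, until geologic time multiplies it into millions of tonnes of missing limestone.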

Karst topography deep in the Guangxi Province of China. Photo by Trey Ratcliff

S-waves propagating through the 'solid' mantle.

Speaking of geology, many first-year geology students are taught that Earth's mantle (asthenosphere included) is solid. It's generally taught that this conclusion has been inferred from the fact that shear waves (S-waves) generated at the focus of an earthquake only propagate through solid media, not liquid ones.

And though the mantle exhibits attributes of a solid over a human timescale, it actually behaves as a highly-viscous fluid over geologic time; the behavior of which is described by the fascinating physics of diffusion creep viscosity. This viscous characteristic of mantle rock lends itself to the phenomena of convection and gravity-assist trench pull (aka "slab pull"); both primary drivers of the well-established theory of plate tectonics.

Consider also that salt domes--massive impermeable geologic features consisting of solid mineral evaporites--behave plastically over geologic time as well. They've 'oozed' up from the depths like plumes of oil rising in a lava lamp, penetrating the surrounding rock, which itself behaves ductilely over time, to reach the surface.

The mechanics of all this can be understood in terms of Rayleigh-Taylor-instability-driven diapirism, but there's no need to get bogged down in the details of geophysics here, because the point of all these examples is to help us recognize that our collective planetary-scale perspective has been, and continues to be, skewed in large part by that experiential cognitive limitation I mentioned above.

It's an unfortunate reality that invariably goes along with being an extremely short-lived newcomer species on this planet. Much of what we have come to understand about our planet, as well as our universe, relies almost entirely on inferences arrived at through thoughtful inductive or deductive reasoning.

Jashak salt dome in Bushehr Province, Iran.

Large Numbers

We're currently living in the Holocene epoch. Geologists demarcated its beginning at about 11,700 years before present. This isn't even a blip on the geologic timescale. To put the start of the current epoch into perspective, consider that humans (and their pet dogs) were only just beginning to employ wild animal and grain domestication; processes that had yet to lead to sedentary culture and the cities that grow out of it.

This was a time about 8,000 years before the first Egyptian Dynasty; over a thousand years before the founding of Jericho, and even before the Sahara had become a desert. It was a time when the entire global population was no bigger than that of the city of San Jose, California. Saber-toothed cats and woolly mammoths still roamed Earth, the Clovis culture of North America had already come and gone, and the founding of the ultra-ancient site Göbekli Tepe was still a full two centuries away.

Yet the Holocene is just a relatively short-lived epoch that exists within the larger Quaternary period. The Quaternary began an incredible 2,588,000 years ago. To put that into perspective, consider that our species did not even exist back then. Since historical examples do not apply here, let's consider the number 2,588,000 itself. 2,588,000 minutes is nearly 5 years. If it represented inches it'd be nearly 41 miles. If you put 2.588 million dollar bills end-to-end lengthwise, they'd measure over 250 miles.

Inferred estimated scale of the Chicxulub impactor.

And yet the Quaternary is an extremely young period within a much larger era known as the Cenozoic. The Cenozoic extends back a mind-boggling 66 million years, to when the Chicxulub impactor left its mark and large mammals were only beginning to dominate Earth's landscapes. For perhaps a better modern-day perspective, consider that 66 million minutes is over 125 years, and that if you had 66 million dollars and spent $50 per minute, it'd take you over two and a half years to spend it all.

And yet the Cenozoic is just a small era within the expansive Phanerozoic eon, which extends back a confounding 541 million years. At the start of this eon, primates had yet to diverge from other mammals on the evolutionary tree; indeed, mammals had yet to evolve. If evolution isn't your thing, then consider that 541 million minutes is just over 1,000 years, and that if you had $541 million in a savings account paying an annual simple interest of just 1%, you'd earn an additional annual 'income' of $5,410,000 on that single percent of interest; a lot more over the years if that interest were compounded.

And yet the Phanerozoic eon is merely one of three eons, all of which follow the unimaginably distant Hadean eon that ended 4 billion years ago. Consider that number for a moment: 4 billion. That's a number even billionaires don't fully comprehend. The Federal gift tax aside, if a billionaire gave you a million dollars, it would have the same proportional impact on his or her wallet as a person with $1,000 giving you $1. Let's consider time: 4 billion minutes ago, small tribes settled along the flood banks of the Nile were just beginning to develop into identifiable culture groups in what was then still predynastic Egypt. There were no Sumerians, Babylonians, Greeks, Minoans (who should be called Kaptarians), Hittites (who should be referred to as Nessites), nor Etruscans. It was a time before monotheism, writing (of language), and bronze.

Human history is but a speck upon a speck clinging to the very edge of geologic time. The more thoughtfully we consider this fact, the more profound our understanding of the mechanics of environmental change and our place within that change might become. Within this understanding, overly-simple statements citing inferred averages over geologic timescales in the deep past become recognizably naive, and if not innocently naive, then purposely deceitful.

If geologic time were equated to a 24-hour clock, then modern humans arrived just seconds before midnight.

Worlds Apart
The world we know today bears little resemblance to the world as it was during times alluded to by Dr. Lindzen. Here are Robinson projections of how the world likely looked at the beginning and end of the Cryogenian period (720 - 635 million years ago) respectively:

(Source: Sedimentary Geology 294:219–232 · August 2013 - DOI: 10.1016/j.sedgeo.2013.05.016)

(Source: Ibid.)

As we'll get to later, the positioning of land masses, and the contiguous sizes of those land masses, affect weathering by way of changes in runoff and land temperature. This is important because, unlike the human-scale carbon cycle first-year students learn, the geologic-timescale carbon cycle heavily involves interactions between the atmosphere and carbon stored in rocks.

Consider that in addition to continental position and size, we must also interpolate that information to recreate changes in relief, since surface area and slope angles affect rates of physical erosion and silicate weathering. When we get to the modelling of past atmospheric CO2 means, we'll find this sort of consideration baked into complex weathering-uplift parameters that, even in their complexity, must still be simplified down to a mean global relief.

If this is already sounding like we're on a treacherously complicated path to simply arriving at an understanding of just how underwhelming Lindzen's bullet point is, then let's strap on our seatbelts my friends, because that path is going to get a whole lot more, um, interesting.

Here's how land masses on Earth likely looked at the start of the Cambrian:

(Source: Sedimentary Geology 294:219–232 · August 2013 - DOI: 10.1016/j.sedgeo.2013.05.016)

The bulk of land was at high latitudes, where it'd take a relatively high mean global surface temperature to keep it from freezing over; something that would not have been conducive to the "Cambrian explosion", when most major animal phyla evolved, as evidenced by the fossil record. And for the record, at that time none of those phyla had branched into a Mammalia class. What life flourished then wouldn't likely fare well today. And with regard to life, flora in particular, we have to realize that the quantity and types of plants that existed in the deep past have a significant cumulative effect on weathering. Weathering is important because it affects concentrations of atmospheric CO2 over the long term.

Here are the bare-bones basics of just one way CO2 is lost from the atmosphere over geologic time: photosynthesis draws CO2 gas out of the air, and when the plant dies some of its organic carbon is buried under sediments. Meanwhile, CO2 dissolved in rainwater reacts with calcium and magnesium silicates during weathering, and the products, delivered to the sea by river transport, ultimately form calcium and magnesium carbonates that settle on ocean floors. That all started with a fern, or some pretty angiosperm somewhere. This is one way, and we haven't even looked at the ways it is gained.

Earth's land masses weren't the only things positioned drastically differently during the Cryogenian (or any other period, for that matter); the moon itself was much closer. Inferences made from evidence revealed in the geologic sedimentary records of tidal rhythmites show stronger and stronger tides the further back in time we go. The increase in tidal strength is attributed to the moon having been closer to Earth, so its tide-raising effect was greater (tidal forces scale with the inverse cube of distance). This would no doubt have had a dramatic effect on coastal weathering processes, not to mention a direct effect on tidal habitats. These two factors, land-mass and lunar positions, would have changed numerous major things about Earth's environment: locations of amphidromic points were different, and synoptic climate patterns were different (imagine conditions where monsoons could cross the equator, for instance). There is a whole lot involved in simply extrapolating atmospheric CO2 levels in the deep past. I'll get into this more later when I question the validity of some CO2 models, not all of which agree with the thousands of ppm Lindzen postulates without question.

Earth's surface and lunar proximity weren't the only things different back then; the Sun itself was different, in that its energy output was lower: as little as 70% of today's in Earth's earliest eons, and roughly 94% by the Cryogenian; figures extrapolated from equations describing G-type main-sequence stars. It has been known since the early 1980s that the Sun, and stars like the Sun, brighten over their main-sequence lifetime as fusion drives stellar core contraction [Gough, D. O. (1981)]. We'll get to the Faint Young Star Paradox pondered by the late Carl Sagan as it pertains to this blog later.
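If you'd like to see where those percentages come from, here's a small Python sketch of the Gough (1981) luminosity relation the paragraph above leans on; a first-order approximation of main-sequence solar brightening, not a full stellar-evolution model:

# Gough (1981): L(t) = L_now / (1 + (2/5) * (1 - t / t_now)),
# where t is the Sun's age and t_now ~ 4.57 Gyr.
T_NOW_GYR = 4.57

def relative_luminosity(age_gyr: float) -> float:
    """Solar luminosity at a given solar age, as a fraction of today's."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / T_NOW_GYR))

for mya in (4_000, 700, 541, 66, 0):   # millions of years before present
    age = T_NOW_GYR - mya / 1000.0
    print(f"{mya:>5} Mya: Sun at ~{relative_luminosity(age):.0%} of today's output")

Run it and you get roughly 74% four billion years ago (the formula bottoms out near 70% at the Sun's birth), about 94% during the Cryogenian, and 99%+ for the entire Cenozoic.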

There were other major differences with Earth in the periods Lindzen alludes to, such as the geometry of Earth's orbit and orientation relative to the Sun. Known as the Milanković cycles (often spelled Milankovitch), these change through time in three key ways: the tilt of Earth's axis, the shape of Earth's orbit, and the wobble of Earth's axis as it orbits the Sun (formally known as obliquity, eccentricity, and precession respectively). These motions cyclically shift over time periods measured in tens of thousands of years.

These movements have had dramatic effect on the planet's climate schemes over geologic time, particularly with regard to Earth's axial tilt. Formally called obliquity, Earth's axial tilt gradually shifts over a ~41,000 year cycle between 22.1° and 24.5°. We're currently positioned at 23.44°.

Obliquity is the tilt of Earth's axis: the angle between Earth's equatorial plane and its orbital (ecliptic) plane, which shifts gradually over time.

This tilt shift affects the length and intensity of seasons. Decreased tilt narrows the band of direct solar radiation, decreasing the solar energy delivered to higher latitudes, as sunlight must penetrate the atmosphere at greater and greater angles and spread over a larger and larger area, effectively reducing its energy input to Earth. This can trigger an ice age, which can be prolonged through amplifiers such as albedo in what becomes a positive feedback loop. Increased axial tilt has the opposite effect.
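To put rough numbers on that, here's a short Python sketch of the standard daily-mean top-of-atmosphere insolation formula, evaluated at 65°N on the summer solstice for the extremes of the obliquity cycle. I'm assuming a circular orbit (eccentricity ignored) and a solar constant of about 1361 W/m², so this is illustrative, not a proper Milanković reconstruction:

import math

S0 = 1361.0  # W/m^2, approximate solar constant

def daily_mean_insolation(lat_deg: float, dec_deg: float) -> float:
    """Daily-mean TOA insolation for a given latitude and solar declination."""
    lat, dec = math.radians(lat_deg), math.radians(dec_deg)
    x = -math.tan(lat) * math.tan(dec)
    x = max(-1.0, min(1.0, x))        # clamp for polar day / polar night
    h0 = math.acos(x)                 # sunrise hour angle
    return (S0 / math.pi) * (h0 * math.sin(lat) * math.sin(dec)
                             + math.cos(lat) * math.cos(dec) * math.sin(h0))

# At the solstice, the solar declination equals the obliquity.
for tilt in (22.1, 23.44, 24.5):
    q = daily_mean_insolation(65.0, tilt)
    print(f"obliquity {tilt:>5.2f} deg -> ~{q:.0f} W/m^2 at 65N")

The swing between the two extremes works out to roughly 40 W/m² at the latitude where ice sheets grow and die; exactly the kind of nudge that can start, or end, a glaciation.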

Above is a graph depicting the cyclic fluctuations of Earth's orbital and axial movements, as well as daily insolation (the amount of solar energy received per unit area on a horizontal surface at the top of the atmosphere), over time. The solid vertical line through the middle of the graph represents the present.

The green line is truncated because it represents data collected from the deepest ice core ever drilled: a 3,623-meter-long core extracted by an international team of scientists at Russia's Vostok Station in Antarctica. If the core were longer, that jagged green line would extend further back in time.

We can see that daily insolation at the top of the atmosphere on the day of the summer solstice at 65°N latitude today, as compared to, say, the middle of the Carboniferous period (~310 million years ago), is dramatically different.

There is little doubt that obliquity was a primary driver of long-term climate change between 1 and 3 million years ago, as its movement matched climate shifts very well on 41,000-year cycles. However, about a million years ago the dominant glacial cycle shifted from 41,000 years to 100,000 years, and scientists don't know why (Zachos et al., 2001). Dubbed the transition problem, it remains a mystery to this day.

Even plants have driven climate in the deep past. Consider the super-plant Azolla, an aquatic fern with nearly unstoppable growing power. A single acre of it can draw down as much as 6 tons of carbon from the atmosphere per year, plus a ton of nitrogen. In fact, the only thing soil scientists believe limits this plant's growth is the availability of phosphorus in the soils.

During the Eocene epoch, Azolla went ape shit and spread far and wide across the planet, as conditions for it were ripe due to continental positioning, which cut off seas and changed ocean currents. The result is now dubbed the "Azolla Event": the Azolla drew down so much CO2 from the atmosphere that it helped shift the planet from a greenhouse planet to the icehouse planet it is now. Yes, in geologic terms it is an icehouse still. In fact, we are living in an interglacial period of what is effectively a 2.5+ million-year-long ice age; more specifically, the Quaternary Glaciation.

The Quaternary Glaciation is punctuated by interglacial periods, such as the one we're in now. But as I've been trying hard to convey, this is all over geologic time, and only the most naive would take this as veritable ammunition to use against climate science as it pertains to a human timescale. Unless of course they believe they'll live for several tens of thousands of years.

The point here isn't to get off on a sarcastic tangent, but to recognize that making blanket climate comparisons between today and regimes that existed in the deep past, is incredibly obtuse.

Some reading this may be thinking, "well Rook, you're making a great case as to why we shouldn't be blaming atmospheric CO2 for any so-called climate change everyone is complaining about". At this point I'll say that line of thinking is good, as it stems from critical thinking: so far I've presented no evidence suggesting CO2 was the trigger of those deep-past changes, nor has any respectable climate scientist or geologist ever claimed it was. If I ended this blog here, then discrediting atmospheric CO2 as a prime climate driver would seem a logical and sound conclusion, my friends.

But unlike Dr. Lindzen, I plan to continue painting the backdrop of context in front of which we can play out the dramatics of what is going on with Earth's climate... minuscule as those dramatics may be on the stage of geologic time, we'll find the production is a complicated one nonetheless.

CO2 AVERAGES THROUGH GEOLOGIC PERIODS
The case for CO2 as a primary driver may seem like it's about to get worse. Rather than regurgitate what's on Wikipedia, I invite you to look up the following geologic periods, which are listed from youngest to oldest, starting with the one we're in now:

  • Quaternary - 250 ppm (14°C)

  • Neogene - 280 ppm (14°C)

  • Paleogene - 500 ppm (18°C)

  • Cretaceous - 1,700 ppm (18°C)

  • Jurassic - 1,950 ppm (16.5°C)

  • Triassic - 1,750 ppm (17°C)

  • Permian - 900 ppm (16°C)

  • Carboniferous - 800 ppm (14°C)

  • Devonian - 2,200 ppm (20°C)

  • Silurian - 4,500 ppm (17°C)

  • Ordovician - 4,200 ppm (16°C)

  • Cambrian - 4,500 ppm (21°C)

  • ...

  • Cryogenian - 1,300 ppm (5°C)

Just looking at the list above, we can see there is no systematic correspondence between the geologic record of mean climate variations (in terms of surface temperature) and inferred mean atmospheric CO2 levels.
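Don't take my word for it; here's a quick Python check (3.10+ for statistics.correlation) using the very numbers in the list above. These are coarse, model-inferred period means, so treat the result as illustrative only:

from statistics import correlation  # Python 3.10+

# Wikipedia-listed period means from the list above, youngest to oldest.
co2_ppm = [250, 280, 500, 1700, 1950, 1750, 900, 800, 2200, 4500, 4200, 4500, 1300]
temp_c  = [14, 14, 18, 18, 16.5, 17, 16, 14, 20, 17, 16, 21, 5]

r = correlation(co2_ppm, temp_c)
print(f"Pearson r between period-mean CO2 and temperature: {r:.2f}")
# A weak-to-moderate r (~0.4); nothing like the tight coupling a naive
# "CO2 alone sets temperature at all timescales" reading would demand.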

However, the data points behind these inferences come at huge intervals of 10 million years or more. So what we're looking at above are variations over geologic timescales. I can't emphasize this enough, and I'll elaborate on it later as it pertains to missing short-term fluctuations lasting only a few million years... yes, short term over geologic time is a few million years, mis amigos.

For convenience, I went ahead and added what Wikipedia lists as the mean atmospheric CO2 content and mean surface temperature for each period. As you look each one up yourself, you'll find in the upper-right portion of the page a synopsis of mean atmospheric gas content. One of these gases is CO2, which I've circled in red in the cropped screenshot of the Silurian page (below). Here we see that the Silurian period (a period I picked as an arbitrary example) had a mean atmospheric CO2 content of 4,500 parts per million (ppm), which, as stated, is roughly 16 times pre-industrial levels.

Cropped screenshot of Wikipedia's Silurian page showing mean atmospheric CO2 circled in red.

Before I continue, a word about Wikipedia...

When Wikipedia first came online, I was extremely skeptical of it. I found numerous erroneous claims, particularly with scientific and historiographical content. But in the many years since its inception, I've found the site has become a respectable resource full of invaluable information. I find the problem these days isn't so much with the content of the website, but in how people casually use it for information gathering. As most of you know, there is more to reading a Wikipedia page than reading the page.

One of the key features adopted by Wikipedia is references; a long-held requirement in peer-reviewed publications. References are denoted by those little superscript numbers we see throughout a Wikipedia page. When a reference isn't given, we'll most often find the following:

[page needed]

[citation needed]

[dubious – discuss]

...or we might simply see bracketed vernacular like, [when?], [who?], or [how?]. These alert us to the possibility that what we've just read may be unsubstantiated.

We can get the most out of Wikipedia by following references to their respective sources, and then making sure the sources are themselves substantiated. Often the references are peer-reviewed academic papers. Sometimes, though--and we should be careful to spot this--they'll link to spurious blogs, books, or magazine articles crafted by ideologues.

The point is, we ought only trust what has academically peer-reviewed reference(s). I'll take it a step further and say that we ought to know when peer-reviewed publications are themselves worth a darn. This can be more difficult to discern, but generally, the more positive citations a paper has, the more likely it contains highly-supported research we can invest our trust in. Of course, the best tool of all is for us to have a sound understanding of some of the basic principles of science to begin with!

To avoid going too far down this tangent, let's go back and take a look at the above example (the screenshot of the Silurian page on Wikipedia) and follow the references it gives for atmospheric CO2 content and watch how quickly a critical mind can descend down the metaphorical rabbit hole...

Here are the references (3 & 4) of the mean atmospheric CO2 content during the Silurian.

c. 4500 ppm[3][4]

If we click on [4], it takes us to a blank page, so that reference is worthless. If we click [3], it takes us to an image of a graph depicting Phanerozoic Carbon Dioxide:

This isn't a peer-reviewed academic paper, but the image has all the information we need to continue our reference checking. It depicts three different models: GEOCARB III, COPSE, and Rothman's model. Let's ignore the glaringly obvious discrepancy between models as they go back in deep time, and look up each model to find their source data.

So next we search online for "GEOCARB III", "COPSE" and "Rothman CO2 Model" in order to find the academic papers explaining each. For your convenience, the links to these papers are here:

GEOCARB III - COPSE - Rothman

With these papers in hand, we can begin the arduous-yet-rewarding task of reading them! Yeah! I'll be the first to admit that reading academic papers is often challenging, as it requires a strong background in specific areas of specific fields of science we may not have. But this shouldn't stop us. With a bit of research we can come up to speed with enough background information upon which to appreciate the value of peer-reviewed research. The more we endeavor to learn, the more capable we become as critical thinkers to apply that knowledge to things that may only be controversial to those not in the know.

A closer look at those papers...
The primary author of the GEOCARB III paper is the late Robert A. Berner, a well-renowned geophysicist at Yale. His paper, "GEOCARB III: A Revised Model of Atmospheric CO2 Over Phanerozoic Time", was published in 2001. For reference, the Phanerozoic eon is the time between now and 541 million years ago. The paper details revisions made to earlier GEOCARB II data (1994), itself a revision of an even earlier 1991 unnamed model drafted by Berner.

The paper also gives the caveat that the GEOCARB III model (hereafter, the GCM) is a long-term model, in that its data intervals are 10 million years apart, with some specific rock-data intervals as much as 30 million years apart. As such, the model completely misses short-term phenomena on scales of only a few million years. And if multimillion-year phenomena are missed, then human-scale climate shifts are entirely invisible to it; shifts we know occur, as evidenced by the migrations of culture groups such as the 2nd-century-CE Frisii, one of numerous examples of tectonically-insignificant climate shifts that all of the atmospheric CO2 models would invariably miss.

Another example is the so-called "Medieval Warm Period" and subsequent "Little Ice Age", both regionally-biased misnomers; "Medieval Climate Anomaly" might be more representative. Since this is a bit of a tangent, I'll just quickly say that neither was a synchronous global event; instead there were non-synchronous regional anomalies in temperature, both cooling and warming. At any rate, anomalous events like these would be missed by geologic-timescale CO2 models, as the toy example below illustrates.
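Here's a toy Python illustration of that sampling problem. The numbers are entirely synthetic; the point is only that a 10-million-year grid cannot see a 2-million-year event:

import numpy as np

t = np.arange(0, 100.0, 0.1)          # time in Myr, fine resolution
signal = np.full_like(t, 15.0)        # flat 15 C baseline
spike = (t > 44) & (t < 46)           # a brief 2-Myr excursion
signal[spike] += 6.0                  # +6 C, dramatic on a human scale

coarse_t = np.arange(0, 100.0, 10.0)  # GEOCARB-style 10-Myr sampling
coarse = np.interp(coarse_t, t, signal)

print("fine-grid max:  ", signal.max())   # 21.0 -- the excursion is there
print("coarse-grid max:", coarse.max())   # 15.0 -- and now it's gone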

Berner was second author on another paper, "CO2 as a primary driver of Phanerozoic climate" (2004), but I'll get to this later.

Some background information
Students in first-year earth science courses are taught the carbon cycle, wherein they learn how and why carbon cycles through the environment; atmosphere, oceans, flora and fauna (biosphere), and soils as illustrated below:

But this is a short-term, human-timescale cycle. Over geologic time, however, rocks play an important role in the carbon cycle. Continental collision results in mountain building; this event is known as an orogeny, and it exposes silicate rock to the atmosphere. Over time this rock is weathered and reacts with atmospheric CO2, forming calcium or magnesium carbonate, as described by a simple chemical equation such as the example below:

e.g.) CaSiO3 + CO2 → CaCO3 + SiO2  (albeit CaSiO3 is relatively rare)

Over time that weathered material is transported to the ocean by river runoff where it can be taken up by organisms in their carbonate shells. When the organisms die, they sink along with their shells to the ocean floor where they become buried. Over time the remains of their shells are lithified into limestone or chalk where the carbon is locked away in what is an effective tectonic-scale carbon sink. Trillions of tons of atmospheric CO2 can be locked away over geologic time in this manner.
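If you want a feel for the quantities involved, here's a rough Python sketch of the stoichiometry of the weathering reaction given above: one mole of CO2 locked away per mole of CaSiO3 weathered (a simplification of the full weathering chain, and the gigatonne figure is a hypothetical input, not an estimate):

# CaSiO3 + CO2 -> CaCO3 + SiO2: one mole of CO2 per mole of silicate.
M_CASIO3 = 116.16   # g/mol, wollastonite
M_CO2 = 44.01       # g/mol

def co2_locked(tonnes_silicate: float) -> float:
    """Tonnes of CO2 sequestered per tonnes of CaSiO3 weathered."""
    return tonnes_silicate * M_CO2 / M_CASIO3

gt = 1e9  # one hypothetical gigatonne of weathered silicate, in tonnes
print(f"1 Gt of CaSiO3 locks away ~{co2_locked(gt) / 1e9:.2f} Gt of CO2")
# ~0.38 Gt CO2 per Gt of rock; multiply by orogeny-scale rock volumes and
# tens of millions of years, and "trillions of tons" stops sounding odd.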

You can read more about the difference between the human-scale carbon cycle and geologic-time-scale carbon cycle in Berner's 1999 paper titled, "A New Look at the Long-Term Carbon Cycle".

It further details the chemical reactions of two sub-cycles: the silicate-carbonate sub-cycle, and a sub-cycle involving organic matter. Berner explains the long-term gain and loss of atmospheric CO2 due to weathering and chemical reactions with calcium and magnesium, as well as degassing from diagenesis, metamorphism, and volcanism, complete with chemical equations showing the reactions involving carbon, for those wanting more detail on this topic.

I mentioned earlier how different the surface of our planet was in the deep past, as well as how different the Sun was. The GCM uses those parameters in its formulations. Among the things it considers:

  • the evolution of the Sun over geologic time (solar forcing);

  • changes in paleogeography, as described above, with continental size and positioning;

  • ice coverage as it relates to albedo;

  • the quantified presence and absence of vascular plants across the planet, including the deeply-rooted kind (aka trees), how tree concentrations over large areas affect heat flux, and estimations of pre-vascular-plant effects on weathering rates;

  • the quantitative importance of gymnosperms versus angiosperms (which the GCM admits it does not have a good handle on);

  • the proportion of plants globally that respond to CO2 (something the GCM also isn't sure of);

  • river runoff rates, inferred from inferred paleogeography, as part of the hydrologic cycle;

  • estimations of the greenhouse effect as it relates to CO2;

  • general circulation patterns, based on modern systems applied to past geographies;

  • the measured strontium isotopic composition of the oceans as a measure of overall silicate weathering rates;

  • changing algal physiology as it relates to permeability and surface-to-volume ratios;

...and more.

If this sounds complicated, it's because it is, and Lindzen's bullet point makes a mockery of this complex science with its presumed simplicity and confident denouement. Even just the parameterization of the absorption of infrared radiation by CO2 is complicated in and of itself, as I explained in one of my previous blogs: http://rookscience.blogspot.com/2016/11/the-scare-quote-effect-excited-ghgs.html. That atmospheric CO2 absorbs outgoing infrared radiation has been known for a very long time (Callendar, 1941).

Deriving each of the parameters noted above is an enormous multi-discipline endeavor that builds on the previous, enormously complex work of others, and through it all scientists admit they don't have all the answers necessary to accurately past-predict CO2 concentrations.

Even if they did, there would still be the risk of small errors in calculations amplifying over time. Anyone who has read my other blog on long-term forecasting and chaos theory can see just how precarious and complicated advanced long-term modelling is. And that blog looks at modelling largely fed with directly-measured parameters, whereas the GCM and the other CO2 models deal with indirectly-inferred parameters, which of course greatly complicates things.

This complication only scratches the surface of what geophysicists must deal with. For example, in the Rothman paper (2002), which cites Berner 1994, it's clearly stated that estimating past CO2 levels from various geochemical signals is extremely difficult in that those signals have intrinsic relationships to other tectonic and biologic factors. All things in nature both organic and non-organic are interconnected in a very complex web; what happens to one thing will, directly or not, affect another thing.

With regard to complication, consider that in his 2004 paper Berner states that the strontium isotopic composition of the oceans provides a way to directly measure overall rates of silicate weathering. Yet the authors of the COPSE model (2014) state that it's an indirect measure used to estimate uplift or continental basalt area (silicate weathering). Perhaps this hints at one of the reasons Rothman states in his paper (2002) that analysis of the strontium signal in seawater is "especially difficult", and that some ambiguities in the isotopic signals of carbon and strontium are "subject to much debate", even though other aspects of the signals are clear.

It all sounds like scientists are spinning their wheels, but the fact is they are making great progress. Debates in science are not only commonplace, but they're fundamentally a part of the nature of science for the very reason that they weed out anything that is frivolous or wrong. It's also common for scientists to make valid, progressive points when presenting research and data, while at the same time admitting to lacking evidence, or veritable plot holes.

As another example, Rothman concludes that both degassing and silicate weathering were primary drivers of CO2 fluctuations through the Phanerozoic, but admits to being unsure which dominated, or--as he writes--if weathering was driven by plant diversification, continental collisions (uplift), or by some complex combination of tectonic, biological, and geochemical processes. The answer is likely the latter, but any weaknesses are readily exposed in order that they might be strengthened or discarded by future research. True scientists always point each other in the direction of progress.

At any rate, earlier I wrote that there appears to be no correlation between past atmospheric CO2 concentrations and global surface temperature, and I emphasized that this was on a geologic timescale. That latter point is pertinent because, as we'll remember, the data inputs of these CO2 models come in 10-million-year intervals (sometimes greater), and as such, relatively short-term fluctuations on the scale of a few million years are completely missed.

Also important to consider is that all those mean atmospheric CO2 levels listed on Wikipedia have been inferred from fluctuations in the strontium isotopic records, which are well known in the geophysics community as being difficult to parse. I'm not trying to undermine these models, but to point out their complexities so that we can better appreciate them without taking them as axioms.

Continuing on, the COPSE paper by Mills et al., titled, "Changing Tectonic Controls on the Long-term Carbon Cycle from Mesozoic to Present" agrees with Rothman that degassing and silicate weathering were primary drivers of atmospheric CO2 in the deep past. The difference is that the COPSE research goes further in attempting to resolve ongoing academic debates relating to different carbon sources and sinks over geologic time.

Consider that different rock types have different compositions, which affect their weathering rates and chemical reactions with regard to carbon. For simplicity, imagine the ocean floors being composed of basalt and the continents being composed of granite. The COPSE paper discerns differences between oceanic basalt weathering and terrestrial granite weathering as carbon sinks. It also considers seafloor spreading rates as they affect global basaltic area. As for sources, the paper looks at subduction-zone metamorphism versus igneous intrusions with regard to their degassing rates.

The rates of individual drivers of both sinks and sources varied through tectonic time, and as such the COPSE model recognizes three different climate modes: when oceanic basaltic weathering exceeded terrestrial granitic weathering (pre-Cretaceous), when terrestrial granitic weathering increased relative to oceanic basaltic weathering (Cretaceous), and when terrestrial granitic weathering began to exceed oceanic basaltic weathering outright (Cenozoic).

One oddity I have to mention before moving on is that the COPSE paper specifically addresses inferred atmospheric CO2 concentrations from the "Mesozoic to Present", which covers 252 million years ago to the present. It seems anomalous, then, that the graph on Wikipedia shows the COPSE data extending back 530 million years; a full 278 million years beyond the scope of the paper, and a span during which the largest CO2 spikes occur. I'm sure I'm missing something obvious about this apparent discrepancy, but I'm obliged to point it out nonetheless.

Though Wikipedia lists a handful of models of mean atmospheric CO2 concentrations over geologic time, they do not represent them all. There are numerous models, and not all of them agree with each other, for various important reasons. It's very difficult to past-predict factors of complex systems over multimillion-year timescales because of all the factors that must be considered. And as is known from Lorenz's work on chaos theory, small errors in model computations amplify over time, often exponentially. Given this, I'm not surprised the past-predictions of atmospheric CO2 concentrations are so varied, and that academic debate over specifics continues. That's right my friends, debates. There are always debates going on in all fields of science; it would be wrong to claim there aren't. But it would be equally wrong to say there isn't a consensus (Oreskes, 2004).
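For a hands-on feel of that exponential error growth, here's a minimal Python demonstration using the logistic map in its chaotic regime; a stand-in for any sufficiently nonlinear model, not for GEOCARB specifically:

# Two runs of the logistic map (r = 4, chaotic) differing by one part
# in a billion; the "measurement error" roughly doubles every step.
def logistic(x0: float, steps: int, r: float = 4.0) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-9
for n in (10, 30, 50):
    print(f"step {n:>2}: {logistic(a, n):.6f} vs {logistic(b, n):.6f}")
# Well before step 50 the two trajectories share nothing in common;
# tiny input errors in a nonlinear system do not stay tiny.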

Thinking critically about past-predicted high atmospheric CO2
It's well known that the world ocean acts as a major carbon sink, absorbing atmospheric CO2 as the direct air-to-sea exchange seeks equilibrium. Throughout chemistry, physics, and biology, we find that energy and energy systems naturally seek balance.

There are factors affecting the rate of CO2 uptake by the world ocean, ranging from the rate at which the oceans circulate to the availability of carbonate in surface water. The temperature of the water has an effect as well: colder water has the ability to 'hold' more CO2 than warmer water. In fact, this applies to all gases dissolved in water.

To illustrate how complex the story is behind general facts, such as the world ocean being a major carbon sink, let's consider the illustration below. It depicts ocean water circulating both vertically and horizontally over time. When cold deep water upwells to the surface, it warms. Given that warmer water has a lower capacity to hold dissolved CO2, we can understand that as CO2-laden cold deep water warms near the surface, it loses a portion of its holding capacity and actually releases some of that dissolved CO2 to the atmosphere. In this way the ocean can act as a source rather than a sink. So it's more accurate to say the world ocean is a net carbon sink... for now.

Image credit: NASA
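Here's a hedged Python sketch of that temperature effect, using Henry's law with the commonly tabulated van 't Hoff correction for CO2 in fresh water (real seawater adds salinity and carbonate-chemistry effects I'm ignoring):

import math

KH_298 = 0.034    # mol/(L*atm) for CO2 at 25 C, commonly tabulated value
VANT_HOFF = 2400  # K, temperature-dependence constant for CO2

def henry_constant(temp_c: float) -> float:
    """Henry's law constant for CO2 at a given water temperature."""
    t = temp_c + 273.15
    return KH_298 * math.exp(VANT_HOFF * (1.0 / t - 1.0 / 298.15))

p_co2 = 400e-6  # atm; roughly today's atmospheric partial pressure
for temp in (2, 10, 25):
    umol = henry_constant(temp) * p_co2 * 1e6
    print(f"{temp:>2} C water: ~{umol:.0f} umol dissolved CO2 per liter")
# Near-freezing water holds roughly twice what 25 C water does. Upwell
# cold, CO2-laden deep water and let it warm, and it degasses.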

One directly measured effect that increased atmospheric CO2 concentration has on ocean water is that, through chemical reactions, it slowly decreases the water's pH, along with the availability of carbonate ions and calcium carbonate minerals, which are important for shell formation. On average, current oceanic pH levels hover around 8 (Hall-Spencer et al., 2008).

Increased CO2 levels, over time, have the effect of decreasing ("acidifying") the pH of ocean water. Ocean acidity affects the rate at which the calcium carbonate shells of shell-bearing sea creatures can form; increased acidity is directly proportional to shell dissolution rates. This makes it difficult to resolve how models that show extremely high past CO2 concentrations would account for the 'flourishing' existence of shell-bearing sea creatures.

Some models, such as the GCM, show mean atmospheric CO2 concentrations at 4,500 ppm during some periods; a level at which, as anyone with a basic understanding of the carbonate pH curve could attest, ocean acidity would seemingly have made the existence of shell-bearing life impossible. Even today, at a mere 400 ppm, there are numerous papers directly observing the worldwide thinning of calcium carbonate shells (the calcified middle prismatic and inner nacre layers, to be precise).

Consider again the GCM with regard to the Cambrian; as mentioned above, it past-predicts a mean atmospheric CO2 level of 4,500 ppm for the period. The Cambrian was a time in geologic history during which there was a biological 'explosion' of new phyla within the animal kingdom (sorry folks, mammals excluded). It was a time when calcium-carbonate-shell-bearing sea animals were abundant; indeed, they "flourished", as Lindzen truthfully put it.

However, that abundance of shell-bearing animals in the world ocean stands in glaringly stark contrast to the mean atmospheric CO2 concentrations inferred by models such as the GCM, since such concentrations would push the carbonate pH balance down the scale, such that dissolved carbon would favor H2CO3 (carbonic acid) over the bio-friendly HCO3- (bicarbonate) we all know and love.

H2O (water) + CO2 (carbon dioxide) → H2CO3 (carbonic acid)

I can't help but be slightly sarcastic at this point and say I'd like to see a paper that adequately explains how a calcium carbonate shell can form in carbonic acid. But that sarcasm is probably misplaced, because these folks are the experts... not me.

Let's take a look at the carbonate pH curve:

Above is the carbonate pH curve. It illustrates the pH levels at which dissolved carbonates take particular chemical forms. Seawater on average has a pH of about 8, which puts carbonates in the bicarbonate zone (within the pink parabola), where shell production and subsequent "flourishment" are promoted.

However, as can be seen, the blue carbonic acid (H2CO3) curve begins to climb the concentration axis just slightly below pH 8. Even a slight nudge toward lower pH will make the formation and sustainment (is that a word?) of shells difficult, as globally evidenced by thinning shell averages reported in numerous peer-reviewed papers. The ocean's "acidity" need not be technically acidic at all for shelled animals to, um, stop flourishing.
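For those who'd rather compute the curve than squint at it, here's a small Python sketch of the carbonate speciation math, assuming approximate seawater dissociation constants (pK1 of about 5.86 and pK2 of about 8.92; freshwater values differ):

PK1, PK2 = 5.86, 8.92
K1, K2 = 10**-PK1, 10**-PK2

def speciation(ph: float):
    """Fractions of dissolved carbon as CO2/H2CO3, HCO3-, and CO3 2-."""
    h = 10**-ph
    d = h * h + K1 * h + K1 * K2
    return h * h / d, K1 * h / d, K1 * K2 / d

for ph in (8.2, 8.1, 7.8, 7.5):
    co2, hco3, co3 = speciation(ph)
    print(f"pH {ph}: CO2 {co2:5.1%}  HCO3- {hco3:5.1%}  CO3-- {co3:5.1%}")
# The carbonate-ion fraction (the shell-building currency) falls fast as
# pH drops, long before the water is technically "acidic".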

This paradox applies to other periods some models infer to have had ultra-high atmospheric CO2 concentrations. After all, biomineralization evolved a very long time ago, and with regard to calcium carbonate, it seems odd this would be possible in conditions highly conducive to carbonic acid.

The green curve leads to the carbonate ion, which dominates only in ultra-basic water, and since the ocean isn't headed there, we'll ignore it here.

It gets more complicated...
Though shelled sea life flourishing in waters whose inferred CO2 would seem to demand shell-hostile pH levels doesn't exactly support the GCM, I must also present an interesting correlation that does seem to support it.

It's evident that there is no apparent correlation between atmospheric CO2 and global temperatures over geologic time, but there does seem to be a correlation between atmospheric CO2 and solar forcing (radiative forcing). How then could shells exist in conditions conducive to carbonic acid production? I don't rightly know. But rather than regurgitate the uneducated skepticism of some bloggers, I prefer to ask questions of my own. And as I said at the start of this blog, it's axiomatically human to not know some things.

Graph showing correlation between solar forcing and CO2 (as estimated by the GCM) over geologic time.

What I do know, however, is that this correlation debunks Lindzen's bullet point alluding to the notion that CO2 was extremely high in the past while temperatures stayed moderate for the life of the time, because I doubt any human on Earth would consider a mean global surface temperature of 16 to 21°C "moderate". The high levels of CO2 in the distant past were counteracted by a weaker Sun, as explained by the physics of G-type main-sequence stellar evolution; and that's a good thing. For if the Sun put out as much energy then as it does now, Earth would have heated up like the surface of Venus. Conversely, if CO2 levels then were what they are now, the planet would have cooled down like the surface of Europa.

The additional CO2 not only prevented another snowball (or slushball) Earth, but it also answers the question posed by the late Carl Sagan's "Faint Young Star Paradox", which asked how it was possible for Earth to support liquid water at a time when solar output was perhaps as little as 70% of what it is today.

For whatever reason, Lindzen completely ignores these fascinating, and pertinent facts in his otherwise *ahem* profound bullet point.

Of course, within that long trend of increasing solar output and decreasing atmospheric CO2, there are smaller-scale anomalies; smaller, yet still far longer than the span of human evolution. For instance, the Ordovician suffered a major ice age at a time when atmospheric CO2 levels are believed to have been in the thousands of ppm (Berner, 2001).

For many decades scientists were unable to answer how there could be an ice age with so much CO2 in the atmosphere. "So much" still being a very small fraction of total gases; but for those who might be a little rusty on the chemistry of dosage, something very small can pack a big punch. Just ask any rattlesnake.

Prior to the ice age, Earth was a balmy 16°C, which made sense: the basic physics of molecular absorption of infrared radiation says a doubling of CO2 should lead to an average surface temperature increase of roughly 2 to 4.5°C (a quick sketch of that arithmetic follows below). So what gives? The answers came in the mid-1990s, when it was determined other factors were to blame:

  • Position of the continents

  • Weaker Sun

Over geologic time the continents had shifted such that they formed a supercontinent near the south pole, where incoming solar radiation is drastically reduced by its low angle of incidence. Additionally, the Sun was about 4.5% weaker then than it is today. Some papers suggest further amplifying factors: increased ice led to increased albedo, which led to more cooling, which led to more ice, which led to even more albedo, and so on. Such positive feedbacks, coupled with an extreme-high-latitude uni-continent and a weaker Sun, plunged the planet into an ice age. Later evidence even showed CO2 levels had dropped, though not by enough to explain the ice age as well as the other factors do.
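And here's the quick sketch of that doubling arithmetic I promised above, pairing the standard simplified forcing relation of Myhre et al. (1998) with the canonical 2 to 4.5°C per doubling. Equilibrium numbers only; no oceans, no lags, no feedback dynamics:

import math

def forcing(c_ppm: float, c0_ppm: float) -> float:
    """Simplified CO2 radiative forcing (Myhre et al., 1998), W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def warming(c_ppm: float, c0_ppm: float, sens_per_doubling: float) -> float:
    """Equilibrium warming given a per-doubling sensitivity in C."""
    return sens_per_doubling * math.log2(c_ppm / c0_ppm)

c0 = 280.0  # pre-industrial ppm
for c in (400, 560, 1120):
    lo, hi = warming(c, c0, 2.0), warming(c, c0, 4.5)
    print(f"{c:>4} ppm: forcing ~{forcing(c, c0):4.1f} W/m^2, "
          f"equilibrium warming ~{lo:.1f}-{hi:.1f} C")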

Getting back on track... CO2 modelling...
Regardless of which model one might adhere to, it seems evident that atmospheric CO2 has fluctuated considerably over geologic time. It has done so due to solar, biologic, tectonic, and Milanković-related forcing factors, each having an effect to varying degrees, with positive feedback loops following. For example, a trigger initiates cooling, which in turn causes more ice to form at higher latitudes; the high albedo of that ice reflects shortwave solar radiation, leading to more cooling, thereby more ice, thereby more radiation reflected away, and so on.

Positive feedback loops can work both ways. A trigger sets off warming, which melts tundra ice, which allows the release of trapped methane, which leads to increased warming, which leads to the melting of more ice, which leads to more methane degassing, which leads to more warming, and the trend continues until some greater geologic-timescale forcing factor (e.g., the Milanković cycles) stops the madness.
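A positive feedback that amplifies without necessarily running away may sound contradictory, so here's a toy Python model of the arithmetic: each round feeds back a fraction f of the previous round's warming, and the total converges to 1/(1-f) times the trigger so long as f stays below 1. Purely illustrative numbers:

def amplified_warming(trigger_c: float, f: float, rounds: int = 100) -> float:
    """Total warming after a trigger, with feedback fraction f per round."""
    total, increment = 0.0, trigger_c
    for _ in range(rounds):
        total += increment
        increment *= f   # each round feeds back a fraction of the last
    return total

for f in (0.2, 0.5, 0.8):
    print(f"feedback fraction {f}: 1 C trigger -> ~{amplified_warming(1.0, f):.1f} C total")
# f = 0.5 turns a 1 C trigger into ~2 C; f = 0.8 into ~5 C. The trigger
# and the amplifier are different things, which is the point here.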

At any rate, I've clearly shown how CO2 over geologic time appears to have no correlation to mean global surface temperatures, but that it has a strong correlation with solar forcing. I've also pointed out how all the models use data intervals with spans greater than that of human evolution. And I've pointed out that a trigger can initiate a warming or cooling trend, which leads to a positive feedback loop in which other forcing agents (CO2, CH4, ice gain or loss, etc.) end up acting as amplifiers to that trigger. But I've yet to show any evidence that CO2 itself was a trigger. So far, at best, it seems only to be an amplifier after the fact.

So then we must ask the just and pertinent question; "how and why do most climate scientists and scientists in related fields conclude that atmospheric CO2 from anthropogenic sources is affecting the global climate scheme?"

Again, if I ended my blog here we would walk away without an answer. All I've done is expose Lindzen's bullet point as being profoundly naive. I believe we're long past his simplicity at this point and can focus on answering the question of what evidence there is supporting CO2 as a primary driver over other triggers.

Determining the primary driver via the process of elimination...
As we've covered rather extensively, there have been different triggers setting off global-temperature positive feedback loops for as far back as inference allows, with little to no evidence that carbon dioxide has ever been a trigger on a geologic timescale. Whether atmospheric CO2 has been a trigger on shorter timescales falling between the aforementioned data intervals (few-million-year-long events) is unknown. So let's start by looking at known past triggers, and determine whether the directly-observed evidence we collect today supports any of them as current primary drivers.

Before we do, we must consider the fact that global CO2 concentrations have increased in recent times, so as we look for primary drivers, we must also consider what sources, other than anthropogenic ones, could be causing the rise. It has to be something, because atmospheric CO2 levels had been relatively stable over the past 2,000 years, and since we all live on human timescales, the rise in CO2 should be of some concern; if for no other reason than it's just common sense to want to know why.

Global CO2 concentrations in ppmv over the past 300 years. Source: USGS

Solar Forcing (And oceanic degassing by proxy)
The above graph depicts sunspot activity over the past 400 years, and as we'll see in subsequent graphs, solar activity has coincided well with global mean temperature trends. This of course suggests solar forcing has been a primary driver of temperature across our planet. In fact, the so-called "Little Ice Age" occurred during the Maunder Minimum (see above), again strongly evidencing solar forcing as a primary control. There exists a lot of evidence showing the Sun has long been the primary driver of Earth's climate over geologic time.

However, from about the 1970s onward, the rise in mean global temperature has diverged, with statistical significance, from solar activity. Below is a graph depicting three different temperature profiles from three different datasets (red, light blue, and green), graphed alongside atmospheric CO2 (blue) and solar irradiance (yellow). The long-standing correlation between solar forcing and temperature has been statistically lost, even on graphs derived from data that include cosmic rays.

As seen in the graph below, solar forcing in recent years has decreased, which doesn't play well for proponents of the idea that natural oceanic degassing of CO2 is the cause of the increased CO2 in the atmosphere. That idea is based on the concept we covered above, wherein colder water has a higher 'holding capacity' for dissolved gases: the claim, set forth in recent years by folks none of whom have published on it, is that increased solar forcing has warmed oceanic surface waters, reducing their collective ability to 'hold' CO2, and that a natural degassing has ensued. Again, this is quickly and easily debunked by the fact that solar forcing has decreased in recent decades (a decrease within a geologic-timescale increase; we must not lose our grasp of geologic time in all this). Furthermore, the oceans have been absorbing CO2, as evidenced by direct global pH measurements and as inferred from the widespread thinning of the calcium carbonate shells of sea creatures.

So is solar forcing the current trigger? It appears not. And by proxy neither is oceanic degassing.

Cosmic Rays
With regard to cosmic rays, there is a hypothesis suggesting cosmic rays could nucleate small particles in the upper atmosphere. Lab experiments at CERN have so far been inconclusive, but if a significant fraction of those particles grew large enough, they could hypothetically seed clouds, and increased cloud cover could affect climate. In which direction, warming or cooling? That is unknown, as clouds both reflect sunlight away and absorb outgoing infrared radiation; the former would have a cooling effect, the latter a warming effect over time.

However, there is no evidence to suggest that Earth is experiencing either more or less cloud cover. Nor has the hypothesis (tested at CERN) been shown to actually work to the point of seeding clouds, nor is there evidence for Earth having an "adaptive infrared iris" that opens and closes to longwave infrared terrestrial radiation as a self-regulating system (Lindzen et al., 2001).

So is it cosmic rays? There is no evidence for it... or IR irises.

Volcanic Degassing (And oceanic degassing... again)
If volcanic (or oceanic) degassing were the cause of the CO2 rise in Earth's atmosphere, then we should be able to measure it isotopically. The isotope carbon-13 makes up about 1% of the carbon degassed from these sources, so if they were driving the rise, the proportion of carbon-13 in atmospheric CO2 should hold steady or even increase. But it doesn't. In fact, we see the opposite:

The proportion of the isotope carbon-13 in atmospheric CO2 over time, showing a dramatic decrease in recent decades.
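To see why that declining proportion points away from volcanoes, here's a hedged two-member mixing sketch in Python. The delta-carbon-13 signatures are approximate textbook values, and the real budget includes ocean and biosphere exchange, so treat this as directional rather than quantitative:

# Approximate d13C signatures (per mil, vs the VPDB standard):
# pre-industrial atmosphere ~ -6.5; volcanic CO2 ~ -5 (close to air);
# fossil / plant-derived carbon ~ -28 (photosynthesis prefers carbon-12).
ATM_GTC = 600.0       # rough pre-industrial atmospheric carbon, GtC
ATM_D13C = -6.5
FOSSIL_D13C = -28.0
VOLCANIC_D13C = -5.0

def mix(added_gtc: float, added_d13c: float) -> float:
    """Atmospheric d13C after adding carbon of a given signature."""
    return (ATM_GTC * ATM_D13C + added_gtc * added_d13c) / (ATM_GTC + added_gtc)

print(f"add 200 GtC fossil carbon:   d13C -> {mix(200, FOSSIL_D13C):.1f} (drops hard)")
print(f"add 200 GtC volcanic carbon: d13C -> {mix(200, VOLCANIC_D13C):.1f} (barely moves)")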

I'll explain this in more detail later when we discuss isotopic measurements of carbon in the atmosphere, with particular attention paid to carbon-14. We'll also compare the above graph to a graph of atmospheric CO2 concentrations over time in what is a revealing correlation.

But volcanic eruptions are short-term events, and as such cannot explain the longer-term increase in global atmospheric CO2 concentrations. Of course, this isn't to say there aren't natural sources of degassing; there are: rotting vegetation, the oceans (as explained above with regard to upwelling), soil degradation, volcanic eruptions, etc. However, these natural sources have been in balance for thousands of years within the genius that is the natural carbon cycle, and no direct or indirect evidence shows any of them having increased their output enough to explain the rise in available carbon for the production of CO2 gas.

And though volcanic degassing is a natural, ongoing process that doesn't require an eruption, that degassing is balanced, within the framework of the natural carbon cycle, by the uptake of carbon via the chemical weathering of rock, as explained earlier.

So are volcanoes and/or the oceans at fault? There is no evidence to support it; quite the contrary actually.

Soil Degradation
Soils act as both sources and sinks for greenhouse gases such as CO2, and the quantities they deal in are quite large; they may well dwarf fossil fuel emissions. Soils both take up carbon and lose it, in an ecological balance that's part of the natural carbon cycle.

Soils have operated in these large quantities for thousands of years within this balance. I should point out, though, that some folks make the mistake of comparing simple quantities rather than considering the concept of flux. Flux, for whatever reason, seems to elude the basic math skills of many bloggers and other unpublished (as in peer-reviewed) heretics.

Let me explain via an analogy, in which I'll employ my horrible Microsoft Paint skills to illustrate.

Imagine a bucket with a hole at the bottom that releases the same amount of water per unit time as a faucet adds. The water level in the bucket (or more accurately, the volume of water in the bucket) remains stable because there is just as much water coming in as going out. Now think of this in terms of carbon in the environment with regard to soil: there is an equilibrium between the carbon being taken up by soils and the carbon being degassed (as CO2) by soils, regardless of the volume, or, in the case of the analogy, the flow rate. It could be 10 gallons or 100 gallons or 1,000 gallons (if the bucket were big enough), and the level (volume) would remain steady, so long as the hole allowed an equal amount to escape.

My surreal artwork. Source: Rook "Dali" Andalus

Now add an additional input. A very small "insignificant" one; one that is dwarfed by the flow rate of the faucet:

The same bucket, now with a small additional input.

What may seem insignificant by raw quantitative comparison is actually significant with regard to flux equilibrium. In the above seemingly benign situation, equilibrium has been lost; in time the bucket will reach a tipping point and overflow. The same can be said for soil degassing as a source and soil carbon uptake as a sink. They've been in balance within the framework of a balanced carbon cycle, but any additional input within that framework can cause that balance to be lost.
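If Paint isn't your medium, here's the same bucket as a toy simulation in Python. Every number is made up purely for illustration; the takeaway is that any persistent imbalance, however small next to the main flows, eventually overflows the bucket:

# Bucket analogy as a toy simulation; all numbers are illustrative.
capacity = 100.0   # bucket capacity
level = 50.0       # starting water level
faucet = 10.0      # inflow per time step
drain = 10.0       # outflow per time step, balanced with the faucet
drip = 0.2         # tiny extra input, "insignificant" next to the faucet

steps = 0
while level < capacity:
    level += faucet + drip - drain   # net change per step is just the drip
    steps += 1

print(f"Bucket overflows after {steps} steps")

Note that the faucet and drain cancel entirely; make the faucet 10 gallons or 1,000 gallons and the overflow arrives on exactly the same schedule, which is the whole point about flux versus raw quantity.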

Isotopic evidence does not support soil as the prime contributor of increased atmospheric CO2, but soil most definitely should be considered an amplifier. There are several natural drivers of greenhouse gas emissions from soil: temperature, humidity, and the type of vegetation growing and dying in it. But there are also human-affected drivers. Land use, which anyone with access to Google's satellite imagery (which is most of you) can see is near-global and significant. Land cover, which has changed dramatically as a direct (and indirect) result of human activity; in fact, there are entire industries dedicated to producing artificial land cover, cement among others. Nutrients: anyone who has driven I-5 through central California (and has read my blogs on water in California) will know natural nutrient levels across large portions of developed and developing countries (as geographers like to call them) are both enhanced and depleted. Very little is pristine these days, sadly.

So though soil degradation is not the source of the additional CO2 (for now), it is very likely going to be an amplifier, and papers are already coming out in respected peer-reviewed journals attesting to this. It is therefore an important source to watch nonetheless. And not to burst any skeptical bubbles out there, but soils can be, and are being, anthropogenically affected, and on noteworthy scales (C. Oertel et al., 2016).

Rotting Vegetation
Global average biomass degassing has remained relatively steady and shows no dramatic increase that could explain the added CO2 gas. Because vegetation carries carbon-14 at near-atmospheric levels, any surge in biomass degassing would not dilute the atmosphere's carbon-14 proportion, yet a dilution is exactly what we observe. The CO2 that rotting vegetation degasses, living vegetation takes up, in a carbon balance that, once again, is part of the natural carbon cycle.

Therefore, this is not the source of additional CO2.

Milanković Cycles
This one is pretty straightforward as well: the climatic effects driven by Milanković cycles play out gradually over tens of thousands of years, not decades, as is directly evidenced by several independent records. The graph below shows the rapid temperature increase in recent decades using four independent records, from the Met Office, NASA, NOAA, and JMA:

Source: NASA

Based on known and predictable patterns of the Milanković cycles, we can expect a gradual increase in summer insolation at 65 degrees north latitude over the next 25,000 years; a timeframe I doubt any of us are worried about, and one that certainly has nothing to do with the past few decades.

So is it any of the Milanković cycles? Unless the current ~100,000-year eccentricity-paced cycle has transitioned again, this time to a 50-year cycle, it's not likely. A shift of that magnitude in that short a time would require the kind of energy Earth endured during the Theia impact event, something we might notice.

Putting things together
I think we've demonstrated rather clearly that there are many factors influencing climate, and no climate scientist, geologist, or other practitioner of a related science has ever said otherwise. Nor is there evidence of bias in the heaps of research accumulated over the past 100 years (Harlos et al., 2016). With that said, let's put some things together.

Earlier I wrote that there is no apparent correlation between atmospheric CO2 and mean global surface temperatures over geologic time. But that was a comparison of CO2 alone against temperature alone, and we've spent a whole blog discussing how all energy systems are webs, with triggers and amplifiers; what happens to one thing affects another, directly or indirectly.

If we add solar forcing to the above comparison, we start to see that there is a correlation between CO2 and temperature over geologic time. Well shit.

As solar output has been increasing over geologic time, atmospheric carbon dioxide has been decreasing. The two counteracting geologic-timescale drivers couple to explain quite well the ongoing moderate temperatures that both Dr. Lindzen and I agree have been most beneficial for flourishing life; with the noted exception of several mass extinction events, but Lindzen may have forgotten that bit. No worries: as the Carnian Pluvial Event has taught us, what's bad for one species might be great for another. Lindzen may also have forgotten about that stretch between 1.8 billion and 800 million years ago dubbed the "Boring Billion".

At any rate, small fluctuations within the grander scheme of things (the ones not missed by 10-million-year proxies) can be, and have been, explained by other amplifiers such as vegetation, volcanic activity, ocean circulation, soil degradation, solar activity, etc. The linear thought process of looking at only one variable when trying to explain changes in a complex energy system never bodes well in academia. And contrary to what seems to be popular public belief (on both sides of the, um, 'controversy'), scientists don't simply look at CO2 when publishing the thousands of papers, dating back over 100 years, on its being a potent greenhouse gas with the global potential to affect temperature in a potentially detrimental way (Arrhenius, 1896; Callendar, 1937; Plass, 1955; Keeling, 1969; Sawyer, 1972), etc.

If nothing else, the title of the 2004 paper "CO2 as a primary driver of Phanerozoic climate" should have been a hint as to where I was going with all this. You'll recall that Berner was second author on that paper, and first author on the famed GEOCARB III paper that climate 'skeptics' misunderstand.

So now that we're at least aware there is evidence that CO2 is a major driver of global temperature, even on a geologic timescale, we must then ask whether the additional CO2 that has been directly measured by numerous independent studies around the world comes from human sources.

Anthropogenic atmospheric CO2 as a primary driver
The environment at large naturally contains both carbon-12 and carbon-13. Both are stable isotopes, with carbon-12 making up about 99% of carbon on Earth and carbon-13 the other 1%.

There is a third, unstable isotope of carbon that exists naturally in trace amounts: carbon-14. Its half-life is ~5,730 years, and it decays to stable nitrogen-14 (basically). This lends itself well to carbon dating and, as we'll see, to identifying carbon sources emitting in excess of the natural carbon budget.

Carbon-14 is created naturally in the upper troposphere and stratosphere when cosmic rays (or more precisely, the neutrons they generate) bombard nitrogen-14. The newly formed carbon-14 decays over time back to nitrogen-14, and the cycle, very simply speaking, continues.

Very simply:

cosmic rays --> nitrogen-14 --> carbon-14 --> nitrogen-14 (repeat)

The production of trace amounts of carbon-14 fits the carbon cycle (budget) perfectly, and with the exception of some rare large-scale variations (e.g. 5480 BCE and 774-775 CE), it remains in roughly constant proportion to all carbon on Earth.

That steady proportion of carbon-14 can be measured in everything that takes it up. As we breathe, we take in carbon from the air. As we eat, we take in carbon from our food, which got it from the soils and the air. Like us, other animals take in carbon by eating, drinking, and breathing. Trees, animals, plants, bugs, fish... everything alive takes in carbon, and the carbon they take in will invariably be 99% carbon-12, 1% carbon-13, and a trace of carbon-14 in known proportion (~1 atom per trillion).

When organisms die, their bodies initially contain those same proportions of carbon. However, since carbon-14 is radioactive with a half-life of 5,730 years (give or take 40 years), it beta-decays to nitrogen-14 as explained above, with half of what remains disappearing every 5,730 years. So any remains we dig up after many half-lives, say 50,000 years or more, will have 99% carbon-12, 1% carbon-13, and trace amounts of..... nitrogen-14. By that time essentially all the carbon-14 will have decayed.
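For anyone who wants the arithmetic, the decay follows the standard half-life law, N(t) = N0 x (1/2)^(t / 5,730 yr). A quick sketch in Python:

# Fraction of the original carbon-14 remaining after t years,
# using the standard half-life decay law with the ~5,730-year half-life.
HALF_LIFE = 5730.0   # years, give or take 40

def c14_remaining(t_years):
    return 0.5 ** (t_years / HALF_LIFE)

for t in (5730, 11460, 57300, 1000000):
    print(f"after {t:>9} years: {c14_remaining(t):.2e} of the carbon-14 remains")

# After ten half-lives (~57,300 years) less than a tenth of a percent is left;
# after a million years, effectively zero. Hence fossil carbon is 14C-free.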

So unless we're archaeologists or paleontologists, there's no reason to go around the planet digging up the remains of long-dead organisms. Ahh, but there is. The oil industry sends out geologists with teams of engineers in tow to dig up dead organisms all the time.

Fossil fuels, so named because they're found in fossil layers, are formed over geologic time. Rather than get into the specifics of how they're formed (this blog is way too long already), I'll just summarize that oil and natural gas are what's left of organisms that died many, many millions of years ago.

Given that they've been dead for millions upon millions of years, any carbon-14 in their bodies when they died (and for tens of thousands of years after) is long, long gone. As such, when we raise these organisms from the depths to incinerate them, the gases they release will contain carbon, nitrogen, and other elements, but none of that carbon will be carbon-14.

Burn enough of these fossil fuels and you'd expect to see a drop in the natural proportion of carbon-14 relative to carbon-12 and carbon-13 in the atmosphere, as more and more of the stable isotopes enter the atmosphere from fossil fuel burning. And that's exactly what we see.
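Here's a toy version of that dilution (the Suess effect) in the same back-of-the-envelope spirit. The reservoir size and ratio are rough assumptions, and a real calculation would have to account for exchange with the oceans and biosphere, but the direction is the point:

# Toy Suess-effect sketch; closed-box assumption, rough numbers.
# Fossil carbon contains no carbon-14, so adding it dilutes the 14C/C ratio.
atm_carbon = 850.0    # gigatonnes of carbon in the atmosphere (rough figure)
c14_ratio = 1.0e-12   # ~1 carbon-14 atom per trillion carbon atoms

for fossil_added in (50, 100, 200):   # GtC of 14C-free fossil carbon added
    diluted = (atm_carbon * c14_ratio) / (atm_carbon + fossil_added)
    drop = (1 - diluted / c14_ratio) * 100
    print(f"add {fossil_added} GtC fossil carbon -> 14C/C ratio drops ~{drop:.1f}%")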

Not only have scientists directly measured carbon-14 ratio drops across the globe, but those drops coincide with rises in atmospheric CO2, which themselves coincide with the start of the industrial revolution, whose spike coincides with the start of oil combustion, and all of it coinciding with statistically significant rises in mean global surface temperature, which seem to line up with worldwide shell thinning and coral reef bleaching, to name a few. All rather suspiciously linked, I'd say. But what do I know? I'm no climate scientist, nor a geologist.

And though computer models have been wrong and vary as to forecast rates of temperature increase, the fact remains that temperatures, along with suspiciously carbon-14-poor CO2, continue to rise at rates that may be insignificant on a geologic timescale but are probably quite significant on a human one. Given the numerous positive feedback loops that have occurred in the deep past, and not knowing exactly where their trigger points lie, it is possible that even a small increase in mean global surface temperature could hit some unknown threshold in the environment that sets off a positive feedback loop and sends Earth's climate careening in a direction in which only trilobites and cyanobacteria would flourish.

Having read all this, one might wonder: who would ever take the time to understand it well enough to know when they're being fed devious partial truths? As always, thanks for reading.
