Global climate models

General

Global climate models (GCMs) comprise fundamental physical concepts (laws) and parameterisations of physical, biological, and chemical components of the climate system. These concepts and parameterisations are expressed as mathematical equations, averaged over time and over grid volumes. The equations describe the evolution of many variables (e.g. temperature, wind speed, humidity and pressure) and together define the state of the atmosphere. The equations are then converted to a programming language, defining among other things their possible interactions with other formulations, so that they can be solved on a computer and integrated forward in discrete time steps.
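
To illustrate the principle of discretisation, the minimal sketch below (all values invented for illustration, not taken from any actual GCM) steps a one-dimensional diffusion equation forward on a grid in discrete time steps, just as a climate model does in three dimensions for many coupled variables.

```python
# Minimal sketch: the 1-D diffusion equation dT/dt = K * d2T/dx2,
# discretised on a grid and integrated forward with explicit Euler steps.
import numpy as np

nx, dx, dt, K = 50, 100e3, 600.0, 1.0e4   # grid points, spacing (m), time step (s), diffusivity (m2/s)
x = np.arange(nx)
T = 280.0 + 10.0 * np.exp(-((x - nx / 2) ** 2) / 20.0)   # initial temperature field (K)

for step in range(1000):                  # integrate forward in discrete time steps
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2   # discrete Laplacian (periodic domain)
    T = T + dt * K * lap                  # explicit Euler update

print(f"mean {T.mean():.2f} K, max {T.max():.2f} K")
```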

A global climate model needs to include a number of component models representing the oceans, the atmosphere, the land surface and the continental ice, together with the fluxes between them. Weather models may be regarded as a subset of climate models, built upon the same basic framework across all scales.
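
Structurally, the coupling works roughly as in the toy sketch below (hypothetical and highly simplified, with invented numbers): each component is advanced in turn, and a heat flux proportional to the air-sea temperature difference is exchanged at every coupling step.

```python
# Toy coupling loop: atmosphere and ocean exchange a bulk heat flux each day.
K_EXCHANGE = 20.0               # bulk exchange coefficient, W/(m2 K) (assumed)
C_ATM, C_OCN = 1.0e7, 2.5e8     # heat capacity per m2, J/(m2 K): whole air column vs ~60 m of ocean

t_air, t_sea, dt = 285.0, 290.0, 86400.0   # K, K, one-day coupling step (s)
for day in range(365):
    flux = K_EXCHANGE * (t_sea - t_air)    # ocean warms the cooler atmosphere (W/m2)
    t_air += flux * dt / C_ATM
    t_sea -= flux * dt / C_OCN
print(f"after one year: air {t_air:.2f} K, sea {t_sea:.2f} K")
```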

Prior to running any numerical model one requires an initial condition, and depending upon the model type one may also require a number of boundary conditions (see below). In numerical weather prediction the initial conditions are obtained by analysing and incorporating observations describing the current state of the atmosphere. Whether a grid point is over land or sea, what type of vegetation is prevalent, etc., will affect how the model interacts with the surface boundary condition.
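
The sketch below illustrates how such a surface boundary condition might enter a model (the surface types and albedo values are invented for illustration): the same incoming sunshine is absorbed differently depending on what covers a grid point.

```python
# Hypothetical surface boundary condition: absorbed shortwave flux
# depends on the surface type assigned to each grid point.
SURFACE_ALBEDO = {"sea": 0.06, "forest": 0.12, "grassland": 0.20, "ice": 0.60}  # assumed values

def absorbed_solar(surface_type: str, incoming_wm2: float) -> float:
    """Shortwave flux absorbed at the surface for a given surface type (W/m2)."""
    return (1.0 - SURFACE_ALBEDO[surface_type]) * incoming_wm2

for cell in ("sea", "forest", "grassland", "ice"):
    print(cell, round(absorbed_solar(cell, 340.0), 1), "W/m2")
```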

All numerical models of the atmosphere are based upon the same set of governing equations, describing a number of known physical principles. Where numerical models differ is in how the individual equations are solved, what approximations and assumptions are made, and how the physical processes are represented. Many processes, for example radiation, convection and precipitation, occur at scales too small to be resolved directly by the numerical model and thus need to be parameterised, i.e. described not by known physical principles, but in an empirical way.
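
As an example of what a parameterisation can look like, the sketch below shows a crude convective adjustment (illustrative values only, not any particular model's scheme): instead of resolving individual convective plumes, an unstable column is simply relaxed back toward a critical lapse rate.

```python
# Crude convective adjustment: sub-grid convection is represented by an
# empirical rule rather than resolved explicitly.
import numpy as np

CRITICAL_LAPSE = 6.5e-3  # K per metre, empirical threshold (assumed)

def convective_adjustment(temp: np.ndarray, z: np.ndarray) -> np.ndarray:
    """If a layer's lapse rate exceeds the critical value, reset it to that
    value, mimicking unresolved convective mixing."""
    temp = temp.copy()
    for k in range(len(z) - 1):
        dz = z[k + 1] - z[k]
        lapse = (temp[k] - temp[k + 1]) / dz
        if lapse > CRITICAL_LAPSE:                       # layer unstable
            temp[k + 1] = temp[k] - CRITICAL_LAPSE * dz  # empirical adjustment
    return temp

z = np.array([0.0, 1000.0, 2000.0, 3000.0])   # level heights (m)
T = np.array([300.0, 290.0, 283.0, 276.0])    # 10 K/km in lowest layer: unstable
print(convective_adjustment(T, z))
```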

Using climate models in an experimental manner to improve our understanding of how the climate system works is a highly valuable research application. More often, however, climate models are used to predict the future state of the global climate system. Forecasts (or projections) can be made from a single model run, or from an ensemble of forecasts produced by slightly perturbing the initial conditions and/or other aspects of the model used.
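
The following sketch illustrates the ensemble idea using the Lorenz-63 toy system rather than a real climate model: the same equations are integrated from ten slightly perturbed initial states, and the resulting spread gives a measure of forecast uncertainty.

```python
# Ensemble forecasting sketch on the Lorenz-63 system (toy stand-in for a GCM).
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward Euler step of the Lorenz-63 equations."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
ensemble = [base + rng.normal(scale=1e-4, size=3) for _ in range(10)]  # tiny perturbations

for _ in range(2000):                     # integrate 20 time units
    ensemble = [lorenz_step(m) for m in ensemble]

xs = np.array([m[0] for m in ensemble])
print(f"ensemble mean x = {xs.mean():.2f}, spread (std) = {xs.std():.2f}")
```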


Chaotic nature of the climate system

The chaotic nature of the climate system was first recognized by Lorenz (1969, 1975), who defined two types of problems associated with predictability:

  • Predictability of the first kind, which is essentially the prediction of the future evolution of the atmosphere, given some knowledge of its initial state. Predictability of the first kind is therefore primarily an initial value problem, requiring a detailed set of good observations describing the actual conditions at the start of the modelling experiment. Daily numerical weather prediction is a typical example of this.

  • Predictability of the second kind, in which the objective is to predict the evolution of the statistical properties of the climate system in response to changes in external forcings over time. Predictability of the second kind is essentially a boundary value problem, requiring good information on all external factors which might influence climate over time, e.g. variations in land use, ozone, aerosols, volcanic eruptions, solar variations, etc.

Giorgi (2005) demonstrates why climate prediction generally should be considered an initial value problem. Adding to the difficulty of prediction is the fact that the predictability of the climate system is strongly affected by non-linearities. A system that responds linearly to forcings is highly predictable: a doubling of the forcing results in a doubling of the response. Non-linear behaviour is much less predictable, and several factors increase the non-linearity of the climate system as a whole, thereby decreasing the predictability of climate in general. In addition, complex models involving non-linearities and interactions tend to lose accuracy because their errors multiply.
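
A simple worked example of why non-linearity spoils this neat proportionality (a zero-dimensional energy balance with illustrative numbers): because outgoing radiation grows as the fourth power of temperature, the warming produced by a doubled forcing is not exactly twice the warming produced by the single forcing.

```python
# Zero-dimensional energy balance: sigma * T^4 = S_abs + F at equilibrium.
SIGMA, S_ABS = 5.67e-8, 240.0    # Stefan-Boltzmann constant (W/m2/K4), absorbed solar (W/m2)

def equilibrium_T(extra_forcing: float) -> float:
    """Equilibrium temperature for a given extra forcing F (W/m2)."""
    return ((S_ABS + extra_forcing) / SIGMA) ** 0.25

T0 = equilibrium_T(0.0)
dT1 = equilibrium_T(4.0) - T0    # response to a 4 W/m2 forcing
dT2 = equilibrium_T(8.0) - T0    # response to the doubled forcing
print(f"dT(F)  = {dT1:.3f} K")
print(f"dT(2F) = {dT2:.3f} K (not exactly 2 x {dT1:.3f} K)")
```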

In summary, if the climate system is sufficiently non-linear, as observational evidence seems to indicate, then achieving skilful multidecadal climate predictions in response to different human and natural climate forcings is indeed a daunting challenge.


The ability of computer models to reproduce known temperature series

Global climate models are increasingly being used as tools for predicting how the global climate may respond to changes in atmospheric composition and land surface albedo. Such numerical models are typically developed only to reproduce the characteristics of modern climate and its inherent variability during a short period with relatively modest climate change (Tett et al. 1999). Using numerical models for projections of future climate therefore always involves extrapolation beyond the time range for which the models have been developed and tested. This contrasts with the prudence with which numerical models are applied in other, less complicated fields, e.g. engineering and economics.

Global climate model simulations of the 20th century are usually compared in terms of their ability to reproduce the 20th century temperature record; this has become almost an established test for global climate models. One curious aspect is that the models that agree in simulating the 20th century temperature record are also well known to differ significantly in their climate sensitivity. The question therefore remains: if climate models differ in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

The answer to this question is discussed by Kiehl (2007). While there exist established data sets for the 20th century evolution of well-mixed greenhouse gases, this is not the case for ozone, aerosols or different natural forcing factors. The only way that the different models (with respect to their sensitivity to changes in greenhouse gases) can all reproduce the 20th century temperature record is by assuming different 20th century data series for the unknown factors. In essence, the unknown factors in the 20th century used to drive the IPCC climate simulations were chosen to fit the observed temperature trend. This is a classical example of curve fitting, or tuning.
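
The compensation Kiehl (2007) describes can be illustrated numerically (all values invented for illustration): two toy models with quite different climate sensitivities reproduce exactly the same 20th century warming, because each assumes a different, poorly constrained aerosol forcing.

```python
# Two toy models, different sensitivities, same hindcast via tuned aerosol forcing.
F_GHG = 2.6                      # well-mixed greenhouse gas forcing, W/m2 (assumed)
observed_warming = 0.8           # K, 20th century warming to be matched (assumed)

for sensitivity in (0.5, 0.8):   # K per (W/m2), differing between the two models
    f_aerosol = observed_warming / sensitivity - F_GHG   # residual forcing chosen to fit
    warming = sensitivity * (F_GHG + f_aerosol)
    print(f"sensitivity {sensitivity} K/(W/m2): assumed aerosol forcing {f_aerosol:+.2f} W/m2 "
          f"-> simulated warming {warming:.2f} K")
```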

It has long been known that a model containing five or more adjustable parameters can be fitted to almost any known data set. But even when a good fit has been obtained, this does not guarantee that the model will perform well when forecasting just one year ahead into the future. This disappointing fact has been demonstrated many times by economic and other types of numerical models (Pilkey and Pilkey-Jarvis 2007).
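
A sketch of this overfitting problem on synthetic data (invented for illustration): a polynomial with five adjustable parameters fits a short, noisy 'temperature record' closely, yet its extrapolation just one step beyond the fitting period typically departs from the underlying trend.

```python
# Five-parameter curve fit: good in-sample, unreliable out-of-sample.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(20, dtype=float)
record = 0.01 * years + rng.normal(scale=0.1, size=20)   # weak trend plus noise

coeffs = np.polyfit(years, record, deg=4)                # degree 4 = 5 adjustable parameters
fit = np.polyval(coeffs, years)
print(f"in-sample RMS error: {np.sqrt(np.mean((fit - record) ** 2)):.3f}")
print(f"'forecast' for year 21: {np.polyval(coeffs, 21.0):+.2f} "
      f"(the underlying trend would give {0.01 * 21:+.2f})")
```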

Lamb (1995), commenting on a climate model able to reproduce the global temperature history since AD 1600 by an equation involving just three variables (the amount of volcanic material in the atmosphere, warming by CO2, and solar variations), cited the following comment made by the authors of the model: ‘We are hesitant to try to improve the fit of our calculations to the observations by tuning the model… With so many free parameters to vary one could fit almost anything to anything…’.


Computer models and the real world

Global climate is in a continuous dynamic state of flux, representing an analogue system in which everything happens simultaneously. Computer models, in contrast, are digital, attempting to solve one problem by repetitive calculations (iterations) before moving on to the next problem, and so on. This represents a drawback for computer-based modelling of climate.

While the laws of physics may be beyond discussion, it is not always equally clear or predictable which concept or process will predominate over which when a huge number of competing processes are acting simultaneously, as is the case for climate. The description of the individual concepts in a model may well be correctly defined in the mathematical formulations, but the dominance or subservience of one process relative to others is defined by the modeller, not by the model itself. The modeller decides that issue in the way the program code is written.

In the end, the computer model therefore simply mirrors the intellectual choices of the modeller and only puts numbers to them. If those choices are based on flawed reasoning or insufficient observational evidence, it is naive to believe that the model will somehow remove this fundamental problem through sheer number crunching power. That would be to attribute qualities of judgment to models which they simply do not have. In essence, a mathematical model does not relieve the intellectual burden of determining which variable or process is dominant over which. The modellers have to make a decision on this when writing the code and this choice then becomes an integral part of the model.

Most relationships between parameters in a complex of natural processes are nonlinear: as one variable changes, another may change exponentially. What complicates matters further is that a number of such parameters may change simultaneously as a certain process unfolds. In addition, a relationship believed to be linear when studied in isolation may turn out to be nonlinear in the context of simultaneous changes in other parameters. It is therefore entirely likely that it will forever be impossible to predict the future development of nature by way of numerical models (Pilkey and Pilkey-Jarvis 2007).
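
A trivial example of this last point (invented function, purely illustrative): a response that looks perfectly linear when one variable is varied in isolation becomes non-linear as soon as a second, coupled variable changes at the same time.

```python
# Linear in isolation, non-linear under simultaneous change.
def response(x: float, y: float) -> float:
    return x * y          # linear in x for any fixed y

# y held fixed: doubling x doubles the response (linear behaviour)
print(response(1.0, 2.0), response(2.0, 2.0))   # 2.0 4.0

# y coupled to x: doubling x now quadruples the response (non-linear)
print(response(1.0, 1.0), response(2.0, 2.0))   # 1.0 4.0
```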

In any case, computer models of climate can never be superior to the knowledge-based understanding derived from experiments and classical field observations. Models may prove powerful instruments for improving our understanding of complicated laws and process associations. But until the empirical knowledge coded into them is perfect and comprehensive, they have to be considered predictive tools with many limitations.

Surface air temperature is often seen as the single most important output parameter from climate models. Surface air temperature is, however, a poor indicator of global heat changes, as air has relatively little mass associated with it. Ocean heat changes are the dominant factor in global heat changes, and presumably explain several important temperature peaks and lows shown by recent surface air temperature series but not forecast by climate models. Until climate models are able to handle ocean dynamics in a thorough and well-understood manner, there is presumably little hope of obtaining reliable atmospheric climate forecasts. The question remains: how much of the surface air temperature change since the end of the Little Ice Age actually derives from oceanographic changes?
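
A back-of-envelope calculation with standard textbook values shows why air carries so little of the heat: the entire atmosphere stores roughly as much heat as only the top couple of metres of ocean.

```python
# Heat capacity of the whole atmospheric column versus the ocean, per m2.
CP_AIR, CP_SEA = 1004.0, 3990.0          # specific heats, J/(kg K)
P_SURF, G = 1.013e5, 9.81                # surface pressure (Pa), gravity (m/s2)
RHO_SEA = 1025.0                         # sea water density, kg/m3

air_column = P_SURF / G                  # atmospheric mass per m2 (~1.0e4 kg)
air_heat_cap = air_column * CP_AIR       # J/(m2 K) for the entire air column
ocean_per_metre = RHO_SEA * CP_SEA       # J/(m2 K) per metre of ocean depth

print(f"equivalent ocean depth: {air_heat_cap / ocean_per_metre:.1f} m")
```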

Reid Bryson, perhaps the world’s most cited climatologist, stated as early as 1993 that a model is "nothing more than a formal statement of how the modeller believes that the part of the world of his concern actually works". Global climate models are often defended by stating that they are based on well-established laws of physics. There is, however, much more to the models than just the laws of physics; otherwise they would all produce the same output for the future climate, which they do not. Climate models are, in effect, nothing more than mathematical ways for experts to express their best opinion about how the real world functions.

 
