The RCM Fallacy: Why Regional Models are Not a Definitive Source for Climate Impact Projection
A critical analysis of regional climate models reveals that they cannot be assumed to be more accurate, or more useful, in quantifying future risk from extreme weather.
By Dr. Josh Hacker, Chief Science Officer and Co-Founder
RCMs by themselves are not suited to robust physical risk analysis
Amid exploding requirements for climate risk disclosure, and a desire for companies to gain competitive advantage by injecting climate risk into business decisions, organizations are faced with critical questions about which sources of physical climate risk data are most reliable. Climate model projections must be downscaled to be useful in most applications, including impacts. All downscaling methods are limited in some way, and no one approach is best for all parameters and metrics.
Some have assumed that regional climate model (RCM) simulations provide the best downscaled information about future climates. The belief is that RCMs can serve as the basis for enterprise decision-making that could span asset to portfolio planning, as well as resilience and ESG efforts.
There are five reasons why this assumption is flawed, spanning scalability, accuracy, and business applicability:
- RCM model simulations are constructed to answer science questions—not business questions that affect enterprise risk management or engineering resiliency decisions.
- RCMs are not necessarily more accurate at a set of individual locations than other approaches; accuracy requires additional statistical treatment that is available to other downscaling methods.
- The variety of models used in producing RCM output severely limits interpretability across simulations.
- Fundamental differences in RCM formulation can amplify differences among the driving global climate models (GCMs), leading RCMs to produce climate trends of opposite sign even when driven by the same GCM.
- The expense of RCM data production results in:
  - Global inconsistency in the models, methods, and driving GCMs.
  - Limited resolution/granularity (most are 9-30 km).
  - Robust analysis of extreme events that is difficult or impossible.
Let’s discuss the basis for these issues and provide more context.
No single downscaling approach – including RCM simulation – fits all
Decision-making and impacts analysis require downscaling from global climate models (GCMs). Downscaling transforms low-resolution environmental information from large-scale GCMs into high-resolution spatial and temporal scales. It can refine the “coarse” resolutions of climate-model data (110 km, or 68.4 miles) to much more granular scales—from 30 km (18.6 miles) to as fine as one meter (39.37 inches). It adds the faster and smaller features in the Earth system that produce real impacts and extreme events. It also often includes estimating and applying error corrections.
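As a concrete illustration of the spatial refinement step, the sketch below interpolates coarse grid values to a finer point. The grid and values are hypothetical; real downscaling also applies terrain adjustments and the statistical error corrections mentioned above.

```python
def bilinear(grid, x, y):
    """Bilinear interpolation on a coarse 2-D grid, with (x, y) given in
    grid-cell units.  A downscaler would evaluate this at every fine-grid
    point before applying terrain and statistical corrections."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid) - 1)
    y1 = min(y0 + 1, len(grid[0]) - 1)
    fx, fy = x - x0, y - y0
    top = grid[x0][y0] * (1 - fy) + grid[x0][y1] * fy
    bot = grid[x1][y0] * (1 - fy) + grid[x1][y1] * fy
    return top * (1 - fx) + bot * fx

# Four coarse-grid temperature values; evaluate at the cell midpoint.
coarse = [[10.0, 12.0], [14.0, 16.0]]
print(bilinear(coarse, 0.5, 0.5))  # 13.0, the average of the four corners
```

Interpolation alone adds no new physics; it only redistributes the coarse signal, which is why every credible downscaling chain pairs it with corrections.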
An RCM simulation offers one option for downscaling climate model projections, among multiple fundamental approaches that can also be combined. Hundreds of peer-reviewed papers examine the veracity of these approaches, and many make direct comparisons among them for specific regions and metrics.
The consensus is clear: no single method is best, or even most accurate, for all environmental parameters or climate hazard metrics. Each approach comes with trade-offs such as accuracy, applicability to future climates, applicability to extremes, granularity (resolution), and interpretability.
The assumption that RCMs are inherently more accurate in projecting local and regional impacts does not stand up to critical analysis. Instead, RCMs should be used with caution and in a limited way.
RCM output contains systematic errors
It is true that physical fidelity, here defined as preserving realistic and physically bounded relationships in space and time and between parameters such as temperature and wind, is most easily enforced by using an RCM that resembles a modern-day weather model. But it is not safe to assume that RCM output implies accuracy at a set of individual locations.
Consider that the most accurate atmospheric prediction or simulation possible has errors equivalent to the errors in observations (e.g., a temperature measurement from a thermometer). This is only possible for very short simulations starting from high-quality initial conditions. For example, high-resolution weather forecast errors exceed observational errors after a few hours (rainfall) or a few days (temperature). At longer time scales, numerical weather forecasts require extensive statistical post-processing to approach observation error levels.
An RCM is often an adapted weather model meant to reproduce accurate statistics rather than an accurate snapshot of the real atmosphere at any given time, but the same concept applies: RCM errors are always greater than observational errors, and RCM output also requires statistical post-processing to approach observation error levels. Other downscaling methods can be formulated to approach observational error levels too.
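To make the post-processing idea concrete, here is a minimal sketch of empirical quantile mapping, one common bias-correction technique. The data and function name are illustrative only, not a description of any particular vendor's method.

```python
import bisect

def quantile_map(model_hist, obs_hist, value):
    """Empirical quantile mapping: locate `value`'s quantile within the
    model's historical distribution, then return the observed value at
    that same quantile, removing systematic model bias."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    q = bisect.bisect_left(m, value) / len(m)   # model quantile, 0..1
    idx = min(int(q * len(o)), len(o) - 1)      # matching observed rank
    return o[idx]

# Toy example: the model runs 2 degrees warm relative to observations.
model_hist = list(range(10, 21))   # simulated historical temperatures
obs_hist = list(range(8, 19))      # observed temperatures
print(quantile_map(model_hist, obs_hist, 15))  # 13: the +2 bias is removed
```

The same correction can be applied to any downscaling method's output, which is why statistical treatment, not the choice of RCM versus another approach, is what drives point-level accuracy.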
Accuracy and granularity of RCM data are subject to limitations
All downscaling approaches are subject to limitations that may or may not be problematic in simulating future climates. Again, results are parameter and metric dependent. It’s easy to imagine an empirical approach based on historical analogs failing after 2070 because the historical data lacks representative extreme heat. Similarly, it’s easy to imagine an RCM failing to be accurate in simulating summertime rainfall because the convective precipitation parameterization was tuned based on 20th-century field-experiment data.
Most regional climate models are far less granular, at 9 to 30 kilometers, than the 90-meter resolution yielded by solutions like ClimateScore™ Global. The classic trade-off between resolution/granularity and length of simulation leaves RCM resolution similar to that of short-term weather forecast models available a decade or more ago. The granularity of empirical methods depends primarily on the data available to build downscaling models, which can be far more granular than RCM data alone. Such granularity is critical for many use cases, such as modeling the extreme temperatures that an energy company's transformers may face in the future.
RCM data sets cannot adequately account for acute weather events
Acute perils such as extreme wind, flood, and rain events, which can cause severe damage, occur at the tails of a probability distribution. Because of structural limitations in climate models, these acute events rarely appear in their output. RCM output is usually insufficient to fill out the samples needed to determine the tails of future distributions that define extremes in a nonstationary climate. This limits or prevents robust analysis of extreme-event probabilities.
RCMs are most often deployed for long-term simulations and to directly downscale climate model projections. Because the simulations are expensive to produce, a small number of simulations is usually available over any individual country.
Climate change is nonstationary, and filling out the “tails” at any future time requires enough extreme events representative of that particular time. This is far more easily obtained using a large number of simulations. Empirical downscaling, based on the large number of GCMs in the CMIP6 collection, is far more capable of providing the large samples needed for extreme value analysis. In addition, uncertainty analysis is far better supported with a large number of simulations.
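The sample-size point can be illustrated with a quick bootstrap experiment. The synthetic data below stand in for one simulation versus a pooled multi-model ensemble; the numbers are hypothetical, but the effect, a tighter tail estimate from a larger sample, is general.

```python
import random
import statistics

def tail_quantile(sample, q=0.99):
    """Empirical q-quantile (here the 99th percentile, a proxy for an
    extreme-event threshold)."""
    s = sorted(sample)
    return s[min(int(q * len(s)), len(s) - 1)]

def bootstrap_spread(sample, q=0.99, n_boot=200):
    """Bootstrap standard deviation of the q-quantile estimate: resample
    with replacement and measure how much the tail estimate wobbles."""
    rng = random.Random(0)
    estimates = [tail_quantile(rng.choices(sample, k=len(sample)), q)
                 for _ in range(n_boot)]
    return statistics.stdev(estimates)

rng = random.Random(42)
small = [rng.gauss(0, 1) for _ in range(200)]    # one expensive simulation
large = [rng.gauss(0, 1) for _ in range(5000)]   # pooled large ensemble
# The pooled sample gives a markedly smaller (tighter) spread on the
# 99th-percentile estimate than the single simulation does.
print(bootstrap_spread(small), bootstrap_spread(large))
```

This is the statistical reason a handful of costly RCM runs cannot pin down extreme-event probabilities as well as a large multi-model sample can.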
RCM output is hard to interpret and apply to future climate scenarios
RCM output is difficult to interpret and put into context of various future climate scenarios. RCMs can both dampen and amplify climate change signals relative to the GCMs that drive them. Two RCMs driven from the same GCM can indicate opposite signs of climate change (e.g., increasing or decreasing rainfall of a certain intensity).
Any individual combination of RCM and driving GCM cannot be put into context with the multitude of other possible combinations, so uncertainty within an individual climate scenario cannot be adequately assessed. The choice of RCM/GCM combination by the researchers who developed them, and the availability of only a small number (sometimes one) of the many possible models, determines an outcome that cannot be placed into the context of all possible outcomes.
Instead, using many simulations from multiple state-of-the-art global climate models, such as those housed within CMIP6, yields a better likelihood of representing scientific consensus and avoiding single-model bias.
Conclusion
As we wrote in our 2021 analysis of CORDEX, RCMs do offer value by enabling the scientific community to better understand relevant regional/local climate phenomena, and their variability and changes, through downscaling. RCMs improve the knowledge exchange needed to support the development of regional research programs.
Jupiter Intelligence cautioned that national, local, and organizational interests tend to drive regional climate model details and RCMs overall do not present a unified or consistent view, or the robust and consistent risk metrics, necessary to make global financial and policy decisions.
We continue to urge that data from a range of independent global climate models — and a toolbox containing available downscaling methods suitable for parameter, time and space scale, and use case — should be deployed in globally consistent ways to meet sustainable business requirements.
Learn more or request a demo and don’t forget to follow us on LinkedIn.
Josh Hacker is the Chief Science Officer and a Jupiter Co-founder.
For further information, please contact us at info@jupiterintel.com.