A verification framework for interannual-to-decadal predictions experiments
Authors: L. Goddard, A. Kumar, A. Solomon, D. Smith, G. Boer, P. Gonzalez, V. Kharin, W. Merryfield, C. Deser, S. J. Mason, B. P. Kirtman, R. Msadek, R. Sutton, E. Hawkins, T. Fricker, G. Hegerl, C. A. T. Ferro, D. B. Stephenson, G. A. Meehl, T. Stockdale, R. Burgman, A. M. Greene, Y. Kushnir, M. Newman, J. Carton, I. Fukumori, T. Delworth
Affiliations: 1. International Research Institute for Climate and Society, The Earth Institute of Columbia University, Palisades, NY, USA
2. Climate Prediction Center, National Centers for Environmental Prediction, NOAA, Silver Spring, MD, USA
3. Earth System Research Laboratory, NOAA, University of Colorado, Boulder, CO, USA
4. UK Met Office, Hadley Centre, Exeter, UK
5. Canadian Centre for Climate Modelling and Analysis, Environment Canada, Victoria, BC, Canada
6. National Center for Atmospheric Research, Boulder, CO, USA
7. Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, USA
8. NOAA’s Geophysical Fluid Dynamics Laboratory, Princeton, NJ, USA
9. NCAS-Climate, Department of Meteorology, University of Reading, Reading, UK
10. University of Exeter, Exeter, UK
11. University of Edinburgh, Edinburgh, UK
12. European Centre for Medium-Range Weather Forecasts, Reading, UK
13. Lamont-Doherty Earth Observatory, The Earth Institute of Columbia University, Palisades, NY, USA
14. University of Maryland, College Park, MD, USA
15. Jet Propulsion Laboratory, NASA, Pasadena, CA, USA
16. Florida International University, Miami, FL, USA
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, nor is there any agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies.
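The deterministic comparison of initialized and uninitialized hindcasts described above can be illustrated with a mean-squared-error skill score. The sketch below uses synthetic data; the function `msss` and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: observed anomalies and two hindcast sets.
obs = rng.normal(size=50)                       # "observations"
init = obs + rng.normal(scale=0.5, size=50)     # initialized hindcast (smaller errors)
uninit = obs + rng.normal(scale=1.0, size=50)   # uninitialized projection (larger errors)

def msss(forecast, reference, obs):
    """Mean squared skill score of `forecast` relative to `reference`.

    Positive values indicate the forecast beats the reference;
    1.0 is a perfect forecast, 0.0 is no improvement.
    """
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_r

score = msss(init, uninit, obs)   # > 0 here, since the initialized set is closer to obs
```

In practice the forecasts would be bias-adjusted anomalies at each grid point or regional average, and significance of the score would need to account for the short hindcast record.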
However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how best to approach forecast uncertainty.
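One simple diagnostic for conclusion (4), whether ensemble spread reflects actual forecast uncertainty, is the spread-error ratio. The sketch below is a hedged illustration on synthetic data, not the probabilistic metric used in the paper; a ratio near 1 suggests the spread is consistent with the error of the ensemble mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hindcast set: 40 start dates, 10 ensemble members, built so
# that observations and members scatter identically about a shared signal.
center = rng.normal(size=40)                              # predictable signal
truth = center + rng.normal(scale=0.8, size=40)           # verifying observations
members = center[:, None] + rng.normal(scale=0.8, size=(40, 10))

ens_mean = members.mean(axis=1)
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))          # error of the ensemble mean
spread = np.sqrt(np.mean(members.var(axis=1, ddof=1)))    # mean intra-ensemble spread

ratio = spread / rmse   # ~1 for a well-calibrated ensemble; <1 under-dispersive
```

For real decadal hindcasts, ratios well below 1 (over-confident ensembles) are common, which is why the framework compares alternative ways of ascribing forecast uncertainty rather than trusting raw ensemble spread.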
This article is indexed in SpringerLink and other databases.