Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors
Authors: J. Kim, Duane E. Waliser, Chris A. Mattmann, Cameron E. Goodale, Andrew F. Hart, Paul A. Zimdars, Daniel J. Crichton, Colin Jones, Grigory Nikulin, Bruce Hewitson, Chris Jack, Christopher Lennard, Alice Favre
Institution: 1. JIFRESSE, University of California Los Angeles, Los Angeles, CA, USA
2. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
3. Sveriges Meteorologiska och Hydrologiska Institut, Norrköping, Sweden
4. University of Cape Town, Cape Town, South Africa
5. Centre de Recherches de Climatologie, UMR 6282 Biogéosciences, CNRS, Université de Bourgogne, Dijon, France
Abstract: Monthly mean precipitation, mean (TAVG), maximum (TMAX), and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs reasonably simulate the basic climatological features of these variables, but systematic biases also occur across the models. All RCMs show higher fidelity in simulating precipitation for western Africa than for eastern Africa, and for the tropics than for the northern Sahara. Interannual variation in wet-season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN and, regionally, higher for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperature. For all variables, the multi-model ensemble (ENS) generally outperforms the individual models included in it. An overarching conclusion of this study is that model biases vary systematically across regions, variables, and metrics, making it difficult to define a single representative index of model fidelity, especially for constructing ENS. This is an important concern for climate change impact assessment studies, because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for the regions, variables, and metrics required by a specific analysis and/or assessment. Evaluations against multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluation.
This article is indexed in SpringerLink and other databases.