Similar Articles
20 similar articles found (search time: 31 ms)
1.
The temporal evolution of innovation and residual statistics of the ECMWF 3D‐ and 4D‐Var data assimilation systems has been studied. First, the observational method is applied on an hourly basis to the innovation sequences in order to partition the perceived forecast error covariance into contributions from observation and background errors. The 4D‐Var background turns out to be significantly more accurate than the background in the 3D‐Var. The estimated forecast error variance associated with the 4D‐Var background trajectory increases over the assimilation window. There is also a marked broadening of the horizontal error covariance length scale over the assimilation window. Second, the standard deviation of the residuals, i.e., the fit of observations to the analysis, is studied on an hourly basis over the assimilation window. This fit should, in theory, reveal the effect of model error in a strong constraint variational problem. A weakly convex curve is found for this fit, implying that the perfect model assumption of 4D‐Var may be violated with as short an assimilation window as six hours. To improve the optimality of variational data assimilation systems, a sequence of retunings is needed until the specified and diagnosed error covariances agree.
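A minimal numerical sketch of the "observational method" referred to above, assuming spatially uncorrelated observation errors: the innovation variance is split into σ_o² + σ_b², and the innovation covariance between nearby stations, extrapolated to zero separation, is attributed to the background alone. The station network and innovation series below are random placeholders and the zero-separation extrapolation is crudely taken from the shortest-range bin; this is not the ECMWF implementation.

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of the "observational method":
# innovations d = y - H(x_b) have variance var(d) = sigma_o^2 + sigma_b^2 when
# observation errors are spatially uncorrelated. Covariances between nearby
# stations are attributed to the background error alone, so extrapolating the
# binned innovation covariance to zero separation estimates sigma_b^2, and the
# remainder of the variance is assigned to sigma_o^2.
rng = np.random.default_rng(0)

n_sta, n_time = 80, 500
xy = rng.uniform(0.0, 1000.0, size=(n_sta, 2))   # station coordinates (km)
d = rng.normal(size=(n_time, n_sta))             # placeholder innovation series

cov = np.cov(d, rowvar=False)                    # station-by-station covariance
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
total_var = np.mean(np.diag(cov))

# Bin off-diagonal covariances by separation; take the shortest-range bin as a
# crude zero-separation extrapolation (a fitted curve would be used in practice).
mask = ~np.eye(n_sta, dtype=bool)
bins = np.linspace(0.0, 500.0, 11)
idx = np.digitize(dist[mask], bins)
binned = [cov[mask][idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)]

sigma_b2 = max(binned[0], 0.0)                   # background error variance estimate
sigma_o2 = max(total_var - sigma_b2, 0.0)        # observation error variance estimate
print(f"sigma_b^2 ~ {sigma_b2:.3f}, sigma_o^2 ~ {sigma_o2:.3f}")
```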

2.
The effectiveness of two methods for targeting observations is examined using a T21 L3 QG model in a perfect model context. Target gridpoints are chosen using the pseudo‐inverse (the inverse composed of the first three singular vectors only) and the quasi‐inverse or backward integration (running the tangent equations with a negative time‐step). The effectiveness of a target is measured by setting the analysis error to zero in a region surrounding the target and noting the impact on the forecast error in the verification region. In a post‐time setting, when the targets are based on forecast errors that are known exactly, both methods provide targets that are significantly better than targets chosen at random within a broad region upstream of the verification region. When uncertainty is added to the verifying analysis such that the forecast error is known inexactly, the pseudo‐inverse targets still perform very well, while the backward integration targets are degraded. This degradation due to forecast uncertainty is especially significant when the targets are a function of height as well as horizontal position. When an ensemble‐forecast difference is used in place of the inexact forecast error, the backward integration targets may be improved considerably. However, this significant improvement depends on the characteristics of the initial‐time ensemble perturbation. Pseudo‐inverse targets based on ensemble forecast differences are comparable to pseudo‐inverse targets based on exact forecast errors. Targets based on the largest analysis error are also found to be considerably more effective than random targets. The collocation of the backward integration and pseudo‐inverse targets appears to be a good indicator of target skill.
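The two targeting estimates compared above can be illustrated with a toy linear propagator in place of the T21 L3 QG tangent model. The sketch below is an assumption-laden stand-in: M is a random matrix, the pseudo-inverse keeps only the three leading singular triples, and the backward integration is represented by a plain matrix inverse; the target points are simply the gridpoints where the estimated initial error is largest.

```python
import numpy as np

# Hedged sketch of the two targeting estimates, with a random linear propagator
# standing in for the tangent-linear model. The pseudo-inverse uses only the
# first three singular vectors of M; the quasi-inverse / backward integration is
# represented here by M^-1. Gridpoints with the largest estimated initial error
# would be chosen as targets.
rng = np.random.default_rng(1)
n = 60                                   # toy state dimension
M = rng.normal(size=(n, n))              # stand-in tangent-linear propagator
x0_err = rng.normal(size=n)              # "true" initial error (unknown in practice)
fcst_err = M @ x0_err                    # resulting forecast error

# Pseudo-inverse: keep only the three leading singular triples of M.
U, s, Vt = np.linalg.svd(M)
k = 3
x0_pseudo = (Vt[:k].T * (1.0 / s[:k])) @ (U[:, :k].T @ fcst_err)

# Quasi-inverse / backward-integration analogue: full inverse of M.
x0_quasi = np.linalg.solve(M, fcst_err)

print("pseudo-inverse targets:", np.argsort(-np.abs(x0_pseudo))[:5])
print("quasi-inverse targets: ", np.argsort(-np.abs(x0_quasi))[:5])
```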

3.
This study estimates realistic changes of the Japan Sea state by assimilating satellite measurements into an eddy-resolving circulation model. Suboptimal but feasible assimilation schemes of approximate filtering and nudging play essential roles in the system. The sequential update of error covariance significantly outperforms the asymptotic covariance in the sequential assimilation due to the irregular sampling patterns from multiple altimeter satellites. The best estimates show an average rms difference of only 1.2°C from the radiometer data, and also explain about half of the sea level variance measured by the altimeter observations. The subsurface conditions associated with the mesoscale variabilities are also improved, especially in the Tsushima Warm Current region. It is demonstrated that the forecast limit strongly depends on the variable, depth, and location.

4.
The rôle of thermodynamics in the oceanic general circulation is investigated. The ocean is regarded as an open dissipative system that exchanges heat and salt with the surrounding system. A new quantitative method is presented to express the rate of entropy increase for a large‐scale open system and its surroundings by the transports of heat and matter. This method is based on Clausius's definition of thermodynamic entropy, and is independent of explicit expressions of small‐scale dissipation processes. The method is applied to an oceanic general circulation model, and the entropy increase rate is calculated during the spin‐up period of the model. It is found that, in a steady state, the entropy increase rate of the ocean system is zero, whereas that of the surroundings shows positive values, for both heat and salt transports. The zero entropy increase rate of the ocean system reflects the fact that the system is in a steady state, while the positive entropy increase rate in the surroundings is caused by irreversible transports of heat and salt through the steady‐state circulation. The calculated entropy increase rate in the surroundings is 1.9×10¹¹ W K⁻¹, and is primarily due to the heat transport. It is suggested that the existence of a steady‐state dissipative system on the Earth, from a living system to the oceanic circulation, necessarily contributes to the entropy increase in its nonequilibrium surroundings.
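The bookkeeping described above can be written in a generic Clausius-type form (standard textbook notation, not the paper's): a steady circulation that carries a heat flux F from a warm reservoir to a cold one leaves the system's entropy unchanged while increasing that of the surroundings.

```latex
% Generic Clausius-type bookkeeping (not the paper's exact notation): a steady
% circulation exporting heat F from a reservoir at T_warm to one at T_cold
% leaves the system's entropy unchanged but increases that of the surroundings.
\begin{align}
  \frac{dS_{\mathrm{ocean}}}{dt} &= 0 \quad \text{(steady state)},\\
  \frac{dS_{\mathrm{surr}}}{dt}  &= F\left(\frac{1}{T_{\mathrm{cold}}} - \frac{1}{T_{\mathrm{warm}}}\right) \;\ge\; 0 .
\end{align}
```

With purely illustrative numbers (a heat transport of order 2 PW between reservoirs near 300 K and 275 K), this expression gives a few ×10¹¹ W K⁻¹, the same order of magnitude as the 1.9×10¹¹ W K⁻¹ quoted above.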

5.
Differences between model forecasts from adjacent past cycles that are valid at the same time but have different lead times are used to compute a forecast error covariance, which is then combined with the static background error covariance within an ensemble–variational hybrid assimilation system, so that an anisotropic and partially flow-dependent background error covariance is constructed in the assimilation system. Idealized single-observation experiments show that the scheme alleviates the isotropy and lack of flow dependence of the static, modelled background error covariance. A series of assimilation and simulation experiments for Typhoon Fanapi (凡亚比) show that the scheme outperforms three-dimensional variational assimilation (3DVar) in terms of typhoon track, intensity, and other aspects. Because the scheme introduces an anisotropic, partially flow-dependent background error covariance without requiring an ensemble forecast and at a computational cost comparable to 3DVar, it is suitable for time-critical operational forecasting under limited computational resources.
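A hedged sketch of the generic hybrid construction described above: a flow-dependent covariance is estimated from differences between forecasts from adjacent cycles valid at the same time, then blended linearly with a static B. The weight beta, the dimensions, and the surrogate forecast differences below are illustrative, not the paper's configuration; a single-observation increment shows how the blend changes the analysis.

```python
import numpy as np

# Illustrative sketch (not the paper's system): build a flow-dependent covariance
# from time-lagged forecast differences valid at the same time, blend it with a
# static B, and inspect the effect on a single-observation analysis increment.
rng = np.random.default_rng(2)
n, n_pairs, beta = 50, 10, 0.5            # state size, forecast pairs, blending weight

B_static = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 5.0) ** 2)

# Differences between e.g. 12 h and 36 h forecasts valid at the same time,
# taken from several past cycles (random surrogates here).
diffs = rng.normal(size=(n_pairs, n))
diffs -= diffs.mean(axis=0)
B_flow = diffs.T @ diffs / (n_pairs - 1)

B_hybrid = (1.0 - beta) * B_static + beta * B_flow

# Single-observation analysis increment at gridpoint j: dx = B h (h^T B h + r)^-1 d
j, r, d = 25, 0.5, 1.0
h = np.zeros(n); h[j] = 1.0
for name, B in [("static", B_static), ("hybrid", B_hybrid)]:
    dx = B @ h * d / (h @ B @ h + r)
    print(name, "increment at the observed point:", dx[j].round(3))
```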

6.
We describe the development and preliminary application of the inverse Regional Ocean Modeling System (ROMS), a four dimensional variational (4DVAR) data assimilation system for high-resolution basin-wide and coastal oceanic flows. Inverse ROMS makes use of the recently developed perturbation tangent linear (TL), representer tangent linear (RP) and adjoint (AD) models to implement an indirect representer-based generalized inverse modeling system. This modeling framework is modular. The TL, RP and AD models are used as stand-alone sub-models within the Inverse Ocean Modeling (IOM) system described in [Chua, B.S., Bennett, A.F., 2001. An inverse ocean modeling system. Ocean Modell. 3, 137–165.]. The system allows the assimilation of a wide range of observation types and uses an iterative algorithm to solve nonlinear assimilation problems. The assimilation is performed either under the perfect model assumption (strong constraint) or by also allowing for errors in the model dynamics (weak constraint). For the weak constraint case the TL and RP models are modified to include additional forcing terms on the right hand side of the model equations. These terms are needed to account for errors in the model dynamics. Inverse ROMS is tested in a realistic 3D baroclinic upwelling system with complex bottom topography, characterized by strong mesoscale eddy variability. We assimilate synthetic data for upper ocean (0–450 m) temperatures and currents over a period of 10 days using both a high resolution and a spatially and temporally aliased sampling array. During the assimilation period the flow field undergoes substantial changes from the initial state. This allows the inverse solution to extract the dynamically active information from the synthetic observations and improve the trajectory of the model state beyond the assimilation window. Both the strong and weak constraint assimilation experiments show forecast skill greater than persistence and climatology during the 10–20 days after the last observation is assimilated. Further investigation into the functional form of the model error covariance and into the use of the representer tangent linear model may lead to improvement in the forecast skill.
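For reference, the strong- versus weak-constraint distinction used above can be summarised by the generic 4DVAR cost functions below (textbook form, not the exact Inverse ROMS formulation); in the weak-constraint case the model equation holds only up to an error term penalised by a model error covariance Q.

```latex
% Generic strong- vs weak-constraint 4DVAR cost functions (textbook form, not the
% exact Inverse ROMS notation). In the weak-constraint case the model equation
% x_{k+1} = M_k(x_k) + eta_k holds only up to an error eta_k penalized by Q_k.
\begin{align}
 J_{\mathrm{strong}} &= \tfrac12 (x_0-x_b)^{\mathrm T} B^{-1}(x_0-x_b)
   + \tfrac12 \sum_k \big(H_k x_k - y_k\big)^{\mathrm T} R_k^{-1}\big(H_k x_k - y_k\big),\\
 J_{\mathrm{weak}}   &= J_{\mathrm{strong}}
   + \tfrac12 \sum_k \eta_k^{\mathrm T} Q_k^{-1}\eta_k ,
 \qquad x_{k+1} = M_k(x_k) + \eta_k .
\end{align}
```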

7.
Three‐dimensional variational algorithms are widely used for atmospheric data assimilation at the present time, particularly on the synoptic and global scales. However, mesoscale and convective scale phenomena are considerably more chaotic and intermittent, and it is clear that true 4‐dimensional data assimilation algorithms will be required to properly analyze these phenomena. In its most general form, the data assimilation problem can be posed as the minimization of a 4‐dimensional cost function with the forecast model as a weak constraint. This is a much more difficult problem than the widely discussed 4DVAR algorithm where the model is a strong constraint. Bennett and collaborators have considered a method of solution to the weak constraint problem, based on representer theory. However, their method is not suitable for the numerical weather prediction problem, because it does not cycle in time. In this paper, the representer method is modified to permit cycling in time, in a manner which is entirely internally consistent. The method was applied to a simple 1‐dimensional constituent transport problem where the signal was sampled (perfectly and imperfectly) with various sparse observation network configurations. The cycling representer algorithm discussed here successfully extracted the signal from the noisy, sparse observations.

8.
Reducing systematic errors by empirically correcting model errors (total citations: 2; self-citations: 0; citations by others: 2)
A methodology for the correction of systematic errors in a simplified atmospheric general‐circulation model is proposed. First, a method for estimating initial tendency model errors is developed, based on a 4‐dimensional variational assimilation of a long‐analysed dataset of observations in a simple quasi‐geostrophic baroclinic model. Then, a time‐variable potential vorticity source term is added as a forcing to the same model, in order to parameterize subgrid‐scale processes and unrepresented physical phenomena. This forcing term consists of a (large‐scale) flow‐dependent parametrization of the initial tendency model error computed by the variational assimilation. The flow dependency is given by an analogues technique which relies on the analysis dataset. Such empirical driving causes a substantial improvement of the model climatology, reducing its systematic error and improving its high frequency variability. Low‐frequency variability is also more realistic and the model shows a better reproduction of Euro‐Atlantic weather regimes. A link between the large‐scale flow and the model error is found only in the Euro‐Atlantic sector, other mechanisms being probably the origin of model error in other areas of the globe.
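A hedged sketch of the analogue-based empirical forcing idea described above: a catalogue of analysed large-scale states and their diagnosed initial-tendency errors is searched for the states closest to the current model state, and the mean of their tendency errors is subtracted as an extra forcing. The catalogue, dimensions, and the function name empirical_forcing are illustrative, not the paper's implementation.

```python
import numpy as np

# Hedged sketch (not the paper's code) of an analogue-based empirical forcing:
# look up the analysed states closest to the current model state and use the
# average of their diagnosed initial-tendency errors as a correction term.
rng = np.random.default_rng(3)
n_state, n_catalogue, n_analogues = 40, 200, 5

catalogue_states = rng.normal(size=(n_catalogue, n_state))        # analysed large-scale states
catalogue_errors = 0.1 * rng.normal(size=(n_catalogue, n_state))  # diagnosed tendency errors

def empirical_forcing(x):
    """Flow-dependent forcing from the tendency errors of the nearest analogues."""
    dist = np.linalg.norm(catalogue_states - x, axis=1)
    nearest = np.argsort(dist)[:n_analogues]
    return -catalogue_errors[nearest].mean(axis=0)   # subtract the expected error

x = rng.normal(size=n_state)
tendency = -0.1 * x                                   # placeholder model tendency
corrected_tendency = tendency + empirical_forcing(x)
```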

9.
The ability of data assimilation systems to infer unobserved variables has brought major benefits to atmospheric and oceanographic sciences. Information is transferred from observations to unobserved variables in two ways: through the temporal evolution of the predictive equations (either a forecast model or its adjoint) or through an error covariance matrix (or a parametrized approximation to the error covariance). Here, it is found that high frequency information tends to flow through the former route, low frequency through the latter. It is also noted that using the Kalman Filter analysis to estimate the correlation between the observed and unobserved variables can lead to a biased result because of an error correlation: this error correlation is absent when the Kalman Smoother is used.

10.
11.
Different solution methods in variational data assimilation (total citations: 2; self-citations: 0; citations by others: 2)
Two difficulties arise when applying variational data assimilation methods: one is inverting the background error covariance matrix, and the other is the computational and storage burden associated with that matrix. Various solution methods have been proposed to address these problems. This paper systematically reviews the main variational solution methods, including the incremental method, variational analysis with spatial filter operators, preconditioning, the physical-space statistical analysis system, and spectral statistical interpolation; analyses and discusses their advantages and disadvantages; and indicates the conditions under which each solution method is applicable.
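The incremental and preconditioning routes listed above both sidestep the explicit inverse of B by reformulating the cost function; a standard textbook form (not tied to any single scheme in the review) uses the control-variable transform δx = B^{1/2} v:

```latex
% Standard incremental 3D-Var cost function and the B^{1/2} control-variable
% transform that removes the explicit inverse of B (textbook form, not tied to a
% particular scheme reviewed above).
\begin{align}
 J(\delta x) &= \tfrac12\,\delta x^{\mathrm T} B^{-1}\delta x
   + \tfrac12\,(H\delta x - d)^{\mathrm T} R^{-1}(H\delta x - d),
   \qquad d = y - H(x_b),\\
 \delta x &= B^{1/2} v \;\Longrightarrow\;
 J(v) = \tfrac12\, v^{\mathrm T} v
   + \tfrac12\,(H B^{1/2} v - d)^{\mathrm T} R^{-1}(H B^{1/2} v - d).
\end{align}
```

The physical-space statistical analysis route instead solves (HBH^T + R)w = d in observation space and sets δx = BH^T w, which likewise avoids forming B^{-1}.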

12.
This paper addresses some fundamental methodological issues concerning the sensitivity analysis of chaotic geophysical systems. We show, using the Lorenz system as an example, that a naïve approach to variational ("adjoint") sensitivity analysis is of limited utility. Applied to trajectories which are long relative to the predictability time scales of the system, cumulative error growth means that adjoint results diverge exponentially from the "macroscopic climate sensitivity" (that is, the sensitivity of time‐averaged properties of the system to finite‐amplitude perturbations). This problem occurs even for time‐averaged quantities and given infinite computing resources. Alternatively, applied to very short trajectories, the adjoint provides an incorrect estimate of the sensitivity, even if averaged over large numbers of initial conditions, because a finite time scale is required for the model climate to respond fully to certain perturbations. In the Lorenz (1963) system, an intermediate time scale is found on which an ensemble of adjoint gradients can give a reasonably accurate (O(10%)) estimate of the macroscopic climate sensitivity. While this ensemble‐adjoint approach is unlikely to be reliable for more complex systems, it may provide useful guidance in identifying important parameter combinations to be explored further through direct finite‐amplitude perturbations.
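A small illustration of the behaviour described above, using finite differences as a stand-in for the adjoint (illustrative parameters only, not the paper's experiments): the sensitivity of the time-mean of z in the Lorenz (1963) system to the parameter r is estimated from a single trajectory over windows of increasing length, and becomes erratic once the window greatly exceeds the predictability time.

```python
import numpy as np

# Sketch (not the paper's experiments): single-trajectory finite-difference
# sensitivity of the time-mean of z to the Lorenz-63 parameter r, for windows of
# increasing length. Beyond the predictability time the estimate becomes erratic,
# mirroring the divergence of the adjoint gradients discussed above.
sigma, b = 10.0, 8.0 / 3.0
dt = 0.005

def lorenz63_mean_z(r, T, x0=(1.0, 1.0, 20.0)):
    """Time-mean of z over a window of length T (forward-Euler integration)."""
    x, y, z = x0
    n = int(T / dt)
    zsum = 0.0
    for _ in range(n):
        dx = sigma * (y - x)
        dy = r * x - y - x * z
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        zsum += z
    return zsum / n

r0, dr = 28.0, 1e-3
for T in (0.5, 2.0, 10.0, 50.0):
    grad = (lorenz63_mean_z(r0 + dr, T) - lorenz63_mean_z(r0, T)) / dr
    print(f"window {T:5.1f}: d<z>/dr ~ {grad: .2f}")
```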

13.
The ensemble Kalman filter (EnKF) is an ocean data assimilation scheme widely used both in China and abroad. It uses the ensemble of member states to represent the model's background error covariance and, combined with the observation error covariance, computes the Kalman gain matrix, thereby effectively incorporating observational information into the model initial field. Because seasonal and interannual predictions are strongly influenced by the initial conditions, data assimilation can improve a model's prediction performance. Building on the daily SST-nudging initialization scheme of the NUIST-CFS1.0 prediction system, this study uses the EnKF to assimilate full-field sea surface temperature (SST), in-situ temperature and salinity profiles (T-S profiles), and satellite-observed sea level anomalies (SLA) into the model initial field at the end of each month. It compares the initial fields with and without ocean data assimilation, the resulting differences in prediction performance at lead times of 1–24 months, and the impact on El Niño–Southern Oscillation (ENSO) prediction skill. The results show that ocean data assimilation effectively improves the initial field, with the improvement becoming more pronounced with depth. With assimilation, the average prediction skill for global SST and subsurface ocean temperature also improves, again more markedly with increasing depth. However, the model's ENSO prediction skill decreases after ocean data assimilation, possibly because model error causes the assimilated initial state, initially close to observations, to relax back toward a state consistent with the model dynamics, which amplifies the climatological mean-state drift of a westward-displaced equatorial Pacific cold tongue and a warm bias in the central–eastern equatorial Pacific.
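For readers unfamiliar with the scheme, the sketch below shows a generic perturbed-observation EnKF analysis step in textbook form (random placeholder ensemble, observation operator and error covariance; not the NUIST-CFS1.0 code): the ensemble supplies the background error covariance that, combined with R, gives the Kalman gain used to pull each member toward the perturbed observations.

```python
import numpy as np

# Generic perturbed-observation EnKF analysis step (textbook form, not the
# NUIST-CFS1.0 code); no inflation or localization is applied in this sketch.
rng = np.random.default_rng(4)
n, m, n_ens = 100, 20, 30                      # state size, obs count, members

Xb = rng.normal(size=(n, n_ens))               # background ensemble (columns)
H = np.zeros((m, n))
H[np.arange(m), np.arange(m) * (n // m)] = 1.0 # observe every 5th gridpoint
R = 0.25 * np.eye(m)                           # observation error covariance
y = rng.normal(size=m)                         # observations

Xb_mean = Xb.mean(axis=1, keepdims=True)
Xp = Xb - Xb_mean                              # ensemble perturbations
Pb_Ht = Xp @ (H @ Xp).T / (n_ens - 1)          # Pb H^T without forming Pb
S = H @ Pb_Ht + R                              # H Pb H^T + R
K = Pb_Ht @ np.linalg.inv(S)                   # Kalman gain

# Perturb the observations for each member, then update the whole ensemble.
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=n_ens).T
Xa = Xb + K @ (Y - H @ Xb)                     # analysis ensemble
```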

14.
15.
Typhoon Yagi (摩羯), the 14th typhoon of 2018, brought widespread heavy rain and strong winds to Shandong. Based on the WRF (Weather Research and Forecasting) model and its Hybrid-3DVAR assimilation and forecasting system, numerical experiments were conducted on the impact of different ensemble covariance weights in Hybrid-3DVAR and of different assimilation time windows for aircraft meteorological data relay (AMDAR) observations on forecasts of Typhoon Yagi. The results show that increasing the ensemble covariance weight has a substantial impact on, and improves, the track forecast of Typhoon Yagi; when the flow-dependent error covariance is taken entirely from the ensemble, the forecast track is best and the precipitation forecast is closest to observations. AMDAR data assimilation also has a positive impact on the track and precipitation forecasts, but raising the ensemble covariance weight to 100% has a larger effect on the track forecast. Different assimilation time windows change the amount of AMDAR data assimilated and thereby affect the fine-scale precipitation forecast; the 45-min assimilation window gives the smallest forecast errors in the meteorological elements and the fine-scale features of the typhoon's heavy precipitation closest to observations. Different assimilation time windows mainly affect the spatial distribution of the forecast precipitation and have relatively little effect on the track forecast.

16.
Part 1's localization method, Ensemble COrrelations Raised to A Power (ECO-RAP), is incorporated into a Local Ensemble Transform Kalman Filter (LETKF). Because brute force incorporation would be too expensive, we demonstrate a factorization property for Part 1's Covariances Adaptively Localized with ECO-rap (CALECO) forecast error covariance matrix that, together with other simplifications, reduces the cost. The property inexpensively provides a large CALECO ensemble whose covariance is the CALECO matrix. Each member of the CALECO ensemble is an element-wise product between one raw ensemble member and one column of the square root of the ECO-RAP matrix. The LETKF is applied to the CALECO ensemble rather than the raw ensemble. The approach enables the update of large numbers of variables within each observation volume at little additional computational cost. Under plausible assumptions, this makes the CALECO and standard LETKF costs similar. The CALECO LETKF does not require artificial observation error inflation or vertically confined observation volumes, both of which confound the assimilation of non-local observations such as satellite observations. Using a 27 member ensemble from a global Numerical Weather Prediction (NWP) system, we depict four-dimensional (4-D) flow-adaptive error covariance localization and test the ability of the CALECO LETKF to reduce analysis error.
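A hedged sketch of the factorization property described above, with a Gaussian correlation matrix standing in for the ECO-RAP matrix: each member of the enlarged ensemble is the element-wise product of a raw ensemble perturbation with one column of the square root of the localization matrix, and the covariance of the enlarged ensemble reproduces the Schur (element-wise) product of the raw ensemble covariance with the localization matrix.

```python
import numpy as np

# Hedged sketch of the modulated-ensemble construction (a Gaussian correlation
# matrix stands in for the ECO-RAP matrix). Each enlarged-ensemble member is the
# element-wise product of a raw perturbation with a column of the square root of
# the localization matrix C; the enlarged ensemble's covariance equals C * P_raw.
rng = np.random.default_rng(5)
n, n_ens = 40, 27

dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C = np.exp(-0.5 * (dist / 4.0) ** 2)             # localization matrix (stand-in)

# Symmetric square root of C via its eigendecomposition: W @ W.T == C.
vals, vecs = np.linalg.eigh(C)
W = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

Xp = rng.normal(size=(n, n_ens))
Xp -= Xp.mean(axis=1, keepdims=True)             # raw ensemble perturbations
P_raw = Xp @ Xp.T / (n_ens - 1)

# Enlarged ensemble: one member per (raw member, column of W) pair.
Z = np.stack([Xp[:, i] * W[:, j] for i in range(n_ens) for j in range(n)], axis=1)
P_mod = Z @ Z.T / (n_ens - 1)

print(np.allclose(P_mod, C * P_raw))             # True: Schur-product localization
```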

17.
A general perturbation–linearization scheme is proposed for the problem of data assimilation with an imperfect and nonlinear model, allowing for the application of the weak constraint representer method. The scheme is shown in discrete formalism for a generic model. An application example is given with computer‐generated data in the case of the Burgers equation. Discussion in reference to the assimilation example concerns: the rôle of the model error, seen as a forcing term in the dynamics; the rôle of representers as a posteriori error covariances; a comparison among different choices for a priori dynamic error variance and strong constraint assimilation. Weak and strong constraint methods are also compared in a forecasting experiment.

18.
I present the derivation of the Preconditioned Optimizing Utility for Large-dimensional analyses (POpULar), which is developed for adopting a non-diagonal background error covariance matrix in nonlinear variational analyses (i.e., analyses employing a non-quadratic cost function). POpULar is based on the idea of a linear preconditioned conjugate gradient method widely adopted in ocean data assimilation systems. POpULar uses the background error covariance matrix as a preconditioner without any decomposition of the matrix. This preconditioning accelerates the convergence. Moreover, the inverse of the matrix is not required. POpULar therefore allows us easily to handle the correlations among deviations of control variables (i.e., the variables which will be analyzed) from their background in nonlinear problems. In order to demonstrate the usefulness of POpULar, we illustrate two effects that have often been neglected in previous studies of ocean data assimilation. One is the effect of correlations among the deviations of control variables in an adjoint analysis. The other is the nonlinear effect of the sea surface dynamic height calculation required when sea surface height observations are employed in a three-dimensional ocean analysis. The results show that neither effect is small enough to be neglected.
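A hedged sketch of the kind of B-preconditioned conjugate-gradient iteration alluded to above, restricted to the quadratic (linear) case; POpULar itself handles non-quadratic cost functions, and the function below is an illustrative stand-in, not its implementation. The point is the companion recurrence s = B⁻¹p, which lets the iteration use B as a preconditioner while never applying B⁻¹ or decomposing B.

```python
import numpy as np

# Hedged sketch of a B-preconditioned conjugate gradient for the quadratic cost
# J(x) = 0.5 x^T B^-1 x + 0.5 (Hx - d)^T R^-1 (Hx - d), i.e. the linear system
# (B^-1 + H^T R^-1 H) x = H^T R^-1 d, with B as preconditioner. A companion
# recurrence s = B^-1 p is carried so that B^-1 is never applied explicitly
# (the idea behind POpULar; not its actual nonlinear implementation).
def b_preconditioned_cg(B, H, Rinv, d, n_iter=200, tol=1e-12):
    n = B.shape[0]
    x = np.zeros(n)
    r = H.T @ (Rinv @ d)                  # residual b - A x for x = 0
    z = B @ r                             # preconditioned residual
    p, s = z.copy(), r.copy()             # search direction and s = B^-1 p
    rz = r @ z
    for _ in range(n_iter):
        Ap = s + H.T @ (Rinv @ (H @ p))   # A p without B^-1, since B^-1 p = s
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = B @ r
        rz_new = r @ z
        if rz_new < tol:
            break
        beta = rz_new / rz
        p = z + beta * p
        s = r + beta * s                  # keeps s = B^-1 p exactly
        rz = rz_new
    return x

# Tiny consistency check against a direct solve.
rng = np.random.default_rng(6)
n, m = 30, 10
L = rng.normal(size=(n, n)); B = L @ L.T + n * np.eye(n)
H = rng.normal(size=(m, n)); Rinv = np.eye(m); d = rng.normal(size=m)
x_cg = b_preconditioned_cg(B, H, Rinv, d)
x_direct = np.linalg.solve(np.linalg.inv(B) + H.T @ Rinv @ H, H.T @ Rinv @ d)
print(np.allclose(x_cg, x_direct, atol=1e-6))
```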

19.
This paper compares contending advanced data assimilation algorithms using the same dynamical model and measurements. Assimilation experiments use the ensemble Kalman filter (EnKF), the ensemble Kalman smoother (EnKS) and the representer method involving a nonlinear model and synthetic measurements of a mesoscale eddy. Twin model experiments provide the “truth” and assimilated state. The difference between truth and assimilation state is a mispositioning of an eddy in the initial state affected by a temporal shift. The systems are constructed to represent the dynamics, error covariances and data density as similarly as possible, though because of the differing assumptions in the system derivations subtle differences do occur. The results reflect some of these differences in the tangent linear assumption made in the representer adjoint and the temporal covariance of the EnKF, which does not correct initial condition errors. These differences are assessed through the accuracy of each method as a function of measurement density. Results indicate that these methods are comparably accurate for sufficiently dense measurement networks; and each is able to correct the position of a purposefully misplaced mesoscale eddy. As measurement density is decreased, the EnKS and the representer method retain accuracy longer than the EnKF. While the representer method is more accurate than the sequential methods within the time period covered by the observations (particularly during the first part of the assimilation time), the representer method is less accurate during later times and during the forecast time period for sparse networks as the tangent linear assumption becomes less accurate. Furthermore, the representer method proves to be significantly more costly (2–4 times) than the EnKS and EnKF even with only a few outer iterations of the iterated indirect representer method.

20.
A low‐order climate model is studied which combines the Lorenz‐84 model for the atmosphere on a fast time scale and a box model for the ocean on a slow time scale. In this climate model, the ocean is forced strongly by the atmosphere. The feedback to the atmosphere is weak. The behaviour of the model is studied as a function of the feedback parameters. We find regions in parameter space with dominant atmospheric dynamics, i.e., a passive ocean, as well as regions with an active ocean, where the oceanic feedback is essential for the qualitative dynamics. The ocean is passive if the coupled system is fully chaotic. This is illustrated by comparing the Kaplan–Yorke dimension and the correlation dimension of the chaotic attractor to the values found in the uncoupled Lorenz‐84 model. The active ocean behaviour occurs at parameter values between fully chaotic and stable periodic motion. Here, intermittency is observed. By means of bifurcation analysis of periodic orbits, the intermittent behaviour, and the rôle played by the ocean model, is clarified. A comparison of power spectra in the active ocean regime and the passive ocean regime clearly shows an increase of energy in the low frequency modes of the atmospheric variables. The results are discussed in terms of itinerancy and quasi‐stationary states observed in realistic atmosphere and climate models.
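As a purely illustrative toy (the paper's box-model equations and feedback parameters are not reproduced here), the sketch below integrates the standard Lorenz‐84 atmosphere with one additional slow "ocean" temperature that is forced by the atmosphere and feeds back weakly through the thermal forcing F; epsilon and tau are assumed values.

```python
import numpy as np

# Toy illustration only: the standard Lorenz-84 atmosphere with an added slow
# "ocean" temperature T that is forced by the atmosphere and feeds back weakly
# through the thermal forcing F. The paper's box-model equations and feedback
# parameters are not reproduced here; epsilon and tau are illustrative.
a, b, F0, G = 0.25, 4.0, 8.0, 1.0
epsilon, tau = 0.1, 100.0            # weak feedback strength, slow ocean time scale
dt, n_steps = 0.01, 100_000

x, y, z, T = 1.0, 1.0, 1.0, 0.0
traj = np.empty((n_steps, 4))
for k in range(n_steps):
    F = F0 + epsilon * T                  # weak oceanic feedback on the forcing
    dx = -y * y - z * z - a * x + a * F
    dy = x * y - b * x * z - y + G
    dz = b * x * y + x * z - z
    dT = (x - T) / tau                    # ocean forced by the fast atmosphere
    x, y, z, T = x + dt * dx, y + dt * dy, z + dt * dz, T + dt * dT
    traj[k] = x, y, z, T
```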
