Similar Documents
20 similar documents found.
1.
A data assimilation technique (the adjoint method) is applied to study the similarities and differences between the Ekman (linear) and quadratic (nonlinear) bottom friction parameterizations in a two-dimensional tidal model. Two methods are used to treat the bottom friction coefficient (BFC): the first assumes a constant BFC over the entire computational domain, while the second uses spatially varying BFCs. The adjoint expressions for the linear and nonlinear parameterizations and the optimization formulae for the two BFC methods are derived with the standard Lagrangian multiplier method. Identical twin experiments, which assimilate model-generated 'observations', are performed to test and validate the inversion ability of the methodology. Four experiments, employing the linear parameterization, the nonlinear parameterization, the constant BFC, and the spatially varying BFC, are then carried out to simulate the M2 tide in the Bohai Sea and the Yellow Sea by assimilating TOPEX/Poseidon altimetry and tide gauge data. After assimilation, the misfit between modelled and observed data decreases significantly in all four experiments. The results indicate that the nonlinear quadratic parameterization is more accurate than the linear Ekman parameterization when the traditional constant BFC is used. When spatially varying BFCs are used, however, the differences between the Ekman and quadratic approaches diminish; the reason is analyzed in terms of the dissipation rate caused by bottom friction. Linear bottom friction parameterizations are commonly used in global tidal models; this study indicates that, combined with spatially varying parameters and the adjoint method, they are also applicable in regional ocean tidal models.
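As a minimal illustration of the identical twin experiment idea, the sketch below estimates a constant quadratic bottom friction coefficient in a toy 1-D momentum equation. The model, parameter values, and the line search (standing in for the adjoint-derived gradient optimization of the paper) are all hypothetical.

```python
import math

def simulate(k, n_steps=200, dt=0.1):
    """Toy 1-D momentum equation du/dt = F(t) - k*u*|u| (quadratic bottom
    friction), integrated with forward Euler."""
    u, traj = 0.0, []
    for i in range(n_steps):
        u += dt * (math.sin(0.5 * i * dt) - k * u * abs(u))
        traj.append(u)
    return traj

def misfit(k, obs):
    """Sum-of-squares distance between the model run and the 'observations'."""
    return sum((m - o) ** 2 for m, o in zip(simulate(k), obs))

# Identical twin experiment: generate observations with a known true BFC,
# then recover it by minimizing the misfit.
k_true = 0.8
obs = simulate(k_true)

lo, hi = 0.0, 2.0
for _ in range(60):                       # ternary search on the misfit
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if misfit(m1, obs) < misfit(m2, obs):
        hi = m2
    else:
        lo = m1
k_est = (lo + hi) / 2
print(round(k_est, 3))  # recovers k_true = 0.8
```

With perfect (model-generated) observations the inversion recovers the true coefficient essentially exactly, which is precisely what a twin experiment is designed to verify.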

2.
A new methodology for using buoy measurements in sea wave data assimilation
One of the main drawbacks of modern sea wave data assimilation models is the limited temporal and spatial improvement obtained in the final forecast products. This is mainly due to deviations, originating either in the atmospheric input or in the dynamics of the wave model, that result in systematic errors in the forecast fields of numerical wave models when no observations are available for assimilation. A potential solution, based on a combination of advanced statistical techniques, data assimilation systems, and wave models, is presented in this work. More precisely, Kalman filtering algorithms are implemented in the wave model WAM, and the results are assimilated by an optimum interpolation scheme in order to extend the beneficial influence of the latter in time and space. The case studied covers a 3-month period in an open sea area near the southwest coast of the USA (Pacific Ocean).
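The Kalman-filter bias-correction idea can be sketched on synthetic data. The scalar random-walk filter, noise levels, and series below are illustrative assumptions, not the WAM implementation described in the abstract.

```python
import random

def kalman_bias_correction(forecasts, observations, q=1e-3, r=0.25):
    """Sequentially estimate the systematic forecast bias with a scalar
    Kalman filter (random-walk state) and return corrected forecasts."""
    bias, p = 0.0, 1.0                   # initial bias estimate and variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)       # correct using only past information
        p += q                           # prediction step (random walk)
        gain = p / (p + r)               # Kalman gain
        bias += gain * ((f - o) - bias)  # update with the newest innovation
        p *= (1 - gain)
    return corrected

random.seed(1)
truth = [2.0 + 0.5 * (i % 10) / 10 for i in range(500)]
# Model output = truth + constant systematic bias + noise
forecasts = [t + 0.6 + random.gauss(0, 0.2) for t in truth]
corrected = kalman_bias_correction(forecasts, truth)

raw_err = sum(f - t for f, t in zip(forecasts, truth)) / len(truth)
cor_err = sum(c - t for c, t in zip(corrected[100:], truth[100:])) / 400
print(round(raw_err, 2), round(cor_err, 2))  # bias largely removed
```

After a short spin-up the filter locks onto the systematic error, so the corrected series is nearly unbiased even though the raw forecasts carry a constant offset.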

3.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions that fit the data within measurement errors, and hence can be treated as ill-posed. Interpolating the data therefore always relies on some a priori assumptions about the tides, which provide a sampling rule or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis-function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those originally formulated in stochastic terms (objective analysis and general inversion), may be considered applications of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is also discussed.
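The regularization viewpoint can be illustrated with a toy 1-D analogue: recovering a gridded field from a few point observations is ill-posed (the normal matrix is singular) until an a priori smoothness assumption, here a Tikhonov-type second-difference penalty, is added. The grid, penalty weight, and values are hypothetical, not any of the specific schemes reviewed.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (A and b are modified)."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= fac * A[col][c]
            b[r] -= fac * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

n = 20
truth = [math.sin(2 * math.pi * j / (n - 1)) for j in range(n)]
obs_idx = [0, 5, 10, 15, 19]            # only five observed grid points
y = [truth[j] for j in obs_idx]

# Normal equations of min ||Hx - y||^2 + lam * ||D2 x||^2, where H samples x
# at obs_idx and D2 is the second-difference (smoothness) operator.
lam = 1e-3
A = [[0.0] * n for _ in range(n)]
b = [0.0] * n
for j, yj in zip(obs_idx, y):
    A[j][j] += 1.0
    b[j] += yj
for i in range(1, n - 1):               # add lam * D2^T D2
    row = [0.0] * n
    row[i - 1], row[i], row[i + 1] = 1.0, -2.0, 1.0
    for p in range(n):
        for q in range(n):
            A[p][q] += lam * row[p] * row[q]

x = solve(A, b)
resid = max(abs(x[j] - truth[j]) for j in obs_idx)
print(round(resid, 4))  # the regularized solution honours the data closely
```

With lam = 0 the system has infinitely many solutions (unobserved points are unconstrained); the penalty selects the unique smooth one, which is the essence of regularizing an ill-posed interpolation.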

4.
5.
Hassan AE, Ground Water, 2004, 42(3): 347-362
Ground water model validation is one of the most challenging issues facing modelers and hydrogeologists. Increased complexity in ground water models has created a gap between model predictions and the ability to validate, or build confidence in, those predictions. Specific procedures and tests that can be easily adapted and applied to determine the validity of site-specific ground water models do not exist. This is true for both deterministic and stochastic models, with stochastic models posing the more difficult validation problem. The objective of this paper is to propose a general validation approach that addresses important issues recognized in previous validation studies, conferences, and symposia. The proposed method links the processes of building, calibrating, evaluating, and validating models in an iterative loop. The approach focuses on using collected validation data to reduce uncertainty in the model and narrow the range of possible outcomes. The method is designed for stochastic numerical models using Monte Carlo simulation, but it can easily be adapted for deterministic models. It relies on the premise that absolute validity is not theoretically possible, nor is it a regulatory requirement; rather, it highlights the importance of testing various aspects of the model and of using diverse statistical tools for rigorous checking and confidence building in the model and its predictions. It is this confidence that will encourage regulators and the public to accept decisions based on the model predictions. The validation approach is applied to a model, described in this paper, of an underground nuclear test site in rural Nevada.

6.
7.
The Dutch continental shelf model (DCSM) is a shallow-sea model of the entire continental shelf, used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are needed to support decisions on the timely closure of the movable storm surge barriers that protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for the tidal calibration of the DCSM. The method uses objective function evaluations to obtain gradient approximations; unlike the central-difference method, its gradient approximation requires only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed, and the effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce them. In doing so, comparison is made with a traditional steepest-descent method and with a newly developed calibration method based on proper orthogonal decomposition. The main findings are: (1) the SPSA method gives results comparable to the steepest-descent method at little computational cost; (2) the SPSA method can estimate a large number of parameters at little computational cost.
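A generic SPSA implementation (using the standard gain sequences) shows the key property named above: two objective evaluations per iteration regardless of how many parameters are calibrated. The quadratic "misfit" below merely stands in for the DCSM cost function; all values are hypothetical.

```python
import random

def spsa_minimize(f, theta, n_iter=500, a=0.1, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation: approximates the
    gradient with only TWO objective evaluations per iteration, regardless
    of the dimension of theta."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                       # standard gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]   # Bernoulli +-1
        f_plus = f([t + ck * d for t, d in zip(theta, delta)])
        f_minus = f([t - ck * d for t, d in zip(theta, delta)])
        ghat = [(f_plus - f_minus) / (2 * ck * d) for d in delta]
        theta = [t - ak * g for t, g in zip(theta, ghat)]
    return theta

# Toy calibration: recover a 10-parameter "bathymetry" vector minimizing a
# quadratic misfit (a hypothetical stand-in for the DCSM cost function).
target = [0.1 * i for i in range(10)]
f = lambda th: sum((t - g) ** 2 for t, g in zip(th, target))
est = spsa_minimize(f, [0.0] * 10)
print(round(f(est), 4))  # near zero
```

A central-difference scheme would need 20 evaluations per iteration for these 10 parameters; SPSA uses 2, which is why it scales to large parameter vectors.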

8.
It is a common shortcoming of today's wave assimilation platforms that their ability to affect the final wave prediction is limited in time, especially in long-period forecasting systems. This is mainly because, after the assimilation window "closes", i.e., after the time when the available observations are assimilated into the wave model, the model continues to run without any external information. Therefore, if a systematic divergence from the observations occurs, only a limited portion of the forecast period is improved. A way of dealing with this drawback is proposed in this study: a combination of two different statistical tools, Kolmogorov-Zurbenko and Kalman filters, is employed to eliminate any systematic error of (a first run of) the wave model results. The obtained forecasts are then used as artificial observations that can be assimilated into a follow-up model simulation inside the forecast period. The method was successfully applied to an open sea area (Pacific Ocean) for significant wave height forecasts using the wave model WAM and six buoys as observation stations. The results were encouraging, extending the assimilation impact to the entire forecast period and significantly reducing the forecast bias.
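The Kolmogorov-Zurbenko filter itself is simply an iterated moving average. The sketch below (synthetic series, illustrative window and iteration parameters) shows it isolating a slow "systematic" component from short-period noise, which is the role it plays before the Kalman step.

```python
import math, random

def moving_average(x, m):
    """Centered moving average with window m (odd); edges shrink the window."""
    h = m // 2
    out = []
    for i in range(len(x)):
        win = x[max(0, i - h): i + h + 1]
        out.append(sum(win) / len(win))
    return out

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter: k iterations of an m-point moving average."""
    for _ in range(k):
        x = moving_average(x, m)
    return x

random.seed(0)
# Slow "systematic" signal buried in short-period noise
signal = [math.sin(2 * math.pi * i / 200) for i in range(400)]
noisy = [s + random.gauss(0, 0.5) for s in signal]
smooth = kz_filter(noisy, m=21, k=3)

rmse_raw = math.sqrt(sum((a - s) ** 2 for a, s in zip(noisy, signal)) / 400)
rmse_kz = math.sqrt(sum((a - s) ** 2 for a, s in zip(smooth, signal)) / 400)
print(rmse_kz < rmse_raw)  # True: the KZ filter isolates the slow component
```

Iterating the moving average sharpens the low-pass cutoff, so slow systematic behaviour survives while the short-period noise is strongly damped.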

9.

Basic hidden Markov models are very useful in stochastic environmental research, but their ability to accommodate sufficient dependence between observations is somewhat limited. They can, however, be modified in several ways to form a rich class of flexible models useful in many environmental applications. We consider a class of hidden Markov models that incorporate additional dependence among observations to model regional average rainfall time series. The focus of the study is on models that introduce additional dependence between the state level and the observation level of the process, and on models that incorporate dependence at the observation level. Construction of the likelihood function of the models is described, along with the usual second-order properties of the process. The maximum likelihood method is used to estimate the parameters of the models. Application of the proposed class of models is illustrated with an analysis of daily regional average rainfall time series from southeast and southwest England for the winter seasons from 1931 to 2010. Models incorporating additional dependence between the state level and the observation level captured the distributional properties of daily rainfall well, while models incorporating dependence at the observation level reproduced the autocorrelation structure. Changes in some of the regional rainfall properties over the period are also studied.

Editor: D. Koutsoyiannis
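For a basic HMM of the kind the paper extends, the likelihood is computed with the forward algorithm. The two-state rainfall-occurrence toy below (all probabilities hypothetical) sketches the scaled version that avoids numerical underflow on long series.

```python
import math

def forward_log_likelihood(obs, pi, A, emit):
    """Scaled forward algorithm: log p(obs) for a basic HMM, where
    emit(state, y) returns the emission probability of observation y."""
    n = len(pi)
    alpha = [pi[i] * emit(i, obs[0]) for i in range(n)]
    s = sum(alpha)
    logp, alpha = math.log(s), [a / s for a in alpha]
    for y in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * emit(j, y)
                 for j in range(n)]
        s = sum(alpha)
        logp, alpha = logp + math.log(s), [a / s for a in alpha]  # rescale
    return logp

# Toy 2-state model: state 0 = "dry spell", state 1 = "wet spell"
pi = [0.5, 0.5]
A = [[0.9, 0.1],
     [0.3, 0.7]]                                         # persistent states
emit = lambda st, y: [[0.95, 0.05], [0.4, 0.6]][st][y]   # y: 0 dry, 1 rain
seq = [0, 0, 0, 1, 1, 0, 1, 1, 1, 0]
ll = forward_log_likelihood(seq, pi, A, emit)
print(round(ll, 3))
```

Maximizing this log-likelihood over (pi, A, emission parameters), as the abstract describes, is what fits the model to a rainfall record; the models in the paper add further dependence on top of this basic structure.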

10.
This paper summarizes the development of a 4D variational assimilation scheme for nearshore wave models. A partition method is applied to adjust both wave boundary conditions and wind fields. Nonstationary conditions are assimilated by prescribing correlations of the model inputs in time. The scheme is implemented in the SWAN model. Twin experiments covering both stationary and nonstationary wave conditions are carried out to assess the adequacy of the proposed scheme. The stationary experiments consider wind sea, swell, and mixed sea separately; cost functions decline to less than 5%, and RMS spectrum errors are reduced to less than 10%. The nonstationary experiment covers a one-day simulation under mixed wave conditions with assimilation windows of 3 h; RMS spectrum errors are reduced to less than 10% after 30 iterations in most assimilation windows. The results show that for spatially uniform model inputs, model accuracy is improved notably by the assimilation scheme throughout the computational domain. Under wave conditions in which the observed spectra can be well classified, the assimilation scheme improves model results significantly.

11.
To model currents in a fjord, accurate tidal forcing is of extreme importance. Because of complex topography with narrow and shallow straits, the tides in the innermost parts of a fjord are both shifted in phase and altered in amplitude compared with the tides in the open water outside the fjord. Commonly, coastal tide information extracted from global or regional models is used on the boundary of the fjord model. Since tides vary over short distances in shallower waters close to the coast, global and regional tidal forcings are usually too coarse to achieve sufficiently accurate tides in fjords. We present a straightforward method to remedy this problem by simply adjusting the tides to fit the observed tides at the entrance of the fjord. To evaluate the method, we present results from the Oslofjord, Norway. A model for the fjord is first run using raw tidal forcing on its open boundary. By comparing modelled and observed time series of water level at a tide gauge station close to the open boundary of the model, an amplitude factor and a phase shift are computed and applied to produce adjusted tidal forcing at the open boundary. The fjord model is then rerun with the adjusted forcing. The results of the two runs are compared with independent observations inside the fjord in terms of the amplitudes and phases of the various tidal components, the total tidal water level, and the depth-integrated tidal currents. The results show improvements in the modelled tides in both the outer and, more importantly, the inner parts of the fjord.
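The amplitude factor and phase shift can be obtained by least-squares fitting a single harmonic constituent to the modelled and observed water-level series. Below is a sketch with synthetic M2-like records; the amplitudes, phases, and record length are invented for illustration.

```python
import math

def harmonic_fit(series, dt, omega):
    """Least-squares amplitude and phase of one tidal constituent, assuming
    the record spans a whole number of periods."""
    n = len(series)
    A = 2 * sum(v * math.cos(omega * i * dt) for i, v in enumerate(series)) / n
    B = 2 * sum(v * math.sin(omega * i * dt) for i, v in enumerate(series)) / n
    return math.hypot(A, B), math.atan2(B, A)

# Hypothetical hourly water levels at a gauge near the open boundary: the
# model run underestimates the M2 amplitude and leads the observed phase.
omega = 2 * math.pi / 12.42                      # M2 frequency, rad/hour
t = [i * 1.0 for i in range(1242)]               # exactly 100 M2 periods
modelled = [0.8 * math.cos(omega * ti - 0.3) for ti in t]
observed = [1.0 * math.cos(omega * ti - 0.5) for ti in t]

amp_m, ph_m = harmonic_fit(modelled, 1.0, omega)
amp_o, ph_o = harmonic_fit(observed, 1.0, omega)
factor, shift = amp_o / amp_m, ph_o - ph_m
# Adjusted boundary forcing: eta(t) = factor * amp * cos(omega*t - ph - shift)
print(round(factor, 3), round(shift, 3))  # 1.25 0.2
```

Scaling the raw boundary amplitude by `factor` and delaying it by `shift` reproduces the observed tide at the entrance, which is the adjustment applied before the rerun.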

12.
The possibility of using global positioning system (GPS) data to compute corrections for sea level variations in the results of a gravity survey, and thereby to improve the accuracy of marine gravity measurements, is discussed.

13.
A precise gravity tide correction model for the Chinese mainland
Using theoretical and experimental solid Earth tide models, and fully accounting for the loading effects of the global ocean tides and the tides of China's coastal seas, a precise gravity tide correction model for the Chinese mainland is established. The results show that adopting different solid tide models produces differences in the gravity tide results with relative amplitude variations of less than 0.06%. In coastal areas the ocean tide loading effect amounts to about 4% of the total tide, and in central regions to about 1%; of this, the Chinese coastal-sea tide models contribute about 10% of the total ocean loading, and the loading of interpolated or extrapolated tidal constituents accounts for about 3% of the ocean loading. Comparison with observed gravity data shows that the accuracy of the gravity tide correction model presented here is far better than 0.5×10⁻⁸ m·s⁻², demonstrating its practicality; it can provide an effective reference and a precise correction model for high-precision gravity measurements on the Chinese mainland.

14.
A multigrid solver for 3D electromagnetic diffusion
The performance of a multigrid solver for the time-harmonic electromagnetic problem in geophysical settings is investigated. The frequencies are sufficiently small for waves travelling at the speed of light to be negligible, so that a diffusive problem remains. The governing equations are discretized by the finite-integration technique, which can be viewed as a finite-volume generalization of Yee's staggered-grid scheme. The resulting set of discrete equations is solved by a multigrid method, whose convergence rate decreases when the grid is stretched. The slower convergence can be compensated by using BiCGStab2, a conjugate-gradient-type method for non-symmetric problems; in that case, the multigrid solver acts as a preconditioner. However, whereas the multigrid method provides excellent convergence with constant grid spacings, it performs less than satisfactorily when substantial grid stretching is used.
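The multigrid idea can be sketched on the simplest diffusive analogue, a 1-D Poisson problem solved with a V-cycle and weighted-Jacobi smoothing. This toy is not the finite-integration EM solver of the paper; grid size and smoothing counts are illustrative.

```python
import math

def smooth(u, f, h, iters=3):
    """Weighted Jacobi relaxation for -u'' = f with Dirichlet ends."""
    w = 2.0 / 3.0
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction (2^k+1 points -> 2^(k-1)+1 points)."""
    return [r[0]] + [0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
                     for i in range(1, (len(r) - 1) // 2)] + [r[-1]]

def prolong(e, n_fine):
    """Linear interpolation back to the fine grid."""
    out = [0.0] * n_fine
    for i, v in enumerate(e):
        out[2 * i] = v
    for i in range(1, n_fine - 1, 2):
        out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

def v_cycle(u, f, h):
    if len(u) == 3:                            # coarsest level: direct solve
        return [u[0], 0.5 * (u[0] + u[2] + h * h * f[1]), u[2]]
    u = smooth(u, f, h)                        # pre-smoothing
    rc = restrict(residual(u, f, h))
    e = prolong(v_cycle([0.0] * len(rc), rc, 2 * h), len(u))
    u = [ui + ei for ui, ei in zip(u, e)]      # coarse-grid correction
    return smooth(u, f, h)                     # post-smoothing

N = 64
h = 1.0 / N
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(N + 1)]
u = [0.0] * (N + 1)                            # exact solution: sin(pi*x)
for _ in range(10):
    u = v_cycle(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(N + 1))
print(err < 1e-3)  # True: discretization-level accuracy after ten V-cycles
```

On a uniform grid each V-cycle reduces the algebraic error by a roughly grid-independent factor, which is the behaviour the paper reports for constant spacings and loses under strong grid stretching.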

15.
The freshwater budget of a tidal-flat area is evaluated from long-term hydrographic time series recorded at an observation pole in a tidal channel of the Hörnum Basin (Germany). For each tidal cycle, the freshwater budget is calculated from the total imported and exported water volumes and the corresponding mean densities. The variability of the budget on the tidal scale is characterised by a period of twice the tidal period, with a minimum when the tidal flats fall dry around daylight hours during the preceding low tide and a maximum when low tide occurs at night; enhanced daytime evaporation on the flats is identified as the driving process. Averaged over each year (winter observations are missing), the freshwater budget is negative for 2002-2005 and positive only for 2006. The interannual mean is negative and amounts to a freshwater loss of about 2 mm day⁻¹, although the large-scale climate of the region is humid. The results demonstrate that bulk parametrisations of the latent and sensible heat fluxes between ocean and atmosphere must not be applied to the tidelands.
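A back-of-the-envelope version of the tidal-cycle budget treats each imported or exported water mass as a two-end-member mixture of freshwater and seawater, so its density fixes its freshwater fraction. The densities, volumes, and end-members below are invented, and the paper's actual bookkeeping may differ.

```python
def freshwater_loss(vol_in, rho_in, vol_out, rho_out,
                    rho_fresh=1000.0, rho_sea=1025.0):
    """Freshwater lost inside the basin over one tidal cycle (m^3), inferred
    from the imported/exported volumes and mean densities measured in the
    channel; a simple two-end-member mixing sketch, not the paper's method."""
    frac = lambda rho: (rho_sea - rho) / (rho_sea - rho_fresh)  # fresh fraction
    return vol_in * frac(rho_in) - vol_out * frac(rho_out)

# Hypothetical cycle: the ebb carries slightly denser (saltier) water than
# the flood, consistent with evaporation on the flats concentrating salt.
loss = freshwater_loss(vol_in=5.0e8, rho_in=1024.0,
                       vol_out=5.0e8, rho_out=1024.2)
print(loss > 0)  # True: denser ebb water implies an evaporative freshwater loss
```

Dividing such a per-cycle volume by the basin's flat area and the cycle length converts it into the mm-per-day rate quoted in the abstract.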

16.
Two accurately calibrated superconducting gravimeters (SGs) provide high-quality tidal gravity records at three central European stations: C025 in Vienna and at the Conrad Observatory (Austria), and OSG050 in Pecný (Czech Republic). To correct the tidal gravity factors for ocean loading effects, we compared the load vectors from different ocean tide models (OTMs) computed with different software packages: OLFG/OLMP by the Free Ocean Tides Loading Provider (FLP), ICET, and NLOADF. Even with the recent OTMs mass conservation is a critical issue, but the methods used to correct the mass imbalance agree within 0.1 nm/s². Although the different packages agree, FLP probably provides the more accurate computations, as this software has been optimised. For our final computation we used the mean load vector computed by FLP for 8 OTMs (CSR4, NAO99, GOT00, TPX07, FES04, DTU10, EOT11a and HAMTIDE). The corrected tidal factors of the three stations agree to better than 0.04% in amplitude and 0.02° in phase. Taking the weighted mean of the three stations, we obtain δc = 1.1535 ± 0.0001 for O1, δc = 1.1352 ± 0.0003 for K1, and δc = 1.1621 ± 0.0003 for M2. These values confirm previous ones obtained with 16 European stations. The theoretical body-tide model DDW99/NH provides the best agreement for M2 (1.1620), and MATH01/NH for O1 (1.1540) and K1 (1.1350); the largest discrepancy is for O1 (0.05%). The corrected phase αc does not differ significantly from zero except for K1 and S2. The calibrations of the two SGs are consistent within 0.025% and agree with the Strasbourg results within 0.05%.
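The ocean-load correction amounts to phasor arithmetic: subtract the modelled load vector from the observed tidal gravity vector, then normalize the corrected amplitude by the rigid-Earth (theoretical) amplitude to get δc. The numbers below are illustrative, not the station values.

```python
import cmath, math

def corrected_tidal_factor(A_obs, alpha_obs_deg, L, lam_deg, A_rigid):
    """Subtract the ocean-load vector from the observed tidal gravity vector
    (complex phasors) and return the corrected factor delta_c and corrected
    phase alpha_c in degrees."""
    obs = cmath.rect(A_obs, math.radians(alpha_obs_deg))
    load = cmath.rect(L, math.radians(lam_deg))
    corr = obs - load
    return abs(corr) / A_rigid, math.degrees(cmath.phase(corr))

# Hypothetical M2 values (amplitudes in nm/s^2)
delta_c, alpha_c = corrected_tidal_factor(
    A_obs=370.0, alpha_obs_deg=1.2,   # observed M2 tidal gravity vector
    L=12.0, lam_deg=60.0,             # modelled ocean-load vector
    A_rigid=310.0)                    # rigid-Earth (theoretical) amplitude
print(round(delta_c, 4), round(alpha_c, 3))
```

Averaging the load vectors of several OTMs before the subtraction, as done in the study, simply means averaging the complex `load` phasors.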

17.
Studia Geophysica et Geodaetica - The authors have previously…

18.
19.
Regional flood analysis is formulated as a physical-modelling problem: inferring meaningful physical models for a set of observable uncertain quantities representing floods, given the observed data separately associated with them. It is argued that physical modelling suitable for representing causal relationships should use models comprising functional dependences of the observable uncertain quantities on other quantities that are unobservable. The regional physical-modelling problem then becomes the selection, from a proposed space of candidate models, of a probability distribution for the unobservable uncertain quantities together with a functional-dependence model connecting the observable to the unobservable quantities. Because of the need to represent observational data coherently and to express the available evidence precisely, the physical-modelling problem is formalized in a plausible-logic language within the logical probability framework. A logical inference procedure called the relative entropy method with fractile constraints (REF) is formulated within this framework and extended to solve the regional physical-modelling problem. Unlike current statistical methods, it allows the selection and validation of inferred models and can be applied regardless of the number of observational data. The complete solution of the problem using the relative entropy procedure is presented, and the method is applied to the regional modelling of annual maximum floods of a set of separate rivers in the Iberian Peninsula. For this application, the space of candidate models includes several types of two-parameter probability distributions for the unobservable uncertain quantities and the class of linear homogeneous functional-dependence models connecting the observable to the unobservable quantities.

20.
