Similar Articles (20 results found)
1.
This paper proposes a new ensemble-based algorithm that assimilates the vertical rain structure retrieved from microwave radiometer and radar measurements into a regional weather forecast model, employing a Bayesian framework. The goal of the study is to evaluate the capability of the proposed technique to improve track prediction of tropical cyclones that originate in the North Indian Ocean. For this purpose, the tropical cyclone Jal has been analyzed with the community mesoscale weather model, weather research and forecasting (WRF). Ensembles of prognostic variables, namely perturbation potential temperature (θ), perturbation geopotential (φ, m2/s2), zonal (U) and meridional (V) velocities, and water vapor mixing ratio (qv, kg/kg), are generated by the empirical orthogonal function technique. An overpass of the tropical rainfall measuring mission (TRMM) satellite occurred over the system at 0730 UTC on 06 Nov, and the observations from the radiometer and radar on board the satellite (1B11 data products) are inverted using a combined in-house radiometer-radar retrieval technique to estimate the vertical rain structure, namely the cloud liquid water, cloud ice, precipitation water and precipitation ice. Each ensemble member is input as a possible set of initial conditions to the WRF model at 00 UTC, which is marched in time to 0730 UTC on 06 Nov. The above-mentioned hydrometeors are then estimated from the cloud water and rain water mixing ratios for all ensemble members. The Bayesian filter framework is then used to determine the conditional probability of each candidate in the ensemble by comparing the hydrometeors retrieved from the measured TRMM radiances with the model-simulated hydrometeors. Based on the posterior probability density function, the initial conditions at 00 UTC on 06 Nov are then corrected using a linearly weighted average of the initial ensembles for all prognostic variables. With these weighted-average initial conditions, the WRF model has been run up to 06 UTC on 08 Nov and the predictions are compared with observations and the control run. An ensemble-independence study was conducted, on the basis of which an optimum size of 25 ensemble members was determined. With the optimum ensemble size, the sensitivity to the prognostic variables was also analyzed. The track simulated with the corrected set of initial conditions agrees better with observations than the control run. The algorithm can improve track prediction by up to 35 % for a 24 h forecast and up to 12 % for a 54 h forecast.
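A minimal sketch of the Bayesian weighting step described above, assuming Gaussian observation errors; the array names (h_model, h_retrieved, obs_var) are illustrative, not from the paper:

    import numpy as np

    def bayesian_ensemble_weights(h_model, h_retrieved, obs_var):
        # h_model:     (n_ens, n_obs) hydrometeor profiles simulated per member
        # h_retrieved: (n_obs,) profile retrieved from the TRMM radiometer/radar
        # obs_var:     observation-error variance (Gaussian errors assumed)
        misfit = h_model - h_retrieved
        log_like = -0.5 * np.sum(misfit**2 / obs_var, axis=1)
        log_like -= log_like.max()          # guard against underflow
        w = np.exp(log_like)
        return w / w.sum()                  # normalized posterior weights

    # Corrected initial condition = weighted average of the initial ensembles,
    # e.g. x0_corrected = w @ x0_ensemble for each prognostic variable.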

2.
Coarse-scale data assimilation (DA) with large ensemble size is proposed as a robust alternative to standard DA with localization for reservoir history matching problems. With coarse-scale DA, the unknown property function associated with each ensemble member is upscaled to a grid significantly coarser than the original reservoir simulator grid. The grid coarsening is automatic, ensemble-specific and non-uniform. The selection of regions where the grid can be coarsened without introducing too large modelling errors is performed using a second-generation wavelet transform allowing for seamless handling of non-dyadic grids and inactive grid cells. An inexpensive local-local upscaling is performed on each ensemble member. A DA algorithm that restarts from initial time is utilized, which avoids the need for downscaling. Since the DA computational cost roughly equals the number of ensemble members times the cost of a single forward simulation, coarse-scale DA allows for a significant increase in the number of ensemble members at the same computational cost as standard DA with localization. Fixing the computational cost for both approaches, the quality of coarse-scale DA is compared to that of standard DA with localization (using state-of-the-art localization techniques) on examples spanning a large degree of variability. It is found that coarse-scale DA is more robust with respect to variation in example type than each of the localization techniques considered with standard DA. Although the paper is concerned with two spatial dimensions, coarse-scale DA is easily extendible to three spatial dimensions, where it is expected that its advantage with respect to standard DA with localization will increase.
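The second-generation wavelet transform on non-dyadic grids is beyond a short example, but a first-generation Haar stand-in illustrates the selection criterion: coarsen only where the detail coefficients, which measure the information lost on upscaling, are small. All names and the tolerance are illustrative:

    import numpy as np

    def coarsenable_regions(field, tol):
        # field: (n,) property values on the fine grid (n even)
        # tol:   threshold on the Haar detail (proxy for modelling error)
        pairs = field.reshape(-1, 2)
        approx = pairs.mean(axis=1)                  # coarse-scale value
        detail = (pairs[:, 0] - pairs[:, 1]) / 2.0   # lost on coarsening
        return approx, np.abs(detail) < tol          # True -> safe to merge

    field = np.array([1.0, 1.02, 5.0, 1.0, 2.0, 2.01, 3.0, 3.02])
    coarse, mergeable = coarsenable_regions(field, tol=0.05)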

3.
Application of groundwater level data in data assimilation for unsaturated flow
To understand the data worth of groundwater level observations in unsaturated flow data assimilation, a one-dimensional saturated-unsaturated flow ensemble Kalman filter driven by transient groundwater level observations was developed, and synthetic numerical experiments were used to examine the potential value of these observations for estimating unsaturated hydraulic parameters and correcting soil moisture. The results show that, when the groundwater level is the only observation, jointly updating parameters and pressure heads corrects the head distribution in the soil profile better than updating heads alone; when a single hydraulic parameter per layer is unknown in a multi-layer profile, groundwater level observations provide useful information for parameter estimation; when multiple parameters in multiple layers are unknown, the complex relationship between the groundwater level and these parameters makes it difficult to estimate optimal (unique) parameter values from the observations; and the groundwater level can serve as auxiliary information, used jointly with water content observations to improve the accuracy of parameter estimation and water content prediction.
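A minimal sketch of the joint update described above, assuming an augmented-state ensemble Kalman filter with perturbed observations and the groundwater level as the single observation; names and shapes are illustrative:

    import numpy as np

    def enkf_joint_update(states, params, gwl_sim, gwl_obs, obs_var, rng):
        # states:  (n_ens, n_state) pressure heads along the soil profile
        # params:  (n_ens, n_param) uncertain hydraulic parameters
        # gwl_sim: (n_ens,) simulated groundwater level per member
        # gwl_obs: scalar observed groundwater level
        aug = np.hstack([states, params])            # augmented state
        n_ens = len(gwl_sim)
        d = gwl_obs + rng.normal(0.0, np.sqrt(obs_var), size=n_ens)  # perturbed obs
        a = aug - aug.mean(axis=0)
        h = gwl_sim - gwl_sim.mean()
        cov_ah = a.T @ h / (n_ens - 1)               # cross-covariance with the obs
        var_h = h @ h / (n_ens - 1)
        gain = cov_ah / (var_h + obs_var)            # Kalman gain (scalar obs)
        aug = aug + np.outer(d - gwl_sim, gain)
        return aug[:, :states.shape[1]], aug[:, states.shape[1]:]

    # states, params = enkf_joint_update(states, params, gwl_sim,
    #                                    1.72, 0.01**2, np.random.default_rng(0))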

4.
Parameter calibration is one of the most problematic phases of numerical modeling, since the choice of parameters affects the model's reliability with respect to the physical problems being studied. In some cases, laboratory tests or physical models for evaluating model parameters cannot be completed and other strategies must be adopted; numerical models reproducing debris flow propagation are one such case. Since scale effects hinder both the reproduction of real debris flows in the laboratory and the specific tests used to determine rheological parameters, calibration is usually carried out by subjectively comparing only a few quantities, such as the heights of soil deposits calculated for some sections of the debris flow or the distance traveled by the debris flow, against values surveyed in situ after an event has occurred. Since no automatic or objective procedure has as yet been produced, this paper presents a numerical procedure based on the application of a statistical algorithm, which makes it possible to define, without ambiguity, the best parameter set. The procedure has been applied to a study case for which digital elevation models from both before and after an important event exist, meaning that a good database for applying the method was available. Its application has yielded insights that help to better understand debris flows and related phenomena.

5.
6.
A 3-compartment model of phytoplankton growth dynamics has been coupled with a primitive-equation circulation model to better understand and quantify physical and biological processes in the Adriatic Sea. This paper presents the development and application of a data assimilation procedure based on optimal control theory. The aim of the procedure is to identify the set of model coefficient values that ensures the best fit between data and model results, by minimizing a function that measures the discrepancies between model and data. Twin experiments have been successfully implemented to obtain better estimates of the biological model parameters and the biological initial conditions.
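A minimal sketch of the misfit minimization, with a hypothetical two-parameter toy model standing in for the coupled physical-biological model; in the optimal-control setting the gradient would come from the adjoint model rather than a gradient-free search:

    import numpy as np
    from scipy.optimize import minimize

    def run_model(theta):
        # Hypothetical stand-in for the coupled model: theta stacks a
        # biological parameter (growth rate) and an initial condition.
        growth, p0 = theta
        t = np.arange(10.0)
        return p0 * np.exp(growth * t)       # phytoplankton biomass series

    data = run_model(np.array([0.2, 1.0]))   # twin-experiment "observations"

    def cost(theta):
        r = run_model(theta) - data          # model-data discrepancies
        return 0.5 * np.dot(r, r)

    result = minimize(cost, x0=np.array([0.1, 0.5]), method="Nelder-Mead")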

7.
Reducing uncertainty in global temperature reconstructions of the past millennium remains the key issue in applying this record to society's pressing climate change problem. Reconstructions are collaborative, built on the research of hundreds of scientists who apply their diverse scientific expertise and field and laboratory skill to create the individual proxy reconstructions that underlie the multi-proxy, global average temperature time series. Web 2.0 features have enabled collaborative efforts that improve the characterization of uncertainty. Raw data shared via a repository (the World Data Center for Paleoclimatology) enable new reconstructions from the collection of user-generated data. Standards propagated by expert communities facilitate quality control and interoperability. Open access to data and computer code promotes transparency and makes the science accessible to a broader audience. Blogs, wikis, and listservs share background information and highlight contentious as well as unique aspects of paleoclimate science. A novel approach now underway, titled the Paleoclimate Reconstruction Challenge and based on the sharing of simulated data (pseudo-proxies) and reconstruction results, seeks to facilitate method development, further reducing uncertainty. Broadly useful aspects of the Challenge may find application in other fields.

8.
9.
The objective of this study is to investigate the impact of a surface data assimilation (SDA) technique, together with the traditional four-dimensional data assimilation (FDDA), on the simulation of a monsoon depression that formed over India during the field phase of the 1999 Bay of Bengal Monsoon Experiment (BOBMEX). The SDA uses the analyzed surface data to continuously assimilate the surface-layer temperature and water vapor mixing ratio in the mesoscale model. The depression was offshore for the greater part of this study and, since successful application of the SDA requires surface information, a method of estimating surface temperature and surface humidity from the NOAA-TOVS satellites was used. Three sets of numerical experiments were performed using a coupled mesoscale model. The first set, called CONTROL, uses the NCEP (National Centers for Environmental Prediction) reanalysis for the initial and lateral boundary conditions in the MM5 simulation. The second and third sets implemented the SDA of temperature and moisture together with the traditional FDDA scheme available in the MM5 model: the second set applied the SDA scheme only over land, and the third set extended the SDA technique over both land and sea. Both the second and third sets used the NOAA-TOVS and QuikSCAT satellite data and conventional upper-air and surface meteorological data to provide an improved analysis. The results of the three sets of MM5 simulations are compared with one another and with the analysis and the BOBMEX 1999 buoy, ship, and radiosonde observations. The predicted sea level pressure of both model runs with assimilation resembles the analysis closely and captures the large-scale structure of the monsoon depression well. The central sea level pressures of the depression for both runs with assimilation were 2–4 hPa lower than in the CONTROL. Both runs with assimilation also indicate a larger spatial extent and increased rainfall amounts over the coastal regions after landfall compared with the CONTROL. The impact of FDDA and SDA, the latter over land only, was to reduce errors by 1.45 K in temperature, 0.39 m s−1 in wind speed, and 14° in wind direction relative to the BOBMEX buoy observations, and by 1.43 m s−1 in wind speed, 43° in wind direction, and 0.75% in relative humidity relative to the CONTROL. Extending the SDA over land and sea, compared with SDA over land only, showed a further marginal reduction of errors: 0.23 K in air temperature (against the BOBMEX buoy) and 1.33 m s−1 in simulated wind speed.
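The abstract does not give its update equation; MM5-style FDDA is based on Newtonian relaxation (nudging), a generic form of which is sketched below. The timestep and relaxation timescale are illustrative values, not the paper's:

    def nudge(value, analysis, dt, tau):
        # Newtonian relaxation used in nudging-type assimilation: relax the
        # model value toward the analyzed value on timescale tau (seconds).
        return value + dt * (analysis - value) / tau

    # Applied at every model timestep to the surface-layer fields, e.g.:
    # T  = nudge(T,  T_analysis,  dt=60.0, tau=3600.0)
    # qv = nudge(qv, qv_analysis, dt=60.0, tau=3600.0)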

10.
11.
12.
The variational technique of data assimilation using adjoint equations is illustrated using a nonlinear oceanographic shallow water model. The technique consists of minimizing a cost function representing the misfit between the model and the data, subject to the model equations acting as constraints. The problem is transformed into an unconstrained one through the use of Lagrange multipliers. Particular emphasis is placed on the finite-difference formulation of the algorithm. Several numerical experiments have been conducted using simulated data obtained from a control run of the model. Implications of this technique for assimilating asynoptic satellite altimeter data into ocean models are discussed.
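A minimal illustration of the adjoint machinery on a toy linear model x_{k+1} = M x_k, where the backward (adjoint) recursion accumulates the gradient of the misfit cost with respect to the initial state; the matrix and data are illustrative, not the shallow water model:

    import numpy as np

    def cost_and_gradient(x0, M, data):
        # J = 0.5 * sum_k |x_k - d_k|^2 subject to x_{k+1} = M x_k.
        # The adjoint recursion lam_k = M^T lam_{k+1} + (x_k - d_k)
        # delivers lam_0 = dJ/dx0 in one backward sweep.
        xs = [x0]
        for _ in range(len(data) - 1):
            xs.append(M @ xs[-1])
        residuals = [x - d for x, d in zip(xs, data)]
        J = 0.5 * sum(float(r @ r) for r in residuals)
        lam = residuals[-1]
        for k in range(len(data) - 2, -1, -1):
            lam = M.T @ lam + residuals[k]
        return J, lam                        # lam is dJ/dx0

    M = np.array([[0.9, 0.1], [0.0, 0.8]])
    data = [np.array([1.0, 0.5])] * 4
    J, grad = cost_and_gradient(np.zeros(2), M, data)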

13.
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, its characterization is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale, highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically done by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved, state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our implementation of the MCMC method provides the gold standard against which the aforementioned Gaussian approximations are assessed. We present numerical synthetic experiments where we quantify the capability of each of the ad hoc Gaussian approximations to reproduce the mean and the variance of the posterior distribution (characterized via MCMC) associated with a data assimilation problem. Both single-phase and two-phase (oil–water) reservoir models are considered so that fundamental differences in the resulting forward operators are highlighted. The main objective of our controlled experiments is to exhibit the substantial discrepancies in the approximation properties of standard ad hoc Gaussian approximations. Numerical investigations of the type we present here will lead to a greater understanding of the cost-efficient, but ad hoc, Bayesian techniques used for data assimilation in petroleum reservoirs, and hence ultimately to improved techniques with more accurate uncertainty quantification.
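The paper uses a dimension-robust MCMC method; a plain random-walk Metropolis sampler is shown here only as a simplified stand-in to illustrate how the posterior is probed:

    import numpy as np

    def metropolis(log_post, theta0, n_iter, step, rng):
        # Random-walk Metropolis: propose a Gaussian perturbation, accept
        # with probability min(1, posterior ratio).
        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        chain = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.shape)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)   # posterior mean/variance come from the chain

    # chain = metropolis(lambda th: -0.5 * np.sum(th**2),
    #                    np.zeros(3), 5000, 0.5, np.random.default_rng(0))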

14.
The ensemble Kalman filter has been successfully applied for data assimilation in very large models, including those in reservoir simulation and weather. Two problems become critical in a standard implementation of the ensemble Kalman filter, however, when the ensemble size is small. The first is that the ensemble approximation to the cross-covariances between model or state variables and the data can indicate correlations that are not real. These spurious correlations give rise to model or state variable updates in regions that should not be updated. The second problem is that the number of degrees of freedom in the ensemble is only as large as the size of the ensemble, so the assimilation of large amounts of precise, independent data is impossible. Localization of the Kalman gain is almost universal in the weather community, but applications of localization for the ensemble Kalman filter in porous media flow have been somewhat rare. It has been shown, however, that localization of updates to regions of non-zero sensitivity or non-zero cross-covariance improves the performance of the EnKF when the ensemble size is small, and localization is necessary for the assimilation of large amounts of independent data. The problem is to define appropriate localization functions for different types of data and different types of variables. We show that knowledge of sensitivity alone is not sufficient to determine the region of localization: the region depends also on the prior covariance of the model variables and on the past history of data assimilation. Although the goal is to choose localization functions large enough to include the true region of non-zero cross-covariance, for EnKF applications the choice must balance the harm done by spurious covariance resulting from small ensembles against the harm done by excluding real correlations. In this paper, we focus on distance-based localization and provide insights for choosing suitable localization functions for data assimilation in multiphase flow problems. In practice, we conclude that it is reasonable to choose localization functions based on well patterns, and that the localization function should be larger than the region of non-zero sensitivity and should extend beyond a single well pattern.
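A common choice of distance-based localization function is the Gaspari-Cohn fifth-order compactly supported correlation, applied as a Schur (element-wise) taper on the Kalman gain; the sketch below assumes that function and an illustrative length scale L:

    import numpy as np

    def gaspari_cohn(r):
        # Gaspari-Cohn correlation; r = distance / L, zero beyond r = 2.
        r = np.abs(np.asarray(r, float))
        f = np.zeros_like(r)
        m = r <= 1.0
        f[m] = (-0.25 * r[m]**5 + 0.5 * r[m]**4 + 0.625 * r[m]**3
                - (5.0 / 3.0) * r[m]**2 + 1.0)
        m = (r > 1.0) & (r < 2.0)
        f[m] = (r[m]**5 / 12.0 - 0.5 * r[m]**4 + 0.625 * r[m]**3
                + (5.0 / 3.0) * r[m]**2 - 5.0 * r[m] + 4.0 - 2.0 / (3.0 * r[m]))
        return f

    # For one observation, taper the gain column by the distance from each
    # model/state variable to that observation:
    # K_localized = gaspari_cohn(dist_to_obs / L) * K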

15.
This paper presents a practical computational approach to quantify the effect of individual observations in estimating the state of a system. Such a methodology can be used for pruning redundant measurements and for designing future sensor networks. The mathematical approach is based on computing the sensitivity of the analyzed model states (the unconstrained optimization solution) with respect to the data. The computational cost is dominated by the solution of a linear system whose matrix is the Hessian of the cost function and is available only in operator form. The right-hand side is the gradient of a scalar cost function that quantifies the forecast error of the numerical model. The use of adjoint models to obtain the necessary first- and second-order derivatives is discussed. We study various strategies to accelerate the computation, including matrix-free iterative solvers, preconditioners, and an in-house multigrid solver. Experiments are conducted on both a small shallow-water equations model and a large-scale numerical weather prediction model, in order to illustrate the capabilities of the new methodology.
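A minimal sketch of the core computation: a matrix-free conjugate-gradient solve of the Hessian system, where the operator stands in for what would in practice be one tangent-linear plus one adjoint model run per application. The stencil used here is illustrative, not the paper's model:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 1000

    def hessian_vec(v):
        # Stand-in Hessian-vector product, available in operator form only.
        return 2.0 * v + 0.1 * np.roll(v, 1) + 0.1 * np.roll(v, -1)

    H = LinearOperator((n, n), matvec=hessian_vec)
    g = np.ones(n)           # gradient of the forecast-error cost function
    mu, info = cg(H, g)      # matrix-free iterative solve
    # mu is the sensitivity vector; projecting it into observation space
    # yields the impact of each individual observation.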

16.
Extreme weather events such as cloudbursts and thunderstorms are a great threat to life and property, and nowcasting such hazardous events is a great challenge for forecasters. A mesoscale model (ARPS) with real-time assimilation of DWR data has been operationally implemented at the India Meteorological Department (IMD) for real-time nowcasting of weather over the Indian region. A three-dimensional variational (ARPS3DVAR) technique and a cloud analysis procedure are utilized for real-time data assimilation in the model. The assimilation is performed as a sequence of intermittent cycles, and the complete process (from reception, processing and assimilation of DWR data, through running the ARPS model, to website updating) takes less than 20 minutes. Thus, a real-time nowcast for the next 3 h from the ARPS model is available within 20 minutes of the corresponding hour. The cloudburst event of September 15, 2011, and the thunderstorm event of October 22, 2010, are considered to demonstrate the capability of the ARPS model to nowcast extreme weather events in real time over the Indian region. Results show that in both cases, the ARPS3DVAR and cloud analysis techniques are able to extract hydrometeors from radar data, which are transported to upper levels by the strong upward motion, resulting in the distribution of hydrometeors at various isobaric levels. The dynamic and thermodynamic structures of the cloudburst and thunderstorm are also well simulated; thus, significant improvement in the initial condition is noticed. In the case of the cloudburst event, the model is able to capture the sudden collision of two or more clouds during 09–10 UTC. Rainfall predicted by the model during the cloudburst event is over 100 mm, which is very close to the observed rainfall (117 mm). The model is able to predict the cloudburst with slight errors in time and space. The real-time nowcast of the thunderstorm shows that the movement, horizontal extension, and north–south orientation of the thunderstorm are well captured during the first hour and deteriorate thereafter. The amount of rainfall predicted by the model during the thunderstorm closely matches observations, with slight errors in the location of the rainfall area. The temporal and spatial information predicted by the ARPS model about the sudden collision/merger and break-up of convective cells, and the intensification, weakening, and maintenance of convective-cell intensity, adds value to a human forecast.

17.
A dynamical downscaling approach based on scale-selective data assimilation (SSDA) is applied to tropical cyclone (TC) track forecasts. The results from a case study of super Typhoon Megi (2010) show that the SSDA approach is very effective in improving TC track forecasts by fitting the large-scale wind field of the regional model to that of the global forecast system (GFS) forecasts, while allowing the small-scale circulation to develop freely in the regional model. A comparison with the conventional spectral-nudging four-dimensional data assimilation (FDDA) indicates that the SSDA approach outperforms FDDA in TC track forecasts, because the former allows the small-scale features in a regional model to develop more freely than the latter. In addition, a number of numerical experiments are performed to investigate the sensitivity of the SSDA's effect on TC track forecasts to several parameters of the scheme, including the cutoff wave number, the vertical layers of the atmosphere being adjusted, and the interval at which SSDA is implemented. The results show that the improvements are sensitive, to differing extents, to these three parameters.
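A minimal sketch of the scale-selective idea, assuming a simple FFT low-pass split on a doubly periodic 2-D field; the cutoff is in cycles per grid point and all names are illustrative:

    import numpy as np

    def scale_selective_blend(regional, global_, cutoff):
        # Replace the large scales (wavenumbers below `cutoff`) of the
        # regional field with those of the global forecast, leaving the
        # regional small scales free to develop.
        fr = np.fft.fft2(regional)
        fg = np.fft.fft2(global_)
        ky = np.fft.fftfreq(regional.shape[0])[:, None]
        kx = np.fft.fftfreq(regional.shape[1])[None, :]
        large = np.hypot(kx, ky) < cutoff        # large-scale mask
        fr[large] = fg[large]
        return np.real(np.fft.ifft2(fr))

    # e.g. u_adjusted = scale_selective_blend(u_regional, u_gfs, cutoff=0.05)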

18.
Oceansat-1 was successfully launched by India in 1999 with two payloads, namely the Multi-frequency Scanning Microwave Radiometer (MSMR) and the Ocean Color Monitor (OCM), to study the biological and physical parameters of the ocean. The MSMR sensor is configured as an eight-channel radiometer using four frequencies with dual polarization. The MSMR data at 75 km resolution from Oceansat-1 have been assimilated in the National Centre for Medium Range Weather Forecasting (NCMRWF) data assimilation forecast system. The operational analysis and forecast system at NCMRWF is based on a T80L18 global spectral model and a Spectral Statistical Interpolation (SSI) scheme for data analysis. The impact of the MSMR data is seen globally; however, it is most significant over the oceanic regions where conventional data are rare. The dry bias of the control analyses has been removed by utilizing the MSMR data; accordingly, the total precipitable water from MSMR has been identified as a crucial parameter in this study. The impact of the surface wind speed from MSMR is to strengthen the easterlies over the tropical Indian Ocean. A shift in the positions of westerly troughs and ridges in the south Indian Ocean has contributed to a reduction of temperature around 30°S.

19.
Interpretation of regional-scale, multivariate geochemical data is aided by a statistical technique called "clustering." We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy reveal geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
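A minimal sketch of the recursive two-component partitioning, using EM (scikit-learn's GaussianMixture) in place of the paper's Hamiltonian Monte Carlo sampling of the posterior; the data and depth limit are illustrative:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def split(samples, depth, max_depth=3):
        # Recursively partition field samples with a two-component mixture,
        # building the hierarchy of clusters described above.
        if depth == max_depth or len(samples) < 10:
            return {"samples": samples}
        gm = GaussianMixture(n_components=2, n_init=5).fit(samples)
        labels = gm.predict(samples)
        return {"left": split(samples[labels == 0], depth + 1, max_depth),
                "right": split(samples[labels == 1], depth + 1, max_depth)}

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
    tree = split(data, depth=0)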

20.
Computational Geosciences - A Correction to this paper has been published: https://doi.org/10.1007/s10596-021-10079-6
