Similar References
20 similar references found (search time: 695 ms)
1.
An inverse problem is posed in terms of log-conductivities that are decomposed into macroscale deterministic and microscale stochastic components. The macroscale and microscale conductivities conceptualize hierarchical, scale-dependent aquifer parameters. A deterministic parameter estimation scheme divides the flow domain into a limited number of macroscale constant-conductivity zones. A stochastic microscale parameter estimation scheme is used to obtain fluctuations about the macroscale averages in terms of geostatistical models. Both the macroscale and microscale conductivities are estimated via maximum likelihood using adjoint-state methodologies. Monte Carlo-type approaches are used to examine the distribution of macroscale and microscale conductivity estimates.

2.
Many problems in hydraulics and hydrology are described by linear, time-dependent partial differential equations, linearity being, of course, an assumption based on necessity. Solutions to such equations have in the past been obtained from purely deterministic considerations. The derivation of such a solution requires that the initial conditions, the boundary conditions, and the parameters contained within the equations be stipulated in exact terms; the solution so derived is obviously a function of these specified values. There are at least four ways in which randomness enters the problem: i) the random initial value problem; ii) the random boundary value problem; iii) the random forcing problem, when the non-homogeneous part becomes random; and iv) the random parameter problem. Such randomness is inherent in the environment surrounding the system, the environment being endowed with a large number of degrees of freedom. This paper considers the problem of groundwater flow in a phreatic aquifer fed by rainfall. The governing equations are linear second-order partial differential equations. Explicit-form solutions to this randomly forced equation have been derived for well-defined regular boundaries. The paper also provides a derivation of low-order moment equations and contains a discussion of the parameter estimation problem for stochastic partial differential equations.
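As a toy illustration of the randomly forced problem (iii) and the low-order moment equations, the sketch below propagates the mean and variance of a discretized linear reservoir with random recharge and checks the mean against Monte Carlo simulation. All parameter values are invented for the example, not taken from the paper.

```python
import random

# Toy analogue of a randomly forced linear equation: a linear reservoir
#   h' = -a*h + r(t), recharge r ~ N(mu, sigma^2) drawn independently each step.
# The low-order moment equations propagate mean and variance directly.
a, dt, n_steps = 0.5, 0.01, 500
mu, sigma = 1.0, 0.3

# Moment equations for the discretized system
m, v = 0.0, 0.0
for _ in range(n_steps):
    m = (1 - a * dt) * m + dt * mu
    v = (1 - a * dt) ** 2 * v + (dt * sigma) ** 2

# Monte Carlo check of the mean
random.seed(1)
n_mc = 1000
finals = []
for _ in range(n_mc):
    h = 0.0
    for _ in range(n_steps):
        h = (1 - a * dt) * h + dt * random.gauss(mu, sigma)
    finals.append(h)
mc_mean = sum(finals) / n_mc
```

The moment recursion gives the ensemble statistics in one pass, which is the computational appeal of moment equations over brute-force Monte Carlo.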

3.
This paper is concerned with developing computational methods and approximations for maximum likelihood estimation and minimum mean square error smoothing of irregularly observed two-dimensional stationary spatial processes. The approximations are based on various Fourier expansions of the covariance function of the spatial process, expressed in terms of the inverse discrete Fourier transform of the spectral density function of the underlying spatial process. We assume that the underlying spatial process is governed by elliptic stochastic partial differential equations (SPDEs) driven by a Gaussian white noise process. SPDEs have often been used to model the underlying physical phenomenon, and elliptic SPDEs are generally associated with steady-state problems. A central problem in estimation of the underlying model parameters is to identify the covariance function of the process. The cumbersome exact analytical calculation of the covariance function, by inverting the spectral density function of the process, has commonly been used in the literature. The present work develops various Fourier approximations for the covariance function of the underlying process which are in easily computable form and allow easy application of Newton-type algorithms for maximum likelihood estimation of the model parameters. This work also develops an iterative search algorithm for maximum likelihood estimation of the parameters which combines the Gauss-Newton algorithm with a generalized expectation-maximization (EM) algorithm, namely the expectation-conditional maximization (ECM) algorithm. We analyze the accuracy of the covariance function approximations for the spatial autoregressive-moving average (ARMA) models analyzed in Vecchia (1988) and illustrate the performance of our iterative search algorithm in obtaining maximum likelihood estimates of the model parameters on simulated and actual data.
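The idea of approximating a covariance function by an inverse discrete Fourier transform of the spectral density can be sketched in one dimension. Here the exponential covariance and its known spectrum serve as a stand-in target; this is not one of the paper's elliptic-SPDE models, and the grid sizes are ad hoc.

```python
import cmath
import math

# 1-D illustration: approximate the covariance C(h) by a discrete inverse
# Fourier transform of the spectral density. Stand-in pair:
#   C(h) = s2*exp(-|h|/L)  <->  S(w) = s2*L / (pi*(1 + (L*w)^2))
s2, L = 1.0, 2.0

def spectrum(w):
    return s2 * L / (math.pi * (1.0 + (L * w) ** 2))

n, wmax = 4096, 50.0            # truncated, discretized frequency axis
dw = 2.0 * wmax / n
ws = [-wmax + k * dw for k in range(n)]

def cov_approx(h):
    # Riemann-sum inverse transform: C(h) ~ sum S(w_k) e^{i w_k h} dw
    total = sum(spectrum(w) * cmath.exp(1j * w * h) for w in ws)
    return (total * dw).real

approx = cov_approx(1.0)
exact = s2 * math.exp(-1.0 / L)   # exp(-0.5)
```

The accuracy is controlled by the frequency truncation `wmax` and the spacing `dw`, which is exactly the trade-off the paper's Fourier approximations manage for the 2-D case.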

4.
The steady-state two-dimensional groundwater flow equation with constant transmissivities was studied by Whittle in 1954 as a stochastic Laplace equation. He showed that the correlation function consists of a modified Bessel function of the second kind, order 1, multiplied by its argument. This paper uses this pioneering work of Whittle to fit an aquifer head field to unequally spaced observations by maximum likelihood. Observational error is also included in the model. Both the isotropic and anisotropic cases are considered. The fitted field is then calculated on a two-dimensional grid together with its standard deviation. The method is closely related to the use of two-dimensional splines for fitting surfaces to irregularly spaced observations.
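Whittle's correlation function, rho(r) = (r/L) * K1(r/L), can be evaluated with nothing more than the integral representation of the modified Bessel function K1; a minimal sketch with ad hoc quadrature settings:

```python
import math

# Whittle correlation for 2-D steady flow: rho(r) = (r/L) * K1(r/L), where
# K1 is the modified Bessel function of the second kind, order 1, computed
# from K1(x) = integral_0^inf exp(-x*cosh(t)) * cosh(t) dt (trapezoidal rule).
def bessel_k1(x, t_max=20.0, n=20000):
    dt = t_max / n
    total = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0        # trapezoid end weights
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return total * dt

def whittle_corr(r, corr_len=1.0):
    if r == 0.0:
        return 1.0                              # x*K1(x) -> 1 as x -> 0
    x = r / corr_len
    return x * bessel_k1(x)
```

The limit rho(0) = 1 follows from K1(x) ~ 1/x near zero, which is why the function is a valid correlation model despite K1's singularity at the origin.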

5.
A second-order stochastic differential equation is used to model water-table elevation. The data were sampled at the Borden Aquifer as part of a tracer experiment. The purpose of the water-table data collection was to determine the presence of water flow. We argue that the water-table surface is a simple plane oscillating up and down in time according to an equation for a stochastic oscillator. We derive the model, estimate its parameters, and provide arguments for the goodness-of-fit of the model.
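A minimal Euler-Maruyama sketch of such a second-order stochastic oscillator, with arbitrary illustrative parameters rather than the Borden estimates. For this linear SDE the stationary variance of the displacement is sigma^2 / (4*zeta*omega^3), which the simulation should approximately reproduce.

```python
import random

# Damped stochastic oscillator: x'' + 2*zeta*omega*x' + omega^2*x = sigma*xi(t),
# simulated as a first-order system by Euler-Maruyama.
random.seed(42)
zeta, omega, sigma = 0.5, 1.0, 1.0
dt, n_steps = 0.01, 300_000
sqrt_dt = dt ** 0.5

x, v = 0.0, 0.0
xs = []
for _ in range(n_steps):
    dW = random.gauss(0.0, sqrt_dt)
    x, v = (x + v * dt,
            v + (-2.0 * zeta * omega * v - omega**2 * x) * dt + sigma * dW)
    xs.append(x)

burn = n_steps // 10                        # discard transient
tail = xs[burn:]
mean_x = sum(tail) / len(tail)
var_x = sum((u - mean_x) ** 2 for u in tail) / len(tail)
theory = sigma**2 / (4.0 * zeta * omega**3)   # stationary variance = 0.5
```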

7.
Parameter estimation in nonlinear environmental problems
Popular parameter estimation methods, including least squares, maximum likelihood, and maximum a posteriori (MAP), solve an optimization problem to obtain a central value (or best estimate) followed by an approximate evaluation of the spread (or covariance matrix). A different approach is the Monte Carlo (MC) method, and particularly Markov chain Monte Carlo (MCMC) methods, which allow sampling from the posterior distribution of the parameters. Though available for years, MC methods have only recently drawn wide attention as practical ways of solving challenging high-dimensional parameter estimation problems. They have a broader scope of applications than conventional methods and can be used to derive the full posterior pdf, but can be computationally very intensive. This paper compares a number of different methods and presents improvements, using a nonlinear DNAPL source dissolution and solute transport model as a case study. This depth-integrated semi-analytical model approximates dissolution from the DNAPL source zone using nonlinear empirical equations with partially known parameters. It then calculates the DNAPL plume concentration in the aquifer by solving the advection-dispersion equation with a flux boundary. The comparison covers the classical MAP method and several computer-intensive Monte Carlo methods, including the Metropolis–Hastings (MH) method and the adaptive direction sampling (ADS) method.
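A bare-bones random-walk Metropolis-Hastings sampler illustrates the MCMC idea on a one-dimensional Gaussian posterior; this toy target stands in for the paper's DNAPL model, and all numbers are assumptions for the example.

```python
import math
import random

random.seed(0)

def log_post(theta):
    # Unnormalized log-posterior, here simply N(mean=2, sd=0.5)
    return -0.5 * (theta - 2.0) ** 2 / 0.25

def metropolis_hastings(n_samples, step=0.8, theta0=0.0):
    samples, theta, lp = [], theta0, log_post(theta0)
    for _ in range(n_samples):
        prop = theta + random.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

chain = metropolis_hastings(20000)
burned = chain[2000:]                                 # discard burn-in
post_mean = sum(burned) / len(burned)
post_var = sum((s - post_mean) ** 2 for s in burned) / len(burned)
```

Unlike MAP, the chain delivers the full posterior: mean, variance, and any quantile come from the same samples, at the cost of many forward-model evaluations.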

8.
The standard practice for assessing aquifer parameters is to match groundwater drawdown data obtained during pumping tests against theoretical well function curves specific to the aquifer system being tested. The shape of the curve derived from the logarithmic time derivative of the drawdown data is also very frequently used as a diagnostic tool to identify the aquifer system in which the pumping test is being conducted. The present study investigates the incremental area method (IAM) as an alternative diagnostic tool for aquifer system identification as well as a supplement to the aquifer parameter estimation procedure. The IAM-based diagnostic curves for ideal confined, leaky, bounded and unconfined aquifers have been derived as part of this study, and individual features of the plots have been identified. These features were noted to be unique to each aquifer setting, and could be used for rapid evaluation of the aquifer system. The effectiveness of the IAM methodology was investigated by analyzing field data for various aquifer settings including leaky, unconfined, bounded and heterogeneous conditions. The results showed that the proposed approach is a viable diagnostic tool for identifying aquifer system characteristics as well as for supporting the estimation of the hydraulic parameters obtained from standard curve matching procedures. Copyright © 2012 John Wiley & Sons, Ltd.
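For the standard diagnostic-plot ingredients mentioned above, the sketch below evaluates the Theis well function from its series expansion and the log-time derivative of drawdown, which flattens to Q/(4*pi*T) for an ideal confined aquifer. The parameter values are arbitrary test inputs, and the IAM itself is not reproduced here.

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=60):
    # Theis well function W(u) = E1(u), series expansion (valid for small u):
    # W(u) = -gamma - ln(u) + u - u^2/(2*2!) + u^3/(3*3!) - ...
    total = -EULER_GAMMA - math.log(u)
    sign = 1.0
    for n in range(1, terms + 1):
        total += sign * u ** n / (n * math.factorial(n))
        sign = -sign
    return total

Q, T, S, r = 0.01, 1e-3, 1e-4, 10.0   # assumed test values (SI units)

def drawdown(t):
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

def log_derivative(t):
    # ds/d(ln t) = Q/(4*pi*T) * exp(-u): flattens at late time, the
    # derivative-plot signature of an ideal confined aquifer.
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * math.exp(-u)

plateau = Q / (4.0 * math.pi * T)
```

Deviations of the derivative curve from this plateau (stabilizing low for leaky, doubling for a no-flow boundary) are what diagnostic methods, including the IAM, exploit.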

9.
Abstract

An approach is presented to solve the inverse problem for simultaneous identification of different aquifer parameters under steady-state conditions. The proposed methodology is formulated as a maximum likelihood parameter estimation problem. Gauss-Newton and full Newton algorithms are used for optimization with an adjoint-state method for calculating the complete Hessian matrix. The methodology is applied to a realistic groundwater model and Monte Carlo analysis is used to check the results.

10.
In 1988 and 1989, a natural gradient tracer test was performed in the shallow, aerobic aquifer at Canadian Forces Base (CFB) Borden. A mixture of ground water containing dissolved oxygenated gasoline was injected below the water table along with chloride (Cl-) as a conservative tracer. The migration of BTEX, MTBE, and Cl was monitored in detail for 16 months. The mass of BTEX compounds in the plume diminished significantly with time due to intrinsic aerobic biodegradation, while MTBE showed only a small decrease in mass over the 16-month period. In 1995/96, a comprehensive ground water sampling program was undertaken to define the mass of MTBE still present in the aquifer. Since the plume had migrated into an unmonitored section of the Borden Aquifer, numerical modeling and geostatistical methods were applied to define an optimal sampling grid and to improve the level of confidence in the results. A drive-point profiling system was used to obtain ground water samples. Numerical modeling with no consideration of degradation predicted maximum concentrations in excess of 3000 μg/L; field sampling found maximum concentrations of less than 200 μg/L. A mass balance for the remaining MTBE mass in the aquifer eight years after injection showed that only 3% of the original mass remained. Sorption, volatilization, abiotic degradation, and plant uptake are not considered significant attenuation processes for the field conditions. Therefore, we suggest that biodegradation may have played a major role in the attenuation of MTBE within the Borden Aquifer.

11.
Velocity variability at scales smaller than the size of a solute plume enhances the rate of spreading of the plume around its center of mass. Macroscopically, the rate of spreading can be quantified through macrodispersion coefficients, the determination of which has been the subject of stochastic theories. This work compares the results of a volume-averaging approach with those of the advection-dominated large-time small-perturbation theory of Dagan [1982] and Gelhar and Axness [1983]. Consider transport of an ideal tracer in a porous medium with deterministic periodic velocity. Using the Taylor-Aris-Brenner method of moments, it has been previously demonstrated [Kitanidis, 1992] that when the plume spreads over an area much larger than the period, the volume-averaged concentration satisfies the advection-dispersion equation with constant coefficients that can be computed. Here, the volume-averaging analysis is extended to the case of stationary random velocities. Additionally, a perturbation method is applied to obtain explicit solutions for small-fluctuation cases, and the results are compared with those of the stochastic macrodispersion theory. It is shown that, for sufficiently large averaging volumes, the method of moments, which uses spatial averaging, yields the same result as the stochastic theory, which is based on ensemble averaging. The result is of both theoretical and practical significance because the volume-averaging approach provides a potentially efficient way to compute macrodispersion coefficients. The method is applied to a simplified representation of the Borden aquifer. Received: December 28, 1998
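The moment-based definition of a dispersion coefficient can be illustrated with a particle random walk: the plume's second central moment grows as 2*D*t, so D_eff = Var(x)/(2*t) should recover the input coefficient. This is a generic sketch, not the volume-averaging computation of the paper.

```python
import random

# Particle-tracking sketch of the second-moment definition of dispersion.
random.seed(7)
v, D, dt = 1.0, 0.1, 0.01          # advection velocity, dispersion, time step
n_particles, n_steps = 2000, 500
sigma_step = (2.0 * D * dt) ** 0.5  # random-walk step std for dispersion D

xs = [0.0] * n_particles
for _ in range(n_steps):
    xs = [x + v * dt + random.gauss(0.0, sigma_step) for x in xs]

t = n_steps * dt
mean = sum(xs) / n_particles        # center of mass, ~ v*t
var = sum((x - mean) ** 2 for x in xs) / n_particles
D_eff = var / (2.0 * t)             # should recover the input D
```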

12.
ABSTRACT

This study investigates the impact of hydraulic conductivity uncertainty on the sustainable management of the aquifer of Lake Karla, Greece, using a stochastic optimization approach. The lack of surface water resources, in combination with the sharp increase in irrigation needs in the basin over the last 30 years, has led to an unprecedented degradation of the aquifer. In addition, the lack of data regarding hydraulic conductivity in a heterogeneous aquifer leads to hydrogeologic uncertainty. This uncertainty has to be taken into consideration when developing the optimization procedure in order to achieve the aquifer's sustainable management. Multiple Monte Carlo realizations of this spatially distributed parameter are generated and groundwater flow is simulated for each one of them. The main goal of the sustainable management of the 'depleted' aquifer of Lake Karla is two-fold: to determine the optimum volume of renewable groundwater that can be extracted while, at the same time, restoring its water table to a historic high level. A stochastic optimization problem is therefore formulated, based on the application of the optimization method to each of the aquifer's multiple stochastic realizations in a future period. In order to carry out this stochastic optimization procedure, a modelling system consisting of a series of interlinked models was developed. The results show that the proposed stochastic optimization framework can be a very useful tool for estimating the impact of hydraulic conductivity uncertainty on restoration strategies for a depleted aquifer. They also show that the optimization process is affected more by hydraulic conductivity uncertainty than the simulation process.
Editor Z.W. Kundzewicz; Guest editor S. Weijs

13.
ABSTRACT

The extreme value type III distribution was derived by using the principle of maximum entropy. The derivation required only two constraints to be determined from data, and yielded a procedure for estimation of the distribution parameters. This method of parameter estimation was comparable to the method of moments (MOM) and maximum likelihood estimation (MLE) for the low flow data used.

14.
The two component extreme value (TCEV) distribution has recently been shown to account for most of the characteristics of the real flood experience. A new method of parameter estimation for this distribution is derived using the principle of maximum entropy (POME). This method of parameter estimation is suitable for application in both the site-specific and regional cases and appears simpler than the maximum likelihood estimation method. Statistical properties of the regionalized estimation were evaluated using a Monte Carlo approach and compared with those of the maximum likelihood regional estimators.

15.
Abstract: Linear continuous-time stochastic Nash cascade conceptual models for runoff are developed. The runoff is modeled as a simple system of linear stochastic differential equations driven by white Gaussian and marked point process noises. In the case of d reservoirs, the outputs of these reservoirs form a d-dimensional vector Markov process, of which only the dth coordinate process is observed, usually at a discrete sample of time points. The dth coordinate process is not Markovian. Thus runoff is a partially observed Markov process if it is modeled using the stochastic Nash cascade model. We consider how to estimate the parameters in such models. In principle, maximum likelihood estimation for the complete process parameters can be carried out directly or through some form of the expectation-maximization (EM) algorithm, or a variation thereof, applied to the observed process data. In this research we consider a direct approximate likelihood approach and a filtering approach to an algorithm of EM type, as developed in Thompson and Kaseke (1994). These two methods are applied to real-life runoff data from a catchment in Wales. We also consider a special case of the martingale estimating function approach on the runoff model in the presence of rainfall. Finally, some simulations of the runoff process are given based on the estimated parameters.

16.
Estimation of hydraulic parameters is essential to understand the interaction between groundwater flow and seawater intrusion. Though several studies have addressed hydraulic parameter estimation based on pumping tests as well as geophysical methods, few have addressed the problem in the presence of clayey formations. In this study, a methodology is proposed to estimate anisotropic hydraulic conductivity and porosity values for a coastal aquifer with unconsolidated formations. For this purpose, one-dimensional aquifer resistivity and groundwater conductivity data are used to estimate porosity at discrete points. The hydraulic conductivity values are estimated from their mutual dependence on porosity and petrophysical parameters. From these estimated values, a bilinear relationship between hydraulic conductivity and aquifer resistivity is established based on the clay content of the sampled formation. The methodology is applied to a coastal aquifer along coastal Karnataka, India, which has significant clayey formations embedded in unconsolidated rock. The hydraulic conductivity values estimated from the established correlations have a correlation coefficient of 0.83 with pumping test data, indicating good reliability of the methodology. The established correlations also enable the estimation of horizontal hydraulic conductivity on two-dimensional resistivity sections, which was not addressed by earlier studies. The approach of applying the bilinear correlations established at one-dimensional points to two-dimensional resistivity sections is verified by comparison: the resulting horizontal hydraulic conductivity agrees with previous findings from inverse modelling. Additionally, this study provides critical insights into the estimation of vertical hydraulic conductivity, and an equation is formulated that relates vertical hydraulic conductivity to horizontal.
Based on the approach presented, the anisotropic hydraulic conductivity of any type of aquifer with embedded clayey formations can be estimated, and it has the potential to serve as an important input to groundwater models.
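As a generic illustration of the resistivity-to-porosity step, Archie's law gives porosity from the formation factor. The coefficients below are textbook defaults, not the calibrated petrophysical parameters of this study.

```python
# Archie's law: formation factor F = rho_aquifer / rho_water = a * phi**(-m),
# so phi = (a / F)**(1/m). The tortuosity factor a and cementation exponent m
# are generic defaults here, not the values calibrated in the study.
def porosity_from_resistivity(rho_aquifer, rho_water, a=1.0, m=2.0):
    formation_factor = rho_aquifer / rho_water
    return (a / formation_factor) ** (1.0 / m)

# Example: aquifer resistivity 80 ohm-m, pore-water resistivity 5 ohm-m
phi = porosity_from_resistivity(80.0, 5.0)   # F = 16 -> phi = 0.25
```

Clay content complicates this picture because clay conducts along grain surfaces, which is why the study establishes clay-dependent (bilinear) relationships rather than a single Archie fit.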

17.
We present a workflow to estimate geostatistical aquifer parameters from pumping test data using the Python package welltestpy. The procedure of pumping test analysis is exemplified for two data sets, from the Horkheimer Insel site and the Lauswiesen site, Germany. The analysis is based on a semi-analytical drawdown solution from the upscaling approach Radial Coarse Graining, which makes it possible to infer log-transmissivity variance and horizontal correlation length, in addition to mean transmissivity and storativity, from pumping test data. We estimate these parameters of aquifer heterogeneity from type-curve analysis and determine their sensitivity. This procedure, implemented in welltestpy, is a template for analyzing any pumping test. It goes beyond the possibilities of standard methods, for example those based on Theis' equation, which are limited to mean transmissivity and storativity. A sensitivity study showed the impact of observation well positions on parameter estimation quality. The insights of this study help to optimize future test setups for geostatistical aquifer analysis and provide guidance for investigating pumping tests with regard to aquifer statistics using the open-source software package welltestpy.

18.
A new method of parameter estimation for data-scarce regions is valuable for bivariate hydrological extreme frequency analysis. This paper proposes a new method of parameter estimation (maximum entropy estimation, MEE) for both the Gumbel distribution and the Gumbel–Hougaard copula in situations where insufficient data are available. MEE requires only the lower and upper bounds of two hydrological variables. To test our new method, two experiments to model the joint distribution of the maximum daily precipitation at two pairs of stations on the tributaries of the Heihe and Jinghe rivers, respectively, were performed and compared with the method of moments, correlation index estimation, and maximum likelihood estimation, which require a large amount of data. Both experiments show that for the Ye Niugou and Qilian stations, the performance of MEE is nearly identical to that of the conventional methods. For the Xifeng and Huanxian stations, MEE can capture the upper tail dependence of the maximum daily precipitation, whereas the results generated by correlation index estimation and maximum likelihood estimation are unreasonable. Moreover, MEE is shown to be generally reliable and robust by many simulations under three different situations. The Gumbel–Hougaard copula with MEE can also be applied to the bivariate frequency analysis of other extreme events in data-scarce regions.
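The method-of-moments baseline mentioned in the abstract is easy to sketch for the univariate Gumbel distribution (the MEE formulas themselves are not reproduced here); it is checked below on synthetic data with assumed parameters.

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_mom(sample):
    # Method-of-moments Gumbel fit: beta = std*sqrt(6)/pi,
    # mu = mean - gamma*beta (gamma = Euler-Mascheroni constant).
    beta = statistics.stdev(sample) * math.sqrt(6.0) / math.pi
    mu = statistics.mean(sample) - EULER_GAMMA * beta
    return mu, beta

# Synthetic Gumbel(mu=10, beta=3) sample via the inverse CDF:
# x = mu - beta * ln(-ln(U)), U ~ Uniform(0, 1)
random.seed(3)
mu_true, beta_true = 10.0, 3.0
data = [mu_true - beta_true * math.log(-math.log(random.random()))
        for _ in range(20000)]
mu_hat, beta_hat = gumbel_mom(data)
```

MEE's appeal, per the abstract, is that it needs only the variable bounds rather than a sample large enough for stable moments like these.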

19.
A parameter estimation or inversion procedure is incomplete without an analysis of uncertainties in the results. In the fundamental approach of Bayesian parameter estimation, discussed in Part I of this paper, the a posteriori probability density function (pdf) is the solution to the inverse problem. It is the product of the a priori pdf, containing a priori information on the parameters, and the likelihood function, which represents the information from the data. The maximum of the a posteriori pdf is usually taken as a point estimate of the parameters. The shape of this pdf, however, gives the full picture of uncertainty in the parameters. Uncertainty analysis is strictly a problem of information reduction. This can be achieved in several stages. Standard deviations can be computed as overall uncertainty measures of the parameters when the shape of the a posteriori pdf is not too far from Gaussian. Covariance and related matrices give more detailed information. An eigenvalue or principal component analysis allows the inspection of essential linear combinations of the parameters. The relative contributions of a priori information and data to the solution can be elegantly studied. Results in this paper are especially worked out for the non-linear Gaussian case. Comparisons with other approaches are given. The procedures are illustrated with a simple two-parameter inverse problem.
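For the linear-Gaussian special case, the a posteriori covariance follows in closed form from the data and prior precisions. The two-parameter sketch below (a straight-line model with assumed noise and prior variances, not the paper's example) shows how the posterior mean and covariance are assembled.

```python
# Two-parameter linear-Gaussian inverse problem: y = theta0 + theta1*x + noise.
# With a zero-mean Gaussian prior, the posterior precision is
#   H^T H / sigma2 + P0^{-1},
# and its inverse is the a posteriori covariance matrix.
def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def posterior(xs, ys, sigma2, prior_var):
    n = len(xs)
    hth = [[n, sum(xs)], [sum(xs), sum(x * x for x in xs)]]   # H^T H
    hty = [sum(ys), sum(x * y for x, y in zip(xs, ys))]       # H^T y
    prec = [[hth[i][j] / sigma2 + (1.0 / prior_var if i == j else 0.0)
             for j in range(2)] for i in range(2)]
    cov = inv2(prec)                                          # posterior covariance
    mean = [sum(cov[i][j] * hty[j] / sigma2 for j in range(2))
            for i in range(2)]                                # posterior mean (MAP)
    return mean, cov

# Noise-free line y = 1 + 2x with a weak prior: MAP estimate close to (1, 2)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0 + 2.0 * x for x in xs]
mean, cov = posterior(xs, ys, sigma2=0.01, prior_var=100.0)
```

The off-diagonal term of `cov` is the parameter correlation the paper inspects via eigenvalue analysis; shrinking `prior_var` shows the prior's growing contribution to the solution.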

20.
Satellites provide important information on many meteorological and oceanographic variables. State-space models are commonly used to analyse such data sets with measurement errors. In this work, we propose to extend the usual linear and Gaussian state-space model to analyse time series with irregular time sampling, such as those obtained when keeping all the satellite observations available at a specific location. We discuss parameter estimation using the method of moments and the method of maximum likelihood. Simulation results indicate that the method of moments leads to a computationally efficient and numerically robust estimation procedure suitable for initializing the Expectation–Maximisation algorithm, which is combined with a standard numerical optimization procedure to maximize the likelihood function. The model is validated on sea surface temperature (SST) data from a particular satellite. The results indicate that the proposed methodology can be used to reconstruct realistic SST time series at a specific location and also gives useful information on the quality of satellite measurements and the dynamics of the SST.
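A minimal sketch of a linear Gaussian state-space model with irregular time sampling: an Ornstein-Uhlenbeck state observed with noise, filtered by a Kalman recursion whose transition depends on each time gap. The model and all parameter values are assumptions standing in for the paper's SST formulation.

```python
import math
import random

def kalman_irregular(times, obs, theta, q, r):
    # Kalman filter for an OU state x' = -theta*x + noise (diffusion q),
    # observed as y = x + N(0, r) at irregular times. Returns the final
    # filtered mean/variance and the log-likelihood of the observations.
    m, p = 0.0, q / (2.0 * theta)              # stationary prior
    log_lik, t_prev = 0.0, times[0]
    for t, y in zip(times, obs):
        dt = t - t_prev
        phi = math.exp(-theta * dt)            # gap-dependent transition
        m = phi * m
        p = phi * phi * p + q * (1.0 - phi * phi) / (2.0 * theta)
        s = p + r                              # innovation variance
        k = p / s                              # Kalman gain
        log_lik += -0.5 * (math.log(2.0 * math.pi * s) + (y - m) ** 2 / s)
        m, p = m + k * (y - m), (1.0 - k) * p
        t_prev = t
    return m, p, log_lik

# Synthetic irregularly sampled data from the same model
random.seed(5)
theta, q, r = 1.0, 2.0, 0.1
times, x, obs = [0.0], 0.0, []
for _ in range(400):
    dt = random.uniform(0.05, 0.5)             # irregular sampling gaps
    phi = math.exp(-theta * dt)
    x = phi * x + random.gauss(0.0, math.sqrt(q * (1.0 - phi * phi) / (2.0 * theta)))
    times.append(times[-1] + dt)
    obs.append(x + random.gauss(0.0, math.sqrt(r)))

m_last, p_last, ll = kalman_irregular(times[1:], obs, theta, q, r)
```

The same log-likelihood, viewed as a function of (theta, q, r), is what the EM-plus-numerical-optimization procedure of the paper maximizes, with moment-based starting values.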
