Similar literature
20 similar records retrieved (search time: 218 ms).
1.
In this paper we present a method which allows delineation of geologic structures in a bi-modal lithotype setting. We propose to use gravity data in combination with a priori information about the density contrast between the two lithotypes. The iterative method uses an objective function with five tunable parameters which need to be set. Using an efficient parameter search, suitable ranges of these parameters are investigated to determine their optimal values, which in turn ensures good inversion results.
The approach produces structural images of the subsurface without the need for an a priori density model; the depth to the top of the inhomogeneity is also retrieved.
Besides synthetic simulations, the methodology has also been applied to a small gravity data set acquired by industry over a basinal structure. A consistent, bi-modal image of the bedrock depression is obtained from the data, which, in this case, was the goal. Other potential areas of application include the delineation of salt structures and ore deposits.

2.
We present the extension of stereotomography to P- and S-wave velocity estimation from PP- and PS-reflected/diffracted waves. In this new context, we greatly benefit from the use of locally coherent events by stereotomography. In particular, when applied to S-wave velocity estimation from PS-data, no pairing of PP- and PS-events is required a priori. In our procedure the P-wave velocity model is obtained first using stereotomography on PP-arrivals. Then the S-wave velocity model is obtained using PS-stereotomography on PS-arrivals with the P-wave velocity model held fixed. We present an application to an 'ideal' synthetic data set demonstrating the relevance of the approach, which allows us to recover depth-consistent P- and S-wave velocity models even if no pairing of PP- and PS-events is introduced. Finally, results for a real data set from the Gulf of Mexico are presented, demonstrating the potential of the method in a noisy data context.

3.
The use of a priori data to resolve non-uniqueness in linear inversion
Summary. The recent, but by now classical, method for dealing with non-uniqueness in geophysical inverse problems is to construct linear averages of the unknown function whose values are uniquely defined by empirical data (Backus & Gilbert). However, the usefulness of such linear averages for making geophysical inferences depends on the good behaviour of the unknown function in the region in which it is averaged. The assumption of good behaviour, which is implicit in the acceptance of a given average property, is equivalent to the use of a priori information about the unknown function. There are many cases in which such a priori information may be expressed quantitatively and incorporated in the analysis from the very beginning. In these cases, the classical least-squares method may be used both to estimate the unknown function and to provide meaningful error estimates. In this paper I develop methods for exploring the resolving power in such cases. For those problems in which a continuous unknown function is represented by a finite number of 'layer averages', the ultimately achievable resolving width is simply the layer thickness, and perfectly rectangular resolving kernels of greater width are achievable. The method is applied to synthetic data for the inverse 'gravitational edge effect' problem, in which the data yi are related to an unknown function f(z) and contaminated by random errors ei. Results are compared with those of Parker, who studied the same problem using the Backus-Gilbert approach.
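The linear inverse problem and the averaging construction discussed in this abstract can be sketched as follows, using a generic data kernel K_i (the specific kernel of the gravitational edge-effect example is not reproduced here):

    y_i = \int K_i(z)\, f(z)\, dz + e_i, \qquad
    \hat{f}(z_0) = \sum_i a_i(z_0)\, y_i, \qquad
    A(z, z_0) = \sum_i a_i(z_0)\, K_i(z),

where A(z, z_0) is the resolving kernel. When f is represented by layer averages, the narrowest achievable resolving kernel is a rectangle of one layer thickness, as stated in the abstract.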

4.
How to Choose Priors for Bayesian Estimation of the Discovery Process Model
The Bayesian version of the discovery process model provides an effective way to estimate the parameters of the superpopulation, the efficiency of the exploration effort, the number of pools and the undiscovered potential in a play. The posterior estimates are greatly influenced by the prior distribution of these parameters. Some empirical and statistical relationships for these parameters can be obtained from Monte Carlo simulations of the discovery model. For example, there is a linear relationship between the expectation of a pool size in logarithms and the order of its discovery, the slope of which is related to the discoverability factor. Some simple estimates for these unknown play parameters can be derived based upon these empirical and statistical conclusions and may serve as priors for the Bayesian approach. The priors and posteriors from this empirical Bayesian approach are compared with the estimates from Lee and Wang's modified maximum likelihood approach using the same data.
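As a hedged illustration of the empirical relationship mentioned above (an approximately linear trend between the expected log pool size and the order of discovery, with slope tied to the discoverability factor), a simple prior-building step might look like the sketch below; the synthetic play and all numbers are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    order = np.arange(1, 31)                    # order of discovery within the play
    log_size = 8.0 - 0.12 * order + rng.normal(0.0, 0.4, order.size)  # synthetic discoveries

    slope, intercept = np.polyfit(order, log_size, 1)
    print("intercept (prior mean log size of the first discovery):", intercept)
    print("slope (declines with order; related to the discoverability factor):", slope)

Estimates of this kind could then serve as priors for the Bayesian discovery process model, which is the role the abstract assigns to them.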

5.
Hydro power schemes operating in a free electricity market seek to maximize profits by varying generation rates to take best advantage of fluctuating selling prices, subject to the constraints of keeping storage lakes within their operational bounds and avoiding spillage losses. Various computer algorithms can be used in place of manual scheme operation to aid this maximization process, so it is desirable to quantify any profit gained from a given algorithm. A standard approach involves applying the algorithm to a period of past river flow records to see how much additional scheme income might have been obtained. This process requires the use of a hydro power scheme model, which inevitably can only approximate operational details, so the anticipated income gains are likely to be biased estimates of the actual income gained from implementation of the algorithm. In addition to preliminary algorithm evaluation, it is desirable that hydro scheme managers have a methodology to confirm the anticipated income gain. Such confirmation can be difficult because true income gains are typically of the order of a few per cent and may not be easily distinguishable from background noise. We develop an approach that allows estimation of the true income gain when a change is made from manual to computer control of hydro power scheme operations, or when upgrading from one maximization algorithm to another. The method uses a regression model to describe the former period of scheme operation. Post-implementation residuals from the regression predictions then provide estimates of the actual income gain. The method can be sensitive to small but consistent income gains. Also, there is no requirement to construct any hydro scheme simulation model, so bias effects should be considerably reduced. The approach was developed in the context of evaluating an income-maximization algorithm applied to a small hydro power scheme in the Kaimai Ranges of New Zealand. However, the methodology seems sufficiently simple and general to be applicable, with modification, to other power schemes moving toward increasing income through operational changes.
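A minimal sketch of the regression idea described above, with entirely hypothetical weekly data: the model is calibrated on the manual-operation period, and post-implementation residuals then estimate the income gain.

    import numpy as np

    rng = np.random.default_rng(3)
    n_pre, n_post = 150, 50
    inflow = rng.gamma(3.0, 10.0, n_pre + n_post)         # river inflow proxy
    price = rng.normal(60.0, 8.0, n_pre + n_post)         # electricity price
    income = 2.0 * inflow + 1.5 * price + rng.normal(0.0, 5.0, n_pre + n_post)
    income[n_pre:] += 4.0                                  # small true gain after the switch-over

    X = np.column_stack([np.ones_like(inflow), inflow, price])
    beta, *_ = np.linalg.lstsq(X[:n_pre], income[:n_pre], rcond=None)  # fit on manual period only
    residuals_post = income[n_pre:] - X[n_pre:] @ beta
    print("estimated income gain per period:", residuals_post.mean())

Because no hydro-scheme simulation model is involved, the bias concern raised above for simulation-based evaluation does not arise; only the regression's explanatory power limits the sensitivity of the test.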

6.
The development of chironomid-based air temperature inference models in high-latitude regions often relies on limited spatial coverage of meteorological data and/or on spot measurements of water temperature at the time of sampling. Until recently, simple linear regression relating air temperature to latitude was the best available means of characterizing the air temperature gradient along a latitudinal transect. Recent studies, however, have used high-resolution gridded climate data to develop new chironomid-based air temperature inference models. This innovative approach has never been further analyzed to test its reliability. This study presents a method using ArcGIS® to extract air temperatures from a high-resolution global gridded climate data set (New et al. 2002) and to incorporate these new data into a variety of chironomid-based air temperature inference models to test their performance. Results suggest that this method is reliable, produces better estimates of air temperature, and will be helpful in the development of further quantitative air temperature inference models in remote areas.

7.
One of the uses of geostatistical conditional simulation is as a tool in assessing the spatial uncertainty of inputs to the Monte Carlo method of system uncertainty analysis. Because the number of experimental data in practical applications is limited, the geostatistical parameters used in the simulation are themselves uncertain. The inference of these parameters by maximum likelihood allows for an easy assessment of this estimation uncertainty which, in turn, may be included in the conditional simulation procedure. A case study based on transmissivity data is presented to show the methodology whereby both model selection and parameter inference are solved by maximum likelihood.
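A sketch of maximum-likelihood inference of geostatistical parameters of the kind referred to above, here for an assumed exponential covariance model fitted to scattered synthetic transmissivity-like values; both the covariance model and the data are illustrative only.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    def neg_log_likelihood(log_params, coords, z):
        sigma2, a = np.exp(log_params)                        # variance and correlation range
        d = cdist(coords, coords)
        C = sigma2 * np.exp(-d / a) + 1e-8 * np.eye(len(z))   # exponential covariance + jitter
        L = np.linalg.cholesky(C)
        alpha = np.linalg.solve(L, z - z.mean())
        return 0.5 * (alpha @ alpha) + np.sum(np.log(np.diag(L)))

    rng = np.random.default_rng(0)
    coords = rng.uniform(0.0, 10.0, size=(50, 2))             # synthetic sample locations
    z = rng.normal(size=50)                                   # synthetic log-transmissivity values

    res = minimize(neg_log_likelihood, x0=np.log([1.0, 2.0]),
                   args=(coords, z), method="Nelder-Mead")
    print("ML variance and correlation range:", np.exp(res.x))

The curvature of this likelihood around its maximum also provides the parameter-uncertainty assessment that the abstract proposes to carry into the conditional simulations.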

8.
We measure the degree of consistency between published models of azimuthal seismic anisotropy from surface waves, focusing on Rayleigh wave phase-velocity models. Some models agree up to wavelengths of ∼2000 km, albeit at small values of the linear correlation coefficient. Others, however, are not well correlated at all, also with regard to isotropic structure. This points to differences in the underlying data sets and inversion strategies, particularly the relative 'damping' of mapped isotropic versus anisotropic anomalies. Yet there is more agreement between published models than commonly held, encouraging further analysis. Employing a generalized spherical harmonic representation, we analyse power spectra of orientational (2Ψ) anisotropic heterogeneity from seismology. We find that the anisotropic component of some models is characterized by stronger short-wavelength power than the associated isotropic structure. This spectral signal is consistent with predictions from new geodynamic models based on olivine texturing in mantle flow. The flow models are also successful in predicting some of the seismologically mapped patterns. We substantiate earlier findings that flow computations significantly outperform models of fast azimuths based on absolute plate velocities. Moreover, further evidence for the importance of active upwellings and downwellings as inferred from seismic tomography is presented. Deterministic estimates of expected anisotropic structure based on mantle flow computations such as ours can help guide future seismological inversions, particularly in oceanic plate regions. We propose to consider such a priori information when addressing open questions about the averaging properties and resolution of surface- and body-wave-based estimates of anisotropy.

9.
This work investigates constructing plans of building interiors using learned building measurements. In particular, we address the problem of accurately estimating dimensions of rooms when measurements of the interior space have not been captured. Our approach focuses on learning the geometry, orientation and occurrence of rooms from a corpus of real-world building plan data to form a predictive model. The trained predictive model may then be queried to generate estimates of room dimensions and orientations. These estimates are then integrated with the overall building footprint and iteratively improved using a two-stage optimisation process to form complete interior plans.

The approach is presented as a semi-automatic method for constructing plans that can cope with a limited set of known information and that produces likely representations of building plans through the modelling of soft and hard constraints. We evaluate the method in the context of estimating residential house plans and demonstrate that predictions can effectively be used for constructing plans given limited prior knowledge about the types of rooms and their topology.


10.
Choice of norm for the density distribution of the Earth
Summary. The determination of the density distribution of the Earth from gravity data is called the inverse gravimetric problem. A unique solution to this problem may be obtained by introducing a priori data concerning the covariance of density anomalies. This is equivalent to requiring the density to fulfil a minimum norm condition. The generally used norm is the one equal to the integral of the square of the density distribution (the L2-norm), the use of which implies that blocks of constant density are uncorrelated. It is shown that for harmonic anomalous density distributions this leads to an external gravity field with a power spectrum (degree variances) which tends too slowly to zero, i.e. implying gravity anomalies much less correlated than actually observed. It is proposed to use a stronger norm, equal to the integral of the sum of the squares of the derivatives of the density distribution. As a consequence of this, base functions which are constant within blocks are no longer a natural choice when solving the inverse gravimetric problem. Instead, a block with a linearly varying density may be used. A formula for the potential of such a block is derived.
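The two norms contrasted in this abstract can be written as

    \|\rho\|_{L_2}^{2} = \int_{V} \rho^{2}\, dV
    \qquad \text{versus} \qquad
    \|\rho\|^{2} = \int_{V} |\nabla \rho|^{2}\, dV ,

the first implying uncorrelated constant-density blocks and a slowly decaying external gravity spectrum, the second (the stronger norm proposed above) penalizing density gradients and favouring smoother, more realistically correlated anomalies.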

11.
We incorporate a maximum entropy image reconstruction technique into the process of modelling the time-dependent geomagnetic field at the core–mantle boundary (CMB). In order to deal with unconstrained small length scales in the process of inverting the data, some core field models are regularized using a priori quadratic norms in both space and time. This artificial damping leads to the underestimation of power at large wavenumbers, and to a loss of contrast in the reconstructed picture of the field at the CMB. The entropy norm, recently introduced to regularize magnetic field maps, provides models with better contrast, and involves a minimum of a priori information about the field structure. However, this technique was developed to build only snapshots of the magnetic field. Previously described in the spatial domain, the technique is implemented here in the spherical harmonic domain and extended to the time-dependent problem, where both spatial and temporal regularizations are required. We apply our method to model the field over the interval 1840–1990 from a compilation of historical observations. Applying the maximum entropy method in space, for a fit to the data similar to that obtained with a quadratic regularization, effectively reorganizes the magnetic field lines so as to yield a map with better contrast. This is associated with a less rapidly decaying spectrum at large wavenumbers. Applying the maximum entropy method in time permits us to model sharper temporal changes, associated with larger spatial gradients in the secular variation, without producing spurious fluctuations on short timescales. This method avoids the smearing back in time of field features that are not constrained by the data. Perspectives concerning future applications of the method are also discussed.
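In generic form (the exact entropy functional used for core-field maps is not reproduced here), the regularized inversions compared above minimize an objective of the type

    \Phi(\mathbf{m}) = (\mathbf{d} - G\mathbf{m})^{T} C_{e}^{-1} (\mathbf{d} - G\mathbf{m})
    + \lambda_{S}\, N_{S}(\mathbf{m}) + \lambda_{T}\, N_{T}(\mathbf{m}),

where the spatial and temporal penalties N_S and N_T are quadratic norms in the conventional approach and are replaced by (the negative of) an entropy measure of the field map in the maximum-entropy approach, which is what restores contrast at the CMB for a comparable fit to the data.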

12.
A systematic test of time-to-failure analysis
Time-to-failure analysis is a technique for predicting earthquakes in which a failure function is fitted to a time series of accumulated Benioff strain. Benioff strain is computed from regional seismicity in areas that may produce a large earthquake. We have tested the technique by fitting two functions, a power law proposed by Bufe & Varnes (1993) and a log-periodic function proposed by Sornette & Sammis (1995). We compared predictions from the two time-to-failure models to observed activity and to predicted levels of activity based upon the Poisson model. Likelihood ratios show that the most successful model is the Poisson model, with the simple Poisson model four times as likely to be correct as the best time-to-failure model. The best time-to-failure model is a blend of 90 per cent Poisson and 10 per cent log-periodic predictions. We tested the accuracy of the error estimates produced by the standard least-squares fitter and found greater accuracy for fits of the simple power law than for fits of the more complicated log-periodic function. The least-squares fitter underestimates the true error in time-to-failure functions because the error estimates are based upon linearized versions of the functions being fitted.
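A sketch of the power-law time-to-failure fit in the style of Bufe & Varnes (1993), applied to synthetic cumulative Benioff strain; the functional form is the standard one, but all numbers below are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(t, A, B, tf, m):
        # cumulative Benioff strain accelerating toward a failure time tf
        return A + B * np.clip(tf - t, 1e-9, None) ** m

    rng = np.random.default_rng(1)
    tf_true = 10.0
    t = np.linspace(0.0, 9.0, 60)
    strain = 5.0 - 2.0 * (tf_true - t) ** 0.3 + rng.normal(0.0, 0.02, t.size)

    popt, pcov = curve_fit(power_law, t, strain, p0=[5.0, -2.0, 9.5, 0.3], maxfev=20000)
    print("estimated failure time:", popt[2])
    print("formal 1-sigma error:", np.sqrt(pcov[2, 2]))  # likely optimistic, as the abstract notes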

13.
Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results into already existing solutions. Such a combination or assimilation process demands proper weighting of the data in order for the combination to be optimal and the error estimates of the results realistic. One flexible method for gravity field approximation is least-squares collocation, which leads to optimal solutions for the predicted quantities and their error covariance estimates. The drawback of this method is related to the current ability of computers to handle the very large systems of linear equations produced by an equally large amount of available input data. This problem becomes more serious when error covariance estimates have to be computed simultaneously. Using numerical experiments aimed at revealing dependencies between error covariance estimates and given features of the input data, we investigate the possibility of a straightforward estimation of error covariance functions that exploits known characteristics of the observations. The experiments, using gravity anomalies for the computation of geoid heights and the associated error covariance functions, were conducted in the Arctic region north of 64° latitude. The correlation between the known features of the data and the parameters of the computed error covariance functions (variance and correlation length) was estimated using multiple regression analysis. The results showed that a satisfactory a priori estimation of these parameters was not possible, at least in the area considered.

14.
A Bayesian approach to inverse modelling of stratigraphy, part 1: method
The inference of ancient environmental conditions from their preserved response in the sedimentary record remains an outstanding issue in stratigraphy. Since the 1970s, conceptual stratigraphic models (e.g. sequence stratigraphy), based on the underlying assumption that accommodation space is the critical control on stratigraphic architecture, have been widely used. Although these methods have more recently considered other possible parameters such as sediment supply and transport efficiency, they still fail to take into account the full range of possible parameters, processes, and their complex interactions that control stratigraphic architecture. In this contribution, we present a new quantitative method for the inference of key environmental parameters (specifically sediment supply and relative sea level) that control stratigraphy. The approach combines a fully non-linear inversion scheme with a 'process-response' forward model of stratigraphy. We formulate the inverse problem using a Bayesian framework in order to sample the full range of possible solutions and explicitly build in prior geological knowledge. Our methodology combines Reversible Jump Markov chain Monte Carlo and Simulated Tempering algorithms, which are able to deal with variable-dimensional inverse problems and multi-modal posterior probability distributions, respectively. The inverse scheme has been linked to a forward stratigraphic model, BARSIM (developed by Joep Storms, University of Delft), which simulates shallow-marine wave/storm-dominated systems over geological timescales. This link requires the construction of a likelihood function to quantify the agreement between simulated and observed data of different types (e.g. sediment age and thickness, grain size distributions). The technique has been tested and validated with synthetic data, in which all the parameters are specified to produce a 'perfect' simulation, although we add noise to these synthetic data for subsequent testing of the inverse modelling approach. These tests address convergence and computational-overhead issues, and highlight the robustness of the inverse scheme, which is able to assess the full range of uncertainties on the inferred environmental parameters and facies distributions.

15.
Depositional stratigraphy represents the only physical archive of palaeo-sediment routing, and this limits analysis of ancient source-to-sink systems in both space and time. Here, we use palaeo-digital elevation models (palaeoDEMs, based on high-resolution palaeogeographic reconstructions), HadCM3L general circulation model climate data and the BQART suspended sediment discharge model to demonstrate a predictive, forward approach to palaeo-sediment routing system analysis. To exemplify our approach, we use palaeoDEMs and HadCM3L data to predict the configurations, geometries and climates of large continental catchments on the Cenomanian and Turonian North American continent. We then use BQART to estimate suspended sediment discharges and catchment-averaged erosion rates and map their spatial distributions. We validate our estimates against published geologic constraints from the Cenomanian Dunvegan Formation, Alberta, Canada, and the Turonian Ferron Sandstone, Utah, USA, and find that the estimates are consistent or within a factor of two to three. We then evaluate the univariate and multivariate sensitivity of our estimates to a range of uncertainty margins on palaeogeographic and palaeoclimatic boundary conditions; even large uncertainty margins (≤50%/±5°C) still recover estimates of suspended sediment discharge within an order of magnitude of published constraints. PalaeoDEMs are therefore suitable as a first-order investigative tool in palaeo-sediment routing system analysis and are particularly useful where stratigraphic records are incomplete. We highlight the potential of this approach to predict the global spatio-temporal response of suspended sediment discharges and catchment-averaged erosion rates to long-period tectonic and climatic forcing in the geologic past.
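For orientation, a sketch of the BQART suspended sediment discharge relation used above, as it is commonly written for catchment temperatures of at least 2°C (Syvitski & Milliman, 2007); the coefficient and unit conventions below are quoted from memory and should be checked against the original source.

    def bqart_qs(Q_km3_yr, A_km2, R_km, T_degC, B=1.0, omega=0.0006):
        """Suspended sediment discharge (Mt/yr) for T >= 2 deg C.

        Q: water discharge (km^3/yr), A: drainage area (km^2),
        R: relief (km), T: mean catchment temperature (deg C),
        B: lithology/human-impact factor."""
        return omega * B * Q_km3_yr**0.31 * A_km2**0.5 * R_km * T_degC

    # hypothetical mid-Cretaceous catchment, purely illustrative values
    print(bqart_qs(Q_km3_yr=50.0, A_km2=2.0e5, R_km=2.5, T_degC=12.0), "Mt/yr")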

16.
We present an evaluation of the procedure by which model prediction bias is examined in palaeolimnological transfer-function inference models. We argue that most of the prediction biases commonly reported in the literature are, in fact, fallacious, and are the artificial consequence of the inappropriate manner in which residuals are traditionally examined. We show that the extent of the specious model bias is entirely predictable from first principles and is essentially determined by the strength of the predictive model. We suggest that residuals should always be examined as a function of the model's predictions, and we discuss the implications of the old and new approaches.
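The point about residual analysis can be reproduced with a few lines of synthetic data: a weak but unbiased calibration shows a strong apparent bias when residuals are compared with the observations, and essentially none when compared with the predictions.

    import numpy as np

    rng = np.random.default_rng(4)
    temp = rng.normal(10.0, 2.0, 400)            # "observed" temperatures
    proxy = temp + rng.normal(0.0, 2.0, 400)     # noisy biological proxy signal
    coef = np.polyfit(proxy, temp, 1)            # calibrate the inference model
    pred = np.polyval(coef, proxy)
    resid = pred - temp

    print("corr(residual, observed): ", np.corrcoef(resid, temp)[0, 1])   # strongly negative
    print("corr(residual, predicted):", np.corrcoef(resid, pred)[0, 1])   # close to zero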

17.
A frame-network model for representing geographic expert knowledge
付炜 (Fu Wei), 《地理研究》 (Geographical Research), 2002, 21(3): 357-364
Based on a frame-network structural model for expert knowledge representation, the method describes the entity units of the geoscientific environment with knowledge frame-network structures and links the representations of expert knowledge at each level with pointers, forming a frame network that maps knowledge to semantics. Taking the construction of an expert system for rational land-use planning decisions in the Urumqi River basin as an example, the construction method and implementation scheme of the model are described, and the organizational structure of the system's knowledge base and the design principles of its inference rules are discussed.

18.
Mosquito surveillance programs provide a primary means of understanding mosquito vector population dynamics for the risk assessment of human exposure to West Nile virus (WNv). The lack of spatial coverage and missing observations in mosquito surveillance data often challenge our efforts to predict this vector-borne disease and implement control measures. We developed a WNv mosquito abundance prediction model in which local meteorological and environmental data were synthesized with entomological data in a generalized linear mixed modeling framework. The discrete nature of mosquito surveillance data is accommodated by a Poisson distributional assumption, and the site-specific random effects of the generalized linear mixed model (GLMM) capture any fluctuation unexplained by a general trend. The proposed Poisson GLMMs efficiently account for the nested structure of mosquito surveillance data and incorporate the temporal correlation between observations obtained at each trap by a first-order autoregressive model. In the case study, Bayesian inference of the proposed models is illustrated using a subset of mosquito surveillance data in the Greater Toronto Area. The relevance of the proposed GLMM tailored to WNv mosquito surveillance data is highlighted by the comparison of model performance in the presence of inevitable but quantifiable uncertainties.
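A sketch of the data-generating process the abstract describes, with invented parameter values: Poisson counts per trap and week, a site-specific random intercept, and a first-order autoregressive term in the weekly log-mean. Fitting such a model would be done with a GLMM or Bayesian package; only the model structure is illustrated here.

    import numpy as np

    rng = np.random.default_rng(5)
    n_sites, n_weeks = 20, 18
    beta0, beta_temp, rho, sigma_site, sigma_ar = -1.0, 0.08, 0.6, 0.5, 0.3

    temp = rng.normal(22.0, 3.0, (n_sites, n_weeks))      # local meteorological covariate
    u_site = rng.normal(0.0, sigma_site, n_sites)         # trap-specific random effect
    counts = np.zeros((n_sites, n_weeks), dtype=int)
    for i in range(n_sites):
        eps = 0.0
        for t in range(n_weeks):
            eps = rho * eps + rng.normal(0.0, sigma_ar)   # AR(1) temporal correlation
            log_mu = beta0 + beta_temp * temp[i, t] + u_site[i] + eps
            counts[i, t] = rng.poisson(np.exp(log_mu))
    print(counts[:3])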

19.
Summary. Most of the Earth's magnetic field and its secular change originate in the core. Provided the mantle can be treated as an electrical insulator, stochastic inversion enables surface observations to be analysed for the core field. A priori information about the variation of the field at the core boundary leads to very stringent conditions at the Earth's surface. The field models are identical with those derived from the method of harmonic splines (Shure, Parker & Backus) provided the a priori information is specified appropriately.
The method is applied to secular variation data from 106 magnetic observatories. Model predictions for fields at the Earth's surface have error estimates associated with them that appear realistic. For plausible choices of a priori information the error of the field at the core is unbounded, but integrals over patches of the core surface can have finite errors. The hypothesis that magnetic fields are frozen to the core fluid implies that certain integrals of the secular variation vanish. This idea is tested by computing the integrals and their standard and maximum errors. Most of the integrals are within one standard deviation of zero, but those over the large patches to the north and south of the magnetic equator are many times their standard error, because of the dominating influence of the decaying dipole. All integrals are well within their maximum error, indicating that it will be possible to construct core fields, consistent with frozen flux, that satisfy the observations.
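The frozen-flux constraint tested above is usually stated as the conservation of radial magnetic flux through any patch S of the core surface bounded by a null-flux curve (a curve on which B_r = 0):

    \frac{d}{dt}\int_{S(t)} B_r\, dS = 0
    \quad \Longrightarrow \quad
    \int_{S} \dot{B}_r\, dS = 0 ,

so the integrals of the secular variation \dot{B}_r over such patches should vanish within their errors, which is the test carried out on the 106-observatory models.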

20.
K. Stüwe, J. Robl, S. Matthai. Geomorphology, 2009, 108(3-4): 200-208
A simple numerical landscape evolution model is used to investigate the rate of erosional decay of the Yucca Mountain crest in Nevada, USA, a location proposed as a permanent repository for high-level radioactive waste. The model is based on a stream power approach in which we assume that the rate of erosion is proportional to the size of the catchment, as a proxy for water flux, and to the square of the topographic gradient. The proportionality constants in the model are determined using the structural history of the region: extensional tectonics has dissected the region into a series of well-defined tilt blocks over the last 11 Myr, and the ratio of fault displacement to gully incision during this time is used to scale the model. Forward predictions of our model show that the crest will denude to the level of the proposed site between 500,000 years and 5 Myr from now. This prediction is based on conservative estimates for all parameters involved. Erosion may be more rapid if other processes are involved; for example, our model does not consider continuing uplift or catastrophic surface processes such as have been recorded in the region. We conclude that any "total system performance analysis" (TSPA), such as has been performed for the Yucca Mountain region to predict geological events inside the ridge, must consider erosion as an integral part of its predictions.
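A sketch of the erosion law stated above (erosion rate proportional to catchment area, as a proxy for water flux, times the square of slope), applied to an invented one-dimensional ridge profile; the constant k is purely illustrative, whereas in the paper it is calibrated from the ratio of fault displacement to gully incision over the last 11 Myr.

    import numpy as np

    def erosion_rate(area_m2, slope, k=1.0e-9):
        """Erosion rate (m/yr) ~ k * A * S**2, with an illustrative constant k."""
        return k * area_m2 * slope**2

    x = np.linspace(0.0, 1500.0, 31)            # distance up a gully (m)
    z = np.linspace(0.0, 300.0, 31)             # elevation profile (m), crest at the end
    area = 1.0e6 * np.linspace(1.0, 0.1, 31)    # upstream area shrinking toward the crest (m^2)
    dt = 1.0e4                                   # time step (yr)

    for _ in range(100):                         # roughly 1 Myr of erosion
        slope = np.gradient(z, x)
        z[1:] -= erosion_rate(area[1:], slope[1:]) * dt
    print("crest lowering after ~1 Myr (m):", 300.0 - z[-1])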
