Similar Documents (20 results)
1.
The geographical detector model can be applied to either spatial or non-spatial data for discovering associations between a dependent variable and potential discrete controlling factors. It can also be applied to continuous factors after they are discretized. However, the power of determinant (PD), which measures data association based on the variance of the dependent variable within zones of a potential controlling factor, does not explicitly consider the spatial characteristics of the data and is also influenced by the number of levels into which each continuous factor is discretized. Here, we propose an improved spatial data association estimator, termed the SPatial Association DEtector (SPADE), which measures spatial data association by the power of spatial and multilevel discretization determinant (PSMD): it explicitly considers spatial variance by weighting influence according to the spatial distribution of the data, and it minimizes the influence of the number of levels on PD values by using multilevel discretization and accounting for the information loss due to discretization. We illustrate the new method by applying it to simulated data with a known benchmark association and to dissection density data in the United States to assess its potential controlling factors. Our results show that PSMD is a better measure of association between spatially distributed data than the original PD.
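For orientation, the sketch below computes the classical, non-spatial power of determinant (the q-statistic) that PSMD improves upon; it does not attempt SPADE's spatial weighting or multilevel discretization, and all data are synthetic.

```python
import numpy as np

def power_of_determinant(y, zones):
    """Classical (non-spatial) PD / q-statistic of the geographical detector:
    q = 1 - sum_h(N_h * var_h) / (N * var), where h runs over the zones of a
    discretized controlling factor. Values near 1 indicate strong association."""
    y = np.asarray(y, dtype=float)
    zones = np.asarray(zones)
    within = sum(y[zones == z].size * y[zones == z].var() for z in np.unique(zones))
    return 1.0 - within / (y.size * y.var())

# Toy example: a factor whose zones explain most of the variance of y.
rng = np.random.default_rng(0)
zones = rng.integers(0, 4, size=500)
y = zones * 2.0 + rng.normal(0.0, 0.5, size=500)
print(round(power_of_determinant(y, zones), 3))   # close to 1
```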

2.
Much uncertainty is derived from the application of conceptual rainfall runoff models. In this paper, HYSIM, an 'off-the-shelf' conceptual rainfall runoff model, is applied to a suite of catchments throughout Ireland in preparation for use in climate impact assessment. Parameter uncertainty is assessed using the GLUE methodology. Given the lack of source code available for the model, parameter sampling is carried out using Latin hypercube sampling. Uncertainty bounds are constructed for model output. These bounds will be used to quantify uncertainty in future simulations as they include error derived from data measurement, model structure and parameterization.
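A hedged sketch of the sampling and GLUE filtering workflow described above, assuming a generic rainfall-runoff wrapper; the parameter ranges, behavioural threshold and the `run_model` stand-in are illustrative, not HYSIM's actual interface.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sample: each parameter gets exactly one draw per stratum."""
    n_params = len(bounds)
    u = np.empty((n_samples, n_params))
    for j in range(n_params):
        strata = rng.permutation(n_samples)              # which stratum each sample uses
        u[:, j] = (strata + rng.random(n_samples)) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
bounds = [(0.1, 10.0), (0.0, 1.0), (1.0, 200.0)]         # hypothetical parameter ranges
params = latin_hypercube(2000, bounds, rng)

# GLUE step (model runs omitted): score each sampled parameter set with a likelihood
# measure, keep the 'behavioural' runs, and take quantiles of their simulated flows
# as uncertainty bounds on model output.
# sims = np.array([run_model(p) for p in params])
# likelihood = np.array([nash_sutcliffe(obs_flow, s) for s in sims])
# behavioural = sims[likelihood > 0.5]
# lower, upper = np.percentile(behavioural, [5, 95], axis=0)
```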

3.
Model parameterization through adjustment to field data is a crucial step in modeling and understanding the drainage network response to tectonic or climatic perturbations. Using as a test case a data set of 18 knickpoints that materialize the migration of a 0.7-Ma-old erosion wave in the Ourthe catchment of the northern Ardennes (western Europe), we explore the impact of different data-fitting strategies on the calibration of the stream power model of river incision, from which a simple knickpoint celerity equation is derived. Our results show that statistical least squares adjustments (or misfit functions) based either on the stream-wise distances between observed and modeled knickpoint positions at time t, or on the differences between observed and modeled time at the actual knickpoint locations, yield significantly different values for the m and K parameters of the model. As there is no physical reason to prefer one of these approaches, an intermediate least-rectangles adjustment might at first glance appear to be the best compromise. However, the statistics of the analysis of 200 sets of synthetic knickpoints generated in the Ourthe catchment indicate that the time-based adjustment is the most capable of getting close to the true parameter values. Moreover, this fitting method leads in all cases to an m value lower than that obtained from the classical distance adjustment (for example, 0.75 against 0.86 for the real case of the Ourthe catchment), corresponding to an increase in the non-linear character of the dependence of knickpoint celerity on discharge.
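For context, the standard way a knickpoint celerity equation follows from the detachment-limited stream power law is sketched below in generic notation; the paper's exact formulation (for instance, expressing celerity through discharge rather than drainage area) may differ.

```latex
\frac{\partial z}{\partial t} = U - K A^{m}\left(-\frac{\partial z}{\partial x}\right)^{n}
\quad\Longrightarrow\quad
\frac{dx_{\mathrm{kp}}}{dt} = K A(x)^{m} S^{\,n-1}
\;\overset{n=1}{=}\; K A(x)^{m},
\qquad
t(x) = \int_{x_{0}}^{x} \frac{dx'}{K\,A(x')^{m}},
```

where S is the local channel slope, so that for n = 1 the time needed for a knickpoint to travel from x0 to x depends only on K, m and the along-stream distribution of drainage area A.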

4.
The thermal structure of a sedimentary basin is controlled by its thermal conductivity, its boundary conditions, water flow, rate of sedimentation and erosion and radiogenic heat sources. The radiogenic heat production in the sediments is known to vary over several orders of magnitude, with the lowest values in evaporites and carbonates and the highest values in black shales. Due to a paucity of information available on the existing heat sources, this parameter can be represented with a known mean value and a Gaussian correlation structure rather than a deterministic function. In this paper, the 1-D steady-state thermal structure in a sedimentary basin has been modelled in a stochastic framework with a random radiogenic heat source, and analytical expressions for the first two moments of the temperature field have been obtained. A synthetic example has been examined to quantify the error bounds on the temperature field due to uncertainties in the radiogenic heat sources.
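The paper derives analytical moment expressions; as a rough cross-check of the idea only, the sketch below estimates the same first two moments by brute-force Monte Carlo for an assumed column, conductivity, basal heat flow and heat-production statistics (all values are placeholders).

```python
import numpy as np

rng = np.random.default_rng(2)
nz, L = 200, 5000.0                       # grid points, column thickness (m)
z = np.linspace(0.0, L, nz)
dz = z[1] - z[0]
k = 2.5                                   # thermal conductivity (W m-1 K-1), assumed constant
T_top, q_base = 10.0, 0.06                # surface temperature (degC), basal heat flow (W m-2)

# Radiogenic heat production A(z): known mean plus a Gaussian (exponential-covariance) field.
A_mean, A_std, corr_len = 1.5e-6, 0.5e-6, 500.0          # W m-3, W m-3, m
cov = A_std ** 2 * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
chol = np.linalg.cholesky(cov + 1e-6 * A_std ** 2 * np.eye(nz))

def temperature(A):
    """Steady 1-D conduction: q(z) = q_base + integral of A below z, and dT/dz = q/k."""
    q = q_base + np.cumsum(A[::-1])[::-1] * dz
    return T_top + np.concatenate(([0.0], np.cumsum(q[:-1] / k * dz)))

samples = np.array([temperature(A_mean + chol @ rng.standard_normal(nz))
                    for _ in range(500)])
mean_T, std_T = samples.mean(axis=0), samples.std(axis=0)    # first two moments of T(z)
print(round(mean_T[-1], 1), "+/-", round(std_T[-1], 2), "degC at the base")
```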

5.
A method for constructing DEM-based distributed hydrological models
A DEM-based distributed hydrological model is the product of combining modern hydrology with advanced technologies (such as computer technology and the '3S' technologies of remote sensing, GIS and GPS). It is an ideal tool for studying the hydrological cycle and the evolution of water resources under a changing environment, and it represents the latest direction in the development of hydrological models. Starting from the characteristics of the DEM, this paper discusses and summarizes the features of distributed hydrological models, the general modelling approach, and the basic structural framework of such a model. For watershed discretization, the three unit-delineation methods most commonly used in distributed hydrological models are introduced. Finally, with respect to model construction, the detailed structure of a distributed hydrological model is described in terms of three components: the input module, the unit hydrological model, and the river-network flow-routing model.
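Watershed discretization from a DEM usually starts from a flow-direction grid; the sketch below shows the simplest D8 scheme as an illustration (the paper surveys several unit-delineation methods and does not prescribe this particular one).

```python
import numpy as np

def d8_flow_direction(dem, cellsize=1.0):
    """Minimal D8 scheme: each interior cell drains to the neighbour with the steepest
    downward slope (diagonal distances scaled by sqrt(2)); -1 marks pits and edge cells.
    The returned codes index the offsets list below."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            best, best_slope = -1, 0.0
            for code, (di, dj) in enumerate(offsets):
                dist = cellsize * (2 ** 0.5 if di and dj else 1.0)
                slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                if slope > best_slope:
                    best, best_slope = code, slope
            direction[i, j] = best
    return direction

dem = np.array([[5., 5., 5., 5.],
                [5., 4., 3., 5.],
                [5., 3., 1., 5.],
                [5., 5., 0., 5.]])
print(d8_flow_direction(dem))
```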

6.
We present simulations of large-scale landscape evolution on tectonic time scales obtained from a new numerical model which allows for arbitrary spatial discretization. The new method makes use of efficient algorithms from the field of computational geometry to compute the set of natural neighbours of any irregular distribution of points in a plane. The natural neighbours are used to solve geomorphic equations that include erosion/deposition by channelled flow and diffusion. The algorithm has great geometrical flexibility, which makes it possible to solve problems involving complex boundaries, radially symmetrical uplift functions and horizontal tectonic transport across strike-slip faults. The algorithm is also ideally suited for problems which require large variations in spatial discretization and/or self-adaptive meshing. We present a number of examples to illustrate the power of the new approach and its advantages over more 'classical' models based on regular (rectangular) discretization. We also demonstrate that the synthetic river networks and landscapes generated by the model obey the laws of network composition and have scaling properties similar to those of natural landscapes. Finally we explain how orographically controlled precipitation and flexural isostasy may be easily incorporated in the model without sacrificing efficiency.
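As a point of reference only (this is not the paper's own algorithm), an off-the-shelf Delaunay triangulation already yields the natural-neighbour connectivity of an irregular set of nodes, which is the structure the geomorphic equations are solved on:

```python
import numpy as np
from scipy.spatial import Delaunay

# Natural neighbours of a node in an irregular point set are the nodes it is connected
# to in the Delaunay triangulation (equivalently, those whose Voronoi cells share an
# edge with its cell).
rng = np.random.default_rng(3)
points = rng.random((500, 2))
tri = Delaunay(points)

indptr, indices = tri.vertex_neighbor_vertices
def natural_neighbours(node):
    return indices[indptr[node]:indptr[node + 1]]

print(natural_neighbours(0))   # the nodes that exchange flux with node 0
```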

7.
Summary. A travel-time curve for P seismic waves recorded at NORSAR from earthquakes in the North Atlantic and Arctic Oceans is of a significantly different character from those for rays bottoming under western Russia and southeast and central Europe. The differences arise principally from variations in the outer 200–300 km of the three regions and from the apparently anomalous nature of the velocity distribution between 300 and 500 km beneath southern and central Europe. Extremal 'tau' inversion is extended to the calculation of bounds on vertical transit time for different depth ranges beneath the three regions. A maximum difference of 5 s is permitted by the bounds in the two-way vertical transit times of P waves between 50 and 800 km below western Russia and the oceans. The bounds obtained on transit times between 300 and 800 km demand no significant difference between the two regions and permit a maximum difference of 2.5 s in two-way transit time. This is consistent with the observation that the oceanic travel-time curve may be fitted to within observational error by a model which is substantially the same as that for western Russia below 300 km.

8.
In Paper I (Breuer & Wolf 1995), a preliminary interpretation of the postglacial land emergence observed at a restricted set of six locations in the Svalbard Archipelago was given. The study was based on a simple model of the Barents Sea ice sheet and suggested increases in lithosphere thickness and asthenosphere viscosity with increasing distance from the continental margin.
In the present paper, the newly developed high-resolution load model, BARENTS-2, and land-uplift observations from an extended set of 25 locations are used to study further the possibility of resolving lateral heterogeneity in the upper mantle below the northern Barents Sea. A comparison of the calculated and observed uplift values shows that the lithosphere thickness is not well resolved by the observations, although values above 110 km are most common for this parameter. In contrast to this, there are indications of a lateral variation of asthenosphere viscosity. Whereas values in the range 10¹⁸–10²⁰ Pa s are inferred for locations close to the continental margin, 10²⁰–10²¹ Pa s are suggested further away from the margin.
A study of the sensitivity of the values found for lithosphere thickness and asthenosphere viscosity to modifications of load model BARENTS-2 shows that such modifications can be largely accommodated by appropriate changes in lithosphere thickness, whereas the suggested lateral variation of asthenosphere viscosity is essentially unaffected. An estimate of the influence of the Fennoscandian ice sheet leads to the conclusion that its neglect results in an underestimation of the thickness of the Barents Sea ice sheet by about 10 per cent.

9.
Mineral-potential mapping is the process of combining a set of input maps, each representing a distinct geo-scientific variable, to produce a single map which ranks areas according to their potential to host mineral deposits of a particular type. The maps are combined using a mapping function that must be either provided by an expert (knowledge-driven approach), or induced from sample data (data-driven approach). Current data-driven approaches using multilayer perceptrons (MLPs) to represent the mapping function have several inherent problems: they are highly sensitive to the selection of training data; they do not utilize the contextual information provided by nondeposit data; and there is no objective interpretation of the values output by the MLP. This paper presents a new approach by which MLPs can be trained to output values that can be interpreted strictly as representing posterior probabilities. Other advantages of the approach are that it utilizes all data in the construction of the model, and thus eliminates any dependence on a particular selection of training data. The technique is applied to mapping gold mineralization potential in the Castlemaine region of Victoria, Australia, and results are compared with a method based on estimating probability density functions.
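A minimal sketch of the key point above, using synthetic layers and scikit-learn rather than the authors' own network and training scheme: an MLP trained with a cross-entropy loss on deposit/non-deposit labels outputs values that can be read as posterior probabilities.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the input maps: each row is a grid cell, each column one
# geoscientific layer; y = 1 where a deposit is known, 0 elsewhere.
rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, 5000) > 2.0).astype(int)

# An MLP trained with cross-entropy on 0/1 targets estimates P(deposit | layers),
# so its output can be interpreted as a posterior probability and mapped directly.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(X, y)
posterior = mlp.predict_proba(X)[:, 1]     # one favourability value per cell
print(posterior.min(), posterior.max())
```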

10.
Summary. Small amplitude oscillations of a rotating, density-stratified fluid bounded by a spherical shell are examined. No restrictions are placed on the thickness of the shell. The internal mode spectrum is examined in the complete rotation-stratification parameter range including the regime that is appropriate for a plausible stratification distribution in the Earth's fluid core. A mathematical model is derived in terms of an eigenvalue PDE of mixed type. The existence of oscillatory solutions is exhibited in the limits of no rotation and no stratification. The frequency spectrum is extended asymptotically away from these limiting cases. A reduction in the complexity of the PDE for modes oscillating at the inertial frequency is exploited. A variational formulation is constructed in which the stratification parameter is treated as an eigenvalue of the system for fixed wave frequency. The spectral information is again extended asymptotically away from these 'accessible' points. Although the PDE reduces to Laplace's tidal equations (LTE) only under stringent parameter restrictions, it is observed that aspects of the behaviour of low frequency LTE modes are reproduced in the general model.

11.
Reduced complexity strategies for modelling urban floodplain inundation
Significant advances in flood inundation modelling have been made in the last decade through the use of a new generation of 2D hydraulic numerical models. These offer the potential to predict the local pattern and timing of flood depth and velocity, enabling informed flood risk zoning and improved emergency planning. With the availability of high resolution DEMs derived from airborne lidar, these models can theoretically now be routinely parameterized to represent considerable topographic complexity, even in urban areas where the potential exists to represent flows at the scale of individual buildings. Currently, however, computational constraints on conventional finite element and volume codes typically require model discretization at scales well below those achievable with lidar and are thus unable to make optimal use of this emerging data stream.
In this paper we review two strategies that attempt to address this mismatch between model and data resolution in an effort to improve urban flood forecasts. The first of these strives for a solution by simplifying the mathematical formulation of the numerical model by using a computationally efficient 2D raster storage cell approach coupled to a 1D channel model. This parsimonious model structure enables simulations over large model domains offering the opportunity to employ a topographic discretization strategy which explicitly represents the built environment. The second approach seeks to further reduce the computational overhead of this raster method by employing a subgrid parameterization to represent the effect of buildings and micro-relief on flow pathways and floodplain storage. This multi-scale methodology enables highly efficient model applications at coarse spatial resolutions while retaining information about the complex geometry of the built environment.
These two strategies are evaluated through numerical experiments designed to reconstruct a flood in the small town of Linton in southern England, which occurred in response to a 1 in 250 year rainfall event in October 2001. Results from both approaches are encouraging, with the spatial pattern of inundation and flood wave propagation matching observations well. Both show significant advantages over a coarse resolution model without subgrid parameterisation, particularly in terms of their ability to reproduce both hydrograph and inundation depth measurements simultaneously, without need for recalibration. The subgrid parameterization is shown to achieve this without contributing significant computational complexity and reduces model run-times by an order of magnitude.
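A heavily simplified, one-dimensional sketch of the raster storage cell idea (Manning-type fluxes between neighbouring cells driven by the water-surface slope); real codes of this family add adaptive time stepping, 2-D neighbourhoods, channel coupling and the subgrid treatment of buildings discussed above.

```python
import numpy as np

def storage_cell_step(dem, depth, dx, dt, n_manning=0.06):
    """One explicit update of a row of raster storage cells: the flux between two
    neighbouring cells follows Manning's equation applied to the water-surface
    slope, and each cell simply stores its net inflow."""
    eta = dem + depth                                     # water-surface elevation
    slope = (eta[:-1] - eta[1:]) / dx                     # positive -> flow to the right
    flow_depth = np.maximum(np.maximum(eta[:-1], eta[1:]) - np.maximum(dem[:-1], dem[1:]), 0.0)
    q = np.sign(slope) * flow_depth ** (5.0 / 3.0) * np.sqrt(np.abs(slope)) / n_manning
    limit = np.where(q > 0, depth[:-1], depth[1:]) * dx / dt   # cannot drain more than is stored
    q = np.clip(q, -limit, limit)
    new_depth = depth.copy()
    new_depth[:-1] -= q * dt / dx
    new_depth[1:] += q * dt / dx
    return new_depth

dem = np.array([2.0, 1.8, 1.6, 1.5, 1.5, 1.4])            # bed elevations (m)
depth = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])          # water released at one end
for _ in range(300):
    depth = storage_cell_step(dem, depth, dx=10.0, dt=1.0)
print(np.round(depth, 3))
```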

12.
Two simple end-member models of a subduction channel have been proposed in the literature: (i) the 'pressure-imposed' model for which the pressure within the channel is assumed to be lithostatic, the channel walls have negligible strength with respect to lateral pressure gradients, and the channel geometry therefore varies with time and (ii) the 'geometry-imposed' model of constant channel geometry, rigid walls and resultant lateral variation in pressure. Neither of these models is realistic, but they provide lower and upper bounds to potential pressure distributions in natural subduction zones. The critical parameter is the relative strength of the confining plates, reflected in the effective viscosity ratio between the channel fill and the walls. The assertion that the 'geometry-imposed' model is internally inconsistent is incorrect—it merely represents one bound to possible behaviour and a bound that may be approached for realistic values of the effective viscosity for weak channel fill (e.g. unconsolidated ocean-floor sediments) and relatively cold and strong subducting and overriding lithospheric plates.

13.
Summary. The inverse problem of using static displacements observed at the surface to infer volume changes within the Earth is considered. This problem can be put in a form such that the method of ideal bodies and the method of positivity constraints may both be applied. Thus all of the techniques previously developed for the gravity inverse problem can be extended to the static displacement problem. Given bounds on the depth, the greatest lower bound on the fractional volume change can be estimated, or, given bounds on the fractional volume change, the least upper bound on the depth can be estimated. Methods of placing bounds on generalized moments of the perturbing body are also developed, and techniques of handling errors in the data are discussed.
Examples are given for both two- and three-dimensional problems. The ideal body method is suited for both 2- and 3-D problems when only two data points are considered, but is unwieldy for more data points. The method of positivity constraints is more versatile and can be used when there are many data points in the case of 2-D problems, but it may lead to an excessive amount of computation in 3-D problems.
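To make the positivity-constraint idea concrete, the toy sketch below bounds the total volume change of non-negative sources from a handful of noisy "surface displacement" data; the kernel, noise level and geometry are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy static-displacement inverse problem: surface displacement d_i produced by
# non-negative volume-change sources m_j in fixed subsurface cells via d = G m.
rng = np.random.default_rng(5)
n_data, n_cells = 6, 40
G = np.abs(rng.normal(1.0, 0.3, size=(n_data, n_cells))) / n_cells   # hypothetical kernel
m_true = np.zeros(n_cells)
m_true[10:14] = 1.0
d = G @ m_true
tol = 0.05 * np.abs(d)                     # allowance for observational error

# Positivity-constrained bounds on the total volume change sum(m):
# minimise (or maximise) sum(m) subject to |G m - d| <= tol and m >= 0.
A_ub = np.vstack([G, -G])
b_ub = np.concatenate([d + tol, -(d - tol)])
lower = linprog(c=np.ones(n_cells), A_ub=A_ub, b_ub=b_ub).fun
upper = -linprog(c=-np.ones(n_cells), A_ub=A_ub, b_ub=b_ub).fun
print(round(lower, 2), round(upper, 2), round(m_true.sum(), 2))   # bounds bracket the truth
```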

14.
Measurement of dispersed vitrinite reflectance in organic sediments is one of the few regional data sets used for placing bounds on the thermal history of a sedimentary basin. Reflectance data are important when access to complementary information such as high-quality seismic data is unavailable to place bounds on subsidence history and in locations where uplift is an important part of the basin history. Attributes which make vitrinite reflectance measurements a useful data set are the relative ease of making the measurement, and the availability of archived well cores and cuttings in state, provincial, and federal facilities. In order to fully utilize vitrinite data for estimating the temperature history in a basin, physically based methods are required to calibrate an equivalent reflectance from a modelled temperature history with measured data. The most common method for calculating a numerical vitrinite reflectance from temperature history is the EASY%Ro method, which we show systematically underestimates measured data. We present a new calculated reflectance model and an adjustment to EASY%Ro which makes the correlation between measured vitrinite values and calculated vitrinite values a physical relationship and more useful for constraining thermal models. We then show that calibrating the thermal history to vitrinite on a constant age date surface (e.g., top Cretaceous) instead of calibrating the thermal history in depth removes the heating rate component from the reflectance calculation and makes thermal history calibration easier to understand and more directly related to heat flow. Finally, we use bounds on the vitrinite–temperature relationships on a constant age date surface to show that significant uncertainty exists in the vitrinite data reported in most data sets.

15.
The horizontal soil column method was used to measure the soil water diffusivity of a typical clay loam in the Yangling area. Single-log and double-log models of soil water diffusivity were fitted to the measurements, a single-parameter model of soil water diffusivity was established, and a BP neural network model for parameter B of the single-parameter model was built on the basis of principal component analysis. The results show that principal component analysis can combine the soil bulk density, organic matter content, clay content, coarse silt content and sand content of the study area into three principal components, and that the root mean square error (RMSE) of parameter B fitted by the PCA-based BP neural network model is 0.3082. When the fitted parameter B is substituted into the single-parameter model to predict soil water diffusivity, the predictions are close to the measured values (the largest values are somewhat underestimated), with an RMSE of 0.2578. The PCA-based BP neural network model can therefore be used to predict parameter B of the single-parameter model.
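A compact sketch of the PCA-plus-BP-network step using scikit-learn and synthetic soil data; the feature values, network size and printed result are illustrative stand-ins for the measured properties and the reported RMSE values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the measured soil properties (bulk density, organic matter,
# clay, coarse silt and sand contents) and the fitted single-parameter value B.
rng = np.random.default_rng(6)
X = rng.normal(size=(60, 5))
B = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0.0, 0.1, 60)

# Reduce the five correlated properties to three principal components, then fit a
# small BP (back-propagation) network to predict B, mirroring the workflow above.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, B)
rmse = np.sqrt(np.mean((model.predict(X) - B) ** 2))
print(round(rmse, 4))
```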

16.
The relationship between two or more variables may change over the geographic space. The change can be in parameter values (e.g., regression coefficients) or even in relation forms (e.g., linear, quadratic, or exponential). Existing local spatial analysis methods often assume a relationship form (e.g., a linear regression model) for all regions and focus only on the change in parameter values. Therefore, they may not be able to discover local relationships of different forms simultaneously. This research proposes a nonparametric approach, a local entropy map, which does not assume a prior relationship form and can detect the existence of multivariate relationships regardless of their forms. The local entropy map calculates an approximation of the Rényi entropy for the multivariate data in each local region (in the geographic space). Each local entropy value is then converted to a p-value by comparing to a distribution of permutation entropy values for the same region. All p-values (one for each local region) are processed by several statistical tests to control the multiple-testing problem. Finally, the testing results are mapped and allow analysts to locate and interactively examine significant local relationships. The method is evaluated with a series of synthetic data sets and a real data set.
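The sketch below illustrates the core mechanism for a single local region: estimate a (quadratic) Rényi entropy, then convert it to a p-value against permuted data. It uses a simple Parzen-window estimator and no multiple-testing correction, so it is a conceptual stand-in rather than the paper's estimator.

```python
import numpy as np
from scipy.spatial.distance import cdist

def renyi2_entropy(X, sigma=0.5):
    """Parzen-window estimate of the quadratic (alpha = 2) Renyi entropy,
    up to an additive constant: -log(mean pairwise Gaussian kernel)."""
    sq = cdist(X, X, "sqeuclidean")
    return -np.log(np.mean(np.exp(-sq / (4.0 * sigma ** 2))))

def local_p_value(X, n_perm=999, rng=None):
    """p-value for 'the variables in this region are related': permuting each column
    independently destroys any multivariate relationship while keeping the marginals,
    and related data should show lower entropy than its permutations."""
    rng = rng or np.random.default_rng()
    h_obs = renyi2_entropy(X)
    h_perm = np.empty(n_perm)
    for b in range(n_perm):
        Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
        h_perm[b] = renyi2_entropy(Xp)
    return (1 + np.sum(h_perm <= h_obs)) / (1 + n_perm)

rng = np.random.default_rng(7)
t = rng.normal(size=80)
related = np.column_stack([t, t ** 2 + rng.normal(0.0, 0.1, 80)])    # non-linear relation
unrelated = rng.normal(size=(80, 2))
print(local_p_value(related, rng=rng), local_p_value(unrelated, rng=rng))
```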

17.
Automatically obtaining the parameters of CA models with a genetic algorithm
杨青生  黎夏 《地理研究》2007,26(2):229-237
This paper proposes a method of using genetic algorithms to find optimal parameters for cellular automata (CA) models. CA are increasingly used for the dynamic simulation of complex systems such as cities and land use. The parameter values of the variables in a CA model have a decisive influence on the simulation results, so obtaining suitable parameter values is the key to the model. The conventional logistic-regression approach is computationally simple and is often used to obtain these parameters, but it requires the explanatory variables to be linearly independent, so the urban CA parameters it yields have certain limitations. Genetic algorithms have great advantages in optimizing parameter combinations and searching parameter space quickly. This paper uses a genetic algorithm to automatically obtain optimized CA parameter values and thereby a corrected CA model. The model was applied to simulating urban development in Dongguan from 1988 to 2004 with good results. The results show that genetic algorithms can effectively and automatically obtain CA model parameters, and that the simulation accuracy is higher than that of a CA model calibrated with conventional logistic regression.
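A generic sketch of calibrating the weights of a logistic CA transition rule with a genetic algorithm; the driving variables, fitness measure and GA settings are invented for illustration and are much simpler than the paper's model of Dongguan.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical calibration data: one row per cell, columns are driving variables
# (distance to roads, to town centres, slope, neighbourhood density, ...); the label
# records whether the cell actually converted to urban land in the calibration period.
X = rng.normal(size=(4000, 5))
w_true = np.array([1.2, -0.8, 0.0, 2.0, -1.5])
converted = rng.random(4000) < 1.0 / (1.0 + np.exp(-(X @ w_true - 1.0)))

def fitness(ind):
    """Share of cells whose simulated conversion matches the observed map."""
    p = 1.0 / (1.0 + np.exp(-(X @ ind[:-1] - ind[-1])))   # logistic transition rule
    return np.mean((p > 0.5) == converted)

# Plain generational GA: tournament selection, arithmetic crossover, Gaussian mutation.
pop = rng.normal(size=(60, 6))                            # 5 weights + 1 threshold each
for generation in range(150):
    scores = np.array([fitness(ind) for ind in pop])
    picks = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(scores[picks[:, 0]] >= scores[picks[:, 1]], picks[:, 0], picks[:, 1])]
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1.0 - alpha) * parents[rng.permutation(len(pop))]
    pop = children + rng.normal(0.0, 0.1, children.shape)
best = max(pop, key=fitness)
print(round(fitness(best), 3))
```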

18.
In many cases of model evaluation in physical geography, the observed data to which model predictions are compared may not be error free. This paper addresses the effect of observational errors on the mean squared error, the mean bias error and the mean absolute deviation through the derivation of a statistical framework and Monte Carlo simulation. The effect of bias in the observed values may either decrease or increase the expected values of the mean squared error and mean bias error, depending on whether model and observational biases have the same or opposite signs, respectively. Random errors in observed data tend to inflate the mean squared error and the mean absolute deviation, and also increase the variability of all the error indices considered here. The statistical framework is applied to a real example, in which sampling variability of the observed data appears to account for most of the difference between observed and predicted values. Examination of scaled differences between modelled and observed values, where the differences are divided by the estimated standard errors of the observed values, is suggested as a diagnostic tool for determining whether random observational errors are significant.
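A small Monte Carlo sketch of the two points above: random observational error inflates the mean squared error and mean absolute deviation even when the model is unchanged, and scaled differences (misfit divided by the observation's standard error) indicate whether the remaining disagreement is consistent with observational error alone. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
truth = rng.normal(20.0, 5.0, size=200)            # the unknown true values
predicted = truth + rng.normal(0.5, 1.0, 200)      # model with a small bias and random error

def indices(obs, pred):
    d = pred - obs
    return np.mean(d ** 2), np.mean(d), np.mean(np.abs(d))   # MSE, MBE, MAD

# Monte Carlo: add random observational error of increasing size to the 'observed'
# data and watch MSE and MAD inflate even though the model has not changed.
for obs_sd in (0.0, 1.0, 2.0):
    mse, mbe, mad = np.mean(
        [indices(truth + rng.normal(0.0, obs_sd, 200), predicted) for _ in range(2000)], axis=0)
    print(f"obs error sd={obs_sd}: MSE={mse:.2f}  MBE={mbe:.2f}  MAD={mad:.2f}")

# Scaled differences: divide (modelled - observed) by the standard error of each
# observation. If roughly 95% fall within +/-2, the misfit is consistent with
# observational error alone; a smaller fraction (as here) points to genuine model error.
obs_sd = 1.0
observed = truth + rng.normal(0.0, obs_sd, 200)
scaled = (predicted - observed) / obs_sd
print(np.mean(np.abs(scaled) < 2))
```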

19.
Summary. The convergence of two methods of inferring bounds on seismic velocity in the Earth from finite sets of inexact observations of τ(p) and X(p) is examined: the linear programming (LP) method of Garmany, Orcutt & Parker and the quadratic programming (QP) method of Stark & Parker. The LP method uses strict limits on the observations of τ and X as its data, while QP uses estimated means and variances of τ and X. The approaches are quite similar and involve only one inherent approximation: they use a finite-dimensional representation of seismic velocity within the Earth. Clearly, not every Earth model can be written this way. It is proved that this does not hinder the methods: they may be made as accurate as desired by increasing the number of dimensions in a specified way. It is shown how to get the highest accuracy with a given number of dimensions.

20.
Summary. Using the techniques of linear and quadratic programming, it can be shown that the isostatic response function for the continental United States, computed by Lewis & Dorman (1970), is incompatible with any local compensation model that involves only negative density contrasts beneath topographic loads. We interpret the need for positive densities as indicating that compensation is regional rather than local. The regional compensation model that we investigate treats the outer shell of the Earth as a thin elastic plate, floating on the surface of a liquid. The response of such a model can be inverted to yield the absolute density gradient in the plate, provided the flexural rigidity of the plate and the density contrast between mantle and topography are specified.
If only positive density gradients are allowed, such a regional model fits the United States response data provided the flexural rigidity of the plate lies between 10²¹ and 10²² N m. The fit of the model is insensitive to the mantle/load density contrast, but certain bounds on the density structure can be established if the model is assumed correct. In particular, the maximum density increase within the plate at depths greater than 34 km must not exceed 470 kg m⁻³; this can be regarded as an upper bound on the density contrast at the Mohorovicic discontinuity.
The permitted values of the flexural rigidity correspond to plate thicknesses in the range 5–10 km, yet deformations at depths greater than 20 km are indicated by other geophysical data. We conclude that the plate cannot be perfectly elastic; its effective elastic moduli must be much smaller than the seismically determined values. Estimates of the stress-differences produced in the Earth by topographic loads that use the elastic plate model, together with seismically determined elastic parameters, will be too large by a factor of four or more.
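As a quick consistency check on the numbers quoted above, converting the permitted flexural rigidities to an effective elastic thickness with the thin-plate relation D = E·Te³ / (12(1 − ν²)) and typical assumed elastic constants does give thicknesses of roughly 5–10 km:

```python
# Effective elastic thickness implied by a flexural rigidity D (thin elastic plate):
#   D = E * Te**3 / (12 * (1 - nu**2))   =>   Te = (12 * (1 - nu**2) * D / E) ** (1/3)
E, nu = 1.0e11, 0.25                      # assumed Young's modulus (Pa) and Poisson's ratio
for D in (1e21, 1e22):                    # the permitted range quoted in the abstract, N m
    Te = (12 * (1 - nu ** 2) * D / E) ** (1.0 / 3.0)
    print(f"D = {D:.0e} N m  ->  Te ~ {Te / 1000:.1f} km")
# Prints roughly 4.8 km and 10.4 km, consistent with the 5-10 km range quoted above.
```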

