Similar Documents
20 similar documents retrieved.
1.
The nearest neighbor search algorithm is one of the major factors influencing the efficiency of grid interpolation. This paper introduces the KD-tree, a two-dimensional index structure, for use in grid interpolation. It also proposes an improved J-nearest neighbor search strategy based on the "priority queue" and "neighbor lag" concepts. Within the strategy, two types of J-nearest neighbor search algorithms can be used, corresponding to a fixed number of points and a fixed radius, respectively. Using the KD-tree and the proposed strategy, interpolation can be performed with methods such as Inverse Distance Weighting and Kriging. Experimental results show that the proposed algorithms have high operating efficiency, especially when the volume of data is enormous, and high practical value for improving the efficiency of grid interpolation.
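A minimal sketch of the fixed-number variant of J-nearest-neighbor IDW gridding, using scipy's cKDTree as the two-dimensional index rather than the paper's own priority-queue implementation; the synthetic stations, grid, neighbor count and power exponent are illustrative assumptions. The fixed-radius variant would use `query_ball_point` instead of `query`.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_grid(stations, values, grid_xy, j=12, power=2.0):
    """Inverse Distance Weighting at grid nodes using the J nearest stations.

    stations : (n, 2) array of sample coordinates
    values   : (n,)  array of sampled values
    grid_xy  : (m, 2) array of grid-node coordinates
    """
    tree = cKDTree(stations)                 # 2-D spatial index (KD-tree)
    dist, idx = tree.query(grid_xy, k=j)     # J nearest neighbors per node
    dist = np.maximum(dist, 1e-12)           # avoid division by zero at stations
    w = 1.0 / dist**power                    # inverse-distance weights
    return np.sum(w * values[idx], axis=1) / np.sum(w, axis=1)

# Illustrative use: 1000 random stations interpolated onto a 100 x 100 grid
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(1000, 2))
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])
gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
grid = idw_grid(pts, vals, np.column_stack([gx.ravel(), gy.ravel()]))
```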

2.
Some commonly used interpolation algorithms are analyzed briefly in this paper. Among these methods, biharmonic spline interpolation, which is based on Green's function and was proposed by Sandwell, has become a mainstream method owing to its high precision, simplicity and flexibility. However, the minimum curvature method has two flaws. First, it suffers from undesirable oscillations between data points, which is solved by interpolation with splines in tension. Second, the computation time is approximately proportional to the cube of the number of data constraints, making the method slow in situations with dense data coverage. Focusing on the second problem, this paper introduces the moving surface spline interpolation method based on Green's function, and the interpolation error equations are deduced. Because the proposed method selects only the nearest data points (using a merge sort algorithm) for interpolation, the computation time is greatly decreased. The optimal number of nearest points can be determined from the interpolation error estimation equation. No matter how many data points there are, this method can be implemented without difficulty. Examples show that the proposed method achieves high interpolation precision and high computation speed at the same time.
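The following sketch illustrates the moving-surface idea under stated assumptions: the 2-D biharmonic Green's function g(r) = r^2 (ln r - 1) is used, and the nearest data points are found with a KD-tree query rather than the paper's merge-sort selection; the neighbor count and least-squares solve are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def green(r):
    """2-D biharmonic Green's function g(r) = r^2 (ln r - 1), with g(0) = 0."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

def moving_green_spline(xy, d, xq, k=20):
    """Interpolate at query points xq using only the k nearest data points."""
    tree = cKDTree(xy)
    zq = np.empty(len(xq))
    for i, q in enumerate(xq):
        _, idx = tree.query(q, k=k)                  # nearest data constraints only
        p = xy[idx]
        G = green(np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1))
        w = np.linalg.lstsq(G, d[idx], rcond=None)[0]  # k x k solve, not n x n
        zq[i] = green(np.linalg.norm(p - q, axis=1)) @ w
    return zq
```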

3.
The objective of this paper is to introduce a novel paradigm that reduces the computational effort in waterflooding global optimization problems while producing smooth well-control trajectories amenable to practical deployment in the field. To overcome the problems of slow convergence and non-smooth, impractical control strategies often associated with gradient-free optimization (GFO) methods, we introduce a generalized approach that represents the controls by smooth polynomial approximations, either a polynomial function or a piecewise polynomial interpolation, which we denote the function control method (FCM) and the interpolation control method (ICM), respectively. Using these approaches, we optimize the coefficients of the selected functions or the interpolation points in order to represent the well-control trajectories along a time horizon. Our results demonstrate significant computational savings, due to a substantial reduction in the number of control parameters, as we seek the optimal polynomial coefficients or interpolation points describing the control trajectories instead of directly searching for the optimal control values (bottom-hole pressure) at each time interval. We demonstrate the efficiency of the method on two- and three-dimensional models, where we found the optimal variables using a parallel dynamic-neighborhood particle swarm optimization (PSO). We compared our FCM-PSO and ICM-PSO to the traditional formulation solved by both gradient-free and gradient-based methods. In all comparisons, both FCM and ICM show very good to superior performance.
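A toy sketch of how FCM and ICM shrink the control space: a few polynomial coefficients or knot values expand into a full bottom-hole-pressure schedule, so the optimizer never parameterizes each time step directly. The step count, pressure bounds, and the piecewise-linear interpolation used here (the paper's ICM uses piecewise polynomials) are assumptions for illustration.

```python
import numpy as np

N_STEPS = 60                        # control time steps in the simulator (assumed)
BHP_MIN, BHP_MAX = 1500.0, 4000.0   # assumed bottom-hole-pressure bounds (psi)

def fcm_controls(coeffs):
    """Function Control Method: polynomial coefficients -> smooth BHP trajectory."""
    t = np.linspace(0.0, 1.0, N_STEPS)       # normalized time horizon
    return np.clip(np.polyval(coeffs, t), BHP_MIN, BHP_MAX)

def icm_controls(knot_values):
    """Interpolation Control Method: a few knot values -> interpolated BHP trajectory."""
    t = np.linspace(0.0, 1.0, N_STEPS)
    knots = np.linspace(0.0, 1.0, len(knot_values))
    return np.clip(np.interp(t, knots, knot_values), BHP_MIN, BHP_MAX)

# A PSO particle now carries only 4 coefficients (or a handful of knots) per well
# instead of 60 separate BHP values, which is where the savings come from.
traj = fcm_controls(np.array([300.0, -500.0, 200.0, 2500.0]))
```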

4.
In kriging problems with huge numbers of estimation points and measurements, computational power and storage capacity often pose heavy limitations on the maximum manageable problem size. In the past, a number of FFT-based algorithms for matrix operations have been developed. They allow extremely fast convolution, superposition and inversion of covariance matrices under certain conditions. If used appropriately in kriging problems, these algorithms lead to drastic speedups and reductions in storage requirements without changing the kriging estimator. However, they require second-order stationary covariance functions, estimation on regular grids, and measurements that also form a regular grid. In this study, we show how to alleviate these rather heavy and often unrealistic restrictions. Stationarity can be generalized to intrinsicity and beyond by decomposing kriging problems into the sum of a stationary problem and a formally decoupled regression task. We use universal kriging, because it covers arbitrary forms of unknown drift and all cases of generalized covariance functions. More generally, we use an extension to uncertain rather than unknown drift coefficients. The sampling locations may now be irregular, but must form a subset of the estimation grid. Finally, we present asymptotically exact but fast approximations to the estimation variance and point out applications to conditional simulation, cokriging and sequential kriging. The drastic gain in computational and storage efficiency is demonstrated in test cases. Especially high-resolution and data-rich fields, such as rainfall interpolation from radar measurements or seismic and other geophysical inversion, can benefit from these improvements.
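The speedup rests on the fact that a stationary covariance matrix on a regular grid is (block) Toeplitz, so it can be embedded in a circulant matrix whose matrix-vector product is a circular convolution evaluated with the FFT. A one-dimensional sketch, with an assumed exponential covariance and illustrative sizes:

```python
import numpy as np

def covariance_matvec_fft(cov_embed, v_embed):
    """Multiply a stationary covariance matrix by a vector via FFT.

    cov_embed : covariance C(h) evaluated on the periodically embedded lags
    v_embed   : vector zero-padded onto the same embedding
    The product of a circulant matrix with a vector is a circular convolution.
    """
    return np.real(np.fft.ifftn(np.fft.fftn(cov_embed) * np.fft.fftn(v_embed)))

# 1-D example: exponential covariance on a regular grid of n nodes, embedded to
# length 2n so the circular convolution reproduces the Toeplitz product exactly.
n, corr_len, sill = 256, 20.0, 1.0
lags = np.minimum(np.arange(2 * n), 2 * n - np.arange(2 * n))   # circulant lags
c_embed = sill * np.exp(-lags / corr_len)                       # C(h) = s*exp(-|h|/a)
v = np.zeros(2 * n)
v[:n] = np.random.default_rng(1).normal(size=n)
Cv = covariance_matvec_fft(c_embed, v)[:n]   # equals (dense C) @ v[:n] in O(n log n)
```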

5.
Spatial distribution of precipitation based on the PER-Kriging interpolation method
By introducing the way the PRISM interpolation handles elevation into ordinary Kriging, a PER-Kriging interpolation method is proposed that jointly accounts for the positions, distances, and elevation relationships of observation points and interpolation points. Introducing a computational reference plane and a mathematical transformation makes the method readily operable. On top of a linear fit between elevation and precipitation, a nonlinear fit is added to further improve the expressive power of PER-Kriging. Precipitation interpolation over the Lancang River basin was carried out with both PER-Kriging and ordinary Kriging. The results show that the former effectively removes interpolation anomalies caused by large terrain differences between an observation point and its surroundings and largely eliminates the smoothing effect of the interpolation, reducing the mean error by more than 20 mm compared with the latter; the relative errors of the different curve-fitting interpolations are all below 12%.

6.
Using digital elevation models (DEMs), viewshed analysis algorithms determine the visibility of each point on the terrain from a given location in space. As a data-parallel algorithm, real-time viewshed analysis from a grid DEM poses a practical challenge to personal computer (PC) users, particularly when dealing with large terrain data of high resolution and accuracy. This paper therefore presents a universal domain decomposition algorithm based on an equal-area strategy for parallel viewshed analysis on a PC cluster system. The approach uses a scan-line filling method for data partitioning of the irregular bounding polygon of the terrain. The terrain data are divided into sectors of equal area that are connected by the viewpoint and the region vertices, ignoring null-value (NODATA) points. Each sector is assigned to one processor and is organized as triples composed of the location and elevation of each point. An index of triples is built to store the locations of terminal vertices row by row, so random access to any point is achieved through the offsets within each row. Two commonly applied viewshed algorithms, namely the "reference plane" and "Xdraw" algorithms, are employed to verify the performance. In addition, two experiments evaluate the efficiency and compare against the traditional implementation, respectively. Experimental results demonstrate a significant performance improvement compared with the sequential computing method. Memory usage gradually decreases as the number of processors increases. Based on the equal-area decomposition, partitioning into sectors guarantees a suitable load balance. Additional benefits of the proposed solution include high storage efficiency and program portability.
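A simplified sketch of the equal-area decomposition: valid (non-NODATA) cells are swept by azimuth around the viewpoint and split into sectors holding equal numbers of cells, one per processor. The scan-line filling of the bounding polygon and the row-wise triple index are not reproduced here, and the DEM, NODATA value, and viewpoint are hypothetical.

```python
import numpy as np

def equal_area_sectors(dem, viewpoint, n_proc, nodata=-9999.0):
    """Partition valid DEM cells into sectors around the viewpoint so that
    every sector (processor) receives the same number of non-NODATA cells."""
    rows, cols = np.indices(dem.shape)
    valid = dem != nodata                                  # ignore NODATA cells
    # azimuth of each valid cell as seen from the viewpoint
    ang = np.arctan2(rows[valid] - viewpoint[0], cols[valid] - viewpoint[1])
    order = np.argsort(ang)                                # sweep cells by azimuth
    cells = np.column_stack([rows[valid], cols[valid], dem[valid]])[order]
    return np.array_split(cells, n_proc)                   # equal-count sectors

# Illustrative use: split a 1000 x 1000 grid DEM into 8 sectors for 8 workers;
# each row of a sector is a (row, col, elevation) triple as described in the text.
dem = np.random.default_rng(2).uniform(0, 500, (1000, 1000))
sectors = equal_area_sectors(dem, viewpoint=(500, 500), n_proc=8)
```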

7.
Euler deconvolution is a semi-automatic interpretation method that is frequently used with magnetic and gravity data. For a given source type, which is specified by its structural index (SI), it provides an estimate of the source location. It is demonstrated here that by computing the solution space of individual data points and selecting common source locations, the accuracy of the result can be improved. Furthermore, only a slight modification of the method is necessary to obtain solutions for any number of different SIs simultaneously. The method is applicable to both evenly and unevenly sampled geophysical data and is demonstrated on gravity and magnetic data. Source code (in Matlab format) is available from www.iamg.org.
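For reference, the conventional windowed least-squares form of Euler's homogeneity equation, (x - x0)·dT/dx + (y - y0)·dT/dy + (z - z0)·dT/dz = N(B - T), can be sketched as below. This is the standard formulation (in Python rather than the paper's Matlab), not the paper's solution-space intersection, and the gradient arrays are assumed to be precomputed.

```python
import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, si):
    """Classic least-squares Euler deconvolution over one data window.

    x, y, z    : observation coordinates within the window
    T          : field values (magnetic or gravity anomaly)
    Tx, Ty, Tz : field gradients
    si         : structural index N of the assumed source type
    Returns [x0, y0, z0, B] (source location and background level).
    """
    # Euler's equation rearranged into a linear system for (x0, y0, z0, B):
    #   x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
    A = np.column_stack([Tx, Ty, Tz, si * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + si * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```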

8.
Of concern in the development of oil fields is the problem of determining the optimal locations of wells and the optimal controls to place on them. Extracting hydrocarbon resources from petroleum reservoirs in a cost-effective manner requires that the producers and injectors be placed at optimal locations and that optimal controls be imposed on the wells. While the optimization of well locations and well controls plays an important role in ensuring that the net present value of the project is maximized, optimization of other factors such as well type and number of wells also plays an important role in increasing the profitability of investments. Until very recently, improving the net worth of hydrocarbon assets focused primarily on optimizing the well locations or well controls, mostly manually. In recent times, automatic optimization using either gradient-based algorithms or stochastic (global) optimization algorithms has become increasingly popular. A well-control zonation (WCZ) approach to estimating optimal well locations, well rates, well type, and well number is proposed. Our approach uses a set of well coordinates and a set of well-control variables as the optimization parameters. However, one of the well-control variables has its search range extended to cover three parts: one part denoting the region where the well is an injector, a second part denoting the region where there is no well, and a third part denoting the region where the well is a producer. In this way, the optimization algorithm is able to match every member of the set of well coordinates to one of three possibilities within the search space of well controls: an injector, a no-well situation, or a producer. The optimization was performed using differential evolution, and two sample applications are presented to show the effectiveness of the method. The results show that the method is able to reduce the number of optimization variables needed and to simultaneously identify optimal well locations, optimal well controls, optimal well type, and the optimum number of wells. Comparison with the mixed-integer nonlinear programming (MINLP) approach shows that the WCZ approach mostly outperformed the MINLP approach.
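A sketch of the extended-range control variable at the heart of the WCZ parameterization: one number per candidate well is decoded into injector, no-well, or producer plus a normalized rate, which is what lets differential evolution handle well type and well count without integer variables. The equal three-way split, the bounds, and the normalized rates are illustrative assumptions.

```python
import numpy as np

def decode_well(control_value, lo=-1.0, hi=2.0):
    """Map one extended control variable onto (well type, normalized control).

    The search range [lo, hi] is split into three equal parts:
    injector / no well / producer, mirroring the WCZ idea described above.
    """
    third = (hi - lo) / 3.0
    if control_value < lo + third:
        return "injector", (control_value - lo) / third            # 0..1 injection level
    if control_value < lo + 2 * third:
        return "no_well", None                                      # drop this candidate
    return "producer", (control_value - (lo + 2 * third)) / third   # 0..1 production level

# Illustrative: differential evolution proposes one value per candidate well
for v in [-0.7, 0.4, 1.8]:
    print(decode_well(v))
```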

9.
The 2.5D finite element method is increasingly used to study the dynamic response of railway subgrades. To address the sharp drop in computational efficiency when solving the subgrade dynamic response under random track irregularities, a fast computation framework for the 2.5D finite element subgrade dynamic response is constructed based on two-dimensional reduced-order Hermite interpolation. Interpolation principles are determined from the basic characteristics of the subgrade dynamic response in the frequency-wavenumber domain, and the influence of the distribution and number of interpolation points on interpolation accuracy is discussed. The study shows that two-dimensional reduced-order Hermite interpolation enables fast computation of the subgrade dynamic response under random irregularities. Compared with a non-uniform distribution of interpolation points, a uniform distribution balances the interpolation accuracy of amplitude and phase and is more adaptable. Moreover, the computational efficiency of the method depends only on the number of interpolation points and is unaffected by the number of random-irregularity harmonics, giving it a significant advantage for simulating the subgrade dynamic response under random irregularities.

10.
Landslide susceptibility and hazard assessments are the most important steps in landslide risk mapping. The main objective of this study was to investigate and compare the results of two artificial neural network (ANN) algorithms, i.e., the multilayer perceptron (MLP) and the radial basis function (RBF), for spatial prediction of landslide susceptibility in the Vaz Watershed, Iran. First, landslide locations were identified from aerial photographs and field surveys, and a total of 136 landslide locations were compiled from various sources. The landslide inventory map was then randomly split into a training dataset of 70 % (95 landslide locations) for training the ANN model, with the remaining 30 % (41 landslide locations) used for validation. Nine landslide conditioning factors (slope, slope aspect, altitude, land use, lithology, distance from rivers, distance from roads, distance from faults, and rainfall) were constructed in a geographical information system. In this study, both MLP and RBF algorithms were used in the artificial neural network model. The results showed that the MLP with the Broyden–Fletcher–Goldfarb–Shanno learning algorithm is more efficient than the RBF in landslide susceptibility mapping for the study area. Finally, the landslide susceptibility maps were validated using the validation data (i.e., the 30 % of landslide location data not used during model construction) with the area under the curve (AUC) method. The success rate curve showed that the area under the curve for the RBF and MLP was 0.9085 (90.85 %) and 0.9193 (91.93 %), respectively. Similarly, the validation results showed that the area under the curve for the MLP and RBF models was 0.881 (88.1 %) and 0.8724 (87.24 %), respectively. The results of this study show that landslide susceptibility mapping in the Vaz Watershed of Iran using the ANN approach is viable and can be used for land use planning.
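A minimal reproduction of the workflow shape (not the study's data): an MLP susceptibility classifier trained on a 70/30 split of conditioning factors and scored with AUC. The feature matrix below is random, the nine factors are placeholders, and scikit-learn's 'lbfgs' solver stands in for the BFGS-family learning algorithm named in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical conditioning-factor matrix: one row per mapped cell, nine factors
# (slope, aspect, altitude, land use, lithology, three distances, rainfall) and a
# 0/1 landslide label. A real study would derive X and y from GIS layers.
rng = np.random.default_rng(0)
X = rng.normal(size=(272, 9))
y = rng.integers(0, 2, size=272)

# 70/30 split as in the study design
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# 'lbfgs' is scikit-learn's quasi-Newton solver, in the same family as BFGS
mlp = MLPClassifier(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000,
                    random_state=1).fit(X_tr, y_tr)

# Susceptibility = predicted probability; validation AUC plays the role of the
# area-under-the-curve accuracy reported in the abstract.
auc = roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.3f}")
```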

11.
There has recently been a rapid growth in the amount and quality of digital geological and geophysical data covering the majority of the Australian continent. Coupled with an increase in computational power and the rising importance of computational methods, this opens new possibilities for large-scale, low-expenditure digital exploration for mineral deposits. Here we use a multivariate analysis of geophysical datasets to develop a methodology that uses machine learning algorithms to build and train two-class classifiers for provincial-scale, greenfield mineral exploration. We use iron ore in Western Australia as a case study, and our selected classifier, a mixture-of-Gaussians classifier with a reject option, successfully identifies 88% of iron ore locations and 92% of non-iron ore locations. Parameter optimisation allows the user to choose the suite of variables or parameters, such as the classifier and the degree of dimensionality reduction, that provides the best classification result. We use randomised hold-out to ensure the generalisation of our classifier, and test it against known ground-truth information to demonstrate its ability to detect iron ore and non-iron ore locations. Our classification strategy is based on the heterogeneous nature of the data, where a well-defined target "iron-ore" class is to be separated from a poorly defined non-target class. We apply a classifier with a reject option to known data to create a discriminant function that best separates the sampled data, while simultaneously "protecting" against new unseen data by "closing" the domain in feature space occupied by the target class. This yields a substantial 4% improvement in classification performance. Our predictive confidence maps successfully identify known areas of iron ore deposits throughout the Yilgarn Craton, an area that is not heavily sampled in training, as well as suggesting areas for further exploration across the Yilgarn Craton. These areas tend to be concentrated in the north and west of the Yilgarn Craton, such as around the Twin Peaks mine (~ 27°S, 116°E) and a series of lineaments running east–west at ~ 25°S. Within the Pilbara Craton, potential areas for further expansion occur throughout the Marble Bar vicinity between the existing Spinifex Ridge and Abydos mines (21°S, 119–121°E), as well as in small, isolated areas north of the Hamersley Group at ~ 21.5°S, ~ 118°E. We also test the usefulness of radiometric data for province-scale iron ore exploration; although our selected classifier makes no use of the radiometric data, we demonstrate that there is no performance penalty from including redundant data and features, suggesting that, where possible, all potentially pertinent data should be included in a data-driven analysis. This methodology lends itself to large-scale, reconnaissance mineral exploration, and, by varying the datasets used and the commodity being targeted, predictive confidence maps for a wide range of minerals can be produced.
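A sketch of the reject-option idea under simplifying assumptions: a single Gaussian (rather than the paper's mixture of Gaussians) is fitted to the target class, and samples beyond a Mahalanobis-distance threshold are rejected, which "closes" the target domain in feature space against unseen non-target data. The threshold quantile, feature count, and data are arbitrary.

```python
import numpy as np

class GaussianWithReject:
    """Fit one Gaussian to the target ('iron ore') class and reject any sample
    whose Mahalanobis distance exceeds a threshold learned from the targets."""

    def fit(self, X_target, quantile=0.95):
        self.mu = X_target.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X_target, rowvar=False))
        self.threshold = np.quantile(self._dist2(X_target), quantile)  # reject boundary
        return self

    def _dist2(self, X):
        diff = X - self.mu
        return np.einsum("ij,jk,ik->i", diff, self.cov_inv, diff)

    def predict(self, X):
        # True -> accepted as target class, False -> rejected (non-target / unknown)
        return self._dist2(X) <= self.threshold

# Illustrative: 5 geophysical features over known iron-ore cells
rng = np.random.default_rng(3)
ore = rng.normal(0.0, 1.0, size=(500, 5))
clf = GaussianWithReject().fit(ore)
print(clf.predict(rng.normal(0.0, 1.0, size=(10, 5))))
```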

12.
A general approach to the computation of basic topographic parameters, independent of the spatial distribution of the given elevation data, is developed. The approach is based on an interpolation function with regular first- and second-order derivatives and on the application of basic principles of differential geometry. General equations for the computation of profile, plan, and tangential curvatures are derived. A new algorithm for the construction of slope curves is developed using a combined grid and vector approach. The resulting slope curves fulfill the condition of orthogonality to contours better than those from standard grid algorithms. The presented methods are applied to the topographic analysis of a watershed in central Illinois.
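One common set of formulas for these curvatures, written in terms of the first and second partial derivatives of elevation, is sketched below. Sign conventions differ between authors, and the finite-difference derivatives and the small regularizer on flat cells are implementation choices for illustration, not the paper's interpolation-based derivatives.

```python
import numpy as np

def curvatures(dem, cell=1.0):
    """Profile, tangential and plan curvature from DEM partial derivatives.

    Uses finite differences for the partials; the formulas are one widely used
    variant expressed with gradient magnitude g2 = zx^2 + zy^2.
    """
    p, q = np.gradient(dem, cell)          # first derivatives along rows / columns
    r, s1 = np.gradient(p, cell)           # second derivatives of p
    s2, t = np.gradient(q, cell)           # second derivatives of q
    s = 0.5 * (s1 + s2)                    # symmetrized mixed derivative
    g2 = p**2 + q**2 + 1e-12               # squared gradient (regularized on flats)
    prof = -(p**2 * r + 2 * p * q * s + q**2 * t) / (g2 * (1 + g2) ** 1.5)
    tang = -(q**2 * r - 2 * p * q * s + p**2 * t) / (g2 * np.sqrt(1 + g2))
    plan = -(q**2 * r - 2 * p * q * s + p**2 * t) / g2 ** 1.5
    return prof, tang, plan
```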

13.
A recently developed Bayesian interpolation method (BI) and its application to the safety assessment of a flood defense structure are described in this paper. We use the one-dimensional Bayesian Monte Carlo method (BMC) proposed by Rajabalinejad (2009) to develop a weighted logical dependence between neighboring points. The concept of global uncertainty is explained, and different uncertainty association models (UAMs) are presented for linking the local and global uncertainty. Based on the global uncertainty, a simplified approach is introduced. Applying the global uncertainty, we use Gaussian error estimation for general models and the Generalized Beta (GB) distribution for monotonic models. Our main objective in this research is to simplify the newly developed BMC method and to demonstrate that it can dramatically improve simulation efficiency by using prior information from the outcomes of preceding simulations. We provide theory and numerical algorithms for the BI method geared to multi-dimensional problems, integrate it with a probabilistic finite element model, and apply the coupled models to the reliability assessment of a flood defense for the 17th Street Flood Wall system in New Orleans.

14.
An interpolation method based on a multilayer neural network (MNN) has been examined and tested for data at irregular sample locations. The main advantage of the MNN is that it can deal with geoscience data showing nonlinear behavior and extract characteristics from complex and noisy images. Training the MNN modifies the connection weights between nodes in different layers using a simulated annealing algorithm (one of the optimization algorithms for the network). In this process, three types of errors are considered: differences in values, semivariograms, and gradients between the sample data and the outputs of the trained network. Training continues until the sum of these errors converges to an acceptably small value. Because an MNN trained by this learning criterion can estimate a value at an arbitrary location, the method is a form of kriging and is termed Neural Kriging (NK). To evaluate the effectiveness of NK, a test problem on the ability to restore a defined reference surface from randomly chosen discrete data was prepared. Two types of surfaces, whose semivariograms are expressed by isotropic spherical and geometrically anisotropic Gaussian models, were examined in this problem. Although the interpolation accuracy depended on the arrangement of the sample locations for the same number of data, the interpolation errors of NK were smaller than those of both the ordinary MNN and ordinary kriging. NK can also produce a contour map that honors gradient constraints. Furthermore, NK was applied to a distribution analysis of subsurface temperatures using geothermal investigation logs of the Hohi area in southwest Japan. Despite the restricted quantity of sample data, the interpolation results revealed high-temperature zones and convection patterns of hydrothermal fluids. NK is regarded as a high-accuracy interpolation method that can be used for regionalized variables with any structure of spatial correlation.

15.
Ground velocity records of the 20 May 2016 Petermann Ranges earthquake are used to calculate its centroid moment tensor in the 3-D heterogeneous Earth model AuSREM. The global centroid-moment-tensor solution reported a depth of 12 km, which is the shallowest allowed depth in that algorithm. Solutions from other global and local agencies indicate that the event occurred within the top 12 km of the crust, but the locations vary laterally by up to 100 km. We perform a centroid-moment-tensor inversion through a spatiotemporal grid search in 3-D, allowing for time shifts around the origin time. Our 3-D grid encompasses the locations of all proposed global solutions. The inversion produces an ensemble of solutions that constrain the depth, the lateral location of the centroid, and the strike, dip and rake of the fault. The centroid location stands out with a clear peak in the correlation between real and synthetic data for a depth of 1 km at longitude 129.8° and latitude –25.6°. A collection of acceptable solutions at this centroid location, produced by different time shifts, constrains the fault strike to be 304 ± 4° or 138 ± 1°. The two nodal planes have dip angles of 64 ± 5° and 26 ± 4° and rake angles of 96 ± 2° and 77 ± 5°, respectively. The southwest-dipping nodal plane with the dip angle of 64° could be seen as part of a near-vertical splay fault system at the end of the Woodroffe Thrust. The other nodal plane could be interpreted as a conjugate fault rupturing perpendicular to the splay structure. We speculate that the latter is more likely, since the hypocentres reported by several agencies, including Geoscience Australia, as well as the majority of aftershocks, are all located to the northeast of our preferred centroid location. Our best estimate for the moment magnitude of this event is 5.9. The optimum centroid is located on the 20 km surface rupture caused by the earthquake. Given the estimated magnitude, the long surface rupture requires only ~4 km of rupture down dip, which is in agreement with the shallow centroid depth we obtained.

16.
Digital evaluation of a series of correlative geo-specific maps at different scales, followed by multivariate data processing in the form of entropy and neighbourhood analyses, forms the basis of a comparative selection of soil samples for the environmental specimen bank of the Federal Republic of Germany. By means of regionalisation algorithms, partly developed for or specifically adapted to the present purpose, the optimum location of sampling points was determined, and the result was corroborated on large-scale maps or by visual inspection, respectively. In addition, an index of representativity was defined, grouping soil taxa in terms of acreage and spatial autocorrelation. Once the spatial structure of the soil associations in an area to be sampled is thus determined, variogram analysis, in the second step, contributes to the definitive selection of the most appropriate specimens on the basis of a representative sampling grid and with respect to relevant properties, for instance cation or anion exchange capacities and biodegradability potential.

17.
Sequential Gaussian Simulation (SGSIM), as a stochastic method, has been developed to avoid the smoothing effect produced by deterministic methods by generating multiple stochastic realizations. One of the main issues of this technique is, however, the intensive computation related to the inverse operation in solving the kriging system, which significantly limits its application when several realizations need to be produced for uncertainty quantification. In this paper, a physics-informed machine learning (PIML) model is proposed to improve the computational efficiency of SGSIM. To this end, only a small amount of data produced by SGSIM is used as the training dataset, from which the model can discover the spatial correlations between available data and unsampled points. To achieve this, the governing equations of the SGSIM algorithm are incorporated into the proposed network. The quality of the realizations produced by the PIML model is compared for both 2D and 3D cases, visually and quantitatively. Furthermore, computational performance is evaluated on different grid sizes. Our results demonstrate that the proposed PIML model can reduce the computational time of SGSIM by several orders of magnitude, while similar results can be produced in a matter of seconds.
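For orientation, a bare-bones SGSIM loop is sketched below (normal-score domain, an assumed exponential covariance, simple kriging from the nearest known points along a random path). The per-node kriging solve in this loop is exactly the cost the proposed PIML surrogate is meant to remove; the covariance model, search size, and naive per-node KD-tree rebuild are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def sgsim_realization(grid_xy, data_xy, data_ns, rng, range_=10.0, sill=1.0,
                      n_near=16):
    """One SGSIM realization at the grid nodes (normal-score domain assumed).

    Nodes are visited along a random path; at each node, simple kriging from
    the nearest conditioning + previously simulated points gives the mean and
    variance of a conditional Gaussian, from which the value is drawn.
    """
    def cov(h):                                  # assumed exponential covariance
        return sill * np.exp(-3.0 * h / range_)

    known_xy = [tuple(p) for p in data_xy]
    known_v = list(data_ns)
    sim = np.empty(len(grid_xy))
    for i in rng.permutation(len(grid_xy)):      # random visiting path
        x0 = grid_xy[i]
        tree = cKDTree(np.asarray(known_xy))     # rebuilt each step: the slow part
        k = min(n_near, len(known_xy))
        d, idx = tree.query(x0, k=k)
        d, idx = np.atleast_1d(d), np.atleast_1d(idx)
        pts = np.asarray(known_xy)[idx]
        C = cov(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
        c0 = cov(d)
        w = np.linalg.solve(C + 1e-10 * np.eye(k), c0)   # simple kriging weights
        mean = w @ np.asarray(known_v)[idx]
        var = max(sill - w @ c0, 0.0)
        sim[i] = rng.normal(mean, np.sqrt(var))          # draw conditional value
        known_xy.append(tuple(x0)); known_v.append(sim[i])
    return sim

# e.g. sgsim_realization(grid_nodes, well_xy, normal_scores, np.random.default_rng(0))
```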

18.
To study how sampling and gridding methods affect the mapping accuracy of geophysical data, and to provide guidance for field data acquisition layout, gravity anomaly field values were generated by numerical simulation, and the root-mean-square (RMS) absolute error of the gravity anomaly and the absolute errors at grid nodes were computed for different sampling intervals and different interpolation methods. Comparing the errors of the different interpolation methods leads to the following conclusions: 1) For a given interpolation method, the RMS absolute error at a small sampling interval is smaller than that at a large interval. 2) Across methods, when the sampling interval is smaller than the scale of the smallest anomalous geological body, the RMS absolute error increases in the order radial basis function, modified Shepard, kriging, natural neighbor, inverse distance weighting, nearest neighbor, minimum curvature, with linear-interpolation triangulation giving values almost identical to natural neighbor; when the sampling interval is larger than the scale of the smallest anomalous body, the order is radial basis function, modified Shepard, kriging, natural neighbor, minimum curvature, nearest neighbor, inverse distance weighting, again with linear-interpolation triangulation nearly identical to natural neighbor. 3) In terms of the RMS absolute error, the radial basis function, modified Shepard, and kriging methods give the smallest values, with radial basis function the smallest of all. 4) In terms of the absolute errors at grid nodes, the radial basis function, kriging, and modified Shepard methods have smaller errors than the other methods and show no locally large or small error patches, making them the relatively better interpolation methods, with radial basis function the best.
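A small experiment in the same spirit, with an assumed synthetic anomaly and only the gridding methods readily available in scipy (nearest neighbor, linear triangulation, and a radial basis function; kriging, natural neighbor, and the remaining methods would require additional libraries):

```python
import numpy as np
from scipy.interpolate import griddata, RBFInterpolator

# Synthetic "gravity anomaly" standing in for the numerically simulated field in
# the study; the anomaly shape, sample count and grid spacing are assumptions.
def field(x, y):
    return 10.0 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4.0)

rng = np.random.default_rng(0)
samples = rng.uniform(0, 10, size=(400, 2))          # one sampling layout
obs = field(samples[:, 0], samples[:, 1])

gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
nodes = np.column_stack([gx.ravel(), gy.ravel()])
truth = field(nodes[:, 0], nodes[:, 1])

results = {
    "nearest": griddata(samples, obs, nodes, method="nearest"),
    "linear (TIN)": griddata(samples, obs, nodes, method="linear"),
    "radial basis": RBFInterpolator(samples, obs)(nodes),
}
for name, est in results.items():
    rms = np.sqrt(np.nanmean((est - truth) ** 2))    # RMS of the absolute error
    print(f"{name:14s} RMS error = {rms:.4f}")
```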

19.
We discuss an adaptive resolution system for modeling regional air pollution based on the chemical transport model STEM. The grid adaptivity is implemented using the generic adaptive mesh refinement tool Paramesh, which handles the grid management operations while harnessing the power of parallel computers. The computational algorithm is based on a decomposition of the domain, with the solution in different subdomains computed at different spatial resolutions. Various refinement criteria that adaptively control the fine-grid placement are analyzed to maximize the solution accuracy while maintaining an acceptable computational cost. Numerical experiments in a large-scale parallel setting (~0.5 billion variables) confirm that adaptive resolution, based on a well-chosen refinement criterion, leads to a decrease in spatial error with an acceptable increase in computational time. Fully dynamic grid adaptivity for air quality models is relatively new; we extend previous work on chemical and transport modeling by using dynamically adaptive grid resolution. Advantages and shortcomings of the present approach are also discussed.

20.
Markov Chain Random Fields for Estimation of Categorical Variables
Multi-dimensional Markov chain conditional simulation (or interpolation) models have potential for predicting and simulating categorical variables more accurately from sample data because they can incorporate interclass relationships. This paper introduces a Markov chain random field (MCRF) theory for building one- to multi-dimensional Markov chain models for conditional simulation (or interpolation). An MCRF is defined as a single spatial Markov chain that moves (or jumps) in a space, with its conditional probability distribution at each location depending entirely on its nearest known neighbors in different directions. A general solution for the conditional probability distribution of a random variable in an MCRF is derived explicitly from Bayes' theorem and a conditional independence assumption. One- to multi-dimensional Markov chain models for prediction and conditional simulation of categorical variables can be drawn from the general solution, and MCRF-based multi-dimensional Markov chain models are nonlinear.
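A sketch of the MCRF conditional distribution as read from the abstract: one neighbor supplies a "from" transition probability, the remaining neighbors supply "to" probabilities, and the product is normalized over classes (Bayes' theorem plus conditional independence). The transiogram model, class proportions, and lag handling below are illustrative assumptions, not the paper's general solution verbatim.

```python
import numpy as np

def mcrf_conditional(neighbors, transiogram, lags):
    """Conditional class probabilities at an unsampled location in an MCRF.

    neighbors   : classes [i1, ..., im]; i1 is treated as the class the chain
                  moves from, the rest as classes it moves to
    transiogram : transiogram(lag) -> (K, K) transition-probability matrix
    lags        : distances from the location to each neighbor
    """
    p = transiogram(lags[0])[neighbors[0], :].astype(float)   # "from" term
    for cls, lag in zip(neighbors[1:], lags[1:]):
        p *= transiogram(lag)[:, cls]                          # "to" terms
    return p / p.sum()                                         # normalize over classes

# Illustrative exponential transiogram for 3 categories (assumed, not fitted)
def transiogram(lag, a=5.0, pi=np.array([0.5, 0.3, 0.2])):
    decay = np.exp(-lag / a)
    return decay * np.eye(3) + (1 - decay) * np.tile(pi, (3, 1))

print(mcrf_conditional([0, 2, 1], transiogram, [2.0, 4.0, 3.0]))
```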
