Similar documents
 20 similar documents found (search time: 312 ms)
1.
An interpolation-iteration method for transforming potential fields from an undulating surface onto a plane   Total citations: 10, self-citations: 5, citations by others: 10
The interpolation-iteration method for downward continuation of a potential field from an undulating surface B to the plane A passing through the surface's lowest point proceeds as follows: 1) place the field values measured on surface B at the points of plane A with the same horizontal coordinates, as initial values on A; 2) cut B with several horizontal planes; from the initial values on A, continue the field upward to these planes with the fast Fourier transform (FFT), and compute the field on surface B by interpolating between the values on these planes; 3) apply a weighted correction to the values on A according to the differences between the measured and computed values on B; 4) repeat steps 2) and 3) until the differences on B are negligibly small. This interpolation-iteration method is computationally fast and continues the field deeper than the conventional FFT method, to depths exceeding 10 times the point spacing. Computed examples are given.
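The four steps can be sketched numerically. Below is a minimal Python illustration, not the authors' code: the grid, the number of cutting planes, linear vertical interpolation between the planes, and a simple unweighted residual correction are all assumptions made for the sketch.

```python
import numpy as np

def upward_continue(field, dz, dx):
    """Upward-continue a gridded potential field by height dz using the FFT."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))           # radial wavenumber |k|
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * dz)).real

def curved_to_plane(obs_B, h, dx, n_levels=8, n_iter=60):
    """Interpolation-iteration: continue a field measured on an undulating
    surface B (heights h(x, y) >= 0 above plane A) down to plane A."""
    est_A = obs_B.astype(float).copy()           # step 1: initial values on A
    zs = np.linspace(0.0, h.max(), n_levels)     # heights of the cutting planes
    rows, cols = np.indices(h.shape)
    idx = np.clip(np.searchsorted(zs, h) - 1, 0, n_levels - 2)
    w = (h - zs[idx]) / (zs[idx + 1] - zs[idx])  # vertical interpolation weight
    for _ in range(n_iter):
        # step 2: continue A upward onto the planes, then interpolate onto B
        planes = np.stack([upward_continue(est_A, z, dx) for z in zs])
        calc_B = (1 - w) * planes[idx, rows, cols] + w * planes[idx + 1, rows, cols]
        est_A += obs_B - calc_B                  # step 3: residual correction
    return est_A
```

Because the correction is applied in the space domain, no linear system is solved and no explicit downward-continuation filter (with its noise amplification) is applied.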

2.
Rainfall measurements by conventional raingauges provide relatively accurate estimates at a few points of a region. The actual rainfield can be approximated by interpolating the available raingauge data to the rest of the area of interest. In places with relatively low gauge density such interpolated rainfields will be very rough estimates of the actual events. This is especially true for tropical regions, where most rainfall has a convective origin with high spatial variability at the daily level. Estimates of rainfall by remote sensing can be very useful in regions such as the Amazon basin, where raingauge density is very low and rainfall highly variable. This paper evaluates the rainfall estimates of the Tropical Rainfall Measuring Mission (TRMM) satellite over the Tapajós river basin, a major tributary of the Amazon. Three-hour TRMM rainfall estimates were aggregated to daily values and compared, on a daily basis, with the catch of ground-level precipitation gauges after interpolating both data sets to a regular grid. Both the daily TRMM and the raingauge-interpolated rainfields were then used as input to a large-scale hydrological model of the whole basin; the calculated hydrographs were then compared to observations at several streamgauges along the Tapajós river and its main tributaries. Results of the rainfield comparisons showed that satellite estimates can be a practical tool for identifying damaged or aberrant raingauges at a basin-wide scale. Results of the hydrological modeling showed that TRMM-based calculated hydrographs are comparable with those obtained using raingauge data.

3.
Total magnetic intensity contour maps for the study region (between 2°E and 10°E and between 56°N and 60°N) were digitized and converted to a regular grid of 285 × 285 points. The study area measures approximately 444 km × 444 km and the grid spacing is thus 1.56 km. The International Geomagnetic Reference Field for 1975 was gridded on the same net, and from the two data sets a further grid of the ΔT field was generated. A large number of profiles suitable for depth determinations were constructed. The regular grid of ΔT data is also convenient for the computation of the second vertical derivative. Using the method of vertical prisms of Vacquier et al. (1963), a large suite of curvature-depth indices was measured to complement the depths obtained from the intensity slopes and from boreholes which reach the crystalline basement. The depth to the magnetic basement has been contoured, and the resulting map is shown to be in good agreement with what is known about the deeper geology of the study area. The work reported here is part of a research project supported by Amoco Norway, BP Petroleum Development Ltd, Elf Aquitaine, Esso Exploration and Production, Norwegian Gulf, Norsk Hydro, Mobil Exploration Norway, Norwegian Petroleum Directorate, Royal Norwegian Council for Scientific and Industrial Research (NTNF), Norske Shell, and Statoil.

4.
Gravity data are often acquired over long periods of time using different instruments and various survey techniques, resulting in data sets of non-uniform accuracy. As station locations are inhomogeneously distributed, gravity values are interpolated on to a regular grid to allow further processing, such as computing horizontal or vertical gradients. Some interpolation techniques can estimate the interpolation error. Although estimation of the error due to interpolation is of importance, it is more useful to estimate the maximum gravity anomaly that may have gone undetected by a survey. This is equivalent to the determination of the maximum mass whose gravity anomaly will be undetected at any station location, given the data accuracy at each station. Assuming that the maximum density contrast present in the survey area is known or can be reasonably assumed from a knowledge of the geology, the proposed procedure is as follows: at every grid node, the maximum mass whose gravity anomaly does not disturb any of the surrounding observed gravity values by more than their accuracies is determined. A finite vertical cylinder is used as the mass model in the computations. The resulting map gives the maximum detection error and, as such, it is a worst-case scenario. Moreover, the map can be used to optimize future gravity surveys: new stations should be located at, or near, map maxima. The technique is applied to a set of gravity observations obtained from different surveys made over a period of more than 40 years in the Abitibi Greenstone Belt in eastern Canada.
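The on-axis attraction of the finite vertical cylinder used as the mass model has a closed form; a hedged sketch follows (symbol names, units, and the test values are assumptions, not taken from the paper):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cylinder_gravity_on_axis(rho, radius, z_top, length):
    """On-axis vertical attraction (m/s^2) at the surface point above a buried
    vertical cylinder: density contrast rho (kg/m^3), radius and depths in m."""
    z_bot = z_top + length
    return 2 * math.pi * G * rho * (
        length
        + math.sqrt(radius ** 2 + z_top ** 2)
        - math.sqrt(radius ** 2 + z_bot ** 2)
    )
```

For a detectability map one would, at each grid node, grow such a cylinder until its anomaly first exceeds the stated accuracy of a surrounding station.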

5.
A gravity survey was conducted in a 94-square-mile area of northeastern Hanson County, South Dakota. The 340 measured gravity values, together with test-hole data, were used to approximately delineate a buried valley eroded into the Sioux Quartzite of Precambrian age. This valley contains, in places, an aquifer composed of quartzose sand of pre-Cretaceous age derived from the Sioux Quartzite. The bottom of the valley is approximately 450–600 ft below land surface. Simple Bouguer values were determined from the measured gravity data, referenced to a local base station, and interpolated to a 0.5- by 0.5-mile grid. Residual gravity values were then determined from the interpolated simple Bouguer values using a five-ring, inverse-weighted filtering method. The second derivative of the interpolated gravity values, as well as their downward continuation, did not delineate the buried valley as well. Subsequent drilling of nine test holes showed that the gravity method can be used for approximately delineating subsurface features.

6.
Hydrogeologists often are called upon to estimate surfaces from discrete, sparse data points. This estimation is often accomplished by manually drawing contours on maps, using interpolation methods between points of known value while accounting for features known to influence the water table's surface. By contrast, geographic information systems (GIS) are good at creating smooth continuous surfaces from limited data points and allowing the user to represent the resulting surface with contours, but these automated methods often fail to meet the expectations of many hydrogeologists because they do not include knowledge of other influences on the water table. In this study, we seek to fill this gap in the GIS-based methodology for hydrogeologists through an interactive tool that shapes an interpolated surface based on additional knowledge of the water table inferred from gaining or losing streams. The modified surface is reflected in water table contours that, for example, “V” upstream for gaining streams, and can be interactively adjusted to fit the user's expectations. By modifying not only the contours but also the associated interpolated surface, additional contours will follow the same trend, and the modified surface can be used for other analyses such as calculating average gradients and flow paths. The tool leverages Esri's ArcGIS Desktop software, building upon a robust suite of mapping tools. We see this as a prototype for other tools that could be developed for hydrogeologists to account for variations in the water table inferred from local topographic trends, pumping or injection wells, and other hydrogeologic features.

7.
A technique for reconstruction of the 2D surface velocity field from radar observations is proposed. The method consecutively employs two processing techniques. At the first stage, raw radial velocity data are subjected to EOF analysis, which makes it possible to fill gaps in the observations and provides estimates of the noise level and of integral parameters characterizing the small-scale variability of the sea surface circulation. These parameters are utilized at the second stage, when the cost function for variational interpolation is constructed and the updated radial velocities are interpolated onto the regular grid. Experiments with simulated and real data are used to assess the method's skill and compare it with the conventional 2D variational (2dVar) approach. It is shown that the proposed technique consistently improves the performance of the 2dVar algorithm and becomes particularly effective when a radar stops operating for 1–2 days and/or a persistent gap emerges in the spatial coverage of a basin by the HFR network.

8.
Summary The weighted averaging over the surface of a circular disc, as a method of transforming data measured at randomly located points to grid points, is discussed using the concepts of filter theory. The transfer functions of various weighting functions are computed. The transformation is illustrated by practical examples.

9.
The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocity values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model for over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for such data interpolation was developed. All the mentioned methods were tested for being local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites.
Statistics in the form of the minimum, maximum and mean values of the interpolated North and East components of the velocity residua were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.

10.
New formulations of boundary conditions at an arbitrary two-dimensional (2D) free-surface topography are derived. The top of a curved grid represents the free-surface topography while the grid's interior represents the physical medium. The velocity–stress version of the viscoelastic wave equations is assumed to be valid in this grid. However, the rectangular grid version attained by grid transformation is used to model wave propagation in this work in order to achieve the numerical discretization. We show the detailed solution of the particle velocities at the free surface resulting from discretizing the boundary conditions by second-order finite-differences (FDs). The resulting system of equations is spatially unconditionally stable. The FD order is gradually increased with depth up to eighth order inside the medium. Staggered grids are used in both space and time, and the second-order leap-frog and Crank–Nicolson methods are used for time-stepping. We simulate point sources at the surface of a homogeneous medium with a plane free surface containing a hill and a trench. Applying parameters representing exploration surveys, we present examples with a randomly realized surface topography generated by a 1D von Kármán function of order 1. Viscoelastic simulations are presented using this surface with a homogeneous medium and with a layered, randomized medium realization, all generating significant scattering.

11.
The integral-iteration method for the continuation of potential fields   Total citations: 36, self-citations: 14, citations by others: 22
Xu Shizhe. Chinese Journal of Geophysics, 2006, 49(4): 1176–1182
This paper introduces a new method for the continuation of potential fields, the integral-iteration method. The potential field values measured on an undulating surface are projected vertically onto a horizontal plane below the surface, as initial values of the field on that plane. From these initial values, the field on the undulating surface is computed by an integral method. The values on the horizontal plane are then corrected using the differences between the measured and computed values on the undulating surface. The iteration is repeated until these differences become negligibly small. Once the field on the horizontal plane is known, the field on any surface or plane above it can be computed by the integral method or by other methods. The method is simple in principle, requires no solution of systems of linear algebraic equations, and is computationally fast. It is particularly suitable for downward continuation of potential fields and gives good continuation results. Application examples of the integral-iteration method are also presented.

12.
Processing and transformation of potential fields on curved surfaces with the dipole-layer method in the frequency domain   Total citations: 5, self-citations: 1, citations by others: 4
Building on the space-domain dipole-layer method, a complete frequency-domain method for processing and transforming potential field data on curved surfaces is developed. The method can be applied to potential field data on planes or curved surfaces, on regular or irregular grids. Shifting the z-coordinates of the dipole layer and of the computation surface by different amounts accelerates the convergence of the forward computation and ensures stable, fast convergence of the inversion. A core algorithm suited to processing and transformation on irregular-grid curved surfaces, the single-point fast Fourier transform, is proposed, together with frequency-domain techniques for processing and transformation on irregular-grid curved surfaces. These measures solve the problem of processing and transforming large potential field data sets, particularly on irregular grids over curved surfaces; model tests and the processing of real data verify the effectiveness of the method.

13.
Summary A computational method for fitting smoothed bicubic splines to data given on a regular rectangular grid is suggested. The one-dimensional spline fit has well-defined smoothness properties. These are duplicated for a two-dimensional approximation by solving the corresponding variational problem. The complete algorithm for computing the function values and their derivatives at arbitrary points is presented. The possibilities of the method are demonstrated on an example from geomagnetic surveys.

14.
Abstract

Finite difference algorithms have been developed to solve a one-dimensional non-linear parabolic equation with one or two moving boundaries and to analyse the unsteady plane flow of ice-sheets. They are designed to investigate the response of an ice-sheet to changes in climate, and to reconstruct climatic changes implied by past ice-sheet variations inferred from glacial geological data. Two algorithms are presented and compared. The first, a fixed domain method, replaces time as an independent variable with span. The grid interval in real space is kept constant, and thus the number of grid points changes with span. The second, a moving mesh method, retains time as one of the independent variables, but normalises the spatial variable relative to the span, which now enters the diffusion and advection coefficients in the parabolic equation for the surface profile.

Crank–Nicolson schemes for the solution of the equations are constructed, and iterative schemes for the solution of the resulting non-linear equations are considered.

Boundary (margin) motion is governed by the surface slope at the margin. Differentiation of the evolution equations results in an evolution equation for the margin slopes. It is shown that incorporating this evolution equation, while not formally increasing the accuracy of the finite difference schemes, in practice increases the accuracy of the solution.

15.
Comparison of the iterative method and the FFT method for downward continuation of potential fields   Total citations: 15, self-citations: 12, citations by others: 15
The potential field values measured on a horizontal observation plane are projected vertically onto a lower continuation plane, as initial values of the field on that plane. From these initial values, the field on the observation plane is computed by upward continuation with the fast Fourier transform (FFT). The values on the continuation plane are then corrected using the differences between the measured and computed values on the observation plane. The iteration is repeated until these differences become negligibly small. This space-domain iterative method is simple in principle, requires no solution of systems of linear algebraic equations, and offers high computational speed and good continuation results. In this paper the iterative method is applied to the downward continuation of model data and real data, and its performance is compared with that of the conventional FFT method; the iterative method is markedly superior.
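The iteration described above fits in a few lines of NumPy. This is a schematic sketch, not the paper's code; the grid parameters and the fixed iteration count are assumptions.

```python
import numpy as np

def upward_continue(field, dz, dx):
    """Upward-continue a gridded potential field by dz > 0 via the FFT."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))           # radial wavenumber |k|
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * dz)).real

def downward_continue_iterative(obs, dz, dx, n_iter=100):
    """Iterative downward continuation of obs to a plane dz deeper."""
    est = obs.astype(float).copy()               # initial values on the lower plane
    for _ in range(n_iter):
        resid = obs - upward_continue(est, dz, dx)  # misfit on the observation plane
        est += resid                             # correct the lower-plane values
    return est
```

Each pass damps the misfit at every wavenumber by a factor 1 − e^(−k·dz), so low and mid wavenumbers converge quickly while the noisy high wavenumbers, which a direct FFT filter would amplify by e^(+k·dz), are barely touched.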

16.
The staggered grid finite-difference method is a powerful tool in seismology and is commonly used to study earthquake source dynamics. In the staggered grid finite-difference method, stress and particle velocity components are calculated at different grid points, and a faulting problem is a mixed boundary problem, so different implementations of fault boundary conditions have been proposed. Virieux and Madariaga (1982) chose the shear stress grid as the fault surface; however, this method has several problems: (1) fault slip leaks outside the fault, and (2) the stress bump beyond the crack tip caused by S waves is not well resolved. Madariaga et al. (1998) solved the latter problem via a thick fault implementation, but the former problem remains and causes a new issue: the displacement discontinuity across the fault is not well modeled because of the artificial thickness of the fault. In the present study we improve the implementation of the fault boundary conditions in the staggered grid finite-difference method by using a fictitious surface to satisfy the fault boundary conditions. In our implementation, velocity (or displacement) grids are set on the fault plane, stress grids are shifted half a grid spacing from the fault, and stress on the fictitious surface in the rupture zone is given such that the interpolated stress on the fault is equal to the frictional stress. Within the area which does not rupture, stress on the fictitious surface is given by the condition of no discontinuity of the velocity (or displacement). Fault-normal displacement (or velocity) is given such that the normal stress is continuous across the fault. Artificial viscous damping is introduced on the fault to avoid vibration caused by the onset of slip.
Our implementation has five advantages over previous versions: (1) no leakage of the slip prior to rupture, (2) a zero-thickness fault, (3) stress on the fault is reliably calculated, (4) it is suitable for the study of fault constitutive laws, as slip is defined as the difference between displacement on the plane z = +0 and that on z = −0, and (5) cessation of slip is achieved correctly.

17.
After the sampling of a reflection time contour map, i.e. after times and time gradients at the grid points of a square sampling grid have been determined, its conversion into true depth contours can be performed by normal-incidence ray tracing. At each grid point the spatial orientation of the ray is uniquely defined by the corresponding time gradient vector, whereas its continuation into the subsurface is controlled by Snell's law. For arbitrarily orientated velocity interfaces the 3-D ray tracing problem can be solved systematically with the aid of vector algebra, by expressing Snell's law as an equation of vector cross products. This makes it possible to set up a computer algorithm for the migration of contour maps. Reliable sampling of reflection time contour maps in the presence of faults is essential for the realization of a practical map migration system. A possible solution of the relevant sampling problem requires a special map editing and digitization procedure. Lateral migration shifts cause a translation and distortion of the original sampling grid. On the transformed grid the true positions of faults can be related to their apparent positions on the reflection time contour map. Errors in the time-domain correlations, an incorrect velocity distribution, or a combination of both may cause migration failures due to total reflection and time deficiencies, or give rise to an anomalous distortion of grid cells, the latter signifying a violation of the maximum convexity condition. Emphasis is placed upon the significance of map migration as an interpretive tool for solving time-to-depth conversion problems in the presence of severely faulted or salt-intruded overburdens.
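The cross-product form of Snell's law, (1/v1)(d × n) = (1/v2)(t × n), fixes the transmitted ray direction t in the plane of incidence. A small sketch of that construction follows (a hypothetical helper, not the paper's algorithm):

```python
import numpy as np

def refract(d, n, v1, v2):
    """Direction of a ray refracted at a velocity interface (Snell's law).
    d: unit incident direction; n: unit interface normal; v1, v2: velocities
    on the incident and transmitted sides. Returns None on total reflection."""
    d = np.asarray(d, float)
    n = np.asarray(n, float)
    if np.dot(d, n) > 0:              # orient the normal against the incident ray
        n = -n
    eta = v2 / v1                     # sin(theta_t) = eta * sin(theta_i)
    cos_i = -np.dot(d, n)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                   # total reflection: no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

The returned vector is automatically a unit vector and lies in the plane spanned by d and n, which is exactly what the cross-product equation encodes.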

18.
Images of geophysical potential field data are becoming more common as a result of the increased availability of image analysis systems. These data are processed using techniques originally developed for remotely sensed satellite imagery. In general, geophysicists are not familiar with such techniques and may apply them without due consideration. This can lead to misuse of the geophysical data and reduce the validity of the interpretation. This paper describes some critical processes which can introduce errors into the data. The production of a regular grid from scattered data is fundamental to image processing. The choice of cell size is paramount and must balance the spatial distribution of the data. The necessary scaling of data from real values into a byte format for display purposes can result in small anomalies being masked. Contrast stretching of grey-level images is often applied, but it can alter the shape of anomalies by varying degrees and should be avoided. Filters are often used to produce shaded relief images, but without due regard to their frequency response and to their effect on images expanded to fill the display space. The generation of spurious numerical artefacts can be reduced by ensuring that the filter is applied at real precision to the original data grid. The resultant images can then be processed for display. The use of image analysis systems for data integration requires careful consideration of the sampling strategy and information content of each dataset. It is proposed that such procedures are more appropriately conducted on a geographic information system.
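The byte-scaling pitfall is easy to demonstrate: any anomaly smaller than the grey-level quantization step (data range divided by 255) vanishes on conversion. A small illustration with hypothetical values:

```python
import numpy as np

def to_byte(data):
    """Linearly rescale real-valued data to the 0-255 display range
    (assumes the data are not constant)."""
    dmin, dmax = data.min(), data.max()
    return np.round(255.0 * (data - dmin) / (dmax - dmin)).astype(np.uint8)

# A 2 nT anomaly riding on a 1000 nT regional range spans less than one
# grey level (1000 / 255 ~ 3.9 nT) and disappears on conversion to bytes.
profile = np.linspace(0.0, 1000.0, 11)   # smooth regional trend
anomalous = profile.copy()
anomalous[5] += 2.0                      # small superimposed anomaly
```

This is why anomaly-shaping steps such as filtering should be done at real precision on the original grid, with byte conversion deferred to the final display stage.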

19.
Digital elevation models (DEMs) remain an important and current source of information for digital soil mapping and the modeling of soil processes. The grid DEM is often interpolated from contour lines, so the contour sampling step becomes an additional interpolation parameter which can play an important role. The objective of this paper is to optimize the interpolation parameters of the Regularized spline with tension (RST) method in order to prepare a DEM suitable as an input for erosion modeling. Two contrasting cases, with and without a reference DEM, were investigated. When a reference DEM was available, good interpolation results were obtained with both small and larger sampling steps. In the second case, it was found that small sampling steps should be avoided. The influence of the sampling was demonstrated using the topographic potential for erosion and deposition.

20.
3D seismic data are usually recorded and processed on rectangular grids, for which sampling requirements are generally derived from the usual 1D viewpoint. For a 3D data set, the band region (the region of the Fourier space in which the amplitude spectrum is not zero) can be approximated by a domain bounded by two cones. Considering the particular shape of this band region we can use the 3D sampling viewpoint, which leads to weaker sampling requirements than does the 1D viewpoint; i.e. fewer sample points are needed to represent data with the same degree of accuracy. The 3D sampling viewpoint considers regular nonrectangular sampling grids. The recording and processing of 3D seismic data on a hexagonal sampling grid is explored. The acquisition of 3D seismic data on a hexagonal sampling grid is an advantageous economic alternative because it requires 13.4% fewer sample points than a rectangular sampling grid. The hexagonal sampling offers savings in data storage and processing of 3D seismic data. A fast algorithm for 3D discrete spectrum evaluation and trace interpolation in the case of a 3D seismic data set sampled on a hexagonal grid is presented and illustrated by synthetic examples. It is shown that by using this algorithm the hexagonal sampling offers, approximately, the same advantage of saving 13.4% in data storage and computational time for 3D phase-shift migration.
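The 13.4% figure follows from the relative densities of the two sampling lattices for a circularly bandlimited spectrum: the optimal hexagonal lattice needs √3/2 ≈ 0.866 times as many points per unit area as the rectangular Nyquist grid. A quick check of the arithmetic (the derivation behind the figure, not the paper's algorithm):

```python
import math

# For a spectrum confined to a circle, the hexagonal lattice packs the
# periodic spectral replicas more tightly than the rectangular lattice,
# reducing the required sample density by the factor sqrt(3)/2.
ratio = math.sqrt(3) / 2   # hexagonal density / rectangular density
savings = 1.0 - ratio      # fraction of sample points saved, ~0.134
```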


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号