Similar references
20 similar references found.
1.
Summary. A complete method of solution of linear inverse problems with non-negativity constraints has been given previously. Here, problems with more particular applications in geophysics are discussed. First, we describe the evolution of the set of solutions when a statistical distribution of the errors is assumed. The theory of ideal bodies, recently introduced by Parker, is then discussed, and several of Parker's conjectures are proved. Algorithms to construct ideal bodies for any data set are given. Finally, we study planar diagrams, which nicely illustrate the extent of the set of equivalent solutions by plotting two moments of a solution against each other for all possible solutions. The evolution of these diagrams when a new measurement is made gives a good indication of the value of that measurement. Three-dimensional diagrams can be handled in the same way and used in gravity interpretation.

2.
Summary. Linear-programming methods are powerful and efficient tools for objectively analysing seismic focal mechanisms and are applicable to a wide range of problems, including tsunami warning and nuclear explosion identification. The source mechanism is represented as a point in the six-dimensional space of moment-tensor components. Each observed polarity provides an inequality constraint, linear with respect to the moment-tensor components, that restricts the solution to a half-space bounded by a hyperplane passing through the origin. The intersection of these half-spaces is the convex set of all acceptable solutions. Using linear programming, a solution consistent with the polarity constraints can be obtained that maximizes or minimizes any desired linear function of the moment-tensor components; the dilatation, the thrust-like nature, and the strike-slip-like nature of an event are examples of such functions. The present method can easily be extended to fit observed seismic-wave amplitudes (either signed or absolute) subject to polarity constraints, and to assess the range of mechanisms consistent with a set of measured amplitudes.
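The construction described in this abstract — one linear polarity inequality per station, solved as a linear program over the six moment-tensor components — can be sketched as follows. The station coefficient matrix and the polarities are synthetic (random, and consistent by construction), SciPy's `linprog` stands in for whatever LP solver the authors used, and box bounds are added because the homogeneous half-space constraints alone leave the maximization unbounded.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))      # 8 stations x 6 moment-tensor components (toy data)
polarity = np.sign(A @ np.ones(6))   # synthetic polarities, consistent by construction

# polarity_i * (a_i . m) >= 0, rewritten for linprog as -polarity_i * (a_i . m) <= 0.
A_ub = -polarity[:, None] * A
b_ub = np.zeros(len(polarity))

# Maximize the dilatational part (here taken to be the sum of the first three
# components); box bounds keep the homogeneous constraint cone bounded.
c = -np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-1, 1)] * 6, method="highs")
m = res.x
print(res.status, (polarity * (A @ m) >= -1e-6).all())
```

Minimizing `c` instead (or swapping in a strike-slip-like linear functional) explores the opposite extreme of the same convex set of acceptable mechanisms.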

3.
Summary. For linear geophysical inverse problems, the exercise of finding a greatest lower bound on the uniform norms of positive solutions fitting N data is shown to have a geometrical counterpart in the N-dimensional space of N-tuples of real numbers. By application of the Fenchel Duality Theorem, we demonstrate that the problem is equivalent to the discovery of a particular hyperplane tangent to a convex set in this space. As examples in the case of two data, the new formulation is applied to the problems of recovering density information from planetary mass and moment of inertia, and from two vertical gravity anomalies.

4.
A property of the convex hull of a planar point set
This paper establishes a property of the convex hull of a planar point set: if a sub-hull contains the extreme points in the east, west, south, and north directions, then any point outside one of the sub-hull's edges must lie within the outer right triangle determined by that edge. A mathematical proof of the property is given. Building on this property, the quickhull algorithm is improved, achieving a good speed-up.

5.
Many geophysical inverse problems derive from governing partial differential equations with unknown coefficients. Alternatively, inverse problems often arise from integral equations associated with a Green's function solution to a governing differential equation. In their discrete form such equations reduce to systems of polynomial equations, known as algebraic equations. Using techniques from computational algebra one can address questions of the existence of solutions to such equations as well as the uniqueness of the solutions. The techniques are enumerative and exhaustive, requiring a finite number of computer operations. For example, calculating a bound on the total number of solutions reduces to computing the dimension of a linear vector space. The solution set itself may be constructed through the solution of an eigenvalue problem. The techniques are applied to a set of synthetic magnetotelluric values generated by conductivity variations within a layer. We find that the estimation of the conductivity and the electric field in the subsurface, based upon single-frequency magnetotelluric field values, is equivalent to a linear inverse problem. The techniques are also illustrated by an application to a magnetotelluric data set gathered at Battle Mountain, Nevada. Surface observations of the electric (Ey) and magnetic (Hx) fields are used to construct a model of subsurface electrical structure. Using techniques for algebraic equations it is shown that solutions exist, and that the set of solutions is finite. The total number of solutions is bounded above by 134 217 728. A numerical solution of the algebraic equations generates a conductivity structure in accordance with the current geological model for the area.
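The bound quoted above, 134 217 728, is exactly 2^27 — the Bézout bound (product of the equations' total degrees) for a system of 27 quadratic algebraic equations. The abstract does not state the system size, so the 27-quadratics reading is an inference from the number itself; only the counting arithmetic is reproduced here.

```python
from math import prod

# Bezout bound: the number of solutions of a polynomial system (counted with
# multiplicity, in projective space) is at most the product of the equations'
# total degrees. For 27 quadratics that product is 2**27.
degrees = [2] * 27
bezout_bound = prod(degrees)
print(bezout_bound)  # 134217728
```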

6.
Summary. The parameters in many geophysical inverse problems may be partitioned so as to separate them into two distinct sets. This separation might be on the grounds of physical differences between the two sets, or it might be for computational reasons. In this paper, methods for making estimates of one or other set of parameters unbiased by uncertainty in the other set are summarized. It is shown that these procedures, which are characterized by asymmetric resolution matrices, are not equivalent to the generalized inverse solution. The properties of various matrix inverses used to obtain solutions are discussed in relation to the usual least-squares and minimum-norm conditions. Finally, a new algorithm for calculating the generalized inverse, in terms of the inverses of partitions, is given.

7.
A fast algorithm for the convex hull of a planar point set
A fast convex hull algorithm for planar point sets is proposed: the eight-direction extreme-point quickhull algorithm. The algorithm first scans the point set once to locate the extreme points in eight directions (east, south, west, north, southeast, southwest, northeast, northwest) and constructs an initial hull that is closer to the final convex hull, so that more interior points can be excluded during the subsequent scan, making the computation more efficient. Its space complexity is O(N); although its worst-case time complexity cannot beat the theoretical lower bound of O(N log N), its expected time complexity is linear, and the algorithm extends easily to three and higher dimensions.
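The prefiltering step this abstract describes can be sketched as follows: find the extreme points in the eight directions, then discard every point strictly inside the resulting octagon (a standard hull algorithm would then run on the survivors). The random test points are illustrative, and degenerate inputs (coincident extremes leaving fewer than three octagon vertices) would need extra handling that this sketch omits.

```python
import random

def eight_extremes(pts):
    """Extreme points in the E, NE, N, NW, W, SW, S, SE directions, in CCW order."""
    keys = [lambda p: p[0], lambda p: p[0] + p[1], lambda p: p[1],
            lambda p: p[1] - p[0], lambda p: -p[0], lambda p: -p[0] - p[1],
            lambda p: -p[1], lambda p: p[0] - p[1]]
    poly = []
    for k in keys:
        v = max(pts, key=k)
        if not poly or v != poly[-1]:        # drop consecutive duplicates
            poly.append(v)
    if len(poly) > 1 and poly[0] == poly[-1]:
        poly.pop()
    return poly

def strictly_inside(p, poly):
    """True if p is strictly inside the convex CCW polygon `poly` (cross test)."""
    return all((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) > 0
               for a, b in zip(poly, poly[1:] + poly[:1]))

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(2000)]
poly = eight_extremes(pts)
survivors = [p for p in pts if not strictly_inside(p, poly)]
print(len(pts), len(survivors))
```

Any discarded point lies strictly inside the octagon, hence inside the hull, so no hull vertex is ever lost; this is what makes the prefilter safe.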

8.
Collaborative spatial decision-making environments in which group members individually and collectively pursue solutions to semi-structured problems have a unique set of geographic visualization requirements. Group members often pursue diverse strategies as they attempt to solve such problems. As a consequence, numerous mapped representations of alternative solutions are generated. It is difficult to compare and synthesize these results, especially when decision-makers have little or no previous cartographic training. In this paper, we derive several map types that synthesize representations of alternative solutions to location-selection problems. These synthetic maps, designed to be accessible to group members, are created by decomposing solutions into a collection of atomic elements that are then placed into an accounting framework. Network map algebra operations are performed within this framework, and the results are accumulated and displayed as maps. Group members can use these maps to identify similar and dissimilar elements of alternative solutions to a problem. Such maps are intended to promote discussion and support group consensus-building activities.

9.
An automatic soil texture classification system based on computer graphics
The soil texture classification schemes in common use at home and abroad, such as the international scheme and the USDA scheme, all obtain texture names by manually looking them up on a planar equilateral-triangle (ternary) diagram, which is laborious for large batches of soil samples and makes precision hard to control. Drawing on foreign work on computer-based soil texture classification, this paper applies computer graphics techniques on the Visual Basic platform to design and implement a more practical automatic soil texture classification system (STAC), better suited to domestic use. STAC is simple, convenient, fast, and intuitive, offering automatic classification of single or batched soil samples, graphical display, statistics, analysis, and support for user-defined classification schemes. The paper focuses on the underlying computer graphics principles and key implementation techniques.
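The core of such a system is a point-in-polygon test of a sample's (sand, clay) coordinates against texture-class regions of the triangle diagram. The sketch below uses made-up placeholder polygons, not the real international or USDA class boundaries, and is not the STAC implementation itself.

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test for a simple polygon given as (x, y) vertices."""
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Class regions in (sand %, clay %) coordinates; silt % = 100 - sand - clay.
# These polygons are invented placeholders for illustration only.
CLASSES = {
    "clayey (placeholder)": [(0, 40), (60, 40), (0, 100)],
    "sandy (placeholder)":  [(70, 0), (100, 0), (70, 30)],
    "loamy (placeholder)":  [(0, 0), (70, 0), (70, 30), (60, 40), (0, 40)],
}

def classify(sand, clay):
    for name, poly in CLASSES.items():
        if point_in_polygon(sand, clay, poly):
            return name
    return "unclassified"

print(classify(20, 55))
```

Batch classification is then just a loop over samples, which is the speed-up over manual chart lookup that the abstract describes.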

10.
With recent advances in remote sensing, location-based services and other related technologies, the production of geospatial information has increased exponentially in recent decades. To facilitate discovery of and efficient access to such information, spatial data infrastructures were promoted and standardized, on the understanding that metadata are essential for describing data and services. Standardization bodies such as the International Organization for Standardization have defined well-known metadata models such as ISO 19115. However, current metadata assets exhibit heterogeneous quality levels because they are created by different producers with different perspectives. To address quality-related concerns, several initiatives have attempted to define a common framework and test the suitability of metadata through automatic controls. Nevertheless, these controls focus on interoperability by testing the format of metadata and a set of controlled elements. In this paper, we propose a methodology for testing the quality of metadata that considers aspects other than interoperability. The proposal adapts ISO 19157 to the metadata case and has been applied to a corpus of the Spanish Spatial Data Infrastructure. The results demonstrate that our quality check helps determine different types of errors for all metadata elements and can be almost completely automated to enhance the significance of metadata.

11.
Summary. A normal mode superposition approach is used to synthesize complete seismic codas for flat layered earth models and the P-SV phases. Only modes which have real eigenwavenumbers are used, so that the search for eigenvalues in the complex wavenumber plane is confined to the real axis. In order to synthesize early P-wave arrivals by summing a number of 'trapped' modes, an anomalously high-velocity cap layer is added to the bottom of the structure so that most of the seismic energy is contained in the upper layers as high-order surface waves. Causality arguments are used to define time windows for which the resulting synthetic seismograms are close approximations to the exact solutions without the cap layer. The traditional Thomson–Haskell matrix approach to computing the normal modes is reformulated so that numerical problems encountered at high frequencies are avoided, and numerical results of the locked-mode approximation are given.

12.
Linear data models for GIS-T: research status and trends
Several linear data models already exist for GIS-T, but most have not seen practical application, and a number of problems remain to be solved. This paper analyzes and discusses GIS-T linear data models, introduces several representative models, and compares and evaluates their support for lanes and for temporal information. It identifies the problems common to current GIS-T linear data models and argues that three-dimensional spatio-temporal GIS-T data models are the future trend.

13.
Analysis of potential spatial behavior in transport infrastructures is usually carried out by means of a digital network. A basic condition for such a network analysis has traditionally been the desire to find solutions to optimization problems and to achieve greater efficiency in industry. Geographic information system (GIS) tools for network analysis are overwhelmingly targeted at finding solutions to optimization problems, which include the shortest path problem and the traveling salesman problem. This article addresses the problem of the lack of tools for finding solutions to a class of constraint satisfaction problems that are of potential interest to behavioral geographers. Constraint satisfaction problems differ from optimization problems in that they lack an expression to be maximized or minimized. We describe how a constraint-based approach to network analysis can be applied to search for 'excess routes' that are longer or in other ways exceed single, optimal routes. Our analysis considers both round-trips and travel from A to B and defines a set of constraints that can characterize such paths. We present a labeling algorithm that can generate solutions to such excess route problems.
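The 'excess route' idea can be sketched as a depth-first enumeration of all simple paths from A to B that satisfy a constraint, with no objective being optimized. The graph and the single length constraint below are invented; the paper's labeling algorithm supports richer constraint sets than a length bound.

```python
GRAPH = {  # adjacency list with edge lengths (illustrative network)
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}

def excess_routes(graph, start, goal, max_len):
    """Yield (path, length) for every simple path whose length is <= max_len."""
    stack = [(start, [start], 0)]
    while stack:
        node, path, dist = stack.pop()
        if node == goal:
            yield path, dist
            continue
        for nxt, w in graph[node].items():
            if nxt not in path and dist + w <= max_len:
                stack.append((nxt, path + [nxt], dist + w))

routes = sorted(excess_routes(GRAPH, "A", "D", max_len=12), key=lambda r: r[1])
for path, length in routes:
    print("->".join(path), length)
```

Note that the enumeration yields the optimal route and its longer alternatives alike; the constraint only bounds the set, it does not rank it.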

14.
Summary. Conventional seismometers employ masses of several kilogrammes suspended with periods of several seconds, but it is possible to achieve the same detection capability with much smaller masses suspended at shorter periods. Such instruments are valuable for borehole applications or where many instruments must be rapidly set up.
The problems of the design of miniature wideband force-feedback seismometers are discussed and two such instruments are described. Both instruments use a capacitive displacement transducer to detect the relative motion of a mass of about 0.05 kg suspended with a natural period near 1 s. A force-feedback system maintains the mass stationary with respect to the instrument frame, and the instruments have a response defined by feedback from DC to 10 Hz. A single miniature instrument can thus provide data over the whole of the seismic range.
Details are given of the experimental difficulties encountered and of a comparison of the instruments with conventional seismometers.

15.
A number of Norwegian glaciers were selected in the 1960s for long-term mass-balance measurements, to produce necessary hydrological information for hydropower exploitation. Special large-scale glacier maps were produced for field work and data processing, and some glaciers have been mapped more than once. Thus, comparison of glacier maps can be used to calculate changes in glacier volume for some of the glaciers, provided they are of sufficient accuracy.
Conventional mass-balance measurements were carried out on all the selected glaciers. A cumulative calculation of net balances for a series of years is used to indicate the change in a glacier's volume during that period. However, various errors originate in the field, some of which are systematic, particularly on glaciers with large winter accumulation.
The present study indicates that certain errors are difficult to define and determine. For the maritime glacier Ålfotbreen, a cumulative mass-balance calculation gives a positive total balance (+3.4 m water equivalent in the period 1968–88), whereas the map comparison indicates a total negative balance (−5.8 m water equivalent). This indicates a discrepancy between the methods, which must be accounted for.
Determination of errors in mass-balance measurements is difficult. Sinking of stakes in the accumulation area and the use of sounding sticks (steel probes) in heavy snow layers cause problems.

16.
This research compares the geographic information retrieval (GIR) performance of a set of logistic regression models with those of five non-probabilistic methods that compute a spatial similarity score for a query–document pair. All methods are applied to a test collection of queries and documents indexed spatially by two convex conservative geometric approximations: the minimum bounding box (MBB) and the convex hull. In the comparison, the tested logistic regression models outperform, in terms of standard information retrieval recall and precision measures, all of the non-probabilistic methods. The retrieval performance achieved by the logistic regression models on MBB approximations is similar to that achieved by the use of the non-probabilistic methods on convex hulls. Although these results are valid only for the test collection used in this study, they suggest that a logistic regression approach to GIR provides an alternative to the use of higher-quality geometric representations that are more difficult to obtain, implement, and process. Additionally, this research demonstrates the ability of a probabilistic approach to effectively incorporate information about geographic context in the spatial ranking process.
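One simple non-probabilistic spatial similarity score of the kind compared in this study is the overlap of the query and document MBBs normalized by the area of their union; the exact scores used in the paper may differ, and the boxes below are invented.

```python
def mbb_similarity(q, d):
    """Overlap-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(q[2], d[2]) - max(q[0], d[0]))
    iy = max(0.0, min(q[3], d[3]) - max(q[1], d[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(q) + area(d) - inter
    return inter / union if union else 0.0

query = (0, 0, 10, 10)
docs = {"doc1": (5, 5, 15, 15), "doc2": (20, 20, 30, 30), "doc3": (2, 2, 8, 8)}
ranked = sorted(docs, key=lambda k: mbb_similarity(query, docs[k]), reverse=True)
print(ranked)
```

In the probabilistic alternative the paper favours, such geometric quantities become features of a logistic regression model rather than the ranking score itself.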

17.
Summary. An interpretation of the geomagnetic inductive response function, C(ω, 0), observed at Kiruna in northern Sweden is undertaken here. The bounds of acceptable solutions are initially discovered by a Monte-Carlo random search procedure, and the best-fitting solutions are examined by the application of linear theory to the problem. The data are shown to have a higher degree of internal consistency than that described by the estimated variances of each datum. A further Monte-Carlo inversion of the variance-reduced data set gives solutions with well-defined model parameters.
The two major features of the models are: (1) a small, or non-existent, electrical conductivity variation across the seismic Moho boundary, and (2) the unequivocal existence of an electrical asthenosphere under the Fennoscandian shield, beginning at a depth of between 155 and 185 km and of 60 km minimum thickness. Both of these observations have seismic counterparts.
Finally, possible mantle temperature profiles are deduced which depend on the assumptions and laboratory data employed.

18.
Spatial optimization problems, such as route selection, usually involve multiple, conflicting objectives relevant to locations. An ideal approach to solving such multiobjective optimization problems (MOPs) is to find an evenly distributed set of Pareto-optimal alternatives, which is capable of representing the possible trade-off among different objectives. However, these MOPs are commonly solved by combining the multiple objectives into a parametric scalar objective, in the form of a weighted sum function. It has been found that this method fails to produce a set of well spread solutions by disregarding the concave part of the Pareto front. In order to overcome this ill-behaved nature, a novel adaptive approach has been proposed in this paper. This approach seeks to provide an unbiased approximation of the Pareto front by tuning the search direction in the objective space according to the largest unexplored region until a set of well-distributed solutions is reached. To validate the proposed methodology, a case study on multiobjective routing has been performed using the Singapore road network with the support of GIS. The experimental results confirm the effectiveness of the approach.
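The weighted-sum failure mode this abstract addresses can be reproduced on a toy instance: a Pareto-optimal point lying in a concave (non-supported) region of the front is never selected by any convex weighting of the two objectives. The candidate "routes" and their two costs below are invented.

```python
# Two minimization objectives per candidate; r2 is Pareto-optimal but sits in a
# concave region of the front relative to r1 and r3.
candidates = {"r1": (0.0, 4.0), "r2": (2.5, 2.5), "r3": (4.0, 0.0)}

def pareto(cands):
    """Names of candidates not dominated by any other candidate."""
    return {a for a, fa in cands.items()
            if not any(fb[0] <= fa[0] and fb[1] <= fa[1] and fb != fa
                       for fb in cands.values())}

def weighted_sum_winners(cands, steps=101):
    """Candidates that win the scalarized problem for some weight w in [0, 1]."""
    winners = set()
    for i in range(steps):
        w = i / (steps - 1)
        winners.add(min(cands, key=lambda k: w * cands[k][0] + (1 - w) * cands[k][1]))
    return winners

print(pareto(candidates), weighted_sum_winners(candidates))
```

The adaptive approach in the paper avoids this by steering the search direction toward the largest unexplored region of the objective space instead of sweeping fixed weights.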

19.
Three forms of linear interpolation are routinely implemented in geographical information science, by interpolating between measurements made at the endpoints of a line, the vertices of a triangle, and the vertices of a rectangle (bilinear interpolation). Assuming the linear form of interpolation to be correct, we study the propagation of error when measurement error variances and covariances are known for the samples at the vertices of these geometric objects. We derive prediction error variances associated with interpolated values at generic points in the above objects, as well as expected (average) prediction error variances over random locations in these objects. We also place all the three variants of linear interpolation mentioned above within a geostatistical framework, and illustrate that they can be seen as particular cases of Universal Kriging (UK). We demonstrate that different definitions of measurement error in UK lead to different UK variants that, for particular expected profiles or surfaces (drift models), yield weights and predictions identical with the interpolation methods considered above, but produce fundamentally different (yet equally plausible from a pure data standpoint) prediction error variances.
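The bilinear case can be sketched directly: the interpolated value is a weighted sum of the four corner measurements, and, in the simplified case of independent corner errors (the paper also treats covariances), first-order propagation gives a prediction error variance of sum(w_i^2 * var_i).

```python
def bilinear_weights(u, v):
    """Weights for corners (0,0), (1,0), (0,1), (1,1) at local coords (u, v) in [0,1]^2."""
    return [(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v]

def interpolate(z, u, v):
    """Bilinear prediction: weighted sum of the four corner values z."""
    return sum(wi * zi for wi, zi in zip(bilinear_weights(u, v), z))

def error_variance(var, u, v):
    """Prediction error variance assuming independent corner errors."""
    return sum(wi ** 2 * vi for wi, vi in zip(bilinear_weights(u, v), var))

z = [1.0, 2.0, 3.0, 4.0]
print(interpolate(z, 0.5, 0.5))             # cell centre: plain average
print(error_variance([1.0] * 4, 0.5, 0.5))  # four weights of 1/4
```

With corner covariances included, the variance picks up cross terms 2*w_i*w_j*cov_ij, which is where the UK variants discussed above begin to differ.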

20.
A new minimum convex hull algorithm and its application
Currently popular minimum convex hull algorithms have relatively high time complexity and are ill-suited to massive data sets. This paper presents a new algorithm for generating the minimum convex hull of planar discrete points, with time complexity O(n log n). Through sorting, partitioning, pointer positioning, and a single scan of the point set, the algorithm dynamically adds and removes hull vertices during the computation and rapidly generates the minimum convex hull of the point set. A worked example computing the minimum convex hull of a scattered set of settlement points shows that the algorithm performs well in practice.
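The paper's own sort-partition-scan algorithm is not reproduced here, but Andrew's monotone chain is a standard planar hull algorithm in the same O(n log n) complexity class (one sort plus a linear scan), shown for comparison:

```python
def convex_hull(points):
    """Andrew's monotone chain: sort, then build lower and upper chains (CCW output)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2:
                (ox, oy), (ax, ay) = chain[-2], chain[-1]
                # keep only left turns; pop collinear or right-turn vertices
                if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) > 0:
                    break
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

square_plus_inner = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(convex_hull(square_plus_inner))
```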


Copyright©北京勤云科技发展有限公司  京ICP备09084417号