Similar Documents (20 results)
1.
This article demonstrates how the generalisation of topographic surfaces has been formalised by means of graph theory and how this formalised approach has been integrated into an ISO standard employed within nanotechnology. By applying concepts from higher-dimensional calculus and topology, it is shown that Morse functions are the mappings ideally suited to the formal characterisation of topographic surfaces. Based on this result, a data structure termed a weighted surface network is defined that may be applied to both the characterisation and the generalisation of the topological structure of a topographic surface. Thereafter, the focus is placed on specific aspects of the standard ISO 25178-2; within this standard, change trees, a data structure similar to weighted surface networks, are used to portray the topological information of topographic surfaces. Furthermore, an approach termed Wolf pruning is used to simplify the change tree, this pruning method being equivalent to the graph-theoretic contractions by which weighted surface networks can be simplified. Finally, some practical applications of the standard ISO 25178-2 within nanotechnology are discussed.
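For readers unfamiliar with the term, the standard textbook definition of a Morse function (a general definition, not quoted from the article or from ISO 25178-2) is that every critical point is non-degenerate:

```latex
% f : S -> R is a Morse function if, at every critical point p,
% the Hessian of f at p is non-singular:
\nabla f(p) = 0 \;\Longrightarrow\; \det\!\big(H_f(p)\big) \neq 0
```

Non-degenerate critical points are isolated, which is what makes the peaks, pits and passes of a topographic surface well defined and countable.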

2.
Automating the generalisation process, a major issue for national mapping agencies, is extremely complex. Several works have proposed to deal with this complexity using a trial-and-error strategy. The performance of systems based on such a strategy depends directly on the quality of the control knowledge (i.e. heuristics) used to guide the trials. Unfortunately, defining and updating this knowledge is usually a tedious task. In this context, automatic knowledge revision can not only improve the performance of the generalisation but also allow the system to adapt automatically to various usages and to evolve when new elements are introduced. In this article, an offline knowledge revision approach is proposed, based on logging the system and analysing the resulting logs. This approach is dedicated to the revision of control knowledge expressed as production rules. We have implemented and tested this approach for the automated generalisation of groups of buildings within a generalisation model called AGENT, from initial data at a scale of approximately 1:15,000 to a target map scale of 1:50,000. The results show that our approach improves the quality of the control knowledge and thus the performance of the system. Moreover, the proposed approach is generic and can be applied to other systems based on a trial-and-error strategy, whether dedicated to generalisation or not.

3.
The introduction of automated generalisation procedures in map production systems requires that generalisation systems are capable of processing large amounts of map data in acceptable time and that the cartographic quality is similar to that of traditional map products. With respect to these requirements, we examine two complementary approaches that should improve generalisation systems currently in use by national topographic mapping agencies. Our focus is particularly on self-evaluating systems, taking as an example those systems that build on the multi-agent paradigm. The first approach aims to improve cartographic quality by utilising cartographic expert knowledge relating to spatial context. More specifically, we introduce expert rules for the selection of generalisation operations based on a classification of buildings into five urban structure types: inner city, urban, suburban, rural, and industrial and commercial areas. The second approach aims to use machine learning techniques to extract heuristics that reduce the search space and hence the time in which a good cartographic solution is reached. Both approaches are tested individually and in combination for the generalisation of buildings from map scale 1:5,000 to the target scale of 1:25,000. Our experiments show improvements in terms of efficiency and effectiveness. We provide evidence that the two approaches complement each other and that a combination of expert and machine-learnt rules gives better results than either approach individually. Both approaches are sufficiently general to be applicable to self-evaluating, constraint-based systems other than multi-agent systems, and to feature classes other than buildings. Problems have been identified that result from the difficulty of formalising cartographic quality by means of constraints for the control of the generalisation process.

4.
Our research is concerned with the automated generalisation of topographic vector databases in order to produce maps. This article presents a new agent-based generalisation model called CartACom (Cartographic generalisation with Communicating Agents), dedicated to the treatment of low-density areas where rubber-sheeting techniques are nevertheless insufficient because some eliminations or aggregations are needed. In CartACom, the objects of the initial database are modelled as agents, that is, autonomous entities that choose and apply generalisation algorithms to themselves in order to increase the satisfaction of their constraints as much as possible. The CartACom model focuses on modelling and treating relational constraints, defined as constraints that concern a relation between two objects. In order to detect and assess their relational constraints, CartACom agents are able to perceive their spatial surroundings. Moreover, to make good generalisation decisions that satisfy their relational constraints, they are able to communicate with their neighbours using predefined dialogue protocols. Finally, a hook to another agent-based generalisation model, AGENT, is provided, so that CartACom agents can handle not only their relational constraints but also their internal constraints. The CartACom model has been applied to the generalisation of low-density, heterogeneous areas such as rural areas, where space is not hierarchically organised. Examples of results obtained on real data show that it is well adapted to this application.

5.

In advanced exploration projects or operating mines, allocating capital for infill drilling programs is a significant and recurrent challenge. Within a large company, the different mine sites and projects compete for the funds available for drilling. To maximize a project's value to its company, a drillhole location optimizer can be used as an objective tool for comparing drilling campaigns. The fast semi-greedy optimizer presented here can obtain close-to-optimal solutions to the coverage problem with up to three orders of magnitude less computing time than integer programming. The heuristic approach is flexible, as it allows dynamic updating of block values once new drillholes are selected in the solution, as opposed to existing methods based on static block values. The block values used for optimization incorporate the kriging estimate and variance, the indicator estimate at cutoff grade, and distances to existing or newly selected drillholes. The heuristic approach tends to locate new drillholes within the maximum-risk areas, i.e., within less-informed zones predicted to be ore zones. Applied to different deposits, it enables, after suitable normalization, comparison of different drilling campaigns and allocation of budgets accordingly.
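As a rough illustration of coverage-style drillhole selection (a minimal greedy sketch under assumed inputs, not the authors' semi-greedy optimizer, and without the kriging-based block values or dynamic revaluation they describe), candidate holes can be picked by greatest marginal coverage gain:

```python
def greedy_coverage(candidates, budget):
    """Pick up to `budget` drillholes, each time taking the candidate that
    covers the most blocks not yet covered by earlier selections.

    `candidates` maps a drillhole id to the set of block ids it informs.
    """
    covered, selected = set(), []
    for _ in range(budget):
        best_id, best_gain = None, 0
        for hole_id, blocks in candidates.items():
            if hole_id in selected:
                continue
            gain = len(blocks - covered)          # marginal coverage of this hole
            if gain > best_gain:
                best_id, best_gain = hole_id, gain
        if best_id is None:                       # nothing adds new coverage
            break
        selected.append(best_id)
        covered |= candidates[best_id]            # update the covered blocks
    return selected, covered

# Hypothetical candidate drillholes and the blocks each would inform
candidates = {
    "DH-1": {1, 2, 3, 4},
    "DH-2": {3, 4, 5},
    "DH-3": {6, 7},
    "DH-4": {2, 5, 6},
}
print(greedy_coverage(candidates, budget=2))      # (['DH-1', 'DH-3'], {1, 2, 3, 4, 6, 7})
```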


6.
The automation of cartographic map production is still an important research field in Geographical Information Systems (GIS). With the increasing development of monitoring and decision-aid systems on computer networks and wireless networks, efficient methods are needed to visualise geographical data while respecting application constraints (accuracy, legibility, security, etc.). This paper introduces a B-spline snake model to implement the operators involved in the cartographic generalisation of lines. This model enables those operators to be performed within a continuous framework. In order to avoid local conflicts such as intersections or self-intersections, the consistency of the lines is checked and discrete operations such as segment removal are performed during the process. We apply the method to map production in the highly constrained domain of maritime navigation systems. Experimental results on marine chart generalisation lead to a discussion of generalisation robustness and quality.
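Since the model relies on a B-spline representation of lines, the following generic sketch (not the authors' snake model, and with no energy-minimisation step) shows how a digitised line can be approximated by a smoothing B-spline using SciPy; the smoothing factor `s` is an illustrative parameter:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical noisy polyline (e.g. a digitised depth contour)
t = np.linspace(0, 2 * np.pi, 40)
x = t + 0.05 * np.random.randn(t.size)
y = np.sin(t) + 0.05 * np.random.randn(t.size)

# Fit a cubic smoothing B-spline through the polyline; larger `s` smooths more
tck, u = splprep([x, y], s=0.5, k=3)

# Resample the continuous curve at 200 evenly spaced parameter values
xs, ys = splev(np.linspace(0, 1, 200), tck)
print(len(xs), len(ys))   # 200 200
```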

7.
A quantitative valuation study has been made of Australian state surveys with the specific goals of (1) establishing the 'worth' of current programs upgrading state government geoscientific information infrastructure, and (2) considering the results of the valuation in terms of strategic planning. The study has been done from the perspective of the community as a whole and has been undertaken in two phases, reflecting the different objectives of Australian state surveys in terms of the exploration industry and government policy-making. This paper reports on the second part of this valuation process, measuring the impact of upgraded survey data on government mineral policy decision processes. The valuation methodology developed is a comparative approach used to determine the net benefit foregone by not upgrading information infrastructure. The underlying premise for the geological survey study is that existing and upgraded data sets will have different probabilities that a deposit will be detected. The approach used in the valuation of geoscientific data introduces a significant technical component, with the requirement to model both favorability of mineral occurrence and probability of deposit occurrence for two different generations of government data. The estimation of mineral potential uses modern quantitative methods, including the U.S. Geological Survey three-part resource-assessment process and computer-based prospectivity modeling. To test the methodology, mineral potential was assessed for porphyry copper type deposits in part of the Yarrol Province, central Queensland. Results of the Yarrol case study support the strategy of the state surveys to facilitate effective exploration by improving accuracy and acquiring new data as part of resource management. It was determined in the Yarrol Province case study that, in going from existing to upgraded data sets, the area that would be considered permissible for the occurrence of porphyry type deposits almost doubled. The implication of this result is that large tracts of potentially mineralized land would not be identified using existing data. Results of the prospectivity modeling showed a marked increase in the number of exploration targets and in target rankings using the upgraded data set. A significant reduction in discovery risk is also associated with the upgraded data set, a conclusion supported by the fact that known mines with surface exposure are not identified in prospectivity modeling using the existing data sets. These results highlight the absence in the existing data sets of information critical for the identification of prospective ground.

Quantitative resource assessment and computer-based prospectivity modeling are seen as complementary processes that provide support for the increasingly sophisticated needs of Australian survey clients. Significant additional gains to the current value of geoscientific data can be achieved through the in-house analysis and characterization of individual data sets, the integration and interpretation of data sets, and the incorporation of information on geological uncertainty.

8.
Abstract

The European Commission (EC) programme ‘Co-ordination of Information on the Environment’ (CORINE) includes a project to map the land cover of member states. The CORINE map essentially combines land cover and land use, giving 44 separate classes in vector form, displayed at a scale of 1:100,000 with a minimum mappable unit of 25 ha. The Institute of Terrestrial Ecology (ITE) has compiled a digital land cover map of Great Britain (LCMGB) from a classification of Landsat-TM data, resampled to a 25 m raster, with a minimum mappable unit of 0.125 ha and 25 cover types. This paper describes a pilot study that demonstrates successful spatial generalisation, combined with contextual interpretation, to convert the LCMGB to CORINE specifications using semi-automated techniques within a GIS environment.

9.
A spatial accessibility computation method based on a heuristic A* algorithm and grid partitioning
This paper proposes a micro-scale accessibility computation method suited to studying individual or commercial locations within a city. The core of the method is to partition the study area into a regular grid of equal-sized cells and to characterise the spatial distribution of accessibility across the whole area by computing an accessibility index for each cell. In the accessibility computation, the road density and land-use state within each cell are used to estimate the cell's traffic cost, and a heuristic A* spatial search algorithm is introduced to compute the traffic cost of paths between cells; appropriate heuristic information is added to improve search efficiency and to make the results better match practical needs. Finally, based on the proposed method, a computation program was developed with the GIS development toolkit ArcEngine, multi-source data were collected, and a case study was carried out computing the accessibility of the business districts of Guangzhou.
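As an illustration of heuristic A* search over a cost grid (a minimal sketch with hypothetical cell costs, not the authors' ArcEngine implementation), the code below finds the cheapest path cost between two cells of a grid in which each cell carries a traffic cost:

```python
import heapq

def a_star(grid_cost, start, goal):
    """A* over a 4-connected grid; grid_cost[r][c] is the traversal cost of a cell."""
    rows, cols = len(grid_cost), len(grid_cost[0])

    def heuristic(cell):
        # Manhattan distance as an admissible heuristic (assumes minimum cell cost >= 1)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(heuristic(start), 0.0, start)]   # entries are (f, g, cell)
    best_g = {start: 0.0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g                               # total traffic cost of the cheapest path
        if g > best_g.get(cell, float("inf")):
            continue                               # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid_cost[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + heuristic((nr, nc)), ng, (nr, nc)))
    return float("inf")

# Hypothetical 4x4 cost surface derived from road density and land use
costs = [[1, 2, 2, 1],
         [1, 5, 5, 1],
         [1, 1, 1, 1],
         [4, 4, 1, 1]]
print(a_star(costs, (0, 0), (3, 3)))
```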

10.
Summary. After a short historical outline of the development of ideas on the compaction of clays and shales, it is shown that the exponential porosity-depth dependence of compacted shales expressed by Athy's law can be derived using standard methods of statistical physics. The main result of the paper is that the exponential compaction law expresses the maximum-entropy equilibrium state of the pores in the rock; that is, compaction is an irreversible process in which clay particles tend towards a statistically defined final equilibrium. Connections with the classical theory of consolidation are pointed out.
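For reference, Athy's law mentioned above is usually written as an exponential porosity-depth relation (standard form, not reproduced from the paper):

```latex
% Athy's law: porosity phi decreases exponentially with burial depth z,
% where phi_0 is the surface porosity and c a compaction constant.
\phi(z) = \phi_0 \, e^{-c z}
```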

11.
Environmental simulation models need automated geographic data reduction methods to optimize the use of high-resolution data in complex environmental models. Advanced map generalization methods have been developed for multiscale geographic data representation. In map generalization, positional, geometric, and topological constraints are the focus, in order to improve map legibility and the communication of geographic semantics. In the context of environmental modelling, domain criteria and constraints need to be considered in addition to the spatial criteria. Currently, in the absence of domain-specific generalization methods, modellers resort to ad hoc methods of manual digitization or use cartographic methods available in off-the-shelf software. Such manual methods are not feasible when large data sets are to be processed, thus limiting modellers to single-scale representations. Automated map generalization methods can rarely be used with confidence because simplified data sets may violate domain semantics and may also result in suboptimal model performance. For best modelling results, it is necessary to prioritize domain criteria and constraints during data generalization. Modellers should also be able to automate the generalization techniques and explore the trade-off between model efficiency and model simulation quality for alternative versions of input geographic data at different geographic scales. Based on our long-term research with experts in the analytic element method of groundwater modelling, we developed the multicriteria generalization (MCG) framework as a constraint-based approach to automated geographic data reduction. The MCG framework is based on the spatial multicriteria decision-making paradigm, since multiscale data modelling is too complex to be fully automated and should be driven by modellers at each stage. Apart from a detailed discussion of the theoretical aspects of the MCG framework, we discuss two groundwater data modelling experiments that demonstrate how MCG is not just a framework for automated data reduction but an approach for systematically exploring model performance at multiple geographic scales. Experimental results clearly indicate the benefits of MCG-based data reduction and encourage us to expand the scope of MCG and implement it for multiple application domains.

12.
The paper describes the development of a new methodological approach for simulating geographic processes through a data model that represents a process. This methodology complements existing approaches to dynamic modelling, which focus on the states of the system at each time step, by storing and representing the processes that are implicit in the model. The data model, called nen, directs existing modelling approaches towards representing and storing process information, which provides advantages for querying and analysing processes. The flux simulation framework was created using the nen data model to represent processes. This simulator includes basic classes for developing a domain-specific simulation and a set of query tools for interrogating the results of a simulation. The methodology is prototyped with a watershed runoff simulation.

13.
Fixed sample plots were established in a 4-year-old Chinese fir (Cunninghamia lanceolata) plantation at Yangkou Forest Farm, Fujian. Trees selected for pruning were pruned once per year, with pruning intensity defined as the removal of all branches below the point where the stem diameter reaches one of four specified values (6 cm, 8 cm, 10 cm, and 12 cm). Two years after the pruning trial began, the growth of Chinese fir decreased markedly with increasing pruning intensity. Under the 6 cm pruning intensity (stand density 1200 trees·hm^-2), the mean increments of tree height, diameter at breast height, and individual stem volume of pruned trees were all significantly lower than those of unpruned trees. Two years after pruning, understorey plant cover, biomass, total species number, and the Shannon-Wiener indices of the shrub layer, the herb layer, and the understorey as a whole were higher under all pruning treatments than in the unpruned control, whereas liana-layer plants, having been largely cut away during pruning, showed lower species numbers and diversity indices than the control. For pruning of relatively productive 4-year-old young Chinese fir stands at a density of 1200 trees·hm^-2 in northern Fujian, a pruning intensity of 10 cm or 12 cm is recommended.

14.
15.
Spatial optimization techniques are commonly used for regionalization problems, often represented as p-regions problems. Although various spatial optimization approaches have been proposed for finding exact solutions to p-regions problems, these approaches are not practical when applied to large problems. Alternatively, various heuristics provide effective ways to find near-optimal solutions to the p-regions problem; however, most heuristic approaches are specifically designed for particular geographic settings. This paper proposes a new heuristic approach named Automated Zoning Procedure-Center Interchange (AZP-CI) to solve the p-functional regions problem (PFRP), which constructs regions by combining small areas that share common characteristics with predefined functional centers and have tight connections among themselves through spatial interaction. The AZP-CI consists of two subprocesses. First, the dissolving/splitting process enhances diversification and thereby produces an extensive exploration of the solution space. Second, the standard AZP locally improves the objective value. The AZP-CI was tested using randomly simulated datasets and two empirical datasets of different sizes. These evaluations indicate that AZP-CI outperforms two established heuristic algorithms, the AZP and simulated annealing, in terms of both solution quality and consistency in producing reliable solutions regardless of initial conditions. It is also noted that AZP-CI, as a general heuristic method, can be easily extended to other regionalization problems. Furthermore, AZP-CI could be a more scalable algorithm for solving computationally intensive spatial optimization problems when combined with cyberinfrastructure.
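To make the local-improvement subprocess concrete, here is a minimal sketch of an AZP-style pass (a generic illustration with toy data, not the published AZP-CI code, and without the dissolving/splitting or functional-center logic): border areas are moved between regions whenever the move lowers total within-region heterogeneity while keeping the donor region contiguous and non-empty.

```python
from collections import deque

def region_ss(members, value):
    """Sum of squared deviations of member attribute values from the region mean."""
    vals = [value[a] for a in members]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals)

def contiguous(members, adjacency):
    """True if the set of areas forms a single connected component."""
    members = set(members)
    start = next(iter(members))
    seen, queue = {start}, deque([start])
    while queue:
        a = queue.popleft()
        for b in adjacency[a]:
            if b in members and b not in seen:
                seen.add(b)
                queue.append(b)
    return seen == members

def azp_pass(assignment, adjacency, value, max_passes=20):
    """Greedy AZP-style improvement: move a border area to a neighbouring region
    when that lowers total within-region sum of squares and keeps the donor
    region contiguous and non-empty."""
    for _ in range(max_passes):
        improved = False
        for area in list(assignment):
            region = assignment[area]
            donor = [a for a in assignment if assignment[a] == region and a != area]
            if not donor or not contiguous(donor, adjacency):
                continue          # moving this area would empty or break its region
            for target in {assignment[n] for n in adjacency[area]} - {region}:
                receiver = [a for a in assignment if assignment[a] == target]
                before = region_ss(donor + [area], value) + region_ss(receiver, value)
                after = region_ss(donor, value) + region_ss(receiver + [area], value)
                if after < before:
                    assignment[area] = target
                    improved = True
                    break
        if not improved:
            break
    return assignment

# Hypothetical toy data: six areas along a chain, two initial regions
adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
value = {1: 1.0, 2: 1.1, 3: 0.9, 4: 5.0, 5: 5.2, 6: 4.8}
assignment = {1: "A", 2: "A", 3: "A", 4: "A", 5: "B", 6: "B"}
print(azp_pass(assignment, adjacency, value))   # area 4 moves to region B
```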

16.
Using the analytic hierarchy process (AHP) for multi-criteria evaluation has particular advantages, while geographic information systems (GIS) are well suited to spatial analysis. Combining AHP with GIS therefore provides an effective approach for mineral potential mapping. The selection of potential areas for exploration is a complex process in which many diverse criteria must be considered. In this article, AHP and GIS are used to produce potential maps for porphyry Cu mineralization on the basis of criteria derived from geologic, geochemical, geophysical, and remote sensing data, including alteration and faults. Each criterion was evaluated with the aid of AHP and the result mapped with GIS. This approach allows a mixture of quantitative and qualitative information to be used for decision-making. The results provide acceptable outcomes for porphyry copper exploration.
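To illustrate the AHP step in such a workflow (a generic sketch; the comparison matrix and criteria below are hypothetical, not the authors' values), criterion weights can be derived from a pairwise comparison matrix via its principal eigenvector, with Saaty's consistency ratio as a sanity check:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four exploration criteria
# (geology, geochemistry, geophysics, alteration); A[i, j] = importance of i over j.
A = np.array([[1.0, 3.0, 5.0, 3.0],
              [1/3, 1.0, 3.0, 1.0],
              [1/5, 1/3, 1.0, 1/3],
              [1/3, 1.0, 3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalised criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index for n criteria
print("weights:", np.round(weights, 3))
print("consistency ratio:", round(ci / ri, 3))   # < 0.1 is usually considered acceptable
```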

17.
Abstract

A technique is discussed for obtaining a contour tree efficiently as a byproduct of an operational contouring system. This tree may then be used to obtain contour symbolism or interval statistics, as well as for further geomorphological study. Alternatively, the tree may be obtained without the computational expense of detailed contour interpolation. The contouring system proceeds by assuming a Voronoi neighbourhood or domain about each data point and generating a dual-graph Delaunay triangulation accordingly. Since a triangulation may be traversed in tree order, individual triangles may be processed in a guaranteed top-to-bottom sequence on the map. At the active edge of the map under construction, a linked list is maintained of the contour ‘stubs’ available to be updated by the next triangle processed. Any new contour segment may extend an existing stub, open two new stubs, or close (connect) two previous stubs. Extending this list of edge links backwards into the existing map permits storage of contour segments within main memory until a dump (either to plotter or disc) is required by memory overflow, contour closure, contour labelling, or job completion. Maintenance of an appropriate status link permits the immediate distinction of local closure (where the newly connected segments are themselves not connected) from global closure (where a contour loop is completed and no longer required in memory). The resulting contour map may be represented as a tree, the root node being the bounding contour of the map. The nature of the triangle-ordering procedure ensures that inner contours are closed before enclosing ones, and hence a preliminary contour tree may be generated as conventional contour generation occurs. A final scan through the resulting tree eliminates any inconsistencies.

18.
19.
Does perception match reality when people judge the flatness of large areas, such as U.S. states? The authors conducted a geomorphometric analysis of the contiguous United States, employing publicly available geographic software, Shuttle Radar Topography Mission (SRTM) elevation data, and a new algorithm for measuring flatness. Each 90-meter cell was categorized as not flat, flat, flatter, or flattest, and each state was measured in terms of percentage flat, flatter, and flattest as well as absolute area in each category. Ultimately, forty-eight states plus the District of Columbia were mapped and ranked according to these values. Keywords: flatness, U.S. states, slope, Kansas, Florida.
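As a generic illustration of slope-based flatness classification (not the published algorithm, whose flatness measure and class thresholds differ), the sketch below derives slope from a DEM array with NumPy and bins cells into flatness classes using hypothetical cut-offs:

```python
import numpy as np

def classify_flatness(dem, cell_size=90.0, thresholds=(0.3, 0.6, 1.2)):
    """Classify each DEM cell by slope (degrees) into flatness classes.

    `thresholds` are hypothetical slope cut-offs (in degrees) for the
    'flattest', 'flatter', and 'flat' classes; anything steeper is 'not flat'.
    """
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    classes = np.full(dem.shape, "not flat", dtype=object)
    classes[slope_deg < thresholds[2]] = "flat"
    classes[slope_deg < thresholds[1]] = "flatter"
    classes[slope_deg < thresholds[0]] = "flattest"
    return classes, slope_deg

# Hypothetical 4x4 elevation grid (metres) on 90 m cells
dem = np.array([[100, 100, 101, 103],
                [100, 100, 102, 106],
                [100, 101, 104, 110],
                [101, 102, 107, 115]])
classes, slope = classify_flatness(dem)
print(np.round(slope, 2))
print(classes)
```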

20.
To enhance the quality of oil- and gas-resource assessments and to reduce the risks in oil and gas exploration, a number of assessment techniques have been developed. Unfortunately, these techniques have not always been effective in the timely transfer of information, and the time required for preparing assessments does not always allow the necessary high-quality data to be generated. To overcome this problem, a method based on an analysis of the phase state of oil and the dynamics of fluids during secondary migration of hydrocarbons is proposed. The phase state of the oil and the fluid potential for secondary migration are estimated initially for each prospect, together with the extent of the drainage area. On the basis of these estimates, statistical calculations can be made for the generation and expulsion of hydrocarbons. As a result, more reliable data are available for prospect assessment. The application of this method has practical significance in that it brings the role of basin modeling in prospect assessment into full play, increases the reliability of petroleum-resource assessments, and reduces the risks in exploration. A case study from the Beitang region in eastern China is presented.
