Similar Literature
1.
Quad-pol data are generally acknowledged as providing the highest performance in ship detection applications using SAR data. Yet quad-pol data have half the swath width of single- and dual-pol data and are thus less useful for maritime surveillance, where wide-area coverage is crucial. Compact polarimetry (CP) has been proposed as a compromise between swath width and polarization information. The circular-transmit-linear-receive (CTLR) CP configuration has certain engineering advantages over other CP configurations. CP data may be used to reconstruct a reduced quad-pol covariance matrix (termed pseudo-quad, or PQ, data), and the potential of these data in terrestrial applications has recently been demonstrated. We present some of the first results on the use of CTLR data and reconstructed quad-pol data for ship detection. We use Radarsat-2 fine-quad (FQ) data to examine 76 ships over a range of incidence angles and ship orientations at low to moderate wind speeds. We examined the ship detection performance of full quad-pol and full-PQ data; several dual-pol configurations suggested in the literature; HV and PQ HV; and the raw CTLR data. We find that the ship detection performance of the PQ HV data is the strongest of all the detectors we examined, with performance comparable to quad-pol data. Other strong performers were HV and CTLR data.

2.
For security and confidentiality, vector spatial data must be encrypted. The existing practice is to encrypt the entire data file, which destroys the vector spatial data structure and prevents viewing of the attribute data. This paper proposes a method that encrypts only the coordinate data without changing the vector spatial data structure, protecting the data while leaving the vector structure intact. The user key is hashed with SHA-512, and the hash key is scrambled with Gaussian random numbers to generate the key used to encrypt the coordinate data. The vertex sequence of the vector spatial data is first read, and a Haar transform is applied to the vertex coordinate sequence; the mean and difference coefficients of the Haar transform are encrypted with the key above, and the inverse Haar transform then yields the encrypted coordinates; finally, the vertex sequence is scrambled with Gaussian random numbers to obtain the encrypted vector spatial data. Experimental results show that the coordinates of the vector spatial data are encrypted while the file structure and attribute data remain completely unchanged, with high runtime efficiency; users who hold the key can decrypt the coordinates and recover the original vector spatial data, giving high security.
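A minimal sketch of the coordinate-encryption pipeline described above: hash the user key with SHA-512, derive a Gaussian key stream, Haar-transform the coordinate sequence, offset the mean and difference coefficients, and inverse-transform. The additive cipher and the key-stream parameters here are illustrative assumptions, not the paper's exact scheme:

```python
import hashlib
import random


def derive_key_stream(user_key: str, n: int) -> list:
    # SHA-512 the user key, then use the digest to seed a Gaussian
    # scrambler (illustrative stand-in for the paper's key derivation).
    digest = hashlib.sha512(user_key.encode()).digest()
    rng = random.Random(digest)
    return [rng.gauss(0.0, 1000.0) for _ in range(n)]


def haar(seq):
    # One level of the Haar transform: pairwise means and differences.
    means = [(a + b) / 2.0 for a, b in zip(seq[0::2], seq[1::2])]
    diffs = [(a - b) / 2.0 for a, b in zip(seq[0::2], seq[1::2])]
    return means, diffs


def inverse_haar(means, diffs):
    out = []
    for m, d in zip(means, diffs):
        out.extend([m + d, m - d])
    return out


def encrypt_coords(coords, user_key):
    means, diffs = haar(coords)
    ks = derive_key_stream(user_key, len(means) + len(diffs))
    enc_means = [m + k for m, k in zip(means, ks[:len(means)])]
    enc_diffs = [d + k for d, k in zip(diffs, ks[len(means):])]
    return inverse_haar(enc_means, enc_diffs)  # still valid coordinates


def decrypt_coords(enc, user_key):
    means, diffs = haar(enc)
    ks = derive_key_stream(user_key, len(means) + len(diffs))
    dec_means = [m - k for m, k in zip(means, ks[:len(means)])]
    dec_diffs = [d - k for d, k in zip(diffs, ks[len(means):])]
    return inverse_haar(dec_means, dec_diffs)
```

Because the output of `encrypt_coords` is an ordinary coordinate list, the file structure and attribute data can stay untouched, matching the property the abstract claims.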

3.
In the workflow of 3D laser-scanning data processing, data acquisition and data registration are the basic research topics. Following the proposed surface-feature extraction workflow, this paper reviews the state of research on grid construction, data reduction, data segmentation, and surface fitting in 3D laser-scanning data processing, and outlines the relevant fundamental theory and research directions.

4.
Lithology identification in the Lingquan Basin by integrating multi-source geoscience information
Taking the Lingquan Basin near Manzhouli as an example, this study investigates methods for lithology identification using airborne γ-ray spectrometry data, aeromagnetic data, and remote sensing data (TM and MSS): (1) extracting lithological information from γ-ray spectrometry and MSS data via the K-L transform; (2) extracting lithological information from γ-ray spectrometry and TM data via the IHS transform.
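The K-L transform in method (1) is the Karhunen-Loève (principal component) transform. A minimal two-band sketch, using a closed-form eigendecomposition of the 2×2 covariance matrix (the band values in the usage below are made up; a real application would operate on full image bands):

```python
import math


def kl_transform_2band(band1, band2):
    """K-L (principal component) transform of two co-registered bands."""
    n = len(band1)
    m1 = sum(band1) / n
    m2 = sum(band2) / n
    # Sample covariance matrix [[c11, c12], [c12, c22]].
    c11 = sum((x - m1) ** 2 for x in band1) / n
    c22 = sum((y - m2) ** 2 for y in band2) / n
    c12 = sum((x - m1) * (y - m2) for x, y in zip(band1, band2)) / n
    # Rotation angle that diagonalises a symmetric 2x2 matrix.
    theta = 0.5 * math.atan2(2.0 * c12, c11 - c22)
    ct, st = math.cos(theta), math.sin(theta)
    # Project centred pixels onto the principal axes.
    pc1 = [ct * (x - m1) + st * (y - m2) for x, y in zip(band1, band2)]
    pc2 = [-st * (x - m1) + ct * (y - m2) for x, y in zip(band1, band2)]
    return pc1, pc2
```

The resulting components are uncorrelated, which is what makes the transform useful for concentrating lithological signal into a few components.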

5.
6.
ABSTRACT

Earth observations and model simulations are generating big multidimensional array-based raster data. However, it is difficult to efficiently query these big raster data due to the inconsistency among the geospatial raster data model, the distributed physical data storage model, and the data pipeline in distributed computing frameworks. To efficiently process big geospatial data, this paper proposes a three-layer hierarchical indexing strategy to optimize Apache Spark with the Hadoop Distributed File System (HDFS) from the following aspects: (1) improve I/O efficiency by adopting a chunking data structure; (2) keep the workload balanced with high data locality by building a global index (k-d tree); (3) enable Spark and HDFS to natively support geospatial raster data formats (e.g., HDF4, NetCDF4, GeoTIFF) by building a local index (hash table); (4) index the in-memory data to further improve geospatial data queries; (5) develop a data repartition strategy to tune the query parallelism while keeping high data locality. These strategies are implemented as customized RDDs and evaluated by comparing their performance with that of Spark SQL and SciSpark. The proposed indexing strategy can be applied to other distributed frameworks or cloud-based computing systems to natively support big geospatial data queries with high efficiency.
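The chunking plus local hash-table index in aspects (1) and (3) can be sketched as a mapping from array cells to chunk keys and byte offsets. The chunk shape and per-chunk sizes below are invented for illustration; the paper's actual layout for HDF4/NetCDF4/GeoTIFF is format-specific:

```python
CHUNK_ROWS, CHUNK_COLS = 100, 100  # chunk (tile) shape, illustrative


def build_local_index(n_rows, n_cols, bytes_per_chunk):
    """Hash table: (chunk_row, chunk_col) -> byte offset inside the file."""
    index = {}
    offset = 0
    for cr in range((n_rows + CHUNK_ROWS - 1) // CHUNK_ROWS):
        for cc in range((n_cols + CHUNK_COLS - 1) // CHUNK_COLS):
            index[(cr, cc)] = offset
            offset += bytes_per_chunk
    return index


def locate(row, col, index):
    """Translate an array cell into its chunk key and the chunk's byte offset."""
    key = (row // CHUNK_ROWS, col // CHUNK_COLS)
    return key, index[key]
```

With such an index, a query touches only the chunks that intersect its region, which is what improves I/O efficiency and allows the scheduler to place tasks near the data.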

7.
Least-squares collocation may be used for the estimation of spherical harmonic coefficients and their error and error correlations from GOCE data. Due to the extremely large number of data, this requires the use of the so-called method of Fast Spherical Collocation (FSC), which requires that the data be gridded equidistantly on each parallel and have the same uncorrelated noise along the parallel. A consequence of this is that error-covariances will be zero except between coefficients of the same signed order (i.e., the same order and the same coefficient type, CC or SS). If the data distribution and the characteristics of the data noise are symmetric with respect to the equator, then, within a given order and coefficient type, the error-covariances among coefficients whose degrees are of different parity also vanish. The deviation from this “ideal” pattern has been studied using data-sets of second-order radial derivatives of the anomalous potential. Fewer than 17,000 points were used, having an equi-angular or an equal-area distribution, or being associated with points on a realistic GOCE orbit but close to the nodes of a grid. The data were treated as having either correlated or uncorrelated noise, and three different signal covariance functions were considered. Grids with and without data in the polar areas were used. Using the functionals associated with the data, error estimates of coefficients and error-correlations between coefficients were calculated up to maximal degree and order 90. As expected, for the data distributions with no data in the polar areas, the error estimates were larger than when the polar areas contained data. In all cases it was found that only the error-correlations between coefficients of the same order were significantly different from zero (up to 88%). Error-correlations were significantly larger when the data had been regarded as having non-zero error-correlations. The error-correlations were also largest when the covariance function with the largest signal covariance distance was used. The main finding of this study is that correlated noise has a more pronounced impact on gridded data than on data distributed along a realistic GOCE orbit. This is useful information for methods using gridded data, such as FSC.

8.
张猛, 曾永年. 《遥感学报》 (Journal of Remote Sensing), 2018, 22(1): 143-152
Remote sensing estimation and analysis of vegetation net primary production (NPP) depends on remote sensing data with high spatial and temporal resolution. However, because medium-to-high-resolution data are constrained by satellite revisit cycles and weather, continuous time-series data are difficult to obtain in southern China, which limits the accuracy of regional NPP estimation. This paper therefore proposes a method for estimating NPP at high spatiotemporal resolution by combining multi-source remote sensing spatiotemporal data fusion with the CASA model. First, Landsat 8 OLI and MODIS (MOD13Q1) data were blended with a spatiotemporal fusion method to obtain a time series of fused Landsat 8 OLI data. Then, using the fused data and the CASA model, regional vegetation NPP was estimated for the core area of the Chang-Zhu-Tan urban agglomeration. The results show that the 30 m NPP estimated from the fused Landsat time series retains fine spatial detail, and the correlation coefficient between estimated and measured NPP reaches 0.825, in good agreement with the field measurements.
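In the CASA model used above, NPP is the product of absorbed photosynthetically active radiation (APAR) and a light-use efficiency ε. A minimal per-pixel sketch, assuming the common simplification that FPAR is a clamped linear function of NDVI and that PAR is half of incoming solar radiation (the parameter values are illustrative, not the paper's calibration):

```python
def casa_npp(sol, ndvi, epsilon, ndvi_min=0.05, ndvi_max=0.95):
    """NPP = APAR * epsilon, with APAR = SOL * FPAR * 0.5.

    sol     : incoming solar radiation for the period
    ndvi    : vegetation index for the pixel
    epsilon : light-use efficiency (g C per unit energy)
    """
    # Clamp NDVI, then map it linearly to FPAR in [0, 0.95].
    ndvi = min(max(ndvi, ndvi_min), ndvi_max)
    fpar = 0.95 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)
    apar = sol * fpar * 0.5  # 0.5 = assumed PAR fraction of solar radiation
    return apar * epsilon
```

Running this per pixel over the fused 30 m NDVI time series is what yields the high-resolution NPP maps the abstract describes.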

9.
A computer-efficient global data file, which contains digitized information that enables identification of a given latitude/longitude defined point as over land or over water, was generated from a data base which defines the world's shoreline. The method used in the generation of this land-sea boundary data map and its data structure are discussed. The data file was originally generated on a Control Data Corporation (CDC) computer, but it has been transported to other computer systems, including IBM, DEC/VAX, UNIVAC and Cray computers. The land-sea boundary map also includes information on islands and inland lakes. The resolution of this map is 5′×5′, or an equivalent of 9 km square surface blocks at the equator. The software to access this data base is structured to be easily transportable to different computers. This data base was used in the generation of the Seasat Geophysical Data Record (GDR) to identify whether a spaceborne radar altimeter measurement was over land or over ocean.
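The 5′×5′ cell addressing described above can be sketched as a simple row-major index into the global grid (the indexing convention, north-pole origin, is an assumption; the actual Seasat file layout is not reproduced here):

```python
CELL_MIN = 5                     # cell size in arc-minutes
N_ROWS = 180 * 60 // CELL_MIN    # 2160 rows of latitude
N_COLS = 360 * 60 // CELL_MIN    # 4320 columns of longitude


def cell_index(lat, lon):
    """Row-major index of the 5'x5' cell containing (lat, lon) in degrees."""
    row = int((90.0 - lat) * 60 // CELL_MIN)          # row 0 at the north pole
    col = int(((lon + 180.0) % 360.0) * 60 // CELL_MIN)
    row = min(row, N_ROWS - 1)                        # handle lat = -90 edge
    return row * N_COLS + col
```

A land/water lookup is then a single bit or byte read at this index, which is what makes the file computer-efficient for per-measurement altimeter editing.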

10.
Abstract

Many of the traditional data visualization techniques, which proved to be supportive for exploratory analysis of datasets of moderate sizes, fail to fulfil their function when applied to large datasets. There are two approaches to coping with large amounts of data: data selection, when only a portion of data is displayed, and data aggregation, i.e. grouping data items and considering the groups instead of the original data. None of these approaches alone suits the needs of exploratory data analysis, which requires consideration of data on all levels: overall (considering a dataset as a whole), intermediate (viewing and comparing collective characteristics of arbitrary data subsets, or classes), and elementary (accessing individual data items). Therefore, it is necessary to combine these approaches, i.e. build a tool showing the whole set and arbitrarily defined subsets (object classes) in an aggregated way and superimposing this with a representation of arbitrarily selected individual data items.

We have achieved such a combination of approaches by modifying the parallel coordinate plot technique. These modifications are described and analysed in the paper.
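The aggregation side of the combination above can be sketched as computing per-axis class envelopes (min/median/max) that would be drawn as bands on the parallel coordinate plot, with selected individual items overlaid as polylines. The data items and axis names here are invented:

```python
from statistics import median


def class_envelope(items, axes):
    """Per-axis (min, median, max) summary of a class of data items.

    items: list of dicts mapping axis name -> value
    axes:  axis names in plot order
    """
    return {ax: (min(it[ax] for it in items),
                 median(it[ax] for it in items),
                 max(it[ax] for it in items))
            for ax in axes}
```

Drawing one envelope per class keeps the overall and intermediate levels readable for large datasets, while individual items can still be superimposed for the elementary level.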

11.
Abstract

Big data poses a major challenge for many fields with the rapid expansion of large-volume, complex, and fast-growing sources of data. Mining big data is required for exploring the essence of the data and providing meaningful information. To this end, we have previously introduced the theory of physical fields to explore relations between objects in data space and proposed a framework of data field to discover the underlying distribution of big data. This paper gives an overview of big data mining by the use of data field. It mainly discusses the theory of data field and several applications, including feature selection for high-dimensional data, clustering, and the recognition of facial expressions in human-computer interaction. In these applications, the data field is employed to capture the intrinsic distribution of data objects for selecting meaningful features, fast clustering, and describing variation of facial expression. It is expected that our contributions will help overcome these problems in big data mining.
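The data-field potential referred to above is typically a Gaussian-like short-range field, with each data object contributing a mass-weighted, distance-decaying term. A sketch under that assumption (σ and the masses here are illustrative choices, not the paper's calibrated values):

```python
import math


def potential(x, points, sigma=1.0, masses=None):
    """Data-field potential at x: sum of m_i * exp(-(||x - x_i|| / sigma)^2)."""
    if masses is None:
        masses = [1.0] * len(points)  # unit mass per data object
    total = 0.0
    for p, m in zip(points, masses):
        dist = math.dist(x, p)
        total += m * math.exp(-((dist / sigma) ** 2))
    return total
```

Regions of high potential mark dense clusters of data objects, which is what the applications above exploit for feature selection and fast clustering.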

12.
Three methods to correct for the atmospheric propagation delay in very-long-baseline interferometry (VLBI) measurements were investigated. In the analysis, the NASA R&D experiments from January 1993 to June 1995 were used. The methods were compared in correcting for the excess propagation delay due to water vapour, the “wet” delay, at one of the sites, the Onsala Space Observatory on the west coast of Sweden. The three methods were: (1) estimating the wet delay using the VLBI data themselves; (2) inferring the wet delay from water vapour radiometer (WVR) data; and (3) using independent estimates based on data from the global positioning system (GPS). Optimum elevation cutoff angles were 22° and 26° when using WVR and GPS data, respectively. The results were found to be similar in terms of reproducibility of the estimated baseline lengths. The shortest baselines tend to benefit from external measurements, whereas the lack of improvement in the longer baselines may be partly due to the large amount of data discarded when removing observations at low elevation angles. Over a 2-week period of intensive measurements, the two methods using external data showed an overall improvement, for all baseline lengths, compared to the first method. This indicates that there are long-term systematic errors in the wet delay data estimated using WVR and GPS data. Received: 27 October 1998 / Accepted: 20 May 1999

13.
Recent technological advances in geospatial data gathering have created massive data sets with better spatial and temporal resolution than ever before. These large spatiotemporal data sets have motivated a challenge for Geoinformatics: how to model changes and design good quality software. Many existing spatiotemporal data models represent how objects and fields evolve over time. However, to properly capture changes, it is also necessary to describe events. As a contribution to this research, this article presents an algebra for spatiotemporal data. Algebras give formal specifications at a high-level abstraction, independently of programming languages. This helps to develop reliable and expressive applications. Our algebra specifies three data types as generic abstractions built on real-world observations: time series, trajectory and coverage. Based on these abstractions, it defines object and event types. The proposed data types and functions can model and capture changes in a large range of applications, including location-based services, environmental monitoring, public health, and natural disasters.
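The three generic abstractions named above can be sketched as simple data types; the field names and access methods here are illustrative stand-ins for the paper's formal algebra:

```python
from dataclasses import dataclass


@dataclass
class TimeSeries:
    """Values of one attribute at a fixed location over time."""
    times: list
    values: list

    def at(self, t):
        return self.values[self.times.index(t)]


@dataclass
class Trajectory:
    """Positions of a moving object over time."""
    times: list
    positions: list  # (x, y) tuples

    def at(self, t):
        return self.positions[self.times.index(t)]


@dataclass
class Coverage:
    """A field: values over a spatial extent at one time."""
    time: object
    cells: dict  # (row, col) -> value

    def at(self, cell):
        return self.cells[cell]
```

Object and event types would then be defined over these: an object carries a trajectory or time series, and an event marks a change detected in them.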

14.
A precise knowledge of the crop distribution in the landscape is crucial for the agricultural sector to inform better management and logistics. Crop-type maps are often derived by the supervised classification of satellite imagery using machine learning models. The choice of data sampled during the data collection phase of building a classification model has a tremendous impact on a model's performance, and such data are usually collected via roadside surveys throughout the area of interest. However, the large spatial extent and the varying accessibility of fields often make the acquisition of appropriate training data sets difficult. As such, in situ data are often collected on a best-effort basis, leading to inefficiencies, sub-optimal accuracies, and unnecessarily large sample sizes. This highlights the need for new, more efficient tools to guide data collection. Here, we address three tasks that one commonly faces when planning to collect in situ data: which survey route to select among a set of logistically feasible routes; which fields are the most relevant to collect along the chosen survey route; and how to best augment existing in situ data sets with additional observations. Our findings show that the normalised Moran's I index is a useful indicator for choosing the survey route, and that sequential exploration methods can identify the most important fields to survey on that route. The provided recommendations are flexible, overcome the main logistical constraints associated with in situ data collection, yield accurate results, and could be incorporated in a mobile application to assist data collection in real time.
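Global Moran's I, used above to compare candidate survey routes, measures spatial autocorrelation of values along the route. A minimal implementation (the field values and the binary adjacency weights in the usage are invented for illustration):

```python
def morans_i(values, weights):
    """Global Moran's I:
    (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2
    where W is the sum of all weights w_ij.
    """
    n = len(values)
    m = sum(values) / n
    dev = [x - m for x in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_total = sum(sum(row) for row in weights)
    return (n / w_total) * num / den
```

Positive values indicate that neighbouring fields have similar crop values; a route whose fields are less spatially redundant is a better candidate for sampling.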

15.
ABSTRACT

Open data are currently a hot topic and are associated with realising ambitions such as a more transparent and efficient government, solving societal problems, and increasing economic value. To describe and monitor the state of open data in countries and organisations, several open data assessment frameworks were developed. Despite high scores in these assessment frameworks, the actual (re)use of open government data (OGD) fails to live up to its expectations. Our review of existing open data assessment frameworks reveals that these only cover parts of the open data ecosystem. We have developed a framework, which assesses open data supply, open data governance, and open data user characteristics holistically. This holistic open data framework assesses the maturity of the open data ecosystem and proves to be a useful tool to indicate which aspects of the open data ecosystem are successful and which aspects require attention. Our initial assessment in the Netherlands indicates that the traditional geographical data perform significantly better than non-geographical data, such as healthcare data. Therefore, open geographical data policies in the Netherlands may provide useful cues for other OGD strategies.

16.
Gravity gradient modeling using gravity and DEM
A model of the gravity gradient tensor at aircraft altitude is developed from the combination of ground gravity anomaly data and a digital elevation model. The gravity data are processed according to various operational solutions to the boundary-value problem (numerical integration of Stokes’ integral, radial-basis splines, and least-squares collocation). The terrain elevation data are used to reduce free-air anomalies to the geoid and to compute a corresponding indirect effect on the gradients at altitude. We compare the various modeled gradients to airborne gradiometric data and find differences of the order of 10–20 E (SD) for all gradient tensor elements. Our analysis of these differences leads to a conclusion that their source may be primarily measurement error in these particular gradient data. We have thus demonstrated the procedures and the utility of combining ground gravity and elevation data to validate airborne gradiometer systems.

17.
ABSTRACT

Using Artl@s as an example of a project that relies on volunteered geographic information (VGI), this article examines the specific challenges that exist, beyond those frequently discussed for general VGI systems (e.g., participants’ motivation and data quality control), in sharing research data in the humanities: (1) most data from the humanities are qualitative and collected from multiple data sources which are often inconsistent and unmappable; (2) data are usually interconnected, with multiple relationships among different tables, which creates challenges for both mapping and query functionality; (3) data are both geographical and historical; consequently, addresses that no longer exist have to be geolocated and visualized on historical basemaps, and spaces must be represented diachronically; (4) the design of a web map application needs to balance sophisticated research requirements with a user-friendly interface; (5) finally, contributors expect their data to be cited or acknowledged when used in other studies, and users need metadata and citation information in order to reuse and repurpose datasets.

In this article, we discuss how Artl@s, a project which developed a georeferenced historical database of exhibition catalogues, addresses these challenges. Artl@s provides a case study for VGI adoption by digital humanities scholars for research data sharing, as it offers features such as flexible batch data contribution, interrelated spatial query, automatic geolocalization of historical addresses, and data citation mechanisms.

18.
19.
In the .NET environment, object-oriented techniques are used to organize geospatial data sensibly. Display levels are assigned to the data according to feature classification codes and cartographic generalization knowledge, which to some extent solves the problem of multi-scale representation of geospatial data. Spatial data and attribute data are stored together, remedying deficiencies in consistency maintenance, concurrency control, and the storage and management of massive spatial data. An R-tree index is built by map sheet and layer, improving indexing speed.
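The display-level gating by classification code described above can be sketched as a lookup from feature code to the smallest map scale at which the class is still drawn. The codes and scale thresholds below are invented for illustration:

```python
# Largest scale denominator at which a feature class is still drawn (illustrative).
DISPLAY_LEVEL = {
    "4100": 1_000_000,  # major rivers: visible even at small scales
    "4300": 50_000,     # minor streams: only at large scales
    "2100": 250_000,    # main roads
}


def visible_features(features, scale_denominator):
    """Keep only the features whose class is drawn at the current map scale."""
    return [f for f in features
            if scale_denominator <= DISPLAY_LEVEL.get(f["code"], 0)]
```

At render time the viewer filters each layer through this table, so zooming out automatically drops fine-grained classes, which is the multi-scale behaviour the abstract describes.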

20.
The 3D Elevation Program (3DEP) is a collaborative effort among government entities, academia, and the private sector to collect high-resolution 3-dimensional data over the United States. The United States Geological Survey (USGS) is making preparations for managing, processing, and delivering petabytes of 3DEP elevation products for the Nation. In addition to the existing 1/3, 1, and 2 arc-second seamless elevation data layers of The National Map, new 3DEP products include lidar point cloud data; a standard 1-meter DEM layer; additional source datasets; and, in Alaska, 5-meter digital elevation models. A new product generation system improves the construction and publication of the seamless elevation datasets, prepares the additional 3DEP products for distribution, and automates the data management functions required to accommodate the high-volume 3DEP data collection. Major changes in geospatial data acquisition, such as high resolution lidar data, volunteered geographic information, data processing using parallel and grid computer systems, and user needs for semantic access to geospatial data and products, are driving USGS research associated with the 3DEP. To address the research requirements, a set of inter-related projects including spatiotemporal data models, data integration, geospatial semantics and ontology, high performance computing, multi-scale representation, and hydrological modeling using lidar and other 3DEP data has been developed.

