Similar Documents
20 similar documents retrieved.
1.
Sampling satellite images presents some specific characteristics: images overlap and many of them fall partially outside the studied region. Careless sampling may introduce a substantial bias. This paper illustrates the risk of bias and the efficiency improvements of systematic, pps (probability proportional to size) and stratified sampling. A sampling method is proposed with the following criteria: (a) unbiased estimators are easy to compute; (b) it can be combined with stratification; (c) within each stratum, sampling probability is proportional to the area of the sampling unit; and (d) the geographic distribution of the sample is reasonably homogeneous. Thiessen polygons computed on image centres are sampled through a systematic grid of points. The sampling rates in different strata are tuned by dividing the systematic grid into subgrids, or replicates, and taking a certain number of replicates for each stratum. The approach is illustrated with an application to the estimation of the geometric accuracy of Image2000, a Landsat ETM+ mosaic of the European Union.
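To make the replicate-based tuning of stratum sampling rates concrete, the following Python sketch (not from the paper; the function name, arguments and replicate-numbering rule are illustrative assumptions) assigns each point of a systematic grid to the Thiessen cell of its nearest image centre, splits the grid into interleaved replicates, and selects for each stratum the images hit by a chosen number of replicates.

import numpy as np

def sample_images(centres, strata, region_size, step,
                  replicates_per_stratum, n_replicates=4, seed=0):
    """centres: (n, 2) image-centre coordinates; strata: length-n stratum labels.
    A grid point falls in the Thiessen cell of its nearest centre, so the chance
    that a cell is hit grows with its area, approximating criterion (c)."""
    rng = np.random.default_rng(seed)
    xs = np.arange(0.5 * step, region_size[0], step)
    ys = np.arange(0.5 * step, region_size[1], step)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    # Interleaved sub-grids ("replicates") of the systematic point grid.
    rep_id = (np.floor(pts[:, 0] / step).astype(int)
              + np.floor(pts[:, 1] / step).astype(int)) % n_replicates
    # Nearest image centre = Thiessen-cell membership of each grid point.
    d2 = ((pts[:, None, :] - np.asarray(centres, float)[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    selected = set()
    for stratum, n_rep in replicates_per_stratum.items():
        reps = rng.choice(n_replicates, size=n_rep, replace=False)
        for point_rep, img in zip(rep_id, nearest):
            if strata[img] == stratum and point_rep in reps:
                selected.add(int(img))
    return sorted(selected)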

2.
Research on remote sensing area measurement based on different sampling methods
Numerous studies have shown that combining remote sensing with sampling techniques is an effective way to measure the area of land-cover types. Simple random sampling, systematic sampling and stratified sampling are currently the most widely used designs in remote sensing sample surveys. Based on remote sensing imagery, this paper examines random sampling, systematic sampling and stratified sampling (including equal-sample-size, equal-area and equal-abundance stratification) from several perspectives. The analysis shows that: for the same land-cover type, judged by mean error percentage, standard deviation and range, the total-area estimates inferred from random and systematic sampling are less accurate than those from stratified sampling; for different land-cover types, the results inferred from the three stratified sampling methods are positively correlated with the proportion of the cover type, and the larger the proportion, the better the inferred result; and equal-sample-size, equal-area and equal-abundance stratified sampling each have their own advantages in terms of mean error percentage, standard deviation and range, which are also closely related to the proportion of the cover type.
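As a rough illustration of how such designs can be compared on a classified image, the Python sketch below estimates the proportion of one land-cover class under simple random, systematic and stratified sampling. The band-based strata and equal allocation are simplifications and do not reproduce the paper's equal-sample-size, equal-area or equal-abundance stratifications.

import numpy as np

def area_estimates(class_map, n_sample, n_strata=4, seed=0):
    """class_map: 2-D boolean array marking the target land-cover class.
    Returns the estimated class proportion under three sampling designs."""
    rng = np.random.default_rng(seed)
    flat = class_map.ravel().astype(float)
    n = flat.size
    # Simple random sampling of pixels.
    srs = flat[rng.choice(n, size=n_sample, replace=False)].mean()
    # Systematic sampling: every k-th pixel from a random start.
    k = n // n_sample
    start = rng.integers(k)
    sys_est = flat[start::k][:n_sample].mean()
    # Stratified sampling: horizontal bands as stand-in strata, equal allocation.
    per = n_sample // n_strata
    means, weights = [], []
    for band in np.array_split(class_map, n_strata, axis=0):
        b = band.ravel().astype(float)
        means.append(b[rng.choice(b.size, size=per, replace=False)].mean())
        weights.append(b.size / n)
    strat = float(np.dot(weights, means))
    return {"random": float(srs), "systematic": float(sys_est), "stratified": strat}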

3.
A remote sensing method for crop planting area estimation based on two independent sampling frames
吴炳方  李强子 《遥感学报》2004,8(6):551-569
After analysing the difficulties that remote sensing techniques face in estimating crop planting areas in China, and in view of the requirements of an operational crop yield estimation system for estimating the planted area of major crops, a method combining cluster sampling with transect sampling on the basis of crop planting-structure zoning is proposed for crop planting area estimation. Cluster sampling uses remote sensing imagery to estimate the overall proportion of planted cropland, while transect sampling, a sampling technique suited to the characteristics of China's crop planting structure, is used to survey the proportion of each crop category within all sown crops. On the basis of China's existing cultivated-land database, the planted area of a specific crop category is computed from the proportions obtained by the two samplings. Finally, an example of estimating the planted area of early rice in 2003 is given.

4.

Social, economic, and environmental statistical data associated with geographic points are currently globally available in large amounts. When conventional thematic maps, such as proportional symbol maps or point diagram maps, are used to represent these data, the maps appear cluttered if the point data volumes are relatively large or cover a relatively dense region. To overcome these limitations, we propose a new type of thematic map for statistical data associated with geographic points: the point grid map. In a point grid map, an input point data set is transformed into a grid in which each point is represented by a square grid cell of equal size while preserving the relative position of each point, which leads to a clear and uncluttered appearance, and the grid cells can be shaded or patterned with symbols or diagrams according to the attributes of the points. We present an algorithm to construct a point grid map and test it with several simulated and real data sets. Furthermore, we present some variants of the point grid map.
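The construction algorithm itself is not given in this abstract; as a hedged illustration of the core idea (one equal-size cell per point, relative positions roughly preserved), a simple rank-based layout could look like the Python sketch below. The function name and the band-splitting heuristic are assumptions, not the authors' method.

import numpy as np

def point_grid_layout(xy):
    """xy: (n, 2) point coordinates. Returns (n, 2) integer (row, col) positions
    on a roughly square grid with one cell per point."""
    xy = np.asarray(xy, dtype=float)
    n = len(xy)
    n_rows = int(np.ceil(np.sqrt(n)))
    # Rows: points sorted south-to-north, split into bands of (almost) equal size.
    rows = np.empty(n, dtype=int)
    for r, band in enumerate(np.array_split(np.argsort(xy[:, 1]), n_rows)):
        rows[band] = r
    # Columns: within each band, order the points west-to-east.
    cols = np.empty(n, dtype=int)
    for r in range(n_rows):
        members = np.where(rows == r)[0]
        cols[members[np.argsort(xy[members, 0])]] = np.arange(len(members))
    return np.column_stack([rows, cols])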

5.
DEM matching for bias compensation of rigorous pushbroom sensor models
DEM matching is a technique to match two surfaces, or two DEMs, in different reference frames. It was originally proposed to replace the need for ground control points in the absolute orientation of perspective images. This paper examines DEM matching for precise mapping of pushbroom images without ground control points. We proved that DEM matching based on a 3D similarity transformation can be used when the model errors consist only of biases in the platform's position and attitude. We also proposed how to estimate the bias errors and how to update rigorous pushbroom sensor models from the DEM matching results. We used a SPOT-5 stereo pair with a ground sampling distance of 2.5 m and a reference DEM dataset with a grid spacing of 30 m and showed that rigorous pushbroom models with an accuracy better than twice the ground sampling distance in both image and object space were achieved through DEM matching. We showed further that DEM matching based on a 3D similarity transformation may not work for pushbroom images with drift or drift-rate errors. We discussed the effects of DEM outliers on DEM matching and the automated removal of outliers. The major contribution of this paper is that we validate DEM matching, theoretically and experimentally, for estimating position and attitude biases and for establishing rigorous sensor models for pushbroom images.
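A 3D similarity (Helmert) transformation between matched surface points can be estimated with the standard SVD-based least-squares (Umeyama) method; the Python sketch below shows that generic step only and is not the paper's full bias-compensation procedure, which restricts the model errors to platform position and attitude biases.

import numpy as np

def similarity_3d(src, dst):
    """Least-squares 3D similarity transform dst ≈ s * R @ src + t, estimated
    from matched surface points (src, dst: (n, 3) arrays) via SVD."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(b.T @ a / len(src))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # enforce a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) * len(src) / (a ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t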

6.
Tropospheric delay is a major error source in satellite navigation and positioning, and GNSS wide-area augmentation requires high-precision tropospheric delay products for error correction. The tropospheric delay can be estimated in real time by GNSS or obtained from numerical weather prediction models that assimilate multi-source data. The global zenith tropospheric delay (ZTD) products released by the IGS are derived from GNSS solutions and reach an accuracy of about 4 mm at a 5-minute temporal resolution, but their distribution is uneven and the vast ocean areas are not covered. GGOS Atmosphere, based on the ECMWF 40-year reanalysis, provides gridded global zenith total delay data since 1979 with a temporal resolution of 6 h and a spatial resolution of 2.5°×2°. In this paper, the GGOS ZTD products are evaluated against ZTD data from global IGS stations in 2015, and the systematic difference between the GGOS Atmosphere tropospheric delay products and the IGS ZTD data is investigated. For each station, the coefficients of the systematic difference between GGOS ZTD and IGS ZTD (a scale error a and a constant error b) are estimated by linear fitting, and a and b are then expanded in spherical harmonics to build a model of the systematic difference between the two ZTD data sources. IGS stations and stations of the Crustal Movement Observation Network of China are selected to study the effect of the bias-corrected GGOS ZTD products on the convergence speed of precise point positioning (PPP). The results show that a systematic bias exists between GGOS ZTD and IGS ZTD, with a mean bias of -0.54 cm; the mean RMS of their differences is 1.31 cm, indicating that the GGOS ZTD products are sufficient for the tropospheric delay correction needs of most GNSS navigation and positioning users. Applying the bias-corrected GGOS ZTD products to PPP experiments at stations ALBH, DEAR, ISPA, PALM, ADIS, YNMH and WUHN clearly accelerates positioning convergence, especially in the Up direction, where the convergence speed improves by 10.58%, 31.68%, 15.96%, 43.89%, 51.46%, 14.69% and 18.40%, respectively.
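The per-station systematic-difference coefficients can be obtained with an ordinary linear fit. The minimal Python sketch below assumes the fit takes the form IGS-ZTD ≈ a·GGOS-ZTD + b (the abstract only states that a scale error a and a constant error b are estimated), and the function names are illustrative.

import numpy as np

def ztd_bias_coeffs(ggos_ztd, igs_ztd):
    """Per-station linear fit  igs ≈ a * ggos + b  between two co-located ZTD
    series at common epochs; returns the scale error a and constant offset b."""
    a, b = np.polyfit(np.asarray(ggos_ztd, float), np.asarray(igs_ztd, float), 1)
    return a, b

def apply_correction(ggos_ztd, a, b):
    """Correct GGOS ZTD toward the IGS reference with the fitted coefficients."""
    return a * np.asarray(ggos_ztd, float) + b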

7.
赵雪梅  李玉  赵泉华 《遥感学报》2017,21(5):767-775
To achieve automatic image segmentation, a remote sensing image segmentation method is proposed that converts observed data into sample data in an unsupervised manner. The method models the sample data and the observed data with probability distributions in Euclidean space and maps them into a Riemannian space, and automatic sampling of the image is achieved by continually converting observed data into sample data. Each sampling step only requires computing the geodesic distances from the observed data points to the sample points; the observation with the smallest geodesic distance to the sample points is converted into a sample, which ensures that the sample data keep approaching the true segmentation of that class and allows the algorithm to segment classes with different numbers of pixels effectively. The algorithm is applied to the segmentation of simulated and real remote sensing images, and qualitative and quantitative comparisons of its results with those of traditional unsupervised statistical and fuzzy algorithms and a supervised neural network algorithm verify its effectiveness and feasibility.

8.
This paper starts from an analysis of how random and systematic errors behave during the downward continuation of airborne gravity data and then proposes a treatment. First, experiments are used to show the limitations of the remove-restore method and the necessity of handling both systematic and random errors. Then, the effects of systematic and random errors are estimated separately by theoretical derivation and numerical simulation; the experiments show that both effects vary linearly with the data grid spacing and the downward-continuation height, and that they become larger when the grid spacing is small and the continuation height is large. Finally, a two-step method combining a semiparametric model with a regularization algorithm is proposed to estimate the systematic errors and suppress the random errors. The experiments show that the two-step method handles the various error effects in downward continuation better than using the semiparametric model or the regularization algorithm alone. With test data having a random-error standard deviation of 2×10⁻⁵ m/s², a constant systematic error of 3×10⁻⁵ m/s² and a variable systematic error with a standard deviation of about 1.3×10⁻⁵ m/s², and with a downward-continuation height of 6.3 km and a grid spacing of 6′, the accuracy of the two-step downward-continuation result reaches 2.3×10⁻⁵ m/s².
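Downward continuation is commonly written as an ill-posed linear system A x ≈ y, and the regularization step of such a two-step scheme can then be sketched as a plain Tikhonov solution. The sketch below shows that generic step only (the semiparametric removal of systematic error, the paper's first step, is not shown), and alpha and the discretisation of A are assumptions.

import numpy as np

def tikhonov_solve(A, y, alpha):
    """Regularized solution of the ill-posed system A x ≈ y arising in
    downward continuation: minimise ||A x - y||² + alpha ||x||²."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)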

9.
The Brazilian Amazon is a vast territory with an enormous need for mapping and monitoring of renewable and non-renewable resources. Due to adverse environmental conditions (rain, cloud, dense vegetation) and difficult access, topographic information is still poor, and where available it needs to be updated or re-mapped. In this paper, the feasibility of using Digital Surface Models (DSMs) extracted from TerraSAR-X Stripmap stereo-pair images for detailed topographic mapping was investigated for a mountainous area in the Carajás Mineral Province, located on the easternmost border of the Brazilian Amazon. The quality of the radargrammetric DSMs was evaluated against field altimetric measurements. Precise topographic field information acquired with a Global Positioning System (GPS) was used as Ground Control Points (GCPs) for the modeling of the stereoscopic DSMs and as Independent Check Points (ICPs) for the calculation of elevation accuracies. The analysis was performed in two ways: (1) using the Root Mean Square Error (RMSE) and (2) calculating the systematic error (bias) and precision. The test for significant systematic error was based on the Student's t distribution and the test of precision was based on the Chi-squared distribution. The investigation has shown that the accuracy of the TerraSAR-X Stripmap DSMs met the requirements for 1:50,000 maps (Class A) as requested by the Brazilian Standard for Cartographic Accuracy. Thus, the use of TerraSAR-X Stripmap images can be considered a promising alternative for detailed topographic mapping in similar environments of the Amazon region, where available topographic information is rare or of low quality.

10.
Field-scale precision management zoning in the black soil region based on remote sensing
Precision management zoning based on grid sampling and spatial interpolation is accurate, but it is slow and costly. Taking farmland of the Hongxing Farm in the Northeast China state-farm reclamation area as the study object, this paper proposes a precision management zoning method based on remote sensing imagery: high-spatial-resolution bare-soil images are used as the data source, combined with field grid-sampling data, and, exploiting the significant correlation between the reflectance spectra of bare soil and the main physical and chemical properties of black soil, object-oriented segmentation and spatial statistical analysis are applied to delineate management zones for fields in a typical black soil region; the zoning results are then evaluated with soil physical and chemical properties and crop physiological parameters. The conclusions are as follows: (1) soil nutrient content varies significantly in space within fields of the typical black soil region; (2) the management zoning method based on bare-soil images and object-oriented segmentation is accurate, enhancing the differences in soil nutrients and the normalized difference vegetation index (NDVI) between zones and the consistency of attributes within zones; (3) for zonings based on the single-date images of 1 April 2015 and 20 May 2015 and on the layer stacking of both dates, the ratios of the between-zone to within-zone coefficients of variation are 1.42, 1.39 and 7.63, respectively, so the zoning based on the combined information of the two dates is clearly better than the single-date zonings; (4) the precision management zoning method based on object-oriented segmentation of bare-soil images is timely, low-cost and accurate. The results provide a basis for variable-rate fertilization, the development of precision agriculture and sustainable agricultural development.

11.
Spatial prediction is commonly used in social and environmental research to estimate values at unobserved locations using sampling data. However, most existing spatial prediction methods and software packages are based on the assumption of spatial autocorrelation (SAC), which may not apply when spatial dependence is weak or non-existent. In this article, we develop a modeling framework for spatial prediction based on spatial stratified heterogeneity (SSH), a common feature of geographical variables, as well as an R package called sandwichr that implements this framework. For populations that can be stratified into homogeneous strata, the proposed framework enables the estimation of values for user-defined reporting units (e.g., administrative units or grid cells) based on the mean of each stratum, even if SAC is weak or absent. The estimated values can be used to create predicted surfaces and maps. The framework also includes procedures for selecting appropriate stratifications of the populations and assessing prediction uncertainty and model accuracy. The sandwichr package includes functions to implement each step of the framework, allowing users to implement SSH-based spatial prediction effectively and efficiently. Two case studies are provided to illustrate the effectiveness of the proposed framework and the sandwichr package.
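Conceptually, SSH-based prediction assigns each reporting unit a weighted combination of stratum means. The Python sketch below illustrates that idea only; it is not the sandwichr R package, and its function name, inputs and the simple variance formula are assumptions.

import numpy as np

def sandwich_estimate(sample_values, sample_strata, unit_weights):
    """Stratum-mean spatial prediction sketch (assumes >= 2 samples per stratum).
    sample_values, sample_strata: sampled observations and their stratum labels.
    unit_weights: {reporting_unit: {stratum: share of the unit lying in that stratum}}.
    Returns {reporting_unit: (estimate, standard_error)}."""
    values = np.asarray(sample_values, float)
    strata = np.asarray(sample_strata)
    stats = {}
    for z in np.unique(strata):
        v = values[strata == z]
        stats[z] = (v.mean(), v.var(ddof=1) / len(v))   # stratum mean, its variance
    out = {}
    for unit, w in unit_weights.items():
        est = sum(share * stats[z][0] for z, share in w.items())
        var = sum(share ** 2 * stats[z][1] for z, share in w.items())
        out[unit] = (est, float(np.sqrt(var)))
    return out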

12.
Buildings and other human-made constructions have been accepted as an indicator of human habitation and are identified as built-up area. Identification of the built-up area in a region and its subsequent measurement is a key step in many fields of study, such as urban planning, environmental studies, and population demography. Remote sensing techniques utilising medium resolution images (e.g. LISS III, Landsat) are extensively used for the extraction of the built-up area, as high-resolution images are expensive and their processing is difficult. Extraction of built land use from medium resolution images poses a challenge in regions like the Western Ghats and north-east India, and in tropical countries, due to the thick evergreen tree cover. The spectral signature of individual houses with a small footprint is easily overpowered by the overlapping tree canopy in a medium resolution image when the buildings are not clustered. Kerala is a typical case of this scenario. The research presented here proposes a stochastic-dasymetric process to aid the built-up area recognition process, taking Kerala as a case study. The method utilises a set of ancillary information to derive a probability surface. The ancillary information used here includes distance from road junctions, distance from the road network, population density, built-up space visible in the LISS III image, the population of the region, and the household size. The methodology employs logistic regression and Monte Carlo simulation in two sub-processes. The algorithm estimates the built-up area expected in the region and distributes the estimated built-up area among pixels according to the probability estimated from the ancillary information. The output of the algorithm has two components. The first component is an example scenario of the built-up area distribution. The second component is a probability surface, where the value of each pixel denotes the probability of that pixel having a significant built-up area within it. The algorithm is validated for regions in Kerala and found to be significant. The model correctly predicted the built-up pixel count over a validation grid of 900 m in 95.2% of the cases. The algorithm is implemented using Python and ArcGIS.
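The Monte Carlo sub-process can be pictured as drawing the estimated number of built-up pixels from a probability surface. The Python sketch below shows one such draw (a single "example scenario"); the function name and inputs are illustrative assumptions, and averaging many draws with different seeds gives a per-pixel probability map analogous to the algorithm's second output.

import numpy as np

def allocate_builtup(prob_surface, total_builtup_pixels, seed=0):
    """One Monte Carlo draw of a stochastic-dasymetric allocation: distribute an
    estimated number of built-up pixels over a probability surface derived from
    ancillary data (e.g. a fitted logistic regression)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(prob_surface, float).ravel()
    p = p / p.sum()                        # normalise to a sampling distribution
    chosen = rng.choice(p.size, size=total_builtup_pixels, replace=False, p=p)
    scenario = np.zeros(p.size, dtype=bool)
    scenario[chosen] = True
    return scenario.reshape(np.shape(prob_surface))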

13.
Four major rainfall events that affected the Western Alps between autumn 1994 and autumn 2000 are analyzed to assess the bias between the radar rainfall estimates at rain gauge locations and the gauge amounts. The aim of this study is to demonstrate the importance of: 1) bias adjustment; 2) the training procedure used to train various adjustment methods by means of independent data; and 3) a quality check of the radar-gauge couples that were used for the training itself. A first adjustment method is simply based on a single "bias-correction" coefficient. A weighted multiple regression (WMR) is well worth the additional effort of determining three additional coefficients, which give a spatial distribution of the adjustment factor rather than a constant one for the whole domain as the output. The independent dataset that was used to train the gauge-adjustment techniques consists of daily radar/gauge amounts accumulated during the first day of each event. The following days are used for an independent verification that is dealt with in a companion letter, which will validate the methods and illustrate the improvements and the feasibility of a real-time application during intense events. The WMR technique tries to correct not only the overall bias but also the beam-broadening, visibility, and orography influences. The training procedure of both the bulk- and WMR-adjustment methods highlighted a considerable radar underestimation, which is certainly not surprising in mountainous terrain. The WMR-derived coefficients also clearly show that the radar underestimates precipitation for higher sampling volumes and longer distances. Since the WMR is fast and simple to use, it represents an alternative to more sophisticated methods and seems to be particularly useful for operational services.
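A hedged sketch of the two adjustment flavours: the bulk method reduces to a single multiplicative coefficient fitted from the radar-gauge pairs, while a weighted multiple regression lets the adjustment factor vary with predictors such as sampling height or distance. The log-ratio regression form, the predictors and the weighting below are assumptions, not the letter's exact formulation.

import numpy as np

def single_bias_coefficient(gauge, radar_at_gauges):
    """Bulk adjustment: one multiplicative coefficient for the whole domain."""
    return np.sum(gauge) / np.sum(radar_at_gauges)

def wmr_adjustment(gauge, radar_at_gauges, predictors, weights=None):
    """Weighted multiple regression sketch: model the log gauge/radar ratio as a
    linear function of predictors (e.g. sampling height, distance, visibility),
    so the adjustment factor varies in space instead of being constant."""
    y = np.log(np.asarray(gauge, float) / np.asarray(radar_at_gauges, float))
    X = np.column_stack([np.ones(len(y)), np.asarray(predictors, float)])
    w = np.ones(len(y)) if weights is None else np.asarray(weights, float)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta           # exp(X_new @ beta) gives the local adjustment factor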

14.
Using CORINE land cover and the point survey LUCAS for area estimation
CORINE land cover 2000 (CLC2000) is a European land cover map produced by photo-interpretation of Landsat ETM+ images. Its direct use for area estimation can be strongly biased and does not generally report single crops. CLC areas need to be calibrated to give acceptable statistical results. LUCAS (land use/cover area frame survey) is a point survey carried out in 2001 and 2003 in the European Union (EU15) on a systematic sample of clusters of points. LUCAS is especially useful for area estimation in geographic units that do not coincide with administrative regions, such as the set of coastal areas defined with a 10 km buffer. Some variance estimation issues with systematic sampling of clusters are analysed. The contingency table obtained by overlaying CLC and LUCAS gives the fine-scale composition of CLC classes. Using CLC for post-stratification of LUCAS is equivalent to the direct calibration estimator when the sampling units are points. Stratification is easier to adapt to a scheme in which the sampling units are the clusters of points used in LUCAS 2001/2003.
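The post-stratification estimator amounts to multiplying each CLC stratum area by the LUCAS point proportion of the target class observed inside that stratum and summing over strata; a minimal Python sketch (function name and data layout assumed) follows.

def poststratified_area(clc_stratum_areas, lucas_points):
    """Post-stratification of LUCAS points by CLC class.
    clc_stratum_areas: dict {clc_class: total area of that CLC class (ha)}.
    lucas_points: iterable of (clc_class, observed_land_cover) pairs.
    Returns the estimated area (ha) of each observed land cover."""
    counts, totals = {}, {}
    for clc, obs in lucas_points:
        totals[clc] = totals.get(clc, 0) + 1
        counts[(clc, obs)] = counts.get((clc, obs), 0) + 1
    areas = {}
    for (clc, obs), n in counts.items():
        share = n / totals[clc]                  # class share within the stratum
        areas[obs] = areas.get(obs, 0.0) + share * clc_stratum_areas[clc]
    return areas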

15.
A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensity (one, global, for the Brazilian Amazon; the other, national, for French Guiana) to estimates derived from the official inventories. We also report the precisions that are achieved due to sampling errors and, in the case of French Guiana, compare such precision with the official inventory precision. We extract nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for the period 2002 to 2003. This global sampling scheme estimate gives 2.81 million ha of deforestation (mean of nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full-population estimate from the wall-to-wall interpretations of 2.73 million ha deforested, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from the sampling approach and the full-population estimate is 3.1%, and the standard error represents 4.0% of the full-population estimate. This global sampling is then intensified to a territorial level in a case study over French Guiana to estimate deforestation between the years 1990 and 2006. For the historical reference period, 1990, Landsat-5 Thematic Mapper data were used. A coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for the year 2006. Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto Protocol rules for its overseas department. The latter estimates come from a sample of nearly 17,000 plots analyzed from the same spatial imagery acquired between 1990 and 2006. This sampling scheme is derived from the traditional forest inventory methods carried out by the IFN (Inventaire Forestier National). Our intensified global sampling scheme leads to an estimate of 96,650 ha deforested between 1990 and 2006, which is within the 95% confidence interval of the IFN sampling scheme, which gives an estimate of 91,722 ha, representing a relative difference from the IFN of 5.4%. These results demonstrate that the intensification of the global sampling scheme can provide forest area change estimates close to those achieved by official forest inventories (<6%), with precisions of between 4% and 7%, although we only estimate errors from sampling, not from the use of surrogate data. Such methods could be used by developing countries to demonstrate that they are fulfilling requirements for reducing emissions from deforestation in the framework of a REDD (Reducing Emissions from Deforestation in Developing Countries) mechanism under discussion within the United Nations Framework Convention on Climate Change (UNFCCC). Monitoring systems at national levels in tropical countries can also benefit from pan-tropical and regional observations, to ensure consistency between different national monitoring systems.

16.
For the assessment of growing stock, the role of aerial photographs mainly consists of volume-class stratification, knowing the proportion of the various strata, and providing a layout for ground sample plots along with their precise location on the ground. The plain Sal stratum was stratified into three volume classes on the basis of volume stereograms, and the standard deviation in each stratum was estimated on the basis of reconnaissance data. 63 ground plots were needed for ±5 cu m (E = ±5) accuracy under optimum allocation. Volume in 0.1-hectare circular plots was obtained from measurement of all trees above 10 cm dbh. The mean volume was 124 cu m per hectare ±9.55 cu m at the 95% probability level. A comparison with Working Plan figures revealed a close similarity. The advantages in time and cost of obtaining information on growing stock by the use of aerial photographs have been highlighted.
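The sample-size calculation behind "63 ground plots for ±5 cu m" is not given in the abstract; a textbook Neyman (optimum) allocation for stratified sampling, which may differ from the authors' exact formula, can be sketched in Python as follows.

import numpy as np

def neyman_sample_size(N_h, S_h, E, t=1.96):
    """Total sample size and per-stratum allocation for estimating the population
    mean within ±E at confidence multiplier t, with optimum (Neyman) allocation
    (Cochran-style formula with finite-population correction).
    N_h: stratum sizes; S_h: stratum standard deviations."""
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    N = N_h.sum()
    W = N_h / N
    V = (E / t) ** 2                        # target variance of the mean
    n = (W * S_h).sum() ** 2 / (V + (W * S_h ** 2).sum() / N)
    n_h = np.ceil(n * (N_h * S_h) / (N_h * S_h).sum()).astype(int)
    return int(np.ceil(n)), n_h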

17.
This paper presents novel techniques to estimate the uncertainty in extrapolations of spatially-explicit land-change simulation models. We illustrate the concept by mapping a historic landscape based on: 1) tabular data concerning the quantity in each land cover category at a distant point in time at the stratum level, 2) empirical maps from more recent points in time at the grid cell level, and 3) a simulation model that extrapolates land-cover change at the grid cell level. This paper focuses on the method to show uncertainty explicitly in the map of the simulated landscape at the distant point in time. The method requires that validation of the land-cover change model be quantified at the grid-cell level by Kappa for location (Klocation). The validation statistic is used to estimate the certainty in the extrapolation to a point in time for which an empirical map does not exist. As an example, we reconstruct the 1951 landscape of the Ipswich River Watershed in Massachusetts, USA. The technique creates a map of 1951 simulated forest with an overall estimated accuracy of 0.91, with an estimated user's accuracy ranging from 0.95 to 0.84. We anticipate that this method will become popular, because tabular information concerning land cover at coarse stratum-level scales is abundant, while digital maps of the specific location of land cover are needed at a finer spatial resolution. The method is a key to linking non-spatial models with spatially-explicit models.

18.
This paper provides numerical examples for the prediction of height anomalies by the solution of Molodensky's boundary value problem. Computations are done within two areas in the Canadian Rockies. The data used are on a grid with various grid spacings from 100 m to 5 arc-minutes. Numerical results indicate that the Bouguer or the topographic-isostatic gravity anomalies should be used in gravity interpolation. It is feasible to predict height anomalies in mountainous areas with an accuracy of 10 cm (1σ) if sufficiently dense data grids are used. After removing the systematic bias, the difference between the geoid undulations converted from height anomalies and those derived from GPS/levelling on 50 benchmarks is 12 cm (1σ) when the grid spacing is 1 km, and 50 cm (1σ) when the grid spacing is 5 arc-minutes. It is not necessary, in most cases, to require a grid spacing finer than 1 km, because the height anomaly changes only by 3 cm (1σ) when the grid spacing is increased from 100 m to 1000 m. Numerical results also indicate that only the first two terms of the Molodensky series have to be evaluated in all but the extreme cases, since the contributions of the higher order terms are negligible compared to the objective accuracy.

19.
A global, 2-hourly atmospheric precipitable water (PW) dataset is produced from ground-based GPS measurements of zenith tropospheric delay (ZTD) using the International Global Navigation Satellite Systems (GNSS) Service (IGS) tropospheric products (~80–370 stations, 1997–2006) and the US SuomiNet product (169 stations, 2003–2006). The climate applications of the GPS PW dataset are highlighted in this study. Firstly, the GPS PW dataset is used as a reference to validate radiosonde and atmospheric reanalysis data. Three types of systematic errors in global radiosonde PW data are quantified based on comparisons with the GPS PW data, including measurement biases for each of the fourteen radiosonde types along with their characteristics, long-term temporal inhomogeneity and diurnal sampling errors of once and twice daily radiosonde data. The comparisons between the GPS PW data and three reanalysis products, namely the NCEP-NCAR (NNR), ECMWF 40-year (ERA-40) and Japanese reanalyses (JRA), show that the elevation difference between the reanalysis grid box and the GPS station is the primary cause of the PW difference. Secondly, the PW diurnal variations are documented using the 2-hourly GPS PW dataset. The PW diurnal cycle has an annual-mean, peak-to-peak amplitude of 0.66, 0.53 and 1.11 mm for the globe, Northern Hemisphere, and Southern Hemisphere, respectively, with the time of the peak ranging from noon to late evening depending on the season and region. Preliminary analyses suggest that the PW diurnal cycle in Europe is poorly represented in the NNR and JRA products. Several recommendations are made for future improvements of IGS products for climate applications.
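The GPS-PW retrieval behind such a dataset typically converts ZTD to PW by subtracting a modelled zenith hydrostatic delay and scaling the wet remainder. The Python sketch below uses the Saastamoinen ZHD model and Bevis-style conversion constants as assumptions; it is not necessarily the processing applied to the IGS or SuomiNet products.

import numpy as np

def pw_from_ztd(ztd_m, pressure_hpa, tm_kelvin, lat_deg, height_m):
    """Convert zenith total delay (m) to precipitable water (mm):
    ZHD from the Saastamoinen model, ZWD = ZTD - ZHD, then PW = Pi * ZWD,
    with Tm the weighted mean temperature of the atmosphere."""
    lat = np.radians(lat_deg)
    zhd = 0.0022768 * pressure_hpa / (1.0 - 0.00266 * np.cos(2 * lat)
                                      - 0.28e-6 * height_m)
    zwd = ztd_m - zhd
    # Dimensionless conversion factor Pi (constants in SI units).
    k2_prime = 22.1e-2        # K / Pa   (= 22.1 K/hPa)
    k3 = 3.739e3              # K^2 / Pa (= 3.739e5 K^2/hPa)
    rho_w, R_v = 1000.0, 461.5
    pi_factor = 1.0e6 / (rho_w * R_v * (k2_prime + k3 / tm_kelvin))
    return pi_factor * zwd * 1000.0     # metres of ZWD -> millimetres of PW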

20.
With the development of Volunteered Geographic Information (VGI), OpenStreetMap has high research value in terms of project activity, social influence, urban development, application scope, and historical richness, and the number of buildings and roads it contains increases every day. However, how to evaluate the quality of a large amount of OpenStreetMap data efficiently and accurately is still not fully understood. This article presents an approach of multilevel stratified spatial sampling, based on slope knowledge and using official 1:1000 thematic maps as the reference dataset, for OpenStreetMap data quality inspection in Hong Kong. The multilevel stratified spatial sampling plan is as follows: (1) the terrain characteristics of Hong Kong are fully considered by dividing grids into quality-estimate strata based on slope information; (2) spatial sampling is used for the selection of grids or objects; (3) a more reliable sampling subset is obtained that better represents the entire OpenStreetMap dataset of Hong Kong. This sampling plan yields about 10% higher sampling accuracy without increasing the sample size, particularly for building completeness inspection, compared with simple random sampling and systematic random sampling. This research promotes further applications of the OpenStreetMap dataset, enabling a better understanding of OpenStreetMap data quality in urban areas.
