Similar literature
20 similar records found (search time: 31 ms)
1.
It is well established that digital elevation models (DEMs) derived from unmanned aerial vehicle (UAV) images and processed by structure from motion may contain important systematic vertical errors arising from limitations in camera geometry modelling. Even when significant, such ‘dome’-shaped errors can often remain unnoticed unless specific checks are conducted. Previous methods used to reduce these errors have involved: the addition of convergent images to supplement traditional vertical datasets, the use of more ground control points, precise direct georeferencing techniques (RTK/PPK) or more refined camera pre-calibration. This study confirms that specific UAV flight designs can significantly reduce dome errors, particularly designs with a higher number of tie points connecting distant images, which contribute to a strengthened photogrammetric network. A total of 22 flight designs were tested, including vertical, convergent, point of interest (POI), multiscale and mixed imagery. Flights were carried out over a 300 m × 70 m flat test field, where 143 ground points were accurately established. Three different UAVs and two commercial software packages were trialled, totalling 396 different tests. POI flight designs generated the smallest systematic errors. In contrast, vertical flight designs, unfortunately the most common and ubiquitous configuration, suffered from the largest dome errors. By using the POI flight design, the accuracy of DEMs will improve without the need for more ground control or expensive RTK/PPK systems. Over flat terrain, the improvement is especially important in self-calibration projects without (or with just a few) ground control points. Some improvement will also be observed in projects using camera pre-calibration or stronger ground control. © 2020 John Wiley & Sons, Ltd.

2.
Unmanned aerial vehicles (UAVs) and structure-from-motion photogrammetry enable detailed quantification of geomorphic change. However, rigorous precision-based change detection can be compromised by survey accuracy problems producing systematic topographic error (e.g. ‘doming’), with error magnitudes greatly exceeding precision estimates. Here, we assess survey sensitivity to systematic error, directly correcting topographic data so that error magnitudes align more closely with precision estimates. By simulating conventional grid-style photogrammetric aerial surveys, we quantify the underlying relationships between survey accuracy, camera model parameters, camera inclination, tie point matching precision and topographic relief, and demonstrate a relative insensitivity to image overlap. We show that a current doming-mitigation strategy of using a gently inclined (<15°) camera can reduce accuracy by promoting a previously unconsidered correlation between decentring camera lens distortion parameters and the radial terms known to be responsible for systematic topographic error. This issue is particularly relevant for the wide-angle cameras often integrated into current-generation, accessible UAV systems, frequently used in geomorphic research. Such systems usually perform on-board image pre-processing, including applying generic lens distortion corrections, that subsequently alters parameter interrelationships in photogrammetric processing (e.g. partially correcting radial distortion, which increases the relative importance of decentring distortion in output images). Surveys from two proglacial forefields (Arolla region, Switzerland) showed that results from lower-relief topography with a 10°-inclined camera developed vertical systematic doming errors > 0.3 m, representing accuracy issues an order of magnitude greater than precision-based error estimates. For higher-relief topography, and for nadir-imaging surveys of the lower-relief topography, systematic error was < 0.09 m. Modelling and subtracting the systematic error directly from the topographic data successfully reduced error magnitudes to values consistent with twice the estimated precision. Thus, topographic correction can provide a more robust approach to uncertainty-based detection of event-scale geomorphic change than designing surveys with small off-nadir camera inclinations and, furthermore, can substantially reduce ground control requirements. © 2020 The Authors. Earth Surface Processes and Landforms published by John Wiley & Sons Ltd
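The 'model and subtract' correction described above can be sketched in a few lines: fit a radially symmetric quadratic dome to the vertical residuals at check points and remove it. This is a minimal illustration, not the authors' implementation; the function and variable names are ours, and a real correction may need higher-order radial terms.

```python
import numpy as np

def correct_doming(x, y, dz):
    """Fit a radially symmetric quadratic 'dome' z_err = a*r^2 + c to
    vertical residuals dz at check points (x, y), then subtract it.
    Returns the fitted coefficients (a, c) and the corrected residuals."""
    x, y, dz = map(np.asarray, (x, y, dz))
    r2 = (x - x.mean())**2 + (y - y.mean())**2   # squared radial distance
    A = np.column_stack([r2, np.ones_like(r2)])  # design matrix [r^2, 1]
    coef, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coef, dz - A @ coef                   # (a, c), corrected residuals
```

With a synthetic dome of known curvature, the recovered coefficient matches and the corrected residuals collapse towards zero.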

3.
Array observation is an efficient tool to investigate various characteristics of earthquake ground motion. However, seismographs used in arrays may involve unexpected errors in their orientations. Methods of orientation error estimation were developed in three-dimensional space by comparing recorded ground motions at a reference point with those at a checking point. A maximum cross-correlation method and a maximum coherence method were proposed and their accuracy was demonstrated. The earthquake ground motions recorded in the Chiba array and in two other arrays were used in numerical examples. Non-trivial orientation errors were detected for all these arrays. The cross-correlation coefficients and the coherence values between the two points increased significantly after correcting the estimated orientation errors.
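The maximum cross-correlation idea can be illustrated with a simplified two-dimensional (horizontal-component) sketch: rotate the checking-point record through trial angles and keep the angle that maximizes the summed correlation with the reference record. The paper's method works in three dimensions; the function below and all its names are ours.

```python
import numpy as np

def orientation_error(ref_ns, ref_ew, chk_ns, chk_ew, step_deg=0.5):
    """Estimate the horizontal orientation error of a 'checking' seismograph
    relative to a 'reference' one: rotate the checking-point components on a
    grid of trial angles and return the angle maximising the summed zero-lag
    correlation coefficient with the reference components."""
    best, best_ang = -np.inf, 0.0
    for ang in np.arange(-180.0, 180.0, step_deg):
        t = np.deg2rad(ang)
        ns = chk_ns * np.cos(t) - chk_ew * np.sin(t)
        ew = chk_ns * np.sin(t) + chk_ew * np.cos(t)
        c = np.corrcoef(ref_ns, ns)[0, 1] + np.corrcoef(ref_ew, ew)[0, 1]
        if c > best:
            best, best_ang = c, ang
    return best_ang
```

Rotating a synthetic two-component record by a known angle and feeding both records in recovers the (negative of the) applied angle to within the grid step.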

4.
It is often very useful to be able to smooth velocity fields estimated from exploration seismic data. For example, seismic migration is most successful when accurate but also smooth migration velocity fields are used. Smoothing in one, two and three dimensions is examined using North Sea velocity data. A number of ways of carrying out this smoothing are examined, and the technique of locally weighted regression (LOESS) emerges as the most satisfactory. In this method each smoothed value is formed using a local regression on a neighbourhood of points, downweighted according to their distance from the point of interest. In addition the method incorporates ‘blending’, which saves computation by using function and derivative information, and ‘weighting and robustness’, which allows the smooth to be biased towards reliable points, or away from unreliable ones. A number of other important factors are also considered: namely, the effect of changing the scales of axes, or of thinning the velocity field, prior to smoothing, as well as the problem of smoothing on to irregular subsurfaces.
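The core of the smoother can be sketched in one dimension: a weighted straight-line fit through the nearest neighbours of the evaluation point, with tricube weights decaying with distance. This is a minimal sketch of LOESS only; the 'blending' and robustness refinements mentioned above are omitted, and the names are ours.

```python
import numpy as np

def loess(x, y, x0, frac=0.3):
    """Locally weighted regression at a single point x0: fit a weighted
    local linear model through the frac*n nearest neighbours, using
    tricube weights w = (1 - (d/dmax)^3)^3."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = max(int(frac * len(x)), 2)
    d = np.abs(x - x0)
    idx = np.argsort(d)[:n]                    # n nearest neighbours
    w = (1 - (d[idx] / d[idx].max())**3)**3    # tricube weights
    W = np.diag(w)
    A = np.column_stack([np.ones(n), x[idx]])  # local linear model [1, x]
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
    return beta[0] + beta[1] * x0
```

On exactly linear data the local fit reproduces the line regardless of the weights, which is a quick sanity check.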

5.
High resolution digital elevation models (DEMs) are increasingly produced from photographs acquired with consumer cameras, both from the ground and from unmanned aerial vehicles (UAVs). However, although such DEMs may achieve centimetric detail, they can also display systematic broad-scale error that restricts their wider use. Such errors, which in typical UAV data are expressed as a vertical ‘doming’ of the surface, result from a combination of near-parallel imaging directions and inaccurate correction of radial lens distortion. Using simulations of multi-image networks with near-parallel viewing directions, we show that enabling camera self-calibration as part of the bundle adjustment process inherently leads to erroneous radial distortion estimates and associated DEM error. This effect is relevant whether a traditional photogrammetric or newer structure-from-motion (SfM) approach is used, but errors are expected to be more pronounced in SfM-based DEMs, for which use of control and check point measurements is typically more limited. Systematic DEM error can be significantly reduced by the additional capture and inclusion of oblique images in the image network; we provide practical flight plan solutions for fixed wing or rotor-based UAVs that, in the absence of control points, can reduce DEM error by up to two orders of magnitude. The magnitude of doming error shows a linear relationship with radial distortion, and we show how characterization of this relationship allows an improved distortion estimate and, hence, existing datasets to be optimally reprocessed. Although focussed on UAV surveying, our results are also relevant to ground-based image capture. © 2014 The Authors. Earth Surface Processes and Landforms published by John Wiley & Sons Ltd.

6.
The concept of dynamic equilibrium has provided geomorphologists with a challenging paradigm for studying landform evolution, but quantitative evidence for its existence has proved elusive, particularly for complex geomorphological systems. The authors believe that the principle has now been verified through the application of the ‘archival photogrammetric technique’ to a sequence of historical photographs spanning 50 years of process at the Black Ven mudslide complex in Dorset, U.K. The principles and limitations of the archival photogrammetric technique are described. The method is applied to oblique and vertical aerial photographs of Black Ven at five epochs, commencing in 1946 and continuing at approximately 10 year intervals until 1988. The technique is used to generate plans, contours and sections and a dense and accurate digital elevation model (DEM) of the whole site at each epoch. This is used to generate ‘DEMs of difference’ and a ‘distribution of slope angle’ which suggest that the mudslides are in equilibrium despite the removal of 200 000 m³ of sediment between 1958 and 1988. Extrapolation of the slope distribution through time suggests that the recurrence interval of episodic landform change at Black Ven may be approximately 60 years.

7.
Anderson WP, Evans DG. Ground Water, 2007, 45(4): 499–505
Ground water recharge is often estimated through the calibration of ground water flow models. We examine the nature of calibration errors by considering some simple mathematical and numerical calculations. From these calculations, we conclude that calibrating a steady-state ground water flow model to water level extremes yields estimates of recharge that have the same value as the time-varying recharge at the time the water levels are measured. These recharge values, however, are a subdued version of the actual transient recharge signal. In addition, calibrating a steady-state ground water flow model to data collected during periods of rising water levels will produce recharge values that underestimate the actual transient recharge. Similarly, calibrating during periods of falling water levels will overestimate the actual transient recharge. We also demonstrate that average water levels can be used to estimate the actual average recharge rate provided that water level data have been collected for a sufficient amount of time.
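The principle can be made concrete with a toy steady-state model: a 1-D Dupuit aquifer between two fixed-head streams, 'calibrated' by inverting the midpoint head for recharge. This is an illustration of the idea, not the authors' calculations; the geometry, names and parameter values are ours.

```python
import math

def dupuit_head(x, L, h1, h2, R, K):
    """Steady-state Dupuit water-table elevation at position x in a 1-D
    unconfined aquifer between fixed heads h1 (x=0) and h2 (x=L), with
    uniform recharge R and hydraulic conductivity K."""
    h_sq = h1**2 + (h2**2 - h1**2) * x / L + (R / K) * x * (L - x)
    return math.sqrt(h_sq)

def calibrate_recharge(h_mid, L, h1, h2, K):
    """'Calibration' in miniature: invert the observed midpoint head for R.
    A steady model calibrated to a snapshot of heads returns whatever
    recharge is consistent with that snapshot."""
    return (h_mid**2 - (h1**2 + h2**2) / 2) * K / (L / 2)**2
```

Generating a head with a known recharge and inverting it recovers that recharge exactly, which is the round-trip the abstract's argument rests on.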

8.
3D stochastic inversion of magnetic data

9.
An envelope-based pushover analysis procedure is presented that assumes that the seismic demand for each response parameter is controlled by a predominant system failure mode that may vary according to the ground motion. To be able to simulate the most important system failure modes, several pushover analyses need to be performed, as in a modal pushover analysis procedure, whereas the total seismic demand is determined by enveloping the results associated with each pushover analysis. The demand for the most common system failure mode resulting from the ‘first-mode’ pushover analysis is obtained by response history analysis of the equivalent ‘modal-based’ SDOF model, whereas the demand for other failure modes is based on the ‘failure-based’ SDOF models. This makes the envelope-based pushover analysis procedure equivalent to the N2 method provided that it involves only ‘first-mode’ pushover analysis and response history analysis of the corresponding ‘modal-based’ SDOF model. It is shown that the accuracy of the approximate 16th, 50th and 84th percentile response, expressed in terms of IDA curves, does not decrease with the height of the building or with the intensity of the ground motion. This is because the roof displacement and the maximum storey drift due to individual ground motions were predicted with sufficient accuracy for almost all the ground motions from the analysed sets. Copyright © 2013 John Wiley & Sons, Ltd.

10.
A comparison of two stochastic inverse methods in a field-scale application
Inverse modeling is a useful tool in ground water flow modeling studies. The most frequent difficulties encountered when using this technique are the lack of conditioning information (e.g., heads and transmissivities), the uncertainty in available data, and the nonuniqueness of the solution. These problems can be addressed and quantified through a stochastic Monte Carlo approach. The aim of this work was to compare the applicability of two stochastic inverse modeling approaches in a field-scale application. The multi-scaling (MS) approach uses a downscaling parameterization procedure that is not based on geostatistics. The pilot point (PP) approach uses geostatistical random fields as initial transmissivity values and an experimental variogram to condition the calibration. The studied area (375 km²) is part of a regional aquifer, northwest of Montreal in the St. Lawrence lowlands (southern Québec). It is located in limestone, dolomite, and sandstone formations, and is mostly a fractured porous medium. The MS approach generated small errors on heads, but the calibrated transmissivity fields did not reproduce the variogram of observed transmissivities. The PP approach generated larger errors on heads but better reproduced the spatial structure of observed transmissivities. The PP approach was also less sensitive to uncertainty in head measurements. If reliable heads are available but no transmissivities are measured, the MS approach provides useful results. If reliable transmissivities with a well inferred spatial structure are available, then the PP approach is a better alternative. This approach, however, must be used with caution if measured transmissivities are not reliable.

11.
A post audit of a model-designed ground water extraction system
Andersen PF, Lu S. Ground Water, 2003, 41(2): 212–218
Model post audits test the predictive capabilities of ground water models and shed light on their practical limitations. In the work presented here, ground water model predictions were used to design an extraction/treatment/injection system at a military ammunition facility and then were re-evaluated using site-specific water-level data collected approximately one year after system startup. The water-level data indicated that performance specifications for the design, i.e., containment, had been achieved over the required area, but that predicted water-level changes were greater than observed, particularly in the deeper zones of the aquifer. Probable model error was investigated by determining the changes that were required to obtain an improved match to observed water-level changes. This analysis suggests that the originally estimated hydraulic properties were in error by a factor of two to five. These errors may have resulted from attributing less importance to data from deeper zones of the aquifer and from applying pumping test results to a volume of material that was larger than the volume affected by the pumping test. To determine the importance of these errors to the predictions of interest, the models were used to simulate the capture zones resulting from the originally estimated and updated parameter values. The study suggests that, despite the model error, the ground water model contributed positively to the design of the remediation system.

12.
Seismic wave attenuation in porous rocks consists of intrinsic or anelastic attenuation (the lost energy is converted into heat due to interaction between the waves and the rocks) and extrinsic or geometric attenuation (the energy is lost due to beam spreading, transmission loss and scattering). The first is of great importance because it can give additional information on the petrophysical properties of rocks (permeability, degree of saturation, type of saturant, etc.). The most difficult problem in attenuation measurements is estimating or eliminating extrinsic attenuation, so that the intrinsic attenuation can be obtained. To date, several methods have been used in laboratory attenuation measurements based on wave propagation, each with its own difficulties. The coupling effect and the geometric divergence or beam spreading are the major problems. Papadakis’ diffraction corrections have been used extensively by Winkler and Plona in their modified pulse-echo high-pressure attenuation measurements. These corrections are computed for homogeneous liquid media, and their failure to fit data for solid material implies that they must be used with caution, especially for high Q values. Three new methods for laboratory ultrasonic attenuation measurements are presented. The first is the ‘ultrasonic lens’ method for attenuation measurements at atmospheric pressure, in which an ultrasonic lens placed between transmitter and sample transforms the initially oblique incident beam into normal incidence so that the geometric divergence is eliminated. The second method is the ‘panoramic receiver’, in which the beam spreading can be eliminated by integrating the ultrasonic energy over a large area. The third method is called the ‘self-spectral ratio’ and is applicable for all pressure conditions. Attenuation is estimated by comparing two signals recorded on the same rock but with two slightly different thicknesses under the same pressure conditions; hence the extrinsic attenuation for both thicknesses is approximately the same. A comparison between the self-spectral ratio method and that of Winkler and Plona demonstrates very good agreement over a broad band of frequencies. Hence the Winkler–Plona technique and Papadakis’ diffraction corrections can be accepted as reliable in any future work.
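Assuming a frequency-independent Q, the self-spectral-ratio estimate reduces to a straight-line fit: ln(A_thin/A_thick) = π f Δx / (Q v) + const, where the constant absorbs the thickness-independent (extrinsic) losses. A sketch of that fit, in our own notation:

```python
import numpy as np

def q_from_spectral_ratio(freqs, amp_thin, amp_thick, dx, velocity):
    """Estimate intrinsic Q from the amplitude spectra of two pulses through
    the same rock at thicknesses differing by dx (m).  The slope of
    ln(A_thin/A_thick) against frequency gives pi*dx/(Q*v), so Q follows
    directly; the intercept (extrinsic losses) drops out."""
    y = np.log(np.asarray(amp_thin) / np.asarray(amp_thick))
    slope = np.polyfit(freqs, y, 1)[0]
    return np.pi * dx / (slope * velocity)
```

Feeding in synthetic spectra generated with a known Q (and a shared transmission factor that cancels in the ratio) recovers that Q.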

13.
14.
15.
Root zone soil water content impacts plant water availability, land energy and water balances. Because of unknown hydrological model error, observation errors and the statistical characteristics of the errors, the widely used Kalman filter (KF) and its extensions are challenged to retrieve the root zone soil water content using the surface soil water content. If the soil hydraulic parameters are poorly estimated, the KF and its extensions fail to accurately estimate the root zone soil water. The H‐infinity filter (HF) represents a robust version of the KF. The HF is widely used in data assimilation and is superior to the KF, especially when the performance of the model is not well understood. The objective of this study is to study the impact of uncertain soil hydraulic parameters, initial soil moisture content and observation period on the ability of HF assimilation to predict in situ soil water content. In this article, we study seven cases. The results show that the soil hydraulic parameters hold a critical role in the course of assimilation. When the soil hydraulic parameters are poorly estimated, an accurate estimation of root soil water content cannot be retrieved by the HF assimilation approach. When the estimated soil hydraulic parameters are similar to actual values, the soil water content at various depths can be accurately retrieved by the HF assimilation. The HF assimilation is not very sensitive to the initial soil water content, and the impact of the initial soil water content on the assimilation scheme can be eliminated after about 5–7 days. The observation interval is important for soil water profile distribution retrieval with the HF, and the shorter the observation interval, the shorter the time required to achieve actual soil water content. However, the retrieval results are not very accurate at a depth of 100 cm. Also it is complex to determine the weighting coefficient and the error attenuation parameter in the HF assimilation. 
In this article, the trial-and-error method was used to determine the weighting coefficient and the error attenuation parameter: after first establishing a limited range for each parameter, the ‘best parameter set’ was selected from that range. For the soil conditions investigated, the HF assimilation results are better than the open-loop results. Copyright © 2010 John Wiley & Sons, Ltd.

16.
Turbulence is considered to generate and drive most geophysical processes. The simplest case is isotropic turbulence. In this paper, the most common three-dimensional power-spectrum-based models of isotropic turbulence are studied in terms of their stochastic properties. Such models often have a high order of complexity, lack stochastic interpretation and violate basic stochastic asymptotic properties, such as the theoretical limits of the Hurst coefficient, when Hurst-Kolmogorov behaviour is observed. A simpler and robust model (which incorporates self-similarity structures, e.g. fractal dimension and Hurst coefficient) is proposed using a climacogram-based stochastic framework and tested against high-resolution laboratory-scale observations as well as hydro-meteorological observations of wind speed and precipitation intensities. Expressions for other stochastic tools such as the autocovariance and the power spectrum are also derived from the model and show agreement with the data. Finally, uncertainty-, discretization- and bias-related errors are estimated for each stochastic tool, and are found to be lower for the climacogram-based tools and larger for the power-spectrum-based ones.
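The climacogram itself is simple to estimate empirically: it is the variance of the block-averaged series as a function of averaging scale k. For white noise it decays as 1/k, while Hurst-Kolmogorov behaviour appears as a slower power-law decay. The function below is our own illustrative sketch, without the bias corrections a careful analysis would apply.

```python
import numpy as np

def climacogram(x, scales):
    """Empirical climacogram: for each averaging scale k, split the series
    into non-overlapping blocks of length k and return the variance of the
    block means (trailing remainder samples are discarded)."""
    x = np.asarray(x, float)
    out = []
    for k in scales:
        m = len(x) // k                 # number of complete blocks
        block_means = x[:m * k].reshape(m, k).mean(axis=1)
        out.append(block_means.var())
    return np.array(out)
```

For a long white-noise series, the variance at scale 100 is roughly 100 times smaller than at scale 1, matching the 1/k benchmark mentioned above.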

17.
Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from airborne LiDAR scanning. The study was carried out for an 8 m wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low flow water surface along the 90 m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat bed assumption, a ‘hybrid model’ rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. The hybrid model rating curve agreed with the direct measurements of discharge, and the LiDAR model rating curve agreed equally well with the medium and high flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR model rating curve and the low flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see ‘below’ the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model.
This opens the possibility of remotely sensing and monitoring stream flow in channels in remote locations. Copyright © 2012 John Wiley & Sons, Ltd.
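A rating-curve model of the kind described can be sketched with Manning's equation for an idealized rectangular channel; in the study, the geometry would come from the LiDAR survey and the roughness from the single calibration gauging. The rectangular cross-section, the function names and the parameter values below are our simplifying assumptions, not the paper's model.

```python
import math

def manning_discharge(stage, width, slope, n):
    """Discharge for a rectangular channel from Manning's equation,
    Q = (1/n) * A * Rh^(2/3) * sqrt(S), with stage in metres."""
    area = width * stage                  # flow area A
    rh = area / (width + 2 * stage)       # hydraulic radius A/P
    return area * rh**(2 / 3) * math.sqrt(slope) / n

def rating_curve(stages, width, slope, n):
    """Tabulate (stage, discharge) pairs, i.e. a synthetic rating curve."""
    return [(h, manning_discharge(h, width, slope, n)) for h in stages]
```

For an 8 m wide channel with slope 0.001 and n = 0.05, a 1 m stage gives roughly 4.4 m³/s, and the curve is monotonically increasing as a rating curve must be.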

18.
The seismological model was developed initially from the fundamental relationship between earthquake ground motion properties and the seismic moment generated at the source of the earthquake. Following two decades of continuous seismological research in the United States, seismological models which realistically account for both the source and path effects on the seismic shear waves have been developed and their accuracy rigorously verified (particularly in the long and medium period ranges). An important finding from the seismological research by Atkinson and Boore and their co‐investigators is the similarity of the average frequency characteristics of seismic waves generated at the source between the seemingly very different seismic environments of Eastern and Western North America (ENA and WNA, respectively). A generic definition of the average source properties of earthquakes has therefore been postulated, referred to herein as the generic source model. Further, the generic ‘hard rock’ crustal model which is characteristic of ENA and the generic ‘rock’ crustal model characteristic of WNA have been developed to combine with the generic source model, hence enabling simulations to be made of the important path‐related modifications to ground motions arising from different types of crustal rock materials. It has been found that the anelastic contribution to whole path attenuation is consistent between the ENA and WNA models, for earthquake ground motions (response spectral velocities and displacements) in the near and medium fields, indicating that differences in the ENA and WNA motions arise principally from the other forms of path‐related modifications, namely the mid‐crust amplification and the combined effect of the upper‐crust amplification and attenuation, both of which are significant only for the generic WNA ‘rock’ earthquake ground motions. 
This paper aims to demonstrate the effective utilization of the latest seismological model, comprising the generic source and crustal models, to develop a response spectral attenuation model for direct engineering applications. The developed attenuation model also comprises a source factor and several crustal (wave-path modification) component factors, and thus has also been termed herein the component attenuation model (CAM). Generic attenuation relationships in CAM, which embrace both ENA and WNA conditions, have been developed using stochastic simulations. The crustal classification of a region outside North America can be based upon regional seismological and geological information. CAM is particularly useful for areas where local strong motion data are lacking for satisfactory empirical modelling. In the companion paper, entitled ‘Response spectrum modelling for rock sites in low and moderate seismicity regions combining velocity, displacement and acceleration predictions’, the CAM procedure has been incorporated into a response spectrum model which can be used to effectively define the seismic hazard of bedrock sites in low and moderate seismicity regions. This paper and the companion paper constitute the basis of a long-term objective of the authors: to develop and effectively utilize the seismological model for engineering applications worldwide.

19.
Expansion of a Plane Wave into Gaussian Beams
Magnetic susceptibility measurements on topsoils have often been used in recent years to detect anthropogenic pollution. In most cases, a Bartington susceptibility meter was used for field measurements. However, up to now, no standard procedure has been developed for carrying out such investigations. The purpose of our study was to test the compatibility of different set-ups of instruments used for this purpose and the possible influence of subjective (human) factors. Field magnetic susceptibility measurements, carried out with four different Bartington MS2D instruments in strictly defined positions, are very consistent for both low and high values. The correlation coefficient between the magnetic susceptibility values recorded with different Bartington MS2D probes reached 97–98%. A test area was mapped independently by two groups, without any restrictions concerning the choice and distribution of the measured points, but respecting a few standard conditions (e.g., measuring at a distance from tree trunks, on the flattest place possible, and recording 10–30 values per point). The resulting susceptibility maps show the same general features in both cases, suggesting that the measuring strategy applied is suitable for topsoil magnetic screening. The methodology proposed can be used to map magnetic susceptibility on a larger scale (for example, Europe), providing large sets of representative data and eliminating border-transition biases and human errors.

20.
A new approach is demonstrated that permits a reliable estimate of specific yield using published values of the van Genuchten water retention parameters and effective grain sizes, together with the measured effective grain sizes of soil samples. The specific yield distribution of each soil texture was computed using the published values of the van Genuchten parameters. The specific yield values and the published values of effective grain sizes were then used to construct a specific yield–effective grain size curve, which estimates the ‘point’ specific yield of the soil samples. Applying the central limit theorem, the point specific yields could be transformed into an ‘areal’ specific yield for a study area. Compared with other commonly used approaches, the present procedure requires relatively little computational effort and readily obtainable data. It is cost effective and does not depend on soil texture classification. More importantly, it incorporates the depth to the water table and the variations in grain size inherent in natural soil conditions. The approach was applied to estimate the specific yield of an unconfined sandy aquifer created by land reclamation in the equatorial region. The values obtained were compared with field measurements and with typical ranges of specific yield from the literature. Instead of a single estimate of the specific yield, the method yields a 95% confidence interval with a narrower range than the typical ranges reported in the literature. In addition, the estimated values are close to the field measurements; hence, the procedure provides a cost-effective alternative to field measurement. The applicability of the present approach could be extended to sites with heterogeneity in the horizontal direction. Nevertheless, its applicability to layered soil profiles requires further evaluation. Copyright © 2006 John Wiley & Sons, Ltd.
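The retention-curve route to a 'point' specific yield can be sketched under a common textbook approximation: Sy equals the saturated water content minus the water retained at the land surface when the water table sits at depth d (suction head ≈ d at hydrostatic equilibrium), with retention described by the van Genuchten curve. This is a simplified illustration with our own names and parameter values; the paper additionally maps Sy to effective grain size and aggregates point values into an areal estimate, which is not shown here.

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content at suction head h (m) from the van Genuchten
    retention curve, with m = 1 - 1/n."""
    m = 1 - 1 / n
    return theta_r + (theta_s - theta_r) / (1 + (alpha * h)**n)**m

def specific_yield(depth_to_wt, theta_r, theta_s, alpha, n):
    """'Point' specific yield: saturated water content minus the water
    retained at the surface for a water table at depth_to_wt (m)."""
    return theta_s - van_genuchten_theta(depth_to_wt, theta_r, theta_s, alpha, n)
```

With typical sand parameters the estimate falls in the expected 0.3 to 0.4 range, and a shallower water table correctly yields a smaller Sy because less water can drain.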
