3.
Spatial Modeling and Analysis for Shoreline Change Detection and Coastal Erosion Monitoring (Cited: 3 in total; 0 self-citations; 3 by others)
Coastal erosion presents a serious problem throughout U.S. coastal areas. The Ohio Geological Survey estimates that more than 3,200 acres of Ohio's Lake Erie shore have been lost to erosion since the 1870s, resulting in economic losses exceeding tens of millions of dollars per year. This article presents the results of a project that monitors shoreline erosion using high-resolution imagery and examines its causes. Spatial modeling and analysis methods are applied to the project area along the south shore of Lake Erie. The shoreline is represented as a dynamically segmented linear model linked to a large amount of data describing shoreline changes. A new method computes an instantaneous shoreline using a digital water level model, a coastal terrain model, and bathymetric data. This method provides an algorithm for deriving the Mean Lower Low Water (MLLW) and Mean High Water (MHW) shorelines that are essential to navigation charts. The results are part of our effort toward a coastal spatial information infrastructure to support management and decision-making in the dynamic coastal environment.
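The abstract's idea of intersecting a water level with a coastal terrain model can be illustrated with a toy raster sketch. This is not the authors' algorithm: the grid, the threshold rule, and the neighbour test below are illustrative assumptions, standing in for the paper's combination of a digital water level model, terrain model, and bathymetry.

```python
import numpy as np

def shoreline_cells(terrain, water_level):
    """Crude raster approximation of an instantaneous shoreline:
    dry cells that border at least one submerged cell."""
    wet = terrain <= water_level          # submerged cells at this water level
    dry = ~wet
    # pad so edge cells have no out-of-bounds neighbours
    p = np.pad(wet, 1, constant_values=False)
    neighbour_wet = (p[:-2, 1:-1] | p[2:, 1:-1] |
                     p[1:-1, :-2] | p[1:-1, 2:])
    return dry & neighbour_wet

# toy terrain: elevation rises uniformly from west (-2 m) to east (+2 m)
terrain = np.tile(np.linspace(-2.0, 2.0, 9), (5, 1))
cells = shoreline_cells(terrain, water_level=0.0)
```

Raising or lowering `water_level` (e.g. from an MLLW or MHW datum) moves the extracted band of shoreline cells landward or seaward accordingly.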
4.
On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms (Cited: 2 in total; 1 self-citation; 1 by others)
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach are the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and the data least-squares approach, where only the coefficient matrix, X, is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler–Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. As an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic already emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335–342). The differences between the standard least-squares and the TLS approaches are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
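A minimal sketch of the kind of SVD-based 'closed form' solution the abstract mentions, using the textbook TLS construction: the null-space block of the right singular vectors of the augmented matrix [X Y] yields Ξ. This assumes the simplest error model (i.i.d. errors in both X and Y) and is not the paper's specific algorithm.

```python
import numpy as np

def mtls(X, Y):
    """Closed-form multivariate TLS estimate of Xi in
    (Y - E_Y) = (X - E_X) @ Xi, via the SVD of [X Y]."""
    n = X.shape[1]          # number of parameters per output column
    d = Y.shape[1]          # number of output columns
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    V12 = V[:n, n:n + d]    # top block of the last d right singular vectors
    V22 = V[n:, n:n + d]    # bottom block (assumed invertible)
    return -V12 @ np.linalg.inv(V22)

# noise-free check: the estimator must reproduce the exact parameter matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Xi_true = np.array([[2.0, 0.0], [0.0, -1.0], [1.0, 3.0]])
Y = X @ Xi_true
Xi_hat = mtls(X, Y)
```

With noise added to both X and Y, the same formula gives the TLS estimate that treats the two coordinate sets symmetrically, which is the "symmetry" issue the abstract highlights.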
9.
Correlations between photon currents from separate light-collectors provide information on the shape of the source. When the light-collectors are well separated, for example in space, transmission of these currents to a central correlator is limited by bandwidth. We study the possibility of compressing the photon fluxes and find that traditional compression methods are about as likely as compressed sensing to achieve this goal.
10.
In experiments aimed at detecting astrophysical sources, such as neutrino telescopes, one usually performs a search over a continuous parameter space (e.g. the angular coordinates of the sky, and possibly time), looking for the most significant deviation from the background hypothesis. Such a procedure inherently involves a "look elsewhere effect", namely, the possibility for a signal-like fluctuation to appear anywhere within the search range. Correctly estimating the p-value of a given observation thus requires repeated simulations of the entire search, a procedure that may be prohibitively expensive in terms of CPU resources. Recent results from the theory of random fields provide powerful tools which may be used to alleviate this difficulty, in a wide range of applications. We review those results and discuss their implementation, with a detailed example applied to neutrino point source analysis in the IceCube experiment.
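The look-elsewhere effect the abstract describes can be demonstrated with a toy Monte Carlo: scanning many positions inflates the global p-value well above the local one. The sketch below assumes independent Gaussian test statistics at a fixed number of scan points; real scans over continuous coordinates are correlated, which is exactly what the random-field results handle analytically instead of by brute-force simulation.

```python
import math
import numpy as np

def local_p_value(z_obs):
    """One-sided local p-value of a Gaussian test statistic."""
    return 0.5 * math.erfc(z_obs / math.sqrt(2.0))

def global_p_value(z_obs, n_points, n_trials=100_000, seed=1):
    """Monte Carlo global p-value: probability that a background-only scan
    of n_points independent positions produces a maximum >= z_obs."""
    rng = np.random.default_rng(seed)
    max_z = rng.standard_normal((n_trials, n_points)).max(axis=1)
    return float((max_z >= z_obs).mean())

z = 3.0                                        # a "3 sigma" local excess
p_local = local_p_value(z)                     # ~1.35e-3
p_global_mc = global_p_value(z, n_points=100)
p_global_exact = 1.0 - (1.0 - p_local) ** 100  # exact for independent points
```

Even this small example shows the effect: a local 3-sigma excess becomes an unremarkable ~13% global p-value after scanning 100 positions, which is why naive local significances overstate discoveries.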