Sort by relevance: 5 search results (203 ms)
1.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed l2,1-norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density. This a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The mixed l2,1-norm leads to a convex objective function that can be minimised using proximal algorithms; in particular, we use the fast iterative shrinkage-thresholding algorithm (FISTA). A key advantage of this algorithm is that it requires only matrix–vector multiplications and no direct matrix inversion, which makes it numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that the proposed method, unlike conventional l2- or l1-norm regularised solutions, is able to provide consistent blocky and/or sparse estimates of P- and S-wave velocities and density from a noisy and limited number of observations.
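The proximal scheme described above can be sketched as follows. This is a minimal illustration of FISTA with the group (l2,1) soft-thresholding proximal operator, not the authors' implementation: the forward operator `A`, the group layout, and all parameter values are placeholder assumptions.

```python
import numpy as np

def group_soft_threshold(x, tau, groups):
    """Proximal operator of the l2,1 mixed norm: shrink each group of
    coefficients toward zero by its Euclidean norm, which zeroes out
    whole groups and hence promotes blocky/structured sparsity."""
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > tau:
            out[g] = (1.0 - tau / norm_g) * x[g]
    return out

def fista(A, b, tau, groups, n_iter=500):
    """FISTA for min_m 0.5*||A m - b||^2 + tau * sum_g ||m_g||_2.
    Requires only matrix-vector products; no direct matrix inversion."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    m = np.zeros(A.shape[1])
    y, t = m.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                          # gradient step
        m_new = group_soft_threshold(y - grad / L, tau / L, groups)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = m_new + ((t - 1.0) / t_new) * (m_new - m)     # momentum step
        m, t = m_new, t_new
    return m
```

Because each group is thresholded by its joint norm, parameters within a group (e.g. the P-velocity, S-velocity and density reflectivities at one time sample) switch on or off together, which is what produces the blocky solutions described in the abstract.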
2.
Danilo R. Velis, Mathematical Geology, 2007, 39(4): 409–417
Stationary segments in well log sequences can be automatically detected by searching for change points in the data. These change points, which correspond to abrupt changes in the statistical nature of the underlying process, can be identified by analysing the probability density functions of two adjacent sub-samples as they move along the data sequence. A statistical test sets a significance level for the probability that the two distributions are the same, providing a means to decide how many segments comprise the data by keeping those change points that yield low probabilities. Data from the Ocean Drilling Program were analysed, and a high correlation was observed between the available core-log lithology interpretation and the statistical segmentation. The results show that the proposed algorithm can be used as an auxiliary tool in the analysis and interpretation of geophysical log data for the identification of lithology units and sequences.
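The sliding-window comparison described above can be sketched as below. The abstract does not name the statistical test, so the two-sample Kolmogorov–Smirnov test is used here as one plausible choice; the window length and significance level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_change_points(x, window=50, alpha=1e-4):
    """Slide two adjacent sub-samples of length `window` along the series
    and compare their empirical distributions with a two-sample KS test.
    Positions where the 'same distribution' hypothesis is rejected at
    level `alpha` are flagged; consecutive flags are merged into runs,
    and each run's centre is reported as a change point."""
    n = len(x)
    flagged = []
    for i in range(window, n - window):
        _, p = ks_2samp(x[i - window:i], x[i:i + window])
        if p < alpha:
            flagged.append(i)
    change_points, run = [], []
    for i in flagged:
        if run and i != run[-1] + 1:   # gap in the flags: close current run
            change_points.append(run[len(run) // 2])
            run = []
        run.append(i)
    if run:
        change_points.append(run[len(run) // 2])
    return change_points
```

Lowering `alpha` keeps only change points whose two sides differ strongly in distribution, which is how the number of segments is controlled in the procedure the abstract describes.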
3.
4.
5.
T. J. Ulrych, D. R. Velis, A. D. Woodbury, M. D. Sacchi, Stochastic Environmental Research and Risk Assessment (SERRA), 2000, 14(1): 50–68
It is well known that the computation of higher-order statistics such as skewness and kurtosis (which we call C-moments) is very dependent on sample size and highly susceptible to the presence of outliers. To obviate these difficulties, Hosking (1990) introduced related statistics called L-moments. We have investigated the relationship between these two measures in a number of different ways. First, we show that probability density functions (pdfs) estimated from L-moments are superior to those obtained using C-moments and the principle of maximum entropy. C-moments computed from these pdfs are not, however, contrary to what one might expect, better estimates than those computed from sample statistics. L-moment-derived distributions for field-data examples appear to be more consistent from sample to sample than pdfs determined by conventional means. Our observations and conclusions have a significant impact on the use of the conventional maximum-entropy procedure, which typically uses C-moments from actual data sets to infer probabilities.
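Hosking's sample L-moments can be computed directly from the order statistics via probability-weighted moments. A minimal sketch using the standard formulas (not tied to this paper's data):

```python
import numpy as np
from math import comb

def l_moments(x):
    """Sample L-moments l1..l4 via probability-weighted moments b_r
    (Hosking, 1990). Being linear in the order statistics, they are far
    less sensitive to outliers than conventional (C-) moments."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    # b_r = (1/n) * sum_i C(i, r)/C(n-1, r) * x_(i), zero-based order stats
    b = [sum(comb(i, r) * xs[i] for i in range(r, n)) / (n * comb(n - 1, r))
         for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4
```

The L-moment ratios t3 = l3/l2 (L-skewness) and t4 = l4/l2 (L-kurtosis) play the roles of conventional skewness and kurtosis; for any symmetric sample l3 is exactly zero.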