1.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed l2,1-norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density. This a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The mixed l2,1-norm leads to a convex objective function that can be minimised using proximal algorithms; in particular, we use the fast iterative shrinkage-thresholding algorithm (FISTA). A key advantage of this algorithm is that it requires only matrix-vector multiplications and no direct matrix inversion, which makes it numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that, in contrast to conventional l2- or l1-norm regularised solutions, the proposed method provides consistent blocky and/or sparse estimates of P- and S-wave velocities and density from a noisy and limited number of observations.
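To illustrate the optimisation step described above, the sketch below is a minimal, hypothetical implementation of FISTA with a group (l2,1) proximal operator, assuming a generic linear forward operator G (for example, the three-term Aki-Richards coefficients convolved with a wavelet) and a data vector d. The well-log covariance constraint and the low-frequency background term mentioned in the abstract are omitted, and all names, groupings, and parameter choices are illustrative assumptions, not the authors' code.

```python
# Sketch only: FISTA with group soft-thresholding for a linearised AVO problem.
# G, d, lam, and the per-sample grouping are illustrative assumptions.
import numpy as np

def group_soft_threshold(m, tau, n_groups):
    """Proximal operator of tau * sum_t ||m_t||_2, with m reshaped so that each
    row holds one time sample's three terms (dVp/Vp, dVs/Vs, drho/rho)."""
    M = m.reshape(n_groups, -1)
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return (M * scale).ravel()

def fista_l21(G, d, lam, n_groups, n_iter=200):
    """FISTA for min_m 0.5*||d - G m||^2 + lam * sum_t ||m_t||_2.
    Only matrix-vector products with G and G.T are required."""
    L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the gradient
    m = np.zeros(G.shape[1])
    y, t = m.copy(), 1.0
    for _ in range(n_iter):
        grad = G.T @ (G @ y - d)                     # gradient of the data term
        m_new = group_soft_threshold(y - grad / L, lam / L, n_groups)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = m_new + ((t - 1.0) / t_new) * (m_new - m)  # Nesterov extrapolation
        m, t = m_new, t_new
    return m
```

The group soft-thresholding step is what yields blocky solutions: at each time sample the three reflectivity terms are shrunk or zeroed together, and the whole loop uses nothing beyond matrix-vector products.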
2.
Stationary segments in well-log sequences can be detected automatically by searching for change points in the data. These change points, which correspond to abrupt changes in the statistical nature of the underlying process, are identified by analysing the probability density functions of two adjacent sub-samples as they move along the data sequence. A statistical test sets a significance level on the probability that the two distributions are the same, so the number of segments is decided by keeping only those change points that yield low probabilities. Data from the Ocean Drilling Program were analysed, and a high correlation between the available core-log lithology interpretation and the statistical segmentation was observed. The results show that the proposed algorithm can serve as an auxiliary tool in the analysis and interpretation of geophysical log data for the identification of lithology units and sequences.
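As an illustration of the sliding adjacent-window idea, the sketch below compares the two sub-samples with a two-sample Kolmogorov-Smirnov test; the specific test, window length, and significance level are assumptions made for the example and are not necessarily those used in the paper.

```python
# Sketch only: flag change points where adjacent windows of a log differ.
# The KS test, window=50, and alpha=0.01 are illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

def detect_change_points(log, window=50, alpha=0.01):
    """Return candidate change-point indices and the p-value curve."""
    log = np.asarray(log, dtype=float)
    pvals = np.ones(len(log))
    for i in range(window, len(log) - window):
        left = log[i - window:i]           # sub-sample before the candidate point
        right = log[i:i + window]          # sub-sample after the candidate point
        pvals[i] = ks_2samp(left, right).pvalue
    # keep local minima of the p-value curve that fall below the significance level
    candidates = [i for i in range(window, len(log) - window)
                  if pvals[i] < alpha
                  and pvals[i] == pvals[max(0, i - window):i + window].min()]
    return candidates, pvals
```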
3.
4.
5.
It is well known that the computation of higher-order statistics such as skewness and kurtosis (which we call C-moments) is very dependent on sample size and is highly susceptible to the presence of outliers. To obviate these difficulties, Hosking (1990) introduced related statistics called L-moments. We have investigated the relationship between these two measures in a number of ways. First, we show that probability density functions (pdfs) estimated from L-moments are superior to those obtained using C-moments and the principle of maximum entropy. C-moments computed from these pdfs are not, however, better estimates than those computed from sample statistics, contrary to what one might expect. L-moment-derived distributions for field data examples appear to be more consistent from sample to sample than pdfs determined by conventional means. Our observations and conclusions have a significant impact on the use of the conventional maximum entropy procedure, which typically uses C-moments from actual data sets to infer probabilities.
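For concreteness, the sketch below computes sample L-moments from probability-weighted moments in the manner of Hosking (1990) and contrasts their outlier sensitivity with conventional skewness and kurtosis; the synthetic outlier experiment is purely illustrative and not taken from the paper.

```python
# Sketch only: sample L-moments via probability-weighted moments (Hosking, 1990)
# compared with C-moments on a sample with and without a single outlier.
import numpy as np
from scipy.stats import skew, kurtosis

def sample_l_moments(x):
    """Return (l1, l2, t3, t4): L-location, L-scale, L-skewness, L-kurtosis."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    # unbiased probability-weighted moments b0..b3 from the order statistics
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
x_out = np.append(x, 15.0)                 # add one large outlier
print("C-moments :", skew(x), kurtosis(x), "->", skew(x_out), kurtosis(x_out))
print("L-moments :", sample_l_moments(x)[2:], "->", sample_l_moments(x_out)[2:])
```

In this kind of experiment the L-skewness and L-kurtosis move far less under the added outlier than the C-moment skewness and kurtosis, which is the robustness property the abstract exploits.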