Full-text access: 143 subscription articles, 20 free, 9 free domestically.
By subject: Surveying and Mapping 58; Atmospheric Sciences 9; Geophysics 29; Geology 42; Oceanography 5; Astronomy 1; Multidisciplinary 14; Physical Geography 14.
By year: 2022 (11); 2021 (2); 2020 (8); 2019 (9); 2018 (8); 2017 (19); 2016 (12); 2015 (10); 2014 (15); 2013 (10); 2012 (13); 2011 (7); 2010 (2); 2009 (8); 2008 (5); 2007 (3); 2006 (3); 2005 (3); 2004 (5); 2003 (4); 2001 (1); 2000 (2); 1999 (3); 1998 (2); 1997 (1); 1996 (1); 1995 (1); 1992 (2); 1988 (1); 1986 (1).
172 query results in total.
1.
Conventional acoustic inversion, both in principle and in implementation, is built on layered-medium assumptions, and its targets are mostly layered reservoirs. Carbonate cave-type reservoirs, by contrast, are irregular in shape and non-uniformly distributed, so conventional acoustic inversion is poorly suited to them. The well-log-constrained multiple inversion technique studied here solves the seismic inversion problem for non-layered, non-uniform reservoirs: it yields an acoustic impedance volume that carries information on carbonate cave reservoirs and extracts a differential impedance that highlights the low-velocity character of cave-type reservoir bodies, providing a reliable basis for locating carbonate cave-type hydrocarbon reservoirs.
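A minimal sketch of one way to extract such a differential impedance, under the assumption (not stated explicitly above) that it is the deviation of the inverted impedance volume from a smooth layered background, so that low-impedance cave anomalies stand out as negative values. The function name, array shapes and smoothing window are illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def differential_impedance(impedance, background_window=51):
    """Subtract a smooth layered background from an inverted impedance volume.

    impedance : 3-D array (inline, crossline, vertical samples) of acoustic
        impedance from the inversion.
    background_window : vertical running-mean length used as the layered
        background estimate (illustrative choice).
    """
    # Smooth along the vertical axis only, so the background stays "layered".
    background = uniform_filter1d(impedance, size=background_window, axis=-1, mode="nearest")
    # Negative values flag low-impedance (low-velocity) cave-type anomalies.
    return impedance - background

# Synthetic example: a low-impedance body embedded in a near-constant background.
rng = np.random.default_rng(0)
volume = 9000.0 + rng.normal(0.0, 100.0, size=(50, 50, 400))
volume[20:25, 20:25, 180:200] -= 2500.0          # an embedded low-impedance "cave"
d_imp = differential_impedance(volume)
print(d_imp.min())                                # strongly negative inside the anomaly
```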
2.
Starting from the maximum a posteriori (MAP) probability density viewpoint, and under the assumption that the data-noise vector and the unknown model vector are independent zero-mean Gaussian random processes, this paper establishes the nonlinear system equations of stochastic inversion and gives a functional expression for estimating the model variance. It concludes by proving the sparsity of the inverse solution, thereby explaining the high-resolution character of the stochastic inversion output. Building on least-squares inversion, the paper develops and refines the theoretical basis of the stochastic inversion method, reveals the essential difference between stochastic and least-squares inversion, demonstrates the advantages of the stochastic method, and points to its broad application prospects.
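The Gaussian starting point of this derivation can be written down concretely. A minimal sketch, assuming a linear forward operator G (the paper's system equations are nonlinear, so this is only the linearized first step): with zero-mean Gaussian noise of variance sigma_n^2 and a zero-mean Gaussian model prior of variance sigma_m^2, the MAP estimate minimizes ||d - Gm||^2/sigma_n^2 + ||m||^2/sigma_m^2, i.e. damped least squares. The operator and data below are synthetic placeholders.

```python
import numpy as np

def map_estimate(G, d, sigma_n, sigma_m):
    """MAP model estimate under zero-mean independent Gaussian noise and model priors.

    Solves  argmin_m  ||d - G m||^2 / sigma_n^2 + ||m||^2 / sigma_m^2,
    whose normal equations are (G^T G + (sigma_n/sigma_m)^2 I) m = G^T d.
    """
    damping = (sigma_n / sigma_m) ** 2
    return np.linalg.solve(G.T @ G + damping * np.eye(G.shape[1]), G.T @ d)

# Tiny example: recover a spiky model from integrated, noisy data.
rng = np.random.default_rng(1)
G = np.tril(np.ones((80, 80)))                 # simple integration-style forward operator
m_true = np.zeros(80); m_true[[15, 40, 60]] = [1.0, -0.7, 0.5]
d = G @ m_true + rng.normal(0.0, 0.05, 80)
m_map = map_estimate(G, d, sigma_n=0.05, sigma_m=0.5)
print(np.round(m_map[[15, 40, 60]], 2))
```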
3.
Historically, observing snow depth over large areas has been difficult. When snow depth observations are sparse, regression models can be used to infer the snow depth over a given area. Data sparsity has also left many important questions about such inference unexamined. Improved inference, or estimation, of snow depth and its spatial distribution from a given set of observations can benefit a wide range of applications from water resource management, to ecological studies, to validation of satellite estimates of snow pack. The development of Light Detection and Ranging (LiDAR) technology has provided non-sparse snow depth measurements, which we use in this study, to address fundamental questions about snow depth inference using both sparse and non-sparse observations. For example, when are more data needed and when are data redundant? Results apply to both traditional and manual snow depth measurements and to LiDAR observations. Through sampling experiments on high-resolution LiDAR snow depth observations at six separate 1.17-km² sites in the Colorado Rocky Mountains, we provide novel perspectives on a variety of issues affecting the regression estimation of snow depth from sparse observations. We measure the effects of observation count, random selection of observations, quality of predictor variables, and cross-validation procedures using three skill metrics: percent error in total snow volume, root mean squared error (RMSE), and R². Extremes of predictor quality are used to understand the range of its effect; how do predictors downloaded from the internet perform against more accurate predictors measured by LiDAR? Whereas cross-validation remains the only option for validating inference from sparse observations, in our experiments the full set of LiDAR-measured snow depths can be considered the 'true' spatial distribution and used to understand cross-validation bias at the spatial scale of inference. We model at the 30-m resolution of readily available predictors, which is a popular spatial resolution in the literature. Three regression models are also compared, and we briefly examine how sampling design affects model skill. Results quantify the primary dependence of each skill metric on observation count, which ranges over three orders of magnitude, doubling at each step from 25 up to 3200. Whereas uncertainty (resulting from random selection of observations) in percent error of true total snow volume is typically well constrained by 100–200 observations, there is considerable uncertainty in the inferred spatial distribution (R²) even at medium observation counts (200–800). We show that percent error in total snow volume is not sensitive to predictor quality, although RMSE and R² (measures of spatial distribution) often depend critically on it. Inaccuracies of downloaded predictors (most often the vegetation predictors) can easily require a quadrupling of observation count to match RMSE and R² scores obtained by LiDAR-measured predictors. Under cross-validation, the RMSE and R² skill measures are consistently biased towards poorer results than their true validations. This is primarily a result of greater variance at the spatial scales of point observations used for cross-validation than at the 30-m resolution of the model. The magnitude of this bias depends on individual site characteristics, observation count (for our experimental design), and sampling design. Sampling designs that maximize independent information maximize cross-validation bias but also maximize true R².
The bagging tree model is found to generally outperform the other regression models in the study on several criteria. Finally, we discuss and recommend the use of LiDAR in conjunction with regression modelling to advance understanding of snow depth spatial distribution at spatial scales of thousands of square kilometres. Copyright © 2012 John Wiley & Sons, Ltd.
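A hedged sketch of one sampling experiment of the kind described above: draw a random subset of "observations" from a dense, LiDAR-like snow-depth grid, fit a bagging-tree regressor on predictor variables, and score percent error in total snow volume, RMSE and R² against the full grid. The synthetic grid, predictor names and model settings are placeholders, not the study's data or exact configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)

# Stand-ins for 30-m predictor rasters (e.g. elevation, vegetation), flattened to 1-D.
n_cells = 10_000
elevation = rng.uniform(2500, 3500, n_cells)
vegetation = rng.uniform(0, 1, n_cells)
X = np.column_stack([elevation, vegetation])

# Stand-in for the "true" LiDAR snow depth at every cell.
depth_true = 0.002 * (elevation - 2500) + 0.5 * vegetation + rng.normal(0, 0.15, n_cells)

def run_experiment(n_obs):
    """Fit on n_obs randomly selected observations, validate against the full grid."""
    idx = rng.choice(n_cells, size=n_obs, replace=False)
    # BaggingRegressor's default base estimator is a decision tree, i.e. a bagging tree model.
    model = BaggingRegressor(n_estimators=50, random_state=0)
    model.fit(X[idx], depth_true[idx])
    pred = model.predict(X)
    # Cell area is constant, so percent error in total volume equals percent error in summed depth.
    volume_error = 100.0 * (pred.sum() - depth_true.sum()) / depth_true.sum()
    rmse = mean_squared_error(depth_true, pred) ** 0.5
    return volume_error, rmse, r2_score(depth_true, pred)

for n_obs in (25, 50, 100, 200, 400):            # doubling steps, as in the experiments above
    print(n_obs, run_experiment(n_obs))
```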
4.
Sparse multinomial logistic regression uses only the spectral information of the image for classification, which limits its performance. This paper proposes a sparse multinomial logistic regression method for hyperspectral image classification that accounts for local and structural features. First, weighted mean filtering and extended morphological attribute profiles are applied to the original hyperspectral image to extract local and structural features; the two feature sets are then fused at the feature level by weighted averaging to obtain more distinctive per-pixel features; finally, the fused result is classified with a sparse multinomial logistic regression classifier. The results show that the method effectively improves classification accuracy and is reasonably robust.
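A minimal sketch of the fuse-then-classify pipeline described above, assuming scikit-learn's L1-penalised multinomial logistic regression as the sparse classifier, a uniform mean filter standing in for the weighted mean filter, and a simple grey-scale opening standing in for the extended morphological attribute profiles (a rough proxy only). Filter sizes, the fusion weight and the synthetic cube are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter, grey_opening
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
H, W, B = 60, 60, 10                       # a small synthetic "hyperspectral" cube
cube = rng.random((H, W, B))
labels = rng.integers(0, 4, size=(H, W))   # placeholder ground truth (random, so scores are not meaningful)

# Local features: per-band mean filtering (stand-in for the weighted mean filter).
local = np.stack([uniform_filter(cube[..., b], size=5) for b in range(B)], axis=-1)

# Structural features: per-band grey-scale opening (rough stand-in for attribute profiles).
structural = np.stack([grey_opening(cube[..., b], size=(5, 5)) for b in range(B)], axis=-1)

# Weighted-average feature-level fusion of the two feature sets.
w = 0.5
fused = w * local + (1 - w) * structural

X = fused.reshape(-1, B)
y = labels.ravel()

# Sparse (L1-penalised) multinomial logistic regression.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=2000, tol=1e-3)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```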
5.
Starting from an analysis of the strengths and weaknesses of support vector machine (SVM) and relevance vector machine (RVM) approaches to hyperspectral image classification, this paper applies the probabilistic classification vector machine (PCVM) to hyperspectral image classification experiments. Within a Bayesian framework, the PCVM places a truncated Gaussian prior on the basis-function weights, so that weights belonging to different classes have priors of opposite sign, and infers the parameters with an EM algorithm. The resulting probabilistic model is sufficiently sparse and remedies the RVM's tendency to select samples of the wrong class as relevance vectors, thereby effectively improving classification accuracy and stability. Classification experiments on OMIS and PHI imagery show that the PCVM is well suited to hyperspectral image classification.
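The PCVM itself needs the truncated-Gaussian prior and EM inference described above; the sketch below is only a toy surrogate for its central idea, namely that each basis-function weight is constrained to carry the sign of its training sample's class, so retained basis functions cannot come from the wrong class. It replaces the EM step with a sign-constrained (non-negative) least-squares fit over an RBF kernel; the kernel width and data are placeholders, and this is not the PCVM algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
# Toy two-class data standing in for hyperspectral pixel vectors.
X = np.vstack([rng.normal(+1.0, 0.7, size=(40, 5)), rng.normal(-1.0, 0.7, size=(40, 5))])
y = np.hstack([np.ones(40), -np.ones(40)])

K = rbf_kernel(X, X)

# Enforce sign(w_i) == y_i by writing w = diag(y) v with v >= 0 and solving NNLS,
# so each kept basis function contributes with its own class's sign.
v, _ = nnls(K @ np.diag(y), y)
w = y * v

kept = np.flatnonzero(np.abs(w) > 1e-6)          # basis functions actually retained
print("retained basis functions:", kept.size, "of", y.size)
print("training accuracy:", (np.sign(K @ w) == y).mean())
```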
6.
Seismic data denoising based on an improved K-SVD dictionary learning method (cited 2 times: 0 self-citations, 2 by others)
To achieve better seismic data denoising, we introduce a new algorithm, the fast iterative shrinkage-thresholding algorithm (FISTA). FISTA and K-singular value decomposition (K-SVD) are iterated to keep updating the K-SVD dictionary; the updated dictionary is used to build a sparse representation of the seismic data, and the smaller sparse coefficients are discarded so that random noise in the data is suppressed. Comparative experiments on synthetic records from a layered model, synthetic records from the Marmousi model, and field seismic data show that FISTA improves the signal-to-noise ratio of the seismic data more than the orthogonal matching pursuit (OMP) algorithm while effectively preserving the reflection signal.
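A minimal sketch of the FISTA stage used above for sparse coding over a fixed dictionary (the K-SVD dictionary-update stage is omitted): FISTA solves min_x 0.5*||y - Dx||^2 + lam*||x||_1 by iterative soft-thresholding with a momentum term, and reconstructing from the surviving coefficients suppresses random noise. The dictionary, trace and lam below are synthetic placeholders rather than a learned seismic dictionary.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(D, y, lam, n_iter=200):
    """FISTA for  min_x 0.5*||y - D x||^2 + lam*||x||_1  (sparse coding over a fixed dictionary D)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the data-fit gradient
    x_prev = np.zeros(D.shape[1])
    z, t = x_prev.copy(), 1.0
    for _ in range(n_iter):
        x = soft_threshold(z - D.T @ (D @ z - y) / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + (t - 1.0) / t_next * (x - x_prev)   # momentum step
        x_prev, t = x, t_next
    return x_prev

# Synthetic test: a sparse code plus noise, recovered over a random normalized dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128); x_true[rng.choice(128, 5, replace=False)] = rng.normal(0, 1, 5)
y = D @ x_true + rng.normal(0, 0.01, 64)        # a noisy "trace" in this dictionary
x_hat = fista(D, y, lam=0.05)
print("nonzero coefficients kept:", np.count_nonzero(np.abs(x_hat) > 1e-3))
denoised = D @ x_hat                            # reconstruction suppresses the random noise
```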
7.
To raise the face recognition rate and better represent facial features, this paper proposes a face recognition method that combines mirror images with the deviations of LRC and CRC. The method first generates mirror faces, then fuses the original and mirror faces into a new mixed training set, and finally performs recognition by combining the LRC and CRC deviations. The new method enlarges the training set and mitigates the influence of external factors such as illumination and pose. Experimental results show that combining mirror images with the LRC and CRC deviations improves face recognition accuracy.
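A hedged sketch of the pipeline described above, assuming LRC denotes linear regression classification (per-class least-squares reconstruction, classify by smallest residual), CRC denotes collaborative representation classification (one ridge regression over all training samples), and the "deviation combination" is a simple weighted sum of the two normalised class residuals; the paper's exact fusion rule may differ. Mirror faces are taken to be horizontal flips of the training images.

```python
import numpy as np

def mirror_augment(images):
    """Append horizontally flipped (mirror) faces to the training set."""
    return np.concatenate([images, images[:, :, ::-1]], axis=0)

def lrc_residuals(X_train, y_train, x):
    """LRC: reconstruct x from each class's samples by least squares; per-class residuals."""
    res = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c].T                       # columns are class-c samples
        beta, *_ = np.linalg.lstsq(Xc, x, rcond=None)
        res[c] = np.linalg.norm(x - Xc @ beta)
    return res

def crc_residuals(X_train, y_train, x, lam=1e-2):
    """CRC: one ridge regression over all samples, then per-class residuals."""
    A = X_train.T                                          # columns are all training samples
    rho = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x)
    return {c: np.linalg.norm(x - A[:, y_train == c] @ rho[y_train == c])
            for c in np.unique(y_train)}

def classify(X_train, y_train, x, w=0.5):
    """Weighted combination of normalised LRC and CRC residuals; pick the smallest."""
    r1, r2 = lrc_residuals(X_train, y_train, x), crc_residuals(X_train, y_train, x)
    s1, s2 = sum(r1.values()), sum(r2.values())
    scores = {c: w * r1[c] / s1 + (1 - w) * r2[c] / s2 for c in r1}
    return min(scores, key=scores.get)

# Toy usage with random "face" images (2 classes, 8x8 pixels).
rng = np.random.default_rng(0)
imgs = np.concatenate([rng.random((5, 8, 8)) + 1.0, rng.random((5, 8, 8)) - 1.0])
labels = np.array([0] * 5 + [1] * 5)
imgs_aug = mirror_augment(imgs)                            # 20 samples after mirroring
labels_aug = np.concatenate([labels, labels])
X = imgs_aug.reshape(len(imgs_aug), -1)
print(classify(X, labels_aug, X[0]))                       # expected: class 0
```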
8.
In this paper, we present a new approach to estimate high-resolution teleseismic receiver functions using a simultaneous iterative time-domain sparse deconvolution. This technique improves the deconvolution by using reweighting strategies based on a Cauchy criterion. The resulting sparse receiver functions enhance the primary converted phases and their multiples. To test its functionality and reliability, we applied this approach to synthetic experiments and to seismic data recorded at station ABU, in Japan. Our results show Ps conversions at approximately 4.0 s after the primary P onset, which are consistent with other seismological studies in this area. We demonstrate that sparse deconvolution is a simple, efficient technique for computing receiver functions with significantly greater resolution than conventional approaches.
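A minimal sketch of time-domain deconvolution with Cauchy-criterion reweighting in the spirit of the approach above (the simultaneous treatment of many events and the receiver-function preprocessing are omitted): build a convolution matrix from the source pulse and repeatedly solve a reweighted damped least-squares system whose Cauchy weights penalise small values more, driving the solution towards a sparse spike series. The pulse, noise level and hyperparameters are synthetic placeholders.

```python
import numpy as np
from scipy.linalg import toeplitz

def cauchy_sparse_deconv(wavelet, data, sigma=0.05, lam=1e-2, n_iter=20):
    """Iteratively reweighted least-squares deconvolution with a Cauchy sparsity criterion.

    Repeatedly solves (G^T G + lam * W) r = G^T d with
    W = diag(1 / (1 + r_i^2 / sigma^2)) recomputed from the previous iterate,
    so large spikes are penalised less and the solution becomes sparse.
    """
    n = len(data)
    col = np.zeros(n); col[:len(wavelet)] = wavelet
    G = toeplitz(col, np.zeros(n))                 # convolution matrix (wavelet at lag 0)
    r = np.zeros(n)
    for _ in range(n_iter):
        W = np.diag(1.0 / (1.0 + (r / sigma) ** 2))
        r = np.linalg.solve(G.T @ G + lam * W, G.T @ data)
    return r

# Synthetic test: a pulse convolved with a sparse spike series plus noise.
rng = np.random.default_rng(0)
t = np.arange(-10, 11)
wavelet = (1 - 0.1 * t**2) * np.exp(-0.05 * t**2)  # simple symmetric pulse
spikes = np.zeros(200); spikes[[30, 70, 120]] = [1.0, 0.4, -0.3]
data = np.convolve(spikes, wavelet, mode="full")[:200] + rng.normal(0, 0.01, 200)
r_hat = cauchy_sparse_deconv(wavelet, data)
print(np.flatnonzero(np.abs(r_hat) > 0.1))         # spike positions near 30, 70, 120
```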
9.
Because the state transition matrix and the design matrix of the Kalman filter used in GPS data processing contain a large proportion of zero elements, they can be stored as special sparse matrices. Using sparse matrix multiplication, combined with exploiting matrix symmetry and reducing the dimension of the matrix inversion, greatly reduces the number of multiplications in the Kalman filter. For undifferenced C/A pseudorange processing, the total number of multiplications of this algorithm is less than 1/3 of that of the conventional algorithm; for double-differenced P1 and P2 pseudoranges with double-differenced carrier phase, it is even less than 1/6; its run time is also only about 1/3 of the conventional algorithm's, so the computational efficiency of the Kalman filter is greatly improved.
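A minimal sketch of the idea above using scipy.sparse: store the state-transition matrix Phi and the design matrix H in compressed sparse row format so that every product involving them skips the zero entries, and invert only in the (small) measurement dimension. Dimensions and matrices are placeholders; the paper's further savings from symmetry handling and reduced-dimension inversion are not reproduced here.

```python
import numpy as np
from scipy import sparse

def kalman_step(x, P, Phi, Q, H, R, z):
    """One Kalman predict/update cycle with sparse Phi (state transition) and H (design matrix).

    All products involving Phi or H are written as sparse @ dense, so the many
    zero entries are never multiplied; the only matrix inverse is m x m.
    """
    # Prediction: P_pred = Phi P Phi^T + Q, computed as (Phi @ (Phi @ P).T).T + Q.
    A = Phi @ P                                    # sparse @ dense -> dense
    x_pred = Phi @ x
    P_pred = (Phi @ A.T).T + Q

    # Update (B = H P_pred, reused for the innovation covariance, gain and P update).
    B = H @ P_pred                                 # sparse @ dense -> dense, shape (m, n)
    S = H @ B.T + R                                # innovation covariance, m x m
    K = B.T @ np.linalg.inv(S)                     # gain; uses symmetry of P_pred
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = P_pred - K @ B                         # (I - K H) P_pred
    return x_new, P_new

# Placeholder dimensions and matrices, just to show the call pattern.
rng = np.random.default_rng(0)
n, m = 200, 12                                     # state and measurement sizes (illustrative)
Phi = sparse.eye(n, format="csr")                  # mostly-zero transition matrix
H = sparse.random(m, n, density=0.05, format="csr", random_state=0)
x, P = np.zeros(n), np.eye(n)
Q, R = 1e-4 * np.eye(n), 1e-2 * np.eye(m)
z = rng.normal(size=m)
x, P = kalman_step(x, P, Phi, Q, H, R, z)
print(x[:5])
```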
10.
Studying strong motion records and the spatial distribution of ground shaking is of great importance in understanding the underlying causes of damage in earthquakes. Many regions in the world are either not instrumented or are sparsely instrumented. As such, significant opportunities for motion-damage correlations are lost. Two recent and damaging earthquakes belong to this class of lost opportunities, namely the Kashmir (Pakistan) earthquake of October 2005 and the Yogyakarta (Indonesia) earthquake of May 2006. In this paper, an overview of the importance of supply and demand studies in earthquake-stricken regions is given, followed by two examples of investigative engineering seismology aimed at reconstructing the hazard from sparse data. The paper closes with a plea for responsible authorities to invest in seismic monitoring networks in the very near future.