Similar Documents
20 similar documents found.
1.
The use of a cross-correlation prefiltering technique to enhance the ability of Jansson's iterative deconvolution procedure to deconvolve extremely noisy chromatographic data is investigated. Test cases include peaks whose resolutions are as low as 0.35 and whose signal-to-noise ratios are as low as 1:1. Evaluation criteria include RMS error, relative peak error and peak area repeatability. For comparison purposes, relative peak area errors and peak area variances are also evaluated for noisy but well-resolved peaks that have only been prefiltered with the cross-correlation filter. Jansson's method in conjunction with cross-correlation prefiltering is shown not only to resolve overlapped peaks but in some cases to improve their signal-to-noise ratios. The study also establishes some limits to the capabilities of Jansson's method with regard to adverse data conditions.
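Jansson's procedure referred to above is a bounded, relaxed iterative deconvolution. A minimal NumPy sketch, assuming the common triangular relaxation form (this is not the authors' implementation; `r0` and `upper` are illustrative parameters):

```python
import numpy as np

def jansson_deconvolve(observed, psf, iters=100, r0=0.2, upper=1.0):
    """Jansson's relaxed, bounded iterative deconvolution (sketch).

    observed : blurred, possibly noisy signal (after any prefiltering)
    psf      : normalized instrument response on the same sample grid
    The relaxation factor is largest mid-range and vanishes at the
    bounds 0 and `upper`, which keeps the estimate physically valid.
    """
    o = observed.copy()
    for _ in range(iters):
        reblurred = np.convolve(o, psf, mode="same")
        relax = r0 * (1.0 - 2.0 * np.abs(o / upper - 0.5))
        o = np.clip(o + relax * (observed - reblurred), 0.0, upper)
    return o
```

In the setting of the abstract, `observed` would be the cross-correlation-prefiltered chromatogram; the tolerable noise level and the choice of `r0` are exactly the adverse-data limits the study probes.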

2.
This paper will consider the use of an iterative ratio technique called Gold's ratio method as an alternative to iterative constrained deconvolution methods for the restoration of overlapped and noisy chromatographic peaks. The study will consist of first describing the technique and then evaluating its performance with respect to Jansson's deconvolution procedure. A Hewlett-Packard 5890A gas chromatograph will be used to generate most of the test data. The evaluation criteria will include convergence rates, peak area errors and variances, retention time variances and noise performance.
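For reference, Gold's ratio method replaces Jansson's additive, clipped update with a purely multiplicative one, so positivity is preserved without an explicit constraint. A minimal sketch of the standard update form (not the authors' code):

```python
import numpy as np

def gold_deconvolve(observed, psf, iters=100, eps=1e-12):
    """Gold's iterative ratio deconvolution (sketch).

    Update: o <- o * observed / (psf convolved with o). Because the
    update is a ratio of non-negative quantities, the estimate stays
    non-negative automatically.
    """
    o = observed.copy()
    for _ in range(iters):
        reblurred = np.convolve(o, psf, mode="same")
        o = o * observed / np.maximum(reblurred, eps)
    return o
```

The absence of a relaxation parameter is one reason convergence-rate comparisons against Jansson's procedure, as in this study, are of interest.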

3.
Summary. This paper describes a new method of smoothing noisy data, such as palaeomagnetic directions, in which the optimum degree of smoothing is determined objectively from the internal evidence of the data alone. As well as providing a best-fitting smooth curve, the method indicates, by means of confidence limits, which oscillations or fluctuations in the fitted curve are real. The procedure, which is illustrated by an analysis of palaeomagnetic declination directions from Lake Windermere, has potential applications throughout the Earth Sciences. It may be used in any investigation requiring the estimation of a smooth function from noisy data, provided certain basic assumptions are reasonably satisfied.
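An "objective" choice of smoothing in methods of this kind typically comes from cross-validation: each datum is predicted from its neighbours with that datum left out, and the smoothing level minimising the prediction error is selected. A toy sketch with a moving-average smoother (the paper itself uses a spline criterion; the window form here is an assumption for illustration):

```python
import numpy as np

def cv_half_width(y, widths):
    """Choose a moving-average half-width by ordinary cross-validation."""
    n = len(y)
    best_w, best_score = None, np.inf
    for w in widths:
        score = 0.0
        for i in range(n):
            lo, hi = max(0, i - w), min(n, i + w + 1)
            neigh = np.r_[y[lo:i], y[i + 1:hi]]  # leave y[i] itself out
            score += (y[i] - neigh.mean()) ** 2
        if score < best_score:
            best_w, best_score = w, score
    return best_w
```

The selected half-width balances undersmoothing (noise survives) against oversmoothing (real fluctuations are flattened), which is the same trade-off the paper's confidence limits quantify.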

4.
A theoretical model of conventional oil production has been developed. The model does not assume that Hubbert's bell curve, an asymmetric bell curve, or a reserve-to-production ratio method is correct, and does not use oil production data as an input. The theoretical model is in close agreement with actual production data until the 1979 oil crisis, with an R² value of greater than 0.98. Whilst the theoretical model indicates that an ideal production curve is slightly asymmetric, which differs from Hubbert's curve, the ideal model compares well with the Hubbert model, with R² values in excess of 0.95. Amending the theoretical model to take into account the 1979 oil crisis, and assuming the ultimately recoverable resources are in the range of 2-3 trillion barrels, the amended model predicts conventional oil production to peak between 2010 and 2025. The amended model, for the case when the ultimately recoverable resources are 2.2 trillion barrels, indicates that oil production peaks in 2013.
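For comparison, the symmetric Hubbert curve against which the theoretical model is benchmarked is the derivative of a logistic cumulative-production curve, with peak production rate URR·k/4 in the peak year. A sketch (the parameter values in the test are illustrative, not the paper's fit):

```python
import numpy as np

def hubbert_production(t, urr, k, t_peak):
    """Symmetric Hubbert production curve (sketch): derivative of the
    logistic cumulative production Q(t) = URR / (1 + exp(-k (t - t_peak)))."""
    u = np.exp(-k * (t - t_peak))
    return urr * k * u / (1.0 + u) ** 2
```

Integrating the curve over all time recovers the ultimately recoverable resources, and the peak rate equals `urr * k / 4`; the paper's point is that its independently derived ideal curve is close to, but slightly asymmetric about, this shape.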

5.
In this paper, the sparse data problem in neural network and geostatistical modeling for ore-grade estimation was addressed in the Nome offshore placer gold deposit. The problem of sparse data arises because of the random data division into training, validation, and test subsets during ore-grade modeling. In this regard, the possibility of generating statistically dissimilar data subsets by random data division was also explored through a simulation exercise. A combined approach of data segmentation and application of a Kohonen network was then used to solve the data division problem. Two neural networks and five kriging models were applied for grade modeling. The neural network was trained using an early stopping method. Performance evaluation of the models was carried out on the test data set. The study results indicated that all the models investigated in this study performed almost equally. It was also revealed that by using the secondary variable water-table depth the neural network and kriging models slightly improved their prediction precision. Further, the overall R² of the models was poor as a result of a high nugget (noisy) component in ore-grade variation.

6.
A seismogram that is several times the length of the source-receiver wavelet is windowed into two parts—these may overlap—to obtain two seismograms with approximately the same source function but different Green's functions. A similarly windowed synthetic seismogram gives two corresponding synthetic seismograms. The spectral product of the window 1 data with the window 2 synthetic is equal to the spectral product of the window 1 synthetic with the window 2 data only if the correct earth model is used to compute the synthetic. This partition principle is applied to well-log sonic waveform data from Ocean Drilling Program hole 806B, a slow formation, and used there to estimate Poisson's ratio from a single seismogram whose transmitter and receiver functions are unknown. A multichannel extension of the algorithm gives even better results. The effective borehole radius R_b was included in the inversion procedure because of waveform sensitivity to R_b. Inversion results for R_b agreed with the sonic caliper, but not the mechanical caliper; thus if R_b is not included in the inversion its value should be taken from the sonic caliper.

7.
Summary. King & Rees (1979), in commenting on our paper (Clark & Thompson 1978), have raised a number of subjects beyond the main theme of our paper. We described a statistical method of smoothing noisy data and illustrated its application using a set of previously published palaeomagnetic declination data. King & Rees acknowledge our success in achieving the main aim of our paper, namely the construction of objective confidence limits for the smooth curve presumed to underlie the data. However, they seem to question the value of our general technique of data-analysis when used to estimate past variations in the geomagnetic field. As they interpose in their comments on our statistical analyses references to 'techniques of sedimentary analysis', 'inclination errors', 'mode of acquisition of remanence' and 'stability tests', we take the opportunity to discuss these additional topics as well as those relating to our original paper.

8.
In this paper, we present a new approach to estimate high-resolution teleseismic receiver functions using a simultaneous iterative time-domain sparse deconvolution. This technique improves the deconvolution by using reweighting strategies based on a Cauchy criterion. The resulting sparse receiver functions enhance the primary converted phases and their multiples. To test its functionality and reliability, we applied this approach to synthetic experiments and to seismic data recorded at station ABU, in Japan. Our results show Ps conversions at approximately 4.0 s after the primary P onset, which are consistent with other seismological studies in this area. We demonstrate that the sparse deconvolution is a simple, efficient technique for computing receiver functions with significantly greater resolution than conventional approaches.
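Cauchy-criterion reweighting of this kind is usually implemented as iteratively reweighted least squares: small model coefficients receive weights near 1 and are damped, large ones receive weights near 0 and are left untouched, driving the solution toward isolated spikes. A minimal time-domain sketch of the generic IRLS form (not the authors' code; `lam` and `sigma` are assumed tuning parameters):

```python
import numpy as np

def cauchy_sparse_deconv(d, wavelet, lam=0.1, sigma=0.05, iters=10):
    """Time-domain sparse deconvolution with Cauchy reweighting (sketch).

    Repeatedly solves (G^T G + lam * W) m = G^T d, where G is the
    convolution matrix of the source wavelet and the diagonal weights
    W_ii = 1 / (1 + (m_i / sigma)^2) shrink small coefficients to zero.
    """
    n = len(d)
    G = np.zeros((n, n))
    for j in range(n):                    # column j: wavelet delayed by j
        seg = wavelet[: n - j]
        G[j : j + len(seg), j] = seg
    GtG, Gtd = G.T @ G, G.T @ d
    m = np.zeros(n)
    for _ in range(iters):
        w = 1.0 / (1.0 + (m / sigma) ** 2)
        m = np.linalg.solve(GtG + lam * np.diag(w), Gtd)
    return m
```

Applied to receiver functions, `d` would be the radial seismogram, `wavelet` an estimate of the vertical (source) component, and the recovered spikes the converted phases and their multiples.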

9.
Monitoring and predicting traffic conditions are of utmost importance in reacting to emergency events in time and for computing the real-time shortest travel-time path. Mobile sensors, such as GPS devices and smartphones, are useful for monitoring urban traffic due to their large coverage area and ease of deployment. Many researchers have employed such sensed data to model and predict traffic conditions. To do so, we first have to address the problem of associating GPS trajectories with the road network in a robust manner. Existing methods rely on point-by-point matching to map individual GPS points to a road segment. However, GPS data is imprecise due to noise in GPS signals. GPS coordinates can have errors of several meters and, therefore, direct mapping of individual points is error prone. Acknowledging that every GPS point is potentially noisy, we propose a radically different approach to overcome inaccuracy in GPS data. Instead of focusing on a point-by-point approach, our proposed method considers the set of relevant GPS points in a trajectory that can be mapped together to a road segment. This clustering approach gives us a macroscopic view of the GPS trajectories even under very noisy conditions. Our method clusters points based on the direction of movement as a spatial-linear cluster, ranks the possible route segments in the graph for each group, and searches for the best combination of segments as the overall path for the given set of GPS points. Through extensive experiments on both synthetic and real datasets, we demonstrate that, even with highly noisy GPS measurements, our proposed algorithm outperforms state-of-the-art methods in terms of both accuracy and computational cost.

10.
This paper presents a simple non-linear method of magnetotelluric inversion that accounts for the computation of depth averages of the electrical conductivity profile of the Earth. The method is not exact but it still preserves the non-linear character of the magnetotelluric inverse problem. The basic formula for the averages is derived from the well-known conductance equation, but instead of following the tradition of solving directly for conductivity, a solution is sought in terms of spatial averages of the conductivity distribution. Formulas for the variance and the resolution are then readily derived. In terms of Backus-Gilbert theory for linear appraisal, it is possible to inspect the classical trade-off curves between variance and resolution, but instead of resorting to linearized iterative methods the curves can be computed analytically. The stability of the averages naturally depends on their variance but this can be controlled at will. In general, the better the resolution the worse the variance. For the case of optimal resolution and worst variance, the formula for the averages reduces to the well-known Niblett-Bostick transformation. This explains why the transformation is unstable for noisy data. In this respect, the computation of averages leads naturally to a stable version of the Niblett-Bostick transformation. The performance of the method is illustrated with numerical experiments and applications to field data. These validate the formula as an approximate but useful tool for making inferences about the deep conductivity profile of the Earth, using no information or assumption other than the surface geophysical measurements.
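For reference, the classical Niblett-Bostick transformation mentioned above maps each period T of the apparent-resistivity curve to a depth and a resistivity estimate via the log-log slope m = d ln ρₐ / d ln T. A sketch in SI units (the finite-difference slope estimate is exactly where noise amplification, and hence the instability discussed above, enters):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum magnetic permeability, H/m

def niblett_bostick(period, rho_a):
    """Classical Niblett-Bostick transformation (sketch).

    depth h(T)  = sqrt(rho_a * T / (2 * pi * mu0))
    rho_NB(h)   = rho_a * (1 + m) / (1 - m),  m = d ln rho_a / d ln T
    Valid for |m| < 1; differentiating noisy rho_a is what makes the
    transformation unstable.
    """
    depth = np.sqrt(rho_a * period / (2.0 * np.pi * MU0))
    m = np.gradient(np.log(rho_a), np.log(period))
    return depth, rho_a * (1.0 + m) / (1.0 - m)
```

The paper's stabilized version replaces this point-wise slope with controlled depth averages, trading resolution for variance along the Backus-Gilbert trade-off curve.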

11.
Based on digital elevation model data and geological data, topographic profiles along the ridgelines and piedmont lines of the West Kunlun Mountains on the northwestern margin of the Qinghai-Tibet Plateau were first analysed together with the geological ages of the surface material, yielding data for five typical peaks from northwest to southeast: Kungai Shan, Muztagh Ata, Tashkuzuk Shan, Mushi Shan and Tokuzdaban Shan. Taking Kongur Shan as an example, a method for extracting representative topographic profile lines across a peak region was then explored. Finally, typical topographic profiles were extracted for the five peaks, the geological ages of the corresponding mountain material were analysed, and the topographic uplift rate of each peak was calculated for material of different geological ages. The results show that, among the five typical peaks from northwest to southeast, the uplift rate is larger at the two ends and relatively smaller in the middle, reaching a minimum at Tashkuzuk Shan, in an approximately 'V'-shaped pattern. From Kungai Shan in the northwest to Tokuzdaban Shan in the southeast, the number of geological ages represented in the mountain material is 3-4-5-4-3, first increasing and then decreasing in an 'A'-shaped pattern. The topographic uplift rate is therefore negatively correlated with the number of geological ages.

12.
A magnetotelluric (MT) sounding was carried out at a site in south-east Queensland, in the Clarence-Moreton Basin. The synoptic recordings were taken over a period of four months at sampling frequencies from 500 Hz to 5 × 10⁻⁵ Hz. The resulting data were analysed by the stationary cross-frequency and the Cone kernel time-frequency distribution (TFD) methods of MT analysis. The results were compared as apparent resistivities on a daily basis for frequencies above 1 Hz, as well as over all the available data. The TFD MT apparent-resistivity results were more stable and less noisy on a daily basis than the cross-frequency results. Similarly, the TFD analysis gave less noisy results than the cross-frequency analysis when all available data were processed. Application of these new non-stationary analysis techniques to MT processing should decrease the bias error problem of the MT methods and so increase the reliability and repeatability of MT soundings.

13.
We present the extension of stereotomography to P- and S-wave velocity estimation from PP- and PS-reflected/diffracted waves. In this new context, we greatly benefit from the use of locally coherent events by stereotomography. In particular, when applied to S-wave velocity estimation from PS-data, no pairing of PP- and PS-events is required a priori. In our procedure the P-wave velocity model is obtained first using stereotomography on PP-arrivals. Then the S-wave velocity model is obtained using PS-stereotomography on PS-arrivals, fixing the P-wave velocity model. We present an application to an 'ideal' synthetic data set demonstrating the relevance of the approach, which allows us to recover depth-consistent P- and S-wave velocity models even if no pairing of PP- and PS-events is introduced. Finally, results for a real data set from the Gulf of Mexico are presented, demonstrating the potential of the method in a noisy data context.

14.
Underway observations of sea-ice thickness were made with an EM31-ICE electromagnetic induction instrument, together with a laser rangefinder and sonar, during the Fourth Chinese National Arctic Research Expedition. Sea-ice thickness data were obtained after pre-processing, and the calibration parameters of the EM31-ICE were determined from ice-thickness samples collected at the long-term ice station. After analysing anomalous data and applying wavelet denoising and statistical processing, the final interpretation is as follows: on the outbound leg the sea-ice thickness was mainly 0.5-2.5 m, while on the return leg the ice showed a clear thinning tendency, mainly 0.5-2.0 m. Sea ice at high latitudes was markedly thicker than at low latitudes, and the thickness histograms at high latitudes often showed two peaks, whereas those at low latitudes usually showed one. Over the whole outbound and return voyage, less than 1% of the sea ice was thicker than 4 m.

15.
In this paper, modelled hydrological data are used to quantify the effects of regulation on the flow regime of the lower Murrumbidgee River in the period 1970–1998. Although other studies report historical changes in flood frequency and duration, this study uses modelled natural daily flow data rather than pre-regulation records or aggregated modelled monthly data. The comparison of modelled natural and regulated daily flows shows the magnitude of changes to mean and seasonal flows, flood peaks and flow duration. At gauges upstream of major irrigation off-takes, mean flows have been increased by approximately 10 per cent, flood peaks have been reduced by 21–46 per cent, and there has been a seasonal redistribution such that flows in summer and autumn have been increased at the expense of those in winter and spring. At gauges downstream of the major irrigation off-takes, mean flows have been reduced by 8–46 per cent, flood peaks have been reduced by 16–61 per cent, and flows have been decreased in all seasons.

16.
Summary. Phase velocity variations obtained in the previous paper are inverted by the Backus–Gilbert method for the velocity structure of the upper mantle. Spheroidal modes and toroidal modes in the period range of 125–260 s are used in the inversion. The data cannot constrain all six parameters in a transversely isotropic medium and we chose to perturb only two parameters, SH and SV velocities. SV velocities are resolved between the depths of about 200 and 400 km and SH velocities between 0 and 200 km. Resolution kernels have half-peak widths of about 200–300 km in depth, becoming broader for deeper target depths. SV velocity kernels show secondary peaks near the surface of the Earth, with widths varying from 50 to 100 km. The deeper the target depths, the wider the secondary peaks near the surface. SH velocity kernels do not possess such secondary peaks. The trade-off between SV and SH velocities is small. SV velocity is essentially determined by spheroidal modes and SH velocity by toroidal modes. Because of the broad width of the resolution kernels, the structure in the resolved region is difficult to detect from our data set; for example the differences in SV velocity structure between 250 and 350 km or the differences in SH velocity between 100 and 200 km are difficult to distinguish. Considering the horizontal resolution of about 2000 km, obtained in the previous paper, averaging kernels for 3-D structure are quite elongated in the horizontal dimension.

17.
Using EOS/MODIS clear-sky satellite data for June-September of 2002-2008, the snow-covered area, snow-cover fraction, snow depth and snow water storage of the Kumalak River basin during the snowmelt period were calculated and analysed. Using observations from meteorological and hydrological stations, the relationships between snow-cover variations and meteorological factors during 2002-2008 were examined, together with the effective duration of the maximum temperature and the effective influence time of 12 h precipitation during seven flood-peak periods in 2002-2008. The results show that high-temperature snowmelt played a clearly dominant role in the basin in midsummer during 2002-2008: when the mountain snow storage in the basin exceeded 5.5×10⁸ m³ and the mean height of the 0 ℃ level rose above 4 500 m and remained there for 4 days, the melt volume of snowmelt floods in the Kumalak River basin reached 1.8×10⁸-10.3×10⁸ m³, so the variation of the summer 0 ℃ level height can serve as a good indicator for forecasting snowmelt floods. Over this period the snow water equivalent produced by actual snowmelt was 9.88×10⁸ t, and the maximum possible snow water equivalent from complete melting was less than 11.18×10⁸ t; the theoretical maximum snow water equivalent from melted snow was 17.55×10⁸ t, and less than 17.75×10⁸ t from complete melting. Estimating the actual and theoretical snow water equivalents provides data support for estimating the maximum flood volume produced by snowmelt.

18.
A preliminary study of frequency-analysis methods for flood-tide water levels in tidal delta reaches
Based on the fact that the annual maximum flood-tide water levels in the tidal reaches of the Pearl River Delta show a long-term rising trend, a new method for frequency analysis of flood-tide water levels in estuarine tidal reaches is proposed. Results from the new method show that design flood-tide water levels of the same frequency differ between future decades and likewise exhibit a long-term trend. Taking the Denglongshan station as an example, the 100-year design flood-tide water levels for 2030 and 2050 computed with the new method are 3.12 m and 3.28 m respectively, 0.24 m and 0.40 m higher than the results of conventional frequency analysis. It is suggested that the relevant authorities, when setting flood-control standards for estuarine areas, make corresponding dynamic adjustments according to the long-term trend of flood-tide water levels, so as to adapt to actual changes in estuarine water levels.

19.
In order to control the air pollution caused by ships and improve ambient air quality, China set up three domestic emission control areas (DECAs) in 2015 in the Pearl River Delta, the Yangtze River Delta and Bohai Rim (Beijing-Tianjin-Hebei) waters. In order to meet the emission requirements established at the 70th meeting of the Marine Environment Protection Committee (MEPC), China intends to apply for the establishment of three international emission control areas (ECAs) in 2030 for these DECAs. This paper discusses existing technologies to reduce emissions of nitrogen oxides (NOx) and sulphur oxides (SOx), and examines the abatement costs for the shipping industry in the year 2030 to comply with this action. Based on an examination of the literature and data collected for this study, four traditional alternatives are analyzed: low-sulphur fuel, sulphur scrubbers/exhaust gas cleaning systems (EGCS), selective catalytic reduction (SCR), and exhaust gas recirculation. The analysis finds that switching to low-sulphur fuel is the best technical solution for SOx emission reduction, and the installation of SCR is the best technology for reducing nitrogen. In addition to traditional emission reduction technologies, the use of shore power facilities and liquefied natural gas (LNG), two alternatives welcomed by China's green shipping industry, are also considered in this paper. The expected average abatement costs of these six alternatives in the year 2030 are USD 2.866 billion, 0.324 billion, 1.071 billion, 0.402 billion, 0.232 billion and 0.34 billion, respectively.

20.
The Qiagong iron deposit is a newly discovered mine, but the geometry of its orebody (shape, size, burial depth, position, attitude and boundaries) has not yet been clearly established. To address these questions, the reduced-to-the-pole magnetic anomaly data were inverted by Euler deconvolution, giving orebody depths of 0-120 m; 2.5-D forward-fitting inversion along two profiles across the centre of the C-6 anomaly gave an orebody thickness of 20-30 m. The Euler deconvolution and 2.5-D inversion results agree with the drilling results from borehole ZK32. Finally, a geological-geophysical exploration model for the Qiagong skarn-type iron deposit was established, providing guidance for prospecting for concealed skarn-type iron deposits in this area.
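Euler deconvolution of the kind applied above rests on Euler's homogeneity equation, (x − x0)∂T/∂x + (z − z0)∂T/∂z = N(B − T), solved by least squares for the source position (x0, z0) and regional field B given a structural index N. A generic 2-D profile sketch (illustrative only, not the processing actually applied to the Qiagong data):

```python
import numpy as np

def euler_2d(x, T, Tx, Tz, N=2.0):
    """Euler deconvolution on a profile observed at z = 0 (sketch).

    Rearranging the homogeneity equation at z = 0 gives the linear system
        x0*Tx + z0*Tz + N*B = x*Tx + N*T,
    solved in the least-squares sense for (x0, z0, B).
    """
    A = np.column_stack([Tx, Tz, N * np.ones_like(x)])
    b = x * Tx + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, z0, B = sol
    return x0, z0, B
```

In practice the system is solved in sliding windows along the profile, and the cluster of (x0, z0) solutions outlines the source depth range, which is how depth estimates such as 0-120 m are obtained.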


Copyright©北京勤云科技发展有限公司  京ICP备09084417号