1.
With the increasing global distribution of high-rate dual-frequency Global Positioning System (GPS) receivers, real-time measurement of total electron content (TEC) has become a valuable input to the models used to assess GPS positioning accuracy and to characterize the ionosphere, plasmasphere, and troposphere. Historically, TEC measurements have been obtained through post-processing, which yields the data quality required by modeling applications with rigorous error-estimate requirements. Such procedures rely on collecting large volumes of data and on careful quality control and source selection to handle the various anomalies that arise in computing TEC, whereas real-time environments must instead rely on autonomous controls and filtering to prevent erroneous model results. In this paper we present methods for processing TEC in real time, including the use of an ionospheric model to perform automatic quality control on the TEC output, and computational techniques that address receiver multipath, faulty receiver observations, cycle slips, segmented processing, and receiver calibration. The resulting TEC measurements carry rigorous error estimates, validated against vertical TEC from the Jason satellite mission.
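The paper's quality-control pipeline is not reproduced here, but the observable it works from is standard. As a minimal sketch, assuming GPS L1/L2 signals and equal-length observable arrays, slant TEC can be formed from the geometry-free combination of dual-frequency pseudoranges (absolute but noisy) or carrier phases (precise but ambiguous up to a constant), with a crude jump test standing in for the cycle-slip screening described above; the 1 TECU threshold is a hypothetical placeholder, not the paper's value.

```python
import numpy as np

# GPS L1/L2 carrier frequencies (Hz)
F1, F2 = 1575.42e6, 1227.60e6
# Factor converting differential ionospheric delay (m) to electrons/m^2
K = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))

def slant_tec_from_pseudorange(p1, p2):
    """Absolute but noisy slant TEC (TECU) from dual-frequency pseudoranges (m)."""
    return K * (p2 - p1) / 1e16

def slant_tec_from_phase(l1_cycles, l2_cycles):
    """Precise but ambiguous (relative) slant TEC (TECU) from carrier phase."""
    c = 299792458.0
    lam1, lam2 = c / F1, c / F2
    return K * (l1_cycles * lam1 - l2_cycles * lam2) / 1e16

def detect_cycle_slips(phase_tec, threshold_tecu=1.0):
    """Flag epochs where the geometry-free phase TEC jumps by more than a
    plausible between-sample ionospheric change (hypothetical threshold)."""
    jumps = np.abs(np.diff(phase_tec))
    return np.where(jumps > threshold_tecu)[0] + 1
```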
2.
This paper describes how simplified auxiliary models (metamodels) can be used to create benchmarks for validating ship manoeuvring simulation models. A metamodel represents ship performance over a limited range of parameters, such as rudder angle and surge velocity. In contrast to traditional system-identification methods, metamodels are identified from multiple trial recordings, each containing the ship's inherent dynamics (similar across trials) together with random disturbances such as environmental effects and slightly different loading conditions. Metamodels can thus extract these essential dynamics where simple averaging of the trials is not possible. Moreover, metamodels are intended to represent a ship's behaviour, not to provide physical insight into ship dynamics. The trials used for identification can be drawn from in-service recorded data. Once identified, a metamodel is used to simulate trials that do not deviate substantially from the ship-state parameters used for identification, and its predictions are then compared with those of the manoeuvring simulation model under test. We present two case studies demonstrating the application of metamodels to moderate turning motions of two ships.
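The paper does not publish its metamodel structure, so the sketch below substitutes a common manoeuvring metamodel, the first-order Nomoto yaw model, to illustrate the pooled-identification idea: stacking several trial recordings into one least-squares regression so that trial-specific disturbances average out. The variable names and the finite-difference scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def identify_nomoto(trials, dt):
    """Least-squares fit of a first-order Nomoto yaw model
        r_dot = a*r + b*delta   (a = -1/T, b = K/T)
    pooled over several trial recordings (r: yaw rate [rad/s],
    delta: rudder angle [rad]), so random disturbances average out."""
    rows, rhs = [], []
    for r, delta in trials:
        r_dot = np.gradient(r, dt)          # finite-difference derivative
        rows.append(np.column_stack([r, delta]))
        rhs.append(r_dot)
    A, y = np.vstack(rows), np.concatenate(rhs)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    T, K = -1.0 / a, -b / a                 # back out the Nomoto constants
    return K, T

def simulate_nomoto(K, T, delta, r0, dt):
    """Forward-simulate the identified metamodel to benchmark a simulator."""
    r = np.empty_like(delta, dtype=float)
    r[0] = r0
    for k in range(len(delta) - 1):
        r[k + 1] = r[k] + dt * (K * delta[k] - r[k]) / T
    return r
```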
3.
A fixed link (tunnel and bridge, 16 km in total) was constructed between Sweden and Denmark during 1995-2000. As part of the work, approximately 16 million tonnes of seabed material (limestone and clay till) were dredged, of which about 0.6 million tonnes were spilled into the water. The spreading and sedimentation of the spilled sediments were modelled as part of the environmental monitoring of the construction activities. To verify the results of the numerical modelling, a new method was developed to distinguish the spilled sediments from the naturally occurring ones. Because the spilled sediments tend to accumulate on the seabed in areas whose natural sediments are of the same size, the two are difficult to separate on physical properties alone. The new method is instead based on geochemical differences between the natural sediments of the area and the spill. The basic properties used are the higher calcium carbonate content of the spill compared with the natural sediments, and the higher Ca/Sr ratio of the spill compared with the shell fragments that dominate natural calcium carbonate deposition in the area. These differences arise because carbonate derived from recent shell debris can be discriminated from the Danian limestone, in which most of the dredging took place, on the basis of the Ca/Sr ratio: 488 in the Danian limestone versus 237 in shell debris. The geochemical identification of sediment origin proved useful in separating the spilled from the naturally occurring sediments; without this separation, validation of the modelled accumulation of spilled sediments would not have been possible. The method is of general validity and can be applied in many situations where the origin of a given sediment is sought.
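A hedged sketch of the apportionment such endmember ratios enable, using only the two numbers stated in the abstract (Ca/Sr of 488 and 237). Note that the mixing line is linear in Sr-weighted fractions, not mass fractions; converting to mass fractions would additionally require the absolute Sr contents of the two endmembers, which the abstract does not give.

```python
R_LIMESTONE = 488.0   # Ca/Sr in Danian limestone (from the study)
R_SHELL = 237.0       # Ca/Sr in recent shell debris (from the study)

def limestone_fraction(ca_sr_measured):
    """Fraction of the carbonate-bound Sr contributed by the dredging spill,
    from a two-endmember mixing line between the limestone and shell-debris
    Ca/Sr ratios; a proxy for the spill contribution to the carbonate."""
    x = (ca_sr_measured - R_SHELL) / (R_LIMESTONE - R_SHELL)
    return min(max(x, 0.0), 1.0)   # clamp to the physically meaningful range

# Example: a sample with Ca/Sr = 400 plots ~65% of the way toward limestone.
print(limestone_fraction(400.0))   # -> 0.649...
```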
4.
The application of numerical models to the simulation of coastal hydro- and sediment dynamics requires model verification, calibration, and validation against field data. Yet no commonly accepted rules for the evaluation of sediment transport models exist. This paper discusses the significance of statistical parameters and their limitations in the presence of the time lags common in tidal environments. It is shown that the occasionally used discrepancy ratio lacks quantitative and qualitative information on model performance, since information on time-series characteristics is lost along with the time context. As an initial measure of association, the simple linear correlation coefficient r² is proposed. To account for time-lag errors in suspended-transport models, a separate cross-correlation analysis for the flood and ebb tidal phases is proposed. For comparison with other model applications, a concluding rating of model performance can be expressed by a dimensionless error definition that takes the quality of the field data into account.
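A minimal sketch of the proposed diagnostic, assuming equal-length, uniformly sampled observed and modelled concentration series: cross-correlation estimates the time lag, and r² is reported for the lag-corrected pair. Applying it separately to contiguous flood windows and ebb windows of the record reproduces the phase-wise analysis suggested above; the function itself is generic, not the paper's exact procedure.

```python
import numpy as np

def lag_and_r2(obs, mod, dt):
    """Estimate the time lag between observed and modelled series by
    cross-correlation (positive lag: the model leads the observations),
    then report r^2 for the lag-corrected pair."""
    o = obs - obs.mean()
    m = mod - mod.mean()
    xc = np.correlate(o, m, mode="full")
    k = int(np.argmax(xc)) - (len(o) - 1)      # lag in samples
    if k >= 0:                                  # align the two series
        oa, ma = obs[k:], mod[:len(mod) - k]
    else:
        oa, ma = obs[:len(obs) + k], mod[-k:]
    r = np.corrcoef(oa, ma)[0, 1]
    return k * dt, r ** 2
```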
5.
Geospatial technologies and digital data have developed and spread rapidly alongside increasing computing efficiency and Internet availability. The ability to store and transmit large datasets has encouraged the development of national infrastructure datasets in geospatial formats. National datasets are used by numerous agencies for analysis and modeling because they are standardized and considered to be of acceptable accuracy for national-scale applications. At Oak Ridge National Laboratory, a population model has been developed that incorporates national schools data as one of its inputs. This paper evaluates the spatial and attribute inaccuracies present in two national school datasets, Tele Atlas North America and the National Center for Education Statistics (NCES). Schools are an important component of the population model because they are spatially dense clusters of vulnerable populations, so it is essential to validate the quality of the school input data. Schools were also chosen because a validated schools dataset was available in geospatial format for Philadelphia County, enabling comparison between a local dataset and the national ones. The analyses found that the national datasets are neither standardized nor complete, containing 76 to 90 percent of existing schools. Annual enrollment values were also poorly maintained: 89 percent were inaccurate for 2003. Spatial rectification was required for 87 percent of the NCES points, with 58 percent of those errors attributable to the geocoding process. Finally, combining the two national datasets produced a more useful and accurate result.
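The paper's matching procedure is not described in the abstract; as one plausible sketch of how completeness figures like "76 to 90 percent" can be computed, the snippet below counts validated local school points that have a national-dataset point within a distance threshold. The 250 m threshold and the use of projected coordinates are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def completeness(reference_xy, national_xy, max_dist=250.0):
    """Share of reference (validated local) schools with a national-dataset
    point within max_dist (projected units, e.g. metres): a crude proxy
    for dataset completeness."""
    tree = cKDTree(np.asarray(national_xy))
    d, _ = tree.query(np.asarray(reference_xy), k=1)  # nearest-neighbour distances
    return float(np.mean(d <= max_dist))
```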
6.
In this paper we use a combination of numerical modeling and data analysis to gain a better understanding of the major characteristics of the circulation in the East Frisian Wadden Sea. In particular, we concentrate on the asymmetry of the tidal wave and its modulation in the coastal area, which results in a complex pattern of responses to the sea-level forcing from the North Sea. The numerical simulations are based on the 3-D primitive-equation General Estuarine Transport Model (GETM) with a horizontal resolution of 200 m and terrain-following vertical coordinates. The model is forced at its open boundaries with sea-level data from an operational model for the German Bight (German Hydrographic Office). The validation data for the simulations include time series of tide-gauge data and surface currents measured at a pile in the back-barrier basin of the island of Langeoog, as well as several ADCP transects in the Accumer Ee tidal inlet. Circulation and turbulence characteristics are investigated for typical spring- and neap-tide situations, with the analysis focused on the dominant temporal and spatial patterns. By investigating the response of five back-barrier basins with rather different morphologies to external forcing, an attempt is made to elucidate the physical balances controlling the circulation in the individual sub-basins. It is demonstrated that friction at the seabed tends to slow the tidal signal in shallow water, which establishes flood dominance in the shallow sea north of the barrier islands. South of the islands, where the water volume of the channels at low tide is smaller than the tidal prism, the asymmetry of the tidal signal shifts towards ebb dominance, a feature particularly pronounced at spring tide. At the northern open boundary, the tidal wave propagating from west to east generates a sea-level difference of ~1 m along the boundary and thereby drives vigorous alongshore currents. Frictional control in the model is located in the inlets as well as along the northern boundary. The correlation between velocity and turbulent kinetic energy favours the establishment of a net southward transport, giving theoretical support to the observed accumulation of sediments on the intertidal flats. Weak turbulence along the northern shores of the barrier islands and the small magnitude of the residual currents there promote accumulation of suspended matter in these areas, although wave action will generally counteract this effect.
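Flood/ebb dominance of the kind discussed above is commonly diagnosed from the asymmetry of an along-channel velocity record. The sketch below illustrates such a diagnostic under the assumptions that flood currents are positive and the record covers whole tidal cycles; it is an illustration of the concept, not the paper's analysis.

```python
import numpy as np

def tidal_asymmetry(u):
    """Simple flood/ebb dominance diagnostics for an along-channel velocity
    record u (flood positive): ratios of peak and mean speeds, plus the
    velocity skewness often used as an asymmetry indicator."""
    flood, ebb = u[u > 0], -u[u < 0]
    peak_ratio = flood.max() / ebb.max()
    mean_ratio = flood.mean() / ebb.mean()
    du = u - u.mean()
    skew = np.mean(du**3) / np.mean(du**2) ** 1.5
    # ratios > 1 (or skew > 0) indicate flood dominance, < 1 ebb dominance
    return peak_ratio, mean_ratio, skew
```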
7.
This paper analyzes 30-year integration results from a new version of the global ocean-atmosphere-land climate system model (GOALS/LASG), formed by coupling the latest global atmospheric general circulation spectral model (R42L9) developed at LASG, Institute of Atmospheric Physics, Chinese Academy of Sciences, with a global ocean general circulation model (T63L30). Through comparison with multiple observational datasets, we discuss the interannual variability of equatorial Pacific sea surface temperature (SST) and its zonal propagation, the teleconnections between SST anomalies in the eastern equatorial Pacific and SST variations over other ocean basins, and the interannual variability of upper-layer ocean temperature in the equatorial Pacific. The results show that the GOALS model reproduces the irregular interannual variability of equatorial Pacific SST anomalies; the westward propagation of SST anomalies in the eastern equatorial Pacific; the eastward and upward propagation of mixed-layer temperature variations in the equatorial Pacific; and observed features such as the negative correlation of eastern equatorial Pacific SST with SST in the western equatorial and southwestern Pacific, and its positive teleconnection with SST in the southern Indian Ocean and subtropical Atlantic. However, the GOALS model also has clear deficiencies: the amplitude of interannual SST variability in the eastern and central equatorial Pacific is markedly too small, and the model fails to reproduce the observed feature that SST variability is stronger in the eastern than in the central equatorial Pacific; the westward propagation of equatorial Pacific SST is much slower than observed, while the eastward propagation of mixed-layer temperature extremes is much faster; and the model fails to reproduce the negative correlation between eastern equatorial Pacific SST and northwestern Pacific SST, as well as the positive correlation with northern Indian Ocean temperature, which in turn limits its ability to simulate the interannual variability of precipitation over South and Southeast Asia.
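Teleconnection statements like those above typically rest on lead-lag correlations between SST anomaly indices. A minimal sketch, assuming two equal-length monthly anomaly series (for example a Niño-3 index against a southern Indian Ocean index); this is a generic diagnostic, not the paper's code.

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation between two monthly SST anomaly indices over a
    range of leads and lags; positive lag means y trails x by `lag` months."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out
```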
8.
Fuzzy set map comparison offers a novel approach to map comparison. The approach is aimed specifically at categorical raster maps and applies fuzzy set techniques, accounting for fuzziness of location and fuzziness of category, to create a similarity map as well as an overall similarity statistic: the Fuzzy Kappa. To date, the calculation of the Fuzzy Kappa (or K-fuzzy) had not been formally derived, and the documented procedure was valid only for cases without fuzziness of category; furthermore, it required an infinitely large, edgeless map. This paper presents the full derivation of the Fuzzy Kappa: the method is now valid for comparisons considering fuzziness of both location and category, and requires no further assumptions. This theoretical completion opens opportunities for the technique that surpass its original intentions. In particular, the categorical similarity matrix can be applied to highlight or disregard differences pertaining to selected categories or groups of categories, and to distinguish between differences due to omission and commission.
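As a hedged illustration of the underlying comparison (not the paper's derivation of the Fuzzy Kappa itself), the sketch below computes a two-way similarity map with fuzziness of location only, using an exponential distance decay. The halving distance, search radius, and crisp category match are assumptions; a categorical similarity matrix would replace the `==` test to add fuzziness of category.

```python
import numpy as np

def one_way_similarity(a, b, halving=2.0, radius=4):
    """For each cell of map a, the best category match found in map b within
    `radius` cells, weighted by an exponential distance decay 2**(-d/halving)."""
    rows, cols = a.shape
    sim = np.zeros((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            best = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        d = np.hypot(di, dj)
                        if d <= radius and a[i, j] == b[ii, jj]:
                            best = max(best, 2.0 ** (-d / halving))
            sim[i, j] = best
    return sim

def fuzzy_similarity_map(a, b, **kw):
    """Two-way comparison: the per-cell minimum of the two one-way maps."""
    return np.minimum(one_way_similarity(a, b, **kw),
                      one_way_similarity(b, a, **kw))
```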
9.
The development of instrumental analytics such as LC-MS/MS has made it possible to determine many component concentrations quickly in a single chromatogram. However, the validation of such multi-methods requires new strategies for robustness testing and optimization. Statistically designed analytical experiments are one tool that can meet this requirement. A central composite design (CCD) was used for the validation of an LC-MS/MS multi-method covering 84 analytes. The experimental design comprised six design variables and two non-design (response) variables. Concentration, ionization temperature, dwell time, gradient, eluent flow, and spraying/curtain gas (continuous design variables) were varied on five levels; the whole design encompassed 91 runs. To investigate the robustness of an LC-MS/MS method, both peak sensitivity and chromatographic separation must be verified, hence the two response variables: the distribution of the peaks over the analysis time described the quality of the chromatographic separation, and sensitivity was described by the signal-to-noise ratio (S/N). The measured data were evaluated with analysis of variance (ANOVA) and response surface methodology (RSM). Three significant main effects (concentration, ionization temperature, dwell time) and no significant interaction effects were found for the response variable S/N; the remaining design variables (gradient, eluent flow, and spraying/curtain gas) had no significant effects on S/N. The ANOVA of the chromatographic-separation response likewise yielded no significant effects. Robustness of the method can therefore be guaranteed with respect to all non-significant design variables.
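For readers unfamiliar with the design, a full CCD in coded units is straightforward to construct; the sketch below is generic, not the paper's actual run plan. Incidentally, a full CCD in six factors with 15 centre points gives 2^6 + 12 + 15 = 91 runs, which is consistent with the run count stated above, though the abstract does not confirm that breakdown.

```python
import itertools
import numpy as np

def central_composite_design(k, n_center=6, alpha=None):
    """Coded design matrix of a full central composite design in k factors:
    2**k factorial corner points (+/-1), 2k axial points (+/-alpha) and
    n_center centre points, giving the five levels per factor noted above.
    alpha defaults to the rotatable choice (2**k)**0.25."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

print(central_composite_design(3).shape)  # (20, 3): 8 corner + 6 axial + 6 centre
```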
10.
Land and Sea Surface Temperatures (LST and SST) are both recognized as Essential Climate Variables and are routinely retrieved by a wealth of satellites. However, for validated approaches, the latest data are usually not available to the general public. We bridge this gap using the Meteosat Second Generation (MSG) Spinning Enhanced Visible and InfraRed Imager (SEVIRI), with its 15 min temporal resolution. Here we present generic algorithms for the retrieval of both LST and SST, valid for the SEVIRI instrument onboard MSG platforms 8-11, which we validate using hourly data from 4 ground stations and 11 buoys in Spain over the years 2015 to 2018. These validations show that under the best conditions of surface homogeneity (cloud-free summer nights), the errors in our LST estimates are below 1.5 K for stations with good thermal homogeneity. Comparison with the LSA-SAF (Land Surface Analysis - Satellite Application Facility) LST shows differences below 2 K over most of the SEVIRI disk, with larger differences in arid areas and during daytime. For the SST retrieval, the average error amounts to 0.67 K against cloud-free buoy data. These algorithms have been implemented in a near-real-time processing chain that delivers updated LST and SST maps every 15 min, within 5 min of image reception. The maps, along with other products, can be freely consulted on a dedicated webpage (https://www.uv.es/iplsat).
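The paper's coefficient sets are not reproduced here. As a sketch of the generalized split-window form widely used for sensors with 10.8 µm and 12.0 µm channels (SEVIRI among them), with all coefficients left as inputs that would have to come from fits against radiative-transfer simulations:

```python
def split_window_lst(t108, t120, emis_mean, emis_diff, coeffs):
    """Generalized split-window retrieval:
        LST = c0 + (a1 + a2*(1-e)/e + a3*de/e**2) * (T10.8 + T12.0)/2
                 + (b1 + b2*(1-e)/e + b3*de/e**2) * (T10.8 - T12.0)/2
    where e is the mean channel emissivity and de the emissivity difference.
    `coeffs` = (c0, a1, a2, a3, b1, b2, b3) must come from a prior fit; the
    values used in the paper are not reproduced here."""
    c0, a1, a2, a3, b1, b2, b3 = coeffs
    e, de = emis_mean, emis_diff
    mean_t = (t108 + t120) / 2.0
    diff_t = (t108 - t120) / 2.0
    return (c0
            + (a1 + a2 * (1 - e) / e + a3 * de / e**2) * mean_t
            + (b1 + b2 * (1 - e) / e + b3 * de / e**2) * diff_t)
```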