Similar Literature
20 similar documents found.
1.
Development, Publication, and Sharing of Geoscience Data Products   Cited: 6 (self-citations: 1, other citations: 5)
Data are the foundation of scientific research, and data sharing maximizes the value derived from data; data themselves are the most basic element of data sharing. Geoscience data are characterized by their spatial, integrative, temporal, massive, and multi-source nature. Attribute data, remote sensing data, and vector data are important sources for producing geoscience data products, and the spatialization of attribute data is a key technique in their production. A geoscience data sharing and publication platform should provide basic functions such as user management, data catalog queries, metadata management, data query and browsing, and data download. Advancing scientific data sharing requires supporting policy measures and the establishment of a fair and reasonable evaluation system for data work.

2.
The Open-source Project for a Network Data Access Protocol (OPeNDAP) software framework has evolved over the last 10 years to become a robust, high-performance, service-oriented architecture for the access and transport of scientific data from a broad variety of disciplines over the Internet. Starting with version 4.0, the server release, Hyrax, has at its core the Back-End Server (BES). The BES offers ease of programming, extensibility, and reliability, allowing added functionality, especially related to high-performance data grids, that goes beyond its original goal of distributed data access. We present the fundamentals of the BES server as a component-based architecture, as well as our experiences using the BES to distribute data processing in Grid-oriented, intensive parallel computational tasks.
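As an illustration of the kind of remote access described above, the following is a minimal client-side sketch using the third-party pydap library; the endpoint URL and variable name are hypothetical, and this shows generic OPeNDAP usage rather than anything BES-specific.

```python
# A hedged sketch of OPeNDAP-style remote data access with pydap.
# The URL and the variable name "SST" are illustrative assumptions.
from pydap.client import open_url

# Opening the URL transfers only metadata, not the data itself.
dataset = open_url("http://example.org/opendap/data/nc/sst_monthly.nc")
print(list(dataset.keys()))        # variables advertised by the server

# Slicing is translated into a DAP constraint expression, so only the
# requested subset travels over the network; this is the core idea of
# distributed data access.
sst = dataset["SST"]
subset = sst[0, 0:10, 0:10]        # server-side subsetting
```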

3.
The current availability of thousands of processors at many high performance computing centers has made it feasible to carry out, in near real time, interactive visualization of 3D mantle convection temperature fields, using grid configurations having 10–100 million unknowns. We describe the technical details involved in carrying out this endeavor, using the facilities available at the Laboratory of Computational Science and Engineering (LCSE) at the University of Minnesota. These technical details involve the modification of a parallel mantle convection program, ACuTEMan; the use of client–server socket-based programs to transfer upwards of a terabyte of time-series scientific model data over a local network; a rendering system containing multiple nodes; a high-resolution PowerWall display; and the interactive visualization software, DSCVR. We have found that working in an interactive visualization mode allows for fast and efficient analysis of mantle convection results. Electronic supplementary material: The online version of this article (doi:) contains supplementary material, which is available to authorized users.

4.
The practice of conducting quality control and quality assurance in the construction of data sets is an often overlooked and underestimated task in many Earth Science research projects. The development of software to effectively process and quickly analyze measurements is a critical aspect of a research project. An evolutionary approach has been used at the University of North Dakota to develop and implement software to process and analyze airborne measurements. Development over the past eight years has resulted in a collection of software named the Airborne Data Processing and Analysis (ADPAA) package, which has been published as an open source project on SourceForge. The ADPAA package is intended to fully automate data processing while incorporating the concepts of missing value codes and levels of data processing. At each data level, ADPAA uses a standard ASCII file format to store measurements from individual instruments in separate files. After all data levels have been processed, a summary file containing parameters of scientific interest for the field project is created for each aircraft flight. All project information is organized into a standard directory structure. ADPAA contains several tools that facilitate quality control procedures conducted on instruments during field projects and laboratory testing. Each quality control procedure is designed to ensure proper instrument performance and hence the validity of the instrument's measurements. Data processing by ADPAA allows edit files to be created that are automatically used to insert missing value codes into time periods with instrument problems. The creation of edit files is typically done after the completion of a field project, when scientists are performing quality assurance of the data set. Since data processing is automatic, preliminary data can be created and analyzed within hours of an aircraft flight, and a complete field project data set can be reprocessed many times during the quality assurance process. Once a final data set has been created, ADPAA provides several tools for visualization and analysis. In addition to aircraft data, ADPAA can be used on any data set based on time series measurements. The concepts illustrated by ADPAA, and components of ADPAA such as the Cplot visualization tool, are applicable to areas of Earth Science that work with time series measurements.
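To make the edit-file mechanism concrete, here is a minimal, self-contained sketch of the idea: time spans flagged during quality assurance are overwritten with a missing value code on reprocessing. The function names, span format, and the -9999 code are assumptions for illustration; ADPAA's actual conventions may differ.

```python
# A hedged sketch of inserting missing value codes from QA "edit files".
import numpy as np

MISSING = -9999.0  # assumed missing value code

def apply_edits(time, values, edit_spans):
    """Insert the missing value code wherever time falls in a flagged span."""
    values = values.copy()
    for start, end in edit_spans:            # spans taken from an edit file
        values[(time >= start) & (time <= end)] = MISSING
    return values

time = np.arange(0.0, 10.0, 1.0)             # seconds since takeoff (toy data)
temp = np.linspace(15.0, 5.0, time.size)     # one instrument channel
edits = [(3.0, 5.0)]                         # instrument problem from t=3 to t=5
print(apply_edits(time, temp, edits))
```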

5.
Estimation of Pearson's correlation coefficient between two time series, in the evaluation of the influence of one time-dependent variable on another, is an often-used statistical method in the climate sciences. Data properties common to climate time series, namely non-normal distributional shape, serial correlation, and small data sizes, call for advanced, robust methods to estimate accurate confidence intervals to support the correlation point estimate. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, Math Geol 35(6):651–665, 2003), whose main intention is to obtain accurate confidence intervals for correlation coefficients between two time series by taking the serial dependence of the data-generating process into account. However, Monte Carlo experiments show that the coverage accuracy of the confidence intervals for smaller data sizes can be substantially improved. In the present paper, the existing program is adapted into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that performs a second bootstrap loop (it resamples from the bootstrap resamples). Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. Pairwise moving block bootstrap resampling is used to preserve the serial dependence of both time series. The calibration is applied to standard error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence interval is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration. The coverage accuracy is evidently better for the calibrated confidence intervals, with the coverage error already acceptably small (within a few percentage points) for data sizes as small as 20.
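The building block that PearsonT3 calibrates is a pairwise moving block bootstrap of the correlation coefficient. The sketch below shows that building block with a simple percentile interval; the paper itself calibrates standard error-based Student's t intervals with a second bootstrap loop, and the block length and toy data here are assumptions.

```python
# A hedged sketch of the pairwise moving block bootstrap (MBB) for a
# Pearson correlation confidence interval.
import numpy as np

rng = np.random.default_rng(42)

def mbb_corr_ci(x, y, block_len=5, n_boot=2000, alpha=0.05):
    n = len(x)
    starts_max = n - block_len + 1
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        # x and y are resampled with the SAME blocks (pairwise), which
        # preserves both their cross-correlation and serial dependence.
        starts = rng.integers(0, starts_max, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(r_boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.corrcoef(x, y)[0, 1], (lo, hi)

# Toy serially dependent pair sharing a common signal.
n = 200
common = np.cumsum(rng.normal(size=n)) * 0.1
x = common + rng.normal(size=n)
y = common + rng.normal(size=n)
print(mbb_corr_ci(x, y))
```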

6.
Data management is key to geotechnical risk management and disaster prevention, providing the right information at the right time and place. It supports regular construction cycles as well as the handling of exceptional situations that may occur during execution stages, where detailed knowledge of the actual state of construction is especially important. The web-based client–server software platform DoMaMoS was developed to cover all of these aspects in a new fashion. The main parts of the software are a graphical user interface, an SQL database, and a controller application. Software development focused on user-friendly handling, geotechnical monitoring, security aspects, rapid access, and adaptability during a running project. The basic ideas and main features of the developed software are described and a practical application is shown.

7.
This paper describes the DUST-2 (Data Utilization Software Tools) software, which provides a means of visualizing and processing ozone and water vapor data of the Earth's atmosphere as measured by satellite instruments (TOMS, GOME, MAS) and provided by different data centers. In addition, a new search tool (the S4 tool), which allows searching for comparable ozone and water vapor data in four dimensions (location and time) within the DUST-2 database, and an hdf2csv conversion tool are presented. The software package, together with complementary information and data examples, is published on CD-ROM ("Data Utilization Software Tools — 2", DUST-2, Hartmann et al. 2000) under ISBN 3-9804862-3-0, which is available via www.copernicus.org.
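A four-dimensional search of the kind the S4 tool performs can be sketched as a tolerance-box query over (latitude, longitude, altitude, time); the record layout, field names, and tolerances below are illustrative assumptions, not the DUST-2 implementation.

```python
# A hedged sketch of a 4D (location + time) match-up query.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Record:
    lat: float
    lon: float
    alt_km: float
    time: datetime
    ozone_du: float

def s4_query(records, lat, lon, alt, time, dlat=2.0, dlon=2.0, dalt=5.0, dt_hours=24):
    """Return records within a tolerance box around the query point."""
    return [r for r in records
            if abs(r.lat - lat) <= dlat and abs(r.lon - lon) <= dlon
            and abs(r.alt_km - alt) <= dalt
            and abs((r.time - time).total_seconds()) <= dt_hours * 3600]

recs = [Record(52.0, 13.4, 20.0, datetime(1999, 6, 1, 12), 310.0),
        Record(10.0, 100.0, 25.0, datetime(1999, 6, 3, 0), 280.0)]
print(s4_query(recs, lat=51.0, lon=14.0, alt=18.0, time=datetime(1999, 6, 1)))
```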

8.
Autoplot is software developed for the Virtual Observatories in Heliophysics to provide intelligent and automated plotting capabilities for many typical data products that are stored in a variety of file formats or databases. Autoplot has proven to be a flexible tool for exploring, accessing, and viewing data resources as typically found on the web, usually in the form of a directory containing data files with multiple parameters in each file. Data from a data source are abstracted into a common internal data model called QDataSet. Autoplot is built from individually useful components that can be extended and reused to create specialized data handling and analysis applications, and it is being used in a variety of science visualization and analysis applications. Although originally developed for viewing heliophysics-related time series and spectrograms, its flexible and generic data representation model makes it potentially useful for the Earth sciences.

9.
The relation between rainfall and the discharge from two springs, located at the base of different karst massifs in southern Italy, is investigated by cross-correlation analyses. Data are derived from a continuous time window of 13 years. The input signal comprises multiple rainfall time series (cumulative rainfall over varying time windows), while the time series of daily spring discharges are used as the output signal. Analyses were first conducted on the unprocessed data and then on data from which linear trends and seasonal components had been removed, the latter by spectral analysis. The analyses contributed to the investigation of the time required for water to flow through the karst aquifers at the two sites. Long time intervals of cumulative rainfall (>60 days) appear to be the main component affecting the spring discharge hydrographs; shorter time intervals seem to be related to quick-flow paths. Some statistics on the linear regression and the meaning of the cross-correlation analysis are discussed. Cross-correlation analysis can provide strong support for identifying the main rainfall contribution and the travel time through the main infiltration pathways in aquifers.
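The workflow above can be sketched as follows: build cumulative-rainfall input series over several window lengths and cross-correlate each with the daily discharge series. The window lengths, maximum lag, and toy data are assumptions for illustration.

```python
# A hedged sketch of cumulative-rainfall vs. spring-discharge cross-correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 365 * 3
rain = rng.gamma(shape=0.3, scale=8.0, size=n)        # daily rainfall (mm)
# Toy discharge: slow exponential response to rainfall plus noise.
kernel = np.exp(-np.arange(120) / 40.0)
discharge = np.convolve(rain, kernel)[:n] + rng.normal(0, 1, n)

def cross_corr(x, y, max_lag):
    """Pearson r between x(t) and y(t + lag) for lag = 0..max_lag."""
    return [np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]
            for lag in range(1, max_lag + 1)]

for window in (10, 30, 60, 90):                       # cumulative windows (days)
    cum_rain = np.convolve(rain, np.ones(window))[:n]
    cc = cross_corr(cum_rain, discharge, max_lag=30)
    print(f"window {window:3d} d: peak r = {max(cc):.2f} "
          f"at lag {int(np.argmax(cc)) + 1} d")
```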

10.
This paper introduces the main components of a model library working platform: (1) porting and optimizing programs that compute 2D, 2.5D, and 3D magnetic field distributions on planes and profiles over various undulating terrains; (2) porting and optimizing a suite of potential-field transformation routines for profiles and planes; (3) porting and developing several practical software tools for the direct inversion of magnetic parameters; and (4) designing software testing methods and procedures. It discusses the methods and key techniques for developing, with Microsoft C 7.0 and Microsoft SDK 3.1, a model library platform featuring good human-computer interaction and a Chinese-character Windows interface. Graphic output, storage, and editing have been preliminarily implemented; data formats for the various method software have been preliminarily established, and a large number of software interfaces are provided for incorporating further method software. Using the platform, a rich magnetic model library storing 736 typical mining-area examples has been built, providing interpreters of magnetic survey data with a rapid interpretation tool and offering a quality evaluation basis for current magnetic survey data software.

11.
In this study, we present the analysis and forecasting of Caspian Sea level pattern anomalies based on about 15 years of Topex/Poseidon and Jason-1 altimetry data covering 1993–2008; these missions were originally developed and optimized for open oceans but have considerable capability to monitor inland water level changes. Since the altimetric measurements form a large dataset that is complicated to use directly for our purposes, principal component analysis is adopted to reduce the complexity of the time series analysis. Furthermore, an autoregressive integrated moving average (ARIMA) model is applied for further analyzing and forecasting the time series. The ARIMA model is applied here to the 1993–2006 time series of first principal component scores (sPC1). Subsequently, the remaining sPC1 data are used for verification of the model prediction results. According to our analysis, an ARIMA(1,1,0)(0,1,1) model was found to be the optimal representative model, capable of reasonably predicting the pattern of Caspian Sea level anomalies. Analysis of the sPC1 time series reveals that the evolution of the Caspian Sea level pattern can be subdivided into five phases with dissimilar rates of rise and fall over the 15-year time span.
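A minimal sketch of the two-stage approach, on synthetic data: compress gridded anomalies with principal component analysis, then fit the reported ARIMA(1,1,0)(0,1,1) model (assumed here to have a 12-month seasonal period) to the leading component scores. The grid size, toy data, and train/test split are assumptions.

```python
# A hedged sketch of PCA + seasonal ARIMA forecasting of sPC1.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n_months, n_grid = 180, 500                    # 15 years x spatial grid points
t = np.arange(n_months)
signal = 20 * np.sin(2 * np.pi * t / 12) - 0.05 * t   # seasonal cycle + trend
anomalies = (np.outer(signal, rng.normal(1, 0.2, n_grid))
             + rng.normal(0, 5, (n_months, n_grid)))

# PCA via SVD of the centered data matrix; U[:, 0] * S[0] gives the sPC1 scores.
centered = anomalies - anomalies.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
spc1 = U[:, 0] * S[0]

# Fit on the first 14 years, verify on the held-out remainder.
train, test = spc1[:168], spc1[168:]
model = SARIMAX(train, order=(1, 1, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast = model.forecast(steps=len(test))
print("RMSE on held-out year:", np.sqrt(np.mean((forecast - test) ** 2)))
```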

12.
The differential rotation of the solar corona is studied using the brightness of the Fe XIV 530.3 nm green coronal line collected over 5.5 solar-activity cycles. The total observed velocity of the coronal rotation is analyzed as a superposition of two modes, fast and slow. A technique is proposed for separating two data series that compose the initial data set and correspond to the two differential-rotation modes of the solar corona. The first series is obtained by averaging the initial data set over six successive Carrington rotations; this series corresponds to long-lived, large-scale coronal regions. The second series is the difference between the initial data and the averaged series, and corresponds to the relatively quickly varying coronal component. The coronal rotation derived from the first series coincides with the fast mode detected earlier using the initial data set; i.e., the synodic period of this mode is 27 days at the equator, then weakly increases with latitude, slightly exceeding 28 days at high latitudes. The second series describes a slow rotation with a synodic period of about 34 days. This coincides with the period of rotation of the high-latitude corona derived by M. Waldmeier for polar faculae. We expect that coronal objects corresponding to the fast mode are associated with magnetic fields on scales typical of large activity complexes. The slow mode may be associated with weak fields on small scales.
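The series-separation technique can be sketched with a running mean: averaging over six successive rotations isolates the long-lived, large-scale component (the fast rotation mode), and the residual carries the quickly varying component (the slow mode). The use of a centered running mean and the toy data are assumptions.

```python
# A hedged sketch of the two-series decomposition of a brightness record.
import numpy as np

rng = np.random.default_rng(2)
n_rot = 720                                  # ~5 activity cycles of rotations
brightness = (np.sin(2 * np.pi * np.arange(n_rot) / 147)  # ~11-yr cycle
              + 0.3 * rng.normal(size=n_rot))             # short-lived features

window = 6                                   # six Carrington rotations
kernel = np.ones(window) / window
averaged = np.convolve(brightness, kernel, mode="same")   # long-lived component
residual = brightness - averaged                          # quickly varying part

print("std of averaged / residual series:",
      averaged.std().round(3), residual.std().round(3))
```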

13.
Predictive modeling of hydrological time series is essential for groundwater resource development and management. Here, we examine the comparative merits and demerits of three modern soft computing techniques, namely artificial neural networks optimized by scaled conjugate gradient (ANN.SCG), Bayesian neural networks optimized by SCG with evidence approximation (BNN.SCG), and the adaptive neuro-fuzzy inference system (ANFIS), in the predictive modeling of groundwater level fluctuations. As a first step of our analysis, a sensitivity analysis was carried out using an automatic relevance determination scheme to examine the relative influence of each hydro-meteorological attribute on groundwater level fluctuations. Second, a stability analysis was carried out by perturbing the underlying data sets with different levels of correlated red noise. Finally, guided by the ensuing theoretical experiments, the above techniques were applied to model the groundwater level fluctuation time series of six wells from a hard rock area of Dindigul in Southern India. We used four standard quantitative statistical measures to compare the robustness of the different models: (1) root mean square error, (2) reduction of error, (3) index of agreement (IA), and (4) Pearson's correlation coefficient (R). Based on the above analyses, the ANFIS model performed better in modeling noise-free data than the BNN.SCG and ANN.SCG models. However, when modeling hydrological time series contaminated with a significant amount of red noise, the BNN.SCG models performed better than both the ANFIS and ANN.SCG models. Hence, appropriate care should be taken in selecting a suitable methodology for modeling complex and noisy hydrological time series. These results may be used to constrain models of groundwater level fluctuations, which would, in turn, facilitate the development and implementation of more effective and sustainable groundwater management and planning strategies in the semi-arid hard rock area of Dindigul, Southern India, and similar regions.
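For reference, the four skill measures can be sketched using their common textbook definitions; the exact formulations used in the paper may differ (for example, the choice of reference mean in the reduction-of-error score).

```python
# A hedged sketch of the four model-comparison measures named above.
import numpy as np

def skill_scores(obs, pred, ref_mean=None):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    # Reduction of error: skill relative to predicting a reference (e.g.
    # calibration-period) mean; positive values beat that baseline.
    ref = obs.mean() if ref_mean is None else ref_mean
    re = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - ref) ** 2)
    # Willmott's index of agreement, bounded on [0, 1].
    ia = 1.0 - (np.sum((obs - pred) ** 2)
                / np.sum((np.abs(pred - obs.mean())
                          + np.abs(obs - obs.mean())) ** 2))
    r = np.corrcoef(obs, pred)[0, 1]
    return {"RMSE": rmse, "RE": re, "IA": ia, "R": r}

obs = np.array([2.1, 2.5, 3.0, 2.8, 3.4])    # toy groundwater levels (m)
pred = np.array([2.0, 2.6, 2.9, 3.0, 3.3])
print(skill_scores(obs, pred))
```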

14.
Oil formation volume factor (FVF) is the relative change in oil volume between reservoir conditions and standard surface conditions. FVF, always greater than one, is dominated by reservoir temperature, the amount of dissolved gas in the oil, and the specific gravity of the oil and the dissolved gas. In addition to the limitations on reliable sampling, experimental determination of FVF is costly and time-consuming. Therefore, this study proposes a novel approach based on a hybrid genetic algorithm-pattern search (GA-PS) optimized neural network (NN) for fast, accurate, and cheap determination of oil FVF from available measured pressure-volume-temperature (PVT) data. In contrast to a traditional neural network, which is in danger of becoming stuck in local minima, the GA-PS optimized NN is able to escape local minima and converge toward the global minimum. A group of 342 data points was used for model construction and a group of 219 data points was employed for model assessment. Results indicated the superiority of the GA-PS optimized NN over the traditional NN. Oil FVF values determined by the GA-PS optimized NN were in good agreement with the measured data.
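The hybrid optimization can be sketched as follows: a genetic algorithm explores the network weight space globally, and a compass-style pattern search then refines the best individual locally. The network size, GA settings, and the toy stand-in for the PVT data are all assumptions.

```python
# A hedged sketch of GA + pattern search training of a tiny neural network.
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data standing in for PVT inputs -> FVF.
X = rng.uniform(0, 1, (100, 3))
y = 1.0 + 0.4 * X[:, 0] + 0.2 * np.sin(3 * X[:, 1]) + 0.1 * X[:, 2]

N_W = 3 * 4 + 4 + 4 + 1          # weights and biases of a 3-4-1 network

def predict(w, X):
    W1, b1 = w[:12].reshape(3, 4), w[12:16]
    W2, b2 = w[16:20], w[20]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# --- GA stage: global exploration of the weight space ---
pop = rng.normal(0.0, 1.0, (60, N_W))
for gen in range(150):
    fitness = np.array([mse(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]          # keep the 10 best
    children = []
    for _ in range(50):
        a, b = elite[rng.integers(0, 10, 2)]       # two elite parents
        alpha = rng.uniform(0.0, 1.0, N_W)
        child = alpha * a + (1.0 - alpha) * b      # blend crossover
        child += rng.normal(0.0, 0.1, N_W)         # Gaussian mutation
        children.append(child)
    pop = np.vstack([elite, np.asarray(children)])

best = min(pop, key=mse)

# --- PS stage: compass-style pattern search refines the GA solution ---
step = 0.1
while step > 1e-4:
    improved = False
    for i in range(N_W):
        for d in (step, -step):
            trial = best.copy()
            trial[i] += d
            if mse(trial) < mse(best):
                best, improved = trial, True
    if not improved:
        step *= 0.5                                # shrink the mesh
print("final training MSE:", mse(best))
```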

15.
New Progress in Web-based Rock and Mineral Data Processing Software   Cited: 1 (self-citations: 0, other citations: 1)
Web-based (World Wide Web) rock and mineral data processing software works in the browser-server mode: the user's browser serves as the client-side computing platform, while the application programs and databases run on the server. This paper reviews the current status and development trends of rock and mineral data processing software, presents a successful example of a Web-based rock data processing program, and discusses the implementation methods and advantages of developing such software on the Web platform, thereby demonstrating the necessity and feasibility of building Web-based rock and mineral data processing software.

16.
杨之江, 扈震, 常晓婕. 地球科学 (Earth Science), 2010, 35(3): 475-479
To improve the stability, ease of development, and extensibility of a geographic information system (GIS) server, a plugin-based, multi-process GIS Server solution is proposed. The server model consists of a process scheduling and management module plus execution processes, and uses resource allocation to handle concurrent operations by multiple users. By function, the server model is divided into five modules: a service layer, a GIS operation layer, a user management layer, the execution processes, and a map document layer. For the implementation, the functional workflow is decomposed according to how the server responds to requests, and a plugin-oriented design is proposed that keeps the functional components relatively independent of the server core, as sketched below. The model achieves load balancing and runs more stably, and developing application functionality with plugin and Web Service techniques is simpler and more efficient.
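The "functions as plugins" idea translates naturally into a dispatch-table design: each capability registers itself under a service name and the server core stays ignorant of concrete features. The sketch below is an illustrative assumption, not the actual GIS Server implementation.

```python
# A hedged sketch of plugin registration and dispatch in a server core.
from typing import Callable, Dict

PLUGINS: Dict[str, Callable[[dict], dict]] = {}

def plugin(name: str):
    """Decorator registering a request handler under a service name."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        PLUGINS[name] = fn
        return fn
    return register

@plugin("map.render")
def render_map(params: dict) -> dict:
    return {"status": "ok", "tile": f"rendered {params.get('bbox')}"}

@plugin("feature.query")
def query_features(params: dict) -> dict:
    return {"status": "ok", "features": []}

def handle_request(service: str, params: dict) -> dict:
    """The server core: look up the plugin and delegate to it."""
    if service not in PLUGINS:
        return {"status": "error", "message": f"unknown service {service!r}"}
    return PLUGINS[service](params)

print(handle_request("map.render", {"bbox": (114.0, 30.0, 115.0, 31.0)}))
```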

17.

Various methods have been used to model the time-varying curves within global positioning system (GPS) position time series. However, very few consider the level of noise a priori before the seasonal curves are estimated. This study is the first to consider the Wiener filter (WF), already used in geodesy to denoise gravity records, for modeling the seasonal signals in GPS position time series. To model the time-varying part of the signal, a first-order autoregressive process is employed. The WF is then adapted to the noise level of the data so that only the significant time variabilities are modeled. Synthetic and real GPS data are used to demonstrate that this variation of the WF leaves the underlying noise properties intact and provides optimal modeling of seasonal signals. This methodology is referred to as the adaptive WF (AWF) and is both easy to implement and fast, owing to the use of the fast Fourier transform.
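The Wiener-filter idea behind the AWF can be sketched in the frequency domain: each spectral component is scaled by an estimated signal-to-(signal+noise) ratio, so seasonal peaks standing clearly above the noise floor pass through while noise-level variability is suppressed. The median-periodogram noise estimate and toy data are simplifying assumptions; the paper additionally models the time-varying part with a first-order autoregressive process.

```python
# A hedged sketch of frequency-domain Wiener filtering of a GPS-like series.
import numpy as np

rng = np.random.default_rng(4)
n = 365 * 8                                  # eight years of daily positions
t = np.arange(n)
seasonal = (3.0 * np.sin(2 * np.pi * t / 365.25)     # annual term (mm)
            + 1.0 * np.sin(4 * np.pi * t / 365.25))  # semi-annual term
series = seasonal + rng.normal(0, 2.0, n)            # plus white noise

spec = np.fft.rfft(series)
power = np.abs(spec) ** 2
noise_floor = np.median(power)               # crude white-noise level estimate
gain = np.maximum(power - noise_floor, 0.0) / power  # Wiener gain S/(S+N)
modeled = np.fft.irfft(gain * spec, n)       # estimated seasonal signal

residual = series - modeled
print("RMS before / after:", series.std().round(2), residual.std().round(2))
```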

18.
GIS Design for the Database of the Three-Dimensional Lithospheric Structure of China   Cited: 6 (self-citations: 0, other citations: 6)
Data are the foundation of a geographic information system, and database construction is the key and most important step in building a GIS. The spatial database is modeled on the ArcSDE GeoDatabase structure and stored on a database server; users access it through the ArcSDE spatial data engine, which facilitates data sharing and concurrent multi-user operation. ArcObjects, developed by ESRI, is an object-oriented geographic data model and a highly integrated library of software components that fully complies with the COM standard. On the server side, SQL Server 2000 plus ArcSDE 8.1 serves as the spatial database platform, while the client-side management system is developed with ArcObjects and VB6.

19.
Karst database development in Minnesota: design and data assembly   Cited: 1 (self-citations: 0, other citations: 1)
The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to provide comprehensive metadata for all karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces.

20.
吴信才, 徐世武, 万波, 吴亮. 地球科学 (Earth Science), 2014, 39(2): 221-226
With the emergence and deepening of the cloud GIS concept, and following the C/S structure of local area network software and the B/S structure of Internet software, a new-generation software structure suited to cloud computing and cloud services is proposed: the T-C-V (terminal-cloud-virtual) structure, elaborated here in terms of its basic concepts, overall architecture, and core technologies. In this structure, the V layer, built on virtualized hardware and software at the bottom, shields the heterogeneity of different computers, networks, and storage devices and provides a unified, efficient runtime environment for the upper layers; the C layer is the management and service framework for massive geographic data, services, and resources; and the T layer serves information users in government, enterprises, and the public, providing standard access interfaces on which various terminal applications are built. The T-C-V structure is expected to change the service, computing, and business models of geographic information, enabling better interaction and more transparent creation of applications for the public and for enterprises.
