Similar Literature
18 similar documents retrieved (search time: 93 ms)
1.
The growth of data volumes in science is reaching epidemic proportions. Consequently, the status of data-oriented science as a research methodology needs to be elevated to that of the more established scientific approaches of experimentation, theoretical modeling, and simulation. Data-oriented scientific discovery is sometimes referred to as the new science of X-Informatics, where X refers to any science (e.g., Bio-, Geo-, Astro-) and informatics refers to the discipline of organizing, describing, accessing, integrating, mining, and analyzing diverse data resources for scientific discovery. Many scientific disciplines are developing formal sub-disciplines that are information-rich and data-based, to such an extent that these are now stand-alone research and academic programs recognized on their own merits. These disciplines include bioinformatics and geoinformatics, and will soon include astroinformatics. We introduce Astroinformatics, the new data-oriented approach to 21st century astronomy research and education. In astronomy, petascale sky surveys will soon challenge our traditional research approaches and will radically transform how we train the next generation of astronomers, whose experiences with data are now increasingly virtual (through online databases) rather than physical (through trips to mountaintop observatories). We describe Astroinformatics as a rigorous approach to these challenges. We also describe initiatives in science education (not only in astronomy) through which students are trained to access large distributed data repositories, to conduct meaningful scientific inquiries into the data, to mine and analyze the data, and to make data-driven scientific discoveries. These are essential skills for all 21st century scientists, particularly in astronomy, as major new multi-wavelength sky surveys (that produce petascale databases and image archives) and grand-scale simulations (that generate enormous outputs for model universes, such as the Millennium Simulation) become core research components for a significant fraction of astronomical researchers.
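As a concrete illustration of the kind of programmatic access such training emphasizes, the sketch below queries a virtual-observatory TAP service with ADQL; the endpoint URL and the table and column names are hypothetical placeholders, not any particular survey's schema.

```python
# Minimal sketch: querying an IVOA TAP service synchronously with ADQL.
# The endpoint and table/column names are illustrative placeholders.
import requests

TAP_SYNC = "https://example.org/tap/sync"  # hypothetical TAP endpoint

query = """
SELECT TOP 100 ra, dec, mag
FROM survey.sources
WHERE mag < 18
"""

resp = requests.get(TAP_SYNC, params={
    "REQUEST": "doQuery",   # standard TAP synchronous-query parameters
    "LANG": "ADQL",
    "FORMAT": "csv",
    "QUERY": query,
})
resp.raise_for_status()
print(resp.text.splitlines()[:5])  # header plus first result rows
```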

2.
3.
Within the TERENO initiative, four terrestrial observatories that collect huge amounts of environmental data have been under construction since 2008. To manage, describe, exchange, and publish these data, the distributed Spatial Data Infrastructure TEODOOR (http://www.tereno.net) was created. Each institution responsible for an individual observatory sets up its own local data infrastructure; these infrastructures communicate with each other to exchange data and metadata internally and expose them to the public through OGC-compliant Web services. The TEODOOR data portal serves as a database node, providing scientists and decision makers with reliable and easily accessible data and data products. Tools such as hierarchical search and Web-GIS functions allow deeper insight into the different observatories, test sites, and sensor networks. Sensor data can be queried by measured parameter, station, and/or time period, and can be visualized and downloaded according to a shared TERENO data policy. Currently, TEODOOR provides free access to data from more than 500 monitoring stations.
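To illustrate the OGC-compliant access described above, here is a minimal sketch of a Sensor Observation Service (SOS) 2.0 GetObservation request; the endpoint, offering, and observed-property URI are hypothetical, not TEODOOR's actual identifiers.

```python
# Sketch of an OGC SOS 2.0 KVP GetObservation request for sensor data,
# filtered by offering, observed property, and time period.
import requests

SOS_URL = "https://example.org/sos"  # hypothetical SOS endpoint

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "soil_moisture_network",                       # illustrative
    "observedProperty": "http://example.org/props/soil_moisture",
    # standard SOS 2.0 temporal filter: phenomenon time interval
    "temporalFilter": "om:phenomenonTime,2020-01-01T00:00:00Z/2020-01-31T23:59:59Z",
}

resp = requests.get(SOS_URL, params=params)
resp.raise_for_status()
print(resp.text[:500])  # O&M/WaterML XML response
```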

4.
Scientific data are strategic resources, and aggregating scientific data is an important way to secure an upstream, competitive position in scientific data. It is challenging to grasp the international landscape and the regularities governing how scientific data are aggregated, and to identify aggregation modes and methods suited to China's national conditions. This paper investigated and analyzed the modes of scientific data aggregation at home and abroad from the viewpoints of international organizations, international scientific programs, government agencies, and professional data centers. Five modes of scientific data aggregation were summarized: scientific research projects converging data to designated data centers/repositories; scientific research projects dispersing data to data centers/repositories; individual scientists submitting datasets to data centers/repositories along with published papers; scientific research projects or individual scientists sharing directories/networks and big-data computing/processing platforms; and citizen-science models of open, public convergence. Each mode is analyzed together with corresponding cases. On this basis, the paper puts forward six suggestions for the effective aggregation of scientific data in China: implementing the "Measures for the Management of Scientific Data"; certifying data aggregation centers; coupling scientific data collection with journal publishing; building data aggregation networks; aggregating international resources; and constructing the whole data aggregation chain.

5.
Virtual observatories were introduced by the astrophysics community as an environment that connects distributed data sources under a unified interface. The heliophysics community soon recognized that it faced a similar problem, namely many distributed data sets with varying amounts of information about them, and several discipline-specific virtual observatories have been established. Two of them, the Virtual Heliospheric Observatory (VHO) and the Virtual Magnetospheric Observatory (VMO), share a common architecture design, with development efforts oriented towards structured data search. This paper describes the VHO/VMO middleware and its components, from metadata preparation and processing to the user interface.

6.
The need for a unified and improved data access system for the nation’s vast hydrologic data holdings has grown over the past few years as researchers strive to better understand the human impact on the nation’s water cycle. Large mission-oriented data repositories such as the USGS National Water Information System (NWIS) and the EPA Storage and Retrieval System (EPA STORET) provide a substantial amount of the nationwide coverage; however, they differ regionally in coverage (parameters) and geospatial data density. Beyond these differences in geographic distribution, repositories tend to undergo changes in mission, and their data collection activities therefore have different foci that shift as time progresses. This paper places the two water information systems next to each other to work out their differences in coverage and content, and to show how they complement each other when overlaid. This is done using several CUAHSI Hydrologic Information System components, namely the web-service suite WaterOneFlow, which permits interrogation of the available data content of a national water metadata catalogue into which these two information systems have been integrated.
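A minimal sketch of a WaterOneFlow call of the kind described above, using the zeep SOAP library; the WSDL URL and the site and variable codes are illustrative placeholders rather than real service identifiers, and the parameter names assume the WaterOneFlow 1.1 GetValues interface.

```python
# Sketch: retrieving a WaterML time series from a WaterOneFlow SOAP service.
from zeep import Client

WSDL = "https://example.org/waterservice/cuahsi_1_1.asmx?WSDL"  # hypothetical

client = Client(WSDL)
waterml = client.service.GetValues(
    location="NWIS:01646500",   # site code (illustrative)
    variable="NWIS:00060",      # variable code, e.g. discharge (illustrative)
    startDate="2020-01-01",
    endDate="2020-01-07",
    authToken="",
)
print(waterml[:500])  # WaterML XML document as a string
```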

7.
The world has entered an era of data-intensive scientific research based on big data, in which interoperable data, information, systems, and spatial infrastructures are essential to Earth-science research as a whole and to meeting major societal challenges. The 34th International Geological Congress, held in Brisbane, Australia, in August 2012, brought together leading scientists from around the world to discuss the progress, achievements, and development trends of the geoscience information field. The congress showed that modern technologies such as computing, databases, networking, and virtualization have been applied in depth across many geoscience disciplines; that geoscience information products and services have become the mainstream channel through which countries provide public-benefit services in the information age; and that geoscience information is gradually breaking through the limits of isolated topics, regions, and countries, with cross-disciplinary, cross-border, and even cross-continental data sharing steadily approaching consensus. Geoscience information scientists worldwide are now working in this common direction, so that geoscience knowledge can serve a changing Earth rapidly, conveniently, and efficiently.

8.
We describe the Russian Virtual Observatory (RVO), a prestigious international project sponsored by the Russian Academy of Sciences (RAS). In 2001, the RAS Scientific Council on Astronomy included this project in a list of the most important international projects of the RAS. The project's main goal is to create and develop the RVO, intended to provide Russian astronomers with direct and effective access to worldwide astronomical data resources. The RVO is one component of the International Virtual Observatory (IVO), a system in which vast astronomical archives and databases around the world, together with analysis tools and computational services, are linked into an integrated facility. The IVO unites all important national and international virtual observatory projects, coordinated by the International Virtual Observatory Alliance. The RVO is one of the organizers of, and an important participant in, the IVO Alliance.

9.
The recent Heliophysics Virtual Observatory (VxO) effort involves the development of separate observatories with low overlap in physical domain or area of scientific specialization, but a high degree of overlap in metadata management needs. VxOware is a content and metadata management system. While it is intended specifically for use by a VxO, it can be used by any entity that manages structured metadata. VxOware has many features of a content management system and makes extensive use of the W3C recommendations for XML (Extensible Markup Language), XQuery (XML Query), and XSLT (Extensible Stylesheet Language Transformations). Its features include system and user administration, search, user-editable content, version tracking, and a wiki. Besides virtual observatories, the intended user base of VxOware includes groups or instrument teams that have developed a directory structure of data files and would like to make these data, and their associated metadata, available in the virtual observatory network. One of the most powerful features of VxOware is the ability to link any type of object in the observatory to other objects, and the ability to tag every object.
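To illustrate the XML/XSLT machinery mentioned above, the sketch below renders an invented metadata record to HTML with an XSLT stylesheet via lxml; the record structure is a hypothetical SPASE-style example, not VxOware's actual schema.

```python
# Sketch: transforming a metadata record into an HTML view with XSLT.
from lxml import etree

record = etree.XML(b"""
<Instrument>
  <ResourceID>spase://Example/Instrument/MAG-1</ResourceID>
  <Name>Fluxgate Magnetometer</Name>
</Instrument>
""")

stylesheet = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Instrument">
    <html><body>
      <h1><xsl:value-of select="Name"/></h1>
      <p><xsl:value-of select="ResourceID"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)   # compile the stylesheet
print(str(transform(record)))        # serialized HTML output
```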

10.
Amira is a powerful three-dimensional visualization package that has recently been employed by the science and engineering communities to gain insight into their data. We discuss a new approach to the use of Amira in the Earth sciences that relies on the client-server paradigm. We have developed a module called WEB-IS2, which provides web-based access to Amira. This tool allows Earth scientists to manipulate Amira controls remotely and to analyze, render, and view large datasets over the Internet regardless of time or location. This could have important ramifications for GRID computing. Electronic supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s10069-003-0013-y.
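A hedged sketch of the client-server interaction described above: a thin client sends viewing parameters and receives a server-rendered frame over HTTP. The endpoint URL and parameter names are hypothetical, not the actual WEB-IS2 API.

```python
# Sketch: requesting a remotely rendered view of a large dataset.
import requests

resp = requests.get(
    "https://example.org/webis2/render",   # hypothetical rendering endpoint
    params={"dataset": "mantle_convection", "azimuth": 45, "elevation": 30},
)
resp.raise_for_status()
with open("frame.png", "wb") as f:
    f.write(resp.content)   # image rendered server-side, viewed locally
```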

11.
The EU-funded SIMDAT project aims to apply generic grid technology to complex application problems in several representative fields, including automotive, aerospace, pharmaceutical, and meteorological applications. To satisfy the requirements of the World Meteorological Organization (WMO) and the WMO Information System (WIS), the partners in the meteorology activity within SIMDAT (ECMWF, Deutscher Wetterdienst, the UK Met Office, EUMETSAT and Météo-France) have developed grid-enabled software that provides generic access to distributed meteorological data repositories via web-based portals, through a series of nodes organized in a mesh network. However, granting access to such an infrastructure, especially given its fully distributed nature, is a serious challenge and a risk to the security of the overall grid infrastructure. SIMDAT addresses this problem with a security model based on a decentralized, fine-grained access control mechanism that federates data providers and their security policies using the notion of "trust domains". In this paper we highlight the main features of the SIMDAT grid application and describe its security model in detail.
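The sketch below illustrates the general idea of trust-domain-based, fine-grained access control; it is an invented toy model with placeholder dataset and domain names, not the SIMDAT implementation.

```python
# Toy model: each data provider federates a set of trust domains per dataset
# and evaluates access requests locally (decentralized access control).
TRUST_DOMAINS = {
    "ecmwf-archive": {"wmo-members", "research-partners"},
    "radar-composites": {"wmo-members"},
}

def may_access(dataset: str, user_domains: set[str]) -> bool:
    """Grant access if the user belongs to any trust domain the
    provider has federated for this dataset."""
    return bool(TRUST_DOMAINS.get(dataset, set()) & user_domains)

print(may_access("radar-composites", {"research-partners"}))  # False
print(may_access("ecmwf-archive", {"research-partners"}))     # True
```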

12.
Many scientific applications have high performance computing (HPC) demands. Such demands are traditionally met by cluster- or Grid-based systems. Cloud computing, which has experienced tremendous growth, emerged as an approach to provide on-demand access to computing resources. The cloud computing paradigm offers a number of advantages over other distributed platforms: for example, access to resources is flexible and cost-effective, since it is not necessary to invest a large amount of money in a computing infrastructure or to pay for its maintenance. The possibility of using cloud computing for running high performance computing applications is therefore attractive. However, it has been shown elsewhere that current cloud computing platforms are not well suited to some of these applications, since the performance they offer is very poor. The reason is mainly the overhead of virtualisation, which most cloud platforms use extensively to optimise resource usage. Furthermore, running HPC applications on current cloud platforms is a complex task that in many cases requires configuring a cluster of virtual machines (VMs). In this paper, we present a lightweight virtualisation approach for efficiently running the Weather Research and Forecasting (WRF) model (a computing- and communication-intensive application) in a cloud computing environment. Our approach also provides a higher-level programming model that automates the process of configuring a cluster of VMs. We assume such a cloud environment can be shared with other types of HPC applications, such as mpiBLAST (an embarrassingly parallel application) and MiniFE (a memory-intensive application). Our experimental results show that lightweight virtualisation imposes only about 5% overhead and substantially outperforms traditional heavyweight virtualisation such as KVM.
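The sketch below shows how a virtualisation-overhead figure such as the roughly 5% quoted above can be computed from wall-clock timings; the timing values are placeholders, not measured results.

```python
# Sketch: relative overhead of a virtualised run versus a native run.
def overhead_percent(t_native: float, t_virtualised: float) -> float:
    """Relative slowdown of the virtualised run over the native run."""
    return 100.0 * (t_virtualised - t_native) / t_native

t_native = 3600.0     # native run wall-clock time, seconds (illustrative)
t_container = 3780.0  # lightweight-virtualised run, seconds (illustrative)
print(f"overhead: {overhead_percent(t_native, t_container):.1f} %")  # 5.0 %
```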

13.
Starting from the governing equations, with their definite conditions, for the coupled thermo-hydro-mechanical (THM) dissipative processes in the fractured rock mass of a nuclear waste repository, and in the context of underground nuclear waste storage, the boundary and initial conditions of the thermal, hydraulic, and mechanical fields are analyzed. Using the weighted residual method of computational mechanics, the weighted integral equations of the boundary/initial value problem are derived, laying a theoretical foundation for coupled THM finite element computations of the rock surrounding a nuclear waste repository.
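As a generic illustration of the weighted residual step (the notation below is standard finite element notation, not necessarily the paper's):

```latex
% Generic weighted-residual statement for the coupled THM boundary/initial
% value problem: for each governing equation (thermal T, hydraulic H,
% mechanical M) with residual R_i, the weighted integral over the domain
% \Omega must vanish for all admissible weight functions w.
\int_{\Omega} w \, R_i(T, p, \mathbf{u}) \, \mathrm{d}\Omega = 0,
\qquad i \in \{T, H, M\}
```

Here T is temperature, p is fluid pressure, and u is the displacement field; discretizing them with finite element shape functions and taking the weights w from the same basis (the Galerkin choice) turns these integral statements into the coupled THM algebraic system the abstract refers to.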

14.
Scientific visualization is an integral part of the modeling workflow, enabling researchers to understand complex or large data sets and simulation results. A high-resolution stereoscopic virtual reality (VR) environment further extends the possibilities of visualization. Such an environment also supports collaboration in work groups that include people from different backgrounds, and allows the results of research projects to be presented to stakeholders or the public. The computing equipment driving the VR environment requires specialized software applications that can be run in parallel on a set of interconnected machines. Another challenge is to devise a useful data workflow from the source data sets onto the display system. We therefore develop software applications such as the OpenGeoSys Data Explorer, custom data conversion tools for established visualization packages such as ParaView and the Visualization Toolkit (VTK), and presentation and interaction techniques for 3D applications such as Unity. We demonstrate our workflow by presenting visualization results for case studies from a broad range of applications. An outlook on how visualization techniques can be deeply integrated into the simulation process is given, and future technical improvements such as a simplified hardware setup are outlined.
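As an illustration of the conversion step in such a workflow, the sketch below writes a synthetic scalar field to a legacy-format VTK file that ParaView or other VTK-based viewers can load; the data are random placeholders, not any of the case studies above.

```python
# Sketch: exporting a gridded scalar field as a legacy ASCII VTK file.
import numpy as np

nx, ny, nz = 10, 10, 5
field = np.random.rand(nz, ny, nx)  # synthetic scalar data

with open("field.vtk", "w") as f:
    f.write("# vtk DataFile Version 3.0\n")
    f.write("synthetic scalar field\n")
    f.write("ASCII\n")
    f.write("DATASET STRUCTURED_POINTS\n")
    f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
    f.write("ORIGIN 0 0 0\n")
    f.write("SPACING 1 1 1\n")
    f.write(f"POINT_DATA {nx * ny * nz}\n")
    f.write("SCALARS value float 1\n")
    f.write("LOOKUP_TABLE default\n")
    for v in field.ravel():        # x varies fastest, per legacy VTK order
        f.write(f"{v:.6f}\n")
```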

15.
The scope of the FUNMIG Integrated Project (IP) was to improve the knowledge base on biogeochemical processes in the geosphere that are relevant to the safety of radioactive waste repositories. An important part of this project involved interaction between data producers (research) and data users (radioactive waste management organisations in Europe). The aim was to foster the benefits of the research work for performance assessment (PA) and, in a broader sense, for the safety case of radioactive waste repositories. For this purpose a specifically adapted procedure was elaborated: relevant features, events and processes (FEPs) for the three host rock types, clay, crystalline rock, and salt, were taken from internationally accepted catalogues and mapped onto each of the 108 research tasks conducted during the FUNMIG project by a standardised procedure. The main outcome was a host-rock-specific tool (the Task Evaluation Table) in which the relevance and benefits of the research results were evaluated from both the PA and the research perspective. Virtually all data generated within FUNMIG relate to the safety-relevant FEP groups "transport mechanisms" and "retardation".
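The sketch below illustrates, with invented task identifiers, FEP names, and scores, the kind of FEP-to-task mapping a Task Evaluation Table encodes; it is an illustration, not FUNMIG's actual table.

```python
# Toy Task Evaluation Table: each research task is mapped to host rock,
# the FEPs it addresses, and a (hypothetical) PA-relevance score.
tasks = [
    {"task": "T-042", "host_rock": "clay",
     "feps": ["retardation/sorption", "transport/diffusion"], "pa_relevance": 3},
    {"task": "T-087", "host_rock": "crystalline",
     "feps": ["transport/colloids"], "pa_relevance": 2},
]

# Which tasks feed the "retardation" FEP group for clay host rock?
hits = [t["task"] for t in tasks
        if t["host_rock"] == "clay"
        and any(f.startswith("retardation") for f in t["feps"])]
print(hits)  # ['T-042']
```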

16.
Feng Fei (冯飞), Cheng Yao (程耀), Jiao Xuming (焦旭明), Cheng Hao (程浩), Wang Deli (王德利). World Geology (《世界地质》), 2016, 35(4): 1109-1118
Because OBC (ocean bottom cable) data are acquired with receiver cables laid on the seafloor and shots fired at the sea surface, conventional internal-multiple suppression methods no longer apply, which greatly complicates the processing of field data. This paper applies seismic interferometric wavefield reconstruction to the acquired OBC data to obtain virtual shot gathers in which both sources and receivers lie on the seafloor. As with conventional gathers excited and recorded at the surface, this converts the internal multiples in the OBC data into conventional surface-related multiples and internal multiples, which can then be predicted. The results are compared with internal-multiple removal by the common focus point (CFP) boundary algorithm, demonstrating the accuracy of interferometric reconstruction for predicting internal multiples in OBC data.
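The sketch below illustrates the core interferometric operation: cross-correlating two seafloor recordings and stacking over surface shots to synthesize a virtual seafloor-to-seafloor trace. The input data are synthetic noise standing in for real OBC gathers.

```python
# Sketch: interferometric redatuming by cross-correlation and stacking.
import numpy as np

n_shots, n_t = 64, 512
rng = np.random.default_rng(0)
rec_a = rng.standard_normal((n_shots, n_t))  # receiver A, all surface shots
rec_b = rng.standard_normal((n_shots, n_t))  # receiver B, all surface shots

virtual = np.zeros(2 * n_t - 1)
for s in range(n_shots):
    # cross-correlate B with A for shot s, then stack over all shots
    virtual += np.correlate(rec_b[s], rec_a[s], mode="full")

# 'virtual' approximates the response at B to a virtual source at A,
# i.e. a trace with both source and receiver on the seafloor
print(virtual.shape)  # (1023,)
```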

17.
Construction of a data platform for resources and environment of cold and arid regions based on the Dawning 3000 computing environment
Based on an analysis of the large volume of specialized data, application models, and programs accumulated in environmental research on cold and arid regions, this paper discusses techniques for data acquisition and management in a distributed network environment, for porting and integrating application models and programs, and for interactive access between applications and data. It proposes that building a data platform consisting of a data center, middleware services, application services, and one-stop management is an effective way to improve data utilization, facilitate the integration of data and applications, and raise research productivity. The main design ideas and implementation techniques of the platform are discussed in detail, and on an experimental basis a prototype data platform for cold and arid regions resources and environment has been built on the Dawning 3000 high-performance computing environment, laying technical groundwork for future data grid construction.

18.
Lateral variations of the mid-mantle conductance beneath Europe
Europe has the highest density of geomagnetic observatories of any region, and several authors have used these data to estimate local geomagnetic response functions for various period ranges, typically 1.5 to 2.5 decades wide. By collecting the local response functions from 35 European observatories, and by carefully selecting and combining them, the independent regional geomagnetic induction data set could be extended to a period range of 4.5 decades. The initial local responses, estimated by two magnetovariation methods employing two different external source fields, were further supplemented by continental and global 11-year data, thus providing a data set extending from the harmonics of the daily variations up to periods of 11 years. The combined responses were inverted individually for each observatory by two techniques, an Occam procedure and a stochastic 1D inversion for a spherically symmetric Earth. The integrated mantle conductance reveals rather regular lateral changes, which were used to construct a mantle conductance image down to a depth of about 770 km. The presented conductance image can be correlated with major European tectonic units such as the Baltic Shield and the Trans-European Suture Zone. To examine possible distortions of the inferred mantle conductance models by large-scale near-surface heterogeneities, specifically those caused by the oceans, seas, and large sedimentary basins, spherical forward modeling was carried out for a radially symmetric conductor coated by an inhomogeneous thin shell with variable surface conductance. The model responses for the 35 observatory positions were inverted in the same way as the experimental data. The results for 28 observatories show that the depth down to a pre-defined conductance level could be retrieved with a high accuracy of a few percent, but for the seven southernmost observatories the recovery error increased to up to 9%. With these seven observatories removed from the analysis, the effect of the seas and oceans on upper- and mid-mantle conductance estimates beneath Europe can be considered negligible.
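As a simple illustration of the quantity being mapped, the sketch below integrates a layered conductivity profile to obtain the conductance down to a given depth; the layer boundaries and conductivity values are invented, not the paper's model.

```python
# Sketch: conductance S = integral of conductivity over depth, for a
# piecewise-constant (layered) 1D conductivity model.
import numpy as np

depth_km = np.array([0, 100, 400, 670, 770])  # layer boundaries (illustrative)
sigma = np.array([0.01, 0.05, 0.5, 2.0])      # conductivity in S/m per layer

def conductance(z_max_km: float) -> float:
    """Integrated conductance (in siemens) from the surface to z_max_km."""
    total = 0.0
    for top, bottom, s in zip(depth_km[:-1], depth_km[1:], sigma):
        if top >= z_max_km:
            break
        thickness_m = (min(bottom, z_max_km) - top) * 1e3
        total += s * thickness_m
    return total

print(f"{conductance(770):.3e} S")  # conductance down to ~770 km depth
```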
