Similar Literature

20 similar records found (search time: 15 ms)
1.
Images can contain chemical information, and many chemical methods can generate image data. For efficient extraction of chemical data from images, data analysis techniques are necessary, and it is a great advantage to be able to work on multivariate images. Many imaging techniques allow the extraction of chemical information. Inorganic analytical chemistry seems to have the longest tradition here, but organic chemistry and biochemistry may soon be catching up. Large data arrays from non-imaging techniques can also be combined with image analysis in a useful way, provided certain conditions are fulfilled.

2.
Abstract

During the 1980s, techniques for the analysis of geographical patterns were refined to the point that they may be applied to data from many fields. Quantitative spatial analysis and the functions available in geographical information systems (GIS) enable computerized implementations of these spatial analysis methods. This paper describes the application of quantitative spatial analysis and GIS functions to the analysis of language data, using the extensive files of the Linguistic Atlas of the Middle and South Atlantic States (LAMSAS). A brief review of recent developments in the use of quantitative and statistical methods for analysing linguistic data is also included.

3.
A neural-network-based cellular automaton (CA) for simulating real and optimized urban systems   Total citations: 78 (self-citations: 8, citations by others: 78)
黎夏, 叶嘉安. 《地理学报》 (Acta Geographica Sinica), 2002, 57(2): 159-166
This paper proposes a neural-network-based cellular automaton (CA). CA models are increasingly used to simulate cities and other geographical phenomena. The greatest difficulty in CA simulation is determining the model's structure and parameters: simulating a real city involves many spatial variables and parameters, and when the model is complex it is hard to determine suitable parameter values. The proposed model has a simple structure, and its parameters are obtained automatically by training the neural network. Analysis shows that the method achieves higher simulation accuracy and greatly shortens the time needed to find the parameters. By screening the training data, the model can also carry out optimized urban simulations, providing a reference for urban planning.
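A minimal sketch of the core idea above: replacing a CA's hand-tuned transition rule with a small neural network (here a single logistic unit) whose weights would be learned from observed land-use change rather than set manually. The variables, weights, and thresholds below are illustrative assumptions, not the paper's actual model.

```python
import math

def conversion_probability(weights, bias, site_vars):
    """Probability that a cell converts to urban use.

    site_vars: per-cell spatial variables, e.g. (distance to centre,
    neighbourhood urban density), as used in urban CA models.
    """
    z = bias + sum(w * v for w, v in zip(weights, site_vars))
    return 1.0 / (1.0 + math.exp(-z))  # logistic activation

# Weights as if obtained by training on observed urban growth (assumed values)
w, b = (-0.8, 3.0), -0.5
near_dense = conversion_probability(w, b, (0.2, 0.9))  # near centre, dense area
far_sparse = conversion_probability(w, b, (0.9, 0.1))  # far away, sparse area
print(near_dense > far_sparse)
```

The point of the approach is that `w` and `b` come out of network training rather than trial-and-error calibration, which is what shortens the parameter search the abstract describes.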

4.
Meta-analysis and its application in eco-environmental research   Total citations: 3 (self-citations: 0, citations by others: 3)
郭明, 李新. 《中国沙漠》 (Journal of Desert Research), 2009, 29(5): 911-919
Meta-analysis is a popular statistical method for synthesizing multiple independent studies on the same topic; it is a quantitative literature review at a higher logical level. Since the 1990s, meta-analysis has been introduced into ecological research, where it has received considerable attention and developed rapidly. This article introduces the basic concepts of meta-analysis, reviews its development and its applications in eco-environmental research, summarizes how to conduct a meta-analysis, and finally discusses its limitations, with the aim of promoting the sound application of meta-analysis as an effective analytical method for integrative eco-environmental research.
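The "how to conduct a meta-analysis" step above centres on pooling effect sizes across studies. A minimal fixed-effect sketch using inverse-variance weighting follows; the effect sizes and variances are illustrative assumptions, not data from the article.

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted mean effect."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)  # variance of the pooled estimate
    return est, var

effects = [0.30, 0.50, 0.40]     # hypothetical per-study effect sizes
variances = [0.04, 0.08, 0.02]   # hypothetical per-study variances
est, var = pooled_effect(effects, variances)
print(round(est, 3))
```

Note that the study with the smallest variance (0.02) pulls the pooled estimate towards its own effect size, which is the defining behaviour of inverse-variance weighting.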

5.
The wider uptake of GIS tools by many application areas outside GIScience means that many newer users of GIS will have high-level knowledge of the wider task, and low-level knowledge of specific system commands as given in reference manuals. However, these newer users may not have the intermediate knowledge that experts in GI science have gained from working with GI systems over several years. Such intermediate knowledge includes an understanding of the assumptions implied by the use of certain functions, and an appreciation of how to combine functions appropriately to create a workflow that suits both the data and overall goals of the geographical analysis task.

Focusing on the common but non-trivial task of interpolating spatial data, this paper considers how to help users gain the necessary knowledge to complete their task and minimise the possibility of methodological error. We observe that both infometric (or cognitive) knowledge and statistical knowledge are usually required to find a solution that jointly and efficiently meets the requirements of a particular user and data set. Using the class of interpolation methods as an example, we outline an approach that combines knowledge from multiple sources and argue the case for designing a prototype ‘intelligent’ module that can sit between a user and a given GIS.

The knowledge needed to assist with the task of interpolation is constructed as a network of rules, structured as a binary decision tree, that assists the user in selecting an appropriate method according to task-related knowledge (or ‘purpose’) and the characteristics of the data sets. The decision tree triggers exploratory diagnostics that are run on the data sets when a rule needs to be evaluated. Following evaluation of the rules, the user is advised which interpolation methods might be, and which should not be, considered for the data set. Any parameters required to interpolate the particular data set (e.g. a distance decay parameter for Inverse Distance Weighting) are also supplied through subsequent optimisation and model selection routines. The rationale of the decision process may be examined, so the ‘intelligent interpolator’ also acts as a learning tool.
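A hedged sketch of the rule network described above: a binary decision tree whose internal rules trigger diagnostics on the data set when evaluated, and whose leaves recommend an interpolation method. The rule names, thresholds, and diagnostics here are illustrative assumptions, not the paper's actual rule base.

```python
def sample_density(points, area):
    """Diagnostic: points per unit area."""
    return len(points) / area

def is_clustered(points):
    """Stand-in diagnostic; a real system might use a nearest-neighbour index."""
    xs = sorted(p[0] for p in points)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return max(gaps) > 4 * (sum(gaps) / len(gaps))

def recommend_method(points, area, purpose):
    """Walk the decision tree; each rule runs its diagnostic only when reached."""
    if purpose == "exact_surface":           # task-related knowledge ('purpose')
        return "thin-plate spline"
    if sample_density(points, area) < 0.01:  # sparse data
        return "kriging (fit variogram first)"
    if is_clustered(points):
        return "inverse distance weighting (declustered)"
    return "nearest neighbour"

pts = [(0, 0), (1, 2), (2, 1), (9, 9), (10, 8)]
print(recommend_method(pts, area=100.0, purpose="quick_look"))
```

Because the rationale is an explicit chain of rules and diagnostics, the path taken through the tree can be shown back to the user, which is what lets such a module double as a learning tool.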

6.
Error estimates from statistical regression analysis are often obviously too small, leading to doubts about the given equations, the statistical method itself and finally, with resignation, to the conclusion that mathematical equations and reality never agree. However, for magnetotelluric data we have found an almost perfect fit between observed scattering and predicted confidence limits of regression coefficients after accounting for a systematic error—the bias.
Different methods to compensate for bias in magnetotelluric impedance estimation have been described using additional data from a reference station. However, sufficiently accurate reference data are often not available. A new method has been developed that enables bias compensation without additional data. For the new method we derive a linear relationship between the effect of bias and an expression depending on the data fit. From this we extrapolate the solution for the unbiased impedance. The new method assumes a special model of uncorrelated noise as well as an approximation for the structure of the impedance tensor. From each pair of components of the unrotated impedance tensor corresponding to the same output channel, one of the pair can be compensated if its magnitude is large compared to that of the other.
The method has been successfully applied in many cases. We claim that the solution is closer to the true impedance than any solution based on the selection of events. It gives a measure of the partitioning of noise between the electric and magnetic channels.
We applied the method to measurements from the North Anatolian Fault Zone (Turkey) and from the Merapi volcano (Central Java) in the period range 10–2500 s. Different instrumentation was used for the two sets of measurements, but in both cases we used fluxgate magnetometers to measure the magnetic variations.
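A hedged sketch of the extrapolation idea above: if biased impedance estimates depend linearly on a misfit expression, fitting that line and evaluating it at zero misfit yields the unbiased estimate. The data, the specific misfit expression, and the one-dimensional treatment are all illustrative assumptions; the actual method works on the unrotated impedance tensor components.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

misfit = [0.4, 0.3, 0.2, 0.1]        # data-fit expression per estimate (assumed)
z_biased = [0.60, 0.70, 0.80, 0.90]  # corresponding impedance estimates (assumed)
slope, intercept = linear_fit(misfit, z_biased)
z_unbiased = intercept               # extrapolate the line to zero misfit
print(z_unbiased)
```

The better the data fit, the smaller the bias effect, so the intercept at zero misfit is taken as the bias-free solution.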

7.
Abstract

Remote sensing is an important source of the land cover data required by many GIS users. Land cover data are typically derived from remotely sensed data through the application of a conventional statistical classification. Such classification techniques are not, however, always appropriate, particularly as they may make untenable assumptions about the data, and their output is hard, comprising only the code of the most likely class of membership. Whilst some deviation from the assumptions may be tolerated, and a fuzzy output may be derived that makes more information on class membership properties available, alternative classification procedures are sometimes required. Artificial neural networks are an attractive alternative to statistical classifiers, and here one is used to derive a fuzzy classification output from a remotely sensed data set that may be post-processed with ancillary data available in a GIS to increase the accuracy with which land cover may be mapped. With the aid of ancillary information on soil type and prior knowledge of class occurrence, the accuracy of an artificial neural network classification was increased by 29.93 to 77.37 per cent. An artificial neural network can therefore be used to generate a fuzzy classification output that may be used with other data sets in a GIS, which may not have been available to the producer of the classification, to increase the accuracy with which land cover may be classified.
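A minimal sketch of the post-processing idea above: a network's soft (fuzzy) class activations are reweighted by ancillary prior probabilities, such as class frequencies per soil type held in a GIS layer, before the final hard label is taken. The activations and priors below are illustrative assumptions, not values from the study.

```python
def posterior(activations, priors):
    """Reweight fuzzy class activations by ancillary priors, then renormalize."""
    weighted = {c: activations[c] * priors[c] for c in activations}
    total = sum(weighted.values())
    return {c: w / total for c, w in weighted.items()}

act = {"forest": 0.45, "crop": 0.40, "urban": 0.15}   # fuzzy network output
prior = {"forest": 0.2, "crop": 0.7, "urban": 0.1}    # from a soil-type layer
post = posterior(act, prior)
print(max(post, key=post.get))
```

In this example the raw network output favours "forest", but the ancillary soil-type prior flips the final label to "crop", which is exactly the kind of accuracy gain the abstract attributes to combining fuzzy outputs with GIS data.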

8.
Abstract

Triangulated irregular networks (TINs) are increasingly popular for their efficiency in data storage and their ability to accommodate irregularly spaced elevation points for many applications of geographical information systems. This paper reviews and evaluates various methods for extracting TINs from dense digital elevation models (DEMs) on a sample DEM. Both structural and statistical comparisons show that the methods perform with different rates of success in different settings. Users of DEM to TIN conversion methods should be aware of the strengths and weaknesses of the methods in addition to their own purposes before conducting the conversion.

9.
Eye movement data convey a wealth of information that can be used to probe human behaviour and cognitive processes. To date, eye tracking studies have mainly focused on laboratory-based evaluations of cartographic interfaces; in contrast, little attention has been paid to eye movement data mining for real-world applications. In this study, we propose using machine-learning methods to infer user tasks from eye movement data in real-world pedestrian navigation scenarios. We conducted a real-world pedestrian navigation experiment in which we recorded eye movement data from 38 participants. We trained and cross-validated a random forest classifier for classifying five common navigation tasks using five types of eye movement features. The results show that the classifier can achieve an overall accuracy of 67%. We found that statistical eye movement features and saccade encoding features are more useful than the other investigated types of features for distinguishing user tasks. We also identified that the choice of classifier, the time window size and the eye movement features considered are all important factors that influence task inference performance. These results open the door to innovative real-world applications, such as navigation systems that provide task-related information depending on the task a user is performing.
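A hedged sketch of the feature-extraction step implied above: turning a time window of gaze samples into statistical eye-movement features that a classifier (such as the random forest used in the study) could consume. The velocity threshold and feature set are illustrative assumptions, not the study's pipeline.

```python
import math
import statistics

def gaze_features(samples, saccade_velocity=30.0):
    """Statistical features from a window of (x, y) gaze samples
    recorded at a fixed sampling rate."""
    velocities = [math.dist(a, b) for a, b in zip(samples, samples[1:])]
    saccades = sum(v > saccade_velocity for v in velocities)  # crude saccade count
    return {
        "mean_velocity": statistics.mean(velocities),
        "velocity_sd": statistics.stdev(velocities),
        "saccade_count": saccades,
    }

window = [(0, 0), (1, 1), (50, 40), (51, 41), (52, 40)]  # hypothetical samples
feats = gaze_features(window)
print(feats["saccade_count"])
```

One feature vector per time window would then be fed to the classifier, which is why the abstract notes that the window size itself is an important factor for task-inference performance.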

10.
A generalized database of global palaeomagnetic data from 3719 lava flows and thin dykes of age 0–5 Ma has been constructed for use with a relational database management system. The database includes all data whose virtual geomagnetic poles (VGP) lie within 45° of the spin axis and can be used for studies of palaeosecular variation and for geomagnetic field modelling. Because many of these data were collected and processed more than 15–20 years ago, each result has been characterized according to the demagnetization procedures carried out. Analysis of these data in terms of the latitude variation of the angular dispersion of VGPs (palaeosecular variation from lavas) strongly suggests that careful data selection is required and that many of the older studies may need to be redone using more modern methods. Differences between the angular dispersions for separate normal- and reverse-polarity data sets confirm that many older studies have not been adequately cleaned magnetically. Therefore, the use of the database for geomagnetic field modelling should be carried out with some caution. Using a VGP cut-off angle that varies with latitude, the best data set consists of 2636 results that show a smooth increase of VGP angular dispersion with latitude. Model G for palaeosecular variation, which is based on modelling of the antisymmetric (dipole) and symmetric (quadrupole) dynamo families, provides a good fit to these results.
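The VGP angular dispersion mentioned above is conventionally computed as S = sqrt(sum(theta_i^2) / (N - 1)), where theta_i is the angle between each VGP and the mean pole (here taken as the spin axis). A minimal sketch with a fixed cut-off follows; the angles are hypothetical and the latitude-dependent cut-off used in the paper is simplified to a constant for illustration.

```python
import math

def angular_dispersion(vgp_angles_deg, cutoff_deg=45.0):
    """Angular dispersion S of VGPs about the mean pole, after applying
    a cut-off to exclude transitional/excursional directions."""
    kept = [a for a in vgp_angles_deg if a <= cutoff_deg]
    return math.sqrt(sum(a * a for a in kept) / (len(kept) - 1))

angles = [5.0, 10.0, 20.0, 15.0, 60.0]  # hypothetical VGP angles; 60° is cut off
print(round(angular_dispersion(angles), 2))
```

The cut-off matters because a few transitional poles inflate S dramatically, which is why the abstract stresses careful data selection before comparing dispersion against latitude.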

11.
Matrix factorization is one of the most popular methods in recommendation systems. However, it faces two challenges related to the check-in data in point of interest (POI) recommendation: data scarcity and implicit feedback. To solve these problems, we propose a Feature-Space Separated Factorization Model (FSS-FM) in this paper. The model represents the POI feature spaces as separate slices, each of which represents a type of feature. Thus, spatial and temporal information and other contexts can be easily added to compensate for scarce data. Moreover, two commonly used objective functions for the factorization model, the weighted least squares and pairwise ranking functions, are combined to construct a hybrid optimization function. Extensive experiments are conducted on two real-life data sets, Gowalla and Foursquare, and the results are compared with those of baseline methods to evaluate the model. The results suggest that the FSS-FM performs better than state-of-the-art methods in terms of precision and recall on both data sets. The model with separate feature spaces can improve the performance of recommendation. The inclusion of spatial and temporal contexts further improves performance, and the spatial context is more influential than the temporal context. In addition, the capacity of hybrid optimization in improving POI recommendation is demonstrated.
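A hedged sketch of the hybrid objective described above: a weighted least-squares term on observed check-ins plus a pairwise ranking term (in the BPR style) on visited-versus-unvisited POI pairs. The weighting scheme, the combination coefficient `alpha`, and the sample values are illustrative assumptions, not the FSS-FM formulation.

```python
import math

def wls_loss(pred, obs, weight):
    """Weighted least-squares term on an observed interaction."""
    return weight * (obs - pred) ** 2

def pairwise_loss(pred_pos, pred_neg):
    """Pairwise ranking term: negative log-sigmoid of the score margin."""
    return -math.log(1.0 / (1.0 + math.exp(-(pred_pos - pred_neg))))

def hybrid_loss(pred_pos, pred_neg, obs, weight, alpha=0.5):
    """Convex combination of the two objectives (alpha is assumed)."""
    return (alpha * wls_loss(pred_pos, obs, weight)
            + (1 - alpha) * pairwise_loss(pred_pos, pred_neg))

print(hybrid_loss(pred_pos=0.9, pred_neg=0.2, obs=1.0, weight=2.0))
```

The least-squares term fits the magnitude of observed check-ins, while the pairwise term handles implicit feedback by only requiring visited POIs to score above unvisited ones, addressing the two challenges the abstract names.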

12.
Introduction. There is an urgent need for greater understanding of the nature and cause of the fluctuations in climate that have occurred since the most recent ice age. According to one research stream in Past Global Changes study, finding multi-proxy data on climate variation with high resolution is an important objectiv…

13.
Abstract

Although biological diversity emerged in the 1980s as a major scientific and political issue, efforts at scientific assessment have been hampered by the lack of cohesive sets of data. We describe, in concept, a comprehensive national diversity information system, using geographical information system (GIS) techniques to organize existing data and improve spatial aspects of the assessment. One potential GIS analysis, to identify gaps in the network of nature reserves for California, is discussed in greater detail. By employing an information systems approach, available data can be used more effectively and better management strategies can be formulated.

14.
This article describes a high-resolution land cover data set for Spain and its application to dasymetric population mapping (at census tract level). Finally, this vector layer is transformed into a grid format. The work parallels the effort of the Joint Research Centre (JRC) of the European Commission, in collaboration with Eurostat and the European Environment Agency (EEA), in building a population density grid for the whole of Europe, combining CORINE Land Cover with population data per commune. We solve many of the problems due to the low resolution of CORINE Land Cover, which are especially visible with Spanish data. An accuracy assessment is carried out from a simple aggregation of georeferenced point population data for the region of Madrid. The bottom-up grid constructed in this way is compared to our top-down grid. We show a great improvement over what has been reported from commune data and CORINE Land Cover, but the improvements seem to come entirely from the higher resolution data sets and not from the statistical modeling in the downscaling exercise. This highlights the importance of providing the research community with more detailed land cover data sets, as well as more detailed population data. The dasymetric grid is available free of charge from the authors upon request.

15.
This paper attempts to merge two types of climate proxy data, the Huashan tree-ring chronology and the Xi'an drought/flood grade series, using a conditional quantile adjustment method. The approach makes maximal use of the continuous variation information in the tree-ring data while allowing the historical documentary records to complement and calibrate it, so that the merged series is more useful for reconstructing past climate.

16.
Methods for trend analysis and change diagnosis of hydrometeorological series, and their comparison   Total citations: 5 (self-citations: 0, citations by others: 5)
Increasingly frequent extreme weather and hydrological events pose major threats to economic development and human safety. Analysing and predicting trends in hydrometeorological series is a prerequisite for avoiding and controlling such destructive global environmental change, and remains a pressing scientific problem. Building on modern mathematical and statistical theory, meteorologists and hydrologists have extensively studied methods for trend testing and change-point detection in hydrometeorological variables. This paper classifies and explains the parametric statistics, non-parametric rank tests, and wavelet analysis methods in common use, summarizes the problems encountered in applying each method along with their solutions, and compares the methods' results using the annual mean temperature at the Tuole meteorological station in the Heihe River basin as a case study. A systematic framework for trend analysis and change diagnosis of hydrometeorological series is distilled, providing a reference for further methodological improvement and application.
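The non-parametric rank tests surveyed above include the widely used Mann-Kendall trend test. A minimal sketch of its S statistic follows (without the tie correction or significance calculation); the temperature series is hypothetical, not the Tuole station data.

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: S > 0 suggests an increasing trend,
    S < 0 a decreasing one (significance test omitted here)."""
    s = 0
    for i in range(len(series) - 1):
        for j in range(i + 1, len(series)):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of each pairwise difference
    return s

temps = [-1.2, -0.8, -0.9, -0.3, 0.1, 0.4]  # hypothetical annual mean temps
print(mann_kendall_s(temps))
```

Because S is built from pairwise ranks rather than magnitudes, the test makes no distributional assumption, which is why rank tests dominate hydrometeorological trend studies.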

17.
Abstract

Resource models integrating disparate nominal or class grid-cell data can be implemented by using spatial filters. Most modelling procedures do not adequately handle the noise created during the process of merging and integrating multiple grid-cell data sets. Data integration is best accomplished in an environment where ready access to statistical and database management systems supports the reclassification of noise grid-cells. These systems provide access to functionality and information which support the design of the spatial filter and the evaluation of the results of both the spatial filter and the resource model.

18.
A geochemical evaluation of the Szőc-Halimba-Kislőd area, Hungary, covering more than 200 km², is presented using different statistical and geostatistical methods. The study area is a representative example of allochthonous karst bauxite accumulation. The three groups of deposits studied here have been explored and mined since 1950. Several thousand boreholes have been drilled, and bauxite cores were analyzed for the five main chemical components. A total of 80,000 pieces of analytical data were processed, followed by a geological examination of borehole logs and of mining excavations. The quantitative geochemical evaluation of the data set led to both geochemical and practical results: the geochemical behavior of the allochthonous, clastic karst bauxite deposits differs essentially from that of the autochthonous and parautochthonous ones, as well as from that of the lateritic bauxite deposits. The deposits of the study area can be split into several successive geochemical-sedimentological units, each representing an event of bauxite transport and accumulation. Clear regional patterns can be revealed in the composition of these units. The geostatistically measured chemical variability of the geochemical units differs considerably, the lowest units showing the smallest variability. The interrelations of the main chemical components are weaker and more irregular in the studied deposits than in the autochthonous lateritic bauxite deposits. Additional local genetic features, such as transport routes, can be delineated by the methods applied. Within each deposit, local changes of chemical composition and of its variability can be determined more precisely. These results can be used in bauxite prospecting and exploration, because areas of high or low bauxite quality can be predicted.

19.
There exist many facets of error and uncertainty in digital spatial information. As error and uncertainty will likely never be completely eliminated, a better understanding of their impacts is necessary. Spatial analytical approaches, in particular, must somehow address data-quality issues. This can range from evaluating the impacts of potential data uncertainty in planning processes that rely on such methods to devising methods that explicitly account for error and uncertainty. To date, little has been done to structure methods that account for error. This article develops an integrated approach to addressing data uncertainty in spatial optimization. We demonstrate that it is possible to characterize uncertainty impacts by constructing and solving a new multi-objective model that explicitly incorporates facets of data uncertainty. Empirical findings indicate that the proposed approaches can be applied to evaluate the impacts of data uncertainty with statistical confidence, moving beyond the popular practice of simulating errors in data.
