22.
    
The classification accuracy of the Spectral Angle Mapper (SAM) depends on choosing appropriate threshold angles, which are normally defined by the user. Trial-and-error and statistical methods are commonly applied to determine them. In this paper, we discuss a real value–area (RV–A) technique, based on the established concentration–area (C–A) fractal model, to determine less biased threshold angles for SAM classification of multispectral images. Shortwave infrared (SWIR) bands of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images were used over and around the Sar Cheshmeh porphyry Cu deposit and the Seridune porphyry Cu prospect. Reference spectra from known hydrothermal alteration zones in each study area were chosen to produce the respective rule images. Segmentation of each rule image yielded an RV–A curve, and hydrothermal alteration mapping based on the threshold values of each curve showed that the first break in each curve is practical for selecting optimum threshold angles. The hydrothermal alteration maps of the study areas were evaluated by field and laboratory studies, including X-ray diffraction analysis, spectral analysis, and thin-section study of rock samples. The accuracy of the SAM classification was evaluated using an error matrix; overall accuracies of 80.62% and 75.45% were achieved in the Sar Cheshmeh and Seridune areas, respectively. We also used threshold angles obtained by several statistical techniques to evaluate the efficiency of the proposed RV–A technique; these angles did not enhance the hydrothermal alteration zones around the known deposits as well as the angles obtained by the RV–A technique. Since no arbitrary parameter is defined by the user, the RV–A technique avoids introducing human bias into the selection of the optimum threshold angle for SAM classification.
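The two steps of the abstract above can be sketched in a few lines: computing a SAM rule image (the spectral angle per pixel) and then picking the threshold at the first break of the classified-area vs. angle curve. This is a minimal illustration, not the paper's implementation; in particular, the breakpoint detection (largest local slope change on a log-log curve) is an assumption.

```python
import numpy as np

def spectral_angles(pixels, reference):
    """Spectral angle (radians) between each pixel spectrum (rows) and
    a reference spectrum -- the per-pixel values of a SAM rule image."""
    dots = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

def rva_threshold(angles, n_thresholds=60):
    """Pick a threshold angle at the first break of the log-log curve of
    classified area vs. threshold (largest change in local slope),
    mimicking the RV-A idea; breakpoint detection details are assumed."""
    t = np.linspace(angles.min() + 1e-9, angles.max(), n_thresholds)
    area = np.array([(angles <= ti).sum() for ti in t], dtype=float)
    slope = np.diff(np.log(area + 1.0)) / np.diff(np.log(t))
    first_break = np.argmax(np.abs(np.diff(slope))) + 1
    return t[first_break]
```

Pixels with an angle below the returned threshold would be assigned to the alteration class of the chosen reference spectrum.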
23.
    
This article presents an adaptive neuro-fuzzy inference system (ANFIS) for the classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that characterize each event. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals), ranging in magnitude from 1.1 to 4.6, recorded at 13 seismic stations between 2004 and 2009; approximately 223 of the earthquakes in this database have M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants. Features such as event origin time, source-to-station distance, epicentral latitude and longitude, magnitude, and spectral content (the corner frequency fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirm that the proposed ANFIS model has good potential for discriminating seismic events.
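The abstract does not give the network configuration, so as an illustrative sketch here is the forward pass of a first-order Takagi-Sugeno fuzzy system with Gaussian memberships — the structure that ANFIS tunes during training. All parameter values below are made up for the example; a trained classifier would learn them from the six features.

```python
import numpy as np

def anfis_forward(x, centers, sigmas, consequents):
    """Forward pass of a first-order Takagi-Sugeno fuzzy system (the
    structure ANFIS trains). x: (n_inputs,); centers, sigmas:
    (n_rules, n_inputs) Gaussian membership parameters; consequents:
    (n_rules, n_inputs + 1) linear coefficients plus a bias per rule."""
    memb = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)   # layer 1: memberships
    w = memb.prod(axis=1)                                 # layer 2: firing strengths
    w_norm = w / w.sum()                                  # layer 3: normalization
    rule_out = consequents[:, :-1] @ x + consequents[:, -1]  # layer 4: rule outputs
    return float(w_norm @ rule_out)                       # layer 5: weighted sum
```

For a two-class problem such as earthquake vs. blast, the scalar output would typically be thresholded (e.g. at 0.5) to produce a label.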
24.
    
A possible effective stress variable for wet granular materials is numerically investigated using an adapted discrete element method (DEM) model of an ideal three-phase system. The DEM simulations consider granular materials made of nearly monodisperse spherical particles in the pendular regime, with the pore fluid consisting of distinct water menisci bridging particle pairs. The contact force-related contribution to the total stresses is isolated and tested as the effective stress candidate for dense and loose systems. It is first recalled that this contact stress tensor is indeed an adequate effective stress: it describes the stress limit states of wet samples with the same Mohr-Coulomb criterion as their dry counterparts. As for constitutive relationships, it is demonstrated that the contact stress tensor, used in conjunction with dry constitutive relations, describes the strains of wet samples during an initial strain regime but not beyond. Outside this so-called quasi-static strain regime, whose extent is much greater for dense than for loose materials, dramatic changes in the contact network prevent macroscale contact stress-strain relationships from applying in the same manner to dry and unsaturated conditions. The numerical results also reveal unexpected constitutive bifurcations for the loose material, related to stick-slip macroscopic behaviour.
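The contact-force stress contribution isolated in such DEM studies is commonly computed with the Love-Weber homogenization formula, sigma_ij = (1/V) * sum over contacts of f_i * l_j, with f the contact force and l the branch vector joining the two particle centres. A minimal sketch (array shapes assumed):

```python
import numpy as np

def contact_stress(forces, branches, volume):
    """Love-Weber contact stress tensor: sigma_ij = (1/V) sum_c f_i^c l_j^c,
    where forces and branches are (n_contacts, 3) arrays of contact
    forces and branch vectors, and volume is the sample volume."""
    return np.einsum('ci,cj->ij', forces, branches) / volume
```

Summed over all contacts of a representative volume, this gives the candidate effective stress tensor that the paper compares against the total stress of the wet sample.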
25.
We have used observations of sodium emission obtained with the McMath-Pierce solar telescope and MESSENGER's Mercury Atmospheric and Surface Composition Spectrometer (MASCS) to constrain models of Mercury's sodium exosphere. The distribution of sodium in Mercury's exosphere during the period January 12-15, 2008, was mapped using the McMath-Pierce solar telescope with the 5″ × 5″ image slicer to observe the D-line emission. On January 14, 2008, the Ultraviolet and Visible Spectrometer (UVVS) channel of MASCS sampled the sodium in Mercury's anti-sunward tail region. We find that the bound exosphere has an equivalent temperature of 900-1200 K, and that this temperature can be achieved if the sodium is ejected either by photon-stimulated desorption (PSD) with a 1200 K Maxwellian velocity distribution or by thermal accommodation of a hotter source. We were not able to discriminate between the two assumed velocity distributions of the particles ejected by PSD, but they require different values of the thermal accommodation coefficient and result in different upper limits on impact vaporization. We were able to place a strong constraint on the impact vaporization rate releasing neutral Na atoms, with an upper limit of 2.1 × 106 cm−2 s−1. The variability of the week-long ground-based observations can be explained by variations in the sources, including both PSD and ion-enhanced PSD, as well as possible temporal enhancements in meteoroid vaporization. Knowledge of both dayside and anti-sunward tail morphologies and radiances is necessary to correctly deduce the exospheric source rates, processes, velocity distribution, and surface interaction.
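The 1200 K Maxwellian ejection distribution invoked above can be sketched by sampling sodium atom speeds (each Cartesian velocity component drawn from a zero-mean Gaussian with standard deviation sqrt(k_B T / m)) and comparing the sample mean with the analytic Maxwell-Boltzmann mean speed sqrt(8 k_B T / (pi m)). This is a generic illustration of the distribution, not the paper's exosphere model.

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
M_NA = 22.99 * 1.66054e-27  # mass of a sodium atom, kg

def maxwellian_speeds(temperature, n, seed=0):
    """Sample n speeds from a Maxwell-Boltzmann distribution at the
    given temperature: each velocity component is Gaussian with
    standard deviation sqrt(k_B * T / m)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(K_B * temperature / M_NA)
    v = rng.normal(0.0, sigma, size=(n, 3))
    return np.linalg.norm(v, axis=1)

# analytic mean speed at 1200 K (about 1.05 km/s for sodium)
mean_speed = np.sqrt(8 * K_B * 1200 / (np.pi * M_NA))
```

Sampled speeds like these would feed a Monte Carlo exosphere model, with the high-speed tail of the distribution populating the anti-sunward tail.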
26.
Gravity gradients can be used to determine the local gravity field of the Earth. This paper investigates the downward continuation of all elements of the disturbing gravitational tensor at satellite level, using the second-order partial derivatives of the extended Stokes formula in the local north-oriented frame, to determine the gravity anomaly at sea level. It considers the inversion of each gradient separately as well as their joint inversion. Numerical studies show that the gradients Tzz, Txx, Tyy and Txz can be continued downward to sea level with similar quality in the presence of white noise, whereas Tyz performs considerably worse. The bias-corrected joint inversion shows that the gravity anomaly can be recovered with 1 mGal accuracy. Variance component estimation is also tested as a means of updating the observation weights in the joint inversion.
27.
The variogram is a key parameter for geostatistical estimation and simulation. Preferential sampling can bias the spatial structure and often leads to noisy and unreliable variograms. A novel technique is proposed to weight variogram pairs in order to compensate for preferential or clustered sampling. Weighting the pairs by global kriging of the quadratic differences between the tail and head values gives each pair an appropriate weight, removes noise, and minimizes artifacts in the experimental variogram; variogram uncertainty can also be computed with this technique. The covariance required between the pairs entering the variogram calculation is a fourth-order covariance that must be calculated from second-order moments. This introduces some circularity: an initial variogram must be assumed before calculating how the pairs should be weighted for the experimental variogram. The methodology is assessed with synthetic and realistic examples. For the synthetic example, a comparison between the traditional and declustered variograms shows that the declustered variograms are better estimates of the true underlying variograms; the realistic example likewise shows that the declustered sample variogram is closer to the true variogram.
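A weighted experimental variogram of the general form used above can be sketched in 1-D. As a simplified stand-in for the kriging-based pair weighting the paper proposes, the pair weights here come from cell declustering (each datum weighted by the inverse of the number of points in its cell); the weighted-average form of the estimator is the same.

```python
import numpy as np

def declustered_variogram(x, z, lags, tol, cell):
    """Experimental 1-D variogram with pair weights from cell
    declustering: each datum gets weight 1/(points in its cell) and a
    pair's weight is the product of its two data weights -- a simplified
    stand-in for the kriging-based pair weighting."""
    cells = np.floor(x / cell).astype(int)
    _, inverse, counts = np.unique(cells, return_inverse=True,
                                   return_counts=True)
    w = 1.0 / counts[inverse]
    d = np.abs(x[:, None] - x[None, :])
    gamma = []
    for h in lags:
        i, j = np.nonzero(np.triu(np.abs(d - h) <= tol, k=1))
        pw = w[i] * w[j]
        gamma.append(0.5 * np.sum(pw * (z[i] - z[j]) ** 2) / pw.sum())
    return np.array(gamma)
```

With equal weights this reduces to the traditional experimental variogram; unequal weights damp the contribution of pairs drawn from densely sampled clusters.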
28.
The present study adopts an integrative modelling methodology that combines the strengths of the SLEUTH model and the Conservation Assessment and Prioritization System (CAPS). A scenario-based geographic information system simulation environment was developed for Hashtpar City, Iran, to analyse the manageability of the landscape under each urban growth scenario. The CAPS approach was used for biodiversity conservation suitability mapping, and the SLEUTH model generated predictive urban layers for 2020, 2030, 2040 and 2050 under each scenario (the dynamic factors for conservation suitability mapping). The conservation suitability surface of the area was accordingly updated for each time point and each urban development storyline. Two-way analysis of variance and Duncan's new multiple range test were employed to compare the three scenarios. The results indicate that the managed urban growth scenario provides better landscape manageability and a smaller negative impact on conservation suitability values.
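The scenario comparison above rests on a balanced two-way ANOVA (e.g. scenario × time point, with replicate suitability values per cell). A minimal sketch of that test, with made-up data shapes rather than the study's actual suitability surfaces:

```python
import numpy as np

def two_way_anova(data):
    """Balanced two-way ANOVA. data has shape (a, b, n): levels of
    factor A (e.g. scenario), levels of factor B (e.g. time point), and
    n replicates per cell. Returns F statistics for the two main
    effects and the interaction."""
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))
    mean_b = data.mean(axis=(0, 2))
    mean_ab = data.mean(axis=2)
    ss_a = n * b * np.sum((mean_a - grand) ** 2)
    ss_b = n * a * np.sum((mean_b - grand) ** 2)
    ss_ab = n * np.sum(
        (mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
    ss_e = np.sum((data - mean_ab[:, :, None]) ** 2)
    ms_e = ss_e / (a * b * (n - 1))
    return (ss_a / (a - 1) / ms_e,
            ss_b / (b - 1) / ms_e,
            ss_ab / ((a - 1) * (b - 1)) / ms_e)
```

A large F for the scenario factor, followed by a multiple range test such as Duncan's, is what separates the managed growth scenario from the others.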
29.
In conventional seismic hazard analysis, a uniform distribution over the source area and magnitude range is assumed when evaluating source seismicity, which cannot capture the peculiar characteristics of near-fault ground motion. For near-field hazard analysis, two important factors need to be considered: (1) rupture directivity effects and (2) the occurrence of characteristic scenario ruptures on nearby sources. This study proposes a simple framework that accounts for both effects by modifying the predictions of a conventional ground motion model according to the pulse occurrence probability, and by adjusting the magnitude-frequency distribution to reflect the characteristic rupture behaviour of the fault. The results of the proposed approach are compared with those of deterministic and probabilistic seismic hazard analyses; they indicate that both the characteristic earthquake and directivity have significant effects on seismic hazard estimates. The proposed approach yields results close to deterministic seismic hazard analysis at short periods (T < 1.0 s) and follows probabilistic seismic hazard analysis at long periods (T > 1.0 s). Finally, seismic hazard maps based on the proposed method can be developed and compared with those of other methods.
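The pulse-probability modification follows the total probability theorem: the exceedance probability is the pulse-occurrence-weighted mix of the pulse-like and ordinary ground motion distributions. A minimal sketch with lognormal intensity; the amplification and dispersion factors for the pulse case are illustrative values, not taken from the paper.

```python
from math import erf, log, sqrt

def lognormal_exceed(x, median, beta):
    """P(Sa > x) for a lognormally distributed ground motion intensity
    with the given median and logarithmic standard deviation beta."""
    z = (log(x) - log(median)) / beta
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def near_fault_exceed(x, p_pulse, median, beta,
                      pulse_amp=1.4, pulse_beta=1.2):
    """Total-probability mix of pulse-like and ordinary motions:
    P(Sa > x) = P(pulse) * P(Sa > x | pulse)
              + (1 - P(pulse)) * P(Sa > x | no pulse).
    The pulse case amplifies the median and inflates the dispersion;
    both factors here are illustrative assumptions."""
    return (p_pulse * lognormal_exceed(x, pulse_amp * median,
                                       pulse_beta * beta)
            + (1.0 - p_pulse) * lognormal_exceed(x, median, beta))
```

Integrating such modified exceedance probabilities over the adjusted magnitude-frequency distribution would give the near-fault hazard curve.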
30.
How to select a limited number of strong ground motion records (SGMRs) is an important challenge in the seismic collapse capacity assessment of structures. The collapse capacity is defined as the ground motion intensity measure corresponding to drift-related dynamic instability of the structural system. The goal of this paper is to select, from a general set of SGMRs, a small number of subsets such that each can be used for reliable prediction of the mean collapse capacity of a particular group of structures, i.e. of single degree-of-freedom systems within a typical behaviour range. To achieve this, multivariate statistical analysis is first applied to determine the degree of similarity between each selected small subset and the general set of SGMRs. Principal component analysis is then applied to identify the best way to group structures, so that a minimum number of SGMRs is needed in a proposed subset. The structures were classified into six groups, and for each group a subset of eight SGMRs is proposed. The methodology was validated by analysing a first-mode-dominated three-storey reinforced concrete structure with the proposed subsets as well as with the general set of SGMRs. The results show that the mean seismic collapse capacity can be predicted by the proposed subsets with less dispersion than by a recently developed improved approach based on scaling the response spectra of the records to match the conditional mean spectrum. Copyright © 2010 John Wiley & Sons, Ltd.
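The PCA-based selection idea can be sketched as: project the records' response spectra onto a few principal components, then pick the records whose scores lie closest to the centroid of the general set. This is a simplified similarity criterion standing in for the paper's full multivariate procedure; all shapes and parameters below are assumptions.

```python
import numpy as np

def pca_scores(spectra, k=2):
    """Scores of response spectra (rows = records, columns = periods)
    on the first k principal components, via SVD of the centred matrix."""
    centred = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T

def pick_subset(spectra, n_sub=8, k=2):
    """Pick the n_sub records whose PC scores lie closest to the mean
    of the general set -- a simplified stand-in for the paper's
    multivariate statistical selection."""
    scores = pca_scores(spectra, k)
    dist = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
    return np.argsort(dist)[:n_sub]
```

A subset chosen this way is, in PC space, representative of the general set, which is the property the paper needs for the subset to reproduce the general set's mean collapse capacity.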