Similar Documents (20 results)
1.
A series of 2D apparent resistivity data sets was generated over two synthetic models representing different geological or environmental conditions commonly encountered in geophysical applications for hydrogeological, environmental and engineering investigations. The apparent resistivity data were generated for the Wenner-alpha (WA), Wenner-beta (WB), Wenner–Schlumberger (WSC), dipole–dipole (DDP), pole–dipole (PDP) and pole–pole (PP) arrays, which were paired such that the 2D profiles in one direction were surveyed with one array type and the perpendicular profiles with a different array type. The 2D apparent resistivity data for each orthogonal paired array were then collated into 3D data sets. The effectiveness and efficiency of the orthogonal paired arrays in 3D geoelectrical resistivity imaging were evaluated by computing the mean absolute anomaly effects of the electrode configurations on the synthetic models. The results show that the DDP–PDP, DDP–PP, DDP–WSC, PDP–PP, DDP–WB, PDP–WB and WB–WSC orthogonal paired arrays produced the highest anomaly effects, indicating that these paired arrays are more sensitive to 3D features of the geologic models than the other orthogonal paired arrays investigated.
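The abstract does not give the exact formula used for the mean absolute anomaly effect, but a common way to quantify an array's sensitivity is the mean absolute relative deviation of the apparent resistivity from the homogeneous background response. A minimal Python sketch under that assumption (the function name and the background argument are illustrative, not taken from the paper):

```python
def mean_absolute_anomaly_effect(apparent_res, background):
    """Mean absolute relative deviation of apparent resistivity (ohm-m)
    from the homogeneous background response -- one plausible definition
    of an electrode configuration's anomaly effect."""
    return sum(abs(r - background) / background for r in apparent_res) / len(apparent_res)
```

Ranking the array pairs by this quantity over the same synthetic model reproduces the kind of comparison the study describes.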

2.
The Sungun porphyry copper deposit is hosted in a dioritic/granodioritic to quartz-monzonitic stock that intruded Eocene volcanosedimentary and Cretaceous carbonate rocks. Copper mineralization is associated mainly with potassic alteration and to a lesser extent with sericitic alteration. Based on fluid inclusion and isotopic data previously published by Hezarkhani and Williams-Jones, most of the copper is interpreted to have been deposited during the waning stages of orthomagmatic hydrothermal activity at temperatures of 400 to 300 °C. These data also indicate that the hydrothermal system involved meteoric waters and boiled extensively. In this work, thermodynamic data are used to delineate the stability fields of alteration and ore assemblages as a function of fS2, fO2 and pH. The solubility of chalcopyrite was evaluated over this range of conditions using recently published experimental data. During early potassic alteration (>450 °C), copper solubility is calculated to have been >50 000 ppm, whereas the copper content of the initial fluid responsible for ore deposition is estimated, from fluid inclusion data, to have been 1200–3800 ppm. This indicates that the fluid was initially highly undersaturated with respect to chalcopyrite, which agrees with the observation that veins formed at T > 400 °C contain molybdenite but rarely chalcopyrite. Copper solubility drops rapidly with decreasing temperature: at 400 °C it is approximately 1000 ppm, within the range estimated from fluid inclusion data, whereas at 350 °C it is only 25 ppm. These calculations are consistent with observations that the bulk of the chalcopyrite at Sungun is hosted by veins formed at temperatures of 360 ± 60 °C. Other factors that, in principle, may reduce chalcopyrite solubility are increases in pH and decreases in fO2 and aCl.
Our analysis shows, however, that most of the change in pH occurred at high temperature, when the fluid was grossly undersaturated with respect to chalcopyrite, and that the direction of change in fO2 increased chalcopyrite solubility. We propose that the Sungun deposit formed mainly in response to the sharp temperature decrease that accompanied boiling, and partly as a result of the additional heat loss and decrease in aCl that occurred when acidic Cu-bearing magmatic waters mixed with cooler meteoric waters of lower salinity. Received: 8 July 1998 / Accepted: 8 April 1999

3.
Benford’s Law gives the expected frequencies of the digits in tabulated data and asserts that the lower digits (1, 2, and 3) are expected to occur more frequently than the higher digits. This study tested whether the law applied to two large earth science data sets. The first test analyzed streamflow statistics and found close conformity to Benford’s Law. The second test analyzed the sizes of lakes and wetlands and found that the data did not conform to Benford’s Law. Further analysis showed that the lake and wetland data followed a power law. The expected digit frequencies for data following a power law were derived, and the lake data fit these expected digit frequencies closely. Benford’s Law could thus serve as a quality check for streamflow data subsets, perhaps related to time or geographical area. Also, with the importance of lakes as essential components of the water cycle, either Benford’s Law or the expected digit frequencies of data following a power law could be used as an authenticity and validity check on future databases dealing with water bodies. We give several applications and avenues for future research, including an assessment of whether the digit frequencies of data could be used to derive the power law exponent, and whether the digit frequencies could be used to verify the range over which a power law applies. Our results indicate that data related to water bodies should conform to Benford’s Law, and that nonconformity could indicate (a) an incomplete data set, (b) a sample that is not representative of the population, (c) excessive rounding of the data, (d) data errors, inconsistencies, or anomalies, and/or (e) conformity to a power law with a large exponent.
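Benford's Law predicts that a leading digit d occurs with frequency log10(1 + 1/d), so digit 1 is expected about 30.1% of the time. The conformity test the study describes can be sketched in Python (the helper names are illustrative):

```python
import math

def benford_freq(d):
    """Expected frequency of leading digit d (1-9) under Benford's Law."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant digit of a positive number, e.g. 0.0042 -> 4."""
    return int(x / 10 ** math.floor(math.log10(x)))
```

Comparing the observed leading-digit frequencies of a streamflow data set with benford_freq, for instance via a chi-squared statistic, gives the kind of quality check proposed above.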

4.
NOAA’s National Geophysical Data Center is using state-of-the-art Internet tools for natural hazards education, public outreach, and access to natural hazards data. For example, NGDC acquires, processes, and provides access to geologic hazards event data that are useful in natural hazards risk assessment and hazards-related research. In addition, a collection of natural hazards slides and a teacher’s guide on volcanoes are available online. NGDC also created an online “Kids Hazards Quiz” to test the user’s knowledge of disaster safety information. An online Natural Hazards Data Resources Directory provides access to information and links to organizations that provide natural hazards data and information. Expanded access to these data and information by the public and researchers can increase public awareness of natural hazards, improve hazards research, and ultimately reduce the devastating impacts of natural disasters.

5.
U-Pb analyses of single monazite grains from two granulite facies metapelites in the Ivrea Zone (Southern Alps) reveal the presence, in both samples, of at least three different ages and show that earlier interpretations of supposedly concordant monazite data as cooling ages are unwarranted. One group of monazite data defines a subconcordant discordia line with an upper intercept age of 293.4 ± 5.8 Ma and a lower intercept age of 210 ± 14 Ma. The upper intercept is interpreted as the real cooling age of the monazites. The lower intercept is interpreted as an episode of fluid-driven Pb loss, indicated by internal and external corrosion structures not only of the monazites but also of the zircons in the same samples, which were likewise rejuvenated at 210 ± 12 Ma. Another group of monazite data lies above the concordia. The presence of excess 206Pb indicates that these crystals grew below the monazite blocking temperature, thus after the granulite facies metamorphism. The age of growth of the new monazite crystals is approximated by their 207Pb/235U ages, which range between 273 and 244 Ma. The two groups of post-cooling-age (post-293.4 ± 5.8 Ma) monazite data correspond to two distinct late- and post-Variscan geotectonic regimes that affected the Southern Alps: (1) Permian transtension with decompression and anatectic melting; (2) Upper Triassic to Lower Jurassic rifting with geographically dispersed hydrothermal activity and alkaline magmatism. Received: 7 July 1998 / Accepted: 4 November 1998

6.
Calibrations are presented for an independent set of four equilibria between end-members of garnet, hornblende, plagioclase and quartz. Thermodynamic data from a large internally-consistent thermodynamic dataset are used to determine the ΔG° of the equilibria. Then, with the known mixing properties of garnet and plagioclase, the non-ideal mixing in amphibole is derived from a set of 74 natural garnet–amphibole–plagioclase–quartz assemblages crystallised in the range 4–13 kbar and 500–800 °C. The advantage of using known thermodynamic data to calculate ΔG° is that correlated variations of composition with temperature and pressure are not manifested in fictive derived entropies and volumes, but are accounted for with non-ideal mixing terms. The amphibole is modelled using a set of ten independent end-members whose mixing parameters are in good agreement with the small amount of data available in the literature. The equilibria used to calibrate the amphibole non-ideal mixing reproduce pressures and temperatures with average absolute deviations of 1.1 kbar and 35 °C using an average pressure–temperature approach, and 0.8 kbar with an average pressure approach. The mixing data provide not only a basis for thermobarometry involving additional phases, but also for calculation of phase diagrams in complex amphibole-bearing systems. Received: 8 November 1999 / Accepted: 7 July 2000

7.
A compilation of B–Be–Li data on rocks covering the entire eruptive history of Somma-Vesuvius is presented and interpreted in the light of evolution models for the Somma-Vesuvius rocks. Using major and trace element data, fractional crystallization models are presented for different geochemical units. These data were used to constrain the source mineralogy of the Somma-Vesuvius rocks (ol-opx-cpx-gar-amp in proportions of 0.4-0.3-0.1-0.1-0.1), the amount of sediment added (5–10%) and the melt fraction from batch partial melting computations (0.05–0.1). From the B–Li data it is inferred that the main process responsible for the B isotopic signature is sediment recycling. However, the B–Li data show a major variation in Li abundances with respect to B, which is explained by Li loss through dehydration before the fluid enriched the mantle wedge that produced the arc magmas. The Somma-Vesuvius B isotope composition is intermediate between that of the Campi Flegrei and the broad field of the Aeolian island arc. The low Be isotope ratios in the recent volcanic rocks can be explained by: (a) accretion of the top 1–22 m of the incoming sediment, (b) large amounts of sediment erosion, (c) a slow rate of subduction, which has resulted in a long magmatic history for the Vesuvius magma, or (d) the sediment component taking several Myr longer than the subducting plate to reach the magma source region beneath Italy.

8.
MATLAB™ is a powerful, easy-to-use software package suitable for many mathematical operations and widely applied in science. One such application is the fitting of trend lines to a given data set so as to interpret the relationships between, and the variances of, the parameters involved. We provide here a MATLAB™ code that performs weighted linear regression with (correlated or uncorrelated) errors in bivariate data and can handle ‘force-fit’ regression as well.
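The abstract describes a MATLAB™ code; for readers without MATLAB, the simplest member of this family — weighted least squares with uncorrelated errors in y only — has a closed-form solution. A Python sketch under that assumption (the full bivariate, correlated-error case, e.g. York-style regression, is more involved):

```python
def weighted_linefit(x, y, sigma_y):
    """Weighted least-squares fit of y = a + b*x, with independent
    1-sigma errors on y only. Returns (intercept a, slope b)."""
    w = [1.0 / s**2 for s in sigma_y]        # weights = inverse variances
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx**2
    a = (Sxx * Sy - Sx * Sxy) / delta        # intercept
    b = (S * Sxy - Sx * Sy) / delta          # slope
    return a, b
```

Down-weighting a point by enlarging its sigma pulls the fitted line toward the better-measured data, which is the behavior the weighted regression above exploits.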

9.
To tackle the difficulties of 3-D full-volume prestack migration based on the double-square-root (DSR) one-way wave equation in practical applications, the common-azimuth migration approach is first discussed using dual-domain wave propagators within the theoretical framework of crossline common-offset migration. Through a coordinate transformation, a common-azimuth prestack tau migration technique is developed that recursively continues the source and receiver wavefields and picks up the migrated results in the two-way vertical traveltime (tau) direction. Migrations of synthetic data sets from the SEG/EAGE salt model show that our common-azimuth migration approaches are effective in both the depth and tau domains. Two real data examples demonstrate the advantages in accuracy and imaging resolution of wave-theory-based prestack migration methods over conventional Kirchhoff integral methods. Translated from Oil Geophysical Prospecting (石油地球物理勘探), 2006, 41(6): 629–634

10.
This paper examines the possible storm surge damage from a major hurricane hitting the Houston Metropolitan Statistical Area (MSA). Using storm surge analysis on a unique data set compiled from the Texas Workforce Commission (QCEW), the paper estimates the expected industry-level damage for each county in the Houston MSA. The advantage of using GIS to analyze expected storm surge damage is that it provides an accurate estimate of the number of affected employees and the probable wage losses, by industry and county, based on QCEW data. The results indicate that the ‘Basic Chemical Manufacturing’ and ‘Oil and Gas Extraction’ industries would incur the highest employee and payroll losses, while ‘Restaurants and Eateries’ would suffer the largest establishment damage, if a major hurricane were to hit the Houston MSA.

11.
This paper presents molybdenum isotope data, along with trace element contents, to investigate the geochemical behavior of authigenic Mo during long-term burial in sediments of continental margin settings of the Yangtze block, as well as its implications for the burial of original organic carbon. The burial rate of original organic carbon was estimated from the amount of sedimentary sulfur (TS content), whilst the carbon loss by aerobic degradation was estimated from calculated Mn contents. From these estimates, the original organic carbon flux was calculated, exhibiting a large range of variation (0.17–0.67 mmol/m2/day). The strong correlation between sedimentary Mo isotope values and organic carbon burial rates, previously proposed on the basis of investigations of modern ocean sediments, was also used here to estimate the organic carbon burial rate. The data obtained through this model showed that organic carbon burial rates vary widely, from 0.43 to 2.87 mmol/m2/day. Although the two sets of estimates derived from different geochemical records in the Yangtze block deviate by one order of magnitude, they display a strong correlation. It is thus tempting to speculate that the Mo isotope signature of sediments may serve as a tracer for the accumulation rate of original organic carbon in continental margin sediments. Translated from Earth Science—Journal of China University of Geosciences (地球科学—中国地质大学学报), 2007, 32(6)

12.
The petrogenetic potential of in situ laser ablation Hf isotope data from melt precipitated zircons was explored through the analyses of about 700 individual crystals derived from about 20 different granitic intrusions covering the Variscan basement segment of eastern Bavaria, SE Germany. In combination with geochemical features, four major suites of granitic rocks can be distinguished: (1) NE Bavarian redwitzites (52–57 wt% SiO2, intrusion ages around 323 Ma) have chondritic εHf(t) values (+0.8 to –0.4). The redwitzites are hybrid rocks and the Hf data are permissive of mixing of a mantle progenitor and crustal melts. (2) Various intermediate rock types (dioritic dyke, granodiorite, palite, 59–63 wt% SiO2, 334–320 Ma) from the Bavarian Forest yield negative εHf(t) values between –3.4 and –5.1. These values which apparently contradict a mantle contribution fingerprint an enriched (metasomatized) mantle component that was mixed with crustal material. (3) Voluminous, major crust forming granites sensu stricto (67–75 wt% SiO2, 328–298 Ma) are characterized by a range in εHf(t) values from –0.5 to –5.6. Different crustal sources and/or modification of crustal melts by various input of juvenile material can explain this variation. (4) Post-plutonic (c. 299 Ma) porphyritic dykes of dacitic composition (64–67 wt% SiO2) from the southern Bavarian Forest have chondritic εHf(t) values (+0.6 to –1.1) and display large intergrain Hf isotope variation. The dykes form a separate petrogenetic group and the Hf data suggest that the zircons crystallized when a pristine mantle-derived parental melt was modified by infiltration of crustal material. The zircon Hf data form a largely coherent positive array with the whole-rock Nd data and both systems yield similar two-stage depleted mantle model ages (1.1–1.7 Ga).
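The εHf(t) values quoted above use the standard epsilon notation: the deviation, in parts per 10^4, of the sample's initial 176Hf/177Hf from the chondritic uniform reservoir (CHUR) at crystallization time t. A Python sketch of that calculation, assuming the CHUR parameters of Bouvier et al. (2008) and the 176Lu decay constant of Söderlund et al. (2004); the paper may have used slightly different constants:

```python
import math

LAMBDA_LU = 1.867e-11   # 176Lu decay constant, 1/yr (Soederlund et al., 2004)
CHUR_HF = 0.282785      # present-day 176Hf/177Hf of CHUR (Bouvier et al., 2008)
CHUR_LU = 0.0336        # present-day 176Lu/177Hf of CHUR (Bouvier et al., 2008)

def epsilon_hf(hf_ratio, lu_ratio, t_ma):
    """Initial epsilon-Hf at time t (Ma) from the measured present-day
    176Hf/177Hf and 176Lu/177Hf of a zircon."""
    growth = math.exp(LAMBDA_LU * t_ma * 1e6) - 1
    sample_t = hf_ratio - lu_ratio * growth   # age-correct the sample
    chur_t = CHUR_HF - CHUR_LU * growth       # age-correct CHUR
    return (sample_t / chur_t - 1) * 1e4
```

A zircon with exactly chondritic ratios gives εHf(t) = 0 at any age, matching the "chondritic" label used for the redwitzites above.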

13.
Gravity data result from the superposed gravity fields of all underground sources. The targets of detection are always submerged in the background field, and thus one of the crucial problems in gravity data interpretation is how to improve the resolution of the observed information. The wavelet transform has recently been introduced into potential-field data processing, both as a filter and as a powerful source analysis tool. This paper studies the improvement in resolution of gravity data obtainable with wavelet analysis and spectral methods, and reveals the geometric characteristics of density heterogeneities described by simple-shaped sources. First, the basic theory of multiscale wavelet analysis, its lifting scheme and the spectral method are introduced. Through an experimental study on a forward simulation of anomalies produced by the superposition of six objects, and on measured data from the Songliao plain, Northeast China, the shape, size and depth of the buried objects were estimated. The results were compared with those obtained by conventional techniques, demonstrating that this method greatly improves the resolution of gravity anomalies. Translated from Progress in Geophysics (地球物理学进展), 2007, 22(1): 112–120
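The lifting scheme mentioned above can be illustrated with its simplest instance, the Haar wavelet: a predict step forms detail coefficients from differences of neighboring samples, and an update step forms a coarser approximation. A pure-Python sketch of one decomposition level (a gravity profile would be decomposed recursively on the approximation; the function names are illustrative):

```python
def haar_lift(signal):
    """One level of the Haar wavelet transform via the lifting scheme.
    Input length must be even. Returns (approximation, detail)."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_unlift(approx, detail):
    """Exactly invert one lifting level."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Smooth regional trends concentrate in the approximation while sharp, shallow-source anomalies concentrate in the detail coefficients, which is what makes the transform useful as a separation filter.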

14.
Typically, if uncertainty in subsurface parameters is addressed at all, it is done using probability theory. Probability theory can handle only one of the two types of uncertainty (aleatory), so epistemic uncertainty is neglected. Dempster–Shafer evidence theory (DST) is an approach that allows analysis of both epistemic and aleatory uncertainty. In this paper, DST combination rules are used to combine measured field data on permeability with the expert opinions of hydrogeologists (subjective information) to examine uncertainty. Dempster’s rule of combination is chosen as the primary combination rule, mainly because of the theoretical development behind it and the simplicity of the data. Since Dempster’s rule has drawn some criticism, two other combination rules (Yager’s rule and the Hau–Kashyap method), which attempt to correct the problems that can be encountered with Dempster’s rule, were also examined. With the particular data sets used here, no single combination rule was clearly superior; Dempster’s rule appears to suffice when the conflict amongst the evidence is low.
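Dempster's rule combines two basic probability assignments by multiplying the masses of intersecting focal elements and renormalizing by the total non-conflicting mass. A minimal Python sketch, representing mass functions as dicts mapping frozensets of hypotheses to masses (the example hypotheses below are illustrative, not the paper's permeability classes):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments: dicts of frozenset(hypotheses) -> mass."""
    combined = {}
    conflict = 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mA * mB
            else:
                conflict += mA * mB        # mass on disjoint sets
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    k = 1.0 - conflict                     # normalization factor
    return {A: m / k for A, m in combined.items()}
```

The renormalization by 1 − conflict is exactly the step criticized when conflict is high; Yager's rule instead assigns the conflicting mass to the frame of discernment, which is why the paper examines it as an alternative.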

15.
Potential for global mapping of development via a nightsat mission
Nightsat is a concept for a satellite system capable of global observation of the location, form and density of lighted infrastructure and development within human settlements. Nightsat’s repeat cycle should be sufficient to produce an annual cloud-free composite of surface lighting that enables detection of growth rates. Airborne and satellite imagery have been used to define the range of spatial, spectral and detection-limit options for a future Nightsat mission. Our conclusion is that Nightsat should collect data from a near-synchronous orbit in the early evening, with 50–100 m spatial resolution and detection limits of 2.5×10^−8 W cm^−2 sr^−1 μm^−1 or better. Multispectral low-light imaging data would be better than panchromatic data, providing valuable information on the type or character of lighting, a potentially stronger predictor of variables such as ambient population density and economic activity.

16.
Building models in the Earth Sciences often requires the solution of an inverse problem: some unknown model parameters need to be calibrated with actual measurements. In most cases, the set of measurements cannot completely and uniquely determine the model parameters; hence multiple models can describe the same data set. Bayesian inverse theory provides a framework for solving this problem. Bayesian methods rely on the fact that the conditional probability of the model parameters given the data (the posterior) is proportional to the likelihood of observing the data times a prior belief expressed as a prior distribution of the model parameters. When the prior distribution is not Gaussian and the relation between data and parameters (the forward model) is strongly non-linear, one has to resort to iterative samplers, often Markov chain Monte Carlo methods, to generate samples that fit the data likelihood and reflect the prior model statistics. While theoretically sound, such methods can be slow to converge and are often impractical when the forward model is CPU-demanding. In this paper, we propose a new sampling method that can sample from a variety of priors and condition model parameters to a variety of data types. The method does not rely on the traditional Bayesian decomposition of the posterior into likelihood and prior; instead it uses so-called pre-posterior distributions, i.e. the probability of the model parameters given some subset of the data. The use of pre-posteriors allows the data to be decomposed into so-called “easy data” (or linear data) and “difficult data” (or nonlinear data). The method relies on fast non-iterative sequential simulation to generate model realizations.
The difficult data are matched by perturbing an initial realization using a perturbation mechanism termed “probability perturbation.” The probability perturbation method moves the initial guess closer to matching the difficult data, while maintaining the prior model statistics and the conditioning to the linear data. Several examples illustrate the properties of this method.
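The probability perturbation method itself is not reproduced here, but the iterative Markov chain Monte Carlo baseline it is contrasted with can be sketched as a one-parameter Metropolis sampler (a generic illustration of the slow-to-converge approach described above, not the authors' algorithm):

```python
import math
import random

def metropolis(log_post, x0, steps=5000, prop_sd=1.0, seed=1):
    """Random-walk Metropolis sampler for an unnormalized log-posterior.
    Each step proposes a Gaussian move and accepts it with probability
    min(1, posterior ratio); every forward-model evaluation happens in
    log_post, which is why expensive forward models make MCMC costly."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        xn = x + rng.gauss(0.0, prop_sd)
        lpn = log_post(xn)
        if math.log(rng.random()) < lpn - lp:   # Metropolis acceptance test
            x, lp = xn, lpn
        samples.append(x)
    return samples
```

Each iteration requires a fresh forward-model evaluation inside log_post, which is exactly the cost the pre-posterior / sequential-simulation approach above is designed to avoid.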

17.
This study presents petrographic and compositional data for coexisting peralkaline silicate glass and quenched natrocarbonatite melt in nepheline phenocrysts from the 24 September 2007 and July 2008 eruptions of the natrocarbonatite volcano Oldoinyo Lengai (Tanzania). Data are also given for peralkaline residual glass in combeite nephelinite ash clasts occurring in the March–April 2006 large-volume natrocarbonatite flow. These data are considered to demonstrate the occurrence of liquid immiscibility between strongly peralkaline Fe-rich nephelinite melt and natrocarbonatite at Oldoinyo Lengai. Compositional data for coexisting silicate–carbonate pairs, in conjunction with previous experimental studies, suggest that the size of the field of liquid immiscibility for carbonated nephelinitic magmas is a function of their peralkalinity. It is shown that peralkaline combeite wollastonite nephelinite was present at Oldoinyo Lengai prior to, and during, the 24 September 2007 ash eruption. It is postulated that the driving force for this major eruption was assimilation and decomposition of previously emplaced solid natrocarbonatite. Assimilation resulted in the formation of the unusual hybrid nepheline–andradite–melilite–combeite–phosphate magma represented by the 24 September 2007 ash.

18.
The current theoretical development of the analysis of compositional data in the article by Aitchison and Egozcue dismisses the use of Harker variation diagrams and other similar plots on compositional data as “meaningless” or “useless.” In this work, it is shown that variation diagrams are essentially not a correlation tool but a graphical representation of the mass-action and mass-balance principles in the context of a given geological system, and that, when used correctly, they provide vital information for the igneous petrologist. The qualitative validity of the “spurious trends” in these diagrams is also demonstrated when they are interpreted in their proper geological framework. The example previously used by Rollinson to test the usefulness of the log-ratio transformation in the Aitchison and Egozcue article is revisited here in order to fully illustrate the proper use of this tool.

19.
Photosynthetically available radiation (PAR; 400–700 nm, E m^−2 d^−1) is the fraction of the total solar energy (MJ m^−2 d^−1) that is used by organisms for photosynthesis and vision. We present a statistical summary of a 17-yr time series of PAR data (1982–1998) collected near Chesapeake Bay, as well as a second data set on PAR and total solar energy gathered over a shorter time span (1997–1998). The time series data (5,126 daily totals) varied between 1 and 67 E m^−2 d^−1 and were used to estimate the minimum and maximum values of PAR as a function of day of the year. In monthly frequency distributions of the PAR data, three modes were observed, corresponding to sunny, partly cloudy and overcast days. The second set of PAR and total solar energy data was used to examine the ratio of PAR to total solar energy, which was 2.04 E MJ^−1 for PAR between 10 and 70 E m^−2 d^−1. On overcast days the ratio increased to as high as 3 E MJ^−1, as PAR increased in importance as a fraction of the total solar energy. These values are consistent with others in the literature, and the relationships reported here can be used to predict the climatology of PAR and total solar energy within the Chesapeake region. The PAR data were also combined with reported minimum values of PAR for net primary production in the surface mixed layer of aquatic systems to estimate the combinations of mixed-layer depth and diffuse attenuation coefficient (number of optical depths) under which light limitation of phytoplankton primary production is expected to occur.
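The light-limitation estimate described at the end follows from Beer's-law exponential attenuation: averaging PAR0·exp(−Kd·z) over a mixed layer of depth Zm gives PAR0·(1 − e^(−Kd·Zm))/(Kd·Zm). A Python sketch under that standard assumption (the threshold par_min would come from the literature minima the study cites; the function names are illustrative):

```python
import math

def mean_mixed_layer_par(surface_par, kd, zm):
    """Depth-averaged PAR (E m^-2 d^-1) in a mixed layer of depth zm (m),
    given surface PAR and a diffuse attenuation coefficient kd (1/m),
    assuming Beer's-law exponential attenuation."""
    return surface_par * (1 - math.exp(-kd * zm)) / (kd * zm)

def light_limited(surface_par, kd, zm, par_min):
    """True when the depth-averaged mixed-layer PAR falls below the
    minimum PAR required for net primary production."""
    return mean_mixed_layer_par(surface_par, kd, zm) < par_min
```

Note that kd·zm is the number of optical depths in the mixed layer, the combined quantity the abstract refers to: deepening the layer or increasing turbidity pushes the average PAR below the threshold in the same way.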

20.
A fully temperature-dependent model of the effective pressure of the solid matrix and the related overpressure has been derived from the pressure balance equation, mass conservation and Darcy’s law, and is directly useful in basin modeling. Application of the model in the Kuqa Depression of the Tarim Basin in western China shows that this overpressure model is highly accurate: the present-day values of the calculated overpressure histories of wells Kela2 and Yinan2 approach the field-measured data with mean absolute relative residuals of 3% and 5%, respectively. This indicates that overpressure simulation is a practical alternative to rock mechanics experiments for effective pressure measurement. Since the calculation of the overpressure history uses the outcomes of the geohistory and geothermal history models, the relevant input data and the output of the two models for the Kela2 well are given as examples. The case studies show that the pore fluid density and viscosity used in the calculation of overpressure should be temperature-dependent, otherwise the calculated results deviate far from the field-measured pressure data. They also show that the most sensitive parameter governing overpressure is permeability, which can be calculated using either the Kozeny–Carman formula or a porosity–power function. The Kozeny–Carman formula is better if accurate data for the specific surface area of the solid matrix (S_a) exist; otherwise, the porosity–power function is used. Furthermore, it is vital for calculating an accurate overpressure history to calibrate S_a in the Kozeny–Carman formula, or the index m in the porosity–power function, using field-measured pressure data as a constraint. In these case studies, the outcome obtained with the Kozeny–Carman formula approaches that obtained with the porosity–power function with m = 4, and both approach the field-measured pressure data.
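The two permeability models named above can be sketched as follows; the Kozeny constant c ≈ 5 and the exact form of the porosity–power function are assumptions for illustration, since the abstract does not state the formulas explicitly:

```python
def kozeny_carman(phi, s_a, c=5.0):
    """Permeability from porosity phi and the specific surface area s_a of
    the solid matrix (per unit solid volume); c is the Kozeny constant,
    commonly taken as ~5. A common form of the Kozeny-Carman relation."""
    return phi**3 / (c * s_a**2 * (1.0 - phi)**2)

def porosity_power(phi, k0, m):
    """Porosity-power permeability model k = k0 * phi**m, where m is the
    index calibrated against field-measured pressures (m = 4 above)."""
    return k0 * phi**m
```

In a calibration loop of the kind described, s_a (or m) would be adjusted until the simulated overpressure history reproduces the field-measured pressures within tolerance.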
