Similar Articles

20 similar articles found (search time: 31 ms)
1.
Abstract

The response of minor species to gravity waves in the thermosphere varies according to the mass of the species. The relative density perturbation of any minor constituent may be related to the relative density perturbation of the atmosphere due to the waves via a complex function of frequency and wavenumber that may be represented as an amplitude and phase response. Peaks and dips in response and large phase shifts are found that are associated with complex poles and zeroes of the response function. These poles and zeroes depend on background quantities, so that the nature of the response is model dependent; collisions are most important. The effects of background and collisions are examined using numerical computations. Relationships to some satellite observations are discussed.

2.
Popper's demarcation criterion should be applied to all our theories in geophysics to ensure that our science progresses. We must expose our theories to tests in which they stand some risk of being refuted. But if we have a theory which has no rivals, it may be difficult in practice to devise a test in which the theory risks being refuted conclusively. The example of the deconvolution problem for seismic data is considered for the case where the source wavelet is unknown. It is shown that all our existing theories of deconvolution are not scientific in Popper's sense; they are statistical models. We cannot compare these models in a way that is independent of the geology, for each model requires the geology to have a different set of statistical properties. Even in our chosen geology it may be extremely difficult to determine the most applicable model and hence determine the "correct" deconvolution theory. It is more scientific to attempt to solve the deconvolution problem (a) by finding the source wavelet first, deterministically, or (b) by trying to force the wavelet to be a spike, that is, by devising a "perfect" seismic source. A new method of seismic surveying, which has been proposed to tackle the deconvolution problem by the first of these approaches, is based on a theory which is open to refutation by a simple Popperian test. Since the theory makes no assumptions about the geology, the test has equal validity in any geology. It pays to frame our theories in such a way that they may easily be put at risk. Only in this way will we establish whether we are on firm ground. The alternative is simply to take things on trust.

3.
The number and geometric distribution of putative mantle up-welling centers and the associated convection cell boundaries are determined from the lithospheric plate motions as given by the 14 Euler poles of the observational NUVEL model. For an assumed distribution of up-welling centers (called "cell-cores") the corresponding cell boundaries are constructed by a Voronoi division of the spherical surface; the resulting polygons are called "Bénard cells." By assuming the flow-kinematics within a cell, the viscous coupling between the flow and the plates is estimated, and the Euler poles for the plates are computed under the assumption of zero net torque. The positions of the cell-cores are optimized for the HS2-NUVEL1 Euler poles by a method of successive approximation ("subplex"); convergence to one of many local minima occurred typically after ~20,000 iterations. Cell-cores associated with the fourteen HS2-NUVEL1 Euler poles converge to a relatively small number of locations (8 to 10, depending on interpretation), irrespective of the number of convection cells submitted for optimized distribution (from 6 to 50). These locations are correlated with low seismic propagation velocities in tomography, uniformly occur within hotspot provinces, and may specifically be associated with the Hawaiian, Iceland, Reunion/Kerguelen (Indian Ocean), Easter Island, Melanesia/Society Islands (South Pacific), Azores/Cape Verde/Canary Islands, Tristan da Cunha (South Atlantic), Balleny Islands, and possibly Yellowstone hotspots. It is shown that arbitrary Euler poles cannot occur in association with mantle Bénard convection, irrespective of the number and the distribution of convection cells. Nevertheless, eight of the observational Euler poles – including the five that are accurately determined in HS2-NUVEL1 (Australia, Cocos, Juan de Fuca, Pacific, and Philippine) – are "Bénard-valid" (i.e., can be explained by our Bénard model). Five of the remaining six observational poles must be relocated within their error-ellipses to become Bénard-valid; the Eurasia pole alone appears to be in error by ~115°, and may actually lie near 40°N, 154°E. The collective results strongly suggest Bénard-like mantle convection cells, and that basal shear tractions are the primary factor in determining the directions of the plate motions as given by the Euler poles. The magnitudes of the computed Euler vectors show, however, that basal shear cannot be the exclusive driving force of plate tectonics, and suggest force contributions (of comparable magnitude for perhaps half of the plates) from the lithosphere itself, specifically subducting slab-pull and (continental) collision drag, which are provisionally evaluated. The relationship of the putative mantle Bénard polygons to dynamic chaos and turbulent flow is discussed.
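The core of the construction above is the Voronoi division of the sphere: each surface point belongs to the cell-core nearest to it along a great circle. A minimal numerical sketch of that assignment step follows (this is only an illustration of the cell-boundary geometry, not the authors' subplex optimization; the four core coordinates are hypothetical stand-ins near hotspots named in the abstract):

```python
import numpy as np

def sph_to_xyz(lat_deg, lon_deg):
    """Convert latitude/longitude in degrees to unit vectors."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def voronoi_labels(points_xyz, cores_xyz):
    """Assign each point on the unit sphere to its nearest cell-core.
    Great-circle distance is monotone in the dot product, so the
    nearest core is the one with the maximum dot product."""
    return np.argmax(points_xyz @ cores_xyz.T, axis=1)

# hypothetical cell-cores near Hawaii, Iceland, Reunion, Easter Island
cores = sph_to_xyz(np.array([19.4, 64.8, -21.1, -27.1]),
                   np.array([-155.3, -17.5, 55.5, -109.4]))

# label a coarse 10-degree lat-lon grid into "Benard cells"
lats, lons = np.meshgrid(np.arange(-85, 86, 10.0), np.arange(-180, 180, 10.0))
labels = voronoi_labels(sph_to_xyz(lats.ravel(), lons.ravel()), cores)
print(np.bincount(labels))   # number of grid points in each cell
```

Because the assignment is a pure nearest-neighbour rule, the resulting polygons are convex on the sphere, which is what makes the "Bénard cell" interpretation geometrically well defined.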

4.
Abstract

Two basically different models have been proposed in order to give a rational explanation of Horton's law of stream numbers: the “cyclic” model and the “random graph” model. In the cyclic model, a “Horton net” must be Hortonian in all its parts, and therefore channels of different (Strahler) order must be hierarchically arranged to form successive “generations” of rivers; in the random graph model, channels join in a completely random fashion, and a “Horton net” is simply a net in which Horton's law of stream numbers is numerically satisfied.

In the present paper, these two models have been tested on a large stream population: the Wabash river system, in the continental U.S.A. This network is Hortonian, since the law of stream numbers is numerically satisfied with little scatter; but it shows no structural regularity at all. This seems to be a fairly general case. Therefore, the concept of structural regularity does not have its counterpart in nature; accordingly, the cyclic model does not correspond to reality. The random graph model, on the contrary, explains very well the observed facts: its basic statistical assumption, moreover, is found to be in agreement with observation.

5.
The Navier–Stokes-α equation is a regularised form of the Euler equation that has been employed in representing the sub-grid scales in large-eddy simulations. Determined efforts have been made to place it on a secure deductive foundation. This requires two steps to be completed. The first is fundamental and consists of establishing, from the equations governing the fluid flow, a relationship between two velocities called by Holm (Chaos, 2002a, 12, 518) the "filtered" and "unfiltered" velocities. The second consists of specifying the relation between these two velocities. Until now, the preferred route to the first objective has been variational, by varying the action using Hamilton's principle. Soward and Roberts (J. Fluid Mech., 2008, 604, 297) followed that variational route and established the existence of an important but unwelcome term omitted by Holm in his derivation. It is shown here that the Soward and Roberts result may be derived from Euler's equation by a direct approach with considerably greater efficiency. Holm achieved the second objective by making a "Taylor hypothesis", which we use here to evaluate the unwelcome term missing from his analysis of the first step. The resulting model equations differ from those of Holm's α model, and the attractive mean Kelvin circulation theorem that follows from his α equations is no longer valid. For that reason, we call the term omitted by Holm unwelcome.

6.
Diffraction and anelasticity problems involving decaying, "evanescent" or "inhomogeneous" waves can be studied and modelled using the notion of "complex rays". The wavefront or "eikonal" equation for such waves is in general complex and leads to rays in complex position-slowness space. Initial conditions must be specified in that domain: for example, even for a wave originating in a perfectly elastic region, the ray to a real receiver in a neighbouring anelastic region generally departs from a complex point on the initial-values surface. Complex ray theory is the formal extension of the usual Hamilton equations to complex domains. Liouville's phase-space-incompressibility theorem and Fermat's stationary-time principle are formally unchanged. However, an infinity of paths exists between two fixed points in complex space, all of which give the same final slowness, travel time, amplitude, etc. This does not contradict the fact that for a given receiver position there is a unique point on the initial-values surface from which this infinite complex ray family emanates.

In perfectly elastic media complex rays are associated with, for example, evanescent waves in the shadow of a caustic. More generally, caustics in anelastic media may lie just outside the real coordinate subspace, and one must trace complex rays around the complex caustic in order to obtain accurate waveforms nearby, or the turning waves at greater distances into the lit region. The complex extension of the Maslov method for computing such waveforms is described. It uses the complex extension of the Legendre transformation, and the extra freedom of complex rays makes pseudocaustics avoidable. There is no need to introduce a Maslov/KMAH index to account for caustics in the geometrical ray approximation, the complex amplitude being generally continuous. Other singular ray problems, such as the strong coupling around acoustic axes in anisotropic media, may also be addressed using complex rays.

Complex rays are insightful and practical for simple models (e.g. homogeneous layers). For more complicated numerical work, though, it would be desirable to confine attention to real position coordinates. Furthermore, anelasticity implies dispersion, so that complex rays are generally frequency dependent. The concept of group velocity as the velocity of a spatial or temporal maximum of a narrow-band wave packet does lead to real ray/Hamilton equations. However, envelope-maximum tracking does not itself yield enough information to compute synthetic seismograms. For anelasticity which is weak in certain precise senses, one can set up a theory of real, dispersive wave-packet tracking suitable for synthetic seismogram calculations in linearly visco-elastic media. The seismologically-acceptable constant-Q rheology of Liu et al. (1976), for example, satisfies the requirements of this wave-packet theory, which is adapted from electromagnetics and presented as a reasonable physical and mathematical basis for ray modelling in inhomogeneous, anisotropic, anelastic media. Dispersion means that one may need to do more work than for elastic media. However, one can envisage perturbation analyses based on the ray theory presented here, as well as extensions like Maslov's which are based on the Hamiltonian properties.

7.
Man's engineering activities are concentrated in the uppermost part of the earth's crust, which is called the engineering-geologic zone. This zone is characterized by significant spatial-temporal variation in the physical properties and state of rocks and their saturating waters. This variation determines the specificity of geophysical and, particularly, geoelectrical investigations. Planning of geoelectric investigations in the engineering-geologic zone and their subsequent interpretation requires a priori geologic-geophysical information on the main peculiarities of the engineering-geologic and hydrogeologic conditions in the region under investigation. This information serves as a basis for the creation of an initial geoelectric model of the section. Following field investigations, the model is used in interpretation. Formalization of this a priori model can be achieved by the solution of direct geoelectric problems. Additional geologic-geophysical information incorporated in the model of the medium makes it possible to diminish the effect of the "principle of equivalence" by introducing flexible limitations on the section's parameters. Further geophysical observations, as well as the correlations between geophysical and engineering-geologic parameters of the section, permit the next step in the specification of the geoelectric model and its approximation to the real medium. Subsequent corrections of the model are made as additional information accumulates. The solution of inverse problems with the utilization of computer programs permits specification of the model in the general iterative cycle of interpretation.

8.
During our recent work with 3-D dynamic ray-tracing and velocity inversion problems, a new 3-D model generation system has been developed using a so-called "solid modeling" technique. The term "solid modeling" refers to the fact that the logical system governing the internal geometrical properties of the model describes the model as a combination of "solids" or "volumes" in 3-D space. In each of these volumes the physical parameters (such as seismic velocity and density) vary continuously. Discontinuous changes occur only across the model interfaces separating the volumes. The model is constructed by first forming a number of "simple volumes" from the given interfaces and then combining these simple volumes into more complex volumes which represent the physical volumes of the model. It is easy to make changes to the model, by adding or subtracting volumes and performing more composite operations, all by simple use of Boolean expressions. Every time a model is specified (or changed), the internal logic automatically checks the physical consistency of the 3-D model space (no overlapping volumes, no holes). By including various types of coordinate transformations, different kinds of complex structures can be handled, such as salt domes and vertical and near-vertical faulting.
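The idea of combining "simple volumes" through Boolean expressions can be sketched with implicit membership predicates, where Python's operators stand in for union, intersection, and difference (a toy illustration of the concept only, not the actual system; `Volume`, `sphere`, and `halfspace_below` are invented names):

```python
class Volume:
    """A 3-D region defined by a membership predicate; operators build
    composite volumes from simple ones, as in 'solid modeling'."""
    def __init__(self, contains):
        self.contains = contains
    def __or__(self, other):    # union
        return Volume(lambda p: self.contains(p) or other.contains(p))
    def __and__(self, other):   # intersection
        return Volume(lambda p: self.contains(p) and other.contains(p))
    def __sub__(self, other):   # difference
        return Volume(lambda p: self.contains(p) and not other.contains(p))

def sphere(center, r):
    """Simple volume: all points within distance r of center."""
    return Volume(lambda p: sum((pi - ci) ** 2
                                for pi, ci in zip(p, center)) <= r * r)

def halfspace_below(z0):
    """Simple volume: all points with z <= z0."""
    return Volume(lambda p: p[2] <= z0)

# a crude 'salt dome' cap: the upper half of a unit sphere,
# built as a Boolean difference of two simple volumes
dome = sphere((0.0, 0.0, 0.0), 1.0) - halfspace_below(0.0)
print(dome.contains((0.0, 0.0, 0.5)), dome.contains((0.0, 0.0, -0.5)))
```

A consistency check of the kind the abstract mentions (no overlaps, no holes) could then be done by sampling points and verifying that each belongs to exactly one physical volume.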

9.
Nonlinear analysis of two-dimensional steady flows with density stratification in the presence of gravity is considered. Inadequacies of Long's model for steady stratified flow over topography are explored. These include the occurrence of closed streamline regions and waves propagating upstream. The usual requirements in Long's model of constant dynamic pressure and constant vertical density gradient in the upstream condition are believed to be the cause of these inadequacies. In this article, we consider a relaxation of these requirements, and also provide a systematic framework to accomplish this. As illustrations of this generalized formulation, exact solutions are given for the following two special flow configurations: the stratified flow over a barrier in an infinite channel; the stratified flow due to a line sink in an infinite channel. These solutions again exhibit closed-streamline regions as well as waves propagating upstream. The persistence of these inadequacies in the generalized Long's model appears to indicate that they are not quite consequences of the assumptions of constant dynamic pressure and constant vertical density gradient in Long's model, contrary to previous belief.

On the other hand, solutions admitted by the generalized Long's model show that departures from Long's model become small as the flow becomes more and more supercritical. They provide a nonlinear mechanism for the generation of columnar disturbances upstream of the obstacle and lead in subcritical flows to qualitatively different streamline topological patterns involving saddle points, which may describe the lee-wave-breaking process in subcritical flows and could serve as seats of turbulence in real flows. The occurrences of upstream disturbances in the presence of lee-wave-breaking activity described by the present solution are in accord with the experiments of Long (Long, R.R., "Some aspects of the flow of stratified fluids, Part 3. Continuous density gradients", Tellus 7, 341–357 (1955)) and Davis (Davis, R.E., "The two-dimensional flow of a stratified fluid over an obstacle", J. Fluid Mech. 36, 127–143 ()).

10.
We present an analysis of scattering diagrams (i.e., Feynman-like diagrams for wave scattering) of the correlation-type representation theorem for ordinary inhomogeneous media with both positive stiffnesses and positive Poisson's ratios. This analysis reveals scattering events whose scattering diagrams include "negative" bending (i.e., bending in the opposite direction of that of scattering diagrams in ordinary inhomogeneous media). Unlike common scattering events, these events are inconsistent with the current interpretation of some of the basic physical laws, such as Snell's law, just like the so-called "negative refraction" in optics. Yet we find them very useful, for instance, in suppressing some undesired events from scattering data.

11.
ABSTRACT

A monthly conceptual rainfall-runoff model that enjoys fairly widespread use in South Africa was calibrated for each of 50 calibration samples of lengths 3–20 years, drawn from a synthetic 101-year semiarid streamflow time series generated with the Stanford Watershed Model. The ability of Pitman's model to reconstruct the original 101 years of monthly streamflow for each of the 50 calibrations was then examined against a set of statistics of monthly and annual streamflows. The variabilities of key model parameters associated with different lengths of calibration period were also investigated. The results show that it is well worthwhile to increase the calibration period to about 15 years in order to reduce errors in reconstructed flow statistics. Merely increasing the length of calibration period from 6 to 10 years may decrease the error in most regenerated flow statistics by 30–50%. A fair amount of variability in "optimum" parameters, however, seems unavoidable, even at longer calibration periods, though this may also partly be due to imperfect model calibration. The effects of this variability are greatly attenuated in the reconstructed flow statistics due to parameter interdependence.

12.
New paleomagnetic measurements have been made on Tertiary volcanic rocks from northeast Jalisco, Mexico (20.7°N, 102.3°W). The pole position obtained from this study (68°N, 181°E) is consistent with two other Oligocene poles from Mexico. Mexican poles form a coherent group which differs from poles of similar ages from North America. This suggests a possible tectonic rotation of the sampling sites of Mexico with respect to "stable" North America.

13.
For the two- and three-layer cases, geo-electrical sounding graphs can be rapidly and accurately evaluated by comparing them with an adequate set of standard model graphs. The variety of model graphs required is reasonably limited, and the use of a computer is unnecessary for this type of interpretation. For more than three layers a compilation of model graphs is not possible, because the variety of curves required in practice increases immensely. To evaluate a measured graph under these conditions, a model graph is calculated by computer for an approximately determined resistivity profile, obtained, for example, by means of the auxiliary point methods. This model graph is then compared with the measured curve, and from the deviation between the curves a new resistivity profile is derived, whose model graph is calculated for another comparison procedure, and so on. This type of interpretation, although exact, is very inconvenient and time-consuming, because there is no simple method by which an improved resistivity profile can be derived from the deviations between a model graph and a measured graph. The aim of this paper is, on the one hand, to give a simple interpretation method, suitable for use during field work, for multi-layer geo-electrical sounding graphs and, on the other hand, to indicate an automatic evaluation procedure based on these principles, suitable for use by digital computer. This interpretation system is based on the resolution of the kernel function of Stefanescu's integral into partial fractions. The system consists of a calculation method for an arbitrary multi-layer case and a highly accurate approximation method for determining those partial fractions which are important for interpretation. The partial fractions are found by fitting three-layer graphs to a measured curve. Using the roots and coefficients of these partial fractions and simple equations derived from the kernel function of Stefanescu's integral, the thicknesses and resistivities of layers may be directly calculated for successively increasing depths. The system also provides a simple method for the approximate construction of model graphs.

14.
The "modified Picard" iteration method, which offers global mass conservation, can also be described as a form of Newton's iteration with lagged nonlinear coefficients. It converges to a time step with first-order discretization error. This paper applies second- and third-order diagonally implicit Runge-Kutta (DIRK) time steps to the modified Picard method in one example. It demonstrates improvements over the first-order time step in rms error and error-times-effort model quality by factors ranging from two to over two orders of magnitude, showing that the "modified Picard" and DIRK methods are compatible.
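The pairing described above — a lagged-coefficient ("modified Picard") nonlinear solve inside a higher-order DIRK time step — can be illustrated on the scalar model problem y′ = −y², whose exact solution is 1/(1+t). This is a minimal sketch under that stand-in problem, not the Richards-equation solver of the paper:

```python
import math

def picard_stage(base, hg, y0, tol=1e-12, maxit=100):
    """Solve the implicit stage Y = base + hg*f(Y), f(Y) = -Y*Y, by
    lagging the nonlinear coefficient: Y_{k+1} = base / (1 + hg*Y_k)."""
    y = y0
    for _ in range(maxit):
        y_new = base / (1.0 + hg * y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

def step_be(y, h):
    """Backward Euler (first-order) via the lagged Picard solve."""
    return picard_stage(y, h, y)

def step_dirk2(y, h):
    """Two-stage stiffly accurate SDIRK, gamma = 1 - 1/sqrt(2): 2nd order."""
    g = 1.0 - 1.0 / math.sqrt(2.0)
    Y1 = picard_stage(y, h * g, y)
    f1 = -Y1 * Y1
    Y2 = picard_stage(y + h * (1.0 - g) * f1, h * g, Y1)
    return Y2                   # stiffly accurate: y_{n+1} = Y2

def integrate(step, y0, h, n):
    y = y0
    for _ in range(n):
        y = step(y, h)
    return y

# y' = -y^2, y(0) = 1, exact y(1) = 0.5
exact = 0.5
err_be = abs(integrate(step_be, 1.0, 0.05, 20) - exact)
err_dirk = abs(integrate(step_dirk2, 1.0, 0.05, 20) - exact)
print(err_be, err_dirk)
```

The same lagged Picard routine serves both steppers, which is the compatibility point: the DIRK structure only changes what "base" and the effective step fraction are at each stage.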

15.
It has been necessary to resort to the use of "long-line" refraction marine operations in certain areas where it proved impossible to eliminate singing from reflection records despite the number and variety of programs at our disposal for this purpose. Experience has shown that manual processing of offshore refraction records takes a disproportionate length of time in comparison with the surveys themselves, and this is incompatible with the requirements for choosing the site of an exploration well. It thus became necessary to find an "industrial approach" to the solution of this processing problem. It was apparent that automatic picking could also facilitate the interpretation of land refraction data, and that in the case of both marine and land work the interpretations would be more accurate when factors were taken into account which could not be considered when working without the aid of a digital computer. For these reasons a set of programs was developed for automatic picking and interpretation of refracted arrivals. The picking itself consists of searching for the maximum values of the normalized cross-correlation functions of the traces with a "model" trace. The first results thus supplied are: "picked" times, intercept times, maximum values of the correlations, and the values of the tie constants between overlapping spreads. Next, the construction of the relative intercept time curves is performed; a statistical analysis of these curves then allows the determination of the offset distance. From these elements, one of two products is then generated:

– either the delay time curve is produced, after ensuring correct reciprocal times by means of additional minor corrections. Such work is carried out in order to enable the geophysicist to gain a sound idea of the quality of the interpretation. To assist in this aim, part of the trace on both sides of the pick is plotted on the final documents. Valid groupings of several traces involving the same amount of refraction data are thus possible;

– or the refractor depth is constructed with the wavefront method, making use of the relative intercept times. Such a procedure, which is normally applied to first breaks, can also be used for later arrivals exhibiting slight interference and should represent an important step towards processing secondary arrivals with high interference.

The development of this package, in response to a need shared by both SNPA and CGG, is the result of the joint efforts of the Geophysical Group of SNPA's Pau Research Center and CGG's Technical and Scientific Departments.
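The heart of the automatic picking step — scanning for the maximum of the normalized cross-correlation between each trace and a model trace — can be sketched as follows. This is a synthetic demonstration with an assumed 30 Hz Ricker wavelet and arbitrary noise level, not the SNPA/CGG production code:

```python
import numpy as np

def pick_arrival(trace, model, dt):
    """Pick the arrival time of `model` within `trace` as the lag that
    maximizes the normalized cross-correlation (sliding-window norm)."""
    n = len(model)
    m = model - model.mean()
    m = m / np.linalg.norm(m)
    best_lag, best_cc = 0, -np.inf
    for lag in range(len(trace) - n + 1):
        w = trace[lag:lag + n] - trace[lag:lag + n].mean()
        nw = np.linalg.norm(w)
        if nw == 0.0:
            continue
        cc = float(m @ (w / nw))
        if cc > best_cc:
            best_cc, best_lag = cc, lag
    return best_lag * dt, best_cc

# synthetic demo: a 30 Hz Ricker wavelet buried in noise at 0.30 s
rng = np.random.default_rng(1)
dt = 0.004
t = np.arange(-0.05, 0.05, dt)
wavelet = (1 - 2 * (np.pi * 30 * t) ** 2) * np.exp(-(np.pi * 30 * t) ** 2)
trace = 0.05 * rng.standard_normal(500)
onset = 75                               # sample index -> 0.30 s
trace[onset:onset + len(wavelet)] += wavelet

t_pick, cc = pick_arrival(trace, wavelet, dt)
print(t_pick, cc)
```

The maximum correlation value itself is one of the "first results" listed in the abstract, since it doubles as a quality indicator for the pick.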

16.
The reliability of an Apparent Polar Wander Path (APWP) obviously depends on the paleomagnetic poles used to determine it. The APWPs of Africa and South America are fairly well defined for the 330–260 Ma interval. However, this study pointed out a moderate shift between these two curves, and an incoherency of the South American data, contrary to the African data, which are homogeneous. A number of South American pole positions were re-evaluated in an effort to better constrain the APWP for the entire continent. Most of the discarded poles correspond to sites in the area of the junction of the Cordillera with the stable craton. That could have structural implications for the evolution of the western margin of Gondwana. A new criterion for the evaluation of the reliability of paleomagnetic poles for APWP construction is presented. Based on the comparison of data from different continents and labeled the "coherence" criterion, it is independent of Van der Voo's criteria.

17.
In this study, we attempt to offer a solid physical basis for the deterministic fractal–multifractal (FM) approach in geophysics (Puente, Phys Let A 161:441–447, 1992; J Hydrol 187:65–80, 1996). We show how the geometric construction of derived measures, as Platonic projections of fractal interpolating functions transforming multinomial multifractal measures, naturally defines a non-trivial cascade process that may be interpreted as a particular realization of a random multiplicative cascade. In such a light, we argue that the FM approach is as "physical" as any other phenomenological approach based on Richardson's eddy splitting, which indeed leads to well-accepted models of the intermittencies of nature, as happens, for instance, when rainfall is interpreted as a quasi-passive tracer in a turbulent flow. Although neither a fractal interpolating function nor the specific multipliers of a random multiplicative cascade can be measured physically, we show how a fractal transformation "cuts through" plausible scenarios to produce a suitable realization that reflects specific arrangements of energies (masses) as seen in nature. This explains why the FM approach properly captures the spectrum of singularities and other statistical features of given data sets. As the FM approach faithfully encodes data sets with compression ratios typically exceeding 100:1, such a property further enhances its "physical simplicity." We also provide a connection between the FM approach and advection–diffusion processes.

18.
We propose a fast method for imaging potential field sources. The new method is a variant of the "Depth from Extreme Points" method, which yields an image of a quantity proportional to the source distribution (magnetization or density). This transformed field is converted into source-density units by determining a constant with the adequate physical dimension through a linear regression of the observed field against the field computed from the "Depth from Extreme Points" image. Such source images are often smooth and too extended, reflecting the loss of spatial resolution with increasing altitude. Consequently, they also present values of the source density that are too low. We show here that this initial image can be improved and made more compact to achieve a more realistic model, which reproduces a field consistent with the observed one. The new algorithm, called "Compact Depth from Extreme Points", iteratively produces different source-distribution models with an increasing degree of compactness and, correspondingly, increasing source-density values. This is done by weighting the model with a compacting function. The compacting function may be conveniently expressed as a matrix that is modified at each iteration, based on the model obtained in the previous step. At any iteration step the process may be stopped when the density reaches values higher than prefixed bounds based on known or assumed geological information. As no matrix inversion is needed, the method is fast and allows the analysis of massive datasets. Due to the high stability of the "Depth from Extreme Points" transformation, the algorithm may also be applied to any derivatives of the measured field, thus yielding an improved resolution. The method is investigated by application to 2D and 3D synthetic gravity source distributions, and the imaged sources are a good reconstruction of the geometry and density distributions of the causative bodies. Finally, the method is applied to microgravity data to model underground crypts in St. Venceslas Church, Tovacov, Czech Republic.

19.
It is shown that in the dynamics of a deep fluid of planetary scale such as the Earth's core, compressibility, stratification and self-gravitation are all important as well as rotation. The existing proof of Cowling's theorem prohibiting non-stationary axisymmetric dynamos, and the application of the Proudman-Taylor theorem to core flows, both based on the assumption of solenoidal flow, need to be reconsidered. For sufficiently small (subacoustic) frequencies or reciprocal time scales, an approximation which neglects the effect of flow pressure on the density is valid. We call this the "subseismic approximation" and show that it leads to a new second-order partial differential equation in a single scalar variable describing the low frequency dynamical behaviour. The new "subseismic wave equation" allows a direct connection to be made between the various possible physical regimes of core structure and its dynamics.

20.
The Cole-Cole relaxation model has been found to provide good fits to multifrequency IP data and is derivable mathematically from a reasonable, albeit greatly simplified, physical model of conduction in porous rocks. However, the Cole-Cole model is used to represent the mutual impedance due to inductive or electromagnetic coupling on an empirical basis: this use has not been similarly justified by derivation from any simple physical representation of, say, a half-space, layered or uniform. A uniform conductive half-space can be represented as a simple subsurface loop with particular resistive and inductive properties. Based upon this, a mathematical expression for the mutual impedance between the two pairs of electrodes of a dipole-dipole array is derived and designated "model I". It is seen that a degenerate case of model I is the Cole-Cole model with frequency exponent c = 1. Model I is thus more general than the Cole-Cole expression and must provide at least as good a fit to a set of field data. Provision for variation of c from unity could be made in model I equally well as for the Cole-Cole model although, at present, this would be a purely empirical alteration. Model I contains four parameters, one of which is, in effect, the resistivity of the half-space. Therefore only three parameters are involved in the model I expressions for normalized amplitude and for phase of the EM-coupling mutual impedance. Model I is compared with previously published "standard" values for two different dipole separations. Under particular constraints, model I is shown to provide better fits than the Cole-Cole model (with c = 1) over particular frequency ranges, specifically at very low frequencies and at moderately high frequencies where the model I phase curve follows the standard phase curve across the axis to positive values (negative coupling).
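In the Pelton form commonly used for IP data, the Cole-Cole impedance is Z(ω) = R0·[1 − m(1 − 1/(1 + (iωτ)^c))], which reduces to a single Debye relaxation when c = 1 — the degenerate case of model I noted above. A quick numerical check of its limiting behaviour (the parameter values below are arbitrary examples):

```python
import numpy as np

def cole_cole(freq, r0, m, tau, c):
    """Pelton-style Cole-Cole impedance:
    Z(w) = r0 * (1 - m*(1 - 1/(1 + (i*w*tau)**c)))."""
    w = 2.0 * np.pi * np.asarray(freq, dtype=float)
    return r0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

freqs = np.logspace(-2, 4, 61)          # 0.01 Hz .. 10 kHz
z = cole_cole(freqs, r0=100.0, m=0.5, tau=0.01, c=1.0)   # c = 1: Debye case
phase_mrad = 1000.0 * np.angle(z)
print(z[0], z[-1], phase_mrad.min())
```

At low frequency the impedance tends to R0 and at high frequency to R0(1 − m), with a negative phase peak in between; with c = 1 that peak sits near ωτ ≈ √2, which is the behaviour model I generalizes.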


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号