Similar Documents (20 results)
1.
There has been considerable interest in estimating secular trends in precipitation data in various regions of the world. It is therefore important to ascertain the manner in which errors of observation affect estimated trends. For this purpose we have compared trends at 1219 stations in the contiguous United States for two data sets: (a) original observations, also called raw observations, and (b) the observations adjusted to compensate for suspected errors. The adjustments were made at the National Climate Data Center, Asheville (Quinlan et al., 1987; Karl and Williams, 1987). In order to focus on the effects of observational errors we attempted to avoid the effects of filling in missing data by limiting the analysis to the period 1940–1984, for which the number of missing values is much smaller than in earlier periods. A least-squares linear regression was performed on the raw and adjusted data for each station and the slopes of the fitted lines were compared. The comparison was made for monthly, seasonal and annual precipitation values. The results for annual precipitation showed that 23 percent of the stations have trends of opposite signs in the raw and adjusted data. The trends were identical in annual data at only 11 percent of the stations. When monthly data are combined to form seasonal and annual averages, the magnitude of the difference between the slopes of the adjusted and the raw observations generally increases, indicating that the errors in the individual monthly observations are correlated. When the station data were averaged to obtain state-wide averages, the effects of the errors became less pronounced in most of the states. These results indicate that obtaining trends in precipitation from station data is a more difficult problem than has been realized.
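The slope comparison described above is easy to reproduce. Below is a minimal Python sketch, with a synthetic station series standing in for the raw and NCDC-adjusted records (all names and numbers are illustrative, not the study's data):

```python
import numpy as np

def trend_slope(values, years):
    """Least-squares linear trend (slope) of a precipitation series."""
    # polyfit returns [slope, intercept] for a degree-1 fit
    return np.polyfit(years, values, 1)[0]

# Hypothetical example: one station's annual precipitation, 1940-1984
years = np.arange(1940, 1985)
rng = np.random.default_rng(0)
raw = 900 + 0.5 * (years - 1940) + rng.normal(0, 80, years.size)
adjusted = raw + rng.normal(0, 20, years.size)   # stand-in for the adjustments

s_raw, s_adj = trend_slope(raw, years), trend_slope(adjusted, years)
print(f"raw slope: {s_raw:.3f} mm/yr, adjusted slope: {s_adj:.3f} mm/yr")
print("opposite signs" if s_raw * s_adj < 0 else "same sign")
```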

2.
Summary In Part 1 and in a subsequent Part 2, to be published, two methods of adjusting a spatial terrestrial network in three-dimensional space are described. Care has been taken that the nature of the equations used, as well as of the adjustment, corresponds to that used in the adjustment of satellite networks. The adjustment was carried out by the least-squares method according to conditioned observations. Various types of condition equations have been constructed according to the various types of adjusted quantities and the various alternatives of the introduced errors (changes of input values) and weights. An effort was made to eliminate the ellipsoid of reference to the largest extent. The theory was applied numerically to a model of a smaller network which corresponds in position and height to usual triangulation networks with side lengths of about 30 km. Dedicated to the 90th birthday of Professor František Fiala.

3.
The interpretation of magnetic anomalies on the basis of model bodies is usually done by means of "trial and error" methods. These manual methods are tedious and time-consuming, but they can be transferred to the computer by making the required adjustments by way of the method of least squares. The general principles of the method are described. Essential prerequisites are the following:
  • 1 the assumption of definite model bodies
  • 2 the existence of approximation values of the unknown quantities (position, dip, magnetization, etc.)
  • 3 a sufficiently large number of measuring values, so that the process of adjustment can be carried out.
The advantages of the method are the following:
  • 1 substantial automation and a quick procedure by using computers
  • 2 determination of the errors of the unknown quantities.
The method was applied to the interpretation of two-dimensional ΔZ- and ΔT-anomalies. Three types of model bodies are taken as the basis of the computer program, viz. the thin dyke, either infinite or finite in its downward extension, and the circular cylinder. Only the measured values are given to the computer. The interpretation proceeds in the following steps:
  • 1 calculation of approximation values
  • 2 determination of the model body of best fit
  • 3 iteration in the case of the model body of best fit.
The computer produces the end values of the unknown quantities, their mean errors, and the pertaining theoretical anomalies. These end results are given to a plotting machine, which draws the measured curve, the theoretical curve and the model bodies. Interpretation examples are given.
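The adjustment loop sketched above (approximation values, selection of the best-fitting model body, iteration) is essentially nonlinear least squares. A minimal sketch follows, fitting a schematic thin-dyke profile with scipy; the anomaly formula and all parameter values are illustrative stand-ins, not the authors' exact model:

```python
import numpy as np
from scipy.optimize import least_squares

def dyke_anomaly(x, amp, x0, depth, phase):
    """Schematic thin-dyke profile: amplitude, horizontal position,
    depth to top, and an index-parameter-like phase angle (hypothetical)."""
    u = x - x0
    return amp * (depth * np.cos(phase) + u * np.sin(phase)) / (u**2 + depth**2)

# synthetic "measured" profile with noise
x = np.linspace(-200.0, 200.0, 81)
rng = np.random.default_rng(1)
observed = dyke_anomaly(x, 5e4, 12.0, 40.0, 0.6) + rng.normal(0, 20, x.size)

# step 1: approximation values; steps 2-3: iterative least-squares adjustment
p0 = [3e4, 0.0, 30.0, 0.3]
fit = least_squares(lambda p: dyke_anomaly(x, *p) - observed, p0)
print("end values:", np.round(fit.x, 2))
```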

4.
Summary In order to evaluate the accuracy of measurements repeated by a set of gravimeters, the semi-systematic errors σ₂ and σ₃ were introduced in [1] besides the random error σ₁. It is shown that Eqs. (10) and (11), given in [1], should not be used to determine these errors, but rather Eqs. (8) and (9).

5.
The motivation for this paper is to provide expressions for first-order partial derivatives of reflection point coordinates, taken with respect to model parameters. Such derivatives are expected to be useful for processes dealing with the problem of estimating velocities for depth migration of seismic data. The subject of the paper is a particular aspect of ray perturbation theory, where observed parameters (two-way reflection time and horizontal components of slowness) constrain the ray path when parameters of the reference velocity model are perturbed. The methodology described here is applicable to general rays in a 3D isotropic, heterogeneous medium. Each ray is divided into a shot ray and a receiver ray, i.e., the ray portions between the shot/receiver and the reflection point, respectively. Furthermore, by freezing the initial horizontal slowness of these subrays as the model is perturbed, elementary perturbation quantities may be obtained, comprising derivatives of ray hit positions within the isochrone tangent plane, as well as corresponding time derivatives. The elementary quantities may be estimated numerically, by use of ray perturbation theory, or in some cases analytically. In particular, when the layer above the reflection point is homogeneous, explicit formulas can be derived. When the elementary quantities are known, reflection point derivatives can be obtained efficiently from a set of linear expressions. The method is applicable for common shot, receiver or offset data sorting. For these gather types, the reflection point perturbation laterally with respect to the isochrone is essentially different. However, in the perpendicular direction, a first-order perturbation is shown to be independent of gather type. To evaluate the theory, reflection point derivatives were estimated analytically and numerically. I also compared first-order approximations to true reflection point curves, obtained by retracing rays for a number of model perturbations. The results are promising, especially with respect to applications in sensitivity analysis for prestack depth migration and in velocity model updating.

6.
I. Introduction. In this section the problem is stated, its physical and mathematical difficulties are indicated, and the way the authors try to overcome them is briefly outlined. Made up of a few measurements of limited accuracy, an electrical sounding does not define a unique solution for the variation of the earth resistivities, even in the case of an isotropic horizontal layering. Interpretation (i.e. the determination of the true resistivities and thicknesses of the ground layers) therefore requires additional information drawn from various more or less reliable geological or other geophysical sources. The introduction of such information into an automatic processing is rather difficult; hence the authors developed a two-stage procedure:
  • a) the field measurements are automatically processed, without loss of information, into more easily usable data;
  • b) some additional information is then introduced, permitting the determination of several geologically conceivable solutions.
The final interpretation remains with the geophysicist, who has to adjust the results of the processing to all the specific conditions of his actual problem. II. Principles of the procedure. In this section the fundamental idea of the procedure is given as well as an outline of its successive stages. Since the early thirties, geophysicists have been working on direct methods of interpreting E.S. related to a tabular ground (a sequence of parallel, homogeneous, isotropic layers of thicknesses hi and resistivities ρi). They generally started by calculating the Stefanesco (or a similar) kernel function from the integral equation of the apparent resistivity, in which r is the distance between the current source and the observation point, S0 the Stefanesco function, ρ(z) the resistivity as a function of the depth z, J1 the Bessel function of order 1 and λ the integration variable. Thicknesses and resistivities then had to be deduced from S0 step by step. Unfortunately, it is difficult to perform this type of procedure automatically, due to the rapid accumulation of the errors originating in the experimental data, which may lead to physically impossible results (e.g. negative thicknesses or resistivities) (II.1). The authors start from a different integral representation of the apparent resistivity, whose kernel involves K1, the modified Bessel function of order 1. Using the dimensionless variables t = r/2h0 and y(t) = ζ(r)/ρ1, and subdividing the earth into layers of equal thickness h0 (the highest common factor of the thicknesses hi), the kernel φ becomes an even periodic function (period 2π). The advantage of this representation is that its kernel φ (a function of the resistivities of the layers), if positive or null, always yields a sequence of positive resistivities for all values of θ, and thus a solution which is surely acceptable physically, if not geologically (II.3). Besides, it can be proved that φ(θ) is the Fourier transform of the sequence of the electric images of the current source in the successive interfaces (II.4). Thus, the main steps of the procedure are: a) determination of a non-negative, periodic, even function φ(θ) which satisfies in the best way the integral equation of apparent resistivity at the points where measurements were made; b) a Fourier transform gives the electric images, from which c) the resistivities are obtained. This sequence of resistivities is called the "comprehensive solution"; it includes all the information contained in the original E.S. diagram, even if its excessive detail has no practical significance. Simplification of the comprehensive solution leads to geologically conceivable distributions (h, ρ) called "particular solutions". The smoothing is carried out through the Dar-Zarrouk curve (Maillet 1947), which shows the variation of the parameters (transverse resistance Ri = hi·ρi as a function of the longitudinal conductance Ci = hi/ρi) well suited to reflect the laws of electrical prospecting (principles of equivalence and suppression). Comprehensive and particular solutions help the geophysicist in making the final interpretation (II.5). III. Computing methods. In this section the mathematical operations involved in processing the data are outlined.
The function φ(θ) is given by an integral equation; but taking into account the small number and the limited accuracy of the measurements, φ(θ) is determined by minimising the weighted mean square of the relative differences between the measured and the calculated apparent resistivities, Σl pl [(ymeas(tl) − ycalc(tl))/ymeas(tl)]² = min, subject to the non-negativity of φ as an inequality constraint, where tl are the values of t for the sequence of measured resistivities and pl are weights chosen according to their estimated accuracy. When the integral in this expression is replaced by a finite sum, the minimization becomes a problem known as quadratic programming. Moreover, the geophysicist may, if necessary, require that the automatic solution keep close to a given distribution (h, ρ) (resulting, for instance, from a preliminary interpretation); if φ*(θ) denotes the kernel corresponding to that fixed distribution, the quantity to minimize is augmented with a term penalising the departure of φ(θ) from φ*(θ). The images are then calculated by Fourier transformation (III.2) and the resistivities are derived from the images through an algorithm almost identical to a procedure used in seismic prospecting (determination of the transmission coefficients) (III.3). As for the presentation of the results, resorting to the Dar-Zarrouk curve permits: a) obtaining a diagram somewhat similar to the E.S. curve (bilogarithmic coordinates: cumulative R and C), that is, an already "smoothed" diagram where deeper layers show up less than superficial ones, and b) simplifying the comprehensive solution. In fact, in arithmetic scales (R versus C) the Dar-Zarrouk curve consists of a many-sided polygonal contour which must be replaced by an "equivalent" contour having a smaller number of sides. Though possible manually, this operation is performed automatically, and additional constraints (e.g. geological information concerning thicknesses and resistivities) can be introduced at this stage. At present, the constraint used is the number of layers (III.4). Each solution (comprehensive and particular) is checked against the original data by calculating the E.S. diagrams corresponding to the proposed distributions (thickness, resistivity). If the discrepancies are too large, the process is resumed (III.5). IV. Examples. Several examples illustrate the procedure (IV). The first ones concern calculated E.S. diagrams, i.e. curves devoid of experimental errors and corresponding to a known distribution of resistivities and thicknesses (IV.1). Example 1 shows how an E.S. curve is sampled. Several distributions (thickness, resistivity) were found: one is similar to the original one, others differ from it, although all E.S. diagrams are alike and the characteristic parameters (transverse resistance of resistive layers and longitudinal conductance of conductive layers) are well determined. Additional information must be introduced by the interpreter to remove the indeterminacy (IV.1.1). Examples 2 and 3 illustrate the principles of equivalence and suppression and give an idea of the sensitivity of the process, which seems accurate enough to make a correct distinction between calculated E.S. whose difference is less than what might be considered significant in field curves (IV.1.2 and IV.1.3). The following example (number 4) concerns a multi-layer case which cannot be correctly approximated by a much smaller number of layers.
It indicates that the result of the processing reflects correctly the trend of the changes in resistivity with depth, but that, without additional information, several equally satisfactory solutions can be obtained (IV.1.4). A second series of examples illustrates how the process behaves in the presence of different kinds of errors in the original data (IV.2). A few anomalous points inserted into a series of accurate resistivity values cause no problem, since the automatic processing practically replaces the wrong values (example 5) by what they would have been had the E.S. diagram not been wilfully disturbed (IV.2.1). However, the procedure becomes less able to make a correct distinction as the number of erroneous points increases. Weights must then be introduced, in order to determine the tolerance acceptable at each point as a function of its supposed accuracy. Example 6 shows how the weighting system works (IV.2.2). The foregoing examples concern E.S. which include anomalous points that might have been caused by erroneous measurements. Geological effects (dipping layers for instance), while continuing to give smooth curves, might introduce anomalous curvatures in an E.S. Example 7 indicates that in such a case the automatic processing gives distributions (thicknesses, resistivities) whose E.S. diagrams differ from the original curve only where curvatures exceed the limit corresponding to a horizontal stratification (IV.2.3). Numerous field diagrams have been processed (IV.3). A first case (example 8) illustrates the various stages of the operation, chiefly the sampling of the E.S. (choice of the left cross, the weights and the resistivity of the substratum) and the selection of a solution adapted from the automatic results (IV.3.1). The following examples (Nos. 9 and 10) show that electrical prospecting for deep-seated layers can be usefully guided by the automatic processing of the E.S., even when difficult field conditions give original curves of low accuracy. A borehole proved the automatic solution proposed for E.S. No. 10, slightly modified by the interpreter, to be correct.
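The core numerical step above, minimizing a weighted squared misfit subject to non-negativity of the kernel, is a quadratic program. The sketch below shows a discretized analogue using a non-negative least-squares solver; the kernel matrix and data are synthetic stand-ins for the discretized integral operator:

```python
import numpy as np
from scipy.optimize import nnls

# Discretized analogue of the constrained fit: find non-negative kernel
# samples phi >= 0 minimizing || W (G @ phi - y) ||^2, where G is a
# hypothetical discretized integral operator and W holds the weights p_l.
rng = np.random.default_rng(2)
G = np.abs(rng.normal(size=(30, 12)))       # stand-in kernel matrix
phi_true = np.array([0, 0.2, 1.0, 0.5] + [0.0] * 8)
y = G @ phi_true + rng.normal(0, 0.01, 30)  # "measured" apparent resistivities
w = np.ones(30)                             # weights p_l (all equal here)

phi_hat, residual = nnls(np.diag(w) @ G, w * y)
print("non-negative solution:", np.round(phi_hat, 2))
```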

7.
Artificial time-delay feed-forward neural networks (NN) with one hidden layer and error back-propagation learning are used to predict surface air temperatures (SAT) for six hours up to one day ahead. The networks were trained and tested with data covering a total of 26280 hours (three years) monitored in the period 1998–2000 at the Spořilov site (a suburban area of Prague). The NN models provided a good fit to the measured data. Phase diagrams as well as the results of the variability studies of SAT revealed a fundamental difference between summer temperatures (April–September) and winter temperatures (October–March). Results of the trial runs indicated that NN models perform better when the two periods are trained separately. The results show that relatively simple neural networks, with an adequate choice of the input data, can achieve reasonably good accuracy in one-lag as well as in multi-lag predictions. For the summer period the total errors give a mean accuracy of the predicted values of 0.055 and 0.044 in the training and testing sets, respectively. A similarly high mean accuracy of the simulated values, 0.057 and 0.065, was obtained for the training and testing sets in the winter season. Similarly good results, with a mean error of 0.028, were obtained for the summer period of the year 2001, which was used for additional testing (see Appendix). The higher accuracy obtained for the year 2001 is due to the fact that warm temperature extremes, which are generally predicted with less accuracy, did not occur in the summer of 2001.
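A one-lag predictor of the kind described can be assembled from lagged inputs and a single hidden layer. The sketch below uses scikit-learn on a synthetic hourly series standing in for the Spořilov record (network size, lag count and data are illustrative, not the paper's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic hourly SAT series standing in for the Spořilov record
t = np.arange(24 * 365)
sat = 10 + 8 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(3).normal(0, 1, t.size)

# time-delay inputs: the previous `lags` hours predict the next hour (one-lag)
lags = 6
X = np.column_stack([sat[i:i - lags] for i in range(lags)])
y = sat[lags:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.25)
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
nn.fit(X_tr, y_tr)
print("test R^2:", round(nn.score(X_te, y_te), 3))
```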

8.
Binary data such as survival, hatching and mortality are assumed to be best described by a binomial distribution. This article provides a simple and straightforward approach for deriving a no/lowest observed effect level (NOEL/LOEL) in a one-to-many control versus treatments setup. In practice, NOEL and LOEL values can be derived by means of different procedures, e.g. using Fisher's exact test together with adjusted p values. However, using adjusted p values heavily decreases statistical power. Alternatively, multiple t tests (e.g. the Dunnett test procedure) together with arcsine-square-root transformations can be applied in order to account for the variance heterogeneity of binomial data. The arcsine-square-root transformation, however, violates normality because the transformed data are constrained, while a normal distribution has support on \((-\infty ,\infty )\). Furthermore, results of statistical tests relying on an approximate normal distribution are themselves only approximate. When testing for trends in probabilities of success (probs), the step-down Cochran–Armitage trend test (CA) can be applied. Its test statistic is approximately normal; however, if probs approach 0 or 1, the normal approximation of the null distribution is suboptimal, and critical values and p values thus lack statistical accuracy. We propose applying the closure principle (CP) together with the Fisher–Freeman–Halton test (FISH). The resulting CPFISH can solve the problems mentioned above. CP is used to overcome \(\alpha\)-inflation, while FISH is applied to test for differences in probs between the control and any subset of treatment groups. Its applicability is demonstrated on real data sets. Additionally, we performed a simulation study of 81 different setups (differing numbers of control replicates, numbers of treatments, etc.) and compared the results of CPFISH to CA, allowing us to point out the advantages and disadvantages of CPFISH.
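For comparison with the CPFISH approach, the baseline procedure mentioned first (Fisher's exact test with adjusted p values) looks roughly as follows; the counts are hypothetical and the Holm adjustment stands in for whichever multiplicity correction is used:

```python
import numpy as np
from scipy.stats import fisher_exact

# Mortality counts: control plus increasing treatment levels (hypothetical)
dead = np.array([1, 2, 3, 8, 15])
alive = np.array([19, 18, 17, 12, 5])

# Fisher's exact test of each treatment against the control ...
p = [fisher_exact([[dead[0], alive[0]], [d, a]])[1]
     for d, a in zip(dead[1:], alive[1:])]

# ... with a Holm step-down adjustment (the power cost noted above)
order = np.argsort(p)
m = len(p)
adj = np.empty(m)
running = 0.0
for rank, idx in enumerate(order):
    running = max(running, (m - rank) * p[idx])
    adj[idx] = min(1.0, running)
print("raw:", np.round(p, 4), "holm-adjusted:", np.round(adj, 4))
```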

9.
The accuracy and precision of microseismic event locations were measured, analyzed, and compared for two types of location system: analog and digital. In the first system, relative times of first arrival were estimated from analog signals using automated hardware circuitry, station positions were estimated from mine map coordinates, and event locations were determined using the BLD (Blake, Leighton, and Duvall) direct solution method. In the second system, arrival times were manually measured during interactive displays of digital waveforms, station coordinates were surveyed, and the SW-GBM (Salamon and Wiebols; Godson, Bridges, and McKavanagh) direct basis function was used to solve for locations. Both systems assume a constant isotropic seismic velocity, of slightly different value in each. Two data sets, calibration blast signals with known source site and origin time, and microseismic event signals, were recorded by each location system employing the same array of high-frequency (5 kHz) accelerometers with 150 m maximum dimension. The calibration blast tests indicated a location precision of ±2 m and an accuracy of ±10 m for the analog system. Location precision and accuracy for the digital system measured ±1 m and ±8 m, respectively. Numerical experiments were used to assess the contributions of errors in velocity, arrival times, and station positions to the location accuracy and precision of each system. Measured and estimated errors appropriate to each system for microseismic events were simulated in computing source locations for comparison with exact synthetic event locations. Discrepancy vectors between exact locations and locations calculated with known data errors averaged 7.7 and 1.4 m for the analog and digital systems, respectively. These averages are probably more representative of the location precision of microseismic events, since the calibration blast tests produce impulsive seismic arrivals resulting in smaller arrival-time pick errors in the analog system. For both systems, location accuracy is limited by inadequate modeling of the velocity structure. Consequently, when isotropic velocity models are used in the travel-time inversions, the increased effort expended with the digital location system does not, for the particular systems studied, result in increased accuracy.

10.
Temporal characteristics of the famous Matsushiro earthquake swarm were investigated quantitatively using point-process analysis. Analysis of the earthquake occurrence rate revealed not only the precise and interesting process of the swarm, but also the relation between pore water pressure and the strength of the epidemic effect, and the modified Omori-type temporal decay of earthquake activity. The occurrence rate function λ(t) for this swarm is represented well as λ(t) = f(t) + Σ_{ti<t} K/(t − ti + c)^p, where f(t) represents the contribution of the swarm driver, in this case water erupting from depth, and the second term represents an epidemic effect of the modified Omori type. Based on changes in the form of f(t), this two-year-long swarm was divided into six periods and one short transitional epoch. The form of f(t) in each period revealed the detail of the water eruption process. In the final stage, f(t) decayed according to the modified Omori formula, while it decayed exponentially during the brief respite of the water eruption in the fourth period. When an exponential decay of swarm activity is observed, we have to be cautious of a sudden restart of violent activity. The epidemic effect is stronger when the pressure of the pore water is higher. Even when the pressure is not high, the p value in the epidemic effect is small when there is plenty of pore water. However, the epidemic effect produced about a quarter of the earthquakes even though there was not much pore water in the rocks.
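The rate function above is straightforward to evaluate once event times and a driver term are specified. A minimal sketch, assuming the standard modified-Omori epidemic form and entirely hypothetical parameter values:

```python
import numpy as np

def swarm_rate(t, event_times, driver, K=0.5, c=0.01, p=1.1):
    """lambda(t) = f(t) + sum over past events of K / (t - t_i + c)^p
    (driver term plus a modified-Omori epidemic term; parameters hypothetical)."""
    past = event_times[event_times < t]
    epidemic = np.sum(K / (t - past + c) ** p)
    return driver(t) + epidemic

# Toy driver: water eruption waning exponentially over ~100 days
f = lambda t: 5.0 * np.exp(-t / 100.0)
events = np.sort(np.random.default_rng(4).uniform(0, 200, 300))
for day in (10, 100, 190):
    print(f"day {day}: rate = {swarm_rate(day, events, f):.2f} events/day")
```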

11.
Summary As regards the concept of the complete weight p with which an observed quantity (e.g., a direction of the A–G net) should enter the net adjustment according to Eq. (1): apart from the fundamental weight p0, determined by the number of repetitions, it should also contain the time parameter pt according to Eq. (11), where c > 1 is a constant and t is the number of days of observation, and also the refraction factor pr according to Eqs. (17, 18), where q is the structural weight of the direction. The condition for being able to determine pr for the directions is observation by means of the three-directional vertex method [2], because it is not possible to localize lateral refraction by angular methods. The theory of complete weight favours observations with a high fundamental weight p0, which automatically yield higher values of t, and hence of pt. The introduction of the complete weight into the experimental directional net in Fig. 2 caused the mean value of the uneliminated refraction error to decrease from 0.24 to 0.12, the mean square error of the adjusted direction being 0.17. The value of the constant c was investigated, and the method of determining the parameter pr was derived also for lengths measured electro-optically. Mention is made of the effect of complete weights on the length adjustment of a net in [6].

12.
In recent years there has been a growing interest in using Godunov-type methods for atmospheric flow problems. Godunov's unique approach to numerical modeling of fluid flow is characterized by introducing physical reasoning in the development of the numerical scheme (van Leer, 1999). The construction of the scheme itself is based upon the physical phenomenon described by the equation sets. These finite volume discretizations are conservative and have the ability to resolve regions of steep gradients accurately, thus avoiding dispersion errors in the solution. Positivity of scalars (an important factor when considering the transport of microphysical quantities) is also guaranteed by applying the total variation diminishing condition appropriately. This paper describes the implementation of a Godunov-type finite volume scheme based on unstructured adaptive grids for simulating flows on the meso-, micro- and urban-scales. The Harten-Lax-van Leer-Contact (HLLC) approximate Riemann solver used to calculate the Godunov fluxes is described in detail. The higher-order spatial accuracy is achieved via gradient reconstruction techniques after van Leer and the total variation diminishing condition is enforced with the aid of slope-limiters. A multi-stage explicit Runge-Kutta time marching scheme is used for maintaining higher-order accuracy in time. The scheme is conservative and exhibits minimal numerical dispersion and diffusion. The subgrid scale diffusion in the model is parameterized via the Smagorinsky-Lilly turbulence closure. The scheme uses a non-staggered mesh arrangement of variables (all quantities are cell-centered) and requires no explicit filtering for stability. A comparison with exact solutions shows that the scheme can resolve the different types of wave structures admitted by the atmospheric flow equation set. A qualitative evaluation for an idealized test case of convection in a neutral atmosphere is also presented. The scheme was able to simulate the onset of Kelvin-Helmholtz type instability and shows promise in simulating atmospheric flows characterized by sharp gradients without using explicit filtering for numerical stability.
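A key ingredient named above, the total variation diminishing condition enforced by slope limiters during gradient reconstruction, can be illustrated in one dimension. The sketch below shows a minmod-limited linear reconstruction; it is a generic MUSCL-type building block, not the paper's unstructured 3-D implementation:

```python
import numpy as np

def minmod(a, b):
    """Classic TVD slope limiter: zero at extrema, the smaller slope otherwise."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def muscl_faces(q):
    """Limited linear (van Leer-type) reconstruction of left/right face states
    for the interior cells of a 1-D field q; boundary cells omitted for brevity."""
    dl = q[1:-1] - q[:-2]   # backward differences
    dr = q[2:] - q[1:-1]    # forward differences
    slope = minmod(dl, dr)
    return q[1:-1] - 0.5 * slope, q[1:-1] + 0.5 * slope  # left/right face states

q = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # step: the limiter keeps it monotone
ql, qr = muscl_faces(q)
print("left states:", ql, "\nright states:", qr)
```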

13.
During substorms, large-scale changes in the topology of the Earth's magnetosphere following the variation of the characteristics of the interplanetary medium are accompanied by the induction of an electric field. In this study a model of a time-dependent magnetosphere is constructed and the large-scale features of the induced electric field are described. Local-time sectors with an upward or downward field-aligned component and with an intense perpendicular component of the electric field are distinguished. The electric-field structure implies the existence of outflow regions particularly effective in ion energization. With the vector potential adopted in the study, the region from which the most energized ions originate is defined by the local-time sector near 2100 MLT and the latitude zone near 71° MLAT. The motion of ionospheric oxygen ions of energy 0.3–3 keV is investigated during a 5-min reconfiguration event in which the tail-like magnetospheric field relaxes to the dipole-like field. As the characteristics of plasma in the regions near the equatorial plane affect the substorm evolution, the energy, pitch angle, and magnetic moment of ions in these regions are analyzed. These quantities depend on the initial energy and pitch angle of the ion and on the magnetic and electric fields it encounters on its way. With the vector potential adopted, the energy attained in the equatorial regions can reach hundreds of keV. Three regimes of magnetic-moment changes are identified: adiabatic, oscillating, and monotonous, depending on the ion's initial energy and pitch angle and on the magnetic- and electric-field spatial and temporal scales. The implications for global substorm dynamics are discussed.

14.
We consider an inverse problem of determining the short-period (high-frequency) radiator in an extended earthquake source. This radiator is assumed to be noncoherent (i.e., random); it can be described by its power flux or brightness (which depends on time and location over the extended source). To determine this radiator we use the temporal intensity function (TIF) of a seismic waveform at a given receiver point, defined as the (time-varying) mean elastic wave energy flux through unit area. We suggest estimating it empirically from the velocity seismogram by squaring and smoothing, and refer to this function as the observed TIF. We believe that the TIF produced by an extended radiator and recorded at some receiver point in the earth can be represented as the convolution of two components: (1) the ideal intensity function (ITIF), which would be recorded in an ideal nonscattering earth from the same radiator; and (2) the intensity function which would be recorded in the real earth from a unit point instant radiator (the intensity Green's function, IGF). This representation enables us to estimate the ITIF of a large earthquake by inverse filtering or deconvolution of the observed TIF of this event, using the observed TIF of a small event (actually, a fore- or aftershock) as the empirical IGF. The effect of scattering is thereby stripped off. Examples of the application of this procedure to real data are given. We also show that if one can determine the far-field ITIF for enough rays, one can extract from them information on the space-time structure of the radiator (that is, of the brightness function). We apply this theoretical approach to short-period P-wave records of the 1978 Miyagi-oki earthquake (M=7.6). Spatial and temporal centroids of the short-period radiator are estimated.
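The two empirical steps, forming the observed TIF by squaring and smoothing the velocity seismogram, and stripping the scattering response by deconvolving with a small event's TIF, might be prototyped as below; the water-level stabilization and all signals are illustrative assumptions, not the authors' exact processing:

```python
import numpy as np

def observed_tif(velocity, win=51):
    """Empirical temporal intensity function: square the velocity
    seismogram and smooth it with a moving average."""
    kernel = np.ones(win) / win
    return np.convolve(velocity ** 2, kernel, mode="same")

def deconvolve(tif_large, tif_small, water=1e-3):
    """Water-level spectral division: remove the scattering response,
    using the small event's TIF as the empirical intensity Green's function."""
    n = len(tif_large)
    B = np.fft.rfft(tif_large, n)
    S = np.fft.rfft(tif_small, n)
    denom = np.maximum(np.abs(S) ** 2, water * np.max(np.abs(S) ** 2))
    return np.fft.irfft(B * np.conj(S) / denom, n)

# Synthetic coda-like records standing in for large/small event seismograms
rng = np.random.default_rng(5)
v_large = rng.normal(0, 1, 2048) * np.exp(-np.arange(2048) / 400)
v_small = rng.normal(0, 1, 2048) * np.exp(-np.arange(2048) / 150)
itif = deconvolve(observed_tif(v_large), observed_tif(v_small))
print("estimated ITIF length:", itif.size)
```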

15.
Summary According to the results of the adjustments of eight trigonometric and three-dimensional networks, the a priori variance m²(z) of the measured vertical angle z is expressed by the formula m²(z) = m²(a) + [C^{1/2} ψ m(k)]², where m(a) represents accidental observation errors; the constant C is estimated in the interval 0.5–1.5 according to the number of repeated observations and the variation of their changes with time; ψ is the angle between the normals to the ellipsoid at the initial and final points of the line of sight; and m(k) is the mean square error of the coefficient of refraction, which can be estimated for a given network from Tab. 1. Dedicated to the 90th birthday of Professor František Fiala.

16.
When travelling through the ionosphere, the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges −20° to 40°E in longitude and 32.5° to 70°N in latitude. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy-limiting problems to be solved in TEC determination using GPS, comparison of TEC data with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes.
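The dual-frequency difference method mentioned above rests on the fact that the first-order ionospheric group delay scales as 40.3·TEC/f² (in SI units), so the L1/L2 pseudorange difference gives slant TEC directly. A minimal sketch (the pseudorange values are hypothetical):

```python
# Slant TEC from dual-frequency pseudoranges; the delay on frequency f is
# 40.3 * TEC / f^2, so differencing P2 - P1 isolates the ionospheric term.
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies, Hz
K = 40.3                        # ionospheric constant, m^3/s^2

def slant_tec(p1, p2):
    """Slant TEC in TECU from dual-frequency pseudoranges in meters."""
    tec_el_per_m2 = (p2 - p1) * F1**2 * F2**2 / (K * (F1**2 - F2**2))
    return tec_el_per_m2 / 1e16  # 1 TECU = 1e16 electrons/m^2

# hypothetical pseudoranges with ~8 m differential ionospheric delay
print(f"{slant_tec(22_000_000.0, 22_000_008.1):.1f} TECU")
```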

17.
This paper gives the exact solution in terms of the Karhunen–Loève expansion to a fractional stochastic partial differential equation on the unit sphere \({\mathbb {S}}^{2} \subset {\mathbb {R}}^{3}\) with fractional Brownian motion as driving noise and with random initial condition given by a fractional stochastic Cauchy problem. A numerical approximation to the solution is given by truncating the Karhunen–Loève expansion. We show the convergence rates of the truncation errors in degree and the mean square approximation errors in time. Numerical examples using an isotropic Gaussian random field as initial condition and simulations of evolution of cosmic microwave background are given to illustrate the theoretical results.

18.
19.
Sub-surface characterization in fractured aquifers is challenging due to the co-existence of contrasting materials, namely matrix and fractures. Transient hydraulic tomography (THT) has proved to be an efficient and robust technique for estimating hydraulic (Km, Kf) and storage (Sm, Sf) properties in such complex hydrogeologic settings. However, the performance of THT is governed by data quality and by the optimization technique used in inversion. We assessed the performance of gradient and gradient-free optimizers with THT inversion. Laboratory experiments were performed on a two-dimensional granite rock (80 cm × 45 cm × 5 cm) with a known fracture pattern. Cross-hole pumping experiments were conducted at 10 ports (located on fractures), and time-drawdown responses were monitored at 25 ports (located on matrix and fractures). Pumping ports were ranked based on the weighted signal-to-noise ratio (SNR) computed at each observation port. Noise-free, good-quality (SNR > 100) datasets were inverted using the Levenberg–Marquardt (LM, gradient) and Nelder–Mead (NM, gradient-free) methods. All simulations were performed using a coupled simulation-optimization model. The performance of the two optimizers is evaluated by comparing model predictions with observations made at two validation ports that were not used in simulation. Both the LM and NM algorithms broadly captured the preferential flow paths (fracture network) via the K and S tomograms; however, LM outperformed NM during validation. Our results conclude that, while the method of optimization has a trivial effect on model predictions, exclusion of low-quality (SNR ≤ 100) datasets can significantly improve model performance.
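The gradient versus gradient-free comparison can be mimicked on a toy calibration problem with scipy, as sketched below; the drawdown model and all parameter values are illustrative stand-ins for the coupled simulation-optimization model:

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Toy calibration: fit (K, S)-like parameters of a schematic drawdown response
def drawdown(params, t):
    k, s = params
    return (1.0 / k) * (1.0 - np.exp(-t / (s * 10.0)))  # schematic, hypothetical

t = np.linspace(0.1, 10.0, 40)
obs = drawdown((2.0, 0.5), t) + np.random.default_rng(6).normal(0, 0.005, t.size)

resid = lambda p: drawdown(p, t) - obs
lm = least_squares(resid, x0=[1.0, 1.0], method="lm")    # gradient-based
nm = minimize(lambda p: np.sum(resid(p) ** 2),           # gradient-free
              x0=[1.0, 1.0], method="Nelder-Mead")
print("LM:", np.round(lm.x, 3), " NM:", np.round(nm.x, 3))
```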

20.
ABSTRACT

High-resolution data on the spatial pattern of water use are a prerequisite for appropriate and sustainable water management. Based on one well-validated hydrological model, the Distributed Time Variant Gains Model (DTVGM), this paper obtains reliable high-resolution spatial patterns of irrigation, industrial and domestic water use in continental China. During the validation periods, the ranges of the correlation coefficient (R) and the Nash-Sutcliffe efficiency (NSE) coefficient are 0.67–0.96 and 0.51–0.84, respectively, between the observed and simulated streamflow of six hydrological stations, indicating the model's applicability to simulate the distribution of water use. The simulated water use quantities have relative errors (RE) of less than 5% compared with the observed values. In addition, the changes in streamflow discharge were also correctly simulated by our model, such as at the Zhangjiafen station in the Hai River basin, with a dramatic decrease in streamflow, and at the Makou station in the Pearl River basin, with no significant changes. These changes are the combined result of basin available water resources and water use. The obtained high-resolution spatial pattern of water use could decrease the uncertainty of hydrological simulation and guide water management efficiently.
Editor M.C. Acreman; Associate editor X. Fang
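The two goodness-of-fit measures quoted above are standard and easy to compute; a minimal sketch with hypothetical monthly flows:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def correlation(obs, sim):
    """Pearson correlation coefficient R."""
    return np.corrcoef(obs, sim)[0, 1]

# Hypothetical monthly streamflow (m^3/s) at one gauge
obs = np.array([120, 95, 140, 300, 520, 610, 480, 350, 220, 160, 130, 110.0])
sim = np.array([110, 100, 150, 280, 540, 580, 500, 330, 240, 150, 140, 115.0])
print(f"R = {correlation(obs, sim):.2f}, NSE = {nse(obs, sim):.2f}")
```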

