Similar Literature
20 similar documents found.
1.
2.
The principle of maximum entropy (POME) was used to derive the Pearson type (PT) III distribution. The POME yielded the minimally prejudiced PT III distribution by maximizing the entropy subject to two appropriate constraints, which were the mean and the mean of the logarithm of real values about a constant > 0. This provided a unique method for parameter estimation. Historical flood data were used to evaluate this method and compare it with the methods of moments and maximum likelihood estimation.
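A minimal worked sketch of the derivation summarized above, assuming the two constraints are the mean E[x] and the mean log-deviation E[ln(x − c)] above a lower bound c; the symbols c, a, b and the Lagrange multipliers are illustrative, not taken from the paper:

```latex
% Maximize entropy subject to normalization and the two assumed constraints
\max_{f}\; H[f] = -\int_{c}^{\infty} f(x)\,\ln f(x)\,dx
\quad \text{s.t.} \quad
\int_{c}^{\infty} f\,dx = 1,\;\;
\mathbb{E}[x] = \mu,\;\;
\mathbb{E}[\ln(x-c)] = m .

% The Lagrangian gives f(x) = \exp\!\big(-\lambda_0 - \lambda_1 x - \lambda_2 \ln(x-c)\big),
% i.e. the Pearson type III (shifted gamma) density:
f(x) = \frac{a^{\,b}}{\Gamma(b)}\,(x-c)^{\,b-1}\,e^{-a\,(x-c)},
\qquad a = \lambda_1,\;\; b = 1-\lambda_2 .
```

Matching the multipliers to the two sample constraints is what turns the entropy maximization into a parameter-estimation method.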

3.
The principle of maximum entropy (POME) was employed to develop a procedure for derivation of a number of frequency distributions used in hydrology. The procedure required specification of constraints and maximization of entropy, and is thus a solution of the classical optimization problem. The POME led to a unique technique for parameter estimation. For six selected river gaging stations, parameters of the gamma distribution, the log-Pearson type III distribution and the extreme value type I distribution fitted to annual maximum discharges were evaluated by this technique and compared with those obtained by using the methods of moments and maximum likelihood estimation. The concept of entropy, used as a measure of uncertainty associated with a specified distribution, facilitated this comparison.
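A minimal sketch of the kind of comparison described above, assuming annual maximum discharges in a NumPy array and using scipy's built-in maximum likelihood fitters and differential entropies; the station data are hypothetical, and the POME and moment estimators themselves are not reproduced here (log-Pearson III, which would be fitted to log-discharges, is also omitted):

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum discharges (m^3/s) for one gauging station
q_max = np.array([812., 640., 1020., 755., 930., 1180., 690., 870., 1010., 760.])

candidates = {
    "gamma": stats.gamma,        # gamma distribution
    "gumbel_r": stats.gumbel_r,  # extreme value type I (Gumbel)
}

for name, dist in candidates.items():
    params = dist.fit(q_max)                  # maximum likelihood fit
    entropy = dist(*params).entropy()         # differential entropy of the fitted model
    print(f"{name:9s} params={np.round(params, 3)} entropy={entropy:.3f}")
```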

4.
Based on the principle of maximum entropy, probability distribution functions for earthquake inter-event times and magnitudes are obtained. From the inter-event time distribution, the probability of earthquake occurrence is derived; when this probability rises to a warning threshold, moderate-to-strong earthquakes of magnitude 5 or above in Yunnan can be forecast. The hit rate is 91% for medium-to-short-term forecasts within 6 months and 73% for short-term and imminent forecasts within 3 months. From the magnitude distribution, the theoretical number of earthquakes given by the maximum entropy principle is obtained, and it agrees closely with the observed number. The maximum entropy principle is also used to estimate the recurrence periods of magnitude ≥ 5 earthquakes in different magnitude ranges for different regions of Yunnan. The analysis suggests that the danger of a magnitude ≥ 7 earthquake in Yunnan is gradually approaching, and that the hazard in the west is higher than in the east.
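A minimal sketch of the inter-event-time side of this approach, assuming that with only the mean constrained the maximum-entropy distribution of a positive waiting time is exponential; the catalog values and forecast window are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical inter-event times (days) between M >= 5 earthquakes in one region
intervals = np.array([210., 340., 150., 480., 260., 390., 300.])

# With only the mean constrained, the maximum-entropy distribution of a
# positive variable is exponential: f(t) = (1/mu) * exp(-t/mu)
mu = intervals.mean()

def prob_event_within(horizon_days, mu):
    """Probability of at least one event in the next `horizon_days`
    under the memoryless exponential (maximum-entropy) model."""
    return 1.0 - np.exp(-horizon_days / mu)

p6 = prob_event_within(180, mu)   # 6-month forecast window
print(f"mean interval = {mu:.0f} d, P(event within 6 months) = {p6:.2f}")
```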

5.
The lethal toxicity of mixtures of Zn2+–Ni2+, Cu2+–Ni2+ and Zn2+–Cu2+–Ni2+ to the common guppy at 21 °C in hard water (total hardness = 260 mg/l as CaCO3) was studied under static bioassay test conditions with renewal of the test solutions every 24 h. The heavy metals were tested separately and in mixtures. The 48 h median lethal concentrations (LC50) for the individual salts were 75 mg/l for Zn2+, 37 mg/l for Ni2+ and 2.5 mg/l for Cu2+. Concentrations were expressed in “toxic units” by taking them as proportions of the LC50 values. Experiments showed that in the Zn2+–Ni2+ mixture, when Ni2+ was present in a higher proportion, the toxicity was more than additive. The 48 h LC50 value and 95% confidence limits for the Ni2+–Cu2+ mixture were 0.684 (0.484–0.807) toxic units, and this mixture also produced more than additive toxicity (synergism). The LC50 value and its 95% confidence limits for the Zn2+–Cu2+–Ni2+ mixture suggested that this mixture was strictly additive. The results indicate that heavy-metal mixtures pose a greater toxicological danger to fish than the respective individual metals.
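A minimal sketch of the “toxic units” bookkeeping used above, with the individual LC50 values taken from the abstract; the mixture concentrations are illustrative assumptions:

```python
# 48-h LC50 values for the individual metals (mg/l), from the abstract
LC50 = {"Zn2+": 75.0, "Ni2+": 37.0, "Cu2+": 2.5}

def toxic_units(concentrations):
    """Express each metal concentration as a fraction of its own LC50 and sum."""
    return sum(c / LC50[m] for m, c in concentrations.items())

# Hypothetical mixture: half an LC50 of each metal
mixture = {"Zn2+": 37.5, "Ni2+": 18.5, "Cu2+": 1.25}
tu = toxic_units(mixture)

# Interpretation: if the mixture's own LC50 falls at 1.0 toxic unit, toxicity is
# strictly additive; below 1.0 it is more than additive (synergistic).
print(f"mixture strength = {tu:.2f} toxic units")
```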

6.
7.
8.
9.
10.
Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time, and the kind of temporal memory, are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min, and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high-resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ∼ T^β with 〈β〉 = 0.51, up to a timescale, TMaxEnt (70-202 h), at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables, and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, such that S_q(T) ∼ T^β(q), with β(q) ≤ 0 for q ≤ 0 and β(q) ≈ 0.5 for q ≥ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.
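A minimal sketch of the entropy-versus-aggregation analysis described above, assuming a synthetic heavy-tailed series as a stand-in for the Andean records; the series, bin count, and aggregation steps are illustrative, so the fitted exponent will not reproduce the published value:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 15-min rainfall intensities (heavy-tailed stand-in for real data)
rain = rng.pareto(2.5, size=4 * 24 * 365)

def shannon_entropy(x, bins=50):
    """Shannon entropy (nats) of the histogram of x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

# Aggregate to coarser timescales T (in units of 15-min steps) and track S(T)
Ts = np.array([1, 2, 4, 8, 16, 32, 64])
S = []
for T in Ts:
    n = (len(rain) // T) * T
    agg = rain[:n].reshape(-1, T).sum(axis=1)
    S.append(shannon_entropy(agg))

# Slope of log S vs log T estimates the scaling exponent beta in S(T) ~ T^beta
beta = np.polyfit(np.log(Ts), np.log(S), 1)[0]
print(f"estimated beta = {beta:.2f}")
```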

11.
12.
The coprecipitation method is widely used for the preconcentration of trace metal ions prior to their determination by flame atomic absorption spectrometry (FAAS). A simple and sensitive method based on coprecipitation of Fe(III) and Ni(II) ions with Cu(II)-4-(2-pyridylazo)-resorcinol was developed. The analytical parameters, including pH, amount of copper(II), amount of reagent, sample volume, etc., were examined. It was found that the metal ions studied were quantitatively coprecipitated in the pH range of 5.0–6.5. The detection limits (DL) (n = 10, 3s/b) were found to be 0.68 µg L−1 for Fe(III) and 0.43 µg L−1 for Ni(II), and the relative standard deviations (RSD) were ≤4.0%. The proposed method was validated by the analysis of three certified reference materials (TMDA 54.4 fortified lake water, SRM 1568a rice flour, and GBW07605 tea) and recovery tests. The method was successfully applied to sea water, lake water, and various food samples.
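A minimal sketch of the detection-limit convention quoted above (DL = 3s/b with n = 10 blank readings); the blank signals and calibration slope are illustrative values, not data from the study:

```python
import numpy as np

# Hypothetical absorbance readings of 10 blank solutions (n = 10)
blanks = np.array([0.0021, 0.0019, 0.0023, 0.0020, 0.0022,
                   0.0018, 0.0021, 0.0024, 0.0020, 0.0022])

# Hypothetical calibration slope b (absorbance per µg/L) from the standard curve
b = 0.00090

s = blanks.std(ddof=1)      # standard deviation of the blank signal
dl = 3 * s / b              # detection limit, DL = 3s/b, in µg/L
print(f"DL = {dl:.2f} µg/L")
```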

13.
Between 1995 and 2001, 16 measuring points at small and medium-sized brooks in the Harz National Parks were sampled. The samples were evaluated by means of hydrochemistry and macroinvertebrate biology. Although nearly all streams are largely uncontaminated by oxygen-consuming substances, they are inhabited by only a small number of macroinvertebrate species. There is a clear correlation between this number and pH. The reduction in species number with decreasing pH is mainly caused by the absence of most Ephemeroptera and some Coleoptera and Trichoptera. When the biological evaluation of acidity was compared with physico-chemical measurements, an unacceptable underestimation was found. The reason could be that regional populations differ in their sensitivity to acidification.

Despite the low species number, there is a very specific macroinvertebrate fauna that emphasizes the conservation value of the Harz National Parks.


14.
The paper by Slob and Ziolkowski (1993) is apparently a comment on my paper (Szaraniec 1984) on odd-depth structure. In fact the basic understanding of a seismogram is in question. The fundamental equation for an odd-depth model and its subsequent deconvolution is correct with no additional geological constraints. This is the essence of my reply which is contained in the following points.
1. The discussion by Slob and Ziolkowski suffers from incoherence. On page 142 the Goupillaud (1961) paper is quoted: “… we must use a sampling rate at least double that… minimum interval…”. In the following analysis of such a postulated model, Slob and Ziolkowski say that “… two constants are used in the model: Δt as sampling rate and 2Δt as two-way traveltime”. Because this reverses the Goupillaud postulate, all the subsequent criticism becomes unreliable for the real Goupillaud postulate as well as for the odd-depth model.
2. Slob and Ziolkowski take into consideration what they call the total impulse response. This goes beyond what is demanded by the fundamental property of an odd-depth model. Following a similar approach, I take truncated data in the form of a source function, S(z), convolved with a synthetic seismogram (earth impulse response), R(z), the free surface being included. The problem of data modelling is a crucial one and is discussed in more detail below. By my reasoning, however, the function may be considered as a mathematical construction introduced purely to work out the fundamental property. In this connection there is no question of this construction having a physical meaning. It is implicit that, in terms of system theory, K(z) stands for what is known as the input impedance.
3. Our understandings of the data are divergent, but Slob and Ziolkowski state erroneously that “Szaraniec (1984) gives (21) as the total impulse response…”. This point was not made. This inappropriate statement is repeated and echoed throughout the paper, making the discussion by Slob and Ziolkowski, as well as the corrections proposed in their Appendix A, ineffective. Thus, my equation (2) is quoted in a form which is in terms of the reflection response Gsc and holds true at least in mathematical terms. No wonder that “this identity is not valid for the total impulse response” (sic), which is denoted as G(z). None the less, a substitution of G for Gsc is made in Appendix A, equation (A3). The equation numbers in my paper and in Appendix A are irrelevant, but (A3) is substituted for (32) (both equation numbers are from the authors’ paper). Afterwards, the mathematical incorrectness of the resulting equation is proved (which was already evident), and the final result (A16) is quite obviously different from my equation (2). However, the substitution in question is not my invention.
4. With regard to the problem of data modelling, I consider a bi-directional 1D seismic source located just below the earth's surface. The downgoing unit impulse is accompanied by a reflected upgoing unit impulse, and the earth response is thus doubled. The total impulse response for this model is an expression in which (−r0) = −1 stands for the surface reflection coefficient in the upward direction; that is to say, the total response to a unit excitation is identical with the input impedance, as it must be in system theory. The one-directional 1D seismic source model is therefore in question. There must be a reaction to every action: when only the downgoing unit impulse of energy is considered, what about the compensation?
5. In more realistic modelling, an early part of a total seismogram is unknown (absent) and the seismogram is seen in segments, or through windows. That is why, in the usual approach, and especially in dynamic deconvolution problems, synthetic data in the presence of the free surface are considered as an equivalent of the global reflection coefficient. It is implicit that the model arises from a truncated total seismogram, represented as a source function convolved with a truncated global reflection coefficient.
Validation or invalidation of the truncation procedure for a numerically specified model may be attempted within the framework of the odd-depth assumption. My equations (22) and (23) were designed for investigating the absence or presence of truncated energy. The odd-depth formalism allows the possibility of reconstructing an earlier part of a seismogram (Szaraniec 1984), that is to say, a numerical recovery of unknown moments, which is not envisaged by Slob and Ziolkowski for the data.
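A minimal sketch of the data model discussed in points 2 and 5 above: a seismogram as a source function convolved with an earth impulse response, then truncated to a window. The wavelet and reflectivity values are illustrative assumptions, and the odd-depth recovery itself is not reproduced:

```python
import numpy as np

# Hypothetical source function S and earth impulse response R (free surface included)
S = np.array([1.0, -0.5, 0.2])                 # short source wavelet
R = np.zeros(64)
R[[0, 7, 19, 33]] = [1.0, 0.4, -0.25, 0.15]    # sparse reflectivity-like response

seismogram = np.convolve(S, R)                 # synthetic seismogram S(z)*R(z)

# In practice an early part of the record is unknown: the data are seen
# only through a window, i.e. a truncated seismogram
window_start = 5
truncated = seismogram[window_start:]
print(len(seismogram), len(truncated))
```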

15.
16.
Barbara Kennedy has left only a small body of work, consisting principally of eight research papers, nine commentaries and two books. Its extent belies its importance. Read systematically, the work represents a sustained and important critique of the direction taken by mainstream geomorphology since the process-focused reorientation of the mid-twentieth century. Copyright © 2014 John Wiley & Sons, Ltd.

17.
It is commonly assumed that biophysically based soil-vegetation-atmosphere transfer (SVAT) models are scale-invariant with respect to the initial boundary conditions of topography, vegetation condition and soil moisture. In practice, SVAT models that have been developed and tested at the local scale (a few meters or a few tens of meters) are applied almost unmodified within general circulation models (GCMs) of the atmosphere, which have grid areas of 50–500 km². This study, which draws much of its substantive material from the papers of Sellers et al. (1992c, J. Geophys. Res., 97(D17): 19033–19060) and Sellers et al. (1995, J. Geophys. Res., 100(D12): 25607–25629), explores the validity of doing this. The work makes use of the FIFE-89 data set, which was collected over a 2 km × 15 km grassland area in Kansas. The site was characterized by high variability in soil moisture and vegetation condition during the late growing season of 1989. The area also has moderate topography.

The 2 km × 15 km ‘testbed’ area was divided into 68 × 501 pixels of 30 m × 30 m spatial resolution, each of which could be assigned topographic, vegetation condition and soil moisture parameters from satellite and in situ observations gathered in FIFE-89. One or more of these surface fields was area-averaged in a series of simulation runs to determine the impact of using large-area means of these initial or boundary conditions on the area-integrated (aggregated) surface fluxes. The results of the study can be summarized as follows:

1. Analyses and some of the simulations indicated that the relationships describing the effects of moderate topography on the surface radiation budget are near-linear and thus largely scale-invariant. The relationships linking the simple ratio vegetation index (SR), the canopy conductance parameter (F) and the canopy transpiration flux are also near-linear and similarly scale-invariant to first order. Because of this, it appears that simple area-averaging operations can be applied to these fields with relatively little impact on the calculated surface heat flux.
2. The relationships linking surface and root-zone soil wetness to the soil surface and canopy transpiration rates are non-linear. However, simulation results and observations indicate that soil moisture variability decreases significantly as an area dries out, which partially cancels out the effects of these non-linear functions.

In conclusion, it appears that simple averages of topographic slope and vegetation parameters can be used to calculate surface energy and heat fluxes over a wide range of spatial scales, from a few meters up to many kilometers, at least for grassland sites and areas with moderate topography. Although the relationships between soil moisture and evapotranspiration are non-linear for intermediate soil wetnesses, the dynamics of soil drying act to progressively reduce soil moisture variability and thus the impacts of these non-linearities on the area-averaged surface fluxes. These findings indicate that we may be able to use mean values of topography, vegetation condition and soil moisture to calculate the surface-atmosphere fluxes of energy, heat and moisture at larger length scales, to within an acceptable accuracy for climate modeling work. However, further tests over areas with different vegetation types, soils and more extreme topography are required to improve our confidence in this approach.
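A minimal sketch of the aggregation question examined above: whether applying a non-linear flux relation to the area-mean of a surface field gives the same answer as averaging the pixel-by-pixel fluxes. The soil-moisture field and the flux function are illustrative assumptions, not the parameterization used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 30 m pixel field of root-zone soil wetness over a 68 x 501 grid
theta = np.clip(rng.normal(0.45, 0.15, size=(68, 501)), 0.05, 0.95)

def et_flux(theta):
    """Illustrative non-linear evapotranspiration response to soil wetness (W m^-2)."""
    return 400.0 * theta**2 / (0.04 + theta**2)   # saturating curve

flux_of_mean = et_flux(theta.mean())        # flux computed from the area-averaged field
mean_of_flux = et_flux(theta).mean()        # area average of the pixel-by-pixel fluxes

print(f"flux(mean theta) = {flux_of_mean:.1f} W/m2")
print(f"mean pixel flux  = {mean_of_flux:.1f} W/m2")
print(f"aggregation error = {flux_of_mean - mean_of_flux:.1f} W/m2")
```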

18.
The wet ammonia (NH3) desulfurization process can be retrofitted to remove nitric oxide (NO) and sulfur dioxide (SO2) simultaneously by adding a soluble cobalt(II) salt to the aqueous ammonia solution. Activated carbon is used as a catalyst to regenerate hexaminecobalt(II), Co(NH3)6^2+, so that NO removal efficiency can be maintained at a high level for a long time. In this study, the catalytic performance of pitch-based spherical activated carbon (PBSAC) in the simultaneous removal of NO and SO2 with this wet ammonia scrubbing process has been studied systematically. Experiments were performed in a batch stirred cell to test the catalytic characteristics of PBSAC in the catalytic reduction of hexaminecobalt(III), Co(NH3)6^3+. The experimental results show that PBSAC is a much better catalyst for the reduction of Co(NH3)6^3+ than palm shell activated carbon (PSAC). The Co(NH3)6^3+ reduction rate increases with PBSAC dose when the dose is below 7.5 g/L, and it also increases with the initial Co(NH3)6^3+ concentration. The best Co(NH3)6^3+ conversion is obtained in the pH range 2.0–6.0, and a high temperature is favorable to the reaction. An intrinsic activation energy of 51.00 kJ/mol was obtained for the Co(NH3)6^3+ reduction catalyzed by PBSAC. The experiments show that the simultaneous elimination of NO and SO2 by the hexaminecobalt solution, coupled with catalytic regeneration of hexaminecobalt(II), can maintain a NO removal efficiency of 90% for a long time.
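A minimal sketch of the Arrhenius relationship behind the quoted activation energy; only Ea = 51.00 kJ/mol is taken from the abstract, while the two temperatures are illustrative assumptions:

```python
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
Ea = 51.00e3       # intrinsic activation energy from the abstract, J/mol

# Arrhenius: k = A * exp(-Ea / (R T)); ratio of rate constants at two temperatures
T1, T2 = 298.15, 323.15                        # 25 °C and 50 °C (illustrative)
ratio = np.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))
print(f"k(50 °C) / k(25 °C) ≈ {ratio:.1f}")    # higher temperature favours the reduction
```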

19.
A new filter to separate base flow from streamflow has been developed that uses observed groundwater levels. To relate the base flow to the observed groundwater levels, a non-linear relation was used. This relation is suitable for unconfined aquifers with deep groundwater levels that do not respond to individual rainfall events. Because the filter was calibrated using total streamflow, an estimate of the direct runoff was also needed. The direct runoff was estimated from precipitation and potential evapotranspiration using a water balance model. The parameters for the base flow and direct runoff were estimated simultaneously using a Monte Carlo approach. Instead of one best solution, a range of satisfactory solutions was accepted. The filter was applied to data from two nested gauging stations in the Pang catchment (UK). Streamflow at the upstream station (Frilsham) is strongly dominated by base flow from the main aquifer, whereas at the downstream station (Pangbourne) a significant component of direct runoff also occurs. The filter appeared to provide satisfactory estimates at both stations. For Pangbourne, the rise of the base flow was strongly delayed compared with the rise of the streamflow. However, base flow exceeded streamflow on several occasions, especially during summer and autumn, which might be explained by evapotranspiration from riparian vegetation. To evaluate the results, the base flow was also estimated using three existing base-flow separation filters: an arithmetic filter (BFI), a digital filter (Boughton) and another filter based on groundwater levels (Kliner and Kněžek). Both the BFI and Boughton filters showed a much smaller difference in base flow between the two stations. The Kliner and Kněžek filter gave consistently lower estimates of the base flow. Differences and lack of clarity in the definition of base flow complicated the comparison between the filters. An advantage of the method introduced in this paper is the clear interpretation of the separated components. A disadvantage is the high data requirement. Copyright © 2004 John Wiley & Sons, Ltd.
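A minimal sketch of the filter's core idea as described above: a non-linear relation from observed groundwater level to base flow, with parameter sets screened by Monte Carlo sampling against total streamflow. The functional form, data, and acceptance criterion are illustrative assumptions, not the relation or criterion used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily observations: groundwater level h (m) and streamflow Q (m3/s)
t = np.linspace(0, 4 * np.pi, 365)
h = 3.0 + 0.8 * np.sin(t) + rng.normal(0, 0.05, 365)
Q = 1.5 + 1.2 * np.sin(t) + rng.normal(0, 0.3, 365).clip(-0.5)

def base_flow(h, a, h0, b):
    """Illustrative non-linear level-to-baseflow relation Qb = a * (h - h0)^b."""
    return a * np.clip(h - h0, 0.0, None) ** b

# Monte Carlo screening: keep parameter sets whose base flow rarely exceeds
# streamflow and still explains a sizeable share of the flow (illustrative criterion)
accepted = []
for _ in range(5000):
    a, h0, b = rng.uniform(0.1, 2.0), rng.uniform(1.0, 3.0), rng.uniform(0.5, 2.0)
    qb = base_flow(h, a, h0, b)
    if np.mean(qb <= Q) > 0.95 and qb.mean() > 0.3 * Q.mean():
        accepted.append((a, h0, b))

print(f"{len(accepted)} satisfactory parameter sets out of 5000")
```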

20.