Similar Articles
20 similar articles found.
1.
The sensitivity of a model output (called a variable) to a parameter can be defined as the partial derivative of the variable with respect to the parameter. When the governing equations are not differentiable with respect to this parameter, problems arise in the numerical solution of the sensitivity equations, such as locally infinite values or instability. An approximate Riemann solver is thus proposed for direct sensitivity calculation for hyperbolic systems of conservation laws in the presence of discontinuous solutions. The proposed approach uses an extra source term in the form of a Dirac function to restore sensitivity balance across the shocks. It is valid for systems such as the Euler equations for gas dynamics or the shallow water equations for free surface flow. The method is first detailed and its application to the shallow water equations is proposed, with some test cases such as dike- or dam-break problems with or without source terms. An application to a two-dimensional flow problem illustrates the superiority of direct sensitivity calculation over the classical empirical approach.
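The underlying idea of direct sensitivity calculation can be sketched in generic notation (this is a schematic summary, not the paper's exact formulation; the symbols U, F, S, ψ, s and the shock path x_s(t) are assumptions made here for illustration):

```latex
% State U(x,t), flux F, source S, and sensitivity s with respect to a parameter \psi
\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = S(U,\psi),
\qquad s \equiv \frac{\partial U}{\partial \psi}.
% Differentiating the conservation law gives a linear hyperbolic system for s,
% supplemented by a Dirac-type source concentrated on the shock path x_s(t):
\frac{\partial s}{\partial t} + \frac{\partial}{\partial x}\!\left(A(U)\,s\right)
  = \frac{\partial S}{\partial U}\,s + \frac{\partial S}{\partial \psi}
  + \rho_s\,\delta\!\left(x - x_s(t)\right),
\qquad A(U) = \frac{\partial F}{\partial U}.
```

The strength ρ_s of the extra source follows from differentiating the jump (Rankine–Hugoniot) condition with respect to ψ; without such a term, discrete sensitivities typically blow up near the shock, which is the problem the abstract describes.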

2.

There is an implicit assumption in most work that the parameters calibrated based on observations remain valid for future climatic conditions. However, this might not be true due to parameter instability. This paper investigates the uncertainty and transferability of parameters in a hydrological model under climate change. Parameter transferability is investigated with three parameter sets identified for different climatic conditions, which are: wet, intermediate and dry. A parameter set based on the baseline period (1961–1990) is also investigated for comparison. For uncertainty analysis, a k-simulation set approach is proposed instead of employing the traditional optimization method which uses a single best-fit parameter set. The results show that the parameter set from the wet sub-period performs the best when transferred into wet climate condition, while the parameter set from the baseline period is the most appropriate when transferred into dry climate condition. The largest uncertainty of simulated daily high flows for 2011–2040 is from the parameter set trained in the dry sub-period, while that of simulated daily medium and low flows lies in the parameter set from the intermediate calibration sub-period. For annual changes in the future period, the uncertainty with the parameter set from the intermediate sub-period is the largest, followed by the wet sub-period and dry sub-period. Compared with high and medium flows/runoffs, the uncertainty of low flows/runoffs is much smaller for both simulated daily flows and annual runoffs. For seasonal runoffs, the largest uncertainty is from the intermediate sub-period, while the smallest is from the dry sub-period. Apart from that, the largest uncertainty can be observed for spring runoffs and the lowest one for autumn runoffs. Compared with the traditional optimization method, the k-simulation set approach shows many more advantages, particularly being able to provide uncertainty information to decision support for watershed management under climate change.
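A minimal sketch of the k-simulation-set idea described above (the function names and the 5–95% band are illustrative assumptions, not the paper's implementation): instead of retaining a single best-fit parameter set, the k best-performing sets are all carried forward, so every simulated flow comes with an uncertainty band.

```python
import numpy as np

def k_simulation_set(sample_params, objective, simulate, k=100):
    """Keep the k best-performing parameter sets (rather than a single best fit)
    and return the envelope of their simulations.

    sample_params : (n, p) array of candidate parameter sets
    objective     : callable(theta) -> scalar score (higher = better), e.g. NSE
    simulate      : callable(theta) -> simulated flow series
    """
    scores = np.array([objective(theta) for theta in sample_params])
    best = np.argsort(scores)[-k:]                      # indices of the k best sets
    sims = np.array([simulate(sample_params[i]) for i in best])
    return {
        "parameter_sets": sample_params[best],
        "lower": np.percentile(sims, 5, axis=0),        # 90% simulation band
        "median": np.median(sims, axis=0),
        "upper": np.percentile(sims, 95, axis=0),
    }
```

Transferability can then be assessed by evaluating the band, rather than a single hydrograph, under a climate condition different from the calibration sub-period.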


3.
In this paper, we analyse the uncertainty and parameter sensitivity of a conceptual water quality model, based on a travel time distribution (TTD) approach, simulating electrical conductivity (EC) in the Duck River, Northwest Tasmania, Australia for a 2-year period. Dynamic TTDs of stream water were estimated using the StorAge Selection (SAS) approach, which was coupled with two alternate methods to model stream water EC: (1) a solute-balance approach and (2) a water age-based approach. Uncertainty analysis using the Differential Evolution Adaptive Metropolis (DREAM) algorithm showed that: (1) parameter uncertainty was a small contribution to the overall uncertainty; (2) most uncertainty was related to input data uncertainty and model structure; (3) slightly lower total error was obtained in the water age-based model than the solute-balance model; (4) using time-variant SAS functions reduced the model uncertainty markedly, which likely reflects the effect of dynamic hydrological conditions over the year affecting the relative importance of different flow pathways over time. Model parameter sensitivity analysis using the Variogram Analysis of Response Surfaces (VARS-TOOL) framework found that parameters directly related to the EC concentration were most sensitive: in the solute-balance model the rainfall concentration C_rain, and in the age-based model the parameter controlling the rate of change of EC with age (λ). Model parameters controlling the age mixes of both evapotranspiration and streamflow water fluxes (i.e., the SAS function parameters) were also influential for the solute-balance model. Little change in parameter sensitivity over time was found for the age-based concentration relationship; however, the parameter sensitivity was quite dynamic over time for the solute-balance approach. The overarching outcomes provide water quality modellers, engineers and managers with greater insight into catchment functioning and its dependence on hydrological conditions.

4.
We employed multilayer perceptrons (MLP), self-organizing feature maps (SOFM), and learning vector quantization (LVQ) to reveal and interpret statistically significant features of different categories of waveform parameter vectors extracted from three-component WEBNET velocigrams. In this contribution we present and discuss in a summarizing manner the results of (i) SOFM classification and MLP discrimination between microearthquakes and explosions on the basis of single-station spectral and amplitude parameter vectors, (ii) SOFM/LVQ recognition of initial onset polarities from PV-waveforms, and (iii) a source mechanism study of the January 1997 microearthquake swarm based on SOFM classification of combined multi-station PV-onset polarity and SH/PV amplitude ratio (CPA) data. Unsupervised SOFM classification of 497 NKC seismograms revealed that the best discriminants are pure spectral parameter vectors for the recognition of microearthquakes (reliability 95% with 30 spectral parameters), and mixed amplitude and spectral parameter vectors for the recognition of explosions (reliability 98% with 41 amplitude and 30 spectral parameters). The optimal MLP, trained with the standard backpropagation error method on one randomly selected half of a set of 312 mixed (7 amplitude and 7 spectral) single-station (NKC) microearthquake and explosion parameter vectors and tested on the other half-set, and vice versa, correctly classified, on average, 99% of all events. From a set of NKC PV-waveform vectors for 375 events, the optimal LVQ net correctly classified, on average, 98% of all up and 97% of all down onsets, and assigned the likely correct polarity to 85% of the onsets that were visually classified as uncertain. Optimal SOFM architectures categorized the CPA parameter vector sets for 145 January 1997 events individually for each of five stations (KOC, KRC, SKC, NKC, LAC) quite unambiguously and stably into three statistically significant classes. The nature of the coincidence of these classes among the stations that provided the most reliable mechanism-relevant information (KOC, KRC, SKC) points to the occurrence of a further seven statistically significant subclasses of mechanisms during the swarm. The ten neural classes of focal mechanisms coincide fairly well with those obtained by moment tensor inversion of P and SH polarities and amplitudes extracted from the seismograms interactively. The obtained results, together with those of refined hypocenter location, imply that the focal area consisted of three dominant faults and at least seven subfaults within a volume of not more than 1 km in diameter that were likely seismically activated by vertical stress from underneath.
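As a hedged illustration of the supervised discrimination step only, the sketch below uses scikit-learn's MLPClassifier on synthetic stand-ins for the 14-element (7 amplitude + 7 spectral) parameter vectors; the study above used its own backpropagation MLP together with SOFM and LVQ, so this is an analogous sketch rather than a reproduction of that workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins for single-station parameter vectors:
# label 0 = microearthquake, 1 = explosion (separable by construction).
X = np.vstack([rng.normal(0.0, 1.0, (150, 14)), rng.normal(1.5, 1.0, (150, 14))])
y = np.repeat([0, 1], 150)

# Train on one randomly selected half, test on the other half.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", clf.score(scaler.transform(X_test), y_test))
```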

5.
Finding an operational parameter vector is always challenging in the application of hydrologic models, with over-parameterization and limited information from observations leading to uncertainty about the best parameter vectors. Thus, it is beneficial to find every possible behavioural parameter vector. This paper presents a new methodology, called the patient rule induction method for parameter estimation (PRIM-PE), to define where the behavioural parameter vectors are located in the parameter space. The PRIM-PE was used to discover all regions of the parameter space containing an acceptable model behaviour. This algorithm consists of an initial sampling procedure to generate a parameter sample that sufficiently represents the response surface with a uniform distribution within the "good-enough" region (i.e., performance better than a predefined threshold) and a rule induction component (PRIM), which is then used to define regions in the parameter space in which the acceptable parameter vectors are located. To investigate its ability in different situations, the methodology is evaluated using four test problems. The PRIM-PE sampling procedure was also compared against a Markov chain Monte Carlo sampler known as the differential evolution adaptive Metropolis (DREAM_ZS) algorithm. Finally, a spatially distributed hydrological model calibration problem with two settings (a three-parameter calibration problem and a 23-parameter calibration problem) was solved using the PRIM-PE algorithm. The results show that the PRIM-PE method captured the good-enough region in the parameter space successfully using 8 and 107 boxes for the three-parameter and 23-parameter problems, respectively. This good-enough region can be used in a global sensitivity analysis to provide a broad range of parameter vectors that produce acceptable model performance. Moreover, for a specific objective function and model structure, the size of the boxes can be used as a measure of equifinality.
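The core of PRIM is a box-peeling loop. Below is a simplified sketch under stated assumptions (the peeling fraction, stopping rule and variable names are choices made here; the full PRIM-PE additionally includes the initial sampling stage and box refinement, which are not reproduced):

```python
import numpy as np

def prim_box(X, good, alpha=0.05, min_support=0.05):
    """One PRIM peeling sequence: shrink an axis-aligned box around the region
    where `good` (1 = behavioural parameter vector) is densest.

    X    : (n, p) parameter sample
    good : (n,) array of 0/1 flags (performance better than a threshold)
    """
    lo, hi = X.min(axis=0).astype(float), X.max(axis=0).astype(float)
    n, p = X.shape
    while True:
        inside = np.all((X >= lo) & (X <= hi), axis=1)
        if inside.mean() <= min_support:
            break
        best_gain, best_move = -np.inf, None
        for j in range(p):                      # try peeling each face of the box
            xj = X[inside, j]
            for side, cut in (("lo", np.quantile(xj, alpha)),
                              ("hi", np.quantile(xj, 1 - alpha))):
                keep = inside & ((X[:, j] >= cut) if side == "lo" else (X[:, j] <= cut))
                if keep.sum() == 0:
                    continue
                gain = good[keep].mean()        # density of behavioural points left
                if gain > best_gain:
                    best_gain, best_move = gain, (j, side, cut)
        if best_move is None or best_gain <= good[inside].mean():
            break                               # no peel improves the box density
        j, side, cut = best_move
        if side == "lo":
            lo[j] = cut
        else:
            hi[j] = cut
    return lo, hi
```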

6.
Distributed hydrological models require a detailed definition of a watershed's internal drainage structure. The conventional approach to obtain this drainage structure is to use an eight flow direction matrix (D8) which is derived from a raster digital elevation model (DEM). However, this approach leads to a rather coarse drainage structure when monitoring or gauging stations need to be accurately located within a watershed. This is largely due to limitations of the D8 approach and the lack of information over flat areas and pits. The D8 approach alone is also unable to differentiate lakes from plain areas.

To avoid these problems a new approach, using a digital river and lake network (DRLN) as input in addition to the DEM, has been developed. This new approach allows for an accurate fit between the DRLN and the modelled drainage structure, which is represented by a flow direction matrix and a modelled watercourse network. More importantly, the identification of lakes within the modelled network is now possible. The proposed approach, which is largely rooted in the D8 approach, uses the DRLN to correct modelled flow directions and network calculations. For DEM cells overlapped by the DRLN, flow directions are determined using DRLN connections only. The flow directions of the other DEM cells are evaluated with the D8 approach which uses a DEM that has been modified as a function of distance to the DRLN.

The proposed approach has been tested on the Chaudière River watershed in southern Québec, Canada. The modelled watershed drainage structure showed a high level of coherence with the DRLN. A comparison between the results obtained with the D8 approach and those obtained by the proposed approach clearly demonstrated an improvement over the conventionally modelled drainage structure. The proposed approach will benefit hydrological models which require data such as a flow direction matrix, a river and lake network and sub-watersheds for drainage structure information.
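For reference, the conventional D8 step that the proposed approach builds on can be sketched as below (plain steepest-descent D8 with the common ESRI direction codes; in the approach described above, cells overlapped by the DRLN would have these directions overridden by the network connections):

```python
import numpy as np

# D8 flow direction: each cell drains to its steepest-descent neighbour.
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
CODES   = [1, 2, 4, 8, 16, 32, 64, 128]   # conventional ESRI direction codes

def d8_flow_directions(dem, cellsize=1.0):
    """Return a D8 flow-direction grid (0 where no downslope neighbour exists,
    i.e. a pit or flat area that needs separate treatment)."""
    nrow, ncol = dem.shape
    fdir = np.zeros_like(dem, dtype=int)
    for r in range(nrow):
        for c in range(ncol):
            best_slope, best_code = 0.0, 0
            for (dr, dc), code in zip(OFFSETS, CODES):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol:
                    dist = cellsize * (2 ** 0.5 if dr and dc else 1.0)
                    slope = (dem[r, c] - dem[rr, cc]) / dist
                    if slope > best_slope:
                        best_slope, best_code = slope, code
            fdir[r, c] = best_code
    return fdir
```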


7.
The simple Lanczos method presented in a recent paper by the writers, with application to single vector loads, is extended to include a more general dynamic loading represented as a linear combination of k vectors (load patterns). The result is a set of orthogonal vectors that is used to transform the equations of motion to a banded form, the half-bandwidth of which becomes k + 1. When k is small relative to the number of equations, this approach provides for a very efficient time-stepping solution.

8.

This paper examines the applicability of the Bartlett-Lewis Rectangular Pulse Model to rainfall taken from a site in Elmdon, Birmingham, UK. The approach used is to assess the performance of the model in terms of characteristics of the precipitation process, incorporating monthly seasonality. Analytical expressions are derived to complement those presented in Rodriguez-Iturbe et al. (1987). As in that paper, the shortcomings of the simple Poisson process are reduced by the use of a Bartlett-Lewis process. Different methods of parameter estimation are examined. The characteristic features of the time distribution of rainfall events, however, can be well approximated only by optimization and this enables an improved identification of the model parameters.
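As background, one realization of a Bartlett-Lewis Rectangular Pulse Model can be generated as sketched below. The parameter names (λ storm arrival rate, β cell arrival rate, γ storm activity rate, η cell duration rate, μ_x mean cell intensity) follow common usage for this model; the exact variant fitted in the paper (e.g. with randomized η) may differ.

```python
import numpy as np

def simulate_blrpm(T, lam, beta, gamma, eta, mu_x, rng=None):
    """Simulate rectangular rainfall pulses on [0, T].

    Storm origins follow a Poisson process (rate lam); each storm places a cell
    at its origin and further cells as a Poisson process (rate beta) over an
    exponentially distributed activity period (rate gamma); each cell is a
    rectangular pulse with exponential duration (rate eta) and exponential
    intensity (mean mu_x).  Returns (start, end, intensity) tuples.
    """
    rng = rng or np.random.default_rng()
    pulses = []
    t = rng.exponential(1.0 / lam)                # first storm origin
    while t < T:
        activity = rng.exponential(1.0 / gamma)   # storm activity period
        cell_starts = [0.0]                       # one cell at the storm origin
        s = rng.exponential(1.0 / beta)
        while s < activity:                       # further cell origins
            cell_starts.append(s)
            s += rng.exponential(1.0 / beta)
        for cs in cell_starts:
            duration = rng.exponential(1.0 / eta)
            pulses.append((t + cs, t + cs + duration, rng.exponential(mu_x)))
        t += rng.exponential(1.0 / lam)           # next storm origin
    return pulses
```

Aggregating the pulse intensities over hourly or daily intervals yields the synthetic series whose statistics are matched to observations during parameter estimation.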

9.
Realistic environmental models used for decision making typically require a highly parameterized approach. Calibration of such models is computationally intensive because widely used parameter estimation approaches require individual forward runs for each parameter adjusted. These runs construct a parameter-to-observation sensitivity, or Jacobian, matrix used to develop candidate parameter upgrades. Parameter estimation algorithms are also commonly adversely affected by numerical noise in the calculated sensitivities within the Jacobian matrix, which can result in unnecessary parameter estimation iterations and a poorer model-to-measurement fit. Ideally, approaches to reduce the computational burden of parameter estimation will also increase the signal-to-noise ratio of observations influential to the parameter estimation even as the number of forward runs decreases. In this work, a simultaneous-increments approach, an iterative ensemble smoother (IES), and a randomized Jacobian approach were compared to a traditional approach that uses a full Jacobian matrix. All approaches were applied to the same model developed for decision making in the Mississippi Alluvial Plain, USA. Both the IES and the randomized Jacobian approach achieved a desirable fit and similar parameter fields in many fewer forward runs than the traditional approach; in both cases the fit was obtained in fewer runs than the number of adjustable parameters. The simultaneous-increments approach did not perform as well as the other methods owing to its inability to overcome the suboptimal dropping of parameter sensitivities. This work indicates that the use of highly efficient algorithms can greatly speed parameter estimation, which in turn improves calibration vetting and the utility of realistic models used for decision making.
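The computational contrast can be sketched as follows: a full finite-difference Jacobian costs one forward run per adjustable parameter, whereas randomized or ensemble schemes estimate sensitivities from a handful of simultaneous perturbations. The code below is a generic illustration under those assumptions, not the PEST/PEST++ implementations typically used in practice.

```python
import numpy as np

def full_jacobian(model, theta, eps=1e-4):
    """Classical approach: one perturbed forward run per adjustable parameter."""
    base = model(theta)
    J = np.empty((base.size, theta.size))
    for j in range(theta.size):
        dtheta = theta.copy()
        dtheta[j] += eps
        J[:, j] = (model(dtheta) - base) / eps
    return J

def randomized_jacobian(model, theta, n_runs=20, eps=1e-3, rng=None):
    """Randomized sketch: perturb all parameters at once in n_runs random
    directions and recover a least-squares Jacobian estimate, so the cost no
    longer scales with the number of parameters."""
    rng = rng or np.random.default_rng(0)
    base = model(theta)
    D = rng.standard_normal((n_runs, theta.size)) * eps    # random perturbations
    R = np.array([model(theta + d) - base for d in D])     # ensemble of responses
    # Solve D @ J.T ≈ R in the least-squares sense (low-rank estimate of J)
    Jt, *_ = np.linalg.lstsq(D, R, rcond=None)
    return Jt.T
```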

10.
The vibrational distribution of nitric oxide in the polar ionosphere, computed with a one-dimensional non-steady model of the chemical and vibrational kinetics of the upper atmosphere, has been compared with experimental data from rocket measurements. Some input parameters of the model were varied to obtain the least average deviation of the calculated populations from the experimental ones. It is shown that the smallest deviation of our calculations from the experimental measurements depends strongly on both the surprisal parameter of the production reaction between metastable atomic nitrogen and molecular oxygen and the profile of the atomic oxygen concentration. The best agreement with the MSIS-83 profile was obtained for a value of the surprisal parameter corresponding to recent laboratory estimates. The measured depression of the v = 2 level is reproduced in calculations that use substantially increased concentrations of atomic oxygen. It is pointed out that similar measurements of infrared radiation intensities could be used to estimate atomic oxygen concentrations during auroral disturbances of the upper atmosphere.

11.
Acoustic full waveforms recorded in wells are the simplest way to obtain the velocities of P, S, and Stoneley waves in situ. In hard formations, processing and interpretation of acoustic full waveforms poses no problems with the identification of wave packets, the calculation of their slownesses and arrival times, or the determination of the elastic parameters of rocks. In shallow well intervals in soft formations, however, difficulties arise in properly evaluating the S-wave velocity because no refracted S-wave exists when its velocity is lower than the velocity of the mud. A dynamic approach to selecting a proper semblance value for determining the correct slowness and arrival time is presented. The correlation between the results obtained with the proposed approach and those of theoretical modeling is a measure of the correctness of the method.
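As background, the slowness-time coherence (semblance) that underlies slowness picking can be sketched as below; the dynamic threshold selection proposed above concerns how an acceptable semblance value is chosen, which is not reproduced here, and the variable names are assumptions made for illustration.

```python
import numpy as np

def semblance(waveforms, offsets, dt, slowness, t0, window):
    """Semblance of an array of full waveforms for one trial slowness.

    waveforms : (n_receivers, n_samples) array
    offsets   : transmitter-receiver spacings (consistent units with slowness)
    dt        : sample interval (s); t0, window: coherence window start/length (s)
    Returns a value in [0, 1]; a peak over trial slownesses marks an arrival.
    """
    n_rec, n_samp = waveforms.shape
    i0, nwin = int(round(t0 / dt)), int(round(window / dt))
    num, den = 0.0, 0.0
    for k in range(nwin):
        stack = 0.0
        for m in range(n_rec):
            idx = i0 + k + int(round(slowness * offsets[m] / dt))  # moveout shift
            if 0 <= idx < n_samp:
                a = waveforms[m, idx]
                stack += a
                den += a * a
        num += stack * stack
    return num / (n_rec * den) if den > 0 else 0.0
```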

12.
The identification of sediment sources is fundamental to the management of increasingly scarce water resources. Tracing the origin of sediment with elemental geochemistry is a well-established approach to determining sediment provenance. Fundamental to the confident apportionment of sediment to its lithogenic sources is the modelling process. Recent approaches have incorporated distributions throughout the modelling process, including source contribution terms for two end-member sources. The shift from modelling source samples to modelling samples drawn from distributions has removed relationships, including potential correlations between elemental concentrations, from the modelling process. Here, we present a novel modelling approach that re-incorporates correlations between elemental concentrations and models distributions for source contribution terms for multiple source end members. Artificial mixtures, based on catchment source samples, were created to test the accuracy of this correlated distribution model and also to examine modelling approaches used in the literature. The most accurate model incorporates correlations between elements, uses the absolute mixing model difference and does not use any weighting. This model was then applied to identify the sources of sediment in three South East Queensland catchments and demonstrated that Quaternary Alluvium is the dominant source of sediment in these catchments (μ = 44%, σ = 12%). This study demonstrates that it is important to understand how different weightings may impact modelling results. Copyright © 2014 John Wiley & Sons, Ltd.
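A minimal sketch of the deterministic core of such a mixing model is given below (a single unweighted, absolute-difference unmixing of mean source concentrations; the correlated-distribution model described above would repeat this over many samples drawn from correlated source distributions, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def unmix(sources, mixture):
    """Estimate source proportions f (f >= 0, sum f = 1) that minimise the
    absolute mixing-model difference between observed and modelled
    element concentrations.

    sources : (n_sources, n_elements) source concentrations
    mixture : (n_elements,) concentrations of the sediment mixture
    """
    n = sources.shape[0]

    def objective(f):
        modelled = f @ sources
        return np.sum(np.abs(mixture - modelled))   # unweighted absolute difference

    cons = ({"type": "eq", "fun": lambda f: np.sum(f) - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x
```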

13.
Computation of complex-valued traveltimes provides an efficient approach to describe the seismic wave attenuation for applications like attenuation tomography, inverse Q filtering and Kirchhoff migration with absorption compensation. Attenuating acoustic transverse isotropy can be used to describe the directional variation of velocity and attenuation of P-waves in thin-bedding geological structures. We present an approximate method to solve the acoustic eikonal equation for an attenuating transversely isotropic medium with a vertical symmetry axis. We take into account two similar parameterizations of an attenuating vertical symmetry axis medium. The first parameterization uses the normal moveout velocity, whereas the second parameterization uses the horizontal velocity. For each parameterization, we combine perturbation theory and the Shanks transform in different ways to derive analytic solutions. Numerical examples show that the analytic solutions derived from the second parameterization yield better accuracy. The Shanks transform solution with respect to only the anellipticity parameter from the second parameterization is demonstrated numerically to be the most accurate among all the analytic solutions.
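For context, the Shanks transform used to accelerate such perturbation series is the standard nonlinear sequence transformation applied to the partial sums A_n of the expansion (here in the anellipticity parameter):

```latex
S(A_n) \;=\; \frac{A_{n+1}\,A_{n-1} - A_n^{2}}{A_{n+1} + A_{n-1} - 2A_n}
```

Applied to the first few terms of a slowly converging (or asymptotic) traveltime expansion, S(A_n) often approximates the limit far better than the truncated series itself, which is the motivation for combining it with perturbation theory above.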

14.

Different approaches used in hydrological modelling are compared in terms of the way each one takes the rainfall data into account. We examine the errors associated with accounting for rainfall variability, whether in hydrological modelling (distributed vs lumped models) or in computing catchment rainfall, as well as the impact of each approach on the representativeness of the parameters it uses. The database consists of 1859 rainfall events, distributed on 500 basins, located in the southeast of France with areas ranging from 6.2 to 2851 km2. The study uses as reference the hydrographs computed by a distributed hydrological model from radar rainfall. This allows us to compare and to test the effects of various simplifications to the process when taking rainfall information (complete rain field vs sampled rainfall) and rainfall–runoff modelling (lumped vs distributed) into account. The results appear to show that, in general, the sampling effect can lead to errors in discharge at the outlet that are as great as, or even greater than, those one would get with a fully lumped approach. We found that small catchments are more sensitive to the uncertainties in catchment rainfall input generated by sampling rainfall data as seen through a raingauge network. Conversely, the larger catchments are more sensitive to uncertainties generated when the spatial variability of rainfall events is not taken into account. These uncertainties can be compensated for relatively easily by recalibrating the parameters of the hydrological model, although such recalibrations cause the parameter in question to completely lose physical meaning.

Citation Arnaud, P., Lavabre, J., Fouchier, C., Diss, S. & Javelle, P. (2011) Sensitivity of hydrological models to uncertainty of rainfall input. Hydrol. Sci. J. 56(3), 397–410.

15.

Estimates of groundwater recharge are often needed for a variety of groundwater resource evaluation purposes. A method for estimating long-term groundwater recharge and actual evapotranspiration that has not previously been described in the English-language literature is presented. The method uses long-term average annual precipitation, runoff, potential evaporation, and crop-yield information, together with empirical parameter curves that depend on soil and crop types, to determine long-term average annual groundwater recharge (GWR). The method is tested using historic lysimeter records from 10 lysimeters at Coshocton, Ohio, USA. Considering the coarse information required, the method provides good estimates of groundwater recharge and actual evapotranspiration, and is sensitive to a range of cropping and land-use conditions. Problems with practical application of the technique are mentioned, including the need for further testing of the given parameter curves and for incorporating parameters that describe current farming practices and other land uses. The method can be used for urban conditions and can be incorporated into a GIS framework for rapid, large-area, spatially distributed estimation of GWR. An example application of the method is given.
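The backbone of such a method is the long-term water balance. Assuming negligible storage change over the averaging period (an assumption stated here, not taken from the abstract), the recharge estimate reduces to:

```latex
\overline{GWR} \;\approx\; \overline{P} \;-\; \overline{AET} \;-\; \overline{RO}
```

where P is precipitation, RO is runoff, and AET is the actual evapotranspiration obtained from potential evaporation via the empirical, soil- and crop-dependent parameter curves; the curves themselves are the method's empirical content and are not reproduced here.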

16.
The jet erosion test (JET) is a widely applied method for deriving the erodibility of cohesive soils and sediments. There are suggestions in the literature that further examination of the method widely used to interpret the results of these erosion tests is warranted. This paper presents an alternative approach for such interpretation based on the principle of energy conservation. This new approach recognizes that evaluation of erodibility using the jet tester should involve the mass of soil eroded, so determination of this eroded mass (or else scour volume and bulk density) is required. The theory partitions jet kinetic energy flux into that involved in eroding soil, the remainder being dissipated in a variety of mechanisms. The energy required to erode soil is defined as the product of the eroded mass and a resistance parameter which is the energy required to entrain unit mass of soil, denoted J (in J/kg), whose magnitude is sought. An effective component rate of jet energy consumption is defined which depends on depth of scour penetration by the jet, but not on soil type, or the uniformity of the soil type being investigated. Application of the theory depends on experimentally determining the spatial form of jet energy consumption displayed in erosion of a uniform body of soil, an approach of general application. The theory then allows determination of the soil resistance parameter J as a function of depth of scour penetration into any soil profile, thus evaluating such profile variation in erodibility as may exist. This parameter J has been used with the same meaning in soil and gully erosion studies for the last 25 years. Application of this approach will appear in a companion publication as part 2. Copyright © 2017 John Wiley & Sons, Ltd.
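In symbols (notation assumed here for illustration, not necessarily the paper's), the energy partition reads:

```latex
% Kinetic energy flux delivered by the jet (density \rho, nozzle area A_0, discharge Q, velocity U_0)
P_{\mathrm{jet}} = \tfrac{1}{2}\,\rho\,Q\,U_0^{2} = \tfrac{1}{2}\,\rho\,A_0\,U_0^{3}
% Energy consumed in erosion and the soil resistance parameter J (J/kg)
E_{\mathrm{erosion}} = f_e \!\int P_{\mathrm{jet}}\,\mathrm{d}t = J\,m_{\mathrm{eroded}}
\;\;\Longrightarrow\;\;
J = \frac{f_e \int P_{\mathrm{jet}}\,\mathrm{d}t}{m_{\mathrm{eroded}}}
```

where f_e is the effective, scour-depth-dependent fraction of jet energy consumed in eroding soil; measuring the eroded mass (or scour volume and bulk density) then yields J directly.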

17.
The exact representation of unbounded soil comprises the single-output–single-input relationship between force and displacement in the physical or transformed space. This relationship is a global convolution integral in the time domain. Rational approximation of its frequency response function (the frequency-domain convolution kernel), which is then realized in the time domain as a lumped-parameter model or recursive formula, is an effective way to obtain a temporally local representation of unbounded soil. Stability and identification of the rational approximation are studied in this paper. A necessary and sufficient stability condition is presented based on the stability theory of linear systems. A parameter identification method is further developed by directly solving a nonlinear least-squares fitting problem with a hybrid genetic-simplex optimization algorithm, in which the proposed stability condition is enforced as a constraint by the penalty function method. Stability is thus guaranteed a priori. The infrequent and undesirable resonance phenomenon in stable systems is also discussed. The proposed stability condition and identification method are verified with several dynamic soil–structure-interaction examples. Copyright © 2009 John Wiley & Sons, Ltd.
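A simplified sketch of the identification problem is given below: a SciPy least-squares fit of a rational function to sampled frequency-response data, with the stability requirement enforced through a penalty on denominator roots that have a non-negative real part. The paper itself uses a hybrid genetic-simplex optimizer and its own stability condition, neither of which is reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_rational(omega, H, n_num=3, n_den=3, penalty=1e3):
    """Fit H(i*omega) by N(s)/D(s) with monic D, penalising unstable fits
    (denominator roots with non-negative real part).

    omega : sampled angular frequencies; H : complex frequency-response samples
    """
    def residuals(c):
        num = c[:n_num + 1]                              # numerator, highest power first
        den = np.concatenate(([1.0], c[n_num + 1:]))     # monic denominator
        s = 1j * omega
        Hfit = np.polyval(num, s) / np.polyval(den, s)
        r = np.concatenate([(Hfit - H).real, (Hfit - H).imag])
        # stability penalty: all denominator roots should satisfy Re(root) < 0
        bad = np.clip(np.roots(den).real, 0.0, None)
        return np.concatenate([r, penalty * bad])

    c0 = np.ones(n_num + 1 + n_den)                      # simple stable-ish start
    return least_squares(residuals, c0).x
```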

18.
Wavefront construction (WFC) methods provide robust tools for computing ray theoretical traveltimes and amplitudes for multivalued wavefields. They simulate a wavefront propagating through a model using a mesh that is refined adaptively to ensure accuracy as rays diverge during propagation. However, an implementation for quasi-shear (qS) waves in anisotropic media can be very difficult, since the two qS slowness surfaces and wavefronts often intersect at shear-wave singularities. This complicates the task of creating the initial wavefront meshes, as a particular wavefront will be the faster qS-wave in some directions, but slower in others. Analogous problems arise during interpolation as the wavefront propagates, when an existing mesh cell that crosses a singularity on the wavefront is subdivided. Particle motion vectors provide the key information for correctly generating and interpolating wavefront meshes, as they will normally change slowly along a wavefront. Our implementation tests particle motion vectors to ensure correct initialization and propagation of the mesh for the chosen wave type and to confirm that the vectors change gradually along the wavefront. With this approach, the method provides a robust and efficient algorithm for modeling shear-wave propagation in a 3-D, anisotropic medium. We have successfully tested the qS-wave WFC in transversely isotropic models that include line singularities and kiss singularities. Results from a VTI model with a strong vertical gradient in velocity also show the accuracy of the implementation. In addition, we demonstrate that the WFC method can model a wavefront with a triplication caused by intrinsic anisotropy and that its multivalued traveltimes are mapped accurately. Finally, qS-wave synthetic seismograms are validated against an independent, full-waveform solution.

19.
It has been shown in the past that the interval-NMO velocity and the non-ellipticity parameter largely control the P-wave reflection time moveout of VTI media. To invert for these two parameters, one needs either reasonably large offsets, or some structure in the subsurface in combination with relatively mild lateral velocity variation. This paper deals with a simulation of an inversion approach, building on the assumption that accurately measured values of V_NMO, as defined by small-offset asymptotics for a particular reflector, are available. Instead of such measurements we take synthetically computed data. First, an isotropic model is constructed which explains these V_NMO values. Subsequently, residual moveout in common image gathers is modelled by ray tracing (replacing real data), along with its sensitivity to changes in the interval-NMO velocity and the non-ellipticity parameter under the constraint that V_NMO is preserved. This enables iterative updating of the non-ellipticity parameter and the interval-NMO velocity in a layer that can be laterally inhomogeneous. This approach is successfully applied for a mildly dipping reflector at the bottom of a layer with laterally varying medium parameters. With the exact V_NMO assumed to be given, lateral inhomogeneity and anisotropy can be distinguished for such a situation. However, for another example with a homogeneous VTI layer overlying a curved reflector with dip up to 30°, there appears to be an ambiguity which can be understood by theoretical analysis. Consistent with existing theory using the NMO ellipse, the presented approach is successfully applied to the latter example if V_NMO in the strike direction is combined with residual moveout in the dip direction.
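For orientation, the nonhyperbolic moveout approximation commonly used in this context (Alkhalifah-Tsvankin) makes the roles of the two parameters explicit: small offsets constrain only V_NMO, while the x⁴ term carrying the anellipticity (non-ellipticity) parameter η becomes significant only at larger offsets, which is why additional information such as structure or dip-direction residual moveout is needed. This is quoted as general background, not as the specific parameterization used above.

```latex
t^{2}(x) \;=\; t_0^{2} \;+\; \frac{x^{2}}{V_{\mathrm{NMO}}^{2}}
\;-\; \frac{2\,\eta\,x^{4}}{V_{\mathrm{NMO}}^{2}\left[\,t_0^{2}\,V_{\mathrm{NMO}}^{2} + (1+2\eta)\,x^{2}\right]}
```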

20.
Identifying high groundwater recharge areas is important for the conservation of groundwater quality and quantity. A common practice in previous studies is to estimate groundwater recharge potential (GRP) using recharge potential analysis (RPA) under different environments, and to use the estimated GRP to identify sites with high groundwater recharge potential. However, in those studies the RPA parameters are defined subjectively. To remove this subjectivity, the present study proposes a systematic approach that defines the RPA parameter values based on the theory of parameter identification. Dissolved oxygen (DO) indicators are used to calibrate the RPA parameters. This calibration improves the correlation coefficient between the DO indicators and the computed GRP values from 0.63 to 0.87. Compared with the initial values, the estimated RPA parameters better represent the field infiltration characteristics. This result also indicates that defining the RPA parameter values based on DO indicators is necessary and important for accuracy. The calibrated parameters are used to estimate the GRP distribution of Taiwan's Pingtung Plain. The GRP values are delineated into five levels. High and excellent GRP areas are defined as high recharge areas, which compose about 26.74% of the study area. Based on the proposed method, the estimated GRP distribution can accurately represent the study area's field recharge characteristics. These results can serve as a good reference for groundwater recharge analyses, particularly where well data are limited or difficult to obtain.
