Similar Literature (20 records found)
1.
Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities—this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts.
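As an illustration of the frugal end of this spectrum, the sketch below computes composite scaled sensitivities from a forward-difference Jacobian, needing only one model run per parameter plus a base run, all independent and hence parallelizable. `run_model`, the parameter vector, and the weighting scheme are placeholders, not the study's actual code.

```python
import numpy as np

def composite_scaled_sensitivities(run_model, params, obs_weights, rel_step=0.01):
    """Composite scaled sensitivities (CSS) from a forward-difference Jacobian.

    run_model   : callable mapping a parameter vector to simulated observations
    params      : 1-D array of parameter values at which to evaluate sensitivities
    obs_weights : 1-D array of observation weights (e.g., 1/sigma**2)
    Requires only len(params) + 1 model runs.
    """
    params = np.asarray(params, dtype=float)
    base = np.asarray(run_model(params), dtype=float)
    n_obs, n_par = base.size, params.size
    jac = np.empty((n_obs, n_par))
    for j in range(n_par):
        perturbed = params.copy()
        dp = rel_step * params[j] if params[j] != 0 else rel_step
        perturbed[j] += dp
        jac[:, j] = (np.asarray(run_model(perturbed), dtype=float) - base) / dp
    # Scale each column by its parameter value and the observation weights,
    # then collapse to one number per parameter (root-mean-square over obs).
    scaled = jac * params[np.newaxis, :] * np.sqrt(obs_weights)[:, np.newaxis]
    return np.sqrt(np.mean(scaled**2, axis=0))
```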

2.
The majority of slug tests done at sites of shallow groundwater contamination are performed in wells screened across the water table and are affected by mechanisms beyond those considered in the standard slug‐test models. These additional mechanisms give rise to a number of practical issues that are yet to be fully resolved; four of these are addressed here. The wells in which slug tests are performed were rarely installed for that purpose, so the well design can result in problematic (small signal‐to‐noise ratio) test data. The suitability of a particular well design should thus always be assessed prior to field testing. In slug tests of short duration, it can be difficult to identify which portion of the test represents filter‐pack drainage and which represents formation response; application of a mass balance can help confirm that test phases have been correctly identified. A key parameter required for all slug test models is the casing radius. However, in this setting, the effective casing radius (borehole radius corrected for filter‐pack porosity), not the nominal well radius, is required; this effective radius is best estimated directly from test data. Finally, although conventional slug‐test models do not consider filter‐pack drainage, these models will yield reasonable hydraulic conductivity estimates when applied to the formation‐response phase of a test from an appropriately developed well.
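A minimal sketch of the conventional a-priori correction for the effective casing radius mentioned above, assuming the standard form r_eff = sqrt(r_c^2 + n(r_b^2 - r_c^2)); note the abstract recommends estimating the effective radius directly from test data, so this formula is only a starting point.

```python
import math

def effective_casing_radius(r_casing, r_borehole, filter_pack_porosity):
    """Effective casing radius for a slug test in a well screened across
    the water table, correcting the nominal casing radius for water that
    drains from (or refills) the filter-pack pore space."""
    return math.sqrt(r_casing**2
                     + filter_pack_porosity * (r_borehole**2 - r_casing**2))

# Example: 5-cm casing in a 10-cm borehole with 30% filter-pack porosity.
print(effective_casing_radius(0.05, 0.10, 0.30))  # ~0.069 m
```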

3.
Velocity analysis based on data correlation
Several methods exist to automatically obtain a velocity model from seismic data via optimization. Migration velocity analysis relies on an imaging condition and seeks the velocity model that optimally focuses the migrated image. This approach has been proven to be very successful. However, most migration methods use simplified physics to make them computationally feasible and herein lies the restriction of migration velocity analysis. Waveform inversion methods use the full wave equation to model the observed data and more complicated physics can be incorporated. Unfortunately, due to the band‐limited nature of the data, the resulting inverse problem is highly nonlinear. Simply fitting the data in a least‐squares sense by using a gradient‐based optimization method is sometimes problematic. In this paper, we propose a novel method that measures the amount of focusing in the data domain rather than the image domain. As a first test of the method, we include some examples for 1D velocity models and the convolutional model.
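To make the convolutional model mentioned in the last sentence concrete, the sketch below builds a 1-D synthetic trace as a reflectivity series convolved with a band-limited wavelet; the reflector times, amplitudes, and peak frequency are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Convolutional model of a 1-D seismogram: the recorded trace is the
# reflectivity series convolved with a band-limited source wavelet.
dt = 0.004                       # sample interval, s
t = np.arange(0, 1.0, dt)

# Hypothetical sparse reflectivity (two-way times and amplitudes assumed).
reflectivity = np.zeros_like(t)
reflectivity[[50, 120, 180]] = [0.8, -0.5, 0.3]

# Ricker wavelet with a 25 Hz peak frequency (a common band-limited choice).
f0 = 25.0
tw = np.arange(-0.1, 0.1, dt)
ricker = (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)

trace = np.convolve(reflectivity, ricker, mode="same")
```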

4.
This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (10s–1000s) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.

5.
With the introduction of high‐resolution digital elevation models, it is possible to use digital terrain analysis to extract small streams. In order to map streams correctly, it is necessary to remove errors and artificial sinks in the digital elevation models. This step is known as preprocessing and will allow water to move across a digital landscape. However, new challenges are introduced with increasing resolution because the effect of anthropogenic artefacts such as road embankments and bridges increases with increased resolution. These are problematic during the preprocessing step because they are elevated above the surrounding landscape and act as artificial dams. The aims of this study were to evaluate the effect of different preprocessing methods such as breaching and filling on digital elevation models with different resolutions (2, 4, 8, and 16 m) and to evaluate which preprocessing methods most accurately route water across road impoundments at actual culvert locations. A unique dataset with over 30,000 field‐mapped road culverts was used to assess the accuracy of stream networks derived from digital elevation models using different preprocessing methods. Our results showed that the accuracy of stream networks increases with increasing resolution. Breaching created the most accurate stream networks at all resolutions, whereas filling was the least accurate. Burning streams from the topographic map across roads increased the accuracy for all methods and resolutions. In addition, the impact in terms of change in area and absolute volume between original and preprocessed digital elevation models was smaller for breaching than for filling. With the appropriate methods, it is possible to extract accurate stream networks from high‐resolution digital elevation models with extensive road networks, thus providing forest managers with stream networks that can be used when planning operations in wet areas or areas near streams to prevent rutting, sediment transport, and mercury export.
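The filling variant of preprocessing can be sketched with the priority-flood algorithm below (a minimal pure-Python version, not the implementation used in the study); breaching differs in that it cuts a drainage path through the blocking cells, e.g. a road embankment, instead of raising the sink.

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling (minimal sketch).

    Raises every cell of an artificial sink to the lowest spill elevation
    so water can be routed off the grid."""
    dem = np.asarray(dem, dtype=float)
    nrows, ncols = dem.shape
    filled = np.full_like(dem, np.inf)
    heap, visited = [], np.zeros(dem.shape, dtype=bool)
    # Seed the queue with all edge cells: water can always leave the grid there.
    for r in range(nrows):
        for c in range(ncols):
            if r in (0, nrows - 1) or c in (0, ncols - 1):
                heapq.heappush(heap, (dem[r, c], r, c))
                visited[r, c] = True
    while heap:
        z, r, c = heapq.heappop(heap)
        filled[r, c] = z
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrows and 0 <= cc < ncols and not visited[rr, cc]:
                    visited[rr, cc] = True
                    # A neighbour can never drain below the spill level z.
                    heapq.heappush(heap, (max(dem[rr, cc], z), rr, cc))
    return filled
```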

6.
Obtaining chronological control for geomorphological sequences can be problematic due to the fragmentary and non‐sequential nature of sediment and landform archives. The robust analysis of 14C ages is often critical for the interpretation of these complicated sequences. This paper demonstrates a robust methodology for the 14C dating of geomorphological sequences using a case study from the lower Ribble valley, northwest England. The approach adopted incorporates the use of larger numbers of ages, targeting plant macrofossils, obtaining replicate measurements from single horizons to assess the extent of reworking, and the use of Bayesian approaches to test models of the relative order of events. The extent of reworking of organic materials and the space‐time dynamics of fluvial change mean that it is critical that chronological control is sufficiently resourced with 14C measurements. As a result, Bayesian approaches are increasingly important for the evaluation of large data sets. Assessing the conformability of relative order models informed by interpretation of the geomorphology can identify contexts or materials that are out of sequence, and focuses attention on problem materials (reworking) and errors in interpretation (outlier ages). These relative order models provide a framework for the interrogation of sequences and a means for securing probability‐based age estimates for events that occur between dated contexts. This approach has potential value in constraining the sequence of geomorphological development at scales that vary from individual sites to a catchment or region, furthering understanding of forcing and change in geomorphological systems. Copyright © 2008 John Wiley & Sons, Ltd.

7.
Time series analysis is a data-driven approach to analyze time series of heads measured in an observation well. Time series models are commonly much simpler and give much better fits than regular groundwater models. Time series analysis with response functions gives insight into why heads vary, while such insight is difficult to gain with black box models out of the artificial intelligence world. An important application is to quantify the contributions to the head variation of different stresses on the aquifer, such as rainfall and evaporation, pumping, and surface water levels. Time series analysis may be applied to answer many groundwater questions without the need for a regular groundwater model, such as what is the drawdown caused by a pumping station? Or, how long will it take before groundwater levels recover after a period of drought? Even when a regular groundwater model is needed to solve a groundwater problem, time series analysis can be of great value. It can be used to clean up the data, identify the major stresses on the aquifer, determine the most important processes that affect flow in the aquifer, and give an indication of the fit that can be expected. In addition, it can be used to determine calibration targets for steady-state models, and it can provide several alternative calibration methods for transient models. In summary, the overarching message of this paper is that it would be wise to do time series analysis for any application that uses measured groundwater heads.
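A minimal sketch of the response-function idea: simulated head variation as the convolution of a recharge stress with a scaled gamma impulse response (a common choice in this literature). The stress series and the parameter values are synthetic placeholders.

```python
import numpy as np
from scipy.stats import gamma

# Head variation as the convolution of a recharge stress with a scaled
# gamma impulse response; A, n, a are illustrative, not fitted values.
days = 3 * 365
rng = np.random.default_rng(0)
recharge = rng.exponential(0.002, size=days)      # m/day, synthetic stress

A, n, a = 500.0, 1.5, 30.0                        # gain, shape, time scale
t = np.arange(1, 366)
step = A * gamma.cdf(t, n, scale=a)               # step response
impulse = np.diff(np.concatenate(([0.0], step)))  # daily block response

head_variation = np.convolve(recharge, impulse)[:days]
head = 10.0 + head_variation                      # constant base level, m
```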

8.
The Future Midwestern Landscapes (FML) project is part of the U.S. Environmental Protection Agency's Ecosystem Services Research Program. The goal of the FML project is to quantify changes in ecosystem services across the Midwestern region as a result of the growing demand for biofuels. Watershed models are an efficient way to quantify ecosystem services of water quality and quantity. By calibrating models, we can better capture watershed characteristics before they are applied to make predictions. The Kaskaskia River watershed in Illinois was selected to investigate the effectiveness of different calibration strategies (single‐site and multi‐site calibrations) for streamflow, total suspended sediment (TSS) and total nitrogen (TN) loadings using the Soil and Water Assessment Tool. Four USGS gauges were evaluated in this study. Single‐site calibration was performed from a downstream site to an upstream site, and multi‐site calibration was performed and fine‐tuned based on the single‐site calibration results. Generally, simulated streamflow and TSS were not much affected by different calibration strategies. However, when single‐site calibration was performed at the most downstream site, the Nash–Sutcliffe efficiency (NSE) values for TN ranged between −0.09 and 0.53 at the other sites; and when single‐site calibration was performed at the most upstream site, the NSE values ranged between −8.38 and −0.07 for the other sites. The NSE values for TN were improved to 0.50–0.59 for all four sites when multi‐site calibration was performed. The results of the multi‐site calibration and validation showed an improvement in model performance on TN and highlighted that multi‐site calibrations are needed to assess the hydrological and water quality processes at various spatial scales. Copyright © 2012 John Wiley & Sons, Ltd.
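For reference, the Nash–Sutcliffe efficiency quoted above is computed as follows (a generic implementation, not tied to SWAT or this study's data):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 performs no better
    than the mean of the observations, and negative values are worse."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))
```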

9.
C. Dobler & F. Pappenberger, Hydrological Processes, 2013, 27(26): 3922–3940
The increasing complexity of hydrological models results in a large number of parameters to be estimated. In order to better understand how these complex models work, efficient screening methods are required to identify the most important parameters. This is of particular importance for models that are used within an operational real‐time forecasting chain such as HQsim. The objectives of this investigation are to (i) identify the most sensitive parameters of the complex HQsim model applied in the Alpine Lech catchment and (ii) compare model parameter sensitivity rankings attained from three global sensitivity analysis techniques. The techniques presented are (i) regional sensitivity analysis, (ii) Morris analysis and (iii) state‐dependent parameter modelling. The results indicate that parameters affecting snow melt as well as processes in the unsaturated soil zone reveal high significance in the analysed catchment. The snow melt parameters show clear temporal patterns in the sensitivity, whereas most of the parameters affecting processes in the unsaturated soil zone do not vary in importance across the year. Overall, the maximum degree day factor (meltfunc_max) has been identified to play a key role within the HQsim model. Although the parameter sensitivity rankings are equivalent between methods for a number of parameters, differing results were obtained for several key parameters. An uncertainty analysis demonstrates that a parameter ranking attained from only one method is subject to large uncertainty. Copyright © 2012 John Wiley & Sons, Ltd.
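Of the three screening techniques compared, the Morris method is the most easily sketched; the minimal one-at-a-time implementation below returns the mean absolute elementary effect (mu_star) per parameter, where large mu_star flags influential parameters. The trajectory count and level grid are illustrative defaults, not the study's settings.

```python
import numpy as np

def morris_mu_star(model, bounds, n_traj=20, levels=4, seed=0):
    """One-at-a-time Morris screening (minimal sketch).

    model  : callable mapping a physical parameter vector to a scalar output
    bounds : (k, 2) array of lower/upper parameter bounds
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    k = len(bounds)
    delta = levels / (2.0 * (levels - 1))           # standard Morris step
    effects = np.zeros((n_traj, k))
    for t in range(n_traj):
        # Random base point on the level grid, kept within [0, 1 - delta].
        x = rng.integers(0, levels // 2, size=k) / (levels - 1)
        y = model(bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0]))
        for j in rng.permutation(k):                # perturb one factor at a time
            x[j] += delta
            y_new = model(bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0]))
            effects[t, j] = (y_new - y) / delta
            y = y_new
    return np.abs(effects).mean(axis=0)             # mu_star per parameter
```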

10.
Generally, forest transpiration models contain model parameters that cannot be measured independently and therefore are tuned to fit the model results to measurements. Only unique parameter estimates with high accuracy can be used for extrapolation in time or space. However, parameter identification problems may occur as a result of the properties of the data set. Time‐series of environmental conditions, which control the forest transpiration, may contain periods with redundant or coupled information, so called collinearity, and other combinations of conditions may be measured only with difficulty or incompletely. The aim of this study is to select environmental conditions that yield a unique parameter set of a canopy conductance model. The parameter identification method based on localization of information (PIMLI) was used to calculate the information content of every individual artificial transpiration measurement. It is concluded that every measurement has its own information with respect to a parameter. Independent criteria were assessed to localize the environmental conditions, which contain measurements with most information. These measurements were used in separate subdata sets to identify the parameters. The selected measurements do not overlap and the accuracies of the parameter estimates are maximized. Measurements that were not selected do not contain additional information that can be used to further maximize the parameter accuracy. Thereupon, the independent criteria were used to select eddy correlation measurements and parameters were identified with only the selected measurements. It is concluded that, for this forest and data set, PIMLI identifies a unique parameter set with high accuracy, whereas conventional calibrations on subdata sets give non‐unique parameter estimates. Copyright © 2001 John Wiley & Sons, Ltd.

11.
Migration velocity analysis and waveform inversion
Least‐squares inversion of seismic reflection waveform data can reconstruct remarkably detailed models of subsurface structure and take into account essentially any physics of seismic wave propagation that can be modelled. However, the waveform inversion objective has many spurious local minima, hence convergence of descent methods (mandatory because of problem size) to useful Earth models requires accurate initial estimates of long‐scale velocity structure. Migration velocity analysis, on the other hand, is capable of correcting substantially erroneous initial estimates of velocity at long scales. Migration velocity analysis is based on prestack depth migration, which is in turn based on linearized acoustic modelling (Born or single‐scattering approximation). Two major variants of prestack depth migration, using binning of surface data and Claerbout's survey‐sinking concept respectively, are in widespread use. Each type of prestack migration produces an image volume depending on redundant parameters and supplies a condition on the image volume, which expresses consistency between data and velocity model and is hence a basis for velocity analysis. The survey‐sinking (depth‐oriented) approach to prestack migration is less subject to kinematic artefacts than is the binning‐based (surface‐oriented) approach. Because kinematic artefacts strongly violate the consistency or semblance conditions, this observation suggests that velocity analysis based on depth‐oriented prestack migration may be more appropriate in kinematically complex areas. Appropriate choice of objective (differential semblance) turns either form of migration velocity analysis into an optimization problem, for which Newton‐like methods exhibit little tendency to stagnate at nonglobal minima. The extended modelling concept links migration velocity analysis to the apparently unrelated waveform inversion approach to estimation of Earth structure: from this point of view, migration velocity analysis is a solution method for the linearized waveform inversion problem. Extended modelling also provides a basis for a nonlinear generalization of migration velocity analysis. Preliminary numerical evidence suggests a new approach to nonlinear waveform inversion, which may combine the global convergence of velocity analysis with the physical fidelity of model‐based data fitting.

12.
Ground motions with strong velocity pulses are of particular interest to structural earthquake engineers because they have the potential to impose extreme seismic demands on structures. Accurate classification of records is essential in several earthquake engineering fields where pulse‐like ground motions should be distinguished from nonpulse‐like records, such as probabilistic seismic hazard analysis and seismic risk assessment of structures. This study proposes an effective method to identify pulse‐like ground motions having single, multiple, or irregular pulses. To effectively characterize the intrinsic pulse‐like features, the concept of an energy‐based significant velocity half‐cycle, which is visually identifiable, is first presented. Ground motions are classified into 6 categories according to the number of significant half‐cycles in the velocity time series. The pulse energy ratio is used as an indicator for quantitative identification, and then the energy threshold values for each type of ground motions are determined. Comprehensive comparisons of the proposed approach with 4 benchmark identification methods are conducted, and the results indicate that the methodology presented in this study can more accurately and efficiently distinguish pulse‐like and nonpulse‐like ground motions. Also presented are some insights into the reasons why many pulse‐like ground motions are not detected successfully by each of the benchmark methods.
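A hedged sketch of the energy-based half-cycle idea: split the velocity record at zero crossings and compute each half-cycle's share of the total energy. The classification thresholds from the paper are not reproduced here.

```python
import numpy as np

def half_cycle_energy_fractions(velocity, dt):
    """Fractional energy of each velocity half-cycle.

    A half-cycle is a segment between consecutive zero crossings; its
    energy is taken as the integral of v**2 over the segment. A few
    half-cycles carrying most of the energy suggests a pulse-like record
    (the actual threshold values would be assumptions)."""
    v = np.asarray(velocity, dtype=float)
    signs = np.signbit(v).astype(int)
    crossings = np.where(np.diff(signs) != 0)[0] + 1
    segments = np.split(v, crossings)
    energies = np.array([np.sum(seg**2) * dt for seg in segments])
    return energies / energies.sum()
```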

13.
There is great interest in modelling the export of nitrogen (N) and phosphorus (P) from agricultural fields because of ongoing challenges of eutrophication. However, the use of existing hydrochemistry models can be problematic in cold regions because models frequently employ incomplete or conceptually incorrect representations of the dominant cold regions hydrological processes and are overparameterized, often with insufficient data for validation. Here, a process‐based N model, WINTRA, which is coupled to a physically based cold regions hydrological model, was expanded to simulate P and account for overwinter soil nutrient biochemical cycling. An inverse modelling approach, using this model with consideration of parameter equifinality, was applied to an intensively monitored agricultural basin in Manitoba, Canada, to help identify the main climate, soil, and anthropogenic controls on nutrient export. Consistent with observations, the model results suggest that snow water equivalent, melt rate, snow cover depletion rate, and contributing area for run‐off generation determine the opportunity time and surface area for run‐off–soil interaction. These physical controls have not been addressed in existing models. Results also show that the time lag between the start of snowmelt and the arrival of peak nutrient concentration in run‐off increased with decreasing antecedent soil moisture content, highlighting potential implications of frozen soils for run‐off processes and hydrochemistry. The simulations showed total dissolved phosphorus (TDP) concentration peaks generally arriving earlier than NO3 peaks but also decreasing faster afterwards, which suggests a significant contribution of plant‐residue TDP to early snowmelt run‐off. Antecedent fall tillage and fertilizer application increased TDP concentrations in spring snowmelt run‐off but did not consistently affect NO3 run‐off. In this case, the antecedent soil moisture content seemed to have had a dominant effect on overwinter soil N biogeochemical processes such as mineralization, which are often ignored in models. This work demonstrates both the need for better representation of cold regions processes in hydrochemical models and the model improvements that are possible if these are included.

14.
It is well-known that the phase center of a Global Navigation Satellite System (GNSS) antenna is not a stable point coinciding with a mechanical reference. The phase center position depends on the direction of the received signal, and is antenna- and signal-dependent. Phase center correction (PCC) models of GNSS antennas have been available for several years. The first method to create antenna PCC models was the relative field calibration procedure. Currently only absolute calibration models are generally recommended for use. In this study we investigate the differences between position estimates obtained using individual and type-mean absolute antenna calibrations in order to better understand how receiver antenna calibration models contribute to the Global Positioning System (GPS) positioning error budget. The station positions were estimated with two absolute calibration models: the igs08.atx model, which contains type-mean calibration results, and individual antenna calibration models. Continuous GPS observations from selected Polish European Permanent Network (EPN) stations were used for these studies. The position time series were derived from the precise point positioning (PPP) technique using the NAPEOS scientific GNSS software package. The results show that the differences in the calibration models propagate directly into the position domain, affecting daily as well as sub-daily results. In daily solutions, the position offsets resulting from the use of individual calibrations instead of type-mean igs08.atx calibrations can reach up to 5 mm in the Up component, while in the horizontal components they generally stay below 1 mm. It was found that increasing the frequency of sub-daily coordinate solutions amplifies the effects of type-mean vs individual PCC-dependent differences, and also gives visible periodic variations in time series of GPS position differences.

15.
The need for accurate hydrologic analysis and rainfall–runoff modelling tools has been rapidly increasing because of the growing complexity of operational hydrologic and hydraulic problems associated with population growth, rapid urbanization and expansion of agricultural activities. Given the recent advances in remote sensing of physiographic features and the availability of near real‐time precipitation products, rainfall–runoff models are expected to predict runoff more accurately. In this study, we compare the performance and implementation requirements of two rainfall–runoff models for a semi‐urbanized watershed. One is a semi‐distributed conceptual model, the Hydrologic Engineering Center‐Hydrologic Modelling System (HEC‐HMS). The other is a physically based, distributed‐parameter hydrologic model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA). Four flood events that took place on the Leon Creek watershed, a sub‐watershed of the San Antonio River basin in Texas, were used in this study. The two models were driven by the Multisensor Precipitation Estimator radar products. One event (in 2007) was used for HEC‐HMS and GSSHA calibrations. Two events (in 2004 and 2007) were used for further calibration of HEC‐HMS. Three events (in 2002, 2004 and 2010) were used for model validation. In general, the physically based, distributed‐parameter model performed better than the conceptual model and required less calibration. The two models were prepared with the same minimum required input data, and the effort required to build the two models did not differ substantially. Copyright © 2012 John Wiley & Sons, Ltd.

16.
A number of methods have been proposed that utilize the time‐domain transformations of frequency‐dependent dynamic impedance functions to perform a time‐history analysis. Though these methods have been available in the literature for a number of years, the methods exhibit stability issues depending on how the model parameters are calibrated. In this study, a novel method is proposed with which the stability of a numerical integration scheme combined with a time‐domain representation of a frequency‐dependent dynamic impedance function can be evaluated. The method is verified with three independent recursive parameter models. The proposed method is expected to be a useful tool in evaluating the potential stability issue of a time‐domain analysis before running a full‐fledged nonlinear time‐domain analysis of a soil–structure system in which the dynamic impedance of a soil–foundation system is represented with a recursive parameter model. Copyright © 2015 John Wiley & Sons, Ltd.
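For a rational (recursive) parameter model, one basic stability check, independent of the integration-scheme analysis proposed above, is to locate the poles of the model's z-domain transfer function; the sketch below uses illustrative coefficients, not those of the cited study.

```python
import numpy as np

# A recursive parameter model represents a frequency-dependent impedance as
# a rational transfer function in z; the corresponding discrete-time filter
# is stable only if all poles (roots of the denominator) lie strictly
# inside the unit circle. Coefficients here are placeholders.
den = [1.0, -1.6, 0.64]                     # denominator of H(z), powers of z**-1
poles = np.roots(den)
print(poles, np.all(np.abs(poles) < 1.0))   # stable if True (here: pole 0.8, double)
```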

17.
Field tracer experiments and model calibrations indicate that the magnitude of dispersivity increases as a function of the scale at which observations are made. Calculations presented in this study suggest that some part of this scaling may be explained as an artifact of the models used. Specifically, a scaling-up of dispersivity will occur whenever an (n − 1)-dimensional model is calibrated or otherwise employed to describe an n-dimensional system. The calibrated coefficients for such models will depend not only on size of the contaminant plume or tracer experiment at the time of calibration, but will exhibit a size dependency beyond the calibration period. The magnitude of scaling appears to be sufficient to encompass the range of differences between laboratory measurements of dispersivity and model calibrations.

18.
Uncertainty in the estimation of hydrologic export of solutes has never been fully evaluated at the scale of a small‐watershed ecosystem. We used data from the Gomadansan Experimental Forest, Japan, Hubbard Brook Experimental Forest, USA, and Coweeta Hydrologic Laboratory, USA, to evaluate many sources of uncertainty, including the precision and accuracy of measurements, selection of models, and spatial and temporal variation. Uncertainty in the analysis of stream chemistry samples was generally small but could be large in relative terms for solutes near detection limits, as is common for ammonium and phosphate in forested catchments. Instantaneous flow deviated from the theoretical curve relating height to discharge by up to 10% at Hubbard Brook, but the resulting corrections to the theoretical curve generally amounted to <0.5% of annual flows. Calibrations were limited to low flows; uncertainties at high flows were not evaluated because of the difficulties in performing calibrations during events. However, high flows likely contribute more uncertainty to annual flows because of the greater volume of water that is exported during these events. Uncertainty in catchment area was as much as 5%, based on a comparison of digital elevation maps with ground surveys. Three different interpolation methods are used at the three sites to combine periodic chemistry samples with streamflow to calculate fluxes. The three methods differed by <5% in annual export calculations for calcium, but up to 12% for nitrate exports, when applied to a stream at Hubbard Brook for 1997–2008; nitrate has higher weekly variation at this site. Natural variation was larger than most other sources of uncertainty. Specifically, coefficients of variation across streams or across years, within site, for runoff and weighted annual concentrations of calcium, magnesium, potassium, sodium, sulphate, chloride, and silicate ranged from 5 to 50% and were even higher for nitrate. Uncertainty analysis can be used to guide efforts to improve confidence in estimated stream fluxes and also to optimize design of monitoring programmes. © 2014 The Authors. Hydrological Processes published by John Wiley & Sons, Ltd.
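One simple member of the family of interpolation schemes such comparisons include pairs each day's discharge with the nearest chemistry sample in time; the sketch below is a generic version of that idea, not the exact method used at any of the three sites.

```python
import numpy as np

def nearest_sample_flux(sample_days, conc, flow_days, flow):
    """Annual solute export by pairing each day's discharge with the
    concentration of the nearest chemistry sample in time."""
    sample_days = np.asarray(sample_days, dtype=float)
    flow_days = np.asarray(flow_days, dtype=float)
    nearest = np.abs(flow_days[:, None] - sample_days[None, :]).argmin(axis=1)
    daily_conc = np.asarray(conc, dtype=float)[nearest]
    # Flux units follow the inputs, e.g. (mg/L * L/day) summed over the year.
    return np.sum(daily_conc * np.asarray(flow, dtype=float))
```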

19.
In floodplains, anthropogenic features such as levees or road scarps control and influence flows. Up‐to‐date and accurate digital data about these features are needed for irrigation and flood mitigation purposes. Nowadays, LiDAR Digital Terrain Models (DTMs) covering large areas are available for public authorities, and there is widespread interest in the application of such models for the automatic or semiautomatic recognition of features. The automatic recognition of levees and road scarps from these models can offer a quick and accurate method to improve topographic databases for large‐scale applications. In mountainous contexts, geomorphometric indicators derived from DTMs have proven reliable for practical applications, and the use of statistical operators as thresholds has shown high reliability in identifying features. The goal of this research is to test whether similar approaches are also feasible in floodplains. Three different parameters are tested at different scales on a LiDAR DTM. The boxplot is applied to identify an objective threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the extractions. This analysis, in line with other works for different environments, underlined (1) that statistical parameters can offer an objective threshold to identify features with varying shape, size, and height, and (2) that the effectiveness of topographic parameters in identifying anthropogenic features is related to the dimension of the investigated areas. The analysis also showed that the shape of the investigated area has little influence on the quality of the results. While the effectiveness of residual topography had already been proven, the present study underlined that the use of entropy can also provide good extractions, with an overall quality comparable to that offered by residual topography, the only limitation being that the extracted features are slightly wider than the actual ones. Copyright © 2013 John Wiley & Sons, Ltd.
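A minimal sketch of the residual-topography indicator with the boxplot-derived threshold described above; the smoothing window size is an assumption to be tuned to feature width and DEM resolution.

```python
import numpy as np
from scipy import ndimage

def residual_topography_features(dem, window=21):
    """Flag candidate levee/scarp cells from residual topography.

    Residual topography = DEM minus its local mean; the detection
    threshold is the boxplot upper fence Q3 + 1.5*IQR, an objective
    statistical cutoff in the spirit of the study above."""
    dem = np.asarray(dem, dtype=float)
    residual = dem - ndimage.uniform_filter(dem, size=window)
    q1, q3 = np.percentile(residual, [25, 75])
    threshold = q3 + 1.5 * (q3 - q1)
    return residual > threshold          # boolean mask of elevated features
```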

20.
The velocity and dynamic pressure of debris flows are critical determinants of the impact of these natural phenomena on infrastructure. Therefore, the prediction of these parameters is critical for hazard assessment and vulnerability analysis. We present here an approach to predict the velocity of debris flows on the basis of the energy line concept. First, we obtained empirical and field‐based estimates of debris flow peak discharge, mean velocity at peak discharge and velocity, at channel bends and within the fans, of ten of the debris flow events that occurred in May 1998 in the area of Sarno, Southern Italy. We used these data to calibrate regression models that enable the prediction of velocity as a function of the vertical distance between the energy line and the surface. Despite the complexity in morphology and behaviour of these flows, the statistical fits were good and the debris flow velocities can be predicted with an associated uncertainty of less than 30% and less than 3 m s−1. We wrote code in Visual Basic for Applications (VBA) that runs within ArcGIS® to implement the results of these calibrations and enable the automatic production of velocity and dynamic pressure maps. The collected data and resulting empirical models constitute a realistic basis for more complex numerical modelling. In addition, the GIS implementation constitutes a useful decision‐support tool for real‐time hazard mitigation. Copyright © 2008 John Wiley & Sons, Ltd.
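The energy line concept ties velocity to the vertical distance dh between the energy line and the flow surface through the velocity head, v = sqrt(2*g*dh); the one-coefficient regression below illustrates the calibration idea with placeholder data, not the Sarno field estimates.

```python
import numpy as np

# Velocity head v**2/(2g) ~ vertical drop dh below the energy line, so
# v = c*sqrt(2*g*dh), where c absorbs losses. Data here are assumed.
g = 9.81
dh = np.array([0.5, 1.2, 2.0, 3.5, 5.0])        # m, placeholder distances
v_obs = np.array([2.8, 4.3, 5.6, 7.5, 8.8])     # m/s, placeholder velocities

x = np.sqrt(2 * g * dh)
c = np.sum(x * v_obs) / np.sum(x * x)           # least squares through the origin
v_pred = c * x                                  # regression-predicted velocities
```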
