Similar Documents
20 similar documents retrieved (search time: 0 ms)
1.
Full‐waveform inversion is an appealing technique for time‐lapse imaging, especially when prior model information is included in the inversion workflow. Once the baseline reconstruction is achieved, several strategies can be used to assess the physical parameter changes, such as the parallel difference (two separate inversions of the baseline and monitor data sets), sequential difference (inversion of the monitor data set starting from the recovered baseline model) and double‐difference (inversion of the difference data starting from the recovered baseline model) strategies. Using synthetic Marmousi data sets, we investigate which strategy should be adopted to obtain more robust and more accurate time‐lapse velocity changes in noise‐free and noisy environments. This synthetic application demonstrates that the double‐difference strategy provides the most robust time‐lapse result. In addition, when prior information exists on the location of expected variations, we propose target‐oriented time‐lapse imaging using regularized full‐waveform inversion that includes a prior model and model weighting. This scheme applies strong prior model constraints outside the expected areas of time‐lapse changes and relatively weak prior constraints in the time‐lapse target zones. In applying this process to the Marmousi model data set, the local resolution analysis performed with spike tests shows that the target‐oriented inversion prevents the occurrence of artefacts outside the target areas, which could otherwise contaminate and compromise the reconstruction of the effective time‐lapse changes, especially when using the sequential difference strategy. In a strongly noisy case, the target‐oriented prior model weighting ensures the same behaviour for both time‐lapse strategies, the double‐difference and the sequential difference, and leads to a more robust reconstruction of the weak time‐lapse changes.
The double‐difference strategy can deliver more accurate time‐lapse variations because it focuses the inversion on the difference data. However, it requires a preprocessing step on the data sets, such as time‐lapse binning, to obtain similar source/receiver locations between the two surveys, whereas the sequential difference strategy is less dependent on this requirement. If prior information about the area of changes is available, the target‐oriented sequential difference strategy can be an alternative that provides results as robust as those of the double‐difference strategy.
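The three strategies can be sketched on a toy linear problem. Everything below is a hypothetical stand‐in for illustration only: the forward operator, the model sizes, and the `fwi` driver (plain steepest descent on a least‐squares misfit) are not the paper's actual inversion.

```python
import numpy as np

# Toy stand-ins: a "model" is a small velocity vector, forward modelling is a
# fixed linear operator, and fwi() is steepest descent on the data misfit.
rng = np.random.default_rng(1)
G = rng.normal(size=(12, 4))                 # toy linear forward operator

def simulate(m):
    return G @ m

def fwi(data, start_model, n_iter=3000, step=0.02):
    m = start_model.copy()
    for _ in range(n_iter):
        m -= step * G.T @ (simulate(m) - data)   # gradient of 0.5*||Gm - d||^2
    return m

m_true = np.array([2.0, 2.2, 2.5, 3.0])
dm_true = np.array([0.0, 0.0, -0.1, 0.0])    # weak, localized time-lapse change
d_base, d_mon = simulate(m_true), simulate(m_true + dm_true)
m0 = np.full(4, 2.4)                         # crude starting model

m_base = fwi(d_base, m0)                     # baseline reconstruction

# Parallel difference: two independent inversions from the same starting model
dm_par = fwi(d_mon, m0) - m_base

# Sequential difference: the monitor inversion restarts from the baseline model
dm_seq = fwi(d_mon, m_base) - m_base

# Double difference: invert the data difference (added to the baseline model's
# predicted data), starting from the recovered baseline model
dm_dd = fwi(simulate(m_base) + (d_mon - d_base), m_base) - m_base
```

In this idealized noise‐free, perfectly repeated setting all three estimates recover the same change; the strategies only separate, as the abstract discusses, once noise and acquisition repeatability errors enter.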

2.
The near‐surface problem is a common challenge in land seismic data processing, where events of interest are often obscured by near‐surface anomalies. One method to handle this challenge is near‐surface layer replacement, a wavefield reconstruction process based on downward wavefield extrapolation with the near‐surface velocity model and upward wavefield extrapolation with a replacement velocity model. In theory, this requires the original wavefield to be densely sampled. In reality, data acquisition is always sparse for economic reasons, so near‐surface layer replacement must resort to data interpolation. For datasets with near‐surface challenges, the complex event behaviour makes a suitable interpolation scheme a challenging problem in itself, and this, in turn, makes it difficult to carry out the near‐surface layer replacement. In this research note, we first point out that the final objective of near‐surface layer replacement is not to obtain a newly reconstructed wavefield but to obtain a better final image. Next, based upon this observation, we propose a new approach, interpolation‐free near‐surface layer replacement, which can handle complex datasets without any interpolation. Data volume expansion is the key idea, and with its help, interpolation‐free near‐surface layer replacement is capable of preserving the valuable information of areas of interest in the original dataset. Two datasets, a two‐dimensional synthetic dataset and a three‐dimensional field dataset, are used to demonstrate this idea. One conclusion that can be drawn is that attempting to interpolate data before layer replacement may deteriorate the final image after layer replacement, whereas interpolation‐free near‐surface layer replacement preserves all image details in the subsurface.

3.
This paper empirically investigates the asymptotic behaviour of the flood probability distribution, and more precisely the possible occurrence of heavy-tailed distributions, which are generally predicted by multiplicative cascades. Since heavy tails considerably increase the frequency of extremes, they have many practical and societal consequences. A French database of 173 daily discharge time series is analyzed. These series correspond to various climatic and hydrological conditions, drainage areas ranging from 10 to 10⁵ km², and record lengths from 22 to 95 years. The peaks-over-threshold method has been used with a set of semi-parametric estimators (Hill and generalized Hill estimators) and parametric estimators (maximum likelihood and L-moments). We discuss the respective merits of the estimators and compare their estimates of the shape parameter of the probability distribution of the peaks. We emphasize the influence of the selected number of highest observations used in the estimation procedure and, in this respect, the particular interest of the semi-parametric estimators. The various estimators nevertheless agree on the prevalence of heavy tails, and we point out some links between their presence and hydrological and climatic conditions.
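As a concrete illustration of the semi-parametric approach, here is a minimal Hill estimator applied to a synthetic Pareto sample with a known tail index; the data are simulated for illustration, not the French discharge series.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of the tail index xi from the k largest observations."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]    # descending order
    # xi_hat = (1/k) * sum_{i=1..k} log(X_(i) / X_(k+1))
    return float(np.mean(np.log(x[:k] / x[k])))

# Synthetic heavy-tailed peaks: Pareto with survival P(X > x) = x**-2 (x >= 1),
# i.e. a true shape parameter xi = 0.5
rng = np.random.default_rng(42)
peaks = rng.uniform(size=20_000) ** -0.5
xi_hat = hill_estimator(peaks, k=2000)
```

Repeating the estimate over a range of k values (the "Hill plot") is the standard way to examine the sensitivity to the number of upper order statistics that the abstract emphasizes.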

4.
Many numerical landform evolution models assume that soil erosion by flowing water is either purely detachment‐limited (i.e. erosion rate is related to the shear stress, power, or velocity of the flow) or purely transport‐limited (i.e. erosion/deposition rate is related to the divergence of shear stress, power, or velocity). This paper reviews available data on the relative importance of detachment‐limited versus transport‐limited erosion by flowing water on soil‐mantled hillslopes and low‐order valleys. Field measurements indicate that fluvial and slope‐wash modification of soil‐mantled landscapes is best represented by a combination of transport‐limited and detachment‐limited conditions with the relative importance of each approximately equal to the ratio of sand and rock fragments to silt and clay in the eroding soil. Available data also indicate that detachment/entrainment thresholds are highly variable in space and time in many landscapes, with local threshold values dependent on vegetation cover, rock‐fragment armoring, surface roughness, soil texture and cohesion. This heterogeneity is significant for determining the form of the fluvial/slope‐wash erosion or transport law because spatial and/or temporal variations in detachment/entrainment thresholds can effectively increase the nonlinearity of the relationship between sediment transport and stream power. Results from landform evolution modeling also suggest that, aside from the presence of distributary channel networks and autogenic cut‐and‐fill cycles in non‐steady‐state transport‐limited landscapes, it is difficult to infer the relative importance of transport‐limited versus detachment‐limited conditions using topography alone. Copyright © 2011 John Wiley & Sons, Ltd.

5.
We challenge the notion of steady‐state equilibrium in the context of progressive cliff retreat on micro‐tidal coasts. Ocean waves break at or close to the abrupt seaward edge of near‐horizontal shore platforms and then rapidly lose height due to turbulence and friction. Conceptual models assume that wave height decays exponentially with distance from the platform edge, and that the platform edge does not erode under stable sea‐level. These assumptions combine to produce a steady‐state view of Holocene cliff retreat. We argue that this model is not generally applicable. Recent data show that: (1) exponential decay in wave height is not the most appropriate conceptual model of wave decay; and (2) by solely considering wave energy at gravity‐wave frequencies, the steady‐state model neglects a possible formative role for infragravity waves. Here we draw attention to possible mechanisms through which infragravity waves may drive cliff retreat over much greater distances (and longer timescales) than imaginable under the established conceptual model. Copyright © 2013 John Wiley & Sons, Ltd.

6.
7.
Geochemical and isotopic tracers have often been used in mixing models to estimate glacier melt contributions to streamflow, whereas the spatio‐temporal variability in the glacier melt tracer signature, and its influence on tracer‐based hydrograph separation results, has received less attention. We present novel tracer data from a high‐elevation catchment (17 km², glacierized area: 34%) in the Oetztal Alps (Austria) and investigated the spatial as well as the subdaily to monthly tracer variability of supraglacial meltwater, and the temporal tracer variability of winter baseflow to infer groundwater dynamics. The streamflow tracer variability during winter baseflow conditions was small, whereas the glacier melt tracer variation was higher, especially at the end of the ablation period. We applied a three‐component mixing model with electrical conductivity and oxygen‐18. Hydrograph separation (groundwater, glacier melt, and rain) was performed for six single glacier melt‐induced days (i.e., six events) during the ablation period 2016 (July to September). Median fractions (±uncertainty) of groundwater, glacier melt, and rain for the events were estimated at 49 ± 2%, 35 ± 11%, and 16 ± 11%, respectively. Minimum and maximum glacier melt fractions at the subdaily scale ranged between 2 ± 5% and 76 ± 11%, respectively. A sensitivity analysis showed that the intraseasonal glacier melt tracer variability had a marked effect on the estimated glacier melt contribution during events with large glacier melt fractions of streamflow. Intra‐daily and spatial variation of the glacier melt tracer signature played a negligible role in applying the mixing model. The results of this study (a) show the necessity of a multiple sampling approach to characterize the glacier melt end‐member and (b) reveal the importance of groundwater and rainfall–runoff dynamics in catchments with a glacial flow regime.
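The three‐component, two‐tracer separation reduces to a 3 × 3 linear system (a mass balance plus one equation per tracer). The end‐member signatures and stream values below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical end-member signatures: electrical conductivity (uS/cm)
# and delta-18O (permil). Values are illustrative only.
end_members = {
    "groundwater":  {"ec": 250.0, "d18o": -13.0},
    "glacier_melt": {"ec": 10.0,  "d18o": -16.5},
    "rain":         {"ec": 20.0,  "d18o": -9.0},
}

def separate(ec_stream, d18o_stream):
    """Solve the three-component mixing model for the runoff fractions."""
    names = list(end_members)
    A = np.array([
        [1.0, 1.0, 1.0],                            # fractions sum to one
        [end_members[n]["ec"] for n in names],      # conductivity balance
        [end_members[n]["d18o"] for n in names],    # oxygen-18 balance
    ])
    b = np.array([1.0, ec_stream, d18o_stream])
    return dict(zip(names, np.linalg.solve(A, b)))

fractions = separate(ec_stream=130.0, d18o_stream=-13.6)
```

With these invented numbers the solve returns roughly 49% groundwater, 35% glacier melt and 16% rain; in practice the uncertainty figures come from perturbing the end‐member signatures, which is where the tracer variability examined in the study matters.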

8.
9.
Ten methods for sampling beach litter were tested on 16 beaches located around the Firth of Forth, Scotland, in order to ascertain the effectiveness of the various methods. Fresh and/or accumulated litter was sampled. Some methods were more effective for recording gross amounts of litter. Maximum litter counts were obtained by surveying the top boundary of the beach (e.g. vegetation line, retaining wall, rocks). The lowest amounts were obtained by surveying a single five‐metre‐wide belt transect from the vegetation line to the shore. Specific methods showed some bias towards highlighting particular litter types. It was concluded that each method has advantages and disadvantages and that the aims of a study will ultimately determine the choice of method.

10.
Where should we take cores for palaeotsunami research? It is generally considered that local depressions with low‐energy environments, such as wetlands, are among the best places. However, it is also recognized that the presence or absence of palaeotsunami deposits (and their relative thickness) is highly dependent upon subsoil microtopography. In the beach ridge system of Ishinomaki Plain, Japan, several palaeotsunami deposits linked to past Japan Trench earthquakes have been reported. However, the number of palaeotsunami deposits reported at individual sites varies considerably. This study used ground penetrating radar (GPR) combined with geological evidence to better understand the relationship between palaeotopography and palaeotsunami deposit characteristics. The subsurface topography of the ~3000–4000 BP beach ridge was reconstructed using GPR data coupled with core surveys of the underlying sediments. We noted that the number (and thickness) of the palaeotsunami deposits inferred from the cores was controlled by the palaeotopography. Namely, a larger number of events and thicker palaeotsunami deposits were observed in depressions in the subsurface microtopography. We noted a total of three palaeotsunami deposits dated to between 1700 and 3000 cal BP, but they were only observed together in 11% of the core sites. This result is important for tsunami risk assessments that use the sedimentary evidence of past events, because we may well be underestimating the number of tsunamis that have occurred. We suggest that GPR is an efficient and invaluable tool to help researchers identify the most appropriate places to carry out geological fieldwork in order to provide a more comprehensive understanding of past tsunami activity. Copyright © 2017 John Wiley & Sons, Ltd.

11.
12.
Application of Schmidt‐hammer exposure‐age dating (SHD) to landforms has increased substantially in recent years. The original mechanical Schmidt hammer records R‐(rebound) values. Although the newly introduced electronic Schmidt hammer (SilverSchmidt) facilitates greatly improved data processing, it measures surface hardness differently, recording Q‐(velocity) values that are not a priori interconvertible with R‐values. This study is the first to compare the performance of both instruments in the context of field‐based exposure‐age dating, with a particular focus on the interconvertibility of R‐values and Q‐values. The study was conducted on glacially polished pyroxene‐granulite gneiss in Jotunheimen, southern Norway. Results indicate that mean Q‐values are consistently 8–10 units higher than mean R‐values over the range of values normally encountered in the application of SHD to glacial and periglacial landforms. A convenient conversion factor of +10 units may, therefore, be appropriate for all but the softest rock types close to the technical resolution of the instruments. The electronic Schmidt hammer should therefore be regarded as a useful complement to, and potential replacement for, the mechanical Schmidt hammer. Conversion of published R‐value data to Q‐values requires, however, careful control and documentation of instrument calibration. Copyright © 2014 John Wiley & Sons, Ltd.

13.
The majority of the world's mangrove forests occur on mostly mineral sediments of fluvial origin. Two perspectives exist on the biogeomorphic development of these forests: that mangroves are opportunistic, with forest development primarily driven by physical processes, or alternatively that biophysical feedbacks strongly influence sedimentation and the resulting geomorphology. On the Firth of Thames coast, New Zealand, we evaluate these two possible scenarios for sediment accumulation and forest development using high‐resolution sedimentary records and a detailed chronology of mangrove‐forest (Avicennia marina) development since the 1950s. Cores were collected along a shore‐normal transect of known elevation relative to mean sea level (MSL). Activities of lead‐210 (210Pb), caesium‐137 (137Cs) and beryllium‐7 (7Be), and sediment properties, were analysed, with 210Pb sediment accumulation rates (SARs), compensated for deep subsidence (~8 mm yr⁻¹), used as a proxy for elevation gain. At least four phases of forest development since the 1950s are recognized. An old‐growth forest developed by the late 1970s, with more recent seaward forest expansion thereafter. Excess 210Pb profiles from the old‐growth forest exhibit relatively low SARs near the top (7–12 mm yr⁻¹) and bottom (10–22 mm yr⁻¹) of cores, separated by an interval of higher SARs (33–100 mm yr⁻¹). A general trend of increasing SAR over time characterizes the recent forest. Biogeomorphic evolution of the system is more complex than simple mudflat accretion/progradation and mangrove‐forest expansion. Surface‐elevation gain in the old‐growth forest displays an asymptotic trajectory, with a secondary depocentre developing on the seaward mudflat from the mid‐1970s. Two‐ to ten‐fold increases in 210Pb SARs are unambiguously large and occurred years to decades before seedling recruitment, demonstrating that mangroves do not measurably enhance sedimentation over annual to decadal timescales. This suggests that mangrove‐forest development is largely dependent on physical processes, with forests occupying mudflats once they reach a suitable elevation in the intertidal zone. Copyright © 2015 John Wiley & Sons, Ltd.
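The 210Pb‐derived accumulation rates can be sketched with the simplest (constant‐flux, constant‐sedimentation) model, in which excess activity decays exponentially with depth; the profile below is synthetic and all numbers are illustrative only.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant (yr^-1), half-life 22.3 yr

# Synthetic excess 210Pb profile: depth (cm) and activity (Bq/kg)
depth = np.array([2.0, 6.0, 10.0, 14.0, 18.0])
activity = 120.0 * np.exp(-0.05 * depth)

# Under steady sedimentation, log-activity declines linearly with depth;
# the fitted slope b gives the accumulation rate s = -lambda / b
b, _ = np.polyfit(depth, np.log(activity), 1)
sar_mm_per_yr = 10.0 * (-LAMBDA_PB210 / b)   # convert cm/yr to mm/yr
```

A profile like the one described for the old‐growth forest cores, with different slopes near the top and bottom, would be fitted piecewise, each depth interval yielding its own rate.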

14.
Seismic hazard disaggregation is commonly used as an aid in ground‐motion selection for the seismic response analysis of structures. This short communication investigates two different approaches to disaggregation related to the exceedance and occurrence of a particular intensity. The impact the different approaches might have on a subsequent structural analysis at a given intensity is explored through the calculation of conditional spectra. It is found that the exceedance approach results in conditional spectra that will be conservative when used as targets for ground‐motion selection. It is however argued that the use of the occurrence disaggregation is more consistent with the objectives of seismic response analyses in the context of performance‐based earthquake engineering. Copyright © 2015 John Wiley & Sons, Ltd.

15.
A growing body of field, theoretical and numerical modelling studies suggests that predicting river response to even major changes in input variables is difficult. Rivers are seen to adjust rapidly and variably through time and space, as well as to change independently of major driving variables. Concepts such as self‐organized criticality (SOC) are considered to better reflect the complex interactions and adjustments occurring in systems than traditional cause‐and‐effect approaches. This study tests the hypothesis that riverbank mass failures which occurred both prior to, and during, an extreme flood event in southeast Queensland (SEQ) in 2011 are a manifestation of SOC. Each wet‐flow failure is somewhat analogous to the 'avalanche' described in the initial sand‐pile experiments of Bak et al. (Physical Review Letters, 1987, 59(4), 381–384) and, owing to the use of multitemporal LiDAR, the period of instability can be effectively constrained to that surrounding the flood event. The data are examined with respect to the key factors thought to be significant in evaluating the existence of SOC, including: non‐linear temporal dynamics in the occurrence of disturbance events within the system; an inverse power‐law relation between the magnitude and frequency of the events; the existence of a critical state to which the system readjusts after a disturbance; and the existence of a cascading‐process mechanism by which the same process can initiate both low‐magnitude and high‐magnitude events. While there was a significant change in the frequency of mass failures pre‐ and post‐flood, suggesting non‐linear temporal dynamics in the occurrence of disturbance events, the data did not fit an inverse power law within acceptable probability, and other models were found to fit the data better. Likewise, determining a single 'critical' state is problematic when a variety of feedbacks and multiple modes of adjustment are likely to have operated throughout this high‐magnitude event. Overall, the extent to which the data support a self‐organized critical state is variable and highly dependent upon inferential arguments. Investigating the existence of SOC, however, provided results and insights that are useful for the management and future prediction of these features. Copyright © 2014 John Wiley & Sons, Ltd.

16.
To evaluate climate and atmospheric deposition induced physical and water chemical changes and their effects on phytoplankton communities, we used complete time series (14 years, monthly measurements during the growing season) of 18 physical and chemical variables and phytoplankton data from 13 nutrient-poor Swedish reference lakes along a latitudinal gradient. We found numerous strong significant changes over time that were most coherent among lakes for sulfate concentrations, conductivity, calcium, magnesium, chloride, potassium, water color, surface water temperature and the intensity of thermal stratification. Despite these pronounced coherent physical and water chemical changes over Sweden, the phytoplankton biomass and species richness of six phytoplankton groups, measured at the same time as the water chemical variables, showed only a few weak significant changes over time. The only coherent significant change over Sweden, occurring in seven lakes, was observed in the species richness of chlorophytes: the number of chlorophyte taxa significantly declined. Using a partial least squares model for each lake, we attributed the decline primarily to increases in water temperature and water color, which were among the most important variables for the model performance of each lake. All other taxonomic groups were driven primarily by non-coherent changes in nutrient concentrations, pH and probably also non-coherent grazing pressure. We conclude that coherent phytoplankton responses can only be achieved for taxonomic groups that are driven primarily by coherent physical/chemical changes. According to our study, chlorophytes belong to such a group, making them possible global change indicators. Our findings give new insights into global change effects on different phytoplankton taxonomic groups in nutrient-poor lakes.

17.
Gullies are conceptualized in the literature as essentially fluvial forms with dimensional boundaries arbitrarily defined between rills and river channels. This notion is incompatible with the existing variability of form and process, as mass movements frequently exert a fundamental control on gully initiation and expansion, to the point of features outgrowing their original contributing area. The inability of a conceptual framework to incorporate existing observations inevitably constrains methodologies and research results. In this commentary, several examples of published results are contrasted with the prevailing assumption of an essentially fluvial nature, with the purpose of encouraging discussion on the need for a revised conceptual framework in gully erosion research. Copyright © 2011 John Wiley & Sons, Ltd.

18.
There is a growing appreciation of the uncertainties in the estimation of snowmelt and glacier melt under climate change in high-elevation catchments. Through a detailed examination of three hydrological models in two catchments, and interpretation of results from previous studies, we observed that many variations in estimated streamflow could be explained by the selection of a best parameter set from among the possible good model parameter sets. Understanding changing glacial dynamics is critically important for our study areas in the Upper Indus Basin, where Pakistan's policymakers are planning infrastructure to meet the future energy and water needs of hundreds of millions of people downstream. Yet the effects of climate on glacial runoff and on snowmelt runoff are poorly understood. With the HBV model, for example, we estimated glacial melt at between 56% and 89% of streamflow for the Hunza catchment. When rainfall was a scaled parameter, the models estimated glacial melt at between 20% and 100% of streamflow. These parameter sets produced wildly different projections under the RCP8.5 scenario for 2046–2075 compared with 1976–2005. Assuming no glacial shrinkage, for one climate projection we found that the choice among good parameter sets resulted in projected future streamflow values ranging from +54% to +125%. Parameter selection was the most significant source of uncertainty in the glaciated catchment and amplified climate model uncertainty, whereas climate model choice was more important in the rainfall-dominated catchment. Although the study focuses on Pakistan, the overall conclusions are instructive for other similar regions of the world. We suggest that modellers of glaciated catchments should present results from at least the book-ends: models with low sensitivity to ice melt and models with high sensitivity to ice melt. This would reduce confusion among decision makers when they are faced with similarly contrasting results.

19.
Volcanism in the back-arc region of Central Luzon, Philippines, with respect to the Manila Trench is characterized by fewer and smaller-volume volcanic centers compared with the igneous rocks of the main volcanic arc on the adjacent forearc side. The back-arc volcanic rocks, which include basalts, basaltic andesites, andesites and dacites, also contain more hydrous minerals (i.e., hornblende and biotite). The adakite-like geochemical characteristics of these back-arc lavas, including elevated Sr, depleted heavy rare earth elements and high Sr/Y ratios, are unlikely to have formed by slab melting or to be related to incipient subduction, slab-window magmatism or plagioclase accumulation. Field and geochemical evidence shows that these adakitic lavas were most probably formed by partial melting of a garnet-bearing amphibolitic lower crust. Adakitic lavas are not necessarily slab melts of the arc–trench gap region.

20.
A common method for compensating for grain-size differences in suites of sediment samples is to normalize potential contaminants by regression against a particular grain-size fraction, the <63 μm fraction being the most often selected. However, this fraction is unlikely to represent the clay content accurately, and clay content is a major factor in the ability of sediments to adsorb contaminants. Moreover, no reliable estimate of clay content can be made from a coarser grain-size fraction. As a result, regression against coarser-grained fractions can produce spurious interpretations of background values and contamination. Normalization with the clay content, or with an alternative grain-size proxy, is recommended.
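A minimal sketch of the recommended normalization: regress a contaminant against clay content and flag samples sitting well above the background trend. The data are invented for illustration.

```python
import numpy as np

# Hypothetical suite of sediment samples: clay content (%) and a metal
# concentration (mg/kg). The last sample is deliberately enriched.
clay  = np.array([5.0, 12.0, 20.0, 33.0, 41.0, 55.0, 62.0, 70.0])
metal = np.array([8.0, 14.0, 21.0, 32.0, 40.0, 52.0, 60.0, 95.0])

# Least-squares regression of metal on clay defines the background trend
slope, intercept = np.polyfit(clay, metal, 1)
residuals = metal - (slope * clay + intercept)

# Samples far above the regression line are candidate contamination
threshold = 2.0 * residuals.std()
flagged = np.flatnonzero(residuals > threshold)
```

Substituting a coarser fraction such as <63 μm for the clay axis is exactly what the note warns against: the regression line then no longer tracks the adsorption-controlling fraction, and the residuals lose their meaning as contamination signals.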
