Similar Documents
20 similar documents found.
1.
The Quaternary history of the Capitol Reef area, Utah, is closely linked to the basaltic-andesite boulder deposits that cover much of the landscape. Understanding the age and mode of emplacement of these deposits is crucial to deciphering the Quaternary evolution of this part of the Colorado Plateau. Using cosmogenic ³He exposure age dating, we obtained apparent exposure ages for several key deposits in the Capitol Reef area. Coarse boulder diamicts capping the Johnson Mesa and Carcass Creek terraces are not associated with the Bull Lake glaciation as previously thought, but were deposited 180±15 to 205±17 ka (minimum age) and are the result of debris flow deposition. Desert pavements on the Johnson Mesa surface give exposure ages ranging from 97±8 to 159±14 ka and are 34–96 kyr younger than the boulder exposure ages. The offset between the boulder and pavement exposure ages appears to be related either to a delay in pavement formation until the penultimate glacial/interglacial transition or to periodic burial and exposure of pavement clasts since debris flow deposition. Incision rates for the Capitol Reef reach of the Fremont River calculated from the boulder exposure ages range from 0.40 to 0.43 m kyr⁻¹ (maximum rates) and are among the highest on the Colorado Plateau.
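The incision-rate figure quoted above is simple arithmetic: the terrace height above the modern river divided by the terrace age. A minimal sketch with illustrative numbers (the terrace height and exact age below are assumptions, not values from the study):

```python
# Hypothetical sketch of the incision-rate arithmetic: terrace height above
# the modern river divided by the boulder exposure age. Both numbers are
# illustrative assumptions, not data from the study.
terrace_height_m = 80.0    # terrace height above the Fremont River (assumed)
exposure_age_kyr = 195.0   # boulder exposure age (assumed, within 180-205 ka)

incision_rate_m_per_kyr = terrace_height_m / exposure_age_kyr  # ~0.41 m/kyr
```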

2.
Varnish microlamination (VML) dating is a correlative age determination technique that can be used to date and correlate various geomorphic features in deserts. In this study, we establish a generalized late Quaternary (i.e., 0–300 ka) varnish layering sequence for the drylands of the western USA and tentatively correlate it with the SPECMAP oxygen isotope record. We then use this climatically correlated varnish layering sequence as a correlative dating tool to determine surface exposure ages for late Quaternary geomorphic features in the study region. VML dating of alluvial fan deposits in Death Valley, eastern California, indicates that, during the mid to late Pleistocene, 5–15 ky long aggradation events occurred during either wet or dry climatic periods, and that major climate shifts between glacial and interglacial conditions may be the pacemaker for the alternation of major episodes of fan aggradation. During the Holocene interglacial, however, brief, 0.5–1 ky long episodes of fan deposition may be linked to short periods of relatively wet climate. VML dating of alluvial desert pavements in Death Valley and the Mojave Desert reveals that pavements can develop rapidly (<10 ky) during the Holocene (and probably the late Pleistocene) in the arid lowlands (<800 m msl) of these regions; but once formed, they may survive for 74–85 ky or even longer without being significantly disturbed by geomorphic processes operating at the pavement surface. Data from this study also support the currently accepted “born at the surface” model of desert pavement formation. VML dating of colluvial boulder deposits on the west slope of Yucca Mountain, southern Nevada, yields a minimum age of 46 ka for the emplacement of these deposits, suggesting that they probably formed during the early phase of the last glaciation or before.
These results, combined with those from our previous studies, demonstrate that VML dating has great potential to yield numerical age estimates for various late Quaternary geomorphic features in the drylands of the western USA.

3.
Mark A. Fonstad. Geomorphology, 2006, 77(3–4): 217.
The linkages between ecology and geomorphology can be difficult to identify because of physical complexity and the limitations of current theoretical representations in these two fields of study. Deep divisions between the disciplines are manifest in the methods used to simulate process, such as rigidly physical-deterministic methods for many aspects of geomorphology compared with purely stochastic simulations in many models of land-cover change. Practical and theoretical research into ecology–geomorphology linkages cannot wait for a single simulation schema that may never come; as a result, studies of these linkages often appear disjointed and inconsistent. The grid-based simulation framework of cellular automata (CA) allows simultaneous use of competing schemas. CA use in general geographic studies has been limited primarily to urban simulation and land-cover change models, which are highly stochastic and/or expert rule-based. In the last decade, however, methods for describing physically deterministic systems in the CA framework have become much more accurate. The possibility now exists to merge separate CA simulations of different environmental systems into unified “multiautomata” models. Because CAs allow transition rules that are deterministic, probabilistic, or expert rule-based, they can immediately incorporate existing knowledge rules in ecology and geomorphology. The explicitly spatial nature of CA provides a map-like framework that should allow a simple and deeply rooted connection with the mapping traditions of the geosciences and ecological sciences.
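As a toy illustration of the "multiautomata" idea (not code from the paper), the sketch below couples a deterministic geomorphic rule and a stochastic ecological rule on the same grid; the grid size, diffusivity `k`, and colonization probability `p_col` are invented:

```python
import random

# Toy multiautomata sketch: a deterministic geomorphic transition rule
# (mass-conserving diffusion of elevation toward the east neighbour) runs
# alongside a stochastic ecological rule (random vegetation colonization).
random.seed(0)
N = 10
elev = [[float(j) for j in range(N)] for _ in range(N)]  # east-sloping surface
veg = [[0] * N for _ in range(N)]                        # 0 = bare, 1 = vegetated

def step(elev, veg, k=0.25, p_col=0.1):
    new = [row[:] for row in elev]
    for i in range(N):
        for j in range(N):
            if j + 1 < N:  # deterministic rule: exchange conserves total mass
                flux = k * (elev[i][j + 1] - elev[i][j])
                new[i][j] += flux
                new[i][j + 1] -= flux
            if veg[i][j] == 0 and random.random() < p_col:
                veg[i][j] = 1  # stochastic rule: colonization
    return new, veg

for _ in range(5):
    elev, veg = step(elev, veg)
```

Because both rule types live in one grid framework, adding an expert rule (e.g. "no colonization where slope exceeds a threshold") is a one-line change, which is the point the abstract makes about CA.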

4.
Ronald I. Dorn. Geomorphology, 2003, 55(1–4): 155.
An April–May 2000 “Coon Creek Fire” burned 37.5 km² of the Sierra Ancha Mountains, 32.3 km north of Globe, AZ, including 25 sandstone and 19 diorite boulders surveyed in 1989 and resurveyed after the burn, after the summer 2000 monsoon season, and after the winter 2001 season. When viewed from the perspective of cumulative eroded area, both sandstone and diorite displayed bimodal patterns, with 79% of sandstone boulder area and 93% of diorite boulder area undergoing either no fire-induced erosion or fire-induced erosion >76 mm. When averaged over cumulative boulder areas, erosion due to the fire averaged >26 mm for sandstone and >42 mm for diorite. Post-fire erosion from summer thunderstorm rains averaged <1 mm for 5 diorite and 1 mm for 10 sandstone boulders. While only a single diorite boulder eroded (an average of 1.2 mm) after the winter, winter erosion removed an average of 5.5 mm from 14 sandstone boulders. Thus, fire appears to increase a rock's susceptibility to post-fire weathering and erosion processes, as predicted by Goudie et al. [Earth Surf. Process. Landf. 17 (1992) 605]. In contrast to experimental research indicating the importance of size in fire weathering, no statistically significant relationship exists between erosion and boulder height or boulder surface area, a result similar to Zimmerman et al. [Quat. Res. 42 (1994) 255]. These data exclude 12 original sites and 85 boulders at fire-impacted sites that could not be relocated, a reasonable cause being boulder obliteration by the fire. Given evidence from ¹⁰Be and ²⁶Al cosmogenic nuclides [Earth Planet. Sci. Lett. 186 (2001) 269] supporting the importance of boulders in controlling the evolution of nonglaciated, bouldered landscapes [Geol. Soc. Amer. Bull. 76 (1965) 1165], fire obliteration of boulders could be an important process driving drainage evolution in nonglaciated mountains.

5.
“Stone runs” is the Falklands vernacular term for openwork boulder accumulations, which include extensive blockstreams like the famous Darwin “stone-river” and associated features such as stone stripes. Since the early 20th century, they have been interpreted as the product of a suite of periglacial processes, including frost-wedging, gelifluction, frost heave, frost-sorting and snowmelt runoff. Following a literature review, the results of recent field investigations of the valley-floor blockstreams of East Falkland are presented. Access to the internal structure of these forms provides evidence for the existence of a three-fold profile, with clear vertical size gradation presenting striking similarities with an inverted weathering profile. Micromorphological analyses, SEM, XRD, thin sections and grain-size analyses lead to the hypothesis of an alternative model of stone run formation. It is suggested that the material forming the stone run profile lato sensu (including the superficial pavement) is not of periglacial origin, but derives directly from the stripping and accumulation downslope of a regolith, possibly Tertiary in age and formed under subtropical or temperate conditions. The valley-floor stone runs should, therefore, be considered as complex polygenetic landforms that may have formed according to a six-stage scenario, including in situ chemical weathering, regolith stripping by mass movements, soil formation, further regolith stripping, downslope accumulation and matrix washing-out (all phases possibly achieved by the Early Quaternary). Periglacial reworking of the stone run material would have operated at a “final” stage, i.e. during Quaternary cold stages, with boulder bioweathering and limonite-staining operating during the temperate intervals including the present one. 
The suggested antiquity of the Falklands blockstreams is in accordance with Caine's pioneering interpretation of Tasmanian blockfields and with recent analyses and cosmogenic dating of blockfields in Scandinavia and North America.

6.
Vishwas S. Kale. Geomorphology, 2007, 85(3–4): 306.
The efficacy of extreme events is directly linked to flood power and the total energy expended. The geomorphic effectiveness of floods is evaluated here in terms of the distribution of stream power per unit boundary area (ω) over time for three very large floods of the 20th century in the Indian Peninsula. These floods stand out as outliers when compared with peak floods per unit drainage area recorded elsewhere in the world. We used flood hydrographs and at-a-station hydraulic geometry equations, computed for the same gauging site or a nearby site, to construct approximate stream-power curves and to estimate the total energy expended by each flood. Critical unit stream power values necessary to entrain cobbles and boulders were estimated from empirical relationships for coarse sediment transport developed by Williams [Williams, G.P., 1983. Paleohydrological methods and some examples from Swedish fluvial environments. I. Cobble and boulder deposits. Geografiska Annaler 65A, 227–243] in order to determine the geomorphological effectiveness of the floods. The estimates indicate that the minimum power per unit area values for all three floods were sufficiently high, and stream energy was above the threshold of boulder movement (90 W m⁻²) for several tens of hours. The peak unit stream power values and the total energy expended during each flood were in the range of 290–325 W m⁻² and 65–160 × 10⁶ J, respectively. The average and peak flood powers were found to be higher than or comparable to those estimated for extreme palaeofloods and modern floods on low-gradient, alluvial rivers.
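The quantity behind this analysis is the standard unit stream power, ω = ρgQS/w, compared against the ~90 W m⁻² boulder-entrainment threshold. A hedged sketch; the discharge, slope, and width below are invented, not the gauged values:

```python
# Unit stream power sketch: omega = rho * g * Q * S / w, compared with the
# ~90 W/m^2 boulder-entrainment threshold quoted above. The discharge,
# slope, and width values are illustrative assumptions.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def unit_stream_power(Q, S, w):
    """Stream power per unit bed area (W/m^2) for discharge Q (m^3/s),
    energy slope S (dimensionless), and flow width w (m)."""
    return RHO * G * Q * S / w

omega = unit_stream_power(Q=25000.0, S=0.0008, w=600.0)   # 327 W/m^2
exceeds_boulder_threshold = omega > 90.0                  # True
```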

7.
A distance cartogram is a diagram that visualizes proximity indices between points in a network, such as time–distances between cities. The Euclidean distances between points on the distance cartogram represent the given proximity indices. This is a useful visualization tool for the level of service of transport, e.g. differences in the level of service between regions or points in a network and their improvement over time. The two previously proposed methods, multidimensional scaling (MDS) and network time–space mapping, have certain advantages and disadvantages. However, we observe that these methods are essentially the same, and the merits of both can be combined in a generalized solution. In this study, we first formulate the time–space mapping problem so that it includes the key features of both of the above methods, and we propose a generalized solution. We then apply this solution to the time–distances of Japan's railway networks to confirm its applicability.
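A minimal sketch of the MDS side of the problem: classical (Torgerson) scaling embeds a symmetric time-distance matrix in the plane so that Euclidean distances approximate the input times. The 4-city travel-time matrix is invented, and this is only one of the two methods the paper unifies:

```python
import numpy as np

# Classical (Torgerson) MDS sketch: place points in 2-D so that Euclidean
# distances approximate a given symmetric time-distance matrix.
T = np.array([[0., 2., 3., 4.],
              [2., 0., 2., 3.],
              [3., 2., 0., 2.],
              [4., 3., 2., 0.]])    # hypothetical inter-city times (hours)

n = T.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (T ** 2) @ J           # double-centred Gram matrix
vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
order = np.argsort(vals)[::-1]        # take the two largest
L = np.maximum(vals[order[:2]], 0.0)
X = vecs[:, order[:2]] * np.sqrt(L)   # 2-D cartogram coordinates

# Euclidean distances on the cartogram approximate the input time-distances
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```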

8.
With the rapid development of geospatial data capture technologies such as the Global Positioning System, more data of higher accuracy are now readily available for upgrading existing spatial datasets of lower accuracy using positional accuracy improvement (PAI) methods. Such methods may not achieve survey-accurate spatial datasets but can deliver significant improvements in positional accuracy in a cost-effective manner. This article presents a comparative study of PAI methods, with application to improving the spatial accuracy of the digital cadastral data for Shanghai. Four critical issues are investigated: (1) the choice of improvement model in PAI adjustment; five PAI models are presented, namely the translation, scale and translation, similarity, affine, and second-order polynomial models; (2) the choice of estimation method in PAI adjustment; three estimation methods are proposed, namely the classical least squares (LS) adjustment, which assumes that only the observation vector contains error; the general least squares (GLS) adjustment, which regards both the ground and map coordinates of control points as observations with errors; and the total least squares (TLS) adjustment, which takes into account the errors in both the observation vector and the design matrix; (3) the impact of the configuration of ground control points (GCPs) on the result of PAI adjustment; 12 scenarios of GCP configurations are tested, covering different numbers and distributions of GCPs; and (4) the deformation of geometric shape by the above-mentioned transformation models, assessed in terms of area and perimeter.

The empirical results for six test blocks in Shanghai demonstrated the following. (1) The translation model hardly improves positional accuracy because it accounts only for the shift error within digital datasets. The other four models (the scale and translation, similarity, affine, and second-order polynomial models) significantly improve positional accuracy, which is assessed at checkpoints (CKPs) by calculating the difference between the updated coordinates transformed from the map coordinates and the surveyed coordinates. On the basis of the refined Akaike information criterion, the two optimal transformation models for PAI are the scale and translation and the affine transformation models. (2) The weighted sums of squared errors obtained using the GLS and TLS methods are much smaller than those obtained using classical least squares, indicating that both the GLS and TLS estimation methods achieve greater reliability and accuracy in PAI adjustment. (3) The configuration of GCPs has a considerable effect on the result of PAI adjustment; an optimal GCP configuration scheme is therefore determined to obtain the highest positional accuracy in the study area. (4) Taking into account also the deformation of geometric shape caused by the transformation models, the scale and translation model is found to be the best model for the study area.
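Of the models compared, the affine transformation fitted by classical least squares is straightforward to sketch; the control-point coordinates below are invented, and this is a simplification of the paper's adjustment, not its actual workflow:

```python
import numpy as np

# PAI sketch: fit an affine model by classical least squares so that map
# coordinates of ground control points (GCPs) are transformed toward their
# surveyed coordinates. All coordinate values are illustrative assumptions.
map_xy = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 60.]])
srv_XY = np.array([[2., 1.], [103., 2.], [1., 102.], [104., 103.], [53., 62.2]])

# affine model: X = a*x + b*y + c,  Y = d*x + e*y + f
A = np.column_stack([map_xy, np.ones(len(map_xy))])
params, *_ = np.linalg.lstsq(A, srv_XY, rcond=None)

updated = A @ params   # PAI-adjusted coordinates
rmse_before = np.sqrt(np.mean((map_xy - srv_XY) ** 2))
rmse_after = np.sqrt(np.mean((updated - srv_XY) ** 2))
```

In practice the accuracy gain would be assessed at independent checkpoints rather than at the GCPs used for the fit, as the abstract notes.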

9.
The Waite–Nicolson rangeland management method for semi-arid chenopod shrublands predicts that smaller paddocks with moderate stocking rates help to preserve the native vegetation. Vegetation cover around waterpoints in three small paddocks (<2000 ha) at Middleback Station, South Australia was studied using multivariate analysis. Data from quadrats sampled along radiating transects were tested for correlations with a number of site features and grazing history factors. Two significant associations were detected: quadrats with an abundance of Rhagodia parabolica and less palatable species such as Maireana pyramidata and Atriplex stipitata were correlated positively with proximity to waterpoints, paddock age and stocking rate, and negatively with paddock size. In contrast, quadrats with species such as Rhagodia ulicina and the more palatable M. sedifolia were correlated positively with distance from the waterpoints and paddock size, but negatively with age and stocking rate. Transect direction was not correlated with either group. Twelve of the 20 species examined, including the important forage species A. vesicaria, also were not correlated with the paddock and grazing features included here. These results suggest that the distribution of some chenopod shrub species in fenced paddocks may still be affected in the long term by a combination of these factors under grazing at densities of 6 ha sheep⁻¹, and that the method, although maintaining the fodder species, may not be preserving biodiversity at these grazing levels; further study is needed.

10.
Terrestrial Laser Scanning of grain roughness in a gravel-bed river
This paper demonstrates the application of Terrestrial Laser Scanning (TLS) to determine the full population of grain roughness in gravel-bed rivers. The technique has the potential to completely replace complex, time-consuming manual sampling methods. Using TLS, a total of 3.8 million data points (mean spacing 0.01 m) were retrieved from a gravel bar surface at Lambley on the River South Tyne, UK. Grain roughness was extracted as twice the local standard deviation (2σz) of all elevations in a 0.15 m radius moving window over the data cloud. 2σz values were then assigned to each node of a 5 cm regular grid, allowing fine-resolution DEMs to be produced in which the elevation is equivalent to the grain roughness height. Comparisons are made between TLS-derived grain roughness and grid-by-number sampling for eight 2 m² patches on the bar surface. Strong relationships exist between percentiles of the population of 2σz heights and the measured a-, b-, and c-axes, with the closest match for the c-axis. Although these relationships are strong, variations in the degree of burial, packing and imbrication result in very different slope and intercept exponents. This highlights that conventional roughness measurement using gravel axis length should be used with caution, as measured axes do not necessarily represent the actual extent to which a grain protrudes into the flow. The sampling error inherent in conventional sampling is also highlighted by undertaking Monte Carlo simulation on a population of 2000 clasts measured using the grid-by-number method and comparing this with the TLS-derived population of grain roughness heights. Underestimates of up to 23% and overestimates of up to 50% were found to occur for the D84, and underestimates of up to 20% and overestimates of up to 36% for the D50.
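The 2σz measure described above can be sketched in a few lines; here a small synthetic gridded surface stands in for the TLS point cloud, and the window is square rather than the paper's 0.15 m circular radius:

```python
import numpy as np

# Sketch of the 2-sigma-z roughness measure: twice the local standard
# deviation of elevations in a moving window. A synthetic surface with
# sigma = 2 cm stands in for the point cloud, so 2*sigma_z is ~4 cm.
rng = np.random.default_rng(42)
z = rng.normal(0.0, 0.02, size=(40, 40))   # elevations in metres

def two_sigma_z(z, half_win=3):
    """Return 2*std of elevations in a square moving window at each node
    (windows are truncated at the grid edges)."""
    out = np.empty_like(z)
    n, m = z.shape
    for i in range(n):
        for j in range(m):
            win = z[max(0, i - half_win):i + half_win + 1,
                    max(0, j - half_win):j + half_win + 1]
            out[i, j] = 2.0 * win.std()
    return out

roughness = two_sigma_z(z)   # a grain-roughness "DEM"
```

On real TLS data the surface would first be detrended (the bar-scale topography removed) so that the window statistic reflects grain-scale relief only.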

11.
Glacier mass balance is a key component of glacier monitoring programs. Information on the mass balance of the Sawir Mountains is scarce owing to a dearth of in-situ measurements. This paper demonstrates the applicability of an ultra-long-range terrestrial laser scanner (TLS) for monitoring the mass balance of Muz Taw Glacier, Sawir Mountains, China. The Riegl VZ-6000 TLS is exceptionally well suited for measuring snowy and icy terrain. Here, we use TLS to create repeated high spatiotemporal resolution DEMs, focusing on the annual mass balance (June 2, 2015 to July 25, 2016). According to the TLS-derived high spatial resolution point clouds, the front variation (glacier retreat) of Muz Taw Glacier was 9.3 m. The mean geodetic elevation change in the ablation area was 4.55 m. Glaciological elevation changes at individual stakes closely matched the TLS-derived geodetic elevation changes at corresponding points, and the calculated balance was −3.864 ± 0.378 m w.e. These data indicate that TLS provides accurate results and is therefore suitable for monitoring the mass balance evolution of Muz Taw Glacier.
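The conversion from a geodetic elevation change to a water-equivalent balance, as in figures like the −3.864 m w.e. above, is a density scaling. A hedged sketch with invented numbers and a commonly assumed conversion density (not a recomputation of the study's result):

```python
# Geodetic-to-water-equivalent sketch: scale the mean elevation change by
# an assumed density ratio. Both values are illustrative assumptions.
mean_dh_m = -4.5       # mean glacier surface elevation change (assumed, m)
rho_conv = 850.0       # volume-to-mass conversion density (common assumption)
rho_water = 1000.0

balance_m_we = mean_dh_m * rho_conv / rho_water   # -3.825 m water equivalent
```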

12.
The research record on the quantification of sediment transport processes in periglacial mountain environments in Scandinavia dates back to the 1950s. A wide range of measurements is available, especially from the Kärkevagge region of northern Sweden. In this paper, satellite image analysis and geographic information system (GIS) tools are exploited to extend and improve this research and to complement geophysical methods. The processes of interest include mass movements such as solifluction, slope wash, dirty avalanches, and rock and boulder falls. Geomorphic process units have been derived in order to allow quantification via GIS techniques at the catchment scale. Mass movement rates based on existing field measurements are employed in the budget calculations. In the Kärkevagge catchment, 80% of the area can be identified either as a source area for sediments or as a zone where sediments are deposited. The overall budget for the slopes beneath the rockwalls in the Kärkevagge is approximately 680 t a⁻¹, whilst about 150 t a⁻¹ are transported into the fluvial system.

13.
Most regional geochemistry data reflect processes that can produce superfluous bits of noise and, perhaps, information about the mineralization process of interest. There are two end-member approaches to finding patterns in geochemical data: unsupervised learning and supervised learning. In unsupervised learning, the data are processed and the geochemist is given the task of interpreting and identifying possible sources of any patterns. In supervised learning, data from known subgroups, such as rock type, mineralized and nonmineralized, and types of mineralization, are used to train the system, which is then given unknown samples to classify into these subgroups. To locate patterns of interest, it is helpful to transform the data and to remove unwanted masking patterns. For trace elements, use of a logarithmic transformation is recommended. In many situations, censored values can be estimated by multiple regression of the other, uncensored variables on the variable with censored values. In unsupervised learning, transformed values can be standardized, or normalized, to a Z-score by subtracting the subset's mean and dividing by its standard deviation. Subsets include any source of differences that might be related to processes unrelated to the target sought, such as different laboratories, regional alteration, analytical procedures, or rock types. Normalization removes the effects of different means and measurement scales and facilitates comparison of the spatial patterns of elements. These adjustments remove the effects of different subgroups and ideally leave on the map only the simple, uncluttered pattern(s) related to the mineralization. Supervised learning methods, such as discriminant analysis and neural networks, offer the promise of consistent and, in certain situations, unbiased estimates of where mineralization might exist.
These methods critically rely on being trained with data that fairly encompass all populations and on samples that can fall only into the identified populations.
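The preprocessing described above (log-transform, then per-subgroup Z-scores) can be sketched as follows; the tiny dataset and group labels are invented for illustration:

```python
import math

# Sketch of the preprocessing above: log-transform trace-element values,
# then standardize to Z-scores within each subgroup (e.g. per laboratory
# or rock type). The dataset is invented.
samples = [
    {"group": "granite", "ppm": 12.0},
    {"group": "granite", "ppm": 30.0},
    {"group": "granite", "ppm": 75.0},
    {"group": "basalt",  "ppm": 3.0},
    {"group": "basalt",  "ppm": 9.0},
    {"group": "basalt",  "ppm": 27.0},
]

for s in samples:                      # logarithmic transformation
    s["log"] = math.log10(s["ppm"])

# per-subgroup Z-score: (value - subgroup mean) / subgroup std
for grp in {s["group"] for s in samples}:
    vals = [s["log"] for s in samples if s["group"] == grp]
    mean = sum(vals) / len(vals)
    std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    for s in samples:
        if s["group"] == grp:
            s["z"] = (s["log"] - mean) / std
```

After this step the two subgroups share a common scale, so a map of `z` no longer shows the between-group offset, only within-group anomalies.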

14.
The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may owe more to its ease of use and interpretation than to comparisons with alternative methods. A comparison of the weights-of-evidence method with probabilistic neural networks is performed here using data from Chisel Lake–Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, and the estimated probabilities are used to classify the unknowns. With these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data not used in any training were used to test the efficiency of classification (0.7 versus 17%). Although these data contain few deposits, the tests demonstrate the neural network's ability to make unbiased probability estimates and its lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias, with most errors being barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes' rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake–Andeson Lake data; however, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption.
Other, independent simulations with three variables show that variables with correlations of 1.0 can double the expected number of deposits, as can correlations of −1.0. Studies done in the 1970s on methods that use Bayes' rule show that moderate correlations among attributes seriously affect estimates, and that even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upwardly biased probability estimates from multivariate methods founded on Bayes' rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary, subjective decision. The probabilistic neural network has no problem dealing with correlated variables; its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method remains useful for estimating thresholds between anomalies and background and for exploratory data analysis.
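For readers unfamiliar with the method under comparison, the weights of evidence for a single binary layer are W⁺ = ln[P(B|D)/P(B|~D)] and W⁻ = ln[P(~B|D)/P(~B|~D)], where D is "deposit present" and B is "evidence present". A minimal sketch with invented cell counts (not the Chisel Lake–Andeson Lake data):

```python
import math

# Weights-of-evidence sketch for one binary evidence layer, with invented
# cell counts: W+ and W- measure how strongly presence/absence of the
# evidence B discriminates deposit cells D from non-deposit cells.
n_dep_B = 8        # deposit cells with evidence present
n_dep = 10         # all deposit cells
n_bar_B = 100      # non-deposit cells with evidence present
n_bar = 1000       # all non-deposit cells

p_B_D = n_dep_B / n_dep            # P(B | D)  = 0.8
p_B_nD = n_bar_B / n_bar           # P(B | ~D) = 0.1
w_plus = math.log(p_B_D / p_B_nD)
w_minus = math.log((1 - p_B_D) / (1 - p_B_nD))
contrast = w_plus - w_minus        # overall diagnostic strength of the layer
```

Summing such weights across layers to get a posterior is exactly the step that assumes conditional independence, which is the assumption the abstract shows these data violate.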

15.
The Pleistocene periglacial legacy in the geomorphology of Dartmoor has been substantial. This paper examines some of these relict features in an area of western Dartmoor. The major features are tors, altiplanation terraces, boulder accumulations in a variety of patterns, and earth mounds. The tors and altiplanation terraces indicate the degree of slope modification created by frost action. The blockfields (clitter) are arranged into stripes, runs and garlands. Narrow stripes start and finish in midslope positions, while boulder runs converge and diverge, apparently at random. The long-axis orientation of boulders in stripes is roughly in accord with the direction of steepest slope, whereas the orientation of boulders in blockfields is more variable. The altiplanation terraces and earth mounds occur on Cox Tor, which is composed of diabase; this contrast in rock type seems to explain the lack of similar features in the granite areas. The diabase is very closely jointed and weathers to a silt grade. The earth mounds are thought to be the result of frost thrusting in a silt-based soil. The general conclusion is that many of the landforms of Dartmoor are relicts of periglacial activity during the last glacial period. [Key words: periglaciation, tors, solifluction, Dartmoor.]

16.
To reduce the hazards from debris flows in drainage basins burned by wildfire, erosion control measures such as construction of check dams, installation of log erosion barriers (LEBs), and spreading of straw mulch and seed are common practice. After the 2002 Missionary Ridge Fire in southwest Colorado, these measures were implemented at Knight Canyon above Lemon Dam to protect the intake structures of the dam from being filled with sediment. Hillslope erosion protection measures included LEBs at densities of 220–620/ha (200–600% of typical densities), straw mulch hand-spread at up to 5.6 metric tons/hectare (125% of typical densities) and carefully crimped into the soil to keep it in place, and seed hand-spread at 67–84 kg/ha (150% of typical values). In addition, 13 check dams and 3 debris racks were installed in the main drainage channel of the basin. The technical literature shows that each mitigation method working alone, or improperly constructed or applied, has been inconsistent in its ability to reduce erosion and sedimentation. At Lemon Dam, however, these methods were effective in virtually eliminating sedimentation into the reservoir, which can be attributed to a number of factors: the density of application of each mitigation method, the enhancement of methods working in concert, the quality of installation, and the rehabilitation of mitigation features to extend their useful life. The check dams effectively trapped the sediment mobilized during rainstorms, and only a few cubic meters of debris traveled downchannel, where it was intercepted by the debris racks. According to a debris volume-prediction model developed for burned basins in the western U.S., the recorded rainfall events following the Missionary Ridge Fire should have produced a debris flow of approximately 10,000 m³ at Knight Canyon. The mitigation measures therefore reduced the debris volume by several orders of magnitude.
For comparison, rainstorm-induced debris flows occurred in two adjacent canyons at volumes within the range predicted by the model.

17.
18.
A Review of Methods for Extracting Texture Information from Remote Sensing Images
Texture is an important feature of remote sensing images: it reveals important information about the spatial variation of radiometric brightness values within an image. To exploit the spatial information in an image to improve classification accuracy, measuring texture reasonably and effectively is essential. Current methods for extracting texture information from remote sensing images fall into four main categories: statistical description, wavelet transform, fractal-dimension, and geostatistical methods. The advantages and disadvantages, applicable domains, and applications of each class of method are reviewed, and likely directions of development and current research hotspots in texture information extraction from remote sensing images are discussed.
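Of the statistical methods mentioned, the gray-level co-occurrence matrix (GLCM) is the classic example. A minimal sketch computing one GLCM statistic (contrast) for horizontal neighbor pairs on an invented 4-level image:

```python
import numpy as np

# GLCM sketch: count co-occurrences of gray levels at offset (0, 1),
# normalize to probabilities, then compute the contrast statistic.
# The tiny 4-level image is invented for illustration.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):       # horizontal neighbor pairs
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()                           # normalize to probabilities

# contrast = sum_ij (i - j)^2 * p(i, j); high for abrupt gray-level changes
idx = np.arange(levels)
contrast = float(((idx[:, None] - idx[None, :]) ** 2 * glcm).sum())
```

In a classification workflow this statistic would be computed in a moving window over the image and appended to the spectral bands as an extra feature.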

19.
20.
The movement of bedload over a cross-section is often sampled using a “pressure-difference” bedload sampler such as the Helley–Smith. Although several types are in use, no one device has gained universal acceptance as the standard for all types of streams. Moreover, evidence suggests that similar devices may collect substantially different amounts of bedload because of only slight modifications in design. In this study, sample weights collected by three types of pressure-difference samplers are compared to determine whether the differences are statistically significant or whether sampler performance is so irregular and overlapping that the devices might be regarded as the same. The results confirm that the weights of samples collected by the devices are significantly different. Generally, the US BLH 84 collected less material, the Sheetmetal Helley–Smith collected more, and the Original Helley–Smith was intermediate; these tendencies were consistent at the two sites where bedload was measured. The implication is that measured transport rates will vary depending on the sampler used and are therefore not directly comparable without some form of calibration. To place this finding in a larger context, sediment rating curves, determined from sample weights and flow measurements, were integrated over the available flow records and used to estimate annual yield. The three estimates of annual yield, one for each device, were then compared with measures of annual accumulation from a weir pond below one of the collection sites. The results indicate that, despite differences between the devices, data obtained with pressure-difference samplers estimated annual sediment accumulation reasonably well. Predicted accumulations were within 40–50% of the measured yield for two samplers, and within 80% for the third.
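The rating-curve step described above (fit a power law Qs = a·Q^b to paired discharge/transport samples, then integrate over a flow record) can be sketched as follows; all numbers are invented, not the study's measurements:

```python
import math

# Rating-curve sketch: fit Qs = a * Q**b by least squares in log-log space,
# then integrate over a (hypothetical) daily flow record for annual yield.
flows = [5.0, 10.0, 20.0, 40.0]      # discharge Q, m^3/s (invented)
rates = [0.02, 0.09, 0.35, 1.40]     # sampled bedload transport, kg/s (invented)

# linear least squares on logs: ln(Qs) = ln(a) + b * ln(Q)
n = len(flows)
lx = [math.log(q) for q in flows]
ly = [math.log(r) for r in rates]
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
    sum((x - mx) ** 2 for x in lx)
a = math.exp(my - b * mx)

# integrate the rating over one year of daily mean flows (invented record)
daily_Q = [8.0] * 300 + [30.0] * 65
annual_yield_kg = sum(a * q ** b * 86400 for q in daily_Q)
```

Repeating this fit with each sampler's weights would give the three device-specific yield estimates the abstract compares against the weir-pond accumulation.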
