This paper studies dynamic crack propagation by employing the distinct lattice spring model (DLSM) and the 3‐dimensional (3D) printing technique. A damage‐plasticity model was developed and implemented in a 2D DLSM. The applicability of the damage‐plasticity DLSM was verified against analytical elastic solutions and experimental results for crack propagation. As a physical analogy, dynamic fracturing tests were conducted on 3D printed specimens using the split Hopkinson pressure bar. The dynamic stress intensity factors were recorded, and crack paths were captured by a high‐speed camera. A parametric study was conducted to examine the influence of the model parameters on cracking behavior, including initial and peak fracture toughness, crack speed, and crack patterns. Finally, suitable parameters for the damage‐plasticity model were selected through comparison of numerical predictions with the experimentally observed cracking features.
We investigate our ability to assess transfer of hexavalent chromium, Cr(VI), from the soil to surface runoff by considering the effect of coupling diverse adsorption models with a two‐layer solute transfer model. Our analyses are grounded on a set of two experiments associated with soils characterized by diverse particle size distributions. Our study is motivated by the observation that Cr(VI) is receiving much attention for the assessment of environmental risks due to its high solubility, mobility, and toxicological significance. Adsorption of Cr(VI) is considered to be at equilibrium in the mixing layer under our experimental conditions. Four adsorption models, that is, the Langmuir, Freundlich, Temkin, and linear models, constitute our set of alternative (competing) mathematical formulations. Experimental results reveal that the soil samples characterized by the finest grain sizes are associated with the highest release of Cr(VI) to runoff. We compare the relative abilities of the four models to interpret experimental results through maximum likelihood model calibration and four model identification criteria (i.e., the Akaike information criteria [AIC and AICC] and the Bayesian and Kashyap information criteria). Our study results enable us to rank the tested models on the basis of a set of posterior weights assigned to each of them. A classical variance‐based global sensitivity analysis is then performed to assess the relative importance of the uncertain parameters associated with each of the models considered, within subregions of the parameter space. In this context, the modelling strategy resulting from coupling the Langmuir isotherm with a two‐layer solute transfer model is then evaluated as the most skilful for the overall interpretation of both sets of experiments. 
Our results document that (a) the depth of the mixing layer is the most influential factor for all models tested, with the exception of the Freundlich isotherm, and (b) the total sensitivity of the adsorption parameters varies in time, tending to increase as time progresses for all of the models. These results suggest that adsorption has a significant effect on the uncertainty associated with the release of Cr(VI) from the soil to the surface runoff component.
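The model-ranking step described in the abstract above, in which posterior weights derived from information criteria are assigned to competing adsorption models, can be sketched as follows. This is a minimal illustration using Akaike weights only; the criterion values are hypothetical placeholders, not results from the study.

```python
import math

# Hypothetical AIC values for the four competing adsorption models
# (illustrative numbers only, not the study's computed criteria).
aic = {"Langmuir": 102.3, "Freundlich": 105.1, "Temkin": 109.8, "linear": 112.4}

aic_min = min(aic.values())
# Akaike weights: w_i ∝ exp(-ΔAIC_i / 2), normalized so the weights sum to 1
raw = {m: math.exp(-(v - aic_min) / 2.0) for m, v in aic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# Rank models from most to least supported
ranking = sorted(weights, key=weights.get, reverse=True)
```

With these placeholder values the Langmuir isotherm receives the largest weight, mirroring the ranking logic (though not the numbers) of the study.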
ABSTRACT High performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and domain decomposition for parallelism are prominent strategies in designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition, which divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing for different cases. Comparative experiments between the approach and traditional methods were performed using two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by AI-based machine learning.
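The balanced-decomposition step in the abstract above can be illustrated with a minimal sketch: given per-cell computational intensity predicted by some trained model (here simply a placeholder array), a 1D spatial domain is split into contiguous subdivisions of roughly equal total cost. The function name and the greedy strategy are illustrative assumptions, not the paper's implementation.

```python
def balanced_partition(cit, n_parts):
    """Greedily split a row of predicted computational-intensity (CIT)
    values into n_parts contiguous subdivisions of roughly equal load."""
    target = sum(cit) / n_parts
    parts, current, load = [], [], 0.0
    for i, c in enumerate(cit):
        current.append(i)
        load += c
        # Close the current subdivision once its load reaches the target,
        # while leaving enough cells for the remaining subdivisions.
        remaining_cells = len(cit) - i - 1
        remaining_parts = n_parts - len(parts) - 1
        if load >= target and len(parts) < n_parts - 1 and remaining_cells >= remaining_parts:
            parts.append(current)
            current, load = [], 0.0
    parts.append(current)
    return parts

# Hypothetical per-cell CIT values as a prediction model might emit them:
# two expensive cells (e.g. dense point-cloud tiles) among cheap ones.
predicted_cit = [1, 1, 8, 1, 1, 1, 8, 1]
parts = balanced_partition(predicted_cit, 2)
```

Here both subdivisions carry a total predicted cost of 11, whereas a decomposition ignoring CIT could assign both expensive cells to one worker.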
Abstract— It has now been about a decade since the first demonstrations that hypervelocity particles could be captured, partially intact, in aerogel collectors. But the initial promise of a bonanza of partially‐intact extraterrestrial particles, collected in space, has yet to materialize. One of the difficulties that investigators have encountered is that the location, extraction, handling, and analysis of very small (10 μm and less) grains, which constitute the vast majority of the captured particles, are challenging and burdensome. Furthermore, current extraction techniques tend to be destructive over large areas of the collectors. Here we describe our efforts to alleviate some of these difficulties. We have learned how to rapidly and efficiently locate captured particles in aerogel collectors, using an automated microscopic scanning system originally developed for experimental nuclear astrophysics. We have learned how to precisely excavate small access tunnels and trenches using an automated micromanipulator and glass microneedles as tools. These excavations are destructive to the collector only in a very small area—this feature may be particularly important for excavations in the precious Stardust collectors. Using actuatable silicon microtweezers, we have learned how to extract and store “naked” particles—essentially free of aerogel—as small as 3 μm in size. We have also developed a technique for extracting particles, along with their terminal tracks, still embedded in small cubical aerogel blocks. We have developed a novel method for storing very small particles in etched nuclear tracks. We have applied these techniques to the extraction and storage of grains captured in aerogel collectors (Particle Impact Experiment, Orbital Debris Collector Experiment, Comet‐99) in low Earth orbit.
Abstract— Calculations of the formation of seven types of chondrules in Semarkona from a gas of solar composition were performed with the FACT computer program, developed by the authors and their colleagues to predict the chemistries of oxides (including silicates). The constrained equilibrium theory was used in the calculations with two nucleation constraints suggested by nucleation theory. The first constraint was the blocking of Fe and other metal gaseous atoms from condensing to form solids or liquids because of the very high surface free energies and high surface tensions of the solid and liquid metals, respectively. The second constraint was the blocking of the condensation of solids and the formation of metastable liquid oxides (including silicates) well below their liquidus temperatures. Our laboratory experiments suggested subcooling of type IIA chondrule compositions of 400 degrees or more below the liquidus temperature. The blocking of iron leads to a supersaturation of Fe atoms, so that the partial pressure of Fe (pFe) is larger than the partial pressure at equilibrium (pFe(eq)). The supersaturation ratio S = pFe/pFe(eq) becomes larger than 1 and increases rapidly with decreasing temperature. This drives the reaction Fe + H2O ⇌ H2 + FeO to the right. With S = 100, the activity of FeO in the liquid droplet is 100 times as large as its value at equilibrium. The FeO activities are a function of temperature and provide relative average temperatures of the crystallization of chondrules. Our calculations for the LL3.0 chondrite Semarkona and our study of some non‐equilibrium effects lead to accurate representations of the compositions of chondrules of types IA, IAB, IB, IIA, IIAB, IIB, and CC. Our concepts readily explain both the variety of FeO concentrations in the different chondrule types and the entire process of chondrule formation.
Our theory is unified and could explain the formation of chondrules in all chondritic meteorites, as well as provide a simple explanation for the complex chemistries of chondrites, especially type 3 chondrites.
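The supersaturation relation stated in the abstract above, S = pFe/pFe(eq) with the FeO activity scaling linearly with S, can be written as a small numeric sketch. The partial-pressure and equilibrium-activity values below are placeholders for illustration, not quantities computed in the study.

```python
def supersaturation_ratio(p_fe, p_fe_eq):
    """S = pFe / pFe(eq); S > 1 indicates supersaturation of Fe in the gas."""
    return p_fe / p_fe_eq

def feo_activity(a_feo_eq, s):
    """With the reaction Fe + H2O <=> H2 + FeO driven to the right, the FeO
    activity in the liquid droplet scales linearly with the ratio S."""
    return a_feo_eq * s

# Placeholder partial pressures giving S = 100: the FeO activity is then
# 100 times its equilibrium value, as stated in the abstract.
s = supersaturation_ratio(p_fe=1.0e-6, p_fe_eq=1.0e-8)
a_feo = feo_activity(a_feo_eq=2.0e-3, s=s)
```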
As is well known, the methods of remote sensing and the Bowen ratio for retrieving surface fluxes are based on energy balance closure; in most cases, however, the surface energy balance observed in experiments lacks closure. There are two main causes for this: one is errors of the observation devices and differences in their observational scales; the other is the effect of horizontal advection on surface flux measurements. It is therefore very important to estimate the effects of horizontal advection quantitatively. Based on local advection theory and a surface experiment, a model is proposed for correcting the effect of horizontal advection on surface flux measurements, in which the relationship between the fetch of the measurement and the pixel size of remotely sensed data is considered. By means of numerical simulations, the sensitivities of the main parameters of the model and the scaling behavior of horizontal advection were analyzed. Finally, the model was validated using observational data acquired in an agricultural field with a relatively homogeneous surface.