This paper studies dynamic crack propagation by employing the distinct lattice spring model (DLSM) and the three-dimensional (3D) printing technique. A damage-plasticity model was developed and implemented in a 2D DLSM. The applicability of the damage-plasticity DLSM was verified against analytical elastic solutions and experimental results for crack propagation. As a physical analogy, dynamic fracturing tests were conducted on 3D-printed specimens using the split Hopkinson pressure bar. The dynamic stress intensity factors were recorded, and crack paths were captured by a high-speed camera. A parametric study was conducted to determine the influence of the model parameters on cracking behavior, including initial and peak fracture toughness, crack speed, and crack patterns. Finally, the parameters of the damage-plasticity model were selected by comparing numerical predictions with the experimentally observed cracking features.
The Three Gorges Project is the world's largest water conservancy project. According to the design standards for the 1,000-year flood, flood diversion areas in the Jingjiang reach of the Yangtze River must be utilized to ensure the safety of the Jingjiang area and the city of Wuhan. However, once these areas are used, the economic losses and loss of life in them may be very large. It is therefore vital to reduce these losses by developing a scheme that limits the use of the flood diversion areas through flood regulation by the Three Gorges Reservoir (TGR), under the premise of ensuring the safety of the Three Gorges Dam. For a 1,000-year flood constructed on the basis of the highly destructive flood of 1954, this paper evaluates scheduling schemes in which the flood diversion areas are or are not used. The schemes are simulated based on 2.5-m-resolution reservoir topography and an optimized model of dynamic-capacity flood regulation. The simulation results show the following. (a) Under normal flood-control regulation discharge, the maximum water level above the dam does not exceed 175 m, which ensures the safety of the dam and reservoir area. However, it is then necessary to utilize the flood diversion areas within the Jingjiang area, and the diverted flood volume can reach 2.81 billion m3. (b) If the TGR impounds the floodwaters independently rather than using the flood diversion areas, the maximum water level above the dam reaches 177.35 m, which is below the check flood level of 180.4 m and thus ensures the safety of the Three Gorges Dam. The average rise of the TGR water level in the Chongqing area is no more than 0.11 m, indicating no significant effect on the upstream reservoir area. Comparing the various scheduling schemes, it is concluded that when the flood diversion areas are not used, the TGR can safely control a 1,000-year flood, thereby greatly reducing flood damage.
High-performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and domain decomposition for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in its evaluation of computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing across different cases. Comparative experiments between the approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by AI machine learning.
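The core idea described above, predicting a per-cell computational cost and then cutting the domain so that each subdivision carries roughly equal total cost, can be sketched as follows. This is a minimal 1-D illustration, not the paper's implementation: the `costs` array stands in for CIT values that would come from a trained regressor, and the greedy cut placement is one simple way to balance contiguous subdivisions.

```python
def balanced_partition(costs, k):
    """Split a 1-D strip of per-cell predicted costs into k contiguous
    subdivisions of roughly equal total cost, by greedily placing a cut
    each time the accumulated cost reaches the per-part target."""
    total = sum(costs)
    target = total / k
    cuts, acc = [], 0.0
    for i, c in enumerate(costs):
        acc += c
        if acc >= target and len(cuts) < k - 1:
            cuts.append(i + 1)  # cut after cell i
            acc = 0.0
    bounds = [0] + cuts + [len(costs)]
    # return (start, end) index ranges, one per subdivision
    return [(bounds[j], bounds[j + 1]) for j in range(len(bounds) - 1)]

# Hypothetical predicted CIT values: the last cell is expensive,
# so a naive equal-size split would be badly imbalanced.
parts = balanced_partition([2, 2, 2, 6], k=2)
print(parts)  # → [(0, 3), (3, 4)]: both parts carry a cost of 6
```

An equal-size decomposition of the same strip would assign costs 4 and 8 to the two workers; cost-aware cuts are what the predicted CIT makes possible.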
Starfish oocytes with intact germinal vesicles (GVs) were cut along desired planes with glass needles, or ligated with silk thread loops, into two parts, allowed to mature in vitro, and inseminated. The experimental results showed that (1) only the parts containing GVs or partial GV contents (PGVCs) cleaved, while those without any GV material did not; nucleated and non-nucleated fragments cut from mature eggs, however, were both able to divide; (2) the animal parts of oocytes containing GVs or PGVCs developed like animal fragments of matured oocytes with female pronuclei: most of them gave rise to permanent blastulae, and just a few formed ectodermal vesicles with a little primary mesenchyme; (3) most vegetal fragments with GVs or PGVCs, and the vegetal parts of mature eggs without female pronuclei, developed into small but normal embryos; (4) fragments containing GVs or PGVCs obtained by cutting oocytes along a plane parallel to the animal-vegetal (A-V) axis developed as normally as halves (with or without female pronuclei) severed from mature eggs along the same axis. Based on these data, it was concluded that (1) the non-chromatin materials in the oocyte GVs are indispensable for successful fertilization and cleavage of starfish eggs, and (2) some factor(s) located asymmetrically in the vegetal hemispheres of starfish oocytes is (are) responsible for formation of the archenteron and primary mesenchyme. It is evident from these findings that the oocyte cytoplasm of the starfish has already regionalized before GV breakdown.
Contribution No. 1722 from the Institute of Oceanology, Academia Sinica
As is well known, the remote sensing and Bowen ratio methods for retrieving surface flux are based on energy balance closure; in most cases, however, the surface energy balance observed in experiments is not closed. There are two main causes: one is the errors of the observation devices and the differences in their observational scales; the other is the effect of horizontal advection on surface flux measurement. It is therefore important to estimate the effects of horizontal advection quantitatively. Based on local advection theory and surface experiments, a model is proposed for correcting the effect of horizontal advection on surface flux measurement, in which the relationship between the fetch of the measurement and the pixel size of remotely sensed data is considered. By means of numerical simulations, the sensitivities of the main parameters of the model and the scaling problems of horizontal advection were analyzed. Finally, the model was validated using observational data acquired in an agricultural field with a relatively homogeneous surface.
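The energy balance closure that the flux-retrieval methods above assume is the standard relation Rn − G = H + LE (net radiation minus ground heat flux equals sensible plus latent heat flux). A minimal sketch of the closure check, with hypothetical flux values in W/m²; the correction model itself is specific to the paper and is not reproduced here:

```python
def closure_ratio(Rn, G, H, LE):
    """Energy balance closure ratio (H + LE) / (Rn - G).
    A value of 1.0 means perfect closure; values below 1.0 reflect
    the non-closure commonly observed in field experiments, which
    horizontal advection is one candidate explanation for."""
    available_energy = Rn - G
    turbulent_flux = H + LE
    return turbulent_flux / available_energy

# Hypothetical midday fluxes over cropland (W/m^2):
ratio = closure_ratio(Rn=500.0, G=50.0, H=120.0, LE=240.0)
print(f"closure ratio = {ratio:.2f}")  # → closure ratio = 0.80
```

A ratio of about 0.8, as in this illustrative example, is the kind of imbalance that motivates quantifying the advective contribution before using the measured fluxes to validate remote sensing retrievals.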
An improved solar radio spectrometer operating at 1.10-2.06 GHz, with much better spectral and temporal resolution, has been completed by the National Astronomical Observatories and the Hebei Semiconductor Research Institute, based on an older 1-2 GHz spectrometer. The new spectrometer has a spectral resolution of 4 MHz and a temporal resolution of 5 ms, with an instantaneous detectable range from 0.02 to 10 times the quiet-Sun flux. It can measure both left and right circular polarization with an accuracy of 10% in degree of polarization. Some preliminary observations that could not have been recorded by the old 1-2 GHz spectrometer are presented.