ABSTRACT High performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and applying domain decomposition for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing in different cases. Comparative experiments between the approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach, but also provide hints on how traditional GIS computation can be improved by AI machine learning.
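The core idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a learned model (here a stand-in lambda) predicts per-tile computational intensity from tile features, and a greedy pass then cuts the tile sequence into subdivisions carrying roughly equal predicted work. All names and parameter values are hypothetical.

```python
# Sketch: CIT-guided domain decomposition. A (hypothetical) trained model
# predicts the cost of each tile; tiles are then grouped into contiguous
# subdivisions whose predicted costs are approximately balanced.

def predict_cit(tiles, model):
    """Predict computational intensity for each tile from its features."""
    return [model(t) for t in tiles]

def decompose(costs, n_parts):
    """Greedy 1-D decomposition: cut the tile sequence so each
    subdivision carries about total_cost / n_parts predicted work."""
    total = sum(costs)
    target = total / n_parts
    parts, current, acc = [], [], 0.0
    for i, c in enumerate(costs):
        current.append(i)
        acc += c
        if acc >= target and len(parts) < n_parts - 1:
            parts.append(current)
            current, acc = [], 0.0
    if current:
        parts.append(current)
    return parts

# Toy model: cost proportional to a tile's (hypothetical) point density.
densities = [5, 1, 1, 1, 5, 5, 1, 1]
costs = predict_cit(densities, lambda d: float(d))
print(decompose(costs, 2))  # → [[0, 1, 2, 3, 4], [5, 6, 7]]
```

A naive equal-area split would instead put four tiles in each part, loading one worker with far more predicted work; balancing on predicted CIT rather than area is the point of the approach.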
Methane content in a coal seam is an essential parameter for assessing coalbed gas reserves and is a threat to underground coal mining activities. Compared with the adsorption-isotherm-based indirect method, the direct method of sampling methane-bearing coal seams is apparently more accurate for predicting coalbed methane content. However, the traditional sampling method, using an open sample tube or collecting drill cuttings during air drilling, leads to serious loss of coalbed methane in the sampling process. The pressurized sampling method, employing a mechanical-valve-based pressure corer, is expected to reduce this loss, but it usually fails due to wear of the mechanical valve. Sampling of methane-bearing coal seams by freezing was proposed in this study, and the coalbed gas desorption characteristics under freezing temperatures were studied to verify the feasibility of this method. Results show that low temperature not only improves the adsorption velocity of the coalbed gas, but also extends the adsorption process and increases the total adsorbed gas. The total adsorbed methane increased linearly with decreasing temperature, which was attributed to the decreased Gibbs free energy and mean free path of the coalbed gas molecules at low temperature. In contrast, the desorption velocity and total desorbed gas are significantly decreased at lower temperatures. The desorption process can be divided into three phases. The desorption velocity decreases linearly in the first phase, then shows a slow decrease in the second phase, and finally levels off to a constant value in the third phase. The desorbed coalbed gas shows a parabolic relation to temperature in each phase: it increases with increasing temperature in the first phase and then shows a declining trend with increasing temperature in the remaining phases.
The experimental results show that decreasing the system temperature can restrain desorption of coalbed methane effectively, and it is proven to be a feasible way of sampling methane-bearing coal seams.
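The three-phase desorption behavior described above can be captured by a simple piecewise function. This is an illustrative model for the qualitative shape only; the phase boundaries and velocity values are hypothetical, not the paper's measurements.

```python
# Illustrative piecewise model of the three desorption phases: velocity
# falls linearly in phase 1, declines more slowly in phase 2, and levels
# off to a constant in phase 3. All parameter values are hypothetical.

def desorption_velocity(t, t1=10.0, t2=30.0, v0=8.0, v1=3.0, v2=1.0):
    """Desorption velocity as a function of time t (arbitrary units)."""
    if t <= t1:                      # phase 1: linear decrease
        return v0 - (v0 - v1) * t / t1
    if t <= t2:                      # phase 2: slower linear decrease
        return v1 - (v1 - v2) * (t - t1) / (t2 - t1)
    return v2                        # phase 3: constant velocity

print([desorption_velocity(t) for t in (0, 5, 10, 20, 30, 50)])
# → [8.0, 5.5, 3.0, 2.0, 1.0, 1.0]
```

The phase-1 slope (v0 - v1) / t1 is steeper than the phase-2 slope (v1 - v2) / (t2 - t1), matching the reported fast-then-slow-then-constant decline.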
Reservoir sizing is one of the most important aspects of water resources engineering, as the storage in a reservoir must be sufficient to supply water during extended droughts. Typically, observed streamflow is used to stochastically generate multiple realizations of streamflow, and the required storage is estimated with the Sequent Peak Algorithm (SQP). The main limitation of this approach is that the parameters of the stochastic model are derived purely from the observed record (limited to less than 80 years of data), which contains no information about prehistoric droughts. Further, reservoir sizing is typically estimated to meet future increases in water demand, and there is no guarantee that future streamflow over the planning period will be representative of past streamflow records. In this context, reconstructed streamflow records, usually estimated from tree ring chronologies, provide better estimates of prehistoric droughts, and future streamflow records over the planning period can be obtained from general circulation models (GCMs), which provide 30-year near-term climate change projections. In this study, paleo streamflow records were developed, and future streamflow records for 30 years were obtained by forcing the projected precipitation and temperature from the GCMs through a lumped watershed model. We propose combining observed, reconstructed, and projected streamflows to generate synthetic streamflow records using a Bayesian framework that provides the posterior distribution of reservoir storage estimates. The performance of the Bayesian framework is compared to a traditional stochastic streamflow generation approach. Findings based on split-sample validation show that the Bayesian approach yielded generated streamflow traces more representative of future streamflow conditions than the traditional stochastic approach, thereby reducing uncertainty in storage estimates corresponding to higher reliabilities.
Potential strategies for improving future streamflow projections and their utility in reservoir sizing and capacity expansion projects are also discussed.
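The Sequent Peak Algorithm referenced above can be sketched compactly: the running deficit K_t = max(0, K_{t-1} + demand_t - inflow_t), and the required storage is the largest deficit over the record. This single-pass version is a simplification (the full algorithm typically repeats the record to handle deficits spanning the cycle boundary), and the inflow and demand values below are hypothetical.

```python
# Minimal single-pass sketch of the Sequent Peak Algorithm (SQP):
# track the cumulative deficit of inflow against demand and record
# its peak, which is the required reservoir storage.

def sequent_peak(inflows, demand):
    """Required reservoir storage to meet a constant demand rate."""
    deficit, required = 0.0, 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)  # deficit never negative
        required = max(required, deficit)         # sequent peak
    return required

inflows = [12, 4, 3, 15, 10, 2, 6, 14]  # hypothetical annual flows
print(sequent_peak(inflows, demand=8.0))  # → 9.0
```

In the framework described above, this computation would be applied to each generated streamflow trace, so the posterior distribution of traces maps directly to a distribution of storage estimates.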