This study presents a machine learning framework for multi-step-ahead prediction of groundwater levels in agricultural regions heavily reliant on groundwater irrigation. The framework utilizes a comprehensive set of predictive factors, including meteorological, hydrological, and human activity data. An optimal combination of input variables and their temporal delays was determined using a novel selection method. To address overfitting, a mathematical model for hyperparameter optimization was developed, leveraging sample-subset cross-validation and an improved differential evolution algorithm. Numerical experiments on the YingGuo region in the Huaihe River Basin demonstrated that the hyperparameter optimization resulted in an 11.6%-38.5% increase in the Nash-Sutcliffe Efficiency (NSE) indicator. Additionally, refining the temporal scale from monthly to five-day resolution significantly improved predictive performance, with NSE increasing from 0.629 to 0.952 (a 33.9% enhancement). However, longer forecasting horizons led to a 29.4% reduction in NSE. The study also implemented a multi-core parallel computing framework, which achieved a 15.35-fold improvement in computational efficiency while maintaining predictive precision. The integration of external factors enhanced predictive performance across the various observation wells. These findings contribute to a better understanding of groundwater dynamics and highlight the potential of machine learning models for improving groundwater depth predictions in agricultural regions with high reliance on groundwater irrigation.
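The abstract does not give the authors' implementation, but the core idea of tuning hyperparameters by minimizing a cross-validated loss with differential evolution can be sketched as follows. This is a minimal illustration, assuming a stand-in regressor (SVR) and two hypothetical hyperparameters (C, gamma); SciPy's standard differential evolution is used here rather than the paper's improved variant, and the objective is the mean of (1 - NSE) over validation subsets.

```python
# Illustrative sketch (not the authors' code): hyperparameter tuning by
# minimizing cross-validated (1 - NSE) with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import KFold
from sklearn.svm import SVR  # stand-in learner; the paper's base model may differ


def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)


def cv_loss(params, X, y, n_splits=5):
    """Average (1 - NSE) over sample subsets for a candidate hyperparameter set."""
    C, gamma = params
    losses = []
    for train_idx, val_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = SVR(C=C, gamma=gamma).fit(X[train_idx], y[train_idx])
        losses.append(1.0 - nse(y[val_idx], model.predict(X[val_idx])))
    return float(np.mean(losses))


# Synthetic lagged predictors standing in for meteorological/hydrological inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=300)

result = differential_evolution(
    cv_loss,
    bounds=[(0.1, 100.0), (1e-3, 1.0)],  # search ranges for (C, gamma)
    args=(X, y),
    maxiter=20,
    polish=False,
    seed=0,
    workers=-1,  # evaluate candidates on all cores, mirroring the parallel framework
)
print("best (C, gamma):", result.x, "CV loss:", result.fun)
```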
With the widespread application of GNSS, the careful handling of biases among different systems and different frequencies is of critical importance, wherein the inter-frequency clock biases (IFCBs) and observable-specific signal biases (OSBs) should be carefully corrected. Usually, a serial approach is used to calculate these products. To accelerate computation and reduce the time delay, a multi-core parallel estimation strategy for IFCBs and for code and phase OSBs using the Task Parallel Library (TPL) is proposed; the parallel computations, including precise point positioning (PPP) and IFCB and OSB estimation, are carried out on the basis of data parallelism and task-based asynchronous programming. Three weeks of observables from the multi-GNSS experiment campaign (MGEX) network are utilized. The results show that the IFCB errors of GPS Block IIF and GLONASS M+ satellites are non-negligible, with the GLONASS M+ satellite R21 showing the largest IFCB of more than 0.60 m, while those of other systems and frequencies are marginal. The code OSBs present excellent stability, with a standard deviation (STD) of 0.10 ns for GPS and approximately 0.20 ns for the other satellite systems. In addition, the phase OSBs of all systems show a stability of better than 0.10 ns, with the Galileo satellites showing the best performance of 0.01 ns. Compared with the single-core serial computing method, the acceleration rates for IFCB and OSB estimation are 3.10, 5.53, 9.66, and 17.04 with four, eight, sixteen, and thirty-two physical cores, respectively, through multi-core parallelized execution.
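The paper parallelizes its estimators with .NET's Task Parallel Library; as a language-neutral sketch of the same data-parallel idea, the snippet below fans hypothetical per-satellite bias jobs across physical cores with Python's concurrent.futures. The per-satellite estimator here is a placeholder, not the paper's IFCB/OSB algorithm.

```python
# Illustrative analogue only: independent per-satellite bias estimates run in
# parallel across worker processes, mimicking TPL-style data parallelism.
from concurrent.futures import ProcessPoolExecutor
import numpy as np


def estimate_ifcb(args):
    """Hypothetical per-satellite job: reduce epoch residuals to one IFCB value."""
    sat_id, residuals = args
    # Placeholder estimator: an epoch-wise mean stands in for the real IFCB model.
    return sat_id, float(np.mean(residuals))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic residual series for 32 satellites, standing in for MGEX observables.
    jobs = [(f"G{prn:02d}", rng.normal(scale=0.05, size=2880)) for prn in range(1, 33)]
    with ProcessPoolExecutor(max_workers=8) as pool:  # number of physical cores used
        ifcbs = dict(pool.map(estimate_ifcb, jobs))
    print(len(ifcbs), "satellites processed; example:", ifcbs["G01"])
```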