The Particle Flow Code is a typical DEM numerical software; however, the microparameters of DEM models must be calibrated before numerical simulation. In most cases the trial-and-error method is used, but it takes a great deal of time and the results depend on the researcher's experience. To address this issue, the cross-entropy method (CEM), differential evolution (DE), electromagnetic field optimization (EFO), moth-flame optimization (MFO), and the salp swarm optimization (SSO) algorithm have been used for microparameter calibration. We provide a numerical simulation example to verify the validity of these microparameter calibration methods; it turns out that the number of iterations was large. To reduce the computational effort of obtaining suitable microparameters for the DEM model, we determine the optimal hyperparameters of the CEM, DE, EFO, MFO, and SSO calibration techniques. The analysis of the results shows that the number of iterations of these algorithms was markedly reduced. Considering the number of iterations, the number of hyperparameters, and the numerical simulation results, we recommend the SSO algorithm for microparameter calibration. We give another numerical simulation example to verify the validity of the proposed method, and found that fewer than 100 iterations were needed to obtain suitable microparameters, with only one hyperparameter to determine. Compared with previous studies, the number of iterations decreased remarkably.
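The salp swarm algorithm recommended above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the objective, bounds, and the stand-in "simulator" (which replaces an actual PFC run) are hypothetical; the only hyperparameter exposed is the population size, matching the abstract's point that a single hyperparameter suffices.

```python
import math
import random

def salp_swarm_optimize(objective, bounds, n_salps=20, n_iter=100, seed=0):
    """Minimize `objective` over the box `bounds` with the salp swarm algorithm.

    `bounds` is a list of (low, high) tuples, one per microparameter.
    The only hyperparameter to tune is the population size `n_salps`.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_salps)]
    food = min(pop, key=objective)[:]            # best solution found so far
    food_val = objective(food)
    for t in range(1, n_iter + 1):
        # c1 decays over iterations: exploration first, exploitation later.
        c1 = 2.0 * math.exp(-(4.0 * t / n_iter) ** 2)
        for i, salp in enumerate(pop):
            for j, (lo, hi) in enumerate(bounds):
                if i == 0:  # the leader moves around the food source
                    step = c1 * ((hi - lo) * rng.random() + lo)
                    salp[j] = food[j] + step if rng.random() < 0.5 else food[j] - step
                else:       # followers track the salp in front of them
                    salp[j] = 0.5 * (salp[j] + pop[i - 1][j])
                salp[j] = min(max(salp[j], lo), hi)  # clamp to the bounds
            val = objective(salp)
            if val < food_val:
                food, food_val = salp[:], val
    return food, food_val

# Hypothetical calibration target: recover microparameters reproducing a
# measured UCS of 60 MPa and Young's modulus of 20 GPa. `mock_simulator`
# stands in for a PFC run; a real study would call the DEM model here.
def mock_simulator(params):
    stiffness, strength = params
    return 25.0 * strength, 0.8 * stiffness     # (UCS in MPa, E in GPa)

def misfit(params):
    ucs, e = mock_simulator(params)
    return (ucs - 60.0) ** 2 + (e - 20.0) ** 2

best, err = salp_swarm_optimize(misfit, bounds=[(1.0, 100.0), (0.1, 10.0)])
print(best, err)
```

Because the followers simply average toward their predecessor, the swarm collapses onto the leader as `c1` decays, which is why the method needs so few iterations on smooth misfit surfaces.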
With the advancement of artificial intelligence, traffic forecasting is gaining more and more interest for optimizing route planning and enhancing service quality. Traffic volume is an influential parameter for planning and operating traffic systems. This study proposed an improved ensemble-based deep learning method to solve traffic volume prediction problems. A set of optimal hyperparameters is also applied to the suggested approach to improve the performance of the learning model. The fusion of these methodologies aims to harness ensemble empirical mode decomposition's capacity to discern complex traffic patterns and long short-term memory's proficiency in learning temporal dependencies. First, a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition method. The second aspect involves predicting traffic volume using the long short-term memory network. Then, the study employs a trial-and-error approach to select a set of optimal hyperparameters, including the lookback window, the number of neurons in the hidden layers, and the gradient descent algorithm. Finally, the fusion of the obtained results leads to a final traffic volume prediction. The experimental results show that the proposed method outperforms other benchmarks regarding various evaluation measures, including mean absolute error, root mean squared error, mean absolute percentage error, and R-squared. The achieved R-squared value reaches an impressive 98%, while the other evaluation indices also surpass those of the competing models. These findings highlight the accuracy of traffic pattern prediction and offer promising prospects for enhancing transportation management systems and urban infrastructure planning.
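The decompose-predict-fuse pipeline described above can be sketched with toy stand-ins. A minimal sketch, under loud assumptions: a moving-average split replaces EEMD (the real method extracts several intrinsic mode functions), a last-`lookback`-mean forecast replaces the trained LSTM, and the traffic volumes are invented. The structural point survives: the components sum back to the series, each simpler component is predicted separately, and the fused forecast is the sum of the per-component forecasts.

```python
def decompose(series, window=5):
    """Split a series into a smooth trend and a residual component.

    Stand-in for EEMD: the components are simpler than the raw series
    and sum back to it exactly, which is the property the fusion exploits.
    """
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def forecast_component(component, lookback=3):
    """Toy one-step forecast: the mean of the last `lookback` points.

    In the paper this role is played by an LSTM trained per component;
    `lookback` corresponds to the lookback-window hyperparameter.
    """
    return sum(component[-lookback:]) / lookback

traffic = [120, 135, 150, 142, 160, 175, 168, 180, 195, 188]  # hypothetical volumes
trend, residual = decompose(traffic)
# Sanity check: the decomposition is lossless.
assert all(abs(x - (t + r)) < 1e-9 for x, t, r in zip(traffic, trend, residual))
# Fused forecast = sum of the per-component forecasts.
prediction = forecast_component(trend) + forecast_component(residual)
print(round(prediction, 1))  # prints 187.7
```

With a nonlinear learner per component (as in the paper), the fused forecast is no longer equivalent to forecasting the raw series, which is where the decomposition earns its keep.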
Large Bayesian vector autoregressions with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data-driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on automatic differentiation, which is an efficient way to compute derivatives. Using a large US data set, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid-search approach, and is applicable in high-dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data-rich environment.
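The core idea, derivatives computed automatically rather than by grid evaluation, can be illustrated with a minimal forward-mode autodiff built from dual numbers. This is a sketch, not the paper's method: the one-dimensional "marginal likelihood" below is a hypothetical concave function of a single shrinkage hyperparameter, and a real application would use a full autodiff framework on the actual marginal likelihood.

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers.

    Carrying (value, derivative) pairs through the computation yields
    exact derivatives at machine precision, which is what lets gradient
    ascent replace a grid search over hyperparameter values.
    """
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x: seed the dual part with 1 and read it back."""
    return f(Dual(x, 1.0)).dot

# Hypothetical concave log marginal likelihood of one shrinkage hyperparameter.
def log_ml(lam):
    return -1.0 * (lam - 0.2) * (lam - 0.2)

# Gradient ascent replaces the conventional grid search.
lam = 1.0
for _ in range(200):
    lam += 0.1 * grad(log_ml, lam)
print(round(lam, 4))  # prints 0.2
```

Each gradient step here costs one function evaluation, whereas a grid fine enough to locate the optimum to the same precision would need orders of magnitude more, which is the efficiency argument scaled up to high-dimensional hyperparameter spaces.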
ISBN: (print) 9798350328356; 9798350328349
Accurate prediction of industrial load is a crucial step in the development of smart grids. Industrial load forecasting is fundamentally a time series forecasting problem. Traditional time series forecasting models struggle to accurately predict complex industrial load changes because of their nonlinearity, time-series nature, and non-stationarity. Neural network models, with their robust self-learning capabilities, can effectively process industrial load data. However, these models are prone to overfitting and to uncertainty arising from manual, experience-based parameter adjustment during training. Moreover, the inability to handle bidirectional data propagation causes conventional neural network models to lose essential load characteristics and interrelated data information. In this paper, we propose an SSA-Dropout-Bi-LSTM prediction model based on a sliding window. First, the Bi-directional Long Short-Term Memory (Bi-LSTM) neural network is employed to facilitate bidirectional information transfer. The dropout technique is used to reduce the model's degree of overfitting. The sparrow search algorithm (SSA) is then used to search for the optimal hyperparameters of the Dropout-Bi-LSTM model. The uncertainty of the model's parameter search is reduced by dynamically adjusting parameters through the machine learning algorithm, thereby enhancing the neural network model's generalization ability. We conducted load forecasting for six industrial users in Zhejiang province. The proposed model's Mean Absolute Percentage Error (MAPE) on the test set averages 3.75%, an improvement over other LSTM combinations (4.17%-5.37%) and a significant enhancement over RNN (7.22%) and GRU (5.94%). The mean coefficient of determination (R2) of the proposed model is 94.34%, considerably higher than RNN (87.58%) and GRU (90.28%). In comparison, the proposed model demonstrates higher prediction accuracy and better model fitting.
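The sliding-window preprocessing the model is "based on" is a standard construction worth making concrete. A minimal sketch: the load values are hypothetical, and the window length corresponds to one of the hyperparameters the SSA would tune before the windows are fed to the Bi-LSTM.

```python
def sliding_windows(series, lookback, horizon=1):
    """Turn a load series into supervised (window, target) training pairs.

    `lookback` is the window length fed to the network; `horizon` is how
    far ahead the target lies (1 = one-step-ahead forecasting).
    """
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])            # input window
        y.append(series[i + lookback + horizon - 1])  # value to predict
    return X, y

loads = [3.1, 3.4, 3.0, 3.6, 3.9, 3.7, 4.0]  # hypothetical industrial loads (MW)
X, y = sliding_windows(loads, lookback=3)
print(X[0], y[0])  # prints [3.1, 3.4, 3.0] 3.6
```

Because a hyperparameter search such as SSA changes `lookback` between candidate models, the windowing has to be rebuilt per candidate; the number of training pairs shrinks as the window grows, which is one practical cost of large lookback values.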
Large Bayesian VARs are now widely used in empirical macroeconomics. One popular shrinkage prior in this setting is the natural conjugate prior as it facilitates posterior simulation and leads to a range of useful analytical results. This is, however, at the expense of modeling flexibility, as it rules out cross-variable shrinkage, that is, shrinking coefficients on lags of other variables more aggressively than those on own lags. We develop a prior that has the best of both worlds: it can accommodate cross-variable shrinkage, while maintaining many useful analytical results, such as a closed-form expression of the marginal likelihood. This new prior also leads to fast posterior simulation-for a BVAR with 100 variables and 4 lags, obtaining 10,000 posterior draws takes less than half a minute on a standard desktop. We demonstrate the usefulness of the new prior via a structural analysis using a 15-variable VAR with sign restrictions to identify 5 structural shocks.
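The cross-variable shrinkage at issue can be made concrete with a Minnesota-style prior-variance rule. This is an illustrative sketch, not the paper's prior: the functional form and the values of `lam1`, `lam2` are assumptions, chosen only to show the asymmetry between own-lag and cross-variable coefficients that the plain natural conjugate prior cannot express.

```python
def prior_variance(i, j, lag, lam1=0.2, lam2=0.5):
    """Prior variance of the coefficient on variable j at lag `lag`
    in equation i, Minnesota-style (illustrative values).

    Own lags (i == j) get variance (lam1 / lag)^2; coefficients on
    *other* variables' lags are shrunk harder by the factor lam2^2 < 1.
    This per-equation asymmetry is exactly what the natural conjugate
    prior rules out, since its Kronecker structure forces the same
    shrinkage pattern across equations.
    """
    base = (lam1 / lag) ** 2
    return base if i == j else lam2 ** 2 * base

# Own lag vs cross-variable lag, both at lag 1:
print(prior_variance(0, 0, 1), prior_variance(0, 1, 1))
```

Deeper lags are shrunk toward zero in both cases (the `1 / lag` factor), but cross-variable coefficients start from a tighter prior, encoding the belief that a variable's own history is its most informative predictor.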
The Random Forest (RF) algorithm, a decision-tree-based technique, has become a promising approach for applications addressing runoff forecasting in remote areas. This machine learning approach can overcome the limitations of scarce spatio-temporal data and physical parameters needed for process-based hydrological models. However, the influence of RF hyperparameters is still uncertain and needs to be explored. Therefore, the aim of this study is to analyze the sensitivity of RF runoff forecasting models of varying lead time to the hyperparameters of the algorithm. For this, models were trained by using (a) default and (b) extensive hyperparameter combinations through a grid-search approach that allows reaching the optimal set. Model performances were assessed based on the R-2, %Bias, and RMSE metrics. We found that: (i) the most influential hyperparameter is the number of trees in the forest; however, the combination of the tree-depth and number-of-features hyperparameters produced the highest variability (instability) in the models. (ii) Hyperparameter optimization significantly improved model performance for longer lead times (12- and 24-h). For instance, the performance of the 12-h forecasting model under default RF hyperparameters improved to R-2 = 0.41 after optimization (a gain of 0.17). However, for short lead times (4-h) there was no significant improvement (0.69 < R-2 < 0.70). (iii) For each hyperparameter there is a range of values within which model performance is not significantly affected and remains close to the optimum; thus, a compromise between hyperparameter interactions (i.e., their values) can produce similarly high model performance. The improvements after optimization can be explained from a hydrological point of view: for lead times larger than the concentration time of the catchment, the generalization ability tends to rely more on hyperparameterization than on what the models can learn from the input data. This insight can help in
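The grid-search procedure used above is simple to sketch generically. This is an illustration, not the study's code: the hyperparameter names follow scikit-learn's `RandomForestRegressor` convention as an assumption, and the scoring function is a mock stand-in for cross-validated R-2 on runoff data, shaped to mimic finding (iii) that many combinations score close to the optimum.

```python
from itertools import product

def grid_search(param_grid, score):
    """Exhaustive grid search: evaluate every hyperparameter combination
    and keep the best-scoring one (higher is better, e.g. R^2)."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Hypothetical RF grid (names follow scikit-learn's RandomForestRegressor).
grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [5, 10, None],
    "max_features": [0.33, 0.66, 1.0],
}

def mock_r2(p):
    # Flat plateau near the optimum: many combinations perform almost
    # as well as the best one, mimicking finding (iii).
    depth = 10 if p["max_depth"] is None else p["max_depth"]
    return 0.41 - 0.0001 * abs(p["n_estimators"] - 300) - 0.001 * abs(depth - 10)

best, r2 = grid_search(grid, mock_r2)
print(best["n_estimators"], round(r2, 2))  # prints 300 0.41
```

The cost is the product of the grid sizes (3 x 3 x 3 = 27 model fits here), which is why sensitivity results like (i) and (iii) matter in practice: they tell you which axes of the grid can be coarsened without losing the plateau.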