Due to the large volume of requests and the need to speed up the provision of services, production companies are migrating from a single service center to distributed centers. To support this migration, it is necessary to make intelligent decisions that benefit from the automatic design of search algorithms. Considering this, this paper addresses the distributed hybrid flow shop scheduling problem with multiprocessor tasks (DHFSP-MT), an extension of the hybrid flow shop scheduling problem with multiprocessor tasks (HFSP-MT), to minimize the maximum completion time among distributed factories. To provide effective decision support, we apply a novel framework called Conditional Markov Chain Search (CMCS) to automate the generation of heuristics, which, to the best of our knowledge, is applied for the first time to a distributed shop scheduling problem. We express the HFSP-MT as a Markov decision process (MDP) and solve it through a hybrid Q-learning/local search algorithm. Exploiting the characteristics of the problem under study, we introduce two new concepts, weight and impact, which are used to develop an initial construction algorithm and two local search methods. To balance jobs between factories at runtime, we propose a load balancing method that transfers selected jobs from source factories to destination factories. We compare the proposed CMCS with two state-of-the-art metaheuristic algorithms from the literature on publicly available benchmark instances. The computational results show that the proposed CMCS outperforms the existing algorithms on the considered DHFSP-MT. (c) 2023 Elsevier B.V. All rights reserved.
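The abstract does not give implementation details, but the core idea of Q-learning-driven heuristic selection for a distributed scheduling problem can be illustrated with a minimal, hypothetical sketch. All names and constants below are illustrative (a toy two-factory load-balancing instance, not the paper's CMCS design): the state records whether the last move improved the incumbent, and the actions are two low-level heuristics.

```python
import random

# Hypothetical sketch: tabular Q-learning that selects which low-level
# heuristic (action) to apply next, in the spirit of Q-learning-based
# heuristic generation for distributed scheduling.  The instance, the
# heuristics and the reward shaping are all toy assumptions.

def q_select_heuristic(episodes=200, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    jobs = [7, 3, 9, 2, 5, 8]            # toy processing times
    n_factories = 2

    def makespan(assign):                # maximum factory load
        loads = [0] * n_factories
        for t, f in zip(jobs, assign):
            loads[f] += t
        return max(loads)

    def move_job(assign):                # heuristic 0: reassign one job
        a = assign[:]
        i = rng.randrange(len(a))
        a[i] = 1 - a[i]
        return a

    def restart(assign):                 # heuristic 1: random restart
        return [rng.randrange(n_factories) for _ in jobs]

    actions = [move_job, restart]
    q = {(s, a): 0.0 for s in (0, 1) for a in range(len(actions))}
    assign = [0] * len(jobs)
    best = makespan(assign)
    state = 0                            # 1 iff the last move improved best
    for _ in range(episodes):
        a = (rng.randrange(len(actions)) if rng.random() < eps
             else max(range(len(actions)), key=lambda k: q[(state, k)]))
        cand = actions[a](assign)
        new_cost = makespan(cand)
        reward = 1.0 if new_cost < best else -0.1
        next_state = 1 if new_cost < best else 0
        q[(state, a)] += alpha * (
            reward
            + gamma * max(q[(next_state, k)] for k in range(len(actions)))
            - q[(state, a)])
        if new_cost <= makespan(assign):  # accept non-worsening moves
            assign = cand
        best = min(best, new_cost)
        state = next_state
    return best
```

For this toy instance the total work is 34, so the makespan can never drop below 17; the learned policy quickly favors whichever heuristic has recently produced improvements.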
Accurately predicting volatility has always been a focus of government decision-making departments, financial regulators and academia. It is therefore crucial to precisely predict the realized volatility (RV) of stock price indices. In this paper, we take the RV sequences of the Shanghai Stock Exchange Composite Index (SSEC), the Standard & Poor's 500 Index (SPX) and the Financial Times Stock Exchange Index (FTSE) as the research objects, and propose a predictive model based on optimized variational mode decomposition (VMD); deep learning models including the deep belief network (DBN), long short-term memory network (LSTM) and gated recurrent unit (GRU); and the reinforcement learning Q-learning algorithm. First, the original RV sequence is decomposed using VMD, whose parameters are optimized by the grey wolf optimizer (GWO), to obtain the intrinsic mode functions (IMFs). Then, DBN, LSTM and GRU are used to predict each IMF simultaneously. Finally, the optimal weights of the above three models are determined by the Q-learning algorithm to construct an integrated model, and the final results are obtained by accumulating the predicted values of each IMF. The predictive performance of the model is evaluated by four loss functions, namely the mean absolute error (MAE), mean squared error (MSE), heterogeneous mean absolute error (HMAE) and heterogeneous mean squared error (HMSE), as well as the modified Diebold and Mariano (MDM) test. The experimental results show that the constructed GVMD-q-DBN-LSTM-GRU method performs better than the comparison models in both emerging and developed markets.
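The final step, using Q-learning to determine the blending weights of three predictors, can be sketched in a minimal, hypothetical form. The candidate weight grid, the stateless Q-update and the toy forecast series below are assumptions for illustration, not the paper's exact design; the three prediction lists stand in for DBN, LSTM and GRU outputs.

```python
import random

# Hypothetical sketch: epsilon-greedy Q-learning over a discrete set of
# weight vectors (the "actions"); the reward is the negative MSE of the
# blended forecast, so the greedy action converges to the best blend.

def learn_blend_weights(y_true, preds, steps=500, alpha=0.2, eps=0.3, seed=1):
    rng = random.Random(seed)
    # candidate weight vectors on a coarse simplex grid (each sums to 1)
    candidates = [(a / 4, b / 4, (4 - a - b) / 4)
                  for a in range(5) for b in range(5 - a)]
    q = [0.0] * len(candidates)
    for _ in range(steps):
        a = (rng.randrange(len(candidates)) if rng.random() < eps
             else max(range(len(candidates)), key=q.__getitem__))
        w = candidates[a]
        blend_err = [yt - sum(wi * p[i] for wi, p in zip(w, preds))
                     for i, yt in enumerate(y_true)]
        mse = sum(e * e for e in blend_err) / len(y_true)
        q[a] += alpha * (-mse - q[a])    # stateless Q-update, reward = -MSE
    return candidates[max(range(len(candidates)), key=q.__getitem__)]
```

With one unbiased model and two biased ones, the learned weights collapse onto the unbiased model, e.g. `learn_blend_weights([1, 2, 3, 4], [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]])` returns `(1.0, 0.0, 0.0)`.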
To overcome the shortcomings of the Arithmetic Optimization Algorithm (AOA) in solution accuracy and convergence speed, this paper proposes an improved approach based on reinforcement-learning Q-learning and a Random Elite Pool strategy (qL-REP-AOA). The algorithm constructs a state space based on the iteration process and designs a nonlinear reward function with stage adaptability. With this design, the algorithm can dynamically select the optimal search strategy according to the characteristics of each stage of the optimization process. Additionally, the Random Elite Pool strategy is introduced, which enhances population diversity and search efficiency through the collaborative effect of multiple search operators. To validate the effectiveness of the proposed algorithm, experiments are conducted on 27 classical benchmark functions, the CEC2020 test set, and real-world engineering problems. The experimental results show that qL-REP-AOA outperforms other optimization algorithms in both accuracy and convergence speed, demonstrating its potential for solving complex optimization problems.
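The combination of an iteration-based state space, Q-learning operator selection, and an elite pool can be illustrated with a minimal, hypothetical sketch on the sphere benchmark. The stage discretization, the two AOA-like operators, the reward values and the pool size below are all illustrative assumptions, not the paper's exact qL-REP-AOA design.

```python
import random

# Hypothetical sketch: tabular Q-learning maps the search stage
# (early / middle / late, derived from the iteration counter) to one of
# two AOA-style operators, while a small elite pool supplies guides.

def ql_aoa_sphere(dim=5, iters=300, alpha=0.1, gamma=0.9, eps=0.2, seed=2):
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)            # sphere benchmark
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(10)]
    elite_pool = sorted(pop, key=f)[:3]            # random-elite-pool stand-in
    q = {(s, a): 0.0 for s in range(3) for a in range(2)}
    best = min(pop, key=f)

    def explore(x, e):                             # large, diversifying step
        return [xi + rng.uniform(-1, 1) * (ei - xi) * 2 for xi, ei in zip(x, e)]

    def exploit(x, e):                             # small step toward an elite
        return [xi + rng.uniform(-0.1, 0.1) + 0.5 * (ei - xi)
                for xi, ei in zip(x, e)]

    ops = [explore, exploit]
    for t in range(iters):
        state = min(2, 3 * t // iters)             # stage: early / middle / late
        a = (rng.randrange(2) if rng.random() < eps
             else max(range(2), key=lambda k: q[(state, k)]))
        i = rng.randrange(len(pop))
        guide = rng.choice(elite_pool)
        cand = ops[a](pop[i], guide)
        improved = f(cand) < f(pop[i])
        reward = 1.0 if improved else -0.1
        nxt = min(2, 3 * (t + 1) // iters)
        q[(state, a)] += alpha * (
            reward + gamma * max(q[(nxt, k)] for k in range(2)) - q[(state, a)])
        if improved:
            pop[i] = cand
            elite_pool = sorted(elite_pool + [cand], key=f)[:3]
        if f(pop[i]) < f(best):
            best = pop[i]
    return f(best)
```

In this sketch the learned Q-table tends to favor the diversifying operator early and the elite-guided step late, mirroring the stage-adaptive selection the abstract describes.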