A recently developed algorithm inspired by natural processes, known as the Artificial Gorilla Troops Optimizer (GTO), boasts a straightforward structure, unique stabilizing features, and notably high effectiveness. Its primary objective is to efficiently find solutions for a wide array of challenges, whether they involve constraints or not. The GTO takes its inspiration from the behavior of Gorilla Troops in the natural world. To emulate the impact of gorillas at each stage of the search process, the GTO employs a flexible weighting mechanism rooted in its concept. Its exceptional qualities, including its independence from derivatives, lack of parameters, user-friendliness, adaptability, and simplicity, have resulted in its rapid adoption for addressing various optimization challenges. This review is dedicated to the examination and discussion of the foundational research that forms the basis of the GTO. It delves into the evolution of this algorithm, drawing insights from 112 research studies that highlight its effectiveness. Additionally, it explores proposed enhancements to the GTO's behavior, with a specific focus on aligning the geometry of the search area with real-world optimization problems. The review also introduces the GTO solver, providing details about its identification and organization, and demonstrates its application in various optimization scenarios. Furthermore, it provides a critical assessment of the convergence behavior while addressing the primary limitation of the GTO. In conclusion, this review summarizes the key findings of the study and suggests potential avenues for future advancements and adaptations related to the GTO.
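The abstract does not reproduce the GTO update equations, so the sketch below is an illustrative toy only: a generic population-based optimizer with the one feature the review emphasizes, a weighting term that shifts the troop from exploration toward exploitation as the search progresses. The objective, update rule, and all parameter choices here are hypothetical simplifications, not the published GTO.

```python
import random

def sphere(x):
    """Toy objective: minimize the sum of squares (optimum at the origin)."""
    return sum(v * v for v in x)

def population_optimizer(obj, dim=5, pop_size=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Illustrative population-based loop -- NOT the published GTO equations.

    A weight w decays from 1 to 0: early iterations favour random exploration,
    later ones pull each candidate toward the best solution found so far.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=obj)
    initial = obj(best)  # best objective value in the initial population
    for t in range(1, iters + 1):
        w = 1.0 - t / iters  # exploration weight: high early, low late
        for i, x in enumerate(pop):
            if rng.random() < w:
                cand = [rng.uniform(lo, hi) for _ in range(dim)]  # explore
            else:
                step = rng.random()
                cand = [xi + step * (bi - xi) for xi, bi in zip(x, best)]  # exploit
            if obj(cand) < obj(x):  # greedy replacement keeps the better point
                pop[i] = cand
        best = min(pop, key=obj)
    return best, initial

best, initial = population_optimizer(sphere)
```

Because replacements are greedy, the best objective value can only improve on the initial population; the decaying weight is a crude stand-in for the "flexible weighting mechanism" the review attributes to the GTO.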
The geomechanical properties of rock, including shear strength (SS) and uniaxial compressive strength (UCS), are very important parameters in designing rock structures. To improve the accuracy of SS and UCS prediction, this study presented an evolving support vector regression (SVR) using Grey Wolf Optimization (GWO). To examine the feasibility and applicability of the SVR-GWO model, the differential evolution (DE) and artificial bee colony (ABC) algorithms were also used. In other words, the SVR hyperparameters were tuned using the GWO, DE, and ABC algorithms. To implement the proposed models, a comprehensive open-source database was used in this study. Finally, comparative experiments using metrics such as root mean square error (RMSE) were conducted to show the superiority of the proposed models. The SVR-GWO model predicted the SS and UCS with RMSE of 0.460 and 3.208, respectively, while the SVR-DE model predicted the SS and UCS with RMSE of 0.542 and 5.4, respectively. Furthermore, the SVR-ABC model predicted the SS and UCS with RMSE of 0.855 and 5.033, respectively. These results clearly demonstrated the applicability and usability of the proposed SVR-GWO model in the prediction of both SS and UCS parameters. Accordingly, the SVR-GWO model can also be applied to various complex systems, especially in geotechnical and mining fields.
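GWO, DE, and ABC are full metaheuristics; as a minimal stand-in for the tuning loop they all perform, the sketch below uses plain random search to pick a hyperparameter minimizing validation RMSE. Everything here is assumed for illustration: a toy k-NN regressor on synthetic sine data replaces the paper's SVR and rock-mechanics database, and the hyperparameter k replaces the SVR's C/epsilon/gamma.

```python
import math
import random

def rmse(y_true, y_pred):
    """Root mean square error, the comparison metric the paper reports."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def knn_predict(train, k, x):
    """Toy 1-D k-nearest-neighbour regression: average the k closest targets."""
    neigh = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neigh) / k

rng = random.Random(42)
# Synthetic stand-in data (the paper uses an open-source rock-mechanics database)
data = [(0.1 * i, math.sin(0.1 * i) + rng.gauss(0, 0.1)) for i in range(100)]
rng.shuffle(data)
train, valid = data[:70], data[70:]

# Metaheuristic-style tuning loop: sample a hyperparameter, score it by
# validation RMSE, keep the best. GWO/DE/ABC would explore this space
# adaptively rather than uniformly at random.
best_k, best_err = None, float("inf")
for _ in range(20):
    k = rng.randint(1, 15)
    err = rmse([y for _, y in valid],
               [knn_predict(train, k, x) for x, _ in valid])
    if err < best_err:
        best_k, best_err = k, err
```

The same evaluate-and-keep-the-best skeleton applies when the model is an SVR and the search is driven by grey wolf position updates instead of uniform sampling.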
Researchers recently extended Distributed Constraint Optimization Problems (DCOPs) to Communication-Aware DCOPs so that they are applicable in scenarios in which messages can be arbitrarily delayed. Distributed asynch...
The Teaching-Learning-Based Optimization (TLBO) algorithm is a nature-inspired optimization technique that can be used and adapted to solve both constrained and unconstrained optimization problems. A bri...
Based on the characteristics of outdoor media and the application of digital technology, this study proposes an outdoor advertising resource allocation optimization model and transforms it into a constrained integer o...
Expensive multi-objective optimization problems often allow only a limited number of function evaluations, which makes traditional evolutionary algorithms, with their large sample-size requirements, difficult to apply. Multi-objective efficie...
We study regret minimization in online episodic linear Markov Decision Processes, and propose a policy optimization algorithm that is computationally efficient and obtains rate-optimal Õ(√K) regret, where K denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of K) of convergence in the stochastic setting with bandit feedback using a policy-optimization-based approach, and the first to establish the optimal rate in the adversarial setup with full-information feedback, for which no algorithm with an optimal rate guarantee was previously known. Copyright 2024 by the author(s)
Stochastic multiobjective optimization has attracted widespread attention in real-world scenarios. However, current work mainly focuses on gradient-based algorithms and there is a lack of effective methods for ...
The integration of vast amounts of renewables into the power generation mix is paramount for the green transition. These intermittent resources bring the promise of a cleaner and less expensive generation portfol...
For obtaining optimal first-order convergence guarantees for stochastic optimization, it is necessary to use a recurrent data sampling algorithm that samples every data point with sufficient frequency. Most commonly used data sampling algorithms (e.g., i.i.d., MCMC, random reshuffling) are indeed recurrent under mild assumptions. In this work, we show that for a particular class of stochastic optimization algorithms, we do not need any further property (e.g., independence, exponential mixing, and reshuffling) beyond recurrence in data sampling to guarantee the optimal rate of first-order convergence. Namely, using regularized versions of Minimization by Incremental Surrogate Optimization (MISO), we show that for non-convex and possibly non-smooth objective functions with constraints, the expected optimality gap converges at an optimal rate O(n^(-1/2)) under general recurrent sampling schemes. Furthermore, the implied constant depends explicitly on the 'speed of recurrence', measured by the expected amount of time to visit a farthest data point, either averaged ('target time') or supremized ('hitting time') over the initial locations. We discuss applications of our general framework to decentralized optimization and distributed non-negative matrix factorization. Copyright 2024 by the author(s)
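As a minimal sketch of the MISO idea (not the regularized variants this paper actually analyzes), take component losses f_i(θ) = ½(θ − a_i)². Each such loss is its own tight quadratic surrogate, anchored at a_i when data point i is visited, and the average surrogate is minimized in closed form after every update. Random reshuffling plays the role of a recurrent sampling scheme: every point is revisited each epoch. The loss family and all constants are assumptions chosen to keep the example closed-form.

```python
import random

def miso(a, epochs=50, seed=0):
    """Toy MISO on f_i(theta) = 0.5 * (theta - a_i)^2.

    anchors[i] stores the minimizer of data point i's current surrogate;
    minimizing the average of the quadratic surrogates gives their mean.
    """
    rng = random.Random(seed)
    n = len(a)
    anchors = [0.0] * n  # per-data-point surrogate minimizers
    theta = 0.0
    order = list(range(n))
    for _ in range(epochs):
        rng.shuffle(order)  # recurrent sampling via random reshuffling
        for i in order:
            anchors[i] = a[i]          # refresh the surrogate for point i
            theta = sum(anchors) / n   # minimize the average surrogate
    return theta

a = [1.0, 2.0, 3.0, 10.0]
theta = miso(a)  # converges to the mean of a, i.e. 4.0
```

For these quadratic losses the scheme reaches the exact minimizer of the average objective after one full pass, since every anchor then equals its a_i; the paper's contribution is bounding how fast this happens for general non-convex losses in terms of the sampler's target or hitting time.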