Evolutionary algorithms have been extensively utilized in practical applications. However, manually designed population updating formulas are inherently prone to the subjective influence of the designer. Genetic programming (GP), characterized by its tree-based solution structure, is a widely adopted technique for optimizing the structure of mathematical models tailored to real-world problems. This paper introduces a GP-based framework (GP-EAs) for the autonomous generation of update formulas, aiming to reduce human intervention. Several modifications to tree-based GP have been introduced, encompassing adjustments to its initialization process and to fundamental update operations such as crossover and mutation within the framework. We design suitable function sets and terminal sets tailored to the selected evolutionary algorithm, and ultimately derive an improved update formula. The Cat Swarm Optimization Algorithm (CSO) is chosen as a case study, and GP-EAs is employed to regenerate the speed update formulas of CSO. To validate the feasibility of GP-EAs, the comprehensive performance of the enhanced algorithm (GP-CSO) was evaluated on the CEC2017 benchmark suite. Subsequently, GP-CSO is applied to deduce suitable embedding factors, thereby improving the robustness of the digital watermarking scheme. The experimental results indicate that the update formulas generated through training with GP-EAs possess excellent performance scalability and practical application proficiency.
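The core idea of the abstract above — evolving a population-update formula as an expression tree — can be sketched minimally as follows. This is an illustrative toy, not the paper's configuration: the function set, terminal set, target update rule (v' = v + r*(best - x)), and the mutation-only evolutionary loop are all assumptions made for brevity (the paper also modifies initialization and crossover).

```python
import random
import operator

# Minimal tree-based GP sketch: evolve an expression over velocity-update
# terminals so that it approximates a hand-picked target update rule.
FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMS = ['v', 'x', 'best', 'r']  # velocity, position, best-known position, random factor

def random_tree(depth=3):
    """Grow a random expression tree: a terminal string or (op, left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    return FUNCS[op](evaluate(left, env), evaluate(right, env))

def mutate(tree, depth=2):
    """Subtree mutation: replace a randomly chosen node with a fresh subtree."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def fitness(tree, cases):
    """Total absolute error against the target update v' = v + r*(best - x)."""
    err = 0.0
    for env in cases:
        target = env['v'] + env['r'] * (env['best'] - env['x'])
        err += abs(evaluate(tree, env) - target)
    return err

def evolve(generations=60, pop_size=40, seed=1):
    random.seed(seed)
    cases = [{'v': random.uniform(-1, 1), 'x': random.uniform(-1, 1),
              'best': random.uniform(-1, 1), 'r': random.random()}
             for _ in range(20)]
    pop = [random_tree() for _ in range(pop_size)]
    init_err = min(fitness(t, cases) for t in pop)
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        survivors = pop[:pop_size // 2]  # elitist truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    pop.sort(key=lambda t: fitness(t, cases))
    return pop[0], fitness(pop[0], cases), init_err

best_tree, best_err, init_err = evolve()
```

Because the best individual always survives truncation selection, the best error is non-increasing over generations; the evolved tree is the candidate "update formula" that a framework like GP-EAs would then hand back to the host optimizer.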
Internet of Things (IoT) technology has quickly transformed traditional management and engagement techniques in several sectors. This work explores the trends and applications of the Internet of Things in industries, incl...
Modernization and intense industrialization have led to a substantial improvement in people's quality of life. However, the pursuit of an improved quality of life has resulted in environmental contamination....
Authors: Hossain, M. Shamim; Shorfuzzaman, Mohammad
Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 12372, Saudi Arabia
Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
Diabetic retinopathy is the most common and severe eye complication of diabetes, and it can cause vision loss or even blindness due to retina damage. Automatic and faster detection of various DR stages is crucial and ...
Deepfake detection aims to mitigate the threat of manipulated content by identifying and exposing forgeries. However, previous methods tend to perform poorly when confronted with cross-dataset scenarios. To ...
5G has boosted the possibility of ultra-high-speed, low-latency, and reliable wireless communication systems. With 5G networks, if resources are not managed efficiently, then the full potential of su...
Background: Epilepsy is a neurological disorder that leads to seizures. This occurs due to excessive electrical discharge by the brain cells. An effective seizure prediction model can aid in improving the lifestyle of...
Ant-Miner, a rule-based classifier, has been extensively utilized for classification tasks. However, it features numerous controlling parameters that significantly impact its performance. The standard Ant-Miner (AM) o...
Dehazing is a challenging task in computer vision that seeks to improve the clarity and quality of images captured under cloudy, foggy, and rainy conditions. The Generative Adversarial Network (GAN) has been a v...
Analyzing incomplete data is one of the prime concerns in data analysis. Discarding the missing records or values might result in inaccurate analysis outcomes or loss of helpful information, especially when the size of the data is small. A preferable alternative is to substitute the missing values using imputation such that the substituted values are very close to the actual missing values; this is a challenging task. In spite of the existence of many imputation algorithms, there is no universal imputation algorithm that can yield the best values for imputing all types of datasets. This is mainly because of the dependence of the imputation algorithm on the inherent properties of the data. These properties include the type of data distribution, data size, dimensionality, presence of outliers, data dependency among the attributes, and so on. In the literature, there exists no straightforward method for determining a suitable imputation algorithm based on the data characteristics. The existing practice is to conduct exhaustive experimentation using the available imputation techniques with every dataset, and this requires a lot of time and effort. Moreover, the current approaches for checking the suitability of imputations cannot be applied when the ground truth data is not available. In this paper, we propose a new method for the systematic selection of a suitable imputation algorithm based on the inherent properties of the dataset, which eliminates the need for exhaustive experimentation. Our method determines the imputation technique which consistently gives lower errors while imputing datasets with specific properties. Also, our method is particularly useful when the real-world data do not have the ground truth for missing data to check the imputation performance and suitability. Once the suitability of a data imputation (DI) technique is established based on the data properties, this selection will remain valid for another dataset with similar properties. Thus, our method can save time and effort.
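The exhaustive-experimentation baseline that this abstract argues against can be made concrete with a small sketch: mask known entries, impute them with each candidate technique, and pick the one with the lower reconstruction error. This is an illustrative stand-in, not the paper's property-based selection method, and the two candidate imputers (column mean vs. column median) are assumptions chosen for simplicity.

```python
import math
import random

# Baseline imputer selection by masking observed values and measuring error.
# Requires ground truth (the observed entries), which is exactly the limitation
# the abstract's property-based method is designed to avoid.

def column_stats(data, col, skip_rows):
    """Mean and median of a column, ignoring the rows in skip_rows."""
    vals = sorted(row[col] for i, row in enumerate(data) if i not in skip_rows)
    mean = sum(vals) / len(vals)
    mid = len(vals) // 2
    median = vals[mid] if len(vals) % 2 else (vals[mid - 1] + vals[mid]) / 2
    return mean, median

def simulate_masking_error(data, imputer, trials=30, seed=0):
    """RMSE of an imputer over randomly masked known entries."""
    rng = random.Random(seed)
    n_rows, n_cols = len(data), len(data[0])
    sq_err = 0.0
    for _ in range(trials):
        r, c = rng.randrange(n_rows), rng.randrange(n_cols)
        truth = data[r][c]                      # hide a known value
        mean, median = column_stats(data, c, {r})
        guess = mean if imputer == 'mean' else median
        sq_err += (guess - truth) ** 2
    return math.sqrt(sq_err / trials)

def select_imputer(data):
    """Return the candidate with the lowest masking RMSE, plus all errors."""
    errors = {name: simulate_masking_error(data, name)
              for name in ('mean', 'median')}
    return min(errors, key=errors.get), errors
```

On a column with a heavy outlier (nine values of 1.0 and one of 100.0), the median imputer wins, illustrating the abstract's point that data properties such as the presence of outliers determine which imputer is suitable.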