This paper presents the Infection-Susceptible Artificial Intelligence optimization Model (SIMO, the susceptible-infected-removed model optimizer), an innovative learning-based heuristic inspired by biological systems and Deep Learning (DL) techniques. Inspired by the epidemiological compartment (SIR) model, the SIMO algorithm estimates, at any point in time, the share of the population that is susceptible to infection, actively infected, or recovering. SIMO integrates an AI component into its initialisation and parameter-tuning components to improve the search process, so that it can exhibit intelligent and autonomous behaviour. This integration allows initial solutions to be generated by neural models, which guides the algorithm towards efficient, effective and robust search results. The approach improves performance by producing high-quality solutions, allowing the algorithm to converge faster, increasing its robustness and reducing its computational requirements. Two sets of benchmark functions from the 2017 IEEE Congress on Evolutionary Computation (CEC 2017) are used to validate the effectiveness of the SIMO algorithm, and the experimental results are compared with recent algorithms. Detailed comparisons show that SIMO outperforms many similar models, delivering high-quality solutions with fewer control parameters. Furthermore, SIMO is applied to real-life problems. The results clearly show that integrating a learning process into SIMO provides superior accuracy and computational efficiency compared to other optimization approaches in the existing literature.
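As a point of reference for the epidemiological partition the optimizer borrows from, the sketch below implements a plain discrete-time susceptible-infected-removed (SIR) update in Python. It is not the authors' SIMO code; the rates beta and gamma, the step size, and the population sizes are illustrative assumptions.

# Minimal discrete-time SIR update, shown only to illustrate the
# epidemiological partition that inspires SIMO; beta, gamma, and dt
# are illustrative values, not parameters from the paper.
def sir_step(S, I, R, beta=0.3, gamma=0.1, dt=1.0):
    N = S + I + R
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

S, I, R = 990.0, 10.0, 0.0          # synthetic population split
for _ in range(50):
    S, I, R = sir_step(S, I, R)
print(round(S), round(I), round(R))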
DNA microarray datasets, also known as "omics" data, are important for the diagnosis of numerous diseases, including cancers and tumors. In the analysis of these data, feature selection techniques and classification algorithms are the workhorses for choosing candidate genes that serve as cancer biomarkers. However, microarray datasets present a challenge: they contain far more features than samples, which degrades the performance of the algorithms used in the analysis. Extracting precise information therefore requires a method that is both robust and performant. This paper emphasizes the importance of accurate and stable gene selection for knowledge discovery from high-dimensional data. A novel hybrid framework is proposed, comprising three distinct stages: Clustering, Parallel Filtering, and Hybrid-Parallel optimization. In each stage, a combination of techniques and algorithms is used to improve the results in terms of stability and/or accuracy. The proposal is evaluated under different scenarios, using thirteen gene expression datasets and two classifiers: Artificial Neural Network (ANN) and Naïve Bayes (NB). Comparison with related work demonstrates the efficacy of this approach, which enhances classification accuracy and stability while reducing the number of selected genes.
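To make the filter-then-classify pattern concrete, here is a minimal single-stage sketch with scikit-learn: one mutual-information filter feeding a Naïve Bayes classifier. It does not reproduce the paper's three-stage Clustering / Parallel Filtering / Hybrid-Parallel optimization framework; the synthetic data, the choice of k=50 genes, and the 5-fold evaluation are assumptions made only for the sketch.

# Single filter-based gene-selection stage followed by Naive Bayes;
# illustrates the general filter-then-classify pattern only.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))        # 60 samples, 2000 synthetic "genes"
y = rng.integers(0, 2, size=60)        # binary class labels

pipe = make_pipeline(SelectKBest(mutual_info_classif, k=50), GaussianNB())
print(cross_val_score(pipe, X, y, cv=5).mean())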
Photosynthesis plays a pivotal role in vegetable growth. However, its intricate interplay with plant physiology and environmental factors complicates precise prediction of the photosynthetic rate (Pn). Current predictive models primarily focus on environmental influences on photosynthesis, limiting their applicability to leaves exhibiting different physiological traits. To address this challenge, we introduce a novel approach that incorporates chlorophyll fluorescence (ChlF) parameters into a model for predicting Pn across diverse leaf ontogenies. Eggplant leaves were used as experimental samples. We collected 5280 Pn measurements from leaves with different ChlF parameters under controlled changes in temperature, [CO2], and light intensity. Fo (initial fluorescence) and Fv/Fm (maximum light-energy conversion efficiency of the PSII system) were selected as key ChlF indicators using the entropy method. Fo and Fv/Fm, together with temperature, [CO2], and light intensity, serve as features, while Pn serves as the label, forming a robust modeling dataset. We then proposed a Convolutional Neural Network Regression model with Input Encoding and Genetic Algorithm optimization (CNNR-IEGA), trained on these environmental and fluorescence data, to develop the predictive model for eggplant Pn. The results indicate that the model exhibits excellent performance in predicting Pn. On unknown datasets, the root mean square error of the model is only 0.97 μmol·m⁻²·s⁻¹, with a high coefficient of determination reaching 0.99. Compared with models established by other algorithms (including multiple nonlinear regression, support vector regression, and back-propagation neural network), the proposed model demonstrates superior performance across training, testing, and validation sets. Furthermore, compared with models without ChlF parameters and those with a single ChlF parameter, the proposed model has the highest accuracy. This demonstrates the validity of using fluorescence to characterize...
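As a rough architectural sketch of a CNN regressor over the five inputs named above (Fo, Fv/Fm, temperature, [CO2], light intensity), the snippet below builds a tiny 1-D convolutional network in PyTorch. The layer sizes are arbitrary, and the input-encoding and genetic-algorithm components of CNNR-IEGA are not modeled.

# Tiny 1-D CNN regressor mapping five inputs to a single Pn value;
# dimensions and data are made up for illustration.
import torch
import torch.nn as nn

class TinyCNNRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # (batch, 1, 5) -> (batch, 16, 5)
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 5, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):              # x: (batch, 5)
        return self.net(x.unsqueeze(1)).squeeze(-1)

model = TinyCNNRegressor()
x = torch.randn(8, 5)                  # 8 synthetic samples
print(model(x).shape)                  # torch.Size([8])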
Data clustering is a fundamental technique in data mining, pivotal for applications such as statistical analysis and data compression. Traditional clustering algorithms often struggle with noisy or high-dimensional datasets, hindering their efficacy on real-world challenges. In response, this research introduces MutaSwarmClus, a novel hybrid metaheuristic algorithm that combines Mouth Brooding Fish (MBF), Ant Colony optimization (ACO), and mutation operators to enhance clustering quality. MutaSwarmClus adaptively controls the exploration and exploitation phases over the solution space, addressing local optima and shifts in the distribution of the available data. Moreover, it incorporates an Iterated Local Search (ILS) to refine solutions and avoid getting stuck in local optima. MutaSwarmClus also increases the robustness of the clustering process by introducing controlled randomness through mutation operators, allowing it to handle noisy data and outliers well. According to the contribution analysis, the combined MBF, ACO, and mutation mechanism improves the clustering solution by balancing exploration and exploitation during the search. Experimental results show that MutaSwarmClus performs strongly on various benchmarks and matches or outperforms other clustering algorithms such as K-means, ALO, Hybrid ALO, and MBF. It achieves an average error rate of only 10%, underscoring its accuracy in clustering tasks. MutaSwarmClus thus addresses existing problems in clustering large datasets in terms of scalability, efficiency and accuracy. Future work can further optimize the algorithm's model parameters and study its adaptability to dynamic conditions and very large datasets.
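The role of the mutation operator can be illustrated with a minimal (1+1)-style loop that perturbs candidate centroids and keeps a move only if it lowers the sum of squared errors. The MBF and ACO components of MutaSwarmClus are not modeled; the mutation rate, scale, and synthetic two-dimensional data are assumptions for the sketch.

# Mutation of candidate centroids inside a simple improvement loop;
# only the mutation/evaluation pattern is illustrated here.
import numpy as np

def sse(X, centroids):
    # Sum of squared distances of each point to its nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def mutate(centroids, rate=0.2, scale=0.1, rng=np.random.default_rng(1)):
    mask = rng.random(centroids.shape) < rate     # which coordinates to perturb
    return centroids + mask * rng.normal(scale=scale, size=centroids.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
best = X[rng.choice(len(X), size=3, replace=False)]   # 3 random initial centroids
for _ in range(200):                                  # accept only improving mutations
    cand = mutate(best)
    if sse(X, cand) < sse(X, best):
        best = cand
print(round(float(sse(X, best)), 2))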
To efficiently solve the time-varying convex quadratic programming (TVCQP) problem under equality constraints, an adaptive variable-parameter dynamic learning network (AVDLN) is proposed and analyzed. Unlike existing varying-parameter and fixed-parameter convergent-differential neural networks (VPCDNN and FPCDNN), the proposed AVDLN integrates the error signal into the time-varying parameter term. To do so, the TVCQP problem is first transformed into a time-varying matrix equation. Second, an adaptive time-varying design formula is constructed for the error function, and the error function is then integrated into the time-varying parameter. Furthermore, the AVDLN is designed with this adaptive time-varying design formula. Moreover, the convergence and robustness theorems of AVDLN are proved by Lyapunov stability analysis, and mathematical analysis demonstrates that AVDLN possesses a smaller upper bound on the convergence error and a faster error convergence rate than FPCDNN and VPCDNN. Finally, the validity of AVDLN is demonstrated by simulations, and the comparative results show that the proposed AVDLN has a faster convergence speed and smaller error fluctuations.
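A generic zeroing-dynamics integration for a time-varying matrix equation A(t)y(t) = b(t) conveys the flavor of such dynamic networks: the residual drives the state so that the error decays over time. The error-dependent gain below is only a loose stand-in for AVDLN's adaptive time-varying parameter, and the 2-by-2 test system, gain rule, and Euler step are illustrative assumptions.

# Zeroing-dynamics sketch for A(t) y(t) = b(t) with an error-dependent
# gain; not the AVDLN design formula from the paper.
import numpy as np

def A(t):  return np.array([[2.0 + np.sin(t), 0.3], [0.3, 2.0 + np.cos(t)]])
def b(t):  return np.array([np.sin(t), np.cos(t)])
def dA(t, h=1e-6): return (A(t + h) - A(t - h)) / (2 * h)   # numeric time derivative
def db(t, h=1e-6): return (b(t + h) - b(t - h)) / (2 * h)

y = np.zeros(2)
dt, T = 1e-3, 10.0
for k in range(int(T / dt)):
    t = k * dt
    e = A(t) @ y - b(t)                        # residual of the matrix equation
    gamma = 5.0 * (1.0 + np.linalg.norm(e))    # adaptive, error-dependent gain (illustrative)
    ydot = np.linalg.solve(A(t), db(t) - dA(t) @ y - gamma * e)
    y = y + dt * ydot                          # forward Euler step
print(np.linalg.norm(A(T) @ y - b(T)))         # residual should be small at the end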
Proton exchange membrane fuel cells (PEMFCs) have the benefits of high efficiency, fast startup, and the ability to operate at low temperatures, which can improve the efficiency of energy utilization. Accurate parameter identification enables a PEMFC model to better predict and simulate system performance under dynamic operating conditions. Based on the semi-empirical model of the PEMFC, an improved adaptive guided differential evolution (AGDE) algorithm is presented by adding a roulette wheel selection (RWS) optimization-based fitness-distance balance (RFDB) strategy and a Levy flight (LF) strategy, abbreviated LRFDB-AGDE. These enhancement strategies are integrated to more deeply optimize the mutation mechanism of the AGDE algorithm, aiming to strengthen the combined local and global search ability of LRFDB-AGDE so that it can identify the unknown parameters of the PEMFC model more efficiently and quickly. In this study, the superior parameter-identification performance of the proposed LRFDB-AGDE algorithm is validated by simulating voltage and current data from four types of PEMFCs and comparing the results with traditional intelligent algorithms such as AGDE and the whale optimization algorithm (WOA). Notably, the absolute errors of the LRFDB-AGDE algorithm in identifying the four PEMFCs are all within 5%.
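The Levy flight ingredient can be sketched with Mantegna's algorithm for generating heavy-tailed steps; the RFDB selection and the rest of the LRFDB-AGDE mutation architecture are not reproduced, and the exponent beta, the step scale, and the example parameter vector below are assumed values, not the paper's.

# Mantegna's algorithm for Levy-flight steps, the kind of heavy-tailed
# perturbation an LF strategy adds to a DE-style search.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng(0)):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a candidate parameter vector (e.g. semi-empirical coefficients)
# around the current best; values and scaling are hypothetical.
best = np.array([-0.95, 3.0e-3, 7.6e-5, -1.9e-4])
candidate = best + 0.01 * levy_step(best.size) * best
print(candidate)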
The ability to train ever-larger neural networks brings artificial intelligence to the forefront of scientific and technical discoveries. However, their exponentially increasing size creates a proportionally greater demand for energy and computational hardware. Incorporating complex physical events in networks as fixed, efficient computation modules can address this demand by decreasing the complexity of trainable layers. Here, we utilize ultrashort pulse propagation in multimode fibers, which perform large-scale nonlinear transformations, for this purpose. Training the hybrid architecture is achieved through a neural model that differentiably approximates the optical system. The training algorithm updates the neural simulator and backpropagates the error signal over this proxy to optimize layers preceding the optical one. Our experimental results achieve state-of-the-art image classification accuracies and simulation fidelity. Moreover, the framework demonstrates exceptional resistance to experimental drifts. By integrating low-energy physical systems into neural networks, this approach enables scalable, energy-efficient AI models with significantly reduced computational demands.
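The train-through-a-proxy idea can be sketched as two alternating updates: fit a neural surrogate to input/output pairs of a fixed, non-differentiable transform, then backpropagate the task loss through that surrogate to update the layer placed before it. The "physical system" below is a random nonlinear map standing in for multimode-fiber propagation, and all sizes, optimizers, and iteration counts are placeholder choices.

# Schematic of training through a differentiable surrogate of a fixed,
# non-trainable transform; not the paper's optical setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
W_phys = torch.randn(32, 32)
def physical_system(z):                      # stand-in for the optical transform
    return torch.tanh(z @ W_phys)

pre = nn.Linear(16, 32)                      # trainable layer before the "optics"
surrogate = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 32))
head = nn.Linear(32, 10)                     # trainable classifier head
opt_sur = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
opt_net = torch.optim.Adam(list(pre.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(64, 16)
y = torch.randint(0, 10, (64,))
for _ in range(5):
    # 1) update the surrogate so it imitates the physical response
    z = pre(x).detach()
    loss_sur = nn.functional.mse_loss(surrogate(z), physical_system(z))
    opt_sur.zero_grad(); loss_sur.backward(); opt_sur.step()
    # 2) backpropagate the task loss through the surrogate to train pre and head
    logits = head(surrogate(pre(x)))
    loss = nn.functional.cross_entropy(logits, y)
    opt_net.zero_grad(); loss.backward(); opt_net.step()
print(float(loss))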
We present a version of the sieve of Eratosthenes that can factor all integers <= x in O(x log log x) arithmetic operations using at most O(√x / log log x) bits of space. Among algorithms that take the optimal O(x log log x) time, this new space bound is an improvement of a factor proportional to log x log log x over the implied previous bound of O(√x log x). We also show that our algorithm performs well in practice.
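For orientation, a textbook smallest-prime-factor sieve already meets the functional goal of factoring every integer up to x, though it uses Theta(x) words of memory rather than the paper's O(√x / log log x) bits; the sketch below shows only that baseline behaviour, not the space-efficient construction.

# Smallest-prime-factor sieve: factors every integer <= x, but with
# Theta(x) words of memory (baseline only, not the paper's algorithm).
def factor_all(x):
    spf = list(range(x + 1))                 # spf[n] = smallest prime factor of n
    for p in range(2, int(x ** 0.5) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    for n in range(2, x + 1):                # read off full factorizations
        m, fac = n, []
        while m > 1:
            fac.append(spf[m])
            m //= spf[m]
        yield n, fac

for n, fac in factor_all(12):
    print(n, fac)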
Cyber threats are an ongoing problem that is hard to prevent completely. This can occur for various reasons, but the main causes are the evolving techniques of hackers and the neglect of security measures when developing software or hardware. As a result, several countermeasures need to be applied to mitigate these threats. Cyber-threat detection techniques can fulfill this role by utilizing different identification methods for various cyber threats. In this work, an intelligent cyber-threat detection system employing a swarm-based machine learning approach is proposed. The approach uses Harris Hawks Optimization (HHO) to enhance the Support Vector Machine (SVM) for improved threat detection through parameter tuning and feature weighting. Furthermore, various cyber-threat types are considered, including Fake News, IoT Intrusion, Malicious URLs, Spam Emails, and Spam Websites. The proposed HHO-SVM is compared with other approaches for detecting all these types collectively, and it outperforms all competing algorithms on most types (datasets). The proposed approach demonstrated the highest accuracy across seven datasets: FakeNews-1, FakeNews-2, FakeNews-3, IoT-ID, URL, SpamEmail-2, and SpamWebsites, achieving average accuracies of 68.251%, 68.729%, 79.049%, 95.254%, 100%, 96.681%, and 93.975%, respectively. Additionally, a thorough analysis of each cyber-threat type is conducted to understand its characteristics and detection strategies.
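The wrapper pattern behind HHO-SVM, scoring candidate (C, gamma) pairs by cross-validated accuracy and keeping the best, can be sketched as follows; a plain random search stands in for Harris Hawks Optimization, and the synthetic classification data and search ranges are assumptions, not the paper's datasets or settings.

# Wrapper-style SVM hyperparameter tuning: evaluate candidates by
# cross-validated accuracy and keep the best (random search stand-in).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def fitness(log_C, log_gamma):
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

best, best_fit = None, -1.0
for _ in range(20):                           # candidate pool
    cand = rng.uniform([-2, -4], [3, 1])      # log10(C) in [-2,3], log10(gamma) in [-4,1]
    f = fitness(*cand)
    if f > best_fit:
        best, best_fit = cand, f
print(best, round(best_fit, 3))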
Almost all hydrological models require calibration. The same model with different parameters may lead to diverse simulations of the hydrological phenomena. Hence, the choice of a calibration method may affect model performance. The present paper is the first study analyzing how the choice of the air2water model calibration procedure may influence projections of surface water temperature in lowland lakes under future climatic conditions. To address this issue, projections from 14 atmospheric circulation models, data from 22 lowland Polish lakes located in a temperate climate zone, and 12 different optimization algorithms are employed. The studied lake areas range from 1.5 km² to 115 km², and their maximum depths range from 2.5 m to 70 m. Depending on which calibration algorithm is applied, the differences in mean monthly surface water temperatures projected for future climatic conditions may exceed 1.5 °C for a small deep lake. In contrast, the differences observed for shallow and relatively large lakes, due to the optimization procedure used, were lower than 0.6 °C in each month. The largest differences in projected lake water temperatures were observed for the winter and summer months, which are especially critical for aquatic biota. The optimization algorithms producing the largest differences included both those that fit historical data well and those that do not reproduce historical data appropriately. Therefore, strong performance on historical data does not guarantee reliable projections for future conditions. We have shown that projected lake water temperatures largely depend on the calibration method used for a particular model.
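The general shape of such a calibration, choosing parameters that minimize the mismatch between simulated and observed water temperatures, is sketched below with a toy linear air-to-water relation; scipy's differential evolution stands in for both air2water and the twelve optimizers compared in the study, and the synthetic observations and parameter bounds are assumptions.

# Generic calibration loop: minimize RMSE between a toy simulated
# water-temperature series and synthetic "observations".
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
air_t = 10 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365))           # daily air temperature
obs = 0.8 * air_t + 2.0 + rng.normal(scale=0.5, size=air_t.size)   # synthetic observed water temp

def simulate(params):
    a, b = params
    return a * air_t + b

def rmse(params):
    return np.sqrt(np.mean((simulate(params) - obs) ** 2))

result = differential_evolution(rmse, bounds=[(0.0, 2.0), (-5.0, 5.0)], seed=0)
print(result.x, round(result.fun, 3))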