Data clustering is a fundamental technique in data mining, pivotal for applications such as statistical analysis and data compression. Traditional clustering algorithms often struggle with noisy or high-dimensional datasets, which limits their efficacy on real-world problems. In response, this research introduces MutaSwarmClus, a novel hybrid metaheuristic algorithm that combines Mouth Brooding Fish (MBF), Ant Colony Optimization (ACO), and mutation operators to enhance clustering quality. MutaSwarmClus adaptively balances the exploration and exploitation phases of the solution-space search, addressing local optima and shifts in the distribution of the available data. It also incorporates an Iterated Local Search (ILS) to refine solutions and avoid stagnating in local optima. By injecting controlled randomness through its mutation operators, MutaSwarmClus increases the robustness of the clustering process and handles noisy and outlier data points well. The contribution analysis shows that the combined system of MBF, ACO, and mutation operators improves clustering solutions by enabling both exploration and exploitation during the search. Experimental results show that MutaSwarmClus performs strongly on various benchmarks and matches or outperforms other clustering algorithms such as K-means, ALO, Hybrid ALO, and MBF. It achieves an average error rate of only 10%, underscoring its accuracy in clustering tasks. MutaSwarmClus thus addresses existing problems in clustering large datasets in terms of scalability, efficiency, and accuracy. Future work may further optimize the algorithm's parameters and study its adaptability to dynamic conditions and very large datasets.
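The abstract describes a population-based loop that alternates a swarm move, a mutation operator, and a local-search refinement. The sketch below illustrates that generic pattern on k-centroid candidates; it is not the authors' MutaSwarmClus code: the MBF/ACO dynamics are replaced by a simple move-toward-best step, and one k-means-style update stands in for the ILS phase.

```python
# Generic hybrid pattern: swarm attraction + mutation + local refinement.
import numpy as np

def sse(centroids, X):
    # Clustering fitness: total squared distance to the nearest centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def local_search(centroids, X):
    # One k-means-style refinement step, standing in for the ILS phase.
    labels = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
    for j in range(len(centroids)):
        pts = X[labels == j]
        if len(pts):
            centroids[j] = pts.mean(axis=0)
    return centroids

def hybrid_cluster(X, k=3, pop=20, iters=50, pm=0.2, seed=0):
    rng = np.random.default_rng(seed)
    # Each candidate solution is a set of k centroids sampled from the data.
    swarm = [X[rng.choice(len(X), k, replace=False)].copy() for _ in range(pop)]
    best = min(swarm, key=lambda c: sse(c, X)).copy()
    for _ in range(iters):
        for i, cand in enumerate(swarm):
            cand += rng.uniform(0, 1) * (best - cand)         # swarm attraction
            if rng.random() < pm:                             # mutation operator
                cand += rng.normal(0.0, 0.1, cand.shape) * X.std(0)
            swarm[i] = local_search(cand, X)                  # local refinement
            if sse(swarm[i], X) < sse(best, X):
                best = swarm[i].copy()
    return best

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0, 2, 4)])
print("best SSE:", round(sse(hybrid_cluster(X), X), 2))
```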
The ability to train ever-larger neural networks brings artificial intelligence to the forefront of scientific and technical discoveries. However, their exponentially increasing size creates a proportionally greater demand for energy and computational hardware. Incorporating complex physical events in networks as fixed, efficient computation modules can address this demand by decreasing the complexity of trainable layers. Here, we utilize ultrashort pulse propagation in multimode fibers, which perform large-scale nonlinear transformations, for this purpose. Training the hybrid architecture is achieved through a neural model that differentiably approximates the optical system. The training algorithm updates the neural simulator and backpropagates the error signal over this proxy to optimize layers preceding the optical one. Our experimental results achieve state-of-the-art image classification accuracies and simulation fidelity. Moreover, the framework demonstrates exceptional resistance to experimental drifts. By integrating low-energy physical systems into neural networks, this approach enables scalable, energy-efficient AI models with significantly reduced computational demands.
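A minimal sketch of the proxy-training loop the abstract outlines, assuming a PyTorch-style setup: a neural simulator is regressed onto the outputs of a fixed, non-differentiable "physical" transform, and the task loss is backpropagated through that simulator to update the preceding digital layer. The toy physical_system here is an arbitrary nonlinearity, not a model of a multimode fiber.

```python
# Proxy training: fit a differentiable simulator to a physical layer,
# then backpropagate the task error through the simulator.
import torch
import torch.nn as nn

def physical_system(x):
    # Stand-in for the optical transform: fixed and non-differentiable.
    with torch.no_grad():
        return torch.sin(3.0 * x) + 0.1 * x ** 2

digital = nn.Linear(16, 16)                      # trainable layer before "optics"
proxy = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 16))
head = nn.Linear(16, 2)                          # readout / classifier
opt_task = torch.optim.Adam([*digital.parameters(), *head.parameters()], lr=1e-3)
opt_proxy = torch.optim.Adam(proxy.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(32, 16)
    y = (x.sum(1) > 0).long()                    # toy labels
    z = digital(x)
    # 1) Update the proxy to match the physical output on current inputs.
    loss_proxy = ((proxy(z.detach()) - physical_system(z)) ** 2).mean()
    opt_proxy.zero_grad(); loss_proxy.backward(); opt_proxy.step()
    # 2) Backpropagate the task error through the proxy to earlier layers.
    logits = head(proxy(z))
    loss = nn.functional.cross_entropy(logits, y)
    opt_task.zero_grad(); loss.backward(); opt_task.step()
```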
We present a version of the sieve of Eratosthenes that can factor all integers <= x in O(x log log x) arithmetic operations using at most O(√x / log log x) bits of space. Among algorithms that take the optimal O(x log log x) time, this new space bound is an improvement of a factor proportional to log x log log x over the previous bound of O(√x log x). We also show our algorithm performs well in practice.
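For illustration, a plain (unsegmented) smallest-prime-factor sieve shows the factoring mechanism the abstract builds on; the paper's contribution is achieving this within O(√x / log log x) bits through a more careful space layout, which this simple O(x log x)-bit version does not attempt.

```python
# Factor every n <= x with an Eratosthenes-style smallest-prime-factor sieve.
def factor_all(x):
    spf = list(range(x + 1))                 # spf[n] = smallest prime factor of n
    for p in range(2, int(x ** 0.5) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    def factor(n):
        out = []
        while n > 1:
            out.append(spf[n])
            n //= spf[n]
        return out
    return factor

factor = factor_all(100)
print(factor(84))   # [2, 2, 3, 7]
print(factor(97))   # [97]
```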
Cyber threats are an ongoing problem that is hard to prevent completely. This can occur for various reasons, but the main causes are the evolving techniques of hackers and the neglect of security measures when developing software or hardware. As a result, several countermeasures need to be applied to mitigate these threats. Cyber-threat detection techniques can fulfill this role by utilizing different identification methods for various cyber threats. In this work, an intelligent cyber-threat detection system employing a swarm-based machine learning approach is proposed. The approach uses Harris Hawks Optimization (HHO) to enhance the Support Vector Machine (SVM) for improved threat detection through parameter tuning and feature weighting. Various cyber-threat types are considered, including Fake News, IoT Intrusion, Malicious URLs, Spam Emails, and Spam Websites. The proposed HHO-SVM has been compared to other approaches for detecting all these types collectively, and it outperforms all algorithms on most types (datasets). The proposed approach demonstrated the highest accuracy across seven datasets: FakeNews-1, FakeNews-2, FakeNews-3, IoT-ID, URL, SpamEmail-2, and SpamWebsites, achieving average accuracies of 68.251%, 68.729%, 79.049%, 95.254%, 100%, 96.681%, and 93.975%, respectively. Additionally, a thorough analysis of each cyber-threat type has been conducted to understand their characteristics and detection strategies.
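The wrapper pattern described above can be sketched as follows: each candidate vector encodes the SVM's C and gamma plus per-feature weights, and fitness is cross-validated accuracy. This is a hedged illustration, not the paper's implementation: HHO's soft/hard-besiege update equations are replaced by a simple best-guided random move, and a public scikit-learn dataset stands in for the cyber-threat corpora.

```python
# Swarm-wrapped SVM tuning: candidates = (log10 C, log10 gamma, feature weights).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
d = X.shape[1]
rng = np.random.default_rng(0)

def fitness(v):
    # v = [log10 C, log10 gamma, w_1..w_d]; score = 3-fold CV accuracy.
    C, gamma, w = 10 ** v[0], 10 ** v[1], v[2:]
    return cross_val_score(SVC(C=C, gamma=gamma), X * w, y, cv=3).mean()

lo = np.array([-2.0, -6.0] + [0.0] * d)
hi = np.array([2.0, -1.0] + [1.0] * d)
pop = rng.uniform(lo, hi, size=(8, d + 2))
fit = np.array([fitness(v) for v in pop])
best, best_fit = pop[fit.argmax()].copy(), fit.max()
for _ in range(10):
    for i in range(len(pop)):
        # Best-guided random move (simplified stand-in for the HHO updates).
        trial = (pop[i] + rng.uniform(-1, 1, d + 2) * (best - pop[i])).clip(lo, hi)
        f = fitness(trial)
        if f > fit[i]:
            pop[i], fit[i] = trial, f
            if f > best_fit:
                best, best_fit = trial.copy(), f
print("best CV accuracy: %.3f" % best_fit)
```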
Almost all hydrological models require calibration. The same model with different parameters may produce diverse simulations of hydrological phenomena, so the choice of a calibration method may affect model performance. The present paper is the first study analyzing how the choice of the air2water model calibration procedure may influence projections of surface water temperature in lowland lakes under future climatic conditions. To address this issue, projections from 14 atmospheric circulation models, data from 22 lowland Polish lakes located in a temperate climate zone, and 12 different optimization algorithms are employed. The studied lake areas range from 1.5 km² to 115 km², and their maximum depths range from 2.5 m to 70 m. Depending on which calibration algorithm is applied, the differences in mean monthly surface water temperatures projected for future climatic conditions may exceed 1.5 °C for a small, deep lake. In contrast, the differences observed for shallow and relatively large lakes due to the optimization procedure used were lower than 0.6 °C in every month. The largest differences in projected lake water temperatures were observed for the winter and summer months, which are especially critical for aquatic biota. The optimization algorithms producing the largest differences included both those that fit historical data well and those that do not reproduce historical data appropriately. Therefore, strong performance on historical data does not guarantee reliable projections for future conditions. We have shown that projected lake water temperatures largely depend on the calibration method used for a particular model.
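The abstract's central point, that the calibration algorithm itself can change projections, can be demonstrated on a toy model. In the sketch below, a three-parameter linear relaxation equation stands in for the full eight-parameter air2water model, and two generic optimizers (differential evolution and Nelder-Mead) are compared: both fit the synthetic "historical" record similarly, yet their parameter sets can diverge when driven with a warmer air-temperature scenario.

```python
# Same model, two calibration algorithms, compared on a warming scenario.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
t = np.arange(365)
Ta = 10 + 12 * np.sin(2 * np.pi * (t - 110) / 365)     # synthetic air temperature

def simulate(p, Ta):
    # Toy relaxation model: dTw = a1 + a2*Ta - a3*Tw, clipped at freezing.
    a1, a2, a3 = p
    Tw = np.empty_like(Ta)
    Tw[0] = 4.0
    for i in range(1, len(Ta)):
        Tw[i] = max(Tw[i - 1] + a1 + a2 * Ta[i] - a3 * Tw[i - 1], 0.0)
    return Tw

Tw_obs = simulate([0.2, 0.05, 0.07], Ta) + rng.normal(0, 0.4, t.size)

def rmse(p):
    return np.sqrt(np.mean((simulate(p, Ta) - Tw_obs) ** 2))

bounds = [(0, 1), (0, 0.2), (0, 0.3)]
p_de = differential_evolution(rmse, bounds, seed=1).x
p_nm = minimize(rmse, x0=[0.5, 0.1, 0.15], method="Nelder-Mead").x
print("DE params:", p_de.round(3), "RMSE", round(rmse(p_de), 3))
print("NM params:", p_nm.round(3), "RMSE", round(rmse(p_nm), 3))
# Similar historical fits can still differ under a +3 °C air-temperature scenario:
print("max projection gap (°C):",
      round(np.abs(simulate(p_de, Ta + 3) - simulate(p_nm, Ta + 3)).max(), 2))
```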
The development of intelligent design methods for buckling-restrained brace (BRB) retrofit schemes can effectively enhance the seismic performance of reinforced concrete (RC) frame structures, addressing their insufficient seismic capacity. This study further explores a two-stage intelligent design framework for BRB retrofitting that combines generative artificial intelligence (AI) and optimization algorithms. In Stage 1, generative AI models, including diffusion models, generative adversarial networks (GANs), and graph neural networks, extract features from design drawings to identify potential BRB locations. In Stage 2, optimization algorithms, such as genetic algorithms, simulated annealing, and online learning, integrated with YJK Y-GAMA software, determine the optimal placement and sizing of the BRBs. A comprehensive comparative analysis of design performance and efficiency is conducted for different algorithm combinations in both stages. The results indicate that GANs and diffusion models effectively capture both global and local design features, and genetic algorithms provide an efficient exploration of the design space. Combining these methods yields near-optimal solutions in a short time, ensuring compliance with mechanical standards and cost-effectiveness. In conclusion, this study offers valuable recommendations for the selection of generative AI methods and optimization algorithms in the design process, with the potential to promote the application of intelligent design in engineering practice.
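Stage 2's optimization step can be sketched with a plain genetic algorithm over binary placement vectors (1 = a BRB in that candidate bay/story slot). The fitness below is a toy drift-versus-cost surrogate; in the actual workflow that evaluation would come from structural analysis software such as YJK, and the GA operators shown (truncation selection, one-point crossover, bit-flip mutation) are generic choices, not the paper's exact configuration.

```python
# GA over binary BRB placement vectors with a toy drift-vs-cost fitness.
import numpy as np

rng = np.random.default_rng(0)
n_slots = 24                           # candidate bays x stories from Stage 1

def fitness(ind):
    # Toy surrogate: more braces reduce "drift" but raise cost.
    drift = 1.0 / (1.0 + ind.sum())
    cost = 0.02 * ind.sum()
    return -(drift + cost)             # maximize => minimize drift + cost

pop = rng.integers(0, 2, size=(30, n_slots))
for gen in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_slots)
        child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
        flip = rng.random(n_slots) < 0.05                   # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("braces placed:", int(best.sum()), "fitness:", round(fitness(best), 3))
```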
In this paper, an evaluation strategy called Complex Preference Analysis is proposed for the evaluation of optimization algorithms; it assesses the efficiency of different evolutionary algorithms by considering mul...
Energy-efficient coverage enhancement (EEC) is a highly non-convex and challenging optimization problem in the deployment of wireless sensor networks (WSNs). Traditional intelligent optimization algorithms often suffer from ...
Accurately estimating the Energy Dissipation Rate (EDR) in Hydrofoil-Crested Stepped Spillways (HCSSs) is crucial for ensuring the safety and optimizing the performance of these hydraulic structures. This study investigates the prediction of EDR using advanced hybrid Machine Learning (ML) models, including the Tabular Neural Network with Moth Flame Optimization (TabNet-MFO), Long Short-Term Memory with Ant Lion Optimizer (LSTM-ALO), Extreme Learning Machine with Jaya and Firefly Optimization (ELM-JFO), and Support Vector Regression with Improved Whale Optimization (SVR-IWOA). Notably, two novel models, TabNet-MFO and SVR-IWOA, are introduced for the first time, providing dynamic hyperparameter optimization to enhance prediction accuracy in complex hydraulic conditions. To develop the models, a dataset comprising 462 laboratory data points from HCSS experiments was used, with 75% allocated for the training stage and 25% for the testing stage. The Isolation Forest (IF) algorithm was employed to detect and remove outliers, resulting in the exclusion of 5% of the original dataset. Dimensional analysis was conducted to identify key factors influencing EDR, including step number (N_S), chute angle (θ), hydrofoil formation index (t), and the ratio of critical depth to total chute height (y_C/P_S). ANOVA and SHAP analyses confirmed the significant impact of the y_C/P_S ratio on EDR. Model performance was evaluated using metrics such as the coefficient of determination (R²), Root Mean Squared Error (RMSE), Scatter Index (SI), Weighted Mean Absolute Percentage Error (WMAPE), and symmetric Mean Absolute Percentage Error (sMAPE). Performance was further compared using Taylor diagrams, residual error curves (REC), and the Performance Index (PI). During the training stage, TabNet-MFO outperformed the other models with a PI of 0.784 and a normalized Root Mean Squared Error (E') of 1.231, followed by ELM-JFO with a PI of 0.605 and E' of 1.125. In the testing stage, TabNet-MFO m...
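The preprocessing pipeline described above (Isolation Forest outlier screening followed by a 75/25 split) can be sketched as follows, assuming scikit-learn; the data here is synthetic, and the four features merely stand in for the dimensionless groups N_S, θ, t, and y_C/P_S.

```python
# Isolation Forest outlier removal, then a 75/25 train/test split.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(462, 4))                 # stand-ins for N_S, theta, t, y_C/P_S
y = X @ np.array([0.3, 0.2, 0.1, 0.4]) + rng.normal(0, 0.1, 462)   # synthetic EDR

# Flag ~5% of points as outliers and drop them, mirroring the abstract.
mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
X, y = X[mask], y[mask]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)
print(len(X_tr), "train /", len(X_te), "test")
```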
Single-objective optimization algorithms search for the single highest quality solution with respect to an objective. Quality diversity (QD) optimization algorithms, such as Covariance Matrix Adaptation MAP-Elites (CM...