Accurate prediction of carbon prices is crucial for a stable market, enabling informed decision-making and strategic planning. Over the years, several models for predicting carbon prices have been proposed to enhance accuracy. However, previous research has primarily focused on improving accuracy, often neglecting the importance of making the findings understandable and meaningful. This paper aims to bridge that gap by not only improving prediction accuracy but also ensuring that the results are transparent and comprehensible, thus contributing to more effective and informed decision-making in the carbon market. An optimized Long Short-Term Memory (LSTM) network enhanced with the modified light spectrum optimizer (MLSO) is proposed to improve carbon price prediction accuracy. Additionally, the paper incorporates Explainable AI (XAI) techniques to interpret the results, bridging the gap between accuracy and interpretability. The proposed model is evaluated on carbon price historical transaction data acquired from *** and tested on eight other benchmark datasets with different characteristics. The proposed model achieved 0.66 root mean square error (RMSE), 0.99 R², 0.37 mean absolute error (MAE), 0.15 mean absolute percentage error (MAPE), and 0.44 mean square error (MSE). The results showed that low price, high price, and open price features have the highest significance in driving the model's predictions in comparison to other features like date, volume, and price change features. Additionally, the results indicate that the year, day, and month do not significantly influence the carbon price. The proposed model outperforms state-of-the-art models and other well-known machine learning algorithms according to the experimental results. Moreover, the results indicate that the predictive capability of the proposed model serves as a valuable tool for investors and carbon traders to understand the factors influencing price changes, optimize their strategies, and
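The abstract reports five regression metrics (RMSE, R², MAE, MAPE, MSE). As a point of reference, a minimal plain-Python sketch of how such metrics are computed from a prediction vector (illustrative only, not the authors' implementation):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute RMSE, MSE, MAE, MAPE (in %), and R^2 for paired observations."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    # MAPE assumes no true value is zero
    mape = sum(abs(e / t) for e, t in zip(errors, y_true)) / n * 100
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    # R^2 = 1 - SS_res / SS_tot
    r2 = 1.0 - (mse * n) / ss_tot
    return {"RMSE": rmse, "MSE": mse, "MAE": mae, "MAPE": mape, "R2": r2}
```

For example, `regression_metrics([1, 2, 3, 4], [1, 2, 3, 5])` yields RMSE 0.5, MAE 0.25, MAPE 6.25, MSE 0.25, and R² 0.8.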
The popularity of quadrotor Unmanned Aerial Vehicles (UAVs) stems from their simple propulsion systems and structural ***, their complex and nonlinear dynamic behavior presents a significant challenge for control, necessitating sophisticated algorithms to ensure stability and accuracy in *** strategies have been explored by researchers and control engineers, with learning-based methods like reinforcement learning, deep learning, and neural networks showing promise in enhancing the robustness and adaptability of quadrotor control *** paper investigates a Reinforcement Learning (RL) approach for both high- and low-level quadrotor control systems, focusing on attitude stabilization and position tracking tasks. A novel reward function and actor-critic network structures are designed to stimulate high-order observable states, improving the agent's understanding of the quadrotor's dynamics and environmental *** address the challenge of RL hyper-parameter tuning, a new framework is introduced that combines Simulated Annealing (SA) with a reinforcement learning algorithm, specifically Simulated Annealing-Twin Delayed Deep Deterministic Policy Gradient (SA-TD3). This approach is evaluated for path-following and stabilization tasks through comparative assessments with two commonly used control methods: Backstepping and Sliding Mode Control (SMC). While the implementation of the well-trained agents exhibited unexpected behavior during real-world testing, a reduced neural network used for altitude control was successfully implemented on a Parrot Mambo mini *** results showcase the potential of the proposed SA-TD3 framework for real-world applications, demonstrating improved stability and precision across various test scenarios and highlighting its feasibility for practical deployment.
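The simulated-annealing component of SA-TD3 can be illustrated with a generic annealing loop over hyper-parameter vectors. In the paper the objective would be the negative return of a TD3 agent trained with the candidate hyper-parameters; the quadratic objective below is only a stand-in, and all names and step sizes are illustrative assumptions:

```python
import math
import random

def simulated_annealing(objective, init, neighbor, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Minimize a black-box `objective` over hyper-parameter vectors via SA."""
    rng = random.Random(seed)
    current = best = init
    f_cur = f_best = objective(init)
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        f_cand = objective(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if f_cand < f_cur or rng.random() < math.exp((f_cur - f_cand) / max(t, 1e-12)):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
        t *= cooling  # geometric cooling schedule
    return best, f_best

# Toy stand-in objective over two hypothetical hyper-parameters (e.g. lr, tau)
obj = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.05) ** 2
step = lambda x, rng: [x[0] + rng.uniform(-0.05, 0.05), x[1] + rng.uniform(-0.05, 0.05)]
best_hp, best_f = simulated_annealing(obj, [0.9, 0.5], step)
```

Since the best-so-far value only ever decreases, the loop returns a configuration no worse than the starting one.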
The telecom sector is currently undergoing a digital transformation by integrating artificial intelligence (AI) and Internet of Things (IoT) technologies. Customer retention in this context relies on the application of autonomous AI methods for analyzing IoT device data patterns in relation to the offered service packages. One significant challenge in existing studies is treating churn recognition and customer segmentation as separate tasks, which diminishes overall system accuracy. This study introduces an innovative approach by leveraging a unified customer analytics platform that treats churn recognition and segmentation as a bi-level optimization problem. The proposed framework includes an Auto Machine Learning (AutoML) oversampling method, effectively handling three mixed datasets of customer churn features while addressing imbalanced-class distribution issues. To enhance performance, the study utilizes the strength of oversampling methods like the synthetic minority oversampling technique for nominal and continuous features (SMOTENC) and synthetic minority oversampling with encoded nominal and continuous features (SMOTE-ENC). Performance evaluation, using 10-fold cross-validation, measures accuracy and F1-score. Simulation results demonstrate that the proposed strategy, particularly Random Forest (RF) with SMOTENC, outperforms standard methods with SMOTE. It achieves accuracy rates of 79.24%, 94.54%, and 69.57%, and F1-scores of 65.25%, 81.87%, and 45.62% for the IBM, Kaggle Telco, and Cell2Cell datasets, respectively. The proposed method autonomously determines the number and density of clusters. Factor analysis employing Bayesian logistic regression identifies influential factors for accurate customer segmentation. Furthermore, the study segments consumers behaviorally and generates targeted recommendations for personalized service packages, benefiting decision-makers.
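The core idea behind the SMOTE-family oversamplers used above is interpolation between a minority sample and one of its nearest minority neighbours. A minimal sketch for continuous features only (the nominal-feature handling that distinguishes SMOTENC and SMOTE-ENC is omitted; this is not the study's code):

```python
import random

def smote_continuous(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by linear interpolation
    between a random sample and one of its k nearest minority neighbours.
    Assumes at least two minority samples with numeric features."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of `base` within the minority class (squared Euclidean)
        neighbours = sorted(
            (s for s in minority if s is not base),
            key=lambda s: sum((a - b) ** 2 for a, b in zip(base, s)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + gap * (b - a) for a, b in zip(base, nb)])
    return synthetic
```

Because each synthetic point is a convex combination of two real minority points, it always lies on the segment between them, densifying the minority region rather than duplicating records.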
The progress of Industrial Revolution 4.0 has been supported by recent advances in several domains, and one of the main contributors is the Internet of Things. Smart factories and healthcare have both benefited in terms of leveraged quality of service and productivity rate. However, there is always a trade-off, and some of the largest concerns include security, intrusion, and failure detection, due to the high dependence on Internet of Things devices. To overcome these and other challenges, artificial intelligence, especially machine learning algorithms, is employed for fault prediction, intrusion detection, computer-aided diagnostics, and so forth. However, the efficiency of machine learning models depends heavily on feature selection, predetermined hyper-parameter values, and training to deliver the desired result. This paper proposes a swarm intelligence-based approach to tune machine learning models. A novel version of the firefly algorithm, which overcomes known deficiencies of the original method by employing a diversification-based mechanism, is proposed and applied to both feature selection and hyper-parameter optimization of two machine learning models: XGBoost and the extreme learning machine. The proposed approach has been tested on four real-world Industry 4.0 data sets, namely distributed transformer monitoring, elderly fall prediction, BoT-IoT, and UNSW-NB15. Achieved results have been compared to the results of eight other cutting-edge metaheuristics, implemented and tested under the same conditions. The experimental outcomes strongly indicate that the proposed approach significantly outperforms all competitor metaheuristics in terms of convergence speed and quality of results, measured with the standard metrics: accuracy, precision, recall, and F1-score.
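The baseline that the paper's modified firefly algorithm builds on follows Yang's original formulation: each firefly moves toward every brighter (lower-objective) firefly with an attractiveness that decays with distance, plus a shrinking random step. A minimal sketch of that unmodified baseline (the paper's diversification mechanism is not reproduced here):

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Basic firefly algorithm minimizing f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    light = [f(x) for x in pop]  # lower objective value = brighter firefly
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # move firefly i toward brighter firefly j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = f(pop[i])
        alpha *= 0.97  # shrink the random step over time to aid convergence
    best = min(range(n), key=lambda i: light[i])
    return pop[best], light[best]
```

In the paper's setting, `f` would score a candidate feature subset or hyper-parameter configuration via model validation error rather than a closed-form test function.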
"Distributed Denial of Service (DDoS)" threats have become a tool for hackers, cyber swindlers, and cyber terrorists. Despite the many conventional mitigation mechanisms available today, DDoS threats continue to grow in severity, volume, and frequency. DDoS attacks have severely affected network availability in recent years, and there is still no efficient defense technique against them. Moreover, new and complex DDoS attacks appear daily, and traditional DDoS attack detection techniques cannot react to these threats, while attackers employ highly innovative strategies to launch them. However, traditional methods can become effective and reliable when combined with deep learning-aided approaches. To address these issues, a detection framework for DDoS attacks utilizing an attention-aided deep learning methodology is proposed. The first step is the acquisition of data from standard online data sources. From the garnered data, significant features are extracted by a "Deep Weighted Restricted Boltzmann Machine (RBM)" within a "Deep Belief Network (DBN)", whose parameters are tuned by employing the recommended Enhanced Gannet Optimization Algorithm (EGOA). This feature extraction step increases the network performance rate and mitigates dimensionality issues. Lastly, the acquired features are passed to the "Attention and Cascaded Recurrent Neural Network (RNN) with Residual Long Short Term Memory (LSTM) (ACRNN-RLSTM)" blocks for DDoS threat detection. The designed network precisely identifies complex and new attacks, thus increasing the trustworthiness of the network. Finally, the performance of the approach is contrasted with other traditional algorithms, and the simulation outcomes demonstrate the system's efficiency. Also, the outcomes displayed that the designed s
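The attention mechanism named in the ACRNN-RLSTM model can be illustrated in its simplest form: score each hidden state of a sequence, softmax the scores into weights, and return the weighted sum. This sketch is a generic simplification under assumed shapes, not the paper's architecture:

```python
import math

def attention_pool(states, score_w):
    """Attention pooling over a sequence of hidden-state vectors.
    `score_w` is a learned scoring vector (here just given); the output is a
    context vector that emphasizes the highest-scoring time steps."""
    scores = [sum(w * h for w, h in zip(score_w, s)) for s in states]
    m = max(scores)
    exp = [math.exp(v - m) for v in scores]  # numerically stable softmax
    total = sum(exp)
    weights = [v / total for v in exp]
    dim = len(states[0])
    context = [sum(weights[t] * states[t][d] for t in range(len(states)))
               for d in range(dim)]
    return context, weights
```

In a DDoS detector, high attention weights would concentrate on the time steps whose traffic features look most attack-indicative, which is what makes the model's focus inspectable.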
The learning process and hyper-parameter optimization of artificial neural networks (ANNs) and deep learning (DL) architectures is considered one of the most challenging machine learning problems. Several past studies have used gradient-based back-propagation methods to train DL architectures. However, gradient-based methods have major drawbacks, such as getting stuck at local minima of multi-objective cost functions, expensive execution time due to calculating gradient information over thousands of iterations, and requiring the cost functions to be continuous. Since training ANNs and DLs is an NP-hard optimization problem, optimizing their structure and parameters using meta-heuristic (MH) algorithms has attracted considerable attention. MH algorithms can accurately formulate the optimal estimation of DL components (such as hyper-parameters, weights, number of layers, number of neurons, learning rate, etc.). This paper provides a comprehensive review of the optimization of ANNs and DLs using MH algorithms. We review the latest developments in the use of MH algorithms in DL and ANN methods, present their disadvantages and advantages, and point out some research directions to fill the gaps between MHs and DL methods. Moreover, it is explained that the evolutionary hybrid architecture still has limited applicability in the literature. This paper also classifies the latest MH algorithms in the literature to demonstrate their effectiveness in DL and ANN training for various applications. Most researchers tend to devise novel hybrid algorithms by combining MHs to optimize the hyper-parameters of DLs and ANNs. The development of hybrid MHs helps improve algorithm performance and makes it possible to solve complex optimization problems. In general, the optimal performance of MHs requires a suitable trade-off between exploration and exploitation. Hence, this paper tries to summarize various MH algorithms in terms of th
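Gradient-free training of a network, the family of methods this review surveys, can be demonstrated with the simplest possible metaheuristic: a (1+1) hill climber that perturbs all weights of a tiny 2-2-1 tanh network and keeps the candidate only if the loss drops. This is a deliberately minimal stand-in for the far more sophisticated MH algorithms discussed; network size, step size, and the XOR task are illustrative choices:

```python
import math
import random

def net(w, x):
    """Tiny 2-2-1 tanh network; w holds 9 weights (2x2 + 2 hidden biases, 2 + 1 output)."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def loss(w, data):
    return sum((net(w, x) - y) ** 2 for x, y in data) / len(data)

def hill_climb_train(data, steps=2000, sigma=0.3, seed=1):
    """(1+1) hill climbing over the weight vector: no gradients, no continuity
    requirement on the cost function; greedy acceptance makes the loss
    monotonically non-increasing."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(9)]
    best = loss(w, data)
    for _ in range(steps):
        cand = [wi + rng.gauss(0, sigma) for wi in w]
        l = loss(cand, data)
        if l < best:  # keep the perturbation only if it improves the loss
            w, best = cand, l
    return w, best

# XOR with targets in {-1, 1} to match the tanh output range
xor = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
```

Because acceptance is greedy, the returned loss is never worse than the initial random network's loss, which is exactly the guarantee (and the limitation, local optima aside) that motivates the more elaborate MHs the paper reviews.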
Although machine learning (ML) techniques have been widely used in various fields of engineering practice, their applications in the field of wind engineering are still at the initial stage. In order to evaluate the feasibility of machine learning algorithms for prediction of wind loads on high-rise buildings, this study took the exposure category, wind direction, and the height of the local wind force as input features and adopted four machine learning algorithms, namely k-nearest neighbor (KNN), support vector machine (SVM), gradient boosting regression tree (GBRT), and extreme gradient (XG) boosting, to predict wind force coefficients of the CAARC standard tall building model. All hyper-parameters of the four ML algorithms are optimized by the tree-structured Parzen estimator (TPE). The results show that mean drag force coefficients and RMS lift force coefficients can be well predicted by the GBRT model, while the RMS drag force coefficients are better forecasted by the XG boosting model. The proposed machine learning based approach to wind load prediction can be an alternative to traditional wind tunnel tests and fluid simulations.
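The gradient boosting idea behind GBRT (and XG boosting) is to fit each new base learner to the residuals of the current ensemble, shrunk by a learning rate. A minimal one-feature sketch with decision stumps as base learners (a toy illustration of the principle, not the study's pipeline; it assumes at least two distinct feature values):

```python
def fit_stump(x, resid):
    """Best single-split decision stump on a 1-D feature, minimizing squared error."""
    best = None
    order = sorted(range(len(x)), key=lambda i: x[i])
    for c in range(1, len(x)):
        thr = (x[order[c - 1]] + x[order[c]]) / 2
        left = [resid[i] for i in range(len(x)) if x[i] <= thr]
        right = [resid[i] for i in range(len(x)) if x[i] > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    return best[1:]

def gbrt_1d(x, y, n_trees=50, lr=0.1):
    """Gradient boosting for 1-D regression: each stump fits current residuals."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        thr, lm, rm = fit_stump(x, resid)
        stumps.append((thr, lm, rm))
        pred = [p + lr * (lm if xi <= thr else rm) for p, xi in zip(pred, x)]
    def predict(xq):
        out = base
        for thr, lm, rm in stumps:
            out += lr * (lm if xq <= thr else rm)
        return out
    return predict
```

With enough rounds the ensemble reproduces even a sharp step in the target, which is why boosted trees handle the abrupt regime changes typical of wind force coefficient data well.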
Feature selection and hyper-parameter optimization (tuning) are two of the most important and challenging tasks in machine learning. To achieve satisfying performance, every machine learning model has to be adjusted for a specific problem, as no universally efficient approach exists. In addition, most data sets contain irrelevant and redundant features that can even have a negative influence on the model's performance. Machine learning can be applied almost everywhere; however, due to the high risks posed by the growing number of malicious, phishing websites on the world wide web, this research addresses feature selection and tuning for this particular problem. Notwithstanding that many metaheuristics have been devised for both feature selection and machine learning tuning challenges, there is still much room for improvement. Therefore, the research exhibited in this manuscript tries to improve phishing website detection by tuning an extreme learning machine that utilizes the most relevant subset of phishing website data set features. To accomplish this goal, a novel diversity-oriented social network search algorithm has been developed and incorporated into a two-level cooperative framework. The proposed algorithm has been compared to six other cutting-edge metaheuristic algorithms, implemented in the framework and tested under the same experimental conditions. All metaheuristics are employed in level 1 of the devised framework to perform the feature selection task. The best-obtained subset of features is then used as the input to level 2 of the framework, where all algorithms perform tuning of the extreme learning machine. Tuning refers to the number of neurons in the hidden layers and the initialization of weights and biases. For evaluation purposes, three phishing website data sets of different sizes and numbers of classes, retrieved from the UCI and Kaggle repositories, were employed, and all methods are compared in te
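Level 1 of such a framework is wrapper-style feature selection: candidate subsets are encoded as binary masks and scored by a model's validation performance. A minimal sketch with plain random search standing in for the social network search metaheuristic, and a synthetic scoring function standing in for cross-validated ELM accuracy (both are illustrative assumptions):

```python
import random

def select_features(score, n_features, evals=500, seed=0):
    """Wrapper feature selection: search binary masks, keep the best-scoring one.
    `score` maps a mask (list of bools) to a fitness value to maximize."""
    rng = random.Random(seed)
    best_mask, best_score = None, float("-inf")
    for _ in range(evals):
        mask = [rng.random() < 0.5 for _ in range(n_features)]
        if not any(mask):  # an empty feature subset is invalid
            continue
        s = score(mask)
        if s > best_score:
            best_mask, best_score = mask, s
    return best_mask, best_score

# Synthetic score: features 0 and 2 are informative; every extra feature costs 0.4
toy_score = lambda m: (2.0 * m[0] + 1.5 * m[2]) - 0.4 * sum(m)
```

The winning mask from level 1 would then define the input columns for level 2, where the same search machinery tunes the ELM's hidden-layer size and weight initialization.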
Smart manufacturing involves the use of a variety of automation solutions, such as robotics, machines with embedded software, and advanced sensors collecting vast quantities of data. Efficient control of a complex composition of such solutions, as well as the analysis of the collected data, are essential for improving the efficiency of the production processes and decision-making. Data analysis and process optimization are enabled through the application of state-of-the-art optimization and machine learning algorithms. However, the efficient use of these algorithms often depends on the careful selection of the parameters, which on its own is a process that requires a high degree of expertise and time. Therefore, another class of algorithms can be applied that is designed to discover the optimal parameter configuration given the specific nature of the manufacturing process and the used algorithm. In this work, we systematically analyze the published literature to discover which parameter selection techniques are used in the context of Industry 4.0, for which processes, and how these benefit from automated parameter selection. Within our literature review, we discover nine relevant publications, most of which concentrate on parameter selection for machine learning algorithms through various numerical optimization and metaheuristic techniques. (C) 2024 The Authors. Published by Elsevier B.V.
Training procedures for deep networks require the setting of several hyper-parameters that strongly affect the obtained results. The problem is even worse in adversarial learning strategies used for image generation, where a proper balancing of the discriminative and generative networks is fundamental for an effective training. In this work we propose a novel hyper-parameter optimization strategy based on the use of Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) controllers. Both open-loop and closed-loop schemes for the tuning of a single parameter or of multiple parameters together are proposed, allowing efficient parameter tuning without resorting to computationally demanding trial-and-error schemes. We applied the proposed strategies to the widely used BEGAN and CycleGAN models: they enable more stable training that converges faster. The obtained images are also sharper, with slightly better quality both visually and according to the FID and FCN metrics. Image translation results also show better background preservation and fewer color artifacts with respect to CycleGAN. (c) 2022 Elsevier Ltd. All rights reserved.
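The closed-loop idea can be sketched with a plain PI controller that drives a measured training statistic toward a setpoint by adjusting a hyper-parameter each step. The toy "plant" below, where the measured statistic simply tracks the controlled parameter, stands in for a real training loop (in BEGAN, for instance, the controlled quantity would balance the discriminator and generator losses); gains and the plant model are illustrative assumptions:

```python
def pi_tune(setpoint, measure, apply, k_p=0.5, k_i=0.1, steps=200):
    """Closed-loop PI tuning of a single hyper-parameter: each step, measure the
    tracked statistic, compute the error, and apply the PI control signal."""
    integral = 0.0
    for _ in range(steps):
        error = setpoint - measure()
        integral += error
        apply(k_p * error + k_i * integral)  # PI control law
    return measure()

# Toy plant: the measured statistic moves a fraction of the control signal
state = {"param": 0.0}
final = pi_tune(
    setpoint=1.0,
    measure=lambda: state["param"],
    apply=lambda u: state.update(param=state["param"] + 0.2 * u),
)
```

The integral term removes the steady-state offset a purely proportional scheme would leave, which is why the measured statistic settles onto the setpoint instead of just near it.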