Electroencephalogram (EEG) and electrocardiogram (ECG) signals provide vital insights into brain and heart activity and are widely used in automated medical diagnostics. This study introduces a novel, multimodal fibrom...
With the increasing number of electrical fires caused by fault arcs, effectively identifying such faults has become increasingly important. According to actual product testing, it has been found that many fault arc dete...
In recent years, global population aging has intensified, leading to a sharp increase in social security benefits and caregiving costs. The elderly face greater health risks, and their behaviors often indicate sig...
In this paper, we aim to reduce the number of nodes in Graph Neural Networks (GNNs), thereby simplifying models and reducing computational costs. GNNs are highly effective for tasks such as prediction, classification, and clustering because they learn node and edge attributes and their relationships, and they have recently been applied to intelligent transportation systems by converting sensor networks into graph structures. Deep spatio-temporal neural networks, including Spatio-Temporal Graph Convolutional Networks (STGCNs), capture spatial and temporal dependencies, making them suitable for traffic speed forecasting, traffic demand prediction, and travel time estimation. Despite their success, GNNs face challenges in industrial applications due to significant memory usage and time consumption. In this paper, we propose a new node-reduction approach that outperforms existing methods in computational efficiency. Our experiments on two real-world traffic datasets demonstrate that using the heuristic together with edge information to reduce nodes can cut the optimization computation time by up to 95% and, by eliminating noise, can even improve prediction accuracy.
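The abstract does not spell out the heuristic, so the following is only a minimal sketch of edge-information-based node reduction under assumed conventions: a weighted sensor adjacency matrix is thinned by keeping the best-connected nodes, and the reduced graph would then feed a downstream STGCN. The function name, keep ratio, and toy graph are illustrative, not taken from the paper.

import numpy as np

def reduce_nodes(adj, keep_ratio=0.5):
    """Drop the weakest-connected nodes from a weighted adjacency matrix.

    adj: (N, N) array of non-negative edge weights (sensor graph).
    keep_ratio: fraction of nodes to retain.
    Returns the reduced adjacency and the indices of the kept nodes.
    """
    strength = adj.sum(axis=0) + adj.sum(axis=1)    # total in+out edge weight per node
    n_keep = max(1, int(len(strength) * keep_ratio))
    keep = np.sort(np.argsort(strength)[-n_keep:])  # strongest nodes, original order
    return adj[np.ix_(keep, keep)], keep

# Toy example: 5 sensors, keep the 3 best-connected ones.
adj = np.array([[0, 2, 0, 0, 0],
                [2, 0, 3, 0, 0],
                [0, 3, 0, 1, 0],
                [0, 0, 1, 0, 0.1],
                [0, 0, 0, 0.1, 0]])
reduced_adj, kept = reduce_nodes(adj, keep_ratio=0.6)
print(kept)          # [0 1 2]
print(reduced_adj)   # 3x3 adjacency of the retained sensors

A smaller adjacency matrix shrinks every graph-convolution step in the forecasting model, which is the source of the reported computation savings; the actual heuristic and any noise-removal criterion would follow the paper, not this sketch.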
The integration of Artificial Intelligence (AI) with Low Power Wide Area Networks (LPWAN) offers a promising approach to address resource constraints and dynamic network conditions inherent in these networks. However,...
Some researchers encounter data with imbalanced class conditions, where a minority of samples belong to one class and the majority to another. SMOTE is a data-level approach for imbalanced classes, and XGBoost is one algorithm for an imba...
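As a minimal sketch of the SMOTE-plus-XGBoost pipeline this abstract refers to, assuming the imbalanced-learn and xgboost packages; the synthetic dataset and hyperparameters below are illustrative only, not taken from the paper.

# Oversample the minority class with SMOTE, then train an XGBoost classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Imbalanced toy data: roughly 10% minority class.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=42)

# Oversample only the training split so no synthetic samples leak into the test set.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_res, y_res)

print(classification_report(y_test, clf.predict(X_test)))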
This paper reviews the latest trends and challenges in implementing digital twin technology. A digital twin is a tool used in various industries to improve efficiency, optimize processes, and enable advanced analysis....
Optimizing the trajectory of an Unmanned Aerial Vehicle (UAV) Base Station (BS) is an important operational task for improving the Quality of Service (QoS) in remote areas. However, most existing works neglect the fair c...
Transcranial alternating current stimulation (tACS) is a promising noninvasive technique for modulating disrupted neural oscillations in psychiatric disorders and enhancing cognitive functions. However, its efficacy r...
In this paper, we propose a Mean Field Reinforcement Learning (MFRL) method for dynamic antenna control in a High-Altitude Platform Station (HAPS) communication system with a multi-cell configuration. A HAPS operates at stratospheric altitudes of about 20 km to provide an ultra-wide coverage area. However, HAPS movement caused by wind pressure degrades users' throughput. Considering the multiple antenna arrays on the HAPS, we formulate the antenna control problem as a stochastic game in order to find the optimal parameters of all antenna arrays that minimize the number of low-throughput users. Solving the stochastic game for its equilibrium usually requires very high computational complexity, because the transition probabilities must be calculated to obtain the $\mathcal{Q}$-value for each state and action. Therefore, we use a reinforcement learning (RL) method, the Deep $\mathcal{Q}$-Network (DQN), to learn the transition dynamics and predict the $\mathcal{Q}$-value from the reward fed back by the environment. In addition, we employ Mean Field Game theory in conjunction with RL during the DQN training phase to reduce the complexity of the interactions among agents. To evaluate the proposed method, we compare it with a population-based metaheuristic, Particle Swarm Optimization (PSO), as well as $\mathcal{Q}$-learning, Fuzzy $\mathcal{Q}$-learning, and conventional DQN under four realistic user-distribution scenarios. The simulation results show that the proposed method achieves comparable throughput performance with a high convergence rate.
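The paper uses a DQN with a mean-field approximation; the tabular sketch below only illustrates the underlying mean-field Q update, in which each antenna agent conditions its Q-value on a discretised mean action of the other agents and bootstraps with a Boltzmann-averaged next-state value. The state, action, and reward quantities are assumptions for illustration, not the authors' implementation.

import numpy as np

# Each antenna agent keeps Q(s, a, a_bar), where a_bar is the discretised
# mean action of the other agents. Simplified, assumed setting: discrete
# states/actions and a Boltzmann policy.
N_STATES, N_ACTIONS, N_MEAN_BINS = 10, 4, 5
ALPHA, GAMMA, TEMP = 0.1, 0.9, 1.0

Q = np.zeros((N_STATES, N_ACTIONS, N_MEAN_BINS))

def boltzmann_policy(state, mean_bin):
    """Softmax over the agent's Q-values given the neighbours' mean action."""
    prefs = Q[state, :, mean_bin] / TEMP
    prefs -= prefs.max()                       # numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def mean_field_update(s, a, mean_bin, r, s_next, mean_bin_next):
    """One Q update: bootstrap with the Boltzmann-averaged next-state value."""
    pi_next = boltzmann_policy(s_next, mean_bin_next)
    v_next = np.dot(pi_next, Q[s_next, :, mean_bin_next])
    td_target = r + GAMMA * v_next
    Q[s, a, mean_bin] += ALPHA * (td_target - Q[s, a, mean_bin])

# Toy transition: agent in state 3 takes antenna action 2 while the other
# antennas' mean action falls in bin 1, receives reward 0.7, moves to state 4.
mean_field_update(s=3, a=2, mean_bin=1, r=0.7, s_next=4, mean_bin_next=1)
print(Q[3, 2, 1])

Conditioning on a single mean-action statistic instead of every neighbour's action is what keeps the interaction complexity manageable as the number of antenna arrays grows; in the paper this role is played by the mean-field term inside the DQN, not by a lookup table.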