Connected Autonomous Vehicle (CAV) driving, as a data-driven intelligent driving technology within the Internet of Vehicles (IoV), presents significant challenges to the efficiency and security of real-time data management. The combination of Web3.0 and edge content caching holds promise for providing low-latency data access for CAVs’ real-time applications. Web3.0 enables the reliable pre-migration of frequently requested content from content providers to edge nodes. However, identifying optimal edge node peers for joint content caching and replacement remains challenging due to the dynamic nature of traffic flow in IoV. To address these challenges, this article introduces GAMA-Cache, an edge content caching methodology that leverages Graph Attention Networks (GAT) and Multi-Agent Reinforcement Learning (MARL). GAMA-Cache formulates cooperative edge content caching as a constrained Markov decision process and employs a MARL technique based on cooperation effectiveness to learn caching decisions, with GAT augmenting the information extracted from adjacent nodes. A dedicated collaborator selection mechanism further streamlines communication between agents by filtering out those with minimal correlation in the vector input to the policy network. Experimental results demonstrate that, in terms of service latency and delivery failure, GAMA-Cache outperforms state-of-the-art MARL solutions for edge content caching in IoV.
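The collaborator selection step described above can be sketched as a simple correlation filter over neighbor state vectors. This is a minimal illustration, not the paper's implementation: the threshold `tau`, the Pearson correlation measure, and the request-frequency vectors are all assumptions made for the example.

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between two equal-length feature vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_collaborators(ego_state, neighbor_states, tau=0.5):
    # Keep only neighbors whose content-request pattern correlates
    # with the ego edge node above threshold tau; the rest are
    # filtered out of the policy network's input vector.
    return [i for i, s in enumerate(neighbor_states)
            if pearson(ego_state, s) >= tau]

ego = [1.0, 0.8, 0.2, 0.5]            # request frequencies per content item
neighbors = [[0.9, 0.7, 0.3, 0.4],    # similar pattern -> kept
             [0.1, 0.9, 0.8, 0.2]]    # dissimilar -> filtered out
print(select_collaborators(ego, neighbors))  # -> [0]
```

Filtering before the policy network keeps the agent's input dimension small as the number of neighboring edge nodes grows.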
The rapid advancements in big data and the Internet of Things (IoT) have significantly accelerated the digital transformation of medical institutions, leading to the widespread adoption of Digital Twin Healthcare (DTH). The Cloud DTH Platform (CDTH) is a cloud-based framework that integrates DTH models, healthcare resources, patient data, and medical services. By leveraging real-time data from medical devices, the CDTH platform enables intelligent healthcare services such as disease prediction and medical resource optimization. However, the platform functions as a system of systems (SoS) comprising interconnected yet independent healthcare services. This complexity is further compounded by the integration of both black-box AI models and domain-specific mechanistic models, which makes it challenging to ensure the interpretability and trustworthiness of DTH models. To address these challenges, we propose a Model-Based Systems Engineering (MBSE)-driven DTH modeling methodology derived from systematic requirement and functional analyses. To implement this methodology effectively, we introduce a DTH model development approach based on the X language, along with a comprehensive toolchain designed to streamline the development process. Together, the methodology and toolchain form a robust framework that enables engineers to efficiently develop interpretable and trustworthy DTH models for the CDTH platform. By integrating domain-specific mechanistic models with AI algorithms, the framework enhances model transparency and reliability. Finally, we validate the approach through a case study on elderly patient care, demonstrating its effectiveness in developing DTH models that meet healthcare and interpretability requirements.
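One common grey-box pattern for combining a mechanistic model with an AI component, as the abstract describes, is to keep an interpretable baseline alongside a learned correction. The sketch below is a generic illustration of that decomposition, not the paper's X-language implementation; the toy models and coefficients are invented for the example.

```python
def hybrid_predict(mechanistic, residual, x):
    # Grey-box prediction: an interpretable mechanistic baseline plus
    # a learned residual correction. Returning both values keeps the
    # physically-grounded part of the answer inspectable.
    base = mechanistic(x)
    return base + residual(x), base

# Hypothetical toy models: a linear physiological baseline and a
# small learned correction (coefficients are illustrative only).
mech = lambda hr: 0.5 * hr + 40.0
corr = lambda hr: 0.02 * hr

pred, base = hybrid_predict(mech, corr, 80.0)
print(pred, base)  # -> 81.6 80.0
```

Because the baseline is reported separately, an engineer can audit how much of each prediction came from the mechanistic model versus the black-box correction.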
Cross-project defect prediction (CPDP) uses labeled data from a source project to assist the prediction of unlabeled projects in a target dataset, which effectively improves prediction performance and has become a research hotspot in software engineering. CPDP can be categorized into homogeneous cross-project defect prediction and heterogeneous cross-project defect prediction (HDP); because HDP does not require the source and target projects to share the same feature space, it is more widely applicable in practical CPDP. Most current HDP methods map the original features into a latent feature space and reduce inter-project variation by transferring domain-independent features, but this transfer ignores domain-related features, which limits the prediction performance of the model. Moreover, the mapped latent features harm the model’s interpretability. This paper therefore proposes a heterogeneous defect prediction method based on feature disentanglement (FD-HDP). Features are disentangled by separate domain-related and domain-independent feature extractors; during training, maximizing a domain adversarial loss guides the extractors to produce accurate domain-related and domain-independent features, improving the interpretability of the model. At prediction time, the weighted sum of the outputs of the domain-related and domain-independent predictors serves as the final prediction, combining both kinds of features and effectively improving prediction performance. Experiments on heterogeneous scenarios constructed from four publicly available defect datasets demonstrate that FD-HDP shows significant advantages over state-of-the-art methods on six metrics.
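The final fusion step, a weighted sum of the two predictors' outputs, can be sketched as follows. The weight `alpha` and the probabilities are illustrative values, not taken from the paper:

```python
def fuse(p_related, p_independent, alpha=0.6):
    # Final defect probability: weighted sum of the domain-related
    # and domain-independent predictors' outputs.
    return alpha * p_related + (1.0 - alpha) * p_independent

# A module the domain-related predictor flags strongly and the
# domain-independent predictor flags weakly.
p = fuse(0.9, 0.5)
print(round(p, 2), p >= 0.5)  # -> 0.74 True
```

Tuning `alpha` trades off how much the final decision leans on project-specific signal versus transferable signal.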
Trajectory prediction is a crucial challenge in autonomous vehicle motion planning and decision-making. However, existing methods have limited ability to accurately capture vehicle dynamics and interactions. To address this issue, this paper proposes a novel approach that extracts vehicle velocity and acceleration, enabling the learning of vehicle dynamics and encoding them as auxiliary information. The resulting VDI-LSTM model incorporates graph convolution and attention mechanisms to capture vehicle interactions from trajectory data and dynamic information. Specifically, a dynamics encoder captures the dynamic information, a dynamic graph represents vehicle interactions, and an attention mechanism enhances the performance of the LSTM and the graph convolution. Extensive experiments, including comparisons with several baselines and ablation studies on real-world highway datasets, demonstrate the effectiveness of the model: VDI-LSTM outperforms all compared baselines, obtaining a 3% improvement in average RMSE over the five prediction steps.
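Extracting velocity and acceleration from raw positions can be sketched with simple finite differences. This is a simplified stand-in for the dynamics-encoder input, and the sampling interval `dt` is an assumption, not the paper's value:

```python
def dynamics_from_positions(xs, dt=0.2):
    # First differences give velocity, second differences give
    # acceleration; both serve as auxiliary dynamic features
    # alongside the raw trajectory.
    v = [(xs[i + 1] - xs[i]) / dt for i in range(len(xs) - 1)]
    a = [(v[i + 1] - v[i]) / dt for i in range(len(v) - 1)]
    return v, a

# A vehicle accelerating along one axis (positions in meters, dt in s).
v, a = dynamics_from_positions([0.0, 1.0, 3.0, 6.0], dt=1.0)
print(v, a)  # -> [1.0, 2.0, 3.0] [1.0, 1.0]
```

In practice the same differencing would be applied per axis of the 2-D highway coordinates before feeding the encoder.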
Distributed Collaborative Machine Learning (DCML) has emerged in artificial-intelligence-empowered edge computing environments, such as the Industrial Internet of Things (IIoT), to process the tremendous volumes of data generated by smart devices. However, parallel DCML frameworks require resource-constrained devices to update entire Deep Neural Network (DNN) models and are vulnerable to reconstruction attacks, while serial DCML frameworks suffer from training-efficiency problems due to their sequential nature. In this paper, we propose a Model Pruning-enabled Federated Split Learning framework (MP-FSL) to reduce resource consumption with a secure and efficient training scheme. Specifically, MP-FSL compresses DNN models by adaptive channel pruning and splits each compressed model into two parts assigned to the client and the server. Meanwhile, MP-FSL adopts a novel aggregation algorithm to aggregate the pruned heterogeneous models. We implement MP-FSL on a real federated learning platform to evaluate its performance. The experimental results show that MP-FSL outperforms state-of-the-art frameworks in model accuracy by up to 1.35%, while reducing storage and computational resource consumption by up to 32.2% and 26.73%, respectively. These results demonstrate that MP-FSL is a comprehensive solution to the challenges faced by DCML, delivering both lower resource consumption and better model performance.
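Channel pruning by filter norm, a common simplification of schemes like the adaptive pruning described above, can be sketched as follows. The fixed `keep_ratio` and the toy weights are assumptions for illustration; MP-FSL adapts the ratio rather than fixing it:

```python
def prune_channels(channel_weights, keep_ratio=0.5):
    # Rank channels by L1 norm of their weights and keep only the
    # strongest fraction; weak channels contribute little output and
    # are dropped to shrink the model.
    norms = [sum(abs(w) for w in ch) for ch in channel_weights]
    k = max(1, int(len(norms) * keep_ratio))
    keep = sorted(range(len(norms)), key=lambda i: norms[i], reverse=True)[:k]
    return sorted(keep)

# Four channels; the two with the largest L1 norms survive.
weights = [[0.1, -0.05], [1.2, 0.8], [0.4, -0.3], [0.9, -1.1]]
print(prune_channels(weights))  # -> [1, 3]
```

After pruning, the surviving channels define the smaller model that is split between client and server.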
The advancement of the Internet of Medical Things (IoMT) has led to the emergence of various health and emotion care services, e.g., health monitoring. To cater to the increasing computational requirements of IoMT services, Mobile Edge Computing (MEC) has emerged as an indispensable technology in smart health. Benefiting from cost-effective deployment, unmanned aerial vehicles (UAVs) equipped with MEC servers under Non-Orthogonal Multiple Access (NOMA) are a promising solution for providing smart health services in proximity to medical devices (MDs). However, the escalating number of MDs and the limited communication resources of UAVs significantly increase transmission latency. Moreover, due to the limited communication range of UAVs, geographically distributed MDs cause workload imbalance among UAVs, which degrades service response delay. To this end, this paper proposes a UAV-enabled Distributed computation Offloading and Power control method with Multi-Agent learning, named DOPMA, for NOMA-based IoMT environments. Specifically, this paper introduces computation and transmission queue models to analyze the dynamic characteristics of task execution latency and energy consumption. Moreover, a credit-assignment-based reward function is designed that considers both system-level rewards and rewards tailored to each MD, and an improved multi-agent deep deterministic policy gradient algorithm is developed to derive offloading and power control decisions independently. Extensive simulations demonstrate that the proposed method outperforms existing schemes, achieving a 7.1% reduction in energy consumption and a 16% decrease in average delay.
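Computation and transmission queues of the kind mentioned above typically evolve by a standard backlog recursion. The sketch below shows that recursion in isolation; the per-slot arrival and service amounts are illustrative, not the paper's model parameters:

```python
def update_queue(backlog, service, arrivals):
    # Standard queue recursion: the service capacity drains the
    # current backlog first, then new arrivals join; the backlog
    # never goes negative.
    return max(backlog - service, 0.0) + arrivals

# Three time slots of (service, arrivals) for one MD's task queue.
q = 5.0
for service, arrivals in [(3.0, 2.0), (4.0, 1.0), (2.0, 0.0)]:
    q = update_queue(q, service, arrivals)
print(q)  # -> 0.0
```

Tracking this backlog per slot is what lets the agents relate their offloading and power decisions to queuing delay.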