The rapid advancements in big data and the Internet of Things (IoT) have significantly accelerated the digital transformation of medical institutions, leading to the widespread adoption of Digital Twin Healthcare (DTH). The Cloud DTH Platform (CDTH) serves as a cloud-based framework that integrates DTH models, healthcare resources, patient data, and medical services. By leveraging real-time data from medical devices, the CDTH platform enables intelligent healthcare services such as disease prediction and medical resource optimization. However, the platform functions as a system of systems (SoS), comprising interconnected yet independent healthcare services. This complexity is further compounded by the integration of both black-box AI models and domain-specific mechanistic models, which poses challenges in ensuring the interpretability and trustworthiness of DTH models. To address these challenges, we propose a Model-Based Systems Engineering (MBSE)-driven DTH modeling methodology derived from systematic requirements and functional analyses. To implement this methodology effectively, we introduce a DTH model development approach based on the X language, along with a comprehensive toolchain designed to streamline the development process. Together, the methodology and toolchain form a robust framework that enables engineers to efficiently develop interpretable and trustworthy DTH models for the CDTH platform. By integrating domain-specific mechanistic models with AI algorithms, the framework enhances model transparency and reliability. Finally, we validate our approach through a case study involving elderly patient care, demonstrating its effectiveness in supporting the development of DTH models that meet healthcare and interpretability requirements.
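The following is a minimal sketch of the hybrid-modeling idea described in this abstract: pairing an interpretable mechanistic sub-model with a data-driven correction so that every prediction remains traceable. The heart-rate example, the linear residual, and all names are illustrative assumptions; the paper's actual DTH models are written in the X language, not Python.

```python
# Minimal sketch of a hybrid DTH model component that combines a white-box
# mechanistic sub-model with a black-box AI correction. All names and formulas
# here are illustrative assumptions, not the paper's actual X-language models.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class HybridDTHModel:
    mechanistic: Callable[[Dict[str, float]], float]   # white-box, domain-derived
    correction: Callable[[Dict[str, float]], float]    # black-box residual model

    def predict(self, state: Dict[str, float]) -> Dict[str, float]:
        base = self.mechanistic(state)
        residual = self.correction(state)
        # Reporting both terms keeps the prediction traceable (interpretability).
        return {"mechanistic": base, "residual": residual, "prediction": base + residual}


# Illustrative mechanistic rule: maximum heart rate estimated from age.
mechanistic_hr = lambda s: 220.0 - s["age"]
# Stand-in for a trained AI residual model (e.g., fitted to wearable-device data).
ai_residual = lambda s: -0.1 * s["resting_hr"]

model = HybridDTHModel(mechanistic_hr, ai_residual)
print(model.predict({"age": 72.0, "resting_hr": 68.0}))
```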
The advancement of the Internet of Medical Things (IoMT) has led to the emergence of various health and emotion care services, e.g., health monitoring. To cater to the increasing computational requirements of IoMT services, Mobile Edge Computing (MEC) has emerged as an indispensable technology in smart health. Benefiting from their cost-effective deployment, unmanned aerial vehicles (UAVs) equipped with MEC servers and operating under Non-Orthogonal Multiple Access (NOMA) have emerged as a promising solution for providing smart health services in proximity to medical devices (MDs). However, the escalating number of MDs and the limited communication resources of UAVs give rise to a significant increase in transmission latency. Moreover, owing to the limited communication range of UAVs, geographically distributed MDs lead to workload imbalance among UAVs, which degrades the service response delay. To this end, this paper proposes a UAV-enabled Distributed computation Offloading and Power control method with Multi-Agent learning, named DOPMA, for NOMA-based IoMT environments. Specifically, this paper introduces computation and transmission queue models to analyze the dynamic characteristics of task execution latency and energy consumption. Moreover, a reward function based on a credit assignment scheme is designed, considering both system-level rewards and rewards tailored to each MD, and an improved multi-agent deep deterministic policy gradient algorithm is developed to derive offloading and power control decisions independently. Extensive simulations demonstrate that the proposed method outperforms existing schemes, achieving a 7.1% reduction in energy consumption and a 16% decrease in average delay.
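As a rough illustration of the credit-assignment idea mentioned in this abstract, the sketch below blends a shared system-level reward with a per-MD reward so each agent is credited for its own contribution. The weights, the cost terms, and the blending form are assumptions for illustration, not the paper's exact reward design.

```python
# Sketch of a credit-assignment-style reward for multi-agent offloading:
# each medical device (MD) agent receives a mix of the system-level reward
# and its own local reward. Weights and cost terms are illustrative only.
from typing import List


def per_md_reward(delay: float, energy: float,
                  w_delay: float = 0.7, w_energy: float = 0.3) -> float:
    """Local reward for one MD: lower task delay and energy cost is better."""
    return -(w_delay * delay + w_energy * energy)


def system_reward(delays: List[float], energies: List[float]) -> float:
    """System-level reward: average cost across all MDs served by the UAVs."""
    return sum(per_md_reward(d, e) for d, e in zip(delays, energies)) / len(delays)


def credit_assigned_reward(i: int, delays: List[float], energies: List[float],
                           beta: float = 0.5) -> float:
    """Blend the shared system reward with MD i's own reward so that each
    agent's learning signal reflects its individual contribution."""
    return beta * system_reward(delays, energies) + (1 - beta) * per_md_reward(delays[i], energies[i])


# Example: three MDs with measured delays (s) and energy costs (J).
delays = [0.12, 0.30, 0.08]
energies = [0.9, 1.4, 0.6]
print([credit_assigned_reward(i, delays, energies) for i in range(3)])
```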
Collaborative Filtering (CF) is a pivotal research area in recommender systems that capitalizes on collaborative similarities between users and items to provide personalized recommendations. Motivated by the remarkable achievements of node embedding-based Graph Neural Networks (GNNs), we explore the upper bounds of expressiveness inherent to embedding-based methodologies and tackle the resulting limitations by reframing the CF task as a graph signal processing problem. To this end, we propose PolyCF, a flexible graph signal filter that leverages polynomial graph filters to process interaction signals. PolyCF is capable of capturing spectral features across multiple eigenspaces through a series of Generalized Gram filters and can approximate the optimal polynomial response function for recovering missing interactions. A graph optimization objective and a pair-wise ranking objective are jointly used to optimize the parameters of the convolution kernel. Experiments on three widely adopted datasets demonstrate the superiority of PolyCF over state-of-the-art CF methods.
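To make the graph-signal-processing framing concrete, here is a small NumPy sketch of polynomial graph filtering for CF: missing interactions are scored by applying a polynomial of a normalized item-item Gram operator to the observed interaction matrix. The normalization and the fixed coefficients are assumptions for illustration; PolyCF's Generalized Gram filters and learned response are more elaborate.

```python
# Sketch of polynomial graph filtering for collaborative filtering: score
# unobserved interactions with sum_k theta_k * R @ G^k, where G is a
# normalized item-item Gram operator. Coefficients here are fixed for
# illustration rather than learned as in PolyCF.
import numpy as np

# Toy binary user-item interaction matrix R (4 users x 5 items).
R = np.array([[1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0]], dtype=float)

# Symmetrically normalized interactions, then an item-item Gram operator.
d_user = R.sum(axis=1, keepdims=True) + 1e-8
d_item = R.sum(axis=0, keepdims=True) + 1e-8
R_norm = R / np.sqrt(d_user) / np.sqrt(d_item)
G = R_norm.T @ R_norm

# Polynomial filter response applied to the interaction signal.
theta = [0.2, 0.5, 0.3]            # illustrative coefficients (would be learned)
scores = np.zeros_like(R)
G_power = np.eye(G.shape[0])
for theta_k in theta:
    scores += theta_k * (R @ G_power)
    G_power = G_power @ G

# Rank unobserved items for user 0 by the filtered scores.
unseen = np.where(R[0] == 0)[0]
print(sorted(unseen, key=lambda j: -scores[0, j]))
```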
Large Language Models (LLMs) have shown impressive In-Context Learning (ICL) ability in code generation. LLMs take a prompt context consisting of a few demonstration examples and a new requirement as input, and output new programs without any parameter update. Existing studies have found that the performance of ICL-based code generation heavily depends on the quality of the demonstration examples, which has motivated research on demonstration example selection: given a new requirement, a few demonstration examples are selected from a candidate pool, and the LLM is expected to learn the pattern hidden in these selected examples. Existing approaches mostly select examples randomly or by heuristics. However, the distribution of randomly selected examples usually varies greatly, making the performance of LLMs less robust, while heuristic approaches retrieve examples by considering only the textual similarity of requirements, leading to sub-optimal performance. To fill this gap, we propose a Large language model-Aware selection approach for In-context-Learning-based code generation, named LAIL. LAIL uses LLMs themselves to select examples: it requires the LLM to label a candidate example as a positive example or a negative example for a requirement. Positive examples are helpful for LLMs to generate correct programs, while negative examples are trivial and should be ignored. Based on the labeled positive and negative data, LAIL trains a model-aware retriever to learn the preference of LLMs and select the demonstration examples that LLMs need. During inference, given a new requirement, LAIL uses the trained retriever to select a few examples and feeds them into the LLM to generate the desired program. We apply LAIL to four widely used LLMs and evaluate it on five code generation datasets. Extensive experiments demonstrate that LAIL outperforms the state-of-the-art (SOTA) baselines by 11.58%, 3.33%, and 5.07% on CodeGen-Multi-16B, and by 1.32%, 2.29%, and 1.20% on CodeLlama-3
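The sketch below outlines the selection flow this abstract describes: the LLM labels candidate examples as positive or negative, a retriever is trained on those labels, and at inference the retriever picks the top-k examples to place in the ICL prompt. All names (label_with_llm, Retriever, generate) and the prompt format are hypothetical placeholders, not LAIL's actual API.

```python
# Sketch of an LLM-aware example-selection pipeline: (1) label candidates with
# the LLM, (2) train a retriever on those labels, (3) retrieve top-k examples
# at inference and build the ICL prompt. Names and structure are assumptions.
from typing import List, Tuple


def label_with_llm(requirement: str, candidate: Tuple[str, str]) -> int:
    """Ask the LLM whether including `candidate` (requirement, program) helps it
    solve `requirement`; return 1 for positive, 0 for negative. Stubbed here."""
    raise NotImplementedError("query the LLM and check the generated program")


class Retriever:
    """Placeholder model-aware retriever trained on LLM-labeled pairs."""

    def fit(self, pairs: List[Tuple[str, Tuple[str, str]]], labels: List[int]) -> None:
        ...  # e.g., fine-tune a bi-encoder with a contrastive objective

    def top_k(self, requirement: str, pool: List[Tuple[str, str]], k: int = 4) -> List[Tuple[str, str]]:
        ...  # return the k candidates the LLM is predicted to prefer


def generate(llm, requirement: str, retriever: Retriever,
             pool: List[Tuple[str, str]], k: int = 4) -> str:
    """Build an ICL prompt from the retrieved examples and query the LLM."""
    examples = retriever.top_k(requirement, pool, k)
    prompt = "\n\n".join(f"# Requirement: {r}\n{code}" for r, code in examples)
    prompt += f"\n\n# Requirement: {requirement}\n"
    return llm.complete(prompt)  # hypothetical LLM client call
```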