Body joint modeling and human pose reconstruction provide precise motion and quantitative geometric information about human dynamics. The rich motion information obtained from human pose estimation plays an important role in a wide range of digital twin and connected health applications. However, current approaches struggle to extract the joints' spatial-temporal correlations at different levels, because poses vary in complexity and move different joints in different ways. The typical conventional transformer is therefore not adaptive and barely meets this requirement. In this paper, we propose the Body Joint Interactive transFormers (BJIFormer) to extract multi-level spatial-temporal information about the joints. The design enables the model to learn the correlations among joints within each body part across frames and to propagate the extracted information across body parts through shared joints. The multi-level body joint interactive scheme also improves efficiency by restricting self-attention computation to individual body parts and connecting the parts through the torso. The proposed interactive approach explores the spatial-temporal correlation following a hierarchical paradigm and effectively estimates and reconstructs 3D human poses.
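To make the part-restricted attention idea concrete, here is a minimal sketch (not the authors' code) of self-attention computed only within body parts, where a hypothetical joint grouping shares the torso joints across all parts so that information propagates between parts; the 17-joint layout, layer sizes, and averaging of shared joints are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical 17-joint grouping: torso/head joints 0-4 appear in every part,
# so they act as the shared joints that connect the parts.
PARTS = {
    "left_arm":  [0, 1, 2, 3, 4, 5, 7, 9],
    "right_arm": [0, 1, 2, 3, 4, 6, 8, 10],
    "left_leg":  [0, 1, 2, 3, 4, 11, 13, 15],
    "right_leg": [0, 1, 2, 3, 4, 12, 14, 16],
}

class PartRestrictedAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (batch, frames, joints, dim)
        b, t, j, d = x.shape
        out = torch.zeros_like(x)
        count = torch.zeros(j, device=x.device)
        for idx in PARTS.values():
            tokens = x[:, :, idx, :].reshape(b, t * len(idx), d)   # space-time tokens of one part
            attended, _ = self.attn(tokens, tokens, tokens)        # self-attention restricted to this part
            out[:, :, idx, :] += attended.reshape(b, t, len(idx), d)
            count[idx] += 1
        return out / count.view(1, 1, j, 1)      # shared torso joints average their per-part results

x = torch.randn(2, 9, 17, 64)                    # 2 clips, 9 frames, 17 joints, 64-d features
print(PartRestrictedAttention()(x).shape)        # torch.Size([2, 9, 17, 64])
```

Because each attention call only sees one part's tokens, the quadratic cost of self-attention is paid over short sequences, which is the source of the efficiency gain the abstract describes.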
Nowadays, research on session-based recommender systems (SRSs) is one of the hot spots in the recommendation domain. Existing methods make recommendations based on the user's current intention (also called short-term preference) during a session, often overlooking the specific preferences associated with these intentions. In reality, users usually exhibit diverse preferences for different intentions, and even for the same intention, individual preferences can vary significantly between users. As users interact with items throughout a session, their intentions can shift accordingly. To enhance recommendation quality, it is crucial not only to consider the user's intentions but also to dynamically learn their varying preferences as these intentions change. In this paper, we propose a novel Intention-sensitive Preference Learning Network (IPLN) comprising three main modules: an intention recognizer, a preference detector, and a prediction layer. Specifically, the intention recognizer infers the user's underlying intention within the current session by analyzing complex relationships among items. Based on the acquired intention, the preference detector learns the intention-specific preference by selectively integrating latent features from items in the user's historical sessions. In addition, the user's general preference is used to refine the obtained preference and reduce the potential noise carried over from historical records. Ultimately, the refined preference and the intention jointly guide the next-item recommendation in the prediction layer. To demonstrate the effectiveness of the proposed IPLN, we perform extensive experiments on two real-world datasets. The results show that IPLN outperforms other state-of-the-art models.
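The following is a minimal sketch (an assumed architecture, not the authors' implementation) of how the three modules could fit together: the intention recognizer attention-pools the current session, the preference detector weights historical items by their match to that intention and fuses the result with a crude general preference, and the prediction layer scores all candidate items. All module names, layer choices, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IPLNSketch(nn.Module):
    def __init__(self, num_items=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)
        self.intent_q = nn.Parameter(torch.randn(dim))    # learned query for intention pooling
        self.gate = nn.Linear(2 * dim, dim)               # fuses intention-specific and general preference

    def forward(self, current_session, history):          # (B, Lc) and (B, Lh) item ids
        cur = self.emb(current_session)                    # (B, Lc, D)
        hist = self.emb(history)                           # (B, Lh, D)
        # Intention recognizer: attention-pool the current session with a learned query.
        a = F.softmax(cur @ self.intent_q, dim=1)          # (B, Lc)
        intention = (a.unsqueeze(-1) * cur).sum(1)         # (B, D)
        # Preference detector: weight historical items by their affinity to the intention.
        w = F.softmax((hist * intention.unsqueeze(1)).sum(-1), dim=1)
        intent_pref = (w.unsqueeze(-1) * hist).sum(1)      # intention-specific preference
        general_pref = hist.mean(1)                        # crude stand-in for the general preference
        pref = torch.tanh(self.gate(torch.cat([intent_pref, general_pref], -1)))
        # Prediction layer: score every item against intention + refined preference.
        return (intention + pref) @ self.emb.weight.t()    # (B, num_items) logits

model = IPLNSketch()
scores = model(torch.randint(0, 1000, (4, 10)), torch.randint(0, 1000, (4, 30)))
print(scores.shape)                                        # torch.Size([4, 1000])
```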
The rapid increase in volume and complexity of biomedical data requires changes in research, communication, and clinical practices. This includes learning how to effectively integrate automated analysis with high–data density visualizations that clearly express complex phenomena. In this review, we summarize key principles and resources from data visualization research that help address this difficult challenge. We then survey how visualization is being used in a selection of emerging biomedical research areas, including three-dimensional genomics, single-cell RNA sequencing (RNA-seq), the protein structure universe, phosphoproteomics, augmented reality–assisted surgery, and metagenomics. While specific research areas need highly tailored visualizations, there are common challenges that can be addressed with general methods and strategies. Also common, however, are poor visualization practices. We outline ongoing initiatives aimed at improving visualization practices in biomedical research via better tools, peer-to-peer learning, and interdisciplinary collaboration with computer scientists, science communicators, and graphic designers. These changes are revolutionizing how we see and think about our data.
The advancement of the Internet of Medical Things (IoMT) has led to the emergence of various health and emotion care services, e.g., health monitoring. To cater to the increasing computational requirements of IoMT services, Mobile Edge Computing (MEC) has emerged as an indispensable technology in smart health. Benefiting from cost-effective deployment, unmanned aerial vehicles (UAVs) equipped with MEC servers and operating under Non-Orthogonal Multiple Access (NOMA) have emerged as a promising way to provide smart health services in proximity to medical devices (MDs). However, the escalating number of MDs and the limited communication resources of UAVs give rise to a significant increase in transmission latency. Moreover, due to the limited communication range of UAVs, geographically distributed MDs cause workload imbalance among UAVs, which deteriorates the service response delay. To this end, this paper proposes a UAV-enabled Distributed computation Offloading and Power control method with Multi-Agent, named DOPMA, for NOMA-based IoMT environments. Specifically, this paper introduces computation and transmission queue models to analyze the dynamic characteristics of task execution latency and energy consumption. Moreover, a credit-assignment-based reward function is designed that considers both system-level rewards and rewards tailored to each MD, and an improved multi-agent deep deterministic policy gradient algorithm is developed to derive offloading and power control decisions independently. Extensive simulations demonstrate that the proposed method outperforms existing schemes, achieving a \(7.1\%\) reduction in energy consumption and a \(16\%\) decrease in average delay.
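As a hedged sketch of the credit-assignment idea described above (not the paper's exact reward function): each medical device, acting as one agent, receives a mix of a shared system-level reward and a reward tailored to its own latency and energy outcome. The weighting factors `alpha` and `beta` below are illustrative assumptions.

```python
def per_agent_rewards(delays, energies, alpha=0.5, beta=0.7):
    """delays, energies: per-device task latency (s) and energy (J) for one step."""
    n = len(delays)
    # Per-device cost trades off latency against energy with weight beta.
    costs = [beta * d + (1 - beta) * e for d, e in zip(delays, energies)]
    system_reward = -sum(costs) / n              # shared term: negative average system cost
    # Each agent's reward blends the system-level term with its own tailored term.
    return [alpha * system_reward - (1 - alpha) * c for c in costs]

# Example: three devices; the device with the worst latency/energy gets the lowest reward.
print(per_agent_rewards(delays=[0.12, 0.45, 0.08], energies=[0.9, 1.6, 0.5]))
```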
Anchor-based Multi-view Subspace Clustering (AMSC) has become a favourable tool for large-scale multi-view clustering. However, current AMSC approaches still have several limitations. First, they typically recover the anchor graph structure in the original linear space, restricting their feasibility in nonlinear scenarios. Second, they usually overlook the potential benefits of jointly capturing inter-view and intra-view information to enhance anchor representation learning. Third, these approaches mostly perform anchor-based subspace learning with a specific matrix norm, neglecting the latent high-order correlation across different views. To overcome these limitations, this paper presents an efficient and effective approach termed Large-scale Tensorized Multi-view Kernel Subspace Clustering (LTKMSC). Different from existing AMSC approaches, LTKMSC exploits both inter-view and intra-view awareness for anchor-based representation building. Concretely, low-rank tensor learning is leveraged to capture the high-order correlation (i.e., the inter-view complementary information) among distinct views, upon which the \(l_{1,2}\) norm is imposed to explore the intra-view anchor graph structure in each view. Moreover, kernel learning is leveraged to explore the nonlinear anchor-sample relationships embedded in multiple views. With the unified objective function formulated, an efficient optimization algorithm with low computational complexity is further designed. Extensive experiments on a variety of multi-view datasets confirm the efficiency and effectiveness of our approach compared with other competitive approaches.
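One plausible shape of such a unified objective is sketched below; this is an assumed formulation for illustration, not the paper's exact model. Here \(X^{(v)}\) denotes the \(v\)-th view's data, \(A^{(v)}\) its anchors, \(\phi\) a kernel-induced feature map, \(Z^{(v)}\) the per-view anchor graph, \(\Phi\) stacks the graphs into a tensor, \(\|\cdot\|_{\circledast}\) is a tensor nuclear norm, and \(\lambda_1, \lambda_2\) are trade-off parameters.

```latex
% Assumed sketch: kernelized anchor self-expression per view, an l_{1,2}
% regularizer on each anchor graph, and a tensor nuclear norm coupling the views.
\begin{equation}
\min_{\{Z^{(v)}\}_{v=1}^{V}}
  \sum_{v=1}^{V} \left\| \phi\!\left(X^{(v)}\right) - \phi\!\left(A^{(v)}\right) Z^{(v)} \right\|_F^2
  + \lambda_1 \sum_{v=1}^{V} \left\| Z^{(v)} \right\|_{1,2}
  + \lambda_2 \left\| \mathcal{Z} \right\|_{\circledast},
\qquad
\mathcal{Z} = \Phi\!\left(Z^{(1)}, \dots, Z^{(V)}\right).
\end{equation}
```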
Graph pattern mining is essential for deciphering complex networks. In the real world, graphs are dynamic and evolve over time, necessitating updates to mined patterns to reflect these changes. Traditional methods use fine-grained incremental computation to avoid full re-mining after each update, which improves speed but often overlooks potential gains from examining inter-update interactions holistically, thus missing out on overall efficiency gains. In this paper, we introduce Cheetah, a dynamic graph mining system that processes updates in a coarse-grained manner by leveraging exploration domains. These domains exploit the community structure of real-world graphs to uncover data-reuse opportunities typically missed by existing approaches. Exploration domains, which encapsulate extensive portions of the graph relevant to updates, allow multiple updates to explore the same regions efficiently. Cheetah dynamically constructs these domains using a management module that identifies and maintains areas of redundancy as the graph changes. By grouping updates within these domains and employing a neighbor-centric expansion strategy, Cheetah minimizes redundant data accesses. Our evaluation of Cheetah on five real-world datasets shows that it outperforms current leading systems by an average factor of 2.63×.
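A minimal sketch of the coarse-grained idea (illustrative only, not Cheetah's implementation): each update seeds a k-hop "exploration domain" via neighbor-centric expansion, and updates whose domains overlap are batched so the shared region of the graph is explored once per batch rather than once per update. The toy graph, hop count, and greedy merging below are assumptions.

```python
def k_hop_domain(adj, seeds, k=1):
    """Vertices reachable from `seeds` within k hops (neighbor-centric expansion)."""
    frontier, domain = set(seeds), set(seeds)
    for _ in range(k):
        frontier = {w for v in frontier for w in adj.get(v, ())} - domain
        domain |= frontier
    return domain

def group_updates(adj, updates, k=1):
    """Greedily merge edge updates whose exploration domains overlap."""
    groups = []                                   # each entry: [domain, updates]
    for u, v in updates:
        dom = k_hop_domain(adj, (u, v), k)
        for g in groups:
            if g[0] & dom:                        # overlapping region -> reuse it for this update too
                g[0] |= dom
                g[1].append((u, v))
                break
        else:
            groups.append([dom, [(u, v)]])
    return groups

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3, 5], 5: [4]}
for dom, batch in group_updates(adj, [(0, 1), (1, 2), (4, 5)]):
    print(sorted(dom), "->", batch)               # two domains cover the three updates
```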
ISBN (eBook): 9789819602285
ISBN (Print): 9789819602278; 9789819602308
This book presents the best selected research papers from the Third International Conference on Computing, Communication, Security & Intelligent Systems (IC3SIS 2024), organized by SCMS School of Engineering and Technology, Kochi, on July 11–12, 2024. It discusses the latest technologies in communication and intelligent systems, covering various areas of computing such as advanced computing, communication and networking, intelligent systems and analytics, 5G and IoT, soft computing, and cybersecurity. Featuring work by leading researchers and technocrats, the book serves as a valuable reference resource for young researchers, academics, and industry practitioners.