Author affiliations: Faculty of Computing and Information Technology, University of the Punjab, Lahore, Pakistan; School of Science, Engineering and Environment, University of Salford, Manchester, U.K.; Faculty of Computing, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
Publication: IEEE Access
Year/Volume: 2025, Vol. 13
Pages: 104106-104119
Funding: British Council under the Global Partnerships Program
Keywords: Human activity recognition; Feature extraction; Accuracy; Deep learning; Wearable sensors; Real-time systems; Performance evaluation; Data models; Convolutional neural networks; Smart phones
Abstract: Human activity recognition (HAR) plays a pivotal role in applications such as healthcare monitoring, fitness tracking, and smart homes. Multi-modal sensor data from wearable devices offers diverse perspectives on human motion, enhancing recognition accuracy and robustness. However, integrating these modalities poses challenges due to sensor heterogeneity and variability in placement. This study examines the role of multi-modalities in HAR using a hybrid convolutional multi-modal attention network (HCMMA-Net), designed to exploit spatial and temporal dependencies in sensor data. We evaluate the model on two benchmark datasets: Cogage, achieving an accuracy of 93.94%, and WISDM, achieving 99.29%, demonstrating strong generalizability across varied sensor configurations. Additionally, we present a newly collected multi-modal dataset, HumcareV1.0, comprising a range of activities in smart-home-like scenarios. On this real-world dataset, HCMMA-Net attains an accuracy of 97.56%, highlighting its effectiveness in capturing subtle behavioral nuances in practical environments. The model generalizes robustly across complex activity patterns and sensor configurations, underscoring the significance of multi-modal integration in advancing HAR systems. These findings highlight the potential of our approach for deployment in real-time, context-aware smart environments.
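A minimal sketch of how such a hybrid convolutional attention model for multi-modal sensor input might be structured is given below, assuming PyTorch. Only the high-level design stated in the abstract (per-modality convolutions for spatial features, attention for temporal dependencies, multi-modal fusion) is taken from this record; all layer sizes, the attention-head count, and the accelerometer/gyroscope two-modality example are illustrative assumptions, not the published HCMMA-Net configuration.

# Hedged sketch of a hybrid convolutional + attention HAR model.
# The architecture below is an illustrative assumption inspired by the
# abstract, NOT the authors' published HCMMA-Net.
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """1-D convolutional feature extractor for one sensor modality."""
    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, channels, time)
        return self.net(x)         # (batch, hidden, time)

class HybridConvAttentionHAR(nn.Module):
    """Per-modality CNNs -> token concatenation -> self-attention -> classifier."""
    def __init__(self, modality_channels, num_classes, hidden=64, heads=4):
        super().__init__()
        self.branches = nn.ModuleList(
            ConvBranch(c, hidden) for c in modality_channels
        )
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, inputs):     # inputs: list of (batch, channels, time)
        # Extract spatial features per modality, then treat each timestep
        # of each modality as a token for the attention layer.
        feats = [branch(x) for branch, x in zip(self.branches, inputs)]
        seq = torch.cat(feats, dim=2).transpose(1, 2)  # (batch, tokens, hidden)
        fused, _ = self.attn(seq, seq, seq)            # temporal dependencies
        return self.classifier(fused.mean(dim=1))      # pool tokens, classify

# Hypothetical usage: accelerometer (3 axes) + gyroscope (3 axes),
# 128-sample windows, 6 activity classes.
model = HybridConvAttentionHAR([3, 3], num_classes=6)
acc = torch.randn(8, 3, 128)   # (batch, axes, window length)
gyr = torch.randn(8, 3, 128)
logits = model([acc, gyr])     # shape: (8, 6)

Concatenating the per-modality feature sequences as attention tokens is one common fusion choice for heterogeneous wearable sensors; the paper itself may fuse modalities differently.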