Identification of hand gestures using wearable interfaces has gained significant attention in areas such as human–computer interaction, gaming, sign language recognition, rehabilitation, and assistive robotics. This study presents a hybrid deep-learning architecture that integrates convolutional layers with BiLSTM and attention mechanisms to classify upper-limb movements in healthy individuals using four distinct feature sets (F1, F2, F3, F4). A key contribution of this work is the optimization of convolutional hyperparameters, such as the number of filters, using various nature-inspired metaheuristic algorithms, thereby eliminating the need for manual tuning. Extensive experiments conducted on the EMAHA-DB1 dataset demonstrate the effectiveness of the proposed method. Comparative evaluations against several state-of-the-art machine-learning models, namely cubic SVM, LDA, decision tree, and a deep neural network, reveal that our model achieves superior performance across all four feature sets. Specifically, the AVOA-optimized DNN-BiLSTM-Attention model achieved classification accuracies of 99.35% for F1, 99.72% for F2, 94.39% for F3, and 99.19% for the F4 feature set. Furthermore, statistical analysis confirms the model's robustness and significant improvements across all feature sets, highlighting its superiority.
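To make the described architecture concrete, the sketch below shows a minimal CNN-BiLSTM-Attention classifier for windowed EMG input in tf.keras. The window length, channel count, class count, and the conv_filters value (the convolutional hyperparameter that a metaheuristic such as AVOA would tune) are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch (not the authors' exact model) of a CNN-BiLSTM-Attention
# classifier for windowed sEMG sequences, built with tf.keras.
# Assumed (hypothetical) dimensions: 200-sample windows, 8 EMG channels,
# 10 gesture classes; conv_filters stands in for the metaheuristic-tuned value.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(timesteps=200, channels=8, n_classes=10, conv_filters=64):
    inputs = layers.Input(shape=(timesteps, channels))
    # Convolutional front-end: conv_filters is the hyperparameter the
    # nature-inspired search would optimize instead of manual tuning.
    x = layers.Conv1D(conv_filters, kernel_size=5, padding="same",
                      activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # Temporal modelling with a bidirectional LSTM over the conv features.
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # Self-attention over the BiLSTM outputs, then pooled for classification.
    x = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```

In such a setup, the metaheuristic would evaluate candidate values of conv_filters (and possibly other layer hyperparameters) by training or partially training the model and using validation accuracy as the fitness score.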