Author affiliation: Univ Malaya, Fac Comp Sci & Informat Technol, Ctr Image & Signal Proc, Kuala Lumpur 50603, Malaysia
Publication: NEURAL COMPUTING & APPLICATIONS
Year/Volume/Issue: 2016, Vol. 27, No. 4
Pages: 845-856
Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]
Funding: Fundamental Research Grant Scheme (FRGS) grant from the Ministry of Education Malaysia [FP027-2013A, H-00000-60010-E13110]
Keywords: Human motion analysis; Video surveillance system; Computer vision; Fuzzy qualitative reasoning
Abstract: The integration of advanced human motion analysis techniques into low-cost video cameras has emerged for consumer applications, particularly in video surveillance systems. These smart, inexpensive devices provide practical solutions for improving public safety and homeland security through the capability to understand human behaviour automatically. In this sense, an intelligent video surveillance system should not be constrained to a single viewpoint of a person, since a person is naturally not restricted to performing an action from a fixed camera viewpoint. To achieve this objective, many state-of-the-art approaches require information from multiple cameras in their processing. This is an impractical solution in terms of both feasibility and computational complexity. First, it is very difficult to find an open space in a real environment with the perfect overlap needed for multi-camera calibration. Second, processing information from multiple cameras imposes a heavy computational burden. Consequently, interest has surged in single-camera approaches, with notable work on the concept of view-specific action recognition. However, in that work the viewpoints are assumed a priori. In this paper, we extend it by proposing a viewpoint estimation framework in which a novel human contour descriptor, namely the fuzzy qualitative human contour, is extracted from the fuzzy qualitative Poisson human model for viewpoint analysis. Clustering algorithms are used to learn and classify the viewpoints. In addition, our system is integrated with the capability to distinguish front and rear views. Experimental results on the challenging IXMAS human action dataset showed the reliability and effectiveness of the proposed viewpoint estimation framework.
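The abstract's core idea is to classify camera viewpoints by clustering human contour descriptors. A minimal illustrative sketch of that clustering step, assuming toy 2-D feature vectors in place of the paper's fuzzy qualitative human contour descriptor (the `kmeans` helper and the synthetic points are hypothetical, not from the paper):

```python
# Hypothetical sketch: viewpoint learning by clustering descriptor vectors.
# A plain k-means in pure Python stands in for the clustering algorithms the
# paper uses; real inputs would be fuzzy qualitative human contour descriptors,
# not these synthetic 2-D points.
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster tuples of floats into k groups; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialise centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

if __name__ == "__main__":
    # two well-separated synthetic "viewpoint" groups
    pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
           (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
    centers, clusters = kmeans(pts, 2)
    print(sorted(len(c) for c in clusters))
```

At test time, a new descriptor would be assigned to the nearest learned center, yielding the estimated viewpoint label.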