Advances in Intelligent and Soft Computing

Extraction of visual and acoustic features of the driver for monitoring driver ergonomics applied to extended driver assistance systems

Authors: Vankayalapati, H.D.; Anne, K.R.; Kyamakya, K.

Affiliations: Institute of Smart System Technologies, Transportation Informatics Research Group, University of Klagenfurt, Klagenfurt, Austria; Department of Information Technology, TIFAC-CORE in Telematics, VR Siddhartha Engineering College, Vijayawada, India

Publication: Advances in Intelligent and Soft Computing (Adv. Intell. Soft Comput.)

Year/Volume: 2010, Vol. 81

Pages: 83-94


Keywords: Ergonomics

Abstract: The National Highway Traffic Safety Administration (NHTSA) estimates that in the USA alone approximately 100,000 crashes each year are caused primarily by driver drowsiness or fatigue. The major cause of inattentiveness has been found to be a deficit in what we call in this paper an extended view of ergonomics, i.e. the extended ergonomics status of the driving process. This deficit is multidimensional, as it includes aspects such as drowsiness (sleepiness), fatigue (lack of energy) and emotions/stress (for example sadness, anger, joy, pleasure, despair and irritation). Different approaches have been proposed for monitoring driver states, especially drowsiness and fatigue, using visual features of the driver such as head movement patterns, eyelid movements, facial expressions, or all of these together. The effectiveness of such an approach depends on the quality of the extracted features, and on the efficiency and responsiveness of the classification algorithm. In this work, we propose using acoustic information along with visual features to increase the robustness of the emotion/stress measurement system. In terms of the acoustic signals, this work identifies the features appropriate to the driving situation and correlates them to parameters/dimensions of the extended ergonomics status vector. Both prosodic and phonetic features of the acoustic signal are taken into account for the emotion recognition. In this paper, a linear discriminant analysis (LDA)-based classification method using the Hausdorff distance measure is proposed for classifying the different emotional states. Experimental evaluation based on the Berlin voice database shows that the proposed method achieves 85% recognition accuracy in speaker-independent emotion recognition experiments. © 2010 Springer-Verlag Berlin Heidelberg.
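The classification step described in the abstract, an LDA projection followed by Hausdorff-distance matching against per-class feature sets, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' implementation: the function names, the number of features, and the two-class toy setup are all invented for the example.

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher LDA: return a projection matrix (features x n_components)
    maximizing between-class over within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # eigenvectors of pinv(Sw) @ Sb, sorted by decreasing eigenvalue
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs.real[:, order[:n_components]]

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (rows = points)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def classify(test_set, class_sets):
    """Assign the label whose training set is nearest in Hausdorff distance."""
    return min(class_sets, key=lambda c: hausdorff(test_set, class_sets[c]))

# Toy demo: two well-separated synthetic "emotion" classes in 4-D feature space.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.3, (20, 4))
X1 = rng.normal(3.0, 0.3, (20, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

W = lda_projection(X, y, 1)
class_sets = {0: X0 @ W, 1: X1 @ W}
probe = rng.normal(3.0, 0.3, (5, 4)) @ W  # a held-out utterance's feature set
print("predicted class:", classify(probe, class_sets))
```

Classifying a *set* of projected feature vectors per utterance, rather than a single averaged vector, is what makes a set-to-set measure like the Hausdorff distance applicable here.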
