
Holmes-VAU: Towards Long-term Video Anomaly Understanding at Any Granularity

Authors: Zhang, Huaxin; Xu, Xiaohao; Wang, Xiang; Zuo, Jialong; Huang, Xiaonan; Gao, Changxin; Zhang, Shanjun; Yu, Li; Sang, Nong

Affiliations: Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China; University of Michigan, Ann Arbor, United States; Kanagawa University, Japan

Published in: arXiv

Year: 2024


Subject: Visual languages

Abstract: How can we enable models to comprehend video anomalies occurring over varying temporal scales and contexts? Traditional Video Anomaly Understanding (VAU) methods focus on frame-level anomaly prediction, often missing the interpretability of complex and diverse real-world anomalies. Recent multimodal approaches leverage visual and textual data but lack hierarchical annotations that capture both short-term and long-term anomalies. To address this challenge, we introduce HIVAU-70k, a large-scale benchmark for hierarchical video anomaly understanding across any granularity. We develop a semi-automated annotation engine that efficiently scales high-quality annotations by combining manual video segmentation with recursive free-text annotation using large language models (LLMs). This results in over 70,000 multi-granular annotations organized at clip-level, event-level, and video-level segments. For efficient anomaly detection in long videos, we propose the Anomaly-focused Temporal Sampler (ATS). ATS integrates an anomaly scorer with a density-aware sampler to adaptively select frames based on anomaly scores, ensuring that the multimodal LLM concentrates on anomaly-rich regions, which significantly enhances both efficiency and accuracy. Extensive experiments demonstrate that our hierarchical instruction data markedly improves anomaly comprehension. The integrated ATS and visual-language model outperform traditional methods in processing long videos. Our benchmark and model are publicly available at https://***/pipixin321/HolmesVAU. Copyright © 2024, The Authors. All rights reserved.
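The density-aware sampling idea behind ATS can be illustrated with a small sketch. This is hypothetical code, not the paper's implementation: it assumes per-frame anomaly scores are already available (e.g., from an anomaly scorer) and uses inverse-transform sampling over the cumulative score mass, so that frame selection is denser in anomaly-rich regions while low-score spans remain sparsely covered. The function name `anomaly_focused_sample` and the `eps` smoothing constant are assumptions for illustration.

```python
def anomaly_focused_sample(scores, n_samples):
    """Pick frame indices with sampling density proportional to anomaly score.

    Sketch of a density-aware sampler: quantiles are placed evenly over the
    cumulative anomaly-score mass, so high-score regions receive more of the
    sampled frames.
    """
    eps = 1e-6  # keeps zero-score regions reachable (uniform fallback mass)
    # Build the cumulative score mass over frames.
    cum = []
    total = 0.0
    for s in scores:
        total += s + eps
        cum.append(total)
    # Inverse-transform sampling: evenly spaced quantiles over the total mass.
    indices = []
    j = 0
    for k in range(n_samples):
        target = (k + 0.5) * total / n_samples
        while cum[j] < target:
            j += 1
        indices.append(j)
    return indices


# A video of 25 frames with an anomaly burst at frames 10-14:
scores = [0.0] * 10 + [1.0] * 5 + [0.0] * 10
print(anomaly_focused_sample(scores, 5))  # all samples land in the burst
```

Under these assumptions the sampled indices are always non-decreasing, and a uniform score profile degenerates to roughly uniform temporal sampling.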
