
Crowd Density Forecasting by Modeling Patch-Based Dynamics

Authors: Minoura, Hiroaki; Yonetani, Ryo; Nishimura, Mai; Ushiku, Yoshitaka

Affiliations: Chubu University, Matsumotocho 1200, Kasugai, Aichi 4878501, Japan; OMRON SINIC X Corp., Hongo 5-24-5, Bunkyo-ku, Tokyo 1130033, Japan

Published in: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)

Year/Volume/Issue: 2021, Vol. 6, No. 2

Pages: 287-294

Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Keywords: Forecasting; Videos; Trajectory; Vehicle dynamics; Surveillance; Estimation; Spatiotemporal phenomena; Computer vision for transportation; Deep learning for visual perception; Surveillance robotic systems

Abstract: Forecasting human activities observed in videos is a long-standing challenge in computer vision and robotics, and is also beneficial for various real-world applications such as mobile robot navigation and drone landing. In this work, we present a new forecasting task called crowd density forecasting. Given a video of a crowd captured by a surveillance camera, our goal is to predict how the density of the crowd will change in unseen future frames. To address this task, we developed patch-based density forecasting networks (PDFNs), which directly forecast crowd density maps of future frames instead of the trajectories of each moving person in the crowd. The PDFNs represent crowd density maps based on spatially or spatiotemporally overlapping patches and learn simple density dynamics from the small number of people present in each patch. Doing so allows us to efficiently deal with the diverse and complex crowd density dynamics observed when input videos involve a variable number of crowds moving independently. Experimental results with several public datasets of surveillance videos demonstrate the effectiveness of our approaches compared with state-of-the-art forecasting methods.
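The abstract's core idea, decomposing a density map into overlapping patches, forecasting the simpler local dynamics of each patch, and re-assembling the overlaps into a full map, can be illustrated with a small sketch. The snippet below is a hypothetical NumPy illustration, not the authors' PDFN implementation: the patch size, stride, and the per-patch predictor (a naive linear extrapolation standing in for the learned network) are all assumptions made for the example.

```python
# Hypothetical sketch of patch-based density forecasting:
# split past density maps into overlapping patches, forecast each
# patch independently, then blend predictions by overlap averaging.
import numpy as np

def extract_patches(density_map, patch=32, stride=16):
    """Return overlapping patches and their top-left coordinates."""
    H, W = density_map.shape
    patches, coords = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(density_map[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords

def forecast_patch(history):
    """Toy per-patch dynamics: linear extrapolation of the last two frames
    (a stand-in for the learned PDFN module)."""
    return np.clip(2 * history[-1] - history[-2], 0.0, None)

def forecast_density(history_maps, patch=32, stride=16):
    """Forecast the next density map from a list of past density maps."""
    H, W = history_maps[-1].shape
    acc = np.zeros((H, W))
    weight = np.zeros((H, W))
    # Collect the same spatial patch across all past frames, forecast it,
    # then accumulate the prediction back into the full-resolution map.
    per_frame = [extract_patches(m, patch, stride) for m in history_maps]
    coords = per_frame[0][1]
    for i, (y, x) in enumerate(coords):
        patch_history = np.stack([frame_patches[i] for frame_patches, _ in per_frame])
        pred = forecast_patch(patch_history)
        acc[y:y + patch, x:x + patch] += pred
        weight[y:y + patch, x:x + patch] += 1.0
    return acc / np.maximum(weight, 1e-8)

if __name__ == "__main__":
    # Two synthetic 128x128 density maps standing in for observed past frames.
    rng = np.random.default_rng(0)
    past = [rng.random((128, 128)) * 0.01 for _ in range(2)]
    next_map = forecast_density(past)
    print(next_map.shape)  # (128, 128)
```

Overlap averaging is only one simple way to blend patch-level predictions back into a full map; the point of the sketch is that each patch sees far fewer people than the whole scene, so its dynamics are easier to model.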
