Efficient Camera Exposure Control for Visual Odometry via Deep Reinforcement Learning

Authors: Zhang, Shuyang; He, Jinhao; Zhu, Yilong; Wu, Jin; Yuan, Jie

Affiliations: Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China; Hong Kong Univ Sci & Technol GZ, Thrust Robot & Autonomous Syst, Guangzhou 511453, Peoples R China

Publication: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)

Year/Volume/Issue: 2025, Volume 10, Issue 2

Pages: 1609-1616

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Keywords: Training; Cameras; Lighting; Measurement; Imaging; Hardware; Robot vision systems; Optimization; Visual odometry; Deep reinforcement learning; SLAM; Reinforcement learning; Model learning for control

Abstract: The stability of visual odometry (VO) systems is undermined by degraded image quality, especially in environments with significant illumination changes. This study employs a deep reinforcement learning (DRL) framework to train agents for exposure control, aiming to enhance imaging performance in challenging conditions. A lightweight image simulator is developed to facilitate the training process, enabling the diversification of image exposure and sequence trajectories. This setup enables completely offline training, eliminating the need for direct interaction with camera hardware and real environments. Different levels of reward functions are crafted to enhance the VO systems, equipping the DRL agents with varying levels of intelligence. Extensive experiments have shown that our exposure control agents achieve superior efficiency, with an average inference duration of 1.58 ms per frame on a CPU, and respond more quickly than traditional feedback control schemes. By choosing an appropriate reward function, agents acquire an intelligent understanding of motion trends and can anticipate future changes in illumination. This predictive capability allows VO systems to deliver more stable and precise odometry results.
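
The abstract describes training an exposure-control agent entirely offline against a lightweight image simulator, guided by exposure-quality rewards. The sketch below only illustrates that general training loop in miniature: the toy brightness simulator, the single target-brightness reward, the discretised state, and the tabular Q-learning update are all illustrative assumptions and stand in for the paper's actual simulator, multi-level reward design, and deep policy network.

```python
# Minimal sketch of offline RL for exposure control, assuming a toy simulator
# and a brightness-deviation reward. Not the paper's DRL architecture.
import numpy as np

rng = np.random.default_rng(0)

N_BRIGHTNESS_BINS = 16                                # discretised observation: mean frame brightness
ACTIONS = np.array([-0.2, -0.05, 0.0, 0.05, 0.2])     # log-exposure adjustments (assumed action set)
TARGET_BRIGHTNESS = 0.5                               # proxy objective: keep frames well exposed

def simulate_frame(scene_radiance: float, log_exposure: float) -> float:
    """Toy image model: mean brightness = radiance * exposure gain, clipped to [0, 1]."""
    return float(np.clip(scene_radiance * np.exp(log_exposure), 0.0, 1.0))

def brightness_to_state(brightness: float) -> int:
    return min(int(brightness * N_BRIGHTNESS_BINS), N_BRIGHTNESS_BINS - 1)

def reward(brightness: float) -> float:
    """Low-level reward: penalise deviation from a well-exposed frame."""
    return -abs(brightness - TARGET_BRIGHTNESS)

# Tabular Q-learning over (brightness bin, action); stands in for a deep policy.
Q = np.zeros((N_BRIGHTNESS_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(2000):
    log_exposure = 0.0
    radiance = rng.uniform(0.1, 2.0)                  # initial scene illumination
    state = brightness_to_state(simulate_frame(radiance, log_exposure))
    for step in range(50):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state]))
        log_exposure += ACTIONS[a]
        # the simulator drifts illumination to mimic camera motion and lighting changes
        radiance = float(np.clip(radiance * np.exp(rng.normal(0.0, 0.05)), 0.05, 4.0))
        brightness = simulate_frame(radiance, log_exposure)
        next_state = brightness_to_state(brightness)
        r = reward(brightness)
        Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

# After training, the greedy policy maps observed brightness to an exposure step.
print("greedy action per brightness bin:", ACTIONS[np.argmax(Q, axis=1)])
```

The same loop structure carries over when the tabular Q-values are replaced by a neural policy and the per-frame brightness reward is upgraded to the higher-level, VO-aware objectives mentioned in the abstract.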
