Author Affiliations: Henan Inst Sci & Technol, Sch Comp Sci & Technol, Xinxiang 453003, Henan, Peoples R China; Xidian Univ, Sch Math & Stat, Xian 710071, Shanxi, Peoples R China; Zhejiang Normal Univ, Coll Math Med, Jinhua 321004, Zhejiang, Peoples R China
Publication: DIGITAL SIGNAL PROCESSING (Digital Signal Process Rev J)
Year/Volume: 2025, Vol. 159
Core Indexing:
Subject Classification: 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (Engineering or Science degree conferrable)]; 08 [Engineering]
Funding: National Natural Science Foundation of China [62001158, 62372359, 12371429]; China Postdoctoral Science Foundation [2019M652545]; Key R&D Projects in Henan Province; Key Scientific and Technological Research Projects in Henan Province
Keywords: Low-light image enhancement; Retinex theory; Deep learning; JIRE-Net; IB-CAM; IDGB
Abstract: Images captured in dark conditions unavoidably suffer from poor visibility. Numerous methods addressing this challenge are built on the Retinex theory, which decomposes an observed image into illumination and reflection maps, enabling refined processing to enhance image quality. However, most such methods treat the illumination and reflection components separately, without considering their informational interaction. The proposed method reinforces the collaboration of illumination and reflection with a joint enhancement network named JIRE-Net. We first exploit the powerful feature-extraction capability of a convolutional neural network (CNN) to construct a decomposition network. Subsequently, we design an illumination-driven Transformer-based network structure to reconstruct the normal-light image. Specifically, the Channel Attention Module (IB-CAM) is formulated to promote the features in the reflection component, using attention weights calculated from the illumination map. Thereafter, the Illumination-Driven Guidance Block (IDGB) is designed to capture dependencies across input features, cooperatively enhancing the reflection and illumination features. Experimental results on existing benchmark datasets show that our method obtains better quantitative and qualitative results, achieving a more balanced overall brightness and color quality while preserving finer texture and structural details.
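To make the abstract's central idea concrete, the sketch below illustrates illumination-guided channel attention in the spirit of IB-CAM: per-channel weights are derived from the illumination features and used to rescale the reflection features. This is a minimal NumPy sketch under our own assumptions (the function name `ib_cam`, the pooling-plus-MLP weighting, and the tensor shapes are hypothetical); the paper's actual module architecture is not specified in this record.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ib_cam(reflection, illumination, w1, w2):
    """Hypothetical sketch of illumination-based channel attention.

    reflection, illumination: feature maps of shape (C, H, W).
    w1, w2: weights of a small two-layer MLP, each of shape (C, C).
    Attention weights are computed from the illumination features and
    applied channel-wise to the reflection features.
    """
    # Global average pooling over the spatial dims of the illumination map
    pooled = illumination.mean(axis=(1, 2))               # shape: (C,)
    # Two-layer MLP (ReLU then sigmoid) yields per-channel weights in (0, 1)
    weights = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))  # shape: (C,)
    # Rescale each reflection channel by its illumination-derived weight
    return reflection * weights[:, None, None]

# Toy usage with random features
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
refl = rng.random((C, H, W))
illum = rng.random((C, H, W))
w1 = rng.standard_normal((C, C))
w2 = rng.standard_normal((C, C))
out = ib_cam(refl, illum, w1, w2)
```

The key design point mirrored here is the informational interaction the abstract emphasizes: the reflection branch is not enhanced in isolation, but modulated by statistics of the illumination branch.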