Image haze removal is highly desired in computer vision applications. This study proposes a novel context-guided generative adversarial network (CGGAN) for single image dehazing, in which a novel encoder-decoder is employed as the generator. The generator consists of a feature-extraction net, a context-extraction net, and a fusion net, in sequence. The feature-extraction net acts as an encoder and extracts haze features. The context-extraction net is a multi-scale parallel pyramid decoder that decodes the deep features of the encoder and generates a coarse dehazed image. The fusion net is a decoder that produces the final haze-free image. To obtain better dehazing results, the multi-scale information produced during decoding by the context-extraction decoder is used to guide the fusion decoder. By introducing an extra coarse decoder alongside the original encoder-decoder, CGGAN makes better use of the deep feature information extracted by the encoder. To ensure that the proposed CGGAN works effectively across different haze scenarios, different loss functions are employed for the two decoders. Experimental results show the advantage and effectiveness of the proposed CGGAN, with evident improvements over existing state-of-the-art methods.
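The encoder/two-decoder layout described above can be summarized in a compact sketch. The following PyTorch code is an illustrative reconstruction, not the authors' implementation: all layer widths and depths, and the concatenation-based guidance between the context decoder and the fusion decoder, are assumptions made for the sketch.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    # 3x3 convolution + ReLU; padding preserves spatial size (modulo stride).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )


class CGGANGenerator(nn.Module):
    """Sketch: encoder + coarse context decoder + guided fusion decoder."""

    def __init__(self):
        super().__init__()
        # Feature-extraction net (encoder): extracts haze features.
        self.enc1 = conv_block(3, 32, stride=2)    # 1/2 resolution
        self.enc2 = conv_block(32, 64, stride=2)   # 1/4 resolution
        # Context-extraction net (coarse decoder): decodes deep features
        # into a coarse dehazed image, exposing multi-scale features.
        self.ctx1 = conv_block(64, 64)             # 1/4 resolution
        self.ctx2 = conv_block(64, 32)             # 1/2 resolution
        self.coarse_head = nn.Conv2d(32, 3, 3, padding=1)
        # Fusion net (final decoder): each stage is guided by the context
        # decoder's feature map at the matching scale (here, by concatenation).
        self.fus1 = conv_block(64 + 64, 64)
        self.fus2 = conv_block(64 + 32, 32)
        self.final_head = nn.Conv2d(32, 3, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)

    def forward(self, hazy):
        # Input height/width assumed divisible by 4.
        f1 = self.enc1(hazy)                     # 1/2-scale encoder features
        f2 = self.enc2(f1)                       # 1/4-scale deep features
        c1 = self.ctx1(f2)                       # context features, 1/4 scale
        c2 = self.ctx2(self.up(c1))              # context features, 1/2 scale
        coarse = self.coarse_head(self.up(c2))   # coarse dehazed image
        g1 = self.fus1(torch.cat([f2, c1], 1))   # guided fusion, 1/4 scale
        g2 = self.fus2(torch.cat([self.up(g1), c2], 1))  # 1/2 scale
        final = self.final_head(self.up(g2))     # final haze-free image
        return coarse, final
```

Returning both outputs matches the abstract's training setup: the coarse and final decoders can each be supervised with their own loss (for instance, a pixel reconstruction loss on `coarse` and an adversarial loss on `final`; the specific loss pairing here is an assumption).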
ISBN (print): 9798350390155; 9798350390162
Adapter-based fine-tuning has been studied as a way to improve the performance of SAM on downstream tasks. However, a significant performance gap remains between fine-tuned SAMs and domain-specific models. To reduce this gap, we propose Two-Stream SAM (TS-SAM). On the one hand, inspired by the side network in Parameter-Efficient Fine-Tuning (PEFT), we design a lightweight Convolutional Side Adapter (CSA), which integrates the powerful features from SAM into side-network training for comprehensive feature fusion. On the other hand, in line with the characteristics of segmentation tasks, we design a Multi-scale Refinement Module (MRM) and a Feature Fusion Decoder (FFD) to preserve both detailed and semantic features. Extensive experiments on ten public datasets from three tasks demonstrate that TS-SAM not only significantly outperforms the recently proposed SAM-Adapter and SSOM, but also achieves performance competitive with the SOTA domain-specific models. Our code is available at: https://***/maoyangou147/TS-SAM.
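The two-stream pattern at the core of this abstract is the side-network idea from PEFT: the pretrained backbone stays frozen while a lightweight convolutional side stream, fed by the backbone's features, is the only trainable part. The sketch below illustrates only that pattern; the adapter design, channel widths, stage count, and fusion-by-addition are assumptions, and the paper's actual CSA, MRM, and FFD modules are not reproduced here.

```python
import torch
import torch.nn as nn


class ConvSideAdapter(nn.Module):
    """One side-network stage: project backbone features, fuse, refine."""

    def __init__(self, backbone_ch, side_ch):
        super().__init__()
        self.proj = nn.Conv2d(backbone_ch, side_ch, 1)  # channel projection
        self.refine = nn.Sequential(
            nn.Conv2d(side_ch, side_ch, 3, padding=1),
            nn.GELU(),
        )

    def forward(self, side_feat, backbone_feat):
        # Inject frozen-backbone features into the trainable side stream.
        return self.refine(side_feat + self.proj(backbone_feat))


class TwoStreamSegmenter(nn.Module):
    def __init__(self, backbone, backbone_ch=256, side_ch=64, num_stages=4):
        super().__init__()
        self.backbone = backbone                # e.g. SAM's image encoder
        for p in self.backbone.parameters():    # PEFT: backbone stays frozen
            p.requires_grad_(False)
        self.entry = nn.Conv2d(backbone_ch, side_ch, 1)
        self.adapters = nn.ModuleList(
            ConvSideAdapter(backbone_ch, side_ch) for _ in range(num_stages)
        )
        self.head = nn.Conv2d(side_ch, 1, 1)    # binary mask logits

    def forward(self, image):
        # Assumed interface: the backbone returns one feature map per stage,
        # all at the same spatial resolution (as a ViT encoder's blocks do).
        with torch.no_grad():
            feats = self.backbone(image)
        side = self.entry(feats[0])
        for adapter, f in zip(self.adapters, feats):
            side = adapter(side, f)
        return self.head(side)


# Usage with a stand-in backbone producing 4 stage features:
class DummyBackbone(nn.Module):
    def forward(self, x):
        return [torch.randn(x.shape[0], 256, 64, 64) for _ in range(4)]


model = TwoStreamSegmenter(DummyBackbone())
logits = model(torch.randn(1, 3, 1024, 1024))   # -> (1, 1, 64, 64) mask logits
```

Because gradients flow only through the side stream, the trainable parameter count is a small fraction of the backbone's, which is what makes this style of fine-tuning parameter-efficient.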