
ICM-Assistant: Instruction-tuning Multimodal Large Language Models for Rule-based Explainable Image Content Moderation

Authors: Wu, Mengyang; Zhao, Yuzhi; Cao, Jialun; Xu, Mingjie; Jiang, Zhongming; Wang, Xuehui; Li, Qinbin; Hu, Guangneng; Qin, Shengchao; Fu, Chi-Wing

Affiliations: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; Huawei Hong Kong Research Center, Hong Kong; Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong; Huawei 2012 Laboratories; Artificial Intelligence Institute, Shanghai Jiao Tong University, China; School of Computer Science and Technology, Huazhong University of Science and Technology, China; School of Computer Science and Technology, Xidian University, China; Guangzhou Institute of Technology, Xidian University, China; ICTT and ISN Laboratory, Xidian University, China

Publication: arXiv

Year: 2024


Subject: HTTP

Abstract: Controversial content largely inundates the Internet, infringing various cultural norms and child-protection standards. Traditional Image Content Moderation (ICM) models fall short in producing precise moderation decisions for diverse standards, while recent multimodal large language models (MLLMs), when adopted for general rule-based ICM, often produce classification and explanation results that are inconsistent with human moderators. Aiming at flexible, explainable, and accurate ICM, we design a novel rule-based dataset-generation pipeline that decomposes concise human-defined rules and leverages well-designed multi-stage prompts to enrich short explicit image annotations. Our ICM-Instruct dataset includes detailed moderation explanations and moderation Q-A pairs. Built upon it, we create our ICM-Assistant model in the framework of rule-based ICM, making it readily applicable in practice. Our ICM-Assistant model demonstrates exceptional performance and flexibility. Specifically, it significantly outperforms existing approaches on various sources, consistently improving both moderation classification (by 36.8% on average) and moderation explanation quality (by 26.6% on average) over existing MLLMs. Caution: Content includes offensive language or images. Code — https://***/zhaoyuzhi/ICM-Assistant Copyright © 2024, The Authors. All rights reserved.
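The abstract describes decomposing a concise human-defined rule into sub-criteria and turning short explicit image annotations into moderation Q-A pairs. A minimal sketch of that idea is below; the rule format, function names, and annotation schema are all illustrative assumptions, not the authors' actual pipeline (which uses multi-stage MLLM prompts).

```python
# Hypothetical sketch of rule-based Q-A pair generation, as described at a
# high level in the abstract. All rule text, names, and data are assumptions.
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    answer: str


def decompose_rule(rule: str) -> list[str]:
    # Assumption: a concise rule lists sub-criteria separated by semicolons,
    # e.g. "no explicit nudity; no graphic violence".
    return [part.strip() for part in rule.split(";") if part.strip()]


def build_qa_pairs(rule: str, annotation: dict) -> list[QAPair]:
    # For each sub-criterion, ask whether the image violates it; the short
    # explicit annotation supplies the label and a brief explanation.
    pairs = []
    for criterion in decompose_rule(rule):
        violated = criterion in annotation.get("violations", [])
        if violated:
            answer = (f"Yes. {annotation['caption']} violates the "
                      f"criterion '{criterion}'.")
        else:
            answer = f"No. The image does not violate '{criterion}'."
        pairs.append(QAPair(
            question=f"Does the image violate the rule: '{criterion}'?",
            answer=answer,
        ))
    return pairs


# Toy usage with a single annotated image
annotation = {
    "caption": "A scene with graphic violence",
    "violations": ["no graphic violence"],
}
pairs = build_qa_pairs("no explicit nudity; no graphic violence", annotation)
print(len(pairs))  # 2: one Q-A pair per sub-criterion
```

In the paper's actual pipeline the enrichment step is done by prompting an MLLM in multiple stages; the sketch only shows how one rule can fan out into per-criterion Q-A pairs.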
