
GPT detectors are biased against non-native English writers

Authors: Liang, Weixin; Yuksekgonul, Mert; Mao, Yining; Wu, Eric; Zou, James

Affiliations: Department of Computer Science, Stanford University, Stanford, CA, United States; Department of Electrical Engineering, Stanford University, Stanford, CA, United States; Department of Biomedical Data Science, Stanford University, Stanford, CA, United States

Publication: arXiv

Year: 2023

Subject: Digital communication systems

Abstract: The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: ***/patterns/fulltext/S2666-3899(23)00130-7. Copyright © 2023, The Authors. All rights reserved.
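
The evaluation the abstract describes amounts to feeding human-written samples from both writer groups to a detector and comparing how often each group is falsely flagged as AI-generated. The following Python sketch illustrates that comparison only: the paper evaluates real commercial detectors on real essay corpora, whereas the toy_detector heuristic (type-token ratio as a crude proxy for the low linguistic variability that detectors penalize), the 0.3 threshold, and the two sample sentences below are all hypothetical stand-ins, not the authors' method or data.

import re

def toy_detector(text: str) -> float:
    # Hypothetical detector: treats less varied vocabulary (a lower
    # type-token ratio) as more "AI-like". Real detectors typically rely
    # on model perplexity, but the biased failure mode is analogous.
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return 1.0 - len(set(tokens)) / len(tokens)

def flags_as_ai(text: str, threshold: float = 0.3) -> bool:
    # Threshold chosen arbitrarily for this sketch.
    return toy_detector(text) >= threshold

# Hypothetical human-written samples standing in for the two groups.
native_sample = ("The committee ultimately rejected the proposal "
                 "after a lengthy and contentious debate.")
non_native_sample = ("The people in the meeting did not like the plan "
                     "and the people said no to the plan.")

for label, text in [("native", native_sample), ("non-native", non_native_sample)]:
    print(f"{label:>10}: score={toy_detector(text):.2f}, "
          f"flagged={flags_as_ai(text)}")

Running this flags only the non-native sample, even though both texts are human-written: the same disparity in false-positive rates, measured across whole corpora and real detectors, is the bias the paper reports.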
