Author Affiliations: North China Univ Sci & Technol, Coll Artificial Intelligence, Tangshan 063210, Peoples R China; Tsinghua Univ, Inst Precis Med, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China; Xian Jiaotong Liverpool Univ, Dept Comp, Suzhou 215000, Peoples R China; Univ Limerick, Telecommun Res Ctr (TRC), Limerick V94 T9PX, Ireland; Univ Plovdiv Paisii Hilendarski, Dept Comp Syst, Plovdiv 4000, Bulgaria; Bulgarian Acad Sci, Inst Math & Informat, Sofia 1040, Bulgaria
Publication: IEEE Access
Year/Volume: 2023, Vol. 11
Pages: 40192-40203
Core Indexing:
Funding: National Key Research and Development Program of China [2017YFE0135700]; Tsinghua Precision Medicine Foundation [2022TS003]; Science and Education for Smart Growth Operational Program, European Union [BG05M2OP001-1.001-0003]; Telecommunications Research Centre (TRC), University of Limerick, Ireland
Subjects: Labeling; Classification algorithms; Text categorization; Artificial neural networks; Support vector machines; Feature extraction; Task analysis; Adversarial machine learning; Text classification; non-negative positive-unlabeled (nnPU) learning; focal loss; virtual adversarial training (VAT); adversarial training for large neural language models (ALUM)
Abstract: With the development of Internet technology, network platforms have gradually become a tool for people to obtain hot news. Filtering the current hot news out of large news collections and pushing it to users is therefore of significant practical value. In supervised learning scenarios, each piece of news must be labeled manually, which takes considerable time and effort. From the perspective of semi-supervised learning, and building on non-negative Positive-Unlabeled (nnPU) learning, this paper proposes a novel algorithm for news headline classification, called Enhanced nnPU with Focal Loss (FLPU), which replaces the way the classical nnPU calculates the empirical risk of positive and negative samples with the Focal Loss. Then, by introducing the Virtual Adversarial Training (VAT) of Adversarial training for large neural LangUage Models (ALUM) into FLPU, another (and better) algorithm, called FLPU+ALUM, is proposed for the same purpose, requiring only a small number of labeled positive samples. The superiority of both algorithms over the state-of-the-art PU algorithms considered is demonstrated through performance-comparison experiments conducted on two datasets. Moreover, another set of experiments shows that, when enriched by the proposed algorithms, the RoBERTa-wwm-ext model achieves better classification performance than the state-of-the-art binary classification models included in the comparison. In addition, a Ratio Batch method is elaborated and proposed as a more stable choice for scenarios involving only a small number of labeled positive samples, which is also demonstrated experimentally.
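To make the core idea of FLPU concrete, below is a minimal PyTorch-style sketch of a non-negative PU risk in which the usual surrogate loss is swapped for the focal loss, as the abstract describes. The function names, the `gamma` default, and the clamp-based non-negative correction are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def focal_loss(logits, targets, gamma=2.0):
    """Per-sample focal loss: -(1 - p_t)^gamma * log(p_t).

    logits:  raw scores for the positive class
    targets: 1.0 for positive, 0.0 for negative
    """
    p = torch.sigmoid(logits)
    p_t = torch.where(targets == 1, p, 1 - p)
    return -((1 - p_t) ** gamma) * torch.log(p_t.clamp_min(1e-8))

def nnpu_focal_risk(logits_p, logits_u, prior, gamma=2.0):
    """Non-negative PU risk with focal loss as the surrogate (FLPU-style sketch).

    logits_p: model outputs for the labeled positive samples
    logits_u: model outputs for the unlabeled samples
    prior:    assumed class prior pi = P(y = 1)
    """
    ones = torch.ones(logits_p.size(0), device=logits_p.device)
    zeros_p = torch.zeros(logits_p.size(0), device=logits_p.device)
    zeros_u = torch.zeros(logits_u.size(0), device=logits_u.device)

    risk_p_pos = focal_loss(logits_p, ones, gamma).mean()    # R_p^+ : positives as positive
    risk_p_neg = focal_loss(logits_p, zeros_p, gamma).mean() # R_p^- : positives as negative
    risk_u_neg = focal_loss(logits_u, zeros_u, gamma).mean() # R_u^- : unlabeled as negative

    # nnPU decomposition: pi * R_p^+ + max(0, R_u^- - pi * R_p^-)
    neg_risk = risk_u_neg - prior * risk_p_neg
    return prior * risk_p_pos + torch.clamp(neg_risk, min=0.0)
```

Note that in the original nnPU formulation (Kiryo et al., 2017), when the corrected negative-risk term goes negative, training typically takes a gradient step on its negation rather than simply clamping it; the clamp above is the simplest non-negative variant. The FLPU+ALUM variant described in the abstract would additionally add a VAT regularizer on the embedding space on top of this risk.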