
Refine Search Results

Document Type

  • 17 Conference papers
  • 16 Journal articles

Holdings

  • 33 Electronic documents
  • 0 Print holdings

Date Distribution

Subject Classification

  • 30 Engineering
    • 27 Computer Science and Technology...
    • 8 Electrical Engineering
    • 6 Information and Communication Engineering
    • 5 Software Engineering
    • 2 Control Science and Engineering
    • 1 Mechanical Engineering
    • 1 Transportation Engineering
    • 1 Safety Science and Engineering
    • 1 Cyberspace Security
  • 3 Literature
    • 3 Foreign Languages and Literatures
  • 2 Management
    • 2 Library, Information and Archives Manag...
    • 1 Management Science and Engineering (...
  • 1 Education
    • 1 Psychology (may confer Education...
  • 1 Science
    • 1 Physics
  • 1 Medicine
    • 1 Clinical Medicine

Topics

  • 33 masked language ...
  • 5 bert
  • 4 natural language...
  • 3 language modelin...
  • 3 transformers
  • 3 domain adaptatio...
  • 3 predictive model...
  • 2 deep learning
  • 2 task analysis
  • 2 indexes
  • 2 zero-shot learni...
  • 2 self-supervised ...
  • 2 inter-culture
  • 2 cross-lingual
  • 2 computational mo...
  • 2 cross-lingual wr...
  • 2 text style trans...
  • 2 steganography
  • 2 localization
  • 2 sentiment analys...

Institutions

  • 1 univ informat en...
  • 1 chinese acad sci...
  • 1 realtek semicond...
  • 1 guangdong univ f...
  • 1 soochow univ sch...
  • 1 univ n carolina ...
  • 1 ochanomizu univ
  • 1 univ qom dept co...
  • 1 blended learning...
  • 1 netease inc fuxi...
  • 1 int inst informa...
  • 1 univ toulouse je...
  • 1 shandong univ sc...
  • 1 tianjin chengjia...
  • 1 fudan univ artif...
  • 1 pathfinders tran...
  • 1 jd ai res people...
  • 1 natl cheng kung ...
  • 1 lab adv comp & i...
  • 1 bogazici univ de...

Authors

  • 3 qian ming
  • 1 wu jheng-long
  • 1 nathan cooper
  • 1 ren xiangyuan
  • 1 yang hao
  • 1 shang yue
  • 1 celik emrecan
  • 1 gan tian
  • 1 chaudhury santan...
  • 1 li xia
  • 1 tao jianrong
  • 1 upadhyay akarsh
  • 1 du ruizhong
  • 1 ge jidong
  • 1 mesut andac sahi...
  • 1 du liming
  • 1 zhang ji
  • 1 kim eun chong
  • 1 yang kaiyuan
  • 1 huang liguo

Language

  • 33 English
Search query: Subject = "Masked Language Modeling"
33 records; showing 11-20
Improving Source Code Pre-Training via Type-Specific Masking
ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, Vol. 34, No. 3, pp. 1-34
Authors: Zou, Wentao; Li, Qi; Li, Chuanyi; Ge, Jidong; Chen, Xiang; Huang, Liguo; Luo, Bin. Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China; Nantong Univ, Sch Artificial Intelligence & Comp Sci, Nantong, Peoples R China; Southern Methodist Univ, Dept Comp Sci, Dallas, TX, USA
The masked language modeling (MLM) task is widely recognized as one of the most effective pre-training tasks and currently derives many variants in the Software Engineering (SE) field. However, most of these variants ...
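As a brief illustration of the masked language modeling objective that all of these records share as a subject term, the following is a minimal sketch of BERT-style random input masking (the 15% rate and the "[MASK]" token follow common convention; the function name and example sentence are illustrative and not taken from any paper listed here):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=1):
    """Randomly replace roughly mask_rate of the tokens with a mask token.

    Returns (masked, labels): labels holds the original token at each
    masked position and None elsewhere, so a model can be trained to
    predict only the masked-out tokens.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append(mask_token)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

tokens = "the model learns to predict missing words".split()
masked, labels = mask_tokens(tokens)
```

Production MLM pipelines (e.g. BERT) additionally keep some selected tokens unchanged or replace them with random tokens instead of always inserting "[MASK]"; this sketch omits that refinement.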
Argument Structure Mining Based on Effective Usage of Contextual Information
KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2025, Vol. 19, No. 1, pp. 1-16
Authors: Xu, Menglong; Zhang, Yanliang; Yang, Yapu. Henan Polytech Univ, Sch Phys & Elect Informat Engn, Jiaozuo 454000, Peoples R China; Xu Ji Elect Co Ltd, Xuchang 461000, Peoples R China
Argument Structure Extraction (ASE) is increasingly prominent for its role in identifying discourse structure within documents. Many pioneering works have demonstrated that the contextual information in the document i...
Written Term Detection Improves Spoken Term Detection
IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, Vol. 32, pp. 3213-3223
Authors: Yusuf, Bolaji; Saraclar, Murat. Bogazici Univ, Dept Elect & Elect Engn, TR-34342 Istanbul, Turkiye; Brno Univ Technol, Fac Informat Technol, SpeechFIT, Brno 61200, Czech Republic
End-to-end (E2E) approaches to keyword search (KWS) are considerably simpler in terms of training and indexing complexity when compared to approaches which use the output of automatic speech recognition (ASR) systems...
SNP-S3: Shared Network Pre-Training and Significant Semantic Strengthening for Various Video-Text Tasks
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, Vol. 34, No. 4, pp. 2525-2535
Authors: Dong, Xingning; Guo, Qingpei; Gan, Tian; Wang, Qing; Wu, Jianlong; Ren, Xiangyuan; Cheng, Yuan; Chu, Wei. Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China; Ant Grp Co Ltd, Hangzhou 310023, Peoples R China; Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China; Fudan Univ, Artificial Intelligence Innovat & Incubat Inst (AI3), Shanghai 200433, Peoples R China
We present a framework for learning cross-modal video representations by directly pre-training on raw data to facilitate various downstream video-text tasks. Our main contributions lie in the pre-training framework an...
Improving edit-based unsupervised sentence simplification using fine-tuned BERT
PATTERN RECOGNITION LETTERS, 2023, Vol. 166, No. 1, pp. 112-118
Authors: Rashid, Mohammad Amin; Amirkhani, Hossein. Univ Qom, Dept Comp Engn, Qom, Iran
Word suggestion in unsupervised sentence simplification aims to replace complex words of a given sentence with their simpler alternatives. This is mostly done without considering their context within the input senten...
Hypert: hypernymy-aware BERT with Hearst pattern exploitation for hypernym discovery
JOURNAL OF BIG DATA, 2023, Vol. 10, No. 1, p. 141
Authors: Yun, Geonil; Lee, Yongjae; Moon, A-Seong; Lee, Jaesung. Chung Ang Univ, Dept Artificial Intelligence, 84 Heukseok Ro, Seoul 06974, South Korea; S2W Inc, 12 Pangyoyeok Ro 192 beon Gil, Seongnam 13524, Gyeonggi Do, South Korea; Chung Ang Univ, AI ML Innovat Res Ctr, 84 Heukseok Ro, Seoul 06974, South Korea
Hypernym discovery is challenging because it aims to find suitable instances for a given hyponym from a predefined hypernym vocabulary. Existing hypernym discovery methods used supervised learning with word embedding ...
GDP: Generic Document Pretraining to Improve Document Understanding
18th International Conference on Document Analysis and Recognition (ICDAR)
Authors: Trivedi, Akkshita; Upadhyay, Akarsh; Mukhopadhyay, Rudrabha; Chaudhury, Santanu. Indian Inst Technol Jodhpur, Karwar, India; Int Inst Informat Technol, Hyderabad, India
In this paper, we propose a novel pretraining approach for document analysis that advances beyond conventional methods. The approach, called the GDPerformer, trains a suite of unique architectures to predict both mask...
High Fidelity Text-to-Speech Via Discrete Tokens Using Token Transducer and Group Masked Language Model
25th Interspeech Conference
Authors: Lee, Joun Yeop; Jeong, Myeonghun; Kim, Minchan; Lee, Ji-Hyun; Choi, Hoon-Young; Kim, Nam Soo. Samsung Res, Seoul, South Korea; Seoul Natl Univ, Dept ECE & INMC, Seoul, South Korea
We propose a novel two-stage text-to-speech (TTS) framework with two types of discrete tokens, i.e., semantic and acoustic tokens, for high-fidelity speech synthesis. It features two core components: the Interpreting ...
Exploring terminological relations between multi-word terms in distributional semantic models
TERMINOLOGY, 2024, Vol. 30, No. 2, pp. 159-189
Authors: Wang, Yizhe; Daille, Beatrice; Hathout, Nabil. Univ Toulouse Jean Jaures, Toulouse, France; Nantes Univ, Nantes, France; CNRS, Lab Cognit Langues Langage Ergonomie (CLLE), Paris, France; Nantes Univ, Dept Informat, LS2N, 2 Chemin Houssiniere, BP 92208, F-44322 Nantes 3, France
A term is a lexical unit with specialized meaning in a particular domain. Terms may be simple (STs) or multi-word (MWTs). The organization of terms gives a representation of the structure of domain knowledge, which is...
TVD-BERT: A Domain-Adaptation Pre-trained Model for Textural Vulnerability Descriptions
20th International Conference on Intelligent Computing (ICIC)
Authors: Wang, Ziyuan; Liang, Xiaoyan; Du, Ruizhong; Tian, Junfeng; Zhang, Siyi. Hebei Univ, Key Lab High Trusted Informat Syst Hebei Prov, Baoding, Peoples R China; Hebei Univ, Sch Cyber Secur & Comp, Baoding, Peoples R China
Textual Vulnerability Descriptions (TVD) serve as concise natural language summaries within databases like the National Vulnerability Database (NVD), elucidating key facets of software vulnerabilities including impacte...