Refine Results

Document Type

  • 242 journal articles
  • 206 conference papers

Holdings

  • 448 electronic documents
  • 0 print holdings

Date Distribution

Discipline Classification

  • 404 Engineering
    • 363 Computer Science and Technology
    • 118 Electrical Engineering
    • 66 Software Engineering
    • 52 Information and Communication Engineering
    • 32 Control Science and Engineering
    • 15 Electronic Science and Technology
    • 4 Mechanical Engineering
    • 4 Instrument Science and Technology
    • 4 Biomedical Engineering
    • 4 Cyberspace Security
    • 3 Materials Science and Engineering
    • 3 Transportation Engineering
    • 2 Civil Engineering
    • 2 Chemical Engineering and Technology
    • 2 Petroleum and Natural Gas Engineering
  • 64 Science
    • 31 Physics
    • 21 Biology
    • 10 Mathematics
    • 8 Chemistry
    • 3 Systems Science
  • 60 Management
    • 38 Management Science and Engineering
    • 17 Library, Information and Archives Management
    • 8 Public Administration
  • 38 Medicine
    • 19 Basic Medicine
    • 15 Clinical Medicine
    • 4 Public Health and Preventive Medicine
  • 25 Literature
    • 24 Foreign Languages and Literatures
  • 8 Agronomy
  • 6 Education
    • 6 Education
  • 4 Law
    • 2 Law
    • 2 Sociology
  • 1 Economics

Subject

  • 448 pre-trained language model
  • 39 natural language...
  • 33 deep learning
  • 24 task analysis
  • 23 prompt learning
  • 21 bert
  • 16 transfer learning
  • 16 transformers
  • 16 training
  • 15 contrastive learning
  • 15 few-shot learning
  • 15 sentiment analysis
  • 14 knowledge graph
  • 14 named entity recognition
  • 13 transformer
  • 13 prompt tuning
  • 12 bit error rate
  • 11 multi-task learning
  • 11 semantics
  • 10 knowledge graph ...

Institutions

  • 9 篇 alibaba grp peop...
  • 7 篇 tsinghua univ de...
  • 6 篇 chinese acad sci...
  • 6 篇 univ chinese aca...
  • 5 篇 yunnan univ sch ...
  • 4 篇 peking univ peop...
  • 4 篇 harbin inst tech...
  • 4 篇 chinese univ hon...
  • 4 篇 yuan ze univ dep...
  • 4 篇 beijing language...
  • 4 篇 peng cheng lab p...
  • 4 篇 renmin univ chin...
  • 3 篇 east china univ ...
  • 3 篇 hong kong polyte...
  • 3 篇 beihang univ sch...
  • 3 篇 beijing univ tec...
  • 3 篇 univ chinese aca...
  • 3 篇 univ manchester ...
  • 3 篇 guangxi normal u...
  • 3 篇 natl univ def te...

Authors

  • 9 篇 wang chengyu
  • 8 篇 huang jun
  • 7 篇 qiu minghui
  • 6 篇 li peng
  • 6 篇 sun maosong
  • 6 篇 luo xudong
  • 5 篇 liu zhiyuan
  • 4 篇 zhou jie
  • 4 篇 liu cheng
  • 4 篇 li ge
  • 4 篇 chen xiang
  • 4 篇 ou yu-yen
  • 4 篇 ko youngjoong
  • 4 篇 lin yankai
  • 4 篇 zhang yu
  • 3 篇 zhang xuejie
  • 3 篇 jin zhi
  • 3 篇 wen ji-rong
  • 3 篇 yang hao
  • 3 篇 chen yufeng

Language

  • 443 English
  • 7 German
  • 7 French
  • 7 Italian
  • 5 Other

Search query: Subject = "Pre-trained Language Model"
448 records in total; showing results 71-80
T4SEfinder: a bioinformatics tool for genome-scale prediction of bacterial type IV secreted effectors using pre-trained protein language model
BRIEFINGS IN BIOINFORMATICS, 2022, Vol. 23, Issue 1, bbab420
Authors: Zhang, Yumeng; Zhang, Yangming; Xiong, Yi; Wang, Hui; Deng, Zixin; Song, Jiangning; Ou, Hong-Yu
Affiliations: Shanghai Jiao Tong Univ, Sch Life Sci & Biotechnol, Shanghai 200030, Peoples R China; Monash Univ, Biomed Discovery Inst, Melbourne, Vic 3800, Australia; Monash Univ, Dept Biochem & Mol Biol, Melbourne, Vic 3800, Australia; Beijing Inst Microbiol & Epidemiol, State Key Lab Pathogens & Biosecur, Beijing, Peoples R China
Bacterial type IV secretion systems (T4SSs) are versatile and membrane-spanning apparatuses, which mediate both genetic exchange and delivery of effector proteins to target eukaryotic cells. The secreted effectors (T4...
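
The abstract above describes predicting secreted effectors with a pre-trained protein language model. As a rough illustration of that general recipe (not T4SEfinder's actual pipeline), the sketch below embeds sequences with a public ESM-2 checkpoint and fits a linear classifier on top; the checkpoint name and the toy sequences and labels are assumptions for demonstration only.

```python
# Hypothetical sketch: embed protein sequences with a pre-trained protein
# language model, then train a simple classifier. NOT the T4SEfinder method.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL = "facebook/esm2_t6_8M_UR50D"  # small public ESM-2 checkpoint (assumed)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL).eval()

def embed(seq: str) -> torch.Tensor:
    """Mean-pool the last hidden layer into one vector per sequence."""
    inputs = tokenizer(seq, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, len, dim)
    return hidden.mean(dim=1).squeeze(0)

# Toy (sequence, is_effector) pairs; a real study would use a curated dataset.
train = [("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 1),
         ("MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS", 0)]
X = torch.stack([embed(s) for s, _ in train]).numpy()
y = [label for _, label in train]
clf = LogisticRegression().fit(X, y)
```
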
LightToken: A Task and Model-agnostic Lightweight Token Embedding Framework for Pre-trained Language Models
29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)
Authors: Wang, Haoyu; Li, Ruirui; Jiang, Haoming; Wang, Zhengyang; Tang, Xianfeng; Bi, Bin; Cheng, Monica; Yin, Bing; Wang, Yaqing; Zhao, Tuo; Gao, Jing
Affiliations: Purdue Univ, W Lafayette, IN 47907 USA; Amazon.com Inc, Seattle, WA USA; Georgia Inst Technol, Atlanta, GA 30332 USA
Pre-trained language models (PLMs) such as BERT, RoBERTa, and DeBERTa have achieved state-of-the-art performance on various downstream tasks. The enormous sizes of PLMs hinder their deployment in resource-constrained ...
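
The LightToken snippet motivates shrinking the token-embedding table, which dominates small-PLM parameter counts. The paper's own framework goes further, but the simplest way to see the potential savings is a truncated-SVD factorization of the embedding matrix; the matrix sizes and rank below are illustrative assumptions, not values from the paper.

```python
# Illustrative low-rank compression of a token-embedding table via truncated
# SVD. This shows the parameter-count argument, not the LightToken algorithm.
import numpy as np

vocab, dim, rank = 30522, 768, 64            # BERT-sized table, assumed rank
E = np.random.randn(vocab, dim).astype(np.float32)

U, S, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * S[:rank]                    # (vocab, rank) factor
B = Vt[:rank]                                 # (rank, dim) factor
E_hat = A @ B                                 # approximate embeddings

full = vocab * dim
compressed = vocab * rank + rank * dim
print(f"params: {full:,} -> {compressed:,} ({compressed / full:.1%})")
```
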
A diachronic language model for long-time span classical Chinese
INFORMATION PROCESSING & MANAGEMENT, 2025, Vol. 62, Issue 1
Authors: Wei, Yuting; Li, Meiling; Zhu, Yangfu; Xu, Yuanxing; Li, Yuqing; Wu, Bin
Affiliations: Beijing Univ Posts & Telecommun, Beijing Key Lab Intelligent Telecommun Software &, Beijing 100876, Peoples R China
Classical Chinese literature, with its long history spanning thousands of years, serves as an invaluable resource for historical and humanistic studies. Previous classical Chinese language models have achieved signifi...
Historical facts learning from Long-Short Terms with Language Model for Temporal Knowledge Graph Reasoning
INFORMATION PROCESSING & MANAGEMENT, 2025, Vol. 62, Issue 3
Authors: Xu, Wenjie; Liu, Ben; Peng, Miao; Jiang, Zihao; Jia, Xu; Liu, Kai; Liu, Lei; Peng, Min
Affiliations: Wuhan Univ, Sch Comp Sci, Wuhan 430072, Hubei, Peoples R China; Hong Kong Univ Sci & Technol (Guangzhou), Data Sci & Analyt Thrust, Guangzhou 511458, Guangdong, Peoples R China; Hebei Normal Univ, Coll Comp & Cyber Secur, Shijiazhuang 050024, Hebei, Peoples R China
Temporal Knowledge Graph Reasoning (TKGR) aims to infer the missing parts in TKGs based on historical facts from different time periods. Traditional GCN-based TKGR models depend on structured relations between entiti...
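
For readers unfamiliar with the setup: a temporal KG stores facts as (subject, relation, object, timestamp) quadruples, and the "historical facts" a model conditions on are simply the earlier quadruples that touch the query entity. A toy sketch of that data model (entities and dates are invented; this is not the paper's reasoning model):

```python
# Minimal data model behind temporal KG reasoning: quadruple facts plus a
# history lookup for a query entity. Illustrative only.
from collections import namedtuple

Fact = namedtuple("Fact", "subj rel obj t")
tkg = [
    Fact("Germany", "hosts", "G7 summit", 2015),
    Fact("Germany", "signs", "Paris Agreement", 2016),
    Fact("France", "hosts", "G7 summit", 2019),
]

def history(entity: str, before: int) -> list[Fact]:
    """All facts involving `entity` strictly before time `before`."""
    return [f for f in tkg if f.t < before and entity in (f.subj, f.obj)]

# Query (Germany, hosts, ?, 2020): a TKGR model scores candidates
# conditioned on this retrieved history.
print(history("Germany", 2020))
```
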
Enhancing pre-trained language models with Chinese character morphological knowledge
INFORMATION PROCESSING & MANAGEMENT, 2025, Vol. 62, Issue 1
Authors: Zheng, Zhenzhong; Wu, Xiaoming; Liu, Xiangzhi
Affiliations: Qilu Univ Technol (Shandong Acad Sci), Shandong Comp Sci Ctr, Key Lab Comp Power Network & Informat Secur, Minist, Jinan 250014, Peoples R China; Shandong Fundamental Res Ctr Comp Sci, Shandong Prov Key Lab Comp Networks, Jinan 250014, Peoples R China
Pre-trained language models (PLMs) have demonstrated success in Chinese natural language processing (NLP) tasks by acquiring high-quality representations through contextual learning. However, these models tend to negl...
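
A common way to inject character-form knowledge of the kind this abstract mentions is to decompose each Chinese character into graphical components and fuse the component embeddings with the character's own embedding. The sketch below is a heavily simplified hypothetical: the component table, tiny vocabulary, and mean-pooling fusion are all assumptions, not the paper's design.

```python
# Hypothetical fusion of character and component (radical) embeddings.
import torch

COMPONENTS = {"河": ["氵", "可"], "湖": ["氵", "胡"]}   # toy decomposition table
vocab = {c: i for i, c in enumerate("河湖氵可胡")}      # toy vocabulary
emb = torch.nn.Embedding(len(vocab), 8)

def fused_embedding(char: str) -> torch.Tensor:
    """Average the character vector with its component vectors."""
    ids = [vocab[char]] + [vocab[p] for p in COMPONENTS.get(char, [])]
    return emb(torch.tensor(ids)).mean(dim=0)

print(fused_embedding("河").shape)  # torch.Size([8])
```
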
K-Bloom: unleashing the power of pre-trained language models in extracting knowledge graph with predefined relations
KNOWLEDGE AND INFORMATION SYSTEMS, 2025, Vol. 67, Issue 5, pp. 4487-4521
Authors: Vo, Trung; Luu, Son T.; Nguyen, Le-Minh
Affiliations: Japan Adv Inst Sci & Technol, Nomi, Ishikawa, Japan; VNU Univ Informat Technol, Ho Chi Minh City, Vietnam
Pre-trained language models have become popular in natural language processing tasks, but their inner workings and knowledge acquisition processes remain unclear. To address this issue, we introduce K-Bloom, a refined ...
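
The simplest starting point for extracting predefined relations from a PLM, which methods in this line of work refine, is cloze-style prompting: phrase the relation as a masked sentence and read off the model's fill-ins as candidate triple objects. A minimal sketch using a standard fill-mask pipeline (the checkpoint and prompt are examples, not K-Bloom's actual templates):

```python
# Probe a masked LM for relational knowledge via a cloze prompt.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "Paris is the capital of [MASK]."
for cand in fill(prompt, top_k=3):
    print(cand["token_str"], round(cand["score"], 3))
# A high-scoring answer becomes a candidate (Paris, capital_of, X) triple.
```
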
Memory-Tuning: A Unified Parameter-Efficient Tuning Method for Pre-trained Language Models
IEEE TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, 2025, Vol. 33, pp. 1-10
Authors: Qi, Wang; Liu, Rui; Zuo, Yuan; Li, Fengzhi; Chen, Yong; Wu, Junjie
Affiliations: Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China; Beihang Univ, Sch Econ & Management, Beijing 100191, Peoples R China; MIIT Key Lab Data Intelligence & Management, Beijing 100191, Peoples R China; Beijing Univ Posts & Telecommun, Sch Comp Sci, Beijing 100876, Peoples R China
Conventional fine-tuning encounters increasing difficulties given the size of current pre-trained language models, which makes parameter-efficient tuning the focal point of frontier research. Recent advances in...
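
To make "parameter-efficient tuning" concrete: the family of methods this abstract refers to freezes the backbone and trains only a small set of added parameters. The sketch below shows generic soft-prompt tuning, not the paper's Memory-Tuning; the checkpoint and prompt length are assumptions.

```python
# Generic parameter-efficient tuning: freeze the PLM, train soft prompts only.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
for p in model.parameters():          # backbone stays frozen
    p.requires_grad = False

n_prompt = 20                          # assumed soft-prompt length
dim = model.config.hidden_size
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, dim) * 0.02)

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
trainable = soft_prompt.numel()
total = sum(p.numel() for p in model.parameters()) + trainable
print(f"training {trainable:,} of {total:,} parameters")
# In a real loop the prompt is prepended to the token embeddings, e.g.:
# inputs_embeds = torch.cat([soft_prompt.expand(bsz, -1, -1), tok_emb], dim=1)
```
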
ASRLM: ASR-Robust Language Model Pre-training via Generative and Discriminative Learning
13th International Conference on Natural Language Processing and Chinese Computing
Authors: Hu, Qian; Han, Xue; Wang, Yiting; Wang, Yitong; Deng, Chao; Feng, Junlan
Affiliations: China Mobile Res Inst, JiuTian Team, Beijing, Peoples R China
The rise of voice interface applications has renewed interest in improving the robustness of spoken language understanding (SLU). Many advances have come from end-to-end speech-language joint training, such as inferrin...
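
One ingredient such ASR-robust training typically needs is synthetic ASR-style noise paired with discriminative labels marking which tokens were corrupted. The sketch below fakes this with an invented homophone confusion table; real systems derive confusions from ASR output, and this is not the ASRLM procedure.

```python
# Toy ASR-noise injection with per-token corruption labels.
import random

CONFUSIONS = {"there": ["their"], "four": ["for"], "write": ["right"]}  # invented

def corrupt(tokens: list[str], p: float = 0.3) -> tuple[list[str], list[int]]:
    """Return noisy tokens plus 0/1 labels marking corrupted positions."""
    noisy, labels = [], []
    for tok in tokens:
        if tok in CONFUSIONS and random.random() < p:
            noisy.append(random.choice(CONFUSIONS[tok]))
            labels.append(1)          # discriminative target: corrupted
        else:
            noisy.append(tok)
            labels.append(0)
    return noisy, labels

print(corrupt("there are four ways to write this".split()))
```
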
FedBM: Stealing knowledge from pre-trained language models for heterogeneous federated learning
MEDICAL IMAGE ANALYSIS, 2025, Vol. 102, 103524
Authors: Zhu, Meilu; Yang, Qiushi; Gao, Zhifan; Yuan, Yixuan; Liu, Jun
Affiliations: City Univ Hong Kong, Dept Mech Engn, Hong Kong, Peoples R China; City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China; Sun Yat Sen Univ, Sch Biomed Engn, Guangzhou, Peoples R China; Univ Hong Kong, Dept Data & Syst Engn, Hong Kong, Peoples R China
Federated learning (FL) has shown great potential in medical image computing since it provides a decentralized learning paradigm that allows multiple clients to train a model collaboratively without privacy leakage. H...
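
As background for the decentralized setup the abstract describes, the core loop of vanilla federated averaging is: clients train local copies on private data, the server averages their weights, and raw data never leaves a client. A minimal FedAvg skeleton (not FedBM itself; the model and client count are placeholders):

```python
# Minimal FedAvg: average per-parameter tensors across client state_dicts.
import torch

def fedavg(client_states: list[dict]) -> dict:
    keys = client_states[0].keys()
    return {k: torch.stack([s[k] for s in client_states]).mean(0) for k in keys}

global_model = torch.nn.Linear(16, 2)
clients = [torch.nn.Linear(16, 2) for _ in range(3)]
for c in clients:                     # each client would train locally here
    c.load_state_dict(global_model.state_dict())

global_model.load_state_dict(fedavg([c.state_dict() for c in clients]))
```
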
Comprehensive Study on Zero-Shot Text Classification Using Category Mapping
IEEE ACCESS, 2025, Vol. 13, pp. 23526-23546
Authors: Zhang, Kai; Zhang, Qiuxia; Wang, Chung-Che; Jang, Jyh-Shing Roger
Affiliations: Yiwu Ind & Commercial Coll, Yiwu 322000, Zhejiang, Peoples R China; Natl Taiwan Univ, Grad Inst Network & Multimedia, Taipei 106, Taiwan; Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei 106, Taiwan
Existing zero-shot text classification methods based on large pre-trained models with added prompts exhibit strong representational capacity and scalability but have relatively poor commercial applicability. Approache...
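
The prompt-based zero-shot classifiers such studies compare are commonly built on NLI models: each candidate label is turned into a hypothesis ("This text is about X.") and scored against the input as entailment. A minimal example with a stock checkpoint (the model, input, and labels are illustrative, not the paper's benchmark setup):

```python
# NLI-based zero-shot text classification with a standard pipeline.
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = clf("The team shipped the new GPU driver last night.",
             candidate_labels=["technology", "sports", "politics"])
print(result["labels"][0], round(result["scores"][0], 3))
```
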