
Refine Search Results

Document Type

  • 14,549 conference papers
  • 662 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Holdings

  • 15,352 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,015 Engineering
    • 10,349 Computer Science and Technology...
    • 5,460 Software Engineering
    • 1,467 Information and Communication Engineering
    • 956 Electrical Engineering
    • 892 Control Science and Engineering
    • 447 Biological Engineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 177 Biomedical Engineering (degree conferrable in...
    • 141 Electronic Science and Technology (degree conferrable...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,486 Natural Sciences
    • 1,156 Mathematics
    • 654 Physics
    • 520 Biology
    • 394 Statistics (degree conferrable in Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,427 Management
    • 1,756 Library, Information and Archives Management...
    • 759 Management Science and Engineering (degree conferrable...
    • 241 Business Administration
    • 106 Public Administration
  • 1,762 Literature
    • 1,710 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 515 Medicine
    • 303 Clinical Medicine
    • 286 Basic Medicine (degree conferrable in Medicine...
    • 113 Public Health and Preventive Medicine...
  • 279 Law
    • 249 Sociology
  • 239 Education
    • 226 Education
  • 100 Agriculture
  • 96 Economics
  • 10 Art Studies
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,552 natural language...
  • 1,789 natural language...
  • 953 computational li...
  • 741 semantics
  • 683 machine learning
  • 612 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 334 large language m...
  • 334 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 255 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 alibaba grp peop...
  • 31 school of artifi...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 27 peking universit...
  • 26 microsoft resear...
  • 26 language technol...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 26 wen ji-rong
  • 26 liu zhiyuan
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,307 English
  • 930 Other
  • 114 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search condition: "Any field = Conference on empirical methods in natural language processing"
15,353 records; showing entries 1131-1140
TREE OF UNCERTAIN THOUGHTS REASONING FOR LARGE LANGUAGE MODELS
49th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Authors: Mo, Shentong; Xin, Miao. Affiliations: Carnegie Mellon Univ, Pittsburgh, PA, USA; MBZUAI, Abu Dhabi, U Arab Emirates; Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
While the recently introduced Tree of Thoughts (ToT) has heralded advancements in allowing Large Language Models (LLMs) to reason through foresight and backtracking for global decision-making, it has overlooked the in...
Can we teach language models to gloss endangered languages?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ginn, Michael; Hulden, Mans; Palmer, Alexis. Affiliations: University of Colorado, United States; New College of Florida, United States
Interlinear glossed text (IGT) is a popular format in language documentation projects, where each morpheme is labeled with a descriptive annotation. Automating the creation of interlinear glossed text would be desirab...
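As background for the record above: interlinear glossed text pairs each morpheme-segmented word of a transcription with a descriptive gloss and adds a free translation. The snippet below is a minimal constructed illustration of that format in Python; the Spanish sentence, its segmentation, and the gloss labels are my own example and are not taken from the paper.

```python
# Constructed illustration of interlinear glossed text (IGT): each
# morpheme-segmented word in the transcription is paired with its gloss,
# and the sentence carries a free translation. Example data is illustrative only.
igt_example = {
    "transcription": "los perro-s corr-en",
    "gloss": "DET.PL dog-PL run-3PL",
    "translation": "The dogs are running.",
}

# Align each segmented word with its gloss, which is the prediction target
# for an automatic glossing model.
words = igt_example["transcription"].split()
glosses = igt_example["gloss"].split()
for word, gloss in zip(words, glosses):
    print(f"{word:10s} {gloss}")
```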
Exploring Design Choices for Building Language-Specific LLMs
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tejaswi, Atula; Gupta, Nilesh; Choi, Eunsol. Affiliation: Department of Computer Science, The University of Texas, Austin, United States
Despite rapid progress in large language models (LLMs), their performance on a vast majority of languages remains unsatisfactory. In this paper, we study building language-specific LLMs by adapting monolingual and mul...
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zhang, Chiyu; Doan, Khai Duy; Liao, Qisheng; Abdul-Mageed, Muhammad. Affiliations: Univ British Columbia, Deep Learning & Nat Language Proc Grp, Vancouver, BC, Canada; MBZUAI, Dept Nat Language Proc, Abu Dhabi, U Arab Emirates; MBZUAI, Dept Machine Learning, Abu Dhabi, U Arab Emirates
Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance in a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on v...
PIEClass: Weakly-Supervised Text Classification with Prompting and Noise-Robust Iterative Ensemble Training
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zhang, Yunyi; Jiang, Minhao; Meng, Yu; Zhang, Yu; Han, Jiawei. Affiliation: Univ Illinois, Urbana, IL 61801, USA
Weakly-supervised text classification trains a classifier using the label name of each target class as the only supervision, which largely reduces human annotation efforts. Most existing methods first use the label na...
UNO Arena for Evaluating Sequential Decision-Making Capability of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Qin, Zhanyue; Wang, Haochuan; Liu, Deyuan; Song, Ziyang; Fan, Cunhang; Lv, Zhao; Wu, Jinlin; Lei, Zhen; Tu, Zhiying; Chu, Dianhui; Yu, Xiaoyan; Sui, Dianbo. Affiliations: Harbin Institute of Technology, China; Anhui University, China; CAIR, HKISI-CAS, China; CASIA, China; UCAS, China; Beijing Institute of Technology, China
Sequential decision-making refers to algorithms that take into account the dynamics of the environment, where early decisions affect subsequent decisions. With large language models (LLMs) demonstrating powerful capab...
Adapter Pruning using Tropical Characterization
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Bhardwaj, Rishabh; Vaidya, Tushar; Poria, Soujanya. Affiliations: Singapore Univ Technol & Design, Singapore, Singapore; Nanyang Technol Univ, Singapore, Singapore
Adapters are widely popular parameter-efficient transfer learning approaches in natural language processing that insert trainable modules in between layers of a pre-trained language model. Apart from several heuristic...
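As context for the record above: the abstract describes adapters as small trainable modules inserted between the layers of a pre-trained language model. The sketch below shows a generic bottleneck adapter in PyTorch purely as background on what such a module looks like; it is not the paper's tropical-characterization pruning method, and the layer sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter: down-project, non-linearity, up-project,
    added back to the input through a residual connection. The hidden and
    bottleneck sizes are illustrative assumptions, not values from the paper."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return x + self.up(self.act(self.down(x)))

# During fine-tuning only the adapter parameters are trained; the surrounding
# pre-trained transformer layers stay frozen.
adapter = BottleneckAdapter()
hidden_states = torch.randn(2, 16, 768)   # (batch, sequence, hidden)
print(adapter(hidden_states).shape)       # torch.Size([2, 16, 768])
```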
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zhao, Shuai; Wen, Jinming; Tuan, Luu Anh; Zhao, Junbo; Fu, Jie. Affiliations: Jinan Univ, Guangzhou, Peoples R China; Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China; Nanyang Technol Univ, Singapore, Singapore; Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
The prompt-based learning paradigm, which bridges the gap between pre-training and fine-tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot settings. Despite being widely appli...
Based on the Semantics Analysis of the URL Identification and Malicious Code Detection
1st International Conference on Image Processing, Machine Learning and Pattern Recognition
Authors: Wang Mingxin. Affiliation: Wuhan Res Inst Posts & Telecommun, Wuhan, Peoples R China
The traditional malicious URL identification methods usually adopt blacklist technology, heuristic algorithm and machine learning algorithm. This paper considers that natural language processing technology can be intr...
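For context on the record above, which contrasts blacklist, heuristic, and machine-learning approaches to malicious URL identification: the sketch below illustrates a generic blacklist check plus a few common lexical URL features of the kind such heuristic/ML pipelines use. The blacklist entries, the threshold, and the feature set are illustrative assumptions, not the paper's semantics-based method.

```python
from urllib.parse import urlparse

# Hypothetical blacklist; real systems consult maintained threat feeds.
BLACKLIST = {"malicious.example.com", "phish.example.net"}

def lexical_features(url: str) -> dict:
    """A few lexical features commonly used by heuristic/ML URL classifiers."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_at_symbol": "@" in url,
        "uses_https": parsed.scheme == "https",
    }

def screen_url(url: str) -> str:
    host = urlparse(url).netloc
    if host in BLACKLIST:
        return "blocked (blacklist)"
    feats = lexical_features(url)
    # Toy heuristic; a real system would feed the features to a trained model.
    suspicious = feats["url_length"] > 75 or feats["has_at_symbol"]
    return "suspicious (heuristics)" if suspicious else "allowed"

print(screen_url("https://malicious.example.com/login"))
print(screen_url("http://user@203.0.113.7/very/long/path"))
```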
Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Boxin; Ping, Wei; Xu, Peng; McAfee, Lawrence; Liu, Zihan; Shoeybi, Mohammad; Dong, Yi; Kuchaiev, Oleksii; Li, Bo; Xiao, Chaowei; Anandkumar, Anima; Catanzaro, Bryan. Affiliations: UIUC, Urbana, IL 61801, USA; NVIDIA, Santa Clara, CA 95051, USA; Univ Wisconsin, Madison, WI 53706, USA
Large decoder-only language models (LMs) can be largely improved in terms of perplexity by retrieval (e.g., RETRO), but its impact on text generation quality and downstream task accuracy is unclear. Thus, it is still ...