
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (degree conferrable...)
    • 144 Electronic Science and Technology (degree conferrable...)
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (degree conferrable in science...)
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archival Management...
    • 760 Management Science and Engineering (degree conferrable...)
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (degree conferrable in medicine...)
    • 113 Public Health and Preventive Medicine...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institution

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 gaoling school o...
  • 33 stanford univers...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 26 microsoft resear...
  • 26 language technol...
  • 26 peking universit...

Author

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,363 records in total; showing results 811–820
Abstraction-of-Thought Makes Language Models Better Reasoners
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hong, Ruixin; Zhang, Hongming; Pan, Xiaoman; Yu, Dong; Zhang, Changshui
Affiliations: Department of Automation, Tsinghua University, Beijing, China; Tencent AI Lab, Seattle, United States
Abstract reasoning, the ability to reason from the abstract essence of a problem, serves as a key to generalization in human reasoning. However, eliciting language models to perform reasoning with abstraction remains ...

GRASS: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Muhamed, Aashiq; Li, Oscar; Woodruff, David; Diab, Mona; Smith, Virginia
Affiliations: Language Technologies Institute, Carnegie Mellon University, United States; Machine Learning Department, Carnegie Mellon University, United States; Department of Computer Science, Carnegie Mellon University, United States
Large language model (LLM) training and finetuning are often bottlenecked by limited GPU memory. While existing projection-based optimization methods address this by projecting gradients into a lower-dimensional subsp...

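For background, projection-based optimizers save memory by keeping optimizer state in a low-rank subspace of each layer's gradient rather than in the full parameter space. The sketch below illustrates only that generic dense-projection baseline (GaLore-style), not GRASS's structured sparse projections; all function names are illustrative.

```python
import torch

def project_gradient(grad: torch.Tensor, k: int):
    """Compress a 2-D gradient into a rank-k subspace.

    The optimizer would keep its state (e.g. Adam moments) in the
    small k x n space instead of the full m x n space, which is
    where the memory saving comes from.
    """
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :k]                 # m x k orthonormal basis of top singular directions
    return P, P.T @ grad         # k x n compressed gradient

def restore_update(P: torch.Tensor, low_rank_update: torch.Tensor) -> torch.Tensor:
    """Map a low-rank optimizer update back to full parameter space."""
    return P @ low_rank_update   # m x n

# Toy usage: compress a 1024 x 1024 gradient to rank 32.
g = torch.randn(1024, 1024)
P, g_low = project_gradient(g, k=32)
full_update = restore_update(P, g_low)   # rank-32 approximation of g
```
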
Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Schröder, Christopher; Heyer, Gerhard
Affiliations: TUD Dresden University of Technology, Germany; Leipzig University, Germany
Active learning is an iterative labeling process used to obtain a small labeled subset despite the absence of labeled data, thereby enabling a model to be trained for supervised tasks such as text classification. ...

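For readers unfamiliar with the setup, the pool-based active learning loop the abstract refers to can be sketched in a few lines. This toy version uses least-confidence sampling on synthetic data; it is background for the paper, not its self-training method, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # synthetic labels

labeled = list(rng.choice(len(X), size=20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                                    # five acquisition rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[unlabeled])
    # Least-confidence sampling: query the examples the model is least sure about.
    uncertainty = 1.0 - probs.max(axis=1)
    query = [unlabeled[i] for i in np.argsort(-uncertainty)[:10]]
    labeled += query                                  # the oracle "labels" them
    unlabeled = [i for i in unlabeled if i not in query]

print("labeled pool size:", len(labeled))
```
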
Structure Guided Prompt: Instructing Large Language Model in Multi-Step Reasoning by Exploring Graph Structure of the Text
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cheng, Kewei; Ahmed, Nesreen K.; Willke, Theodore L.; Sun, Yizhou
Affiliations: Amazon, United States; Cisco Outshift, Hungary; Intel Labs, United States; UCLA, United States
Although Large Language Models (LLMs) excel at straightforward reasoning tasks, they frequently struggle when confronted with more complex multi-step reasoning, due to a range of factors. Fir...

LONGEMBED: Extending Embedding Models for Long Context Retrieval
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhu, Dawei; Wang, Liang; Yang, Nan; Song, Yifan; Wu, Wenhao; Wei, Furu; Li, Sujian
Affiliations: School of Computer Science, Peking University, China; National Key Laboratory for Multimedia Information Processing, Peking University, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, China; Microsoft Corporation, United States
Embedding models play a pivotal role in modern NLP applications such as document retrieval. However, existing embedding models are limited to encoding short documents of typically 512 tokens, restrained from applicati...

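The 512-token ceiling mentioned here is commonly worked around by chunking a document and pooling the chunk embeddings, which is the naive baseline that context-extension work of this kind aims to improve on. A minimal sketch, assuming a placeholder `embed` callable that maps a string to a vector:

```python
import numpy as np

def embed_long_document(text, embed, window=512, stride=256):
    """Embed a document longer than the encoder window by chunking.

    `embed` is a placeholder: any callable mapping a string to a 1-D
    numpy vector. Overlapping windows are embedded separately and
    mean-pooled into a single document vector.
    """
    tokens = text.split()  # crude whitespace "tokenization" for illustration
    chunks = [" ".join(tokens[i:i + window]) for i in range(0, len(tokens), stride)]
    if not chunks:
        chunks = [""]
    vectors = np.stack([embed(c) for c in chunks])
    return vectors.mean(axis=0)

# Toy usage with a stand-in embedder (hash-seeded, fixed 8-dim output).
def toy_embed(s):
    rng = np.random.default_rng(abs(hash(s)) % (2 ** 32))
    return rng.normal(size=8)

doc_vec = embed_long_document("word " * 2000, toy_embed)
print(doc_vec.shape)  # (8,)
```
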
FreeAL: Towards Human-Free Active Learning in the Era of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Xiao, Ruixuan; Dong, Yiwen; Zhao, Junbo; Wu, Runze; Lin, Minmin; Chen, Gang; Wang, Haobo
Affiliations: Zhejiang University, Hangzhou, People's Republic of China; NetEase Fuxi AI Lab, Hangzhou, People's Republic of China
Collecting high-quality labeled data for model training is notoriously time-consuming and labor-intensive for various NLP tasks. While copious solutions, such as active learning for small language models (SLMs) and pr...

FOLIO: Natural Language Reasoning with First-Order Logic
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Han, Simeng; Schoelkopf, Hailey; Zhao, Yilun; Qi, Zhenting; Riddell, Martin; Zhou, Wenfei; Coady, James; Peng, David; Qiao, Yujie; Benson, Luke; Sun, Lucy; Wardle-Solano, Alex; Szabo, Hannah; Zubova, Ekaterina; Burtell, Matthew; Fan, Jonathan; Liu, Yixin; Wong, Brian; Sailor, Malcolm; Ni, Ansong; Nan, Linyong; Kasai, Jungo; Yu, Tao; Zhang, Rui; Fabbri, Alexander R.; Kryscinski, Wojciech; Yavuz, Semih; Liu, Ye; Lin, Xi Victoria; Joty, Shafiq; Zhou, Yingbo; Xiong, Caiming; Ying, Rex; Cohan, Arman; Radev, Dragomir
Affiliations: Yale University, United States; Harvard University, United States; NVIDIA, United States; Iowa City West High School, United States; University of Washington, United States; University of Hong Kong, Hong Kong; Penn State University, United States; Meta AI, United States; Salesforce Research, United States
Large language models (LLMs) have achieved remarkable performance on a variety of natural language understanding tasks. However, existing benchmarks are inadequate in measuring the complex logical reasoning capabiliti...

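To make the task format concrete, benchmarks for first-order-logic reasoning pair natural-language premises with a conclusion to be classified as true, false, or unknown given those premises. A toy item in that style (not drawn from FOLIO itself):

```latex
% Premises with their FOL translations, and a conclusion to classify.
\begin{align*}
\text{P1: All dogs are mammals.} &\quad \forall x\,(\mathrm{Dog}(x) \rightarrow \mathrm{Mammal}(x))\\
\text{P2: Rex is a dog.}         &\quad \mathrm{Dog}(\mathrm{rex})\\
\text{C: Rex is a mammal.}       &\quad \mathrm{Mammal}(\mathrm{rex})
\quad \Rightarrow \quad \text{True, by modus ponens on P1 and P2.}
\end{align*}
```
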
Distractor Generation in Multiple-Choice Tasks: A Survey of Methods, Datasets, and Evaluation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Alhazmi, Elaf; Sheng, Quan Z.; Zhang, Wei Emma; Zaib, Munazza; Alhazmi, Ahoud
Affiliations: School of Computing, Macquarie University, Australia; School of Computer and Mathematical Sciences, The University of Adelaide, Australia; College of Engineering and Computing in Al-Lith, Umm Al-Qura University, Saudi Arabia
The distractor generation task focuses on generating incorrect but plausible options for objective questions such as fill-in-the-blank and multiple-choice questions. This task is widely utilized in educational setting...

SEEKR: Selective Attention-Guided Knowledge Retention for Continual Learning of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: He, Jinghan; Guo, Haiyun; Zhu, Kuan; Zhao, Zihan; Tang, Ming; Wang, Jinqiao
Affiliations: Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Peng Cheng Laboratory, China; Wuhan AI Research, China; Chongqing University, China
Continual learning (CL) is crucial for language models to dynamically adapt to evolving real-world demands. To mitigate the catastrophic forgetting problem in CL, data replay has been proven a simple and effective ...

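Data replay, the strategy this abstract builds on, simply mixes stored old-task examples into new-task training batches. A minimal reservoir-buffer sketch follows; SEEKR's selective attention distillation is not reproduced here, and all names are illustrative.

```python
import random

class ReplayBuffer:
    """Reservoir-style buffer holding a uniform sample of past-task examples."""

    def __init__(self, capacity: int):
        self.capacity, self.seen, self.items = capacity, 0, []

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:  # reservoir sampling keeps each seen example with equal probability
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k: int):
        return random.sample(self.items, min(k, len(self.items)))

# Training on a new task: mix replayed old-task examples into each batch.
buffer = ReplayBuffer(capacity=1000)
for ex in ["old_task_ex1", "old_task_ex2", "old_task_ex3"]:
    buffer.add(ex)
new_batch = ["new_task_ex1", "new_task_ex2"]
mixed_batch = new_batch + buffer.sample(2)   # what the model actually trains on
```
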
Improving Referring Ability for Biomedical Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jiang, Junfeng; Fei, Cheng; Aizawa, Akiko
Affiliations: The University of Tokyo, Japan; Kyoto University, Japan; National Institute of Informatics, Japan
Existing auto-regressive large language models (LLMs) are primarily trained on documents from general domains. In the biomedical domain, continual pre-training is a prevalent method for domain adaptation to inject ...