
Refine Results

Document Type

  • 14,549 conference papers
  • 662 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection

  • 15,352 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,015 Engineering
    • 10,349 Computer Science and Technology...
    • 5,460 Software Engineering
    • 1,467 Information and Communication Engineering
    • 956 Electrical Engineering
    • 892 Control Science and Engineering
    • 447 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 177 Biomedical Engineering (degree grantable...
    • 141 Electronic Science and Technology (degree grantable...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,486 Sciences
    • 1,156 Mathematics
    • 654 Physics
    • 520 Biology
    • 394 Statistics (degree grantable in Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,427 Management
    • 1,756 Library, Information and Archives Manag...
    • 759 Management Science and Engineering (degree...
    • 241 Business Administration
    • 106 Public Administration
  • 1,762 Literature
    • 1,710 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 515 Medicine
    • 303 Clinical Medicine
    • 286 Basic Medicine (degree grantable in Medicine...
    • 113 Public Health and Preventive Medi...
  • 279 Law
    • 249 Sociology
  • 239 Education
    • 226 Education
  • 100 Agriculture
  • 96 Economics
  • 10 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,552 篇 natural language...
  • 1,789 篇 natural language...
  • 953 篇 computational li...
  • 741 篇 semantics
  • 683 篇 machine learning
  • 612 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 334 篇 large language m...
  • 334 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 255 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institution

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 alibaba grp peop...
  • 31 篇 school of artifi...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 27 篇 peking universit...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...

Author

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 26 篇 wen ji-rong
  • 26 篇 liu zhiyuan
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,307 English
  • 930 Other
  • 114 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: any field = "Conference on empirical methods in natural language processing"
15,353 records; showing results 1221–1230
ConvKGYarn: Spinning Configurable and Scalable Conversational Knowledge Graph QA Datasets with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Pradeep, Ronak; Lee, Daniel; Mousavi, Ali; Pound, Jeff; Sang, Yisi; Lin, Jimmy; Ilyas, Ihab; Potdar, Saloni; Arefiyan, Mostafa; Li, Yunyao (Apple, United States; University of Waterloo, Canada; Adobe, United States)
The rapid evolution of Large Language Models (LLMs) and conversational assistants necessitates dynamic, scalable, and configurable conversational datasets for training and evaluation. These datasets must accommodate d...
Unveiling Narrative Reasoning Limits of Large Language Models with Trope in Movie Synopses
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Su, Hung-Ting; Hsu, Ya-Ching; Lin, Xudong; Shi, Xiang-Qian; Niu, Yulei; Hsu, Han-Yuan; Lee, Hung-Yi; Hsu, Winston H. (National Taiwan University, Taiwan; Columbia University, United States; Mobile Drive Technology)
Large language models (LLMs) equipped with chain-of-thought (CoT) prompting have shown significant multi-step reasoning capabilities in factual content like mathematics, commonsense, and logic. However, their perform...
Natural Language Processing (NLP) Techniques: Usability in Human-Computer Interactions
6th International Conference on Natural Language Processing (ICNLP)
Authors: Das, Satyesh; Das, Divyesh (Rajiv Gandhi Inst Petr Technol, Comp Sci & Engn, Amethi, Uttar Pradesh, India; Thapar Inst Engn & Technol, Elect & Comp Engn, Patiala, Punjab, India)
The study sheds light on the concept of natural language processing (NLP), which enables Human-Computer Interaction (HCI). Recent advances in Artificial Intelligence (AI) and brain architecture have led to significant ...
Training-free Deep Concept Injection Enables Language Models for Video Question Answering
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Xudong; Li, Manling; Zemel, Richard; Ji, Heng; Chang, Shih-Fu (Columbia University, United States; University of Illinois Urbana-Champaign, United States)
Recently, enabling pretrained language models (PLMs) to perform zero-shot crossmodal tasks such as video question answering has been extensively studied. A popular approach is to learn a projection network that projec...
Monotonic Paraphrasing Improves Generalization of Language Model Prompting
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Qin; Wang, Fei; Xu, Nan; Yan, Tianyi; Meng, Tao; Chen, Muhao (UC Davis, United States; USC, United States; UCLA, United States)
Performance of large language models (LLMs) may vary with different prompts or instructions for even the same task. One commonly recognized factor for this phenomenon is the model's familiarity with the given prompt or ...
The State of the Art of Large Language Models on Chartered Financial Analyst Exams
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mahfouz, Mahmoud; Callanan, Ethan; Sibue, Mathieu; Papadimitriou, Antony; Ma, Zhiqiang; Liu, Xiaomo; Zhu, Xiaodan (J.P. Morgan AI Research; Queen’s University, Canada)
The Chartered Financial Analyst (CFA) program is one of the most widely recognized financial certifications globally. In this work, we test a variety of state-of-the-art large language models (LLMs) on mock CFA exams ...
How Reliable Are Automatic Evaluation Methods for Instruction-Tuned LLMs?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Doostmohammadi, Ehsan; Holmström, Oskar; Kuhlmann, Marco (Linköping University, Sweden)
Work on instruction-tuned Large Language Models (LLMs) has used automatic methods based on text overlap and LLM judgments as cost-effective alternatives to human evaluation. In this paper, we perform a meta-evaluation...
Advancing Adversarial Suffix Transfer Learning on Aligned Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Hongfu; Xie, Yuxi; Wang, Ye; Shieh, Michael (National University of Singapore, Singapore)
Large Language Models (LLMs) face safety concerns due to potential misuse by malicious users. Recent red-teaming efforts have identified adversarial suffixes capable of jail-breaking LLMs using the gradient-based s...
Mentor-KD: Making Small Language Models Better Multi-step Reasoners
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lee, Hojae; Kim, Junho; Lee, SangKeun (Department of Computer Science and Engineering, Republic of Korea; Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea)
Large Language Models (LLMs) have displayed remarkable performance across various complex tasks by leveraging Chain-of-Thought (CoT) prompting. Recently, studies have proposed a Knowledge Distillation (KD) approach, ...
Optimized Speculative Sampling for GPU Hardware Accelerators
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wagner, Dominik; Lee, Seanie; Baumann, Ilja; Seeberger, Philipp; Riedhammer, Korbinian; Bocklet, Tobias (Technische Hochschule Nürnberg Georg Simon Ohm, Germany; KAIST, Republic of Korea)
In this work, we optimize speculative sampling for parallel hardware accelerators to improve sampling speed. We notice that substantial portions of the intermediate matrices necessary for speculative sampling can be c...