
Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 Engineering
    • 10,283 Computer Science and Technology...
    • 5,409 Software Engineering
    • 1,461 Information and Communication Engineering
    • 953 Electrical Engineering
    • 879 Control Science and Engineering
    • 446 Biological Engineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (degrees may be conferred in...
    • 141 Electronic Science and Technology (degrees may be conferred in...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (degrees may be conferred in Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,417 Management
    • 1,748 Library, Information and Archives Management...
    • 758 Management Science and Engineering (...
    • 240 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (degrees may be conferred in Medicine...
    • 112 Public Health and Preventive Medicine...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,534 natural language...
  • 1,768 natural language...
  • 952 computational li...
  • 741 semantics
  • 680 machine learning
  • 609 deep learning
  • 520 natural language...
  • 347 computational mo...
  • 336 training
  • 333 accuracy
  • 331 sentiment analys...
  • 329 large language m...
  • 320 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 261 speech recogniti...
  • 252 transformers
  • 235 neural networks
  • 217 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 university of sc...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 27 language technol...
  • 27 peking universit...
  • 26 microsoft resear...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 27 lapata mirella
  • 26 wen ji-rong
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Languages

  • 14,663 English
  • 481 Other
  • 105 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search condition: "Any field = Conference on empirical methods in natural language processing"
15,258 records in total; showing records 431-440 below.
Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mathur, Leena; Liang, Paul Pu; Morency, Louis-Philippe. Language Technologies Institute, School of Computer Science, Carnegie Mellon University, United States; Machine Learning Department, School of Computer Science, Carnegie Mellon University, United States
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal that involves creating agents that can sense, perceive, reason about, learn from, and respond to affect, behavior, a...
ConU: Conformal Uncertainty in Large Language Models with Correctness Coverage Guarantees
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Zhiyuan; Duan, Jinhao; Cheng, Lu; Zhang, Yue; Wang, Qingni; Shi, Xiaoshuang; Xu, Kaidi; Shen, Hengtao; Zhu, Xiaofeng. School of Computer Science and Engineering, University of Electronic Science and Technology of China, China; Department of Computer Science, Drexel University, United States; Department of Computer Science, University of Illinois Chicago, United States
Uncertainty quantification (UQ) in natural language generation (NLG) tasks remains an open challenge, exacerbated by the closed-source nature of the latest large language models (LLMs). This study investigates applyin...
Adaptive Gating in Mixture-of-Experts based Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Jiamin; Su, Qiang; Yang, Yitao; Jiang, Yimin; Wang, Cong; Xu, Hong. City Univ Hong Kong, Hong Kong, Peoples R China; Chinese Univ Hong Kong, Hong Kong, Peoples R China
Large language models, such as OpenAI's ChatGPT, have demonstrated exceptional language understanding capabilities in various NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solu...
Impact of Sample Selection on In-Context Learning for Entity Extraction from Scientific Writing
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Bolucu, Necva; Rybinski, Maciej; Wan, Stephen. CSIRO Data61, Eveleigh, NSW, Australia
Prompt-based usage of Large Language Models (LLMs) is an increasingly popular way to tackle many well-known natural language problems. This trend is due, in part, to the appeal of the In-Context Learning (ICL) prompt ...
Faithful and Plausible Natural Language Explanations for Image Classification: A Pipeline Approach
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wojciechowski, Adam; Lango, Mateusz; Dušek, Ondřej. Poznan University of Technology, Faculty of Computing and Telecommunications, Poland; Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic; Samsung AI Center, Warsaw, Poland
Existing explanation methods for image classification struggle to provide faithful and plausible explanations. This paper addresses this issue by proposing a post-hoc natural language explanation method that can be ap...
ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xiao, Yunze; Hu, Yujia; Choo, Kenny Tsu Wei; Lee, Roy Ka-Wei. Carnegie Mellon University Qatar, Qatar; Singapore University of Technology and Design, Singapore
Detecting hate speech and offensive language is essential for maintaining a safe and respectful digital environment. This study examines the limitations of state-of-the-art large language models (LLMs) in identifying ...
Fine-grained Pluggable Gradient Ascent for Knowledge Unlearning in Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Feng, Xiaohua; Chen, Chaochao; Li, Yuyuan; Lin, Zibin. Zhejiang University, China; Hangzhou Dianzi University, China
Pre-trained language models acquire knowledge from vast amounts of text data, which can inadvertently contain sensitive information. To mitigate the presence of undesirable knowledge, the task of knowledge unlearning ...
The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kirk, Hannah Rose; Bean, Andrew M.; Vidgen, Bertie; Rottger, Paul; Hale, Scott A. Univ Oxford, Oxford, England; Bocconi Univ, Milan, Italy; Meedan, San Francisco, CA, USA
Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and incorporate feedback in a way that is efficient, effective and unbiased, especiall...
A Thorough Examination of Decoding Methods in the Era of LLMs
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shi, Chufan; Yang, Haoran; Cai, Deng; Zhang, Zhisong; Wang, Yifan; Yang, Yujiu; Lam, Wai. Tsinghua University, China; The Chinese University of Hong Kong, Hong Kong; Tencent AI Lab, China
Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers. Prior research on decoding methods, primarily focusing on task-specific models, may not...
On the Robustness of Editing Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ma, Xinbei; Ju, Tianjie; Qiu, Jiyang; Zhang, Zhuosheng; Zhao, Hai; Liu, Lifeng; Wang, Yulong. School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Shanghai Key Laboratory of Trusted Data Circulation and Governance in Web3, China; Baichuan Intelligent Technology, China
Large language models (LLMs) have played a pivotal role in building communicative AI, yet they encounter the challenge of efficient updates. Model editing enables the manipulation of specific knowledge memories and th...