
Refine Search Results

Document Type

  • 14,413 conference papers
  • 650 journal articles
  • 101 books
  • 40 theses/dissertations
  • 1 technical report

Collection

  • 15,204 electronic documents
  • 1 print holding

Date Distribution

Discipline Classification

  • 10,937 Engineering
    • 10,278 Computer Science and Technology...
    • 5,404 Software Engineering
    • 1,460 Information and Communication Engineering
    • 953 Electrical Engineering
    • 875 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (grantable...
    • 141 Electronic Science and Technology (grantable...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (grantable in science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,413 Management
    • 1,747 Library, Information and Archives Manage...
    • 754 Management Science and Engineering (grantable...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (grantable in medicine...
    • 112 Public Health and Preventive Medici...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,523 natural language...
  • 1,768 natural language...
  • 952 computational li...
  • 736 semantics
  • 680 machine learning
  • 606 deep learning
  • 520 natural language...
  • 345 computational mo...
  • 334 training
  • 331 sentiment analys...
  • 330 accuracy
  • 325 large language m...
  • 320 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 263 speech recogniti...
  • 250 transformers
  • 235 neural networks
  • 217 iterative method...
  • 211 support vector m...

Institution

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 carnegie mellon ...
  • 44 tsinghua univers...
  • 42 zhejiang univers...
  • 42 national univers...
  • 38 nanyang technolo...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 university of sc...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 28 peking universit...
  • 27 language technol...
  • 26 microsoft resear...

Author

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 27 lapata mirella
  • 26 wen ji-rong
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,611 English
  • 481 Other
  • 104 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,205 records; showing 261-270
LoRA-Guard: Parameter-Efficient Guardrail Adaptation for Content Moderation of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Elesedy, Hayder; Esperança, Pedro M.; Oprea, Silviu Vlad; Ozay, Mete (United Kingdom)
Guardrails have emerged as a comprehensive method of content moderation for large language models (LLMs), complementing safety alignment from fine-tuning. However, existing model-based guardrails are too memory intensiv...
Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: van Dijk, Bram; Kouwenhoven, Tom; Spruit, Marco; van Duijn, Max (Leiden Inst Adv Comp Sci, Leiden, Netherlands; Leiden Univ Med Ctr, Leiden, Netherlands)
Current Large Language Models (LLMs) are unparalleled in their ability to generate grammatically correct, fluent text. LLMs are appearing rapidly, and debates on LLM capacities have taken off, but reflection is laggin...
CLAIR: Evaluating Image Captions with Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Chan, David M.; Petryk, Suzanne; Gonzalez, Joseph E.; Darrell, Trevor; Canny, John (Univ Calif Berkeley, Berkeley, CA 94720, USA)
The evaluation of machine-generated image captions poses an interesting yet persistent challenge. Effective evaluation measures must consider numerous dimensions of similarity, including semantic relevance, visual str...
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Moon, Seungwhan; Madotto, Andrea; Lin, Zhaojiang; Nagarajan, Tushar; Smith, Matt; Jain, Shashank; Yeh, Chun-Fu; Murugesan, Prakash; Heidari, Peyman; Liu, Yue; Srinet, Kavya; Damavandi, Babak; Kumar, Anuj (FAIR Meta & Meta Reality Labs, United States)
We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e. text, image, video, audio, IMU motion sensor), and generates textual responses. AnyMAL ...
Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Zheng; Ziser, Yftah; Cohen, Shay B. (Institute for Language, Cognition and Computation, University of Edinburgh, United Kingdom; Nvidia Research, United States)
Fine-tuning pre-trained large language models (LLMs) on a diverse array of tasks has become a common approach for building models that can solve various natural language processing (NLP) tasks. However, where and to w...
Semformer: Transformer Language Models with Semantic Planning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yin, Yongjing; Ding, Junran; Song, Kai; Zhang, Yue (Zhejiang University, China; School of Engineering, Westlake University, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, China; ByteDance, China)
Next-token prediction serves as the dominant component in current neural language models. During the training phase, the model employs teacher forcing, which predicts tokens based on all preceding ground truth tokens. However, this approach...
DALE: Generative Data Augmentation for Low-Resource Legal NLP
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ghosh, Sreyan; Evuru, Chandra Kiran; Kumar, Sonal; Ramaneswaran, S.; Sakshi, S.; Tyagi, Utkarsh; Manocha, Dinesh (Univ Maryland, College Park, MD 20742, USA; UMass, Amherst, MA, USA; NVIDIA, Bangalore, Karnataka, India)
We present DALE, a novel and effective generative Data Augmentation framework for low-resource LEgal NLP. DALE addresses the challenges existing frameworks pose in generating effective data augmentations of legal docum...
Language models and brains align due to more than next-word prediction and word-level information
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Merlin, Gabriele; Toneva, Mariya (MPI for Software Systems, Saarbrücken, Germany)
Pretrained language models have been shown to significantly predict brain recordings of people comprehending language. Recent work suggests that the prediction of the next word is a key mechanism that contributes to this alignment. What is no...
Simul-MuST-C: Simultaneous Multilingual Speech Translation Corpus Using Large language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Makinae, Mana; Sakai, Yusuke; Kamigaito, Hidetaka; Watanabe, Taro (Nara Institute of Science and Technology, Japan)
Simultaneous Speech Translation (SiST) begins translating before the entire source input is received, making it crucial to balance quality and latency. In real interpreting situations, interpreters manage this simulta...
ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Waghjale, Siddhant; Veerendranath, Vishruth; Wang, Zora Zhiruo; Fried, Daniel (Language Technologies Institute, Carnegie Mellon University, United States)
Although large language models (LLMs) have been largely successful in generating functionally correct programs, conditioning models to produce efficient solutions while ensuring correctness remains a challenge. Furthe...