Refine Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 dissertations

Holdings

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology...
    • 74 Biomedical Engineering...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archival Management...
    • 467 Management Science and Engineering...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topic

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institution

  • 74 carnegie mellon ...
  • 36 national univers...
  • 34 carnegie mellon ...
  • 34 language technol...
  • 34 institute for na...
  • 33 university of wa...
  • 33 school of comput...
  • 32 tsinghua univers...
  • 31 university of ch...
  • 30 nanyang technolo...
  • 30 stanford univers...
  • 29 zhejiang univers...
  • 27 alibaba grp peop...
  • 26 gaoling school o...
  • 26 carnegie mellon ...
  • 25 harbin institute...
  • 25 peking universit...
  • 25 natl univ singap...
  • 24 allen inst artif...
  • 23 the chinese univ...

Author

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 29 zhao jun
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 26 gurevych iryna
  • 25 vulic ivan
  • 22 huang xuanjing
  • 21 chang kai-wei
  • 21 liu kang
  • 21 zhang yue
  • 21 zhang qi
  • 20 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian

Search query: Any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing results 161-170

MMTE: Corpus and Metrics for Evaluating Machine Translation Quality of Metaphorical Language
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Shun; Zhang, Ge; Wu, Han; Loakman, Tyler; Huang, Wenhao; Lin, Chenghua (Department of Computer Science, The University of Sheffield, United Kingdom; Department of Computer Science, The University of Manchester, United Kingdom; Department of Computer Science, University of Waterloo, Canada; School of Foreign Studies, UIBE, Beijing, China; 01.AI, Beijing, China)
Machine Translation (MT) has developed rapidly since the release of Large Language Models, and current MT evaluation is performed through comparison with reference human translations or by predicting quality scores fro...

Exploring the Learning Capabilities of Language Models using LEVERWORLDS
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wagner, Eitan; Feder, Amir; Abend, Omri (Hebrew University of Jerusalem, Israel; Columbia University, United States)
Learning a model of a stochastic setting often involves learning both general structure rules and specific properties of the instance. This paper investigates the interplay between learning the general and the specifi...

Speechworthy Instruction-tuned Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cho, Hyundong; Jedema, Nicolaas; Ribeiro, Leonardo F.R.; Sharma, Karishma; Szekely, Pedro; Moschitti, Alessandro; Janssen, Ruben; May, Jonathan (University of Southern California, Information Sciences Institute, United States; Amazon, United States)
Current instruction-tuned language models are exclusively trained with textual preference data and thus are often not aligned with the unique requirements of other modalities, such as speech. To better align language ...

Neuron-Level Knowledge Attribution in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yu, Zeping; Ananiadou, Sophia (Department of Computer Science, National Centre for Text Mining, The University of Manchester, United Kingdom)
Identifying important neurons for final predictions is essential for understanding the mechanisms of large language models. Due to computational constraints, current attribution techniques struggle to operate at neuro...

Reconsidering Sentence-Level Sign Language Translation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tanzer, Garrett; Shengelia, Maximus; Harrenstien, Ken; Uthus, David (Google, United States; Rochester Institute of Technology, United States)
Historically, sign language machine translation has been posed as a sentence-level task: datasets consisting of continuous narratives are chopped up and presented to the model as isolated clips. In this work, we explo...

Don't Forget Your Reward Values: Language Model Alignment via Value-based Calibration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mao, Xin; Li, Feng-Lin; Xu, Huimin; Zhang, Wei; Chen, Wang; Luu, Anh Tuan (Nanyang Technological University, Singapore; Shopee Pte. Ltd., Singapore; SEA Group, Singapore)
While Reinforcement Learning from Human Feedback (RLHF) significantly enhances the generation quality of Large Language Models (LLMs), recent studies have raised concerns regarding the complexity and instability assoc...

Studying and Mitigating Biases in Sign Language Understanding Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Atwell, Katherine; Bragg, Danielle; Alikhani, Malihe (Northeastern University, United States; Microsoft Research, United States)
Ensuring that the benefits of sign language technologies are distributed equitably among all community members is crucial. Thus, it is important to address potential biases and inequities that may arise from the desig...

Knowledge Graph Enhanced Large Language Model Editing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Mengqi; Ye, Xiaotian; Liu, Qiang; Ren, Pengjie; Wu, Shu; Chen, Zhumin (School of Computer Science and Technology, Shandong University, China; School of Computer Science, Beijing University of Posts and Telecommunications, China; Institute of Automation, Chinese Academy of Sciences, China)
Large language models (LLMs) are pivotal in advancing natural language processing (NLP) tasks, yet their efficacy is hampered by inaccuracies and outdated knowledge. Model editing emerges as a promising solution to ad...

OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Lu; Zhao, Tiancheng; Ying, Heting; Ma, Yibo; Lee, Kyusong (Om AI Research, Binjiang Institute of Zhejiang University, China)
Recent advancements in Large Language Models (LLMs) have expanded their capabilities to multimodal contexts, including comprehensive video understanding. However, processing extensive videos such as 24-hour CCTV foota...

Grounding Language in Multi-Perspective Referential Communication
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tang, Zineng; Mao, Lingjun; Suhr, Alane (University of California, Berkeley, United States)
We introduce a task and dataset for referring expression generation and comprehension in multi-agent embodied environments. In this task, two agents in a shared scene must take into account one another's visual perspective, wh...