
Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection Scope

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 papers: Engineering
    • 10,283 papers: Computer Science and Technology...
    • 5,409 papers: Software Engineering
    • 1,461 papers: Information and Communication Engineering
    • 953 papers: Electrical Engineering
    • 879 papers: Control Science and Engineering
    • 446 papers: Bioengineering
    • 221 papers: Cyberspace Security
    • 220 papers: Chemical Engineering and Technology
    • 186 papers: Mechanical Engineering
    • 174 papers: Biomedical Engineering (may confer...
    • 141 papers: Electronic Science and Technology (may confer...
    • 100 papers: Instrument Science and Technology
    • 100 papers: Safety Science and Engineering
  • 2,473 papers: Science
    • 1,150 papers: Mathematics
    • 649 papers: Physics
    • 518 papers: Biology
    • 391 papers: Statistics (may confer Science,...
    • 241 papers: Systems Science
    • 232 papers: Chemistry
  • 2,417 papers: Management
    • 1,748 papers: Library, Information and Archives Management
    • 758 papers: Management Science and Engineering (may...
    • 240 papers: Business Administration
    • 104 papers: Public Administration
  • 1,761 papers: Literature
    • 1,709 papers: Foreign Languages and Literatures
    • 184 papers: Chinese Language and Literature
  • 510 papers: Medicine
    • 299 papers: Clinical Medicine
    • 282 papers: Basic Medicine (may confer Medicine...
    • 112 papers: Public Health and Preventive Medicine
  • 277 papers: Law
    • 249 papers: Sociology
  • 237 papers: Education
    • 224 papers: Education
  • 100 papers: Agriculture
  • 97 papers: Economics
  • 9 papers: Art
  • 7 papers: Philosophy
  • 4 papers: Military Science

Topics

  • 3,534 papers: natural language...
  • 1,768 papers: natural language...
  • 952 papers: computational li...
  • 741 papers: semantics
  • 680 papers: machine learning
  • 609 papers: deep learning
  • 520 papers: natural language...
  • 347 papers: computational mo...
  • 336 papers: training
  • 333 papers: accuracy
  • 331 papers: sentiment analys...
  • 329 papers: large language m...
  • 320 papers: feature extracti...
  • 311 papers: data mining
  • 290 papers: speech processin...
  • 261 papers: speech recogniti...
  • 252 papers: transformers
  • 235 papers: neural networks
  • 217 papers: iterative method...
  • 212 papers: support vector m...

Institutions

  • 85 papers: carnegie mellon ...
  • 51 papers: university of ch...
  • 45 papers: tsinghua univers...
  • 45 papers: carnegie mellon ...
  • 43 papers: zhejiang univers...
  • 43 papers: national univers...
  • 38 papers: nanyang technolo...
  • 36 papers: university of wa...
  • 35 papers: univ chinese aca...
  • 34 papers: university of sc...
  • 34 papers: carnegie mellon ...
  • 33 papers: stanford univers...
  • 32 papers: gaoling school o...
  • 32 papers: school of artifi...
  • 32 papers: alibaba grp peop...
  • 29 papers: tsinghua univ de...
  • 28 papers: harbin institute...
  • 27 papers: language technol...
  • 27 papers: peking universit...
  • 26 papers: microsoft resear...

Authors

  • 55 papers: zhou guodong
  • 50 papers: neubig graham
  • 46 papers: liu yang
  • 39 papers: sun maosong
  • 36 papers: zhang min
  • 34 papers: liu qun
  • 33 papers: smith noah a.
  • 28 papers: schütze hinrich
  • 27 papers: liu zhiyuan
  • 27 papers: lapata mirella
  • 26 papers: wen ji-rong
  • 24 papers: chang kai-wei
  • 23 papers: zhou jie
  • 23 papers: yang diyi
  • 23 papers: zhao hai
  • 23 papers: zhao wayne xin
  • 21 papers: chua tat-seng
  • 20 papers: dredze mark
  • 18 papers: biemann chris
  • 18 papers: fung pascale

Language

  • 14,663 papers: English
  • 481 papers: other languages
  • 105 papers: Chinese
  • 18 papers: French
  • 15 papers: Turkish
  • 2 papers: Spanish
  • 2 papers: Russian

Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,258 records in total; results 311-320 are shown below.

PROSE: A Pronoun Omission Solution for Chinese-English Spoken Language Translation
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Ke; Zhao, Xiutian; Li, Yanghui; Peng, Wei (Huawei IT Innovat & Res Ctr, Shenzhen, Peoples R China)
Neural Machine Translation (NMT) systems encounter a significant challenge when translating a pro-drop ('pronoun-dropping') language (e.g., Chinese) to a non-pro-drop one (e.g., English), since the pro-drop ph...

HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Junyi; Cheng, Xiaoxue; Zhao, Wayne Xin; Nie, Jian-Yun; Wen, Ji-Rong (Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China; Renmin Univ China, Sch Informat, Beijing, Peoples R China; Univ Montreal, DIRO, Montreal, PQ, Canada; Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China)
Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content and...

Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Jianwei; Lei, Qi; Cheng, Wei; Xu, Dongkuan (North Carolina State Univ, Raleigh, NC 27695, USA; NYU, New York, NY, USA; NEC Labs, Princeton, NJ, USA)
The pruning objective has recently extended beyond accuracy and sparsity to robustness in language models. Despite this, existing methods struggle to enhance robustness against adversarial attacks when continually inc...

CLEAR: Can Language Models Really Understand Causal Graphs?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Sirui; Xu, Mengying; Wang, Kun; Zeng, Xingyu; Zhao, Rui; Zhao, Shengjie; Lu, Chaochao (Tongji University, China; The Chinese University of Hong Kong, Hong Kong; Shanghai Artificial Intelligence Laboratory, China)
Causal reasoning is a cornerstone of how humans interpret the world. To model and reason about causality, causal graphs offer a concise yet effective solution. Given the impressive advancements in language models, a c...

An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Language Model Inference
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yamaguchi, Atsuki; Villavicencio, Aline; Aletras, Nikolaos (School of Computer Science, University of Sheffield, United Kingdom; Department of Computer Science, Institute of Data Science and Artificial Intelligence, University of Exeter, United Kingdom; The Alan Turing Institute, United Kingdom)
The development of state-of-the-art generative large language models (LLMs) disproportionately relies on English-centric tokenizers, vocabulary and pre-training data. Despite the fact that some LLMs have multilingual ...

Exploring the Learning Capabilities of Language Models using LEVERWORLDS
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wagner, Eitan; Feder, Amir; Abend, Omri (Hebrew University of Jerusalem, Israel; Columbia University, United States)
Learning a model of a stochastic setting often involves learning both general structure rules and specific properties of the instance. This paper investigates the interplay between learning the general and the specifi...

Working Memory Identifies Reasoning Limits in Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Chunhui; Jian, Yiren; Ouyang, Zhongyu; Vosoughi, Soroush (Department of Computer Science, Dartmouth College, United States)
This study explores the inherent limitations of Large Language Models (LLMs) from a scaling perspective, focusing on the upper bounds of their cognitive capabilities. We integrate insights from cognitive science to qu...

GDTB: Genre Diverse Data for English Shallow Discourse Parsing across Modalities, Text Types, and Domains
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Yang Janet; Aoyama, Tatsuya; Scivetti, Wesley; Zhu, Yilun; Behzad, Shabnam; Levine, Lauren Elizabeth; Lin, Jessica; Tiwari, Devika; Zeldes, Amir (Corpling Lab, Georgetown University, United States; MaiNLP, Center for Information and Language Processing, LMU Munich, Germany)
Work on shallow discourse parsing in English has focused on the Wall Street Journal corpus, the only large-scale dataset for the language in the PDTB framework. However, the data is not openly available, is restricted...

CUTE: Measuring LLMs' Understanding of Their Tokens
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Edman, Lukas; Schmid, Helmut; Fraser, Alexander (Center for Information and Language Processing, LMU Munich, Germany; School of Computation, Information and Technology, TU Munich, Germany; Munich Center for Machine Learning, Germany; Munich Data Science Institute, Germany)
Large Language Models (LLMs) show remarkable performance on a wide variety of tasks. Most LLMs split text into multi-character tokens and process them as atomic units without direct access to individual characters. Th...

Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gu, Yu; Shu, Yiheng; Yu, Hao; Liu, Xiao; Dong, Yuxiao; Tang, Jie; Srinivasa, Jayanth; Latapie, Hugo; Su, Yu (The Ohio State University, United States; Tsinghua University, China; Cisco Research)
The applications of large language models (LLMs) have expanded well beyond the confines of text processing, signaling a new era where LLMs are envisioned as generalist agents capable of operating within complex enviro...