
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses/dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Discipline Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (degree grantable...
    • 144 Electronic Science and Technology (degr...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (degree grantable in Science...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manag...
    • 760 Management Science and Engineering (deg...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (degree grantable in Med...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 841-850
Once Upon a Time in Graph: Relative-Time Pretraining for Complex Temporal Reasoning
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Yang, Sen; Li, Xin; Bing, Lidong; Lam, Wai. Affiliations: Chinese Univ Hong Kong, Hong Kong, Peoples R China; Alibaba Grp, DAMO Acad, Hangzhou, Peoples R China; Hupan Lab, Hangzhou 310023, Peoples R China
Our physical world is constantly evolving over time, rendering challenges for pre-trained language models to understand and reason over the temporal contexts of texts. Existing work focuses on strengthening the direct...
DecorateLM: Data Engineering through Corpus Rating, Tagging, and Editing with Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Ranchi; Thai, Zhen Leng; Zhang, Yifan; Hu, Shengding; Ba, Yunqi; Zhou, Jie; Cai, Jie; Liu, Zhiyuan; Sun, Maosong. Affiliations: Modelbest Inc., China; Department of Computer Science and Technology, Tsinghua University, China
The performance of Large Language Models (LLMs) is substantially influenced by the pretraining corpus, which consists of vast quantities of unsupervised data processed by the models. Despite its critical role in model...
Seemingly Plausible Distractors in Multi-Hop Reasoning: Are Large Language Models Attentive Readers?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bhuiya, Neeladri; Schlegel, Viktor; Winkler, Stefan. Affiliations: Singapore; National University of Singapore, Singapore; University of Manchester, United Kingdom; Imperial Global Singapore, Singapore
State-of-the-art Large Language Models (LLMs) are accredited with an increasing number of different capabilities, ranging from reading comprehension over advanced mathematical and reasoning skills to possessing scient...
Assessing the influence of attractor-verb distance on grammatical agreement in humans and language models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zacharopoulos, Christos-Nikolaos; Desbordes, Theo; Sable-Meyer, Mathias. Affiliations: NeuroSpin Ctr, Cognit Neuroimaging Unit, Gif Sur Yvette, France; Sensome SAS, Massy, France; Meta AI Res, Menlo Pk, CA, USA; Univ PSL, Coll France, Paris, France
Subject-verb agreement in the presence of an attractor noun located between the main noun and the verb elicits complex behavior: judgments of grammaticality are modulated by the grammatical features of the attractor. ...
Gender Identity in Pretrained Language Models: An Inclusive Approach to Data Creation and Probing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Knupleš, Urban; Falenska, Agnieszka; Miletić, Filip. Affiliations: Institute for Natural Language Processing, University of Stuttgart, Germany; Interchange Forum for Reflecting on Intelligent Systems, University of Stuttgart, Germany
Pretrained language models (PLMs) have been shown to encode binary gender information of text authors, raising the risk of skewed representations and downstream harms. This effect is yet to be examined for transgender...
An empirical investigation of the neural base approaches based on the sentence length using low-resource language: English-to-Nyishi
INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, 2025, pp. 1-11
Authors: Kakum, Nabam; Kri, Rushanti; Sambyo, Koj. Affiliation: Natl Inst Technol, Dept CSE, Jote, Arunachal Prade, India
Machine translation eliminates the obstacles caused by linguistic disparities around the world. The automatic translation of natural languages using machine translation methods breaks communication barriers and brings...
Natural Language Annotations for Reasoning about Program Semantics
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Author: Zocca, Marco. Affiliation: UnfoldML, Gothenburg, Sweden
By grounding natural language inference in code (and vice versa), researchers aim to create programming assistants that explain their work, are "coachable" and can surface any gaps in their reasoning. Can we...
FAC2E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Xiaoqiang; Wu, Lingfei; Ma, Tengfei; Liu, Bang. Affiliations: DIRO, Institut Courtois, Université de Montréal, Canada; Mila - Quebec AI Institute, Canada; Anytime.AI; Stony Brook University, United States
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks. However, such a paradigm fails to comprehensively differentiate the fine-grained language...
KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lyu, Yougang; Yan, Lingyong; Wang, Shuaiqiang; Shi, Haibo; Yin, Dawei; Ren, Pengjie; Chen, Zhumin; de Rijke, Maarten; Ren, Zhaochun. Affiliations: Shandong University, Qingdao, China; Baidu Inc., Beijing, China; University of Amsterdam, Amsterdam, Netherlands; Leiden University, Leiden, Netherlands
Despite their success at many natural language processing (NLP) tasks, large language models (LLMs) still struggle to effectively leverage knowledge for knowledge-intensive tasks, manifesting limitations such as gener...
The Lou Dataset: Exploring the Impact of Gender-Fair Language in German Text Classification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Waldis, Andreas; Birrer, Joel; Lauscher, Anne; Gurevych, Iryna. Affiliations: Technical University of Darmstadt, Germany; Information Systems Research Lab, Lucerne University of Applied Sciences and Arts, Switzerland; Data Science Group, University of Hamburg, Germany
Gender-fair language, an evolving German linguistic variation, fosters inclusion by addressing all genders or using neutral forms. Nevertheless, there is a significant lack of resources to assess the impact of this li...