
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (degrees awardable...)
    • 144 Electronic Science and Technology (degrees awardable...)
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (degrees awardable...)
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Management...
    • 760 Management Science and Engineering (degrees awardable...)
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (degrees awardable...)
    • 113 Public Health and Preventive Medicine...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institution

  • 85 Carnegie Mellon ...
  • 52 University of Ch...
  • 46 Tsinghua Univers...
  • 45 Carnegie Mellon ...
  • 43 Zhejiang Universi...
  • 43 National Univers...
  • 38 Nanyang Technolo...
  • 36 University of Sc...
  • 36 University of Wa...
  • 35 Univ Chinese Aca...
  • 34 Carnegie Mellon ...
  • 33 Gaoling School o...
  • 33 Stanford Univers...
  • 32 School of Artifi...
  • 32 Alibaba Grp Peop...
  • 29 Tsinghua Univ De...
  • 28 Harbin Institute...
  • 26 Microsoft Resear...
  • 26 Language Technol...
  • 26 Peking Universit...

Author

  • 55 Zhou Guodong
  • 50 Neubig Graham
  • 46 Liu Yang
  • 39 Sun Maosong
  • 36 Zhang Min
  • 34 Liu Qun
  • 33 Smith Noah A.
  • 28 Schütze Hinrich
  • 27 Liu Zhiyuan
  • 26 Wen Ji-Rong
  • 26 Lapata Mirella
  • 24 Chang Kai-Wei
  • 23 Zhou Jie
  • 23 Yang Diyi
  • 23 Zhao Hai
  • 23 Zhao Wayne Xin
  • 21 Chua Tat-Seng
  • 20 Dredze Mark
  • 18 Biemann Chris
  • 18 Fung Pascale

Language

  • 14,282 English
  • 966 other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: Any Field = "Conference on empirical methods in natural language processing"
15,363 records; items 731-740 follow
HISTALIGN: Improving Context Dependency in Language Generation by Aligning with History
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wan, David; Zhang, Shiyue; Bansal, Mohit (UNC Chapel Hill, Chapel Hill, NC 27599, USA)
Language models (LMs) can generate hallucinations and incoherent outputs, which highlights their weak context dependency. CacheLMs, which augment LMs with a memory of recent history, can increase context dependency an... (see the sketch following this record)
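The cache-augmented LM idea this abstract builds on is easy to illustrate. The sketch below implements the classic continuous-cache interpolation: the base model's next-token distribution is mixed with an attention-weighted distribution over recently emitted tokens. This is a generic illustration of cache LMs, not the HISTALIGN method itself; the mixing weight `lam`, the temperature `theta`, and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cache_lm_probs(p_model, h_t, cache_keys, cache_tokens, vocab_size,
                   lam=0.2, theta=1.0):
    """Mix a base LM distribution with a continuous cache over recent history.

    p_model      : (V,) next-token distribution from the base LM
    h_t          : (d,) current hidden state
    cache_keys   : (T, d) hidden states stored at the last T positions
    cache_tokens : (T,) token ids that followed those states
    lam, theta   : assumed interpolation weight and similarity temperature
    """
    weights = softmax(theta * (cache_keys @ h_t))   # attention over history
    p_cache = np.zeros(vocab_size)
    np.add.at(p_cache, cache_tokens, weights)       # scatter mass onto tokens
    return (1.0 - lam) * p_model + lam * p_cache    # history-aware mixture

# Toy usage with random values, just to exercise the shapes.
rng = np.random.default_rng(0)
V, d, T = 50, 8, 10
p = cache_lm_probs(softmax(rng.normal(size=V)), rng.normal(size=d),
                   rng.normal(size=(T, d)), rng.integers(0, V, size=T), V)
assert abs(p.sum() - 1.0) < 1e-9
```

The interpolation is what raises context dependency: tokens from recent history gain probability mass whenever the current hidden state resembles the states at which they appeared.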
Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Chenxu; Jian, Ping; Huang, Mu (Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China; Beijing Inst Technol, Beijing Engn Res Ctr High Volume Language Informa, Beijing, Peoples R China)
Implicit Discourse Relation Recognition (IDRR), which infers discourse relations without the help of explicit connectives, is still a crucial and challenging task for discourse parsing. Recent works tend to exploit th... (see the sketch following this record)
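Prompt-based IDRR methods like the one referenced above commonly recast relation classification as connective prediction. The sketch below shows a generic cloze formulation: place a mask between the two arguments, score candidate connectives with a masked LM, and map them to coarse relations through a verbalizer. The template, the connective-to-relation mapping, and the `roberta-base` checkpoint are assumptions for illustration, not this paper's actual design.

```python
from transformers import pipeline

# Cloze-style prompt: predict a connective between the two arguments,
# then map the predicted connective to a coarse discourse relation.
fill = pipeline("fill-mask", model="roberta-base")

# Verbalizer (assumed mapping): connective word -> coarse relation.
VERBALIZER = {
    "because": "Contingency", "so": "Contingency",
    "but": "Comparison", "however": "Comparison",
    "and": "Expansion", "specifically": "Expansion",
    "then": "Temporal", "later": "Temporal",
}

def classify_relation(arg1, arg2):
    prompt = f"{arg1} <mask> {arg2}"
    scores = {rel: 0.0 for rel in set(VERBALIZER.values())}
    for cand in fill(prompt, top_k=50):
        word = cand["token_str"].strip().lower()
        if word in VERBALIZER:
            scores[VERBALIZER[word]] += cand["score"]
    return max(scores, key=scores.get)

print(classify_relation("It was raining hard", "the game was cancelled"))
```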
DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Han, Chengcheng; Du, Xiaowei; Zhang, Che; Lian, Yixin; Li, Xiang; Gao, Ming; Wang, Baoyuan (East China Normal Univ, Sch Data Sci & Engn, Shanghai, Peoples R China; Xiaobing AI, Boston, MA 02199, USA; Peking Univ, Sch Software & Microelect, Beijing, Peoples R China; East China Normal Univ, KLATASDS MOE, Sch Stat, Shanghai, Peoples R China)
Chain-of-Thought (CoT) prompting has proven effective in enhancing the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters. However, it is ineffective or even detrimental ... (see the sketch following this record)
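Because the abstract above centers on Chain-of-Thought prompting, a minimal sketch of plain CoT may help situate it. The code only builds a few-shot CoT prompt and parses the final answer; `ask_llm` is a hypothetical stand-in for any text-completion callable, and nothing here reproduces the paper's DialCoT decomposition or its PPO training.

```python
import re

# One worked exemplar shows the model the step-by-step answer format.
COT_EXEMPLAR = (
    "Q: A pen costs 2 dollars and a notebook costs 3 dollars. "
    "How much do 2 pens and 3 notebooks cost?\n"
    "A: 2 pens cost 2 * 2 = 4 dollars. 3 notebooks cost 3 * 3 = 9 dollars. "
    "4 + 9 = 13. The answer is 13.\n\n"
)

def build_cot_prompt(question):
    """Few-shot CoT prompt: exemplar first, then the new question."""
    return COT_EXEMPLAR + "Q: " + question + "\nA: Let's think step by step."

def parse_final_answer(completion):
    """Extract the last 'The answer is X' span, if the model produced one."""
    matches = re.findall(r"The answer is ([-0-9.]+)", completion)
    return matches[-1] if matches else None

def solve(question, ask_llm):
    """ask_llm: hypothetical callable mapping a prompt to a completion."""
    return parse_final_answer(ask_llm(build_cot_prompt(question)))

# Canned 'model' so the sketch runs end to end.
canned = lambda _prompt: "3 apples cost 3 * 2 = 6 dollars. The answer is 6."
print(solve("How much do 3 apples cost at 2 dollars each?", canned))  # 6
```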
Can Transformers Learn n-gram Language Models?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Svete, Anej; Borenstein, Nadav; Zhou, Mike; Augenstein, Isabelle; Cotterell, Ryan (ETH Zürich, Switzerland; University of Copenhagen, Denmark; University of Pennsylvania, United States)
Much theoretical work has described the ability of transformers to represent formal languages. However, linking theoretical results to empirical performance is not straightforward due to the complex interplay between ... (see the sketch following this record)
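For context on the paper's object of study: an n-gram model estimates the probability of each token from counts of the preceding n-1 tokens. A minimal bigram (n = 2) version with add-alpha smoothing, on a made-up toy corpus, looks like this:

```python
from collections import Counter, defaultdict

def train_bigram(corpus, alpha=1.0):
    """Count-based bigram LM with add-alpha smoothing.

    corpus: list of sentences, each a list of tokens.
    Returns prob(cur, prev) = P(cur | prev).
    """
    vocab = {tok for sent in corpus for tok in sent} | {"<s>", "</s>"}
    unigram, bigram = Counter(), defaultdict(Counter)
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(toks, toks[1:]):
            unigram[prev] += 1
            bigram[prev][cur] += 1

    def prob(cur, prev):
        # Smoothed relative frequency over the full vocabulary.
        return (bigram[prev][cur] + alpha) / (unigram[prev] + alpha * len(vocab))

    return prob

prob = train_bigram([["the", "cat", "sat"], ["the", "dog", "sat"]])
print(prob("cat", "the"))   # seen continuation: relatively high
print(prob("the", "cat"))   # unseen continuation: smoothed, low
```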
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Chen, Liang; Deng, Yang; Bian, Yatao; Qin, Zeyu; Wu, Bingzhe; Chua, Tat-Seng; Wong, Kam-Fai (Chinese Univ Hong Kong, Hong Kong, Peoples R China; Tencent AI Lab, Shenzhen, Peoples R China; Natl Univ Singapore, Singapore; Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China)
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when prompted to generate world knowledge. Yet, community concerns abound regarding the factualit...
Large Language Models are Temporal and Causal Reasoners for Video Question Answering
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ko, Dohwan; Lee, Ji Soo; Kang, Wooyoung; Roh, Byungseok; Kim, Hyunwoo J. (Korea Univ, Dept Comp Sci & Engn, Seoul, South Korea; Kakao Brain, Jeju City, South Korea)
Large Language Models (LLMs) have shown remarkable performance on a wide range of natural language understanding and generation tasks. We observe that LLMs provide effective priors in exploiting linguistic shortc...
On the Empirical Complexity of Reasoning and Planning in LLMs
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kang, Liwei; Zhao, Zirui; Hsu, David; Lee, Wee Sun (National University of Singapore, Singapore)
Chain-of-thought (CoT), tree-of-thought (ToT), and related techniques work surprisingly well in practice for some complex reasoning tasks with Large Language Models (LLMs), but why? This work seeks the underlying reas...
Enhancing Reinforcement Learning with Dense Rewards from Language Model Critic
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cao, Meng; Shu, Lei; Yu, Lei; Zhu, Yun; Wichers, Nevan; Liu, Yinxiao; Meng, Lei (School of Computer Science, McGill University, Canada; Department of Computer Science, University of Toronto, Canada; Mila - Québec AI Institute, Canada; Google DeepMind, United Kingdom)
Reinforcement learning (RL) can align language models with non-differentiable reward signals, such as human preferences. However, a major challenge arises from the sparsity of these reward signals: typically, there i... (see the sketch following this record)
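The sparse-reward problem named in this abstract is easy to see in code: with a terminal-only reward, every token before the last one receives zero learning signal. The sketch below shows one generic remedy, potential-based shaping from a critic's prefix scores; it is an assumed illustration rather than this paper's exact method, and `critic_score` is a hypothetical stand-in for a learned language-model critic.

```python
def sparse_rewards(tokens, terminal_reward):
    """Terminal-only reward: all signal arrives at the final token."""
    return [0.0] * (len(tokens) - 1) + [terminal_reward]

def dense_rewards(tokens, critic_score, terminal_reward, beta=0.5):
    """Per-token rewards from a critic's score changes over prefixes.

    critic_score: hypothetical callable mapping a token prefix to a
    scalar quality estimate (standing in for an LM critic).
    """
    rewards, prev = [], critic_score([])
    for t in range(1, len(tokens) + 1):
        cur = critic_score(tokens[:t])
        rewards.append(beta * (cur - prev))  # shaping term per step
        prev = cur
    rewards[-1] += terminal_reward           # keep the original objective
    return rewards

# Toy critic that likes prefixes containing the word "safe".
toy_critic = lambda prefix: float(sum(tok == "safe" for tok in prefix))
print(sparse_rewards(["the", "safe", "answer"], 1.0))               # [0.0, 0.0, 1.0]
print(dense_rewards(["the", "safe", "answer"], toy_critic, 1.0))    # [0.0, 0.5, 1.0]
```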
KMatrix: A Flexible Heterogeneous Knowledge Enhancement Toolkit for Large Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Shun; Wu, Di; Luo, Kun; Zhang, XueYou; Zhao, Jun; Liu, Kang (The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Shanghai Artificial Intelligence Laboratory, China)
Knowledge-Enhanced Large Language Models (K-LLMs) systems enhance the abilities of Large Language Models (LLMs) using external knowledge. Existing K-LLMs toolkits mainly focus on free-textual knowledge, lacking support for h...
ADELIE: Aligning Large Language Models on Information Extraction
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Qi, Yunjia; Peng, Hao; Wang, Xiaozhi; Xu, Bin; Hou, Lei; Li, Juanzi (Department of Computer Science and Technology, BNRist, Tsinghua University, China)
Large language models (LLMs) usually fall short on information extraction (IE) tasks and struggle to follow the complex instructions of IE tasks. This primarily arises from LLMs not being aligned with humans, as mains...