Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Biological Engineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (...
    • 144 Electronic Science and Technology (...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Man...
    • 760 Management Science and Engineering (...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (...
    • 113 Public Health and Preventive Med...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institution

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Author

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing results 891-900
Auto-Intent: Automated Intent Discovery and Self-Exploration for Large Language Model Web Agents
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Jaekyeom; Kim, Dong-Ki; Logeswaran, Lajanugen; Sohn, Sungryull; Lee, Honglak (LG AI Research, Republic of Korea; Field AI, United States; University of Michigan, United States)
In this paper, we introduce Auto-Intent, a method to adapt a pre-trained large language model (LLM) as an agent for a target domain without direct fine-tuning, where we empirically focus on web navigation tasks. Our a...
Predict and Use: Harnessing Predicted Gaze to Improve Multimodal Sarcasm Detection
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Tiwari, Divyank Pratap; Kanojia, Diptesh; Ray, Anupama; Nunna, Apoorva; Bhattacharyya, Pushpak (Indian Inst Technol, Comp Indian Language Technol, Mumbai, Maharashtra, India; Univ Surrey, Surrey Inst People Centred AI, Guildford, Surrey, England; IBM Res India, Bangalore, Karnataka, India)
Sarcasm is a complex linguistic construct with incongruity at its very core. Detecting sarcasm depends on the actual content spoken and tonality, facial expressions, the context of an utterance, and personal traits li...
SELFCHECKGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Manakul, Potsawee; Liusie, Adian; Gales, Mark J. F. (Univ Cambridge, ALTA Inst, Dept Engn, Cambridge, England)
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements w...
Large Language Models Can Self-Correct with Key Condition Verification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Zhenyu; Zeng, Qingkai; Zhang, Zhihan; Tan, Zhaoxuan; Shen, Chao; Jiang, Meng (Xi'an Jiaotong University, China; University of Notre Dame, United States)
Intrinsic self-correction is a method that instructs large language models (LLMs) to verify and correct their responses without external feedback. Unfortunately, the study concluded that the LLMs could not self-correct...
To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Junyan; Chen, Haoran; Zhu, Dawei; Shen, Xiaoyu (Digital Twin Institute, Eastern Institute of Technology, Ningbo, China; Saarland University, Saarland Informatics Campus, Germany)
In recent years, multimodal large language models (MLLMs) have garnered significant attention from both industry and academia. However, there is still considerable debate on constructing MLLM architectures, particular...
Learning to Extract Structured Entities Using Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Haolun; Yuan, Ye; Mikaelyan, Liana; Meulemans, Alexander; Liu, Xue; Hensman, James; Mitra, Bhaskar (McGill University, Canada; Mila - Quebec AI Institute, Canada; Microsoft Research, United States; ETH Zürich, Switzerland)
Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text. Prior...
Mixed Distillation Helps Smaller Language Models Reason Better
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Chenglin; Chen, Qianglong; Li, Liangyue; Wang, Caiyu; Tao, Feng; Li, Yicheng; Chen, Zulong; Zhang, Yin (Zhejiang University, Hangzhou, China; Dalian Medical University, Dalian, China; Alibaba, Hangzhou, China)
As large language models (LLMs) have demonstrated impressive multiple step-by-step reasoning capabilities in recent natural language processing (NLP) reasoning tasks, many studies are interested in distilling reasonin...
Revisiting Catastrophic Forgetting in Large Language Model Tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Hongyu; Ding, Liang; Fang, Meng; Tao, Dacheng (Wuhan University, China; University of Liverpool, United Kingdom; The University of Sydney, Australia; Nanyang Technological University, Singapore)
Catastrophic Forgetting (CF) refers to models forgetting previously acquired knowledge when learning new data. It compromises the effectiveness of large language models (LLMs) during fine-tuning, yet the underlying causes...
MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhou, Yuyan; Song, Liang; Wang, Bingning; Chen, Weipeng (Baichuan Inc., China)
The advent of large language models (LLMs) like GPT-4 has catalyzed the exploration of multi-task learning (MTL), in which a single model demonstrates proficiency across diverse tasks. Task arithmetic has emerged as a...
Visual Question Decomposition on Multimodal Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Haowei; Liu, Jianzhe; Han, Zhen; Chen, Shuo; He, Bailan; Tresp, Volker; Xu, Zhiqiang; Gu, Jindong (Technical University of Munich, Germany; Amazon Web Services, United States; LMU Munich, Germany; Munich Center for Machine Learning, Germany; MBZUAI, United Arab Emirates; University of Oxford, United Kingdom)
Question decomposition has emerged as an effective strategy for prompting Large Language Models (LLMs) to answer complex questions. However, while existing methods primarily focus on unimodal language models, the ques...