
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses/dissertations
  • 1 technical report

Holdings

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (degree conferrable...
    • 144 Electronic Science and Technology (degree conferrable...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (degree conferrable in Science, ...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manage...
    • 760 Management Science and Engineering (degree conferrable...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (degree conferrable...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 Carnegie Mellon ...
  • 52 University of Ch...
  • 46 Tsinghua Univers...
  • 45 Carnegie Mellon ...
  • 43 Zhejiang Univers...
  • 43 National Univers...
  • 38 Nanyang Technolo...
  • 36 University of Sc...
  • 36 University of Wa...
  • 35 Univ Chinese Aca...
  • 34 Carnegie Mellon ...
  • 33 Gaoling School o...
  • 33 Stanford Univers...
  • 32 School of Artifi...
  • 32 Alibaba Grp Peop...
  • 29 Tsinghua Univ De...
  • 28 Harbin Institute...
  • 26 Microsoft Resear...
  • 26 Language Technol...
  • 26 Peking Universit...

Authors

  • 55 Zhou Guodong
  • 50 Neubig Graham
  • 46 Liu Yang
  • 39 Sun Maosong
  • 36 Zhang Min
  • 34 Liu Qun
  • 33 Smith Noah A.
  • 28 Schütze Hinrich
  • 27 Liu Zhiyuan
  • 26 Wen Ji-Rong
  • 26 Lapata Mirella
  • 24 Chang Kai-Wei
  • 23 Zhou Jie
  • 23 Yang Diyi
  • 23 Zhao Hai
  • 23 Zhao Wayne Xin
  • 21 Chua Tat-Seng
  • 20 Dredze Mark
  • 18 Biemann Chris
  • 18 Fung Pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 911–920
Perceptions to Beliefs: Exploring Precursory Inferences for Theory of Mind in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jung, Chani; Kim, Dongkwan; Jin, Jiho; Kim, Jiseon; Seonwoo, Yeon; Choi, Yejin; Oh, Alice; Kim, Hyunwoo (KAIST, Republic of Korea; Amazon, United States; University of Washington, United States; Allen Institute for AI, United States)
While humans naturally develop theory of mind (ToM), the capability to understand other people's mental states and beliefs, state-of-the-art large language models (LLMs) underperform on simple ToM tasks. We posit that w...
Transfer-Free Data-Efficient Multilingual Slot Labeling
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Razumovskaia, Evgeniia; Vulic, Ivan; Korhonen, Anna (Univ Cambridge, Language Technol Lab, Cambridge, England)
Slot labeling (SL) is a core component of task-oriented dialogue (TOD) systems, where slots and corresponding values are usually language-, task- and domain-specific. Therefore, extending the system to any new languag...
MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Jingfan; Zhao, Yi; Chen, Dan; Tian, Xing; Zheng, Huanran; Zhu, Wei (iFLYTEK Co. Ltd., China; University of Pennsylvania, United States; Lenovo Connect Co. Ltd., China; Niuxin Network Technology Co. Ltd., China; East China Normal University, China)
Low-rank adaptation (LoRA) and its mixture-of-experts (MOE) variants are highly effective parameter-efficient fine-tuning (PEFT) methods. However, they introduce significant latency in multi-tenant settings due to the...
Improving Retrieval in Sponsored Search by Leveraging Query Context Signals
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mohankumar, Akash Kumar; Gururaj, K.; Madan, Gagan; Singh, Amit (Microsoft, India)
Accurately retrieving relevant bid keywords for user queries is critical in Sponsored Search but remains challenging, particularly for short, ambiguous queries. Existing dense and generative retrieval models often fai...
Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Xin; Bayat, Farima Fatahi; Wang, Lu (University of Michigan, Ann Arbor, MI, United States)
Calibrating language models (LMs) aligns their generation confidence with the actual likelihood of answer correctness, which can inform users about LMs' reliability and mitigate hallucinated content. However, prio...
Social Media Topic Classification on Greek Reddit
INFORMATION, 2024, Vol. 15, No. 9, p. 521
Authors: Mastrokostas, Charalampos; Giarelis, Nikolaos; Karacapilidis, Nikos (Univ Patras, Ind Management & Informat Syst Lab, MEAD, Rion 26504, Greece)
Text classification (TC) is a subtask of natural language processing (NLP) that categorizes text pieces into predefined classes based on their textual content and thematic aspects. This process typically includes the ...
A Cheaper and Better Diffusion Language Model with Soft-Masked Noise
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Chen, Jiaao; Zhang, Aston; Li, Mu; Smola, Alex; Yang, Diyi (Georgia Inst Technol, Atlanta, GA 30332, USA; Meta GenAI, Sunnyvale, CA, USA; Stanford Univ, Stanford, CA 94305, USA)
Diffusion models based on iterative denoising have recently been proposed and leveraged in various generation tasks like image generation. However, as an approach inherently built for continuous data, existing diff...
What's "up" with vision-language models? Investigating their struggle with spatial reasoning
What's "up" with vision-language models? Investigating their...
收藏 引用
conference on empirical methods in natural language processing (EMNLP)
作者: Kamath, Amita Hessel, Jack Chang, Kai-Wei Univ Calif Los Angeles Los Angeles CA 90095 USA Allen Inst AI Seattle WA USA
Recent vision-language (VL) models are powerful, but can they reliably distinguish "right" from "left"? We curate three new corpora to quantify model comprehension of such basic spatial relations. ... 详细信息
来源: 评论
Characterizing Mechanisms for Factual Recall in Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Yu, Qinan; Merullo, Jack; Pavlick, Ellie (Brown Univ, Dept Comp Sci, Providence, RI 02912, USA)
Language Models (LMs) often must integrate facts they memorized in pretraining with new information that appears in a given context. These two sources can disagree, causing competition within the model, and it is uncl...
Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Arora, Daman; Singh, Himanshu Gaurav; Mausam (Microsoft Res, Redmond, WA 98052, USA; Univ Calif Berkeley, Berkeley, CA 94720, USA; IIT Delhi, New Delhi, India)
The performance of large language models (LLMs) on existing reasoning benchmarks has significantly improved over the past years. In response, we present JEEBENCH, a considerably more challenging benchmark dataset for ...