
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may confer...
    • 144 Electronic Science and Technology (may confer...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may confer Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Management...
    • 760 Management Science and Engineering (may confer...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may confer Medicine...
    • 113 Public Health and Preventive Medicine...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Languages

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: any field = "Conference on empirical methods in natural language processing"
15,363 records; showing 831-840
Making Large Language Models Better Reasoners with Orchestrated Streaming Experiences
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Xiangyang; He, Junliang; Qiu, Xipeng (School of Computer Science, Fudan University; Shanghai Collaborative Innovation Center of Intelligent Visual Computing, China)
Large language models (LLMs) can perform complex reasoning by generating intermediate thoughts under zero-shot or few-shot settings. However, zero-shot prompting often suffers from low performance, and the superior per...
Exploring Chain of Thought Style Prompting for Text-to-SQL
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Tai, Chang-You; Chen, Ziru; Zhang, Tianshu; Deng, Xiang; Sun, Huan (Ohio State Univ, Columbus, OH 43210, USA)
In-context learning with large language models (LLMs) has recently attracted increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much ...
From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: He, Qianyu; Zeng, Jie; He, Qianxi; Liang, Jiaqing; Xiao, Yanghua (Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China; School of Data Science, Fudan University, China)
It is imperative for large language models (LLMs) to follow instructions with elaborate requirements (i.e., complex instruction following). Yet, it remains under-explored how to enhance the ability of LLMs to follow c...
Unveiling Factual Recall Behaviors of Large Language Models through Knowledge Neurons
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yifei; Chen, Yuheng; Wen, Wanting; Sheng, Yu; Li, Linjing; Zeng, Daniel (State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Beijing Wenge Technology Co., Ltd., Beijing, China)
In this paper, we investigate whether large language models (LLMs) actively recall or retrieve their internal repositories of factual knowledge when faced with reasoning tasks. Through an analysis of LLMs' interna...
CONTESTS: A Framework for Consistency Testing of Span Probabilities in Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wagner, Eitan; Slavutsky, Yuli; Abend, Omri (School of Computer Science and Engineering and Department of Statistics and Data Science, Hebrew University of Jerusalem, Israel)
Although language model scores are often treated as probabilities, their reliability as probability estimators has mainly been studied through calibration, overlooking other aspects. In particular, it is unclear wheth...
The Empirical Variability of Narrative Perceptions of Social Media Texts
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mire, Joel; Antoniak, Maria; Ash, Elliott; Piper, Andrew; Sap, Maarten (Carnegie Mellon University, United States; University of Copenhagen, Denmark; ETH Zürich, Switzerland; McGill University, Canada; Allen Institute for AI, United States)
Most NLP work on narrative detection has focused on prescriptive definitions of stories crafted by researchers, leaving open the questions: how do crowd workers perceive texts to be a story, and why? We investigate th...
Hateful Word in Context Classification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hoeken, Sanne; Zarrieß, Sina; Alaçam, Özge (Department of Linguistics, Bielefeld University, Germany; Center for Information and Language Processing, LMU Munich, Germany)
Hate speech detection is a prevalent research field, yet it remains underexplored at the level of word meaning. This is significant, as terms used to convey hate often involve non-standard or novel usages which might ...
Can Language Models Induce Grammatical Knowledge from Indirect Evidence?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Oba, Miyu; Oseki, Yohei; Fukatsu, Akiyo; Haga, Akari; Ouchi, Hiroki; Watanabe, Taro; Sugawara, Saku (Nara Institute of Science and Technology, Japan; The University of Tokyo, Japan; National Institute of Informatics, Japan)
What kinds of, and how much, data are necessary for language models to induce grammatical knowledge to judge sentence acceptability? Recent language models still have much room for improvement in their data efficiency co...
ToViLaG: Your Visual-Language Generative Model is Also An Evildoer
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Xinpeng; Yi, Xiaoyuan; Jiang, Han; Zhou, Shanlin; Wei, Zhihua; Xie, Xing (Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China; Microsoft Res Asia, Redmond, WA, USA)
Warning: this paper includes model outputs showing offensive content. Recent large-scale Visual-Language Generative Models (VLGMs) have achieved unprecedented improvement in multimodal image/text generation. However, ...
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Weissweiler, Leonie; Hofmann, Valentin; Kantharuban, Anjali; Cai, Anna; Dutt, Ritam; Hengle, Amey; Kabra, Anubha; Kulkarni, Atharva; Vijayakumar, Abhishek; Yu, Haofei; Schuetze, Hinrich; Oflazer, Kemal; Mortensen, David R. (Carnegie Mellon Univ, Pittsburgh, PA 15213, USA; Ludwig Maximilians Univ Munchen, Munich, Germany; Univ Oxford, Oxford, England; Munich Ctr Machine Learning, Munich, Germany; Allen Inst AI, Seattle, WA, USA; IIT Delhi, Delhi, India)
Large language models (LLMs) have recently reached an impressive level of linguistic capability, prompting comparisons with human language skills. However, there have been relatively few systematic inquiries into the ...