Consultation & Suggestions

Refine Search Results

Document Type

  • 7,582 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses/dissertations

Collection Scope

  • 7,703 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,480 Engineering
    • 6,252 Computer Science and Technology...
    • 3,600 Software Engineering
    • 748 Information and Communication Engineering
    • 507 Control Science and Engineering
    • 271 Electrical Engineering
    • 213 Bioengineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 85 Electronic Science and Technology (...
    • 76 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,524 Management
    • 1,167 Library, Information and Archives Man...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,472 Literature
    • 1,465 Foreign Languages and Literature
    • 161 Chinese Language and Literature
  • 1,447 Science
    • 775 Mathematics
    • 352 Physics
    • 250 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 165 Law
    • 153 Sociology
  • 130 Medicine
    • 94 Clinical Medicine
    • 76 Basic Medicine (...
  • 112 Education
    • 106 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topic

  • 1,183 篇 natural language...
  • 872 篇 computational li...
  • 621 篇 natural language...
  • 283 篇 semantics
  • 165 篇 natural language...
  • 128 篇 machine learning
  • 127 篇 graphic methods
  • 123 篇 iterative method...
  • 111 篇 sentiment analys...
  • 110 篇 speech recogniti...
  • 106 篇 deep learning
  • 94 篇 syntactics
  • 90 篇 text processing
  • 86 篇 speech processin...
  • 81 篇 embeddings
  • 72 篇 information retr...
  • 69 篇 modeling languag...
  • 69 篇 artificial intel...
  • 66 篇 contrastive lear...
  • 63 篇 zero-shot learni...

Institution

  • 74 篇 carnegie mellon ...
  • 36 篇 national univers...
  • 34 篇 carnegie mellon ...
  • 34 篇 language technol...
  • 34 篇 institute for na...
  • 33 篇 university of wa...
  • 33 篇 school of comput...
  • 32 篇 tsinghua univers...
  • 30 篇 nanyang technolo...
  • 30 篇 stanford univers...
  • 30 篇 university of ch...
  • 29 篇 zhejiang univers...
  • 27 篇 alibaba grp peop...
  • 26 篇 carnegie mellon ...
  • 25 篇 gaoling school o...
  • 25 篇 harbin institute...
  • 25 篇 peking universit...
  • 25 篇 natl univ singap...
  • 24 篇 allen inst artif...
  • 23 篇 the chinese univ...

Author

  • 42 篇 neubig graham
  • 39 篇 zhou guodong
  • 39 篇 smith noah a.
  • 36 篇 liu yang
  • 36 篇 lapata mirella
  • 34 篇 sun maosong
  • 32 篇 zhang min
  • 30 篇 liu qun
  • 30 篇 hovy eduard
  • 29 篇 zhao jun
  • 27 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 gurevych iryna
  • 25 篇 vulic ivan
  • 22 篇 huang xuanjing
  • 21 篇 chang kai-wei
  • 21 篇 liu kang
  • 21 篇 zhang yue
  • 20 篇 wen ji-rong
  • 20 篇 zhang qi

Language

  • 6,985 English
  • 689 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian

Search criteria: "Any field = Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,704 records; showing 291-300
Sort by:
Who is better at math, Jenny or Jingzhen? Uncovering Stereotypes in Large Language Models

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Siddique, Zara; Turner, Liam D.; Espinosa-Anke, Luis. Affiliations: School of Computer Science and Informatics, Cardiff University, United Kingdom; AMPLYFI, United Kingdom
Large language models (LLMs) have been shown to propagate and amplify harmful stereotypes, particularly those that disproportionately affect marginalised ... understand the effect of these stereotypes more comprehensi...
Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Misra, Kanishka; Ettinger, Allyson; Mahowald, Kyle. Affiliations: The University of Texas at Austin, United States; Allen Institute for Artificial Intelligence, United States
Recent zero-shot evaluations have highlighted important limitations in the abilities of language models (LMs) to perform meaning extraction. However, it is now well known that LMs can demonstrate radical improvements ...
Do LLMs Overcome Shortcut Learning? An Evaluation of Shortcut Challenges in Large Language Models

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yuan, Yu; Zhao, Lili; Zhang, Kai; Zheng, Guangting; Liu, Qi. Affiliations: State Key Lab of Cognitive Intelligence, University of Science and Technology of China, China; School of Computer Science and Technology, University of Science and Technology of China, China
Large language models (LLMs) have shown remarkable capabilities in various natural language processing tasks. However, LLMs may rely on dataset biases as shortcuts for prediction, which can significantly impair their ...
Addressing Linguistic Bias through a Contrastive Analysis of Academic Writing in the NLP Domain

Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ridley, Robert; Wu, Zhen; Zhang, Jianbing; Huang, Shujian; Dai, Xinyu. Affiliations: Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China; Collaborat Innovat Ctr Novel Software Technol & I, Nanjing, Peoples R China
It has been well documented that a reviewer's opinion of the nativeness of expression in an academic paper affects the likelihood of it being accepted for publication. Previous works have also shone a light on the ...
You Make Me Feel like a Natural Question: Training QA Systems on Transformed Trivia Questions

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kabir, Tasnim; Sung, Yoo Yeon; Bandyopadhyay, Saptarashmi; Zou, Hao; Chandra, Abhranil; Boyd-Graber, Jordan Lee. Affiliations: University of Maryland, United States; College of Information, University of Maryland, United States; Columbia University, United States; University of Waterloo, Canada
Training question answering (QA) and information retrieval systems for web queries requires large, expensive datasets that are difficult to annotate and time-consuming to ..., while natural datasets of information-seek...
Right for Right Reasons: Large Language Models for Verifiable Commonsense Knowledge Graph Question Answering

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Toroghi, Armin; Guo, Willis; Pour, Mohammad Mahdi Abdollah; Sanner, Scott. Affiliations: University of Toronto, Canada; Vector Institute of Artificial Intelligence, Toronto, Canada
Knowledge Graph Question Answering (KGQA) methods seek to answer natural language questions using the relational information stored in Knowledge Graphs (KGs). With the recent advancements of Large Language Models (LLM...
Creative Problem Solving in Large Language and Vision Models - What Would it Take?

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Nair, Lakshmi; Gizzi, Evana; Sinapov, Jivko. Affiliations: Georgia Institute of Technology, Atlanta, GA, United States; Tufts University, Medford, MA, United States
We advocate for a strong integration of Computational Creativity (CC) with research in large language and vision models (LLVMs) to address a key limitation of these models, i.e., creative problem solving. We present p...
Comparative Study of Explainability Methods for Legal Outcome Prediction

6th Natural Legal Language Processing Workshop 2024, NLLP 2024, co-located with the 2024 Conference on Empirical Methods in Natural Language Processing
Authors: Staliunaite, Ieva Raminta; Valvoda, Josef; Satoh, Ken. Affiliations: University of Cambridge, United Kingdom; University of Copenhagen, Denmark; National Institute of Informatics, Japan
This paper investigates explainability in Natural Legal Language Processing (NLLP). We study the task of legal outcome prediction of the European Court of Human Rights cases in a ternary classification setup, where a ...
Does Large Language Model Contain Task-Specific Neurons?

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Song, Ran; He, Shizhu; Jiang, Shuting; Xian, Yantuan; Gao, Shengxiang; Liu, Kang; Yu, Zhengtao. Affiliations: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming, China; The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Large language models (LLMs) have demonstrated remarkable capabilities in comprehensively handling various types of natural language processing (NLP) tasks. However, there are significant differences in the knowledge ...
How Susceptible are Large Language Models to Ideological Manipulation?

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Kai; He, Zihao; Yan, Jun; Shi, Taiwei; Lerman, Kristina. Affiliations: Department of Computer Science, University of Southern California, United States; Information Sciences Institute, University of Southern California, United States
Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information. This raises concerns about the societal impact that could arise if the ideolog...