
Refine Search Results

Document Type

  • 14,413 conference papers
  • 646 journal articles
  • 39 theses and dissertations
  • 36 books
  • 1 technical report

Collection Scope

  • 15,134 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,934 Engineering
    • 10,275 Computer Science and Technology...
    • 5,404 Software Engineering
    • 1,460 Information and Communication Engineering
    • 953 Electrical Engineering
    • 875 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (degree grantable...
    • 141 Electronic Science and Technology (degree grantable...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (degree grantable in Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,413 Management
    • 1,747 Library, Information and Archives Manag...
    • 754 Management Science and Engineering (degr...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (degree grantable in Medi...
    • 112 Public Health and Preventive Medi...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agronomy
  • 97 Economics
  • 9 Art Studies
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,523 natural language...
  • 1,768 natural language...
  • 945 computational li...
  • 736 semantics
  • 676 machine learning
  • 606 deep learning
  • 520 natural language...
  • 346 computational mo...
  • 334 training
  • 333 sentiment analys...
  • 330 accuracy
  • 327 large language m...
  • 322 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 263 speech recogniti...
  • 250 transformers
  • 235 neural networks
  • 217 iterative method...
  • 211 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 carnegie mellon ...
  • 44 tsinghua univers...
  • 42 zhejiang univers...
  • 41 national univers...
  • 37 nanyang technolo...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 university of sc...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 28 peking universit...
  • 27 language technol...
  • 26 microsoft resear...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 28 lapata mirella
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,541 English
  • 481 Other
  • 104 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search condition: "Any field = Conference on empirical methods in natural language processing"
15,135 records; results 241-250 are shown below.
Evalverse: Unified and Accessible Library for Large Language Model Evaluation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Jihoo; Song, Wonho; Kim, Dahyun; Kim, Yunsu; Kim, Yungi; Park, Chanjun (Upstage AI)
This paper introduces Evalverse, a novel library that streamlines the evaluation of Large Language Models (LLMs) by unifying disparate evaluation tools into a single, user-friendly framework. Evalverse enables individ...
Beyond Agreement: Diagnosing the Rationale Alignment of Automated Essay Scoring Methods based on Linguistically-informed Counterfactuals
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yupei; Hu, Renfen; Zhao, Zhe (Beijing Normal University, China; Tencent AI Lab, China)
While current Automated Essay Scoring (AES) methods demonstrate high scoring agreement with human raters, their decision-making mechanisms are not fully *** proposed method, using counterfactual intervention assisted...
Conceptor-Aided Debiasing of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Yifei, Li S.; Ungar, Lyle; Sedoc, Joao (Univ Penn, Philadelphia, PA 19104, USA; NYU, New York, NY, USA)
Pre-trained large language models (LLMs) reflect the inherent social biases of their training corpus. Many methods have been proposed to mitigate this issue, but they often fail to debias or they sacrifice model accur...
Outcome-Constrained Large Language Models for Countering Hate Speech
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hong, Lingzi; Luo, Pengcheng; Blanco, Eduardo; Song, Xiaoying (University of North Texas, United States; Peking University, China; University of Arizona, United States)
Automatic counterspeech generation methods have been developed to assist efforts in combating hate speech. Existing research focuses on generating counterspeech with linguistic attributes such as being polite, informa...
Hop, skip, jump to Convergence: Dynamics of Learning Rate Transitions for Improved Training of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Subramanian, Shreyas; Ganapathiraman, Vignesh; Barrett, Corey (***, United States)
Various types of learning rate (LR) schedulers are being used for training or fine-tuning of Large Language Models today. In practice, several mid-flight changes are required in the LR schedule either manually, or wit...
Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ki, Geewook; Lee, Hodong; Kim, Daehee; Jung, Haeji; Park, Sanghee; Kim, Yoonsik; Yun, Sangdoo; Kim, Taeho; Lee, Bado; Park, Seunghyun (NAVER Cloud AI, Seoul, South Korea; KAIST AI, Daejeon, South Korea; Korea Univ, Seoul, South Korea; NAVER AI Lab, Seoul, South Korea)
Recent advances in Large Language Models (LLMs) have stimulated a surge of research aimed at extending their applications to the visual domain. While these models exhibit promise in generating abstract image captions...
TWBias: A Benchmark for Assessing Social Bias in Traditional Chinese Large Language Models through a Taiwan Cultural Lens
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hsieh, Hsin-Yi; Huang, Shih-Cheng; Tsai, Richard Tzong-Han (Department of Computer Science and Information Engineering, National Central University, Taiwan; Graduate Institute of Communication Engineering, National Taiwan University, Taiwan; Center for GIS, RCHSS, Academia Sinica, Taiwan)
Large Language Models (LLMs) have shown remarkable capabilities in natural language processing, but concerns about social bias amplification have *** research on social bias in LLMs is extensive, studies on non-Englis...
Irrelevant Alternatives Bias Large Language Model Hiring Decisions
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Valkanova, Kremena; Yordanov, Pencho (ETH Zurich, Switzerland; The Adecco Group, Switzerland)
We investigate whether LLMs display a well-known human cognitive bias, the attraction effect, in hiring decisions. The attraction effect occurs when the presence of an inferior candidate makes a superior candidate mor...
When Language Models Fall in Love: Animacy Processing in Transformer Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Hanna, Michael; Belinkov, Yonatan; Pezzelle, Sandro (Univ Amsterdam, ILLC, Amsterdam, Netherlands; Technion IIT, Haifa, Israel)
Animacy, whether an entity is alive and sentient, is fundamental to cognitive processing, impacting areas such as memory, vision, and language. However, animacy is not always expressed directly in language: in English i...
Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Fan, Caoyun; Tian, Jidong; Li, Yitian; Chen, Wenqing; He, Hao; Jin, Yaohui (Shanghai Jiao Tong Univ, AI Inst, MoE Key Lab Artificial Intelligence, Shanghai, Peoples R China; Sun Yat Sen Univ, Sch Software Engn, Guangzhou, Peoples R China)
Chain-of-Thought (CoT) is a technique that guides Large Language Models (LLMs) to decompose complex tasks into multi-step reasoning through intermediate steps in natural language form. Briefly, CoT enables LLMs to thi...