
Refine search results

Document type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Holdings

  • 7,706 electronic documents
  • 1 print holding

Date distribution

Discipline classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology...
    • 74 Biomedical Engineering...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Management...
    • 467 Management Science and Engineering...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art Studies
  • 1 Military Science

Subjects

  • 1,181 篇 natural language...
  • 872 篇 computational li...
  • 619 篇 natural language...
  • 283 篇 semantics
  • 165 篇 natural language...
  • 128 篇 machine learning
  • 127 篇 graphic methods
  • 123 篇 iterative method...
  • 111 篇 sentiment analys...
  • 110 篇 speech recogniti...
  • 105 篇 deep learning
  • 94 篇 syntactics
  • 90 篇 text processing
  • 86 篇 speech processin...
  • 81 篇 embeddings
  • 72 篇 information retr...
  • 69 篇 modeling languag...
  • 69 篇 artificial intel...
  • 66 篇 contrastive lear...
  • 63 篇 zero-shot learni...

Institutions

  • 74 篇 carnegie mellon ...
  • 36 篇 national univers...
  • 34 篇 carnegie mellon ...
  • 34 篇 language technol...
  • 34 篇 institute for na...
  • 33 篇 university of wa...
  • 33 篇 school of comput...
  • 32 篇 tsinghua univers...
  • 31 篇 university of ch...
  • 30 篇 nanyang technolo...
  • 30 篇 stanford univers...
  • 29 篇 zhejiang univers...
  • 27 篇 alibaba grp peop...
  • 26 篇 gaoling school o...
  • 26 篇 carnegie mellon ...
  • 25 篇 harbin institute...
  • 25 篇 peking universit...
  • 25 篇 natl univ singap...
  • 24 篇 allen inst artif...
  • 23 篇 the chinese univ...

Authors

  • 42 篇 neubig graham
  • 39 篇 zhou guodong
  • 39 篇 smith noah a.
  • 36 篇 liu yang
  • 36 篇 lapata mirella
  • 34 篇 sun maosong
  • 32 篇 zhang min
  • 30 篇 liu qun
  • 30 篇 hovy eduard
  • 29 篇 zhao jun
  • 27 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 gurevych iryna
  • 25 篇 vulic ivan
  • 22 篇 huang xuanjing
  • 21 篇 chang kai-wei
  • 21 篇 liu kang
  • 21 篇 zhang yue
  • 21 篇 zhang qi
  • 20 篇 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records in total; results 171-180 are shown below.
RWKV-CLIP: A Robust Vision-Language Representation Learner
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gu, Tiancheng; Yang, Kaicheng; An, Xiang; Feng, Ziyong; Liu, Dongnan; Cai, Weidong; Deng, Jiankang (University of Sydney, Australia; DeepGlint, China; Imperial College, United Kingdom)
Contrastive Language-Image Pre-training (CLIP) has significantly improved performance in various vision-language tasks by expanding the dataset with image-text pairs obtained from the web. This paper further explores ...
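The snippet above only names the training signal: web-scale image-text pairs trained with a contrastive objective. As a point of reference, a minimal CLIP-style symmetric contrastive loss looks roughly like the sketch below in plain PyTorch; the batch size, embedding size, and temperature are arbitrary placeholders, and RWKV-CLIP's actual encoders are not shown.

```python
# Minimal sketch of a CLIP-style symmetric contrastive loss (InfoNCE).
# The random embeddings stand in for encoder outputs; this is not the
# paper's RWKV-based architecture.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarities for every image/text combination in the batch.
    logits = image_emb @ text_emb.t() / temperature          # (B, B)

    # The matching pair sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: image->text and text->image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```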
WALLEDEVAL: A Comprehensive Safety Evaluation Toolkit for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gupta, Prannaya; Yau, Le Qi; Low, Hao Han; Lee, I-Shiang; Lim, Hugo M.; Teoh, Yu Xin; Koh, Jia Hng; Liew, Dar Win; Bhardwaj, Rishabh; Bhardwaj, Rajat; Poria, Soujanya (Walled AI Labs)
WALLEDEVAL is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35...
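WALLEDEVAL's own API is not reproduced here. The sketch below only illustrates, with entirely hypothetical names, the general shape of such a toolkit: run benchmark prompts through an interchangeable model backend and score the responses with a pluggable judge.

```python
# Hypothetical illustration of a prompt-based safety evaluation loop.
# None of these names come from WALLEDEVAL; they only show the general
# shape of running benchmark prompts against interchangeable backends.
from typing import Callable, Iterable

def evaluate_safety(model: Callable[[str], str],
                    judge: Callable[[str, str], bool],
                    prompts: Iterable[str]) -> float:
    """Return the fraction of responses the judge flags as unsafe."""
    prompts = list(prompts)
    unsafe = sum(judge(p, model(p)) for p in prompts)
    return unsafe / len(prompts) if prompts else 0.0

# Toy stand-ins: a "model" that refuses everything and a keyword judge.
toy_model = lambda prompt: "I can't help with that."
toy_judge = lambda prompt, response: "sure, here is how" in response.lower()
rate = evaluate_safety(toy_model, toy_judge, ["How do I pick a lock?"])
```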
Divide-Conquer-Reasoning for Consistency Evaluation and Automatic Improvement of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cui, Wendi; Li, Zhuohang; Lopez, Damien; Das, Kamalika; Malin, Bradley; Kumar, Sricharan; Zhang, Jiaxin (Intuit, United States; Intuit AI Research, United States; Vanderbilt University, United States; Vanderbilt University Medical Center, United States)
Evaluating the quality and consistency of text generated by Large Language Models (LLMs) poses a significant, yet unresolved challenge for industry research. We propose DCR, an automated framework for evaluating and i...
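As a rough, hedged illustration of the divide-then-aggregate idea behind consistency evaluation (DCR's actual reasoner components are LLM-based and are not shown), the sketch below splits a candidate text into sentences, checks each one against a reference with a pluggable function, and averages the verdicts.

```python
# Rough sketch of a divide-then-aggregate consistency check: split the
# candidate into sentences, let a pluggable checker judge each sentence
# against the reference, then average. DCR's LLM-based reasoner is
# abstracted into `check_sentence` here.
import re
from typing import Callable

def consistency_score(candidate: str,
                      reference: str,
                      check_sentence: Callable[[str, str], bool]) -> float:
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", candidate) if s.strip()]
    if not sentences:
        return 0.0
    consistent = sum(check_sentence(s, reference) for s in sentences)
    return consistent / len(sentences)

# Toy checker: a sentence counts as consistent if most of its words
# also appear in the reference text.
def word_overlap_checker(sentence: str, reference: str) -> bool:
    words = set(sentence.lower().split())
    return len(words & set(reference.lower().split())) >= len(words) / 2

score = consistency_score("Paris is in France. It has ten moons.",
                          "Paris is the capital of France.",
                          word_overlap_checker)
```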
This Reads Like That: Deep Learning for Interpretable Natural Language Processing
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Fanconi, Claudio; Vandenhirtz, Moritz; Husmann, Severin; Vogt, Julia E. (Swiss Fed Inst Technol, Zurich, Switzerland)
Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data. While it is mainly applied in computer visi...
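The core prototype-learning recipe the abstract refers to, classifying by similarity to learned prototype vectors, can be sketched in a few lines of PyTorch. This is a generic illustration, not the paper's architecture; the dimensions are arbitrary.

```python
# Minimal sketch of a prototype-similarity classification head: an input
# embedding is compared to learned prototype vectors, and a linear layer
# over those similarity scores produces the class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, embed_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between each input and each prototype.
        sims = F.normalize(x, dim=-1) @ F.normalize(self.prototypes, dim=-1).t()
        # Logits are built only from the prototype similarities, which is
        # what makes the decision inherently inspectable.
        return self.classifier(sims)

# Toy usage: 16 sentence embeddings of size 384, 10 prototypes, 3 classes.
logits = PrototypeHead(384, 10, 3)(torch.randn(16, 384))
```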
Pre-training Cross-lingual Open Domain Question Answering with Large-scale Synthetic Supervision
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jiang, Fan; Drummond, Tom; Cohn, Trevor (School of Computing and Information Systems, The University of Melbourne, Victoria, Australia)
Cross-lingual open domain question answering (CLQA) is a complex problem, comprising cross-lingual retrieval from a multilingual knowledge base, followed by answer generation in the query language. Both steps are usua...
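The two stages named in the abstract, cross-lingual retrieval followed by answer generation in the query language, amount to the control flow sketched below; the retriever and generator callables are placeholders, not the paper's models or training data.

```python
# Schematic of a two-stage CLQA pipeline: retrieve passages from a
# multilingual knowledge base, then generate an answer in the query
# language. Both components are abstract callables here.
from typing import Callable, Sequence

def cross_lingual_qa(query: str,
                     retrieve: Callable[[str, int], Sequence[str]],
                     generate: Callable[[str, Sequence[str]], str],
                     top_k: int = 5) -> str:
    passages = retrieve(query, top_k)   # passages may be in other languages
    return generate(query, passages)    # answer produced in the query language

# Toy stand-ins just to make the control flow concrete.
answer = cross_lingual_qa(
    "¿Dónde nació Marie Curie?",
    retrieve=lambda q, k: ["Maria Skłodowska-Curie urodziła się w Warszawie."],
    generate=lambda q, ps: "Marie Curie nació en Varsovia.",
)
```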
Evaluating Large Language Models along Dimensions of Language Variation: A Systematik Invesdigatiom uv Cross-lingual Generalization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bafna, Niyati; Murray, Kenton; Yarowsky, David (Johns Hopkins University, Center for Language and Speech Processing, United States)
While large language models exhibit certain cross-lingual generalization capabilities, they suffer from performance degradation (PD) on unseen closely-related languages (CRLs) and dialects relative to their high-resou...
TRANSAGENTS: Build Your Translation Company with Language Agents
2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2024
Authors: Wu, Minghao; Xu, Jiahao; Wang, Longyue (Monash University, Australia; Nanyang Technological University, Singapore; Tencent AI Lab, China)
Multi-agent systems empowered by large language models (LLMs) have demonstrated remarkable capabilities in a wide range of downstream applications. In this work, we introduce TRANSAGENTS, a novel multi-agent translati...
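Purely as an illustration of chaining role-prompted LLM calls (the agent roles, prompts, and call_llm backend below are hypothetical and do not reflect TRANSAGENTS' actual design), a multi-agent translation pass might be organised like this:

```python
# Illustrative-only sketch of chaining role-prompted LLM "agents" for
# translation: a translator drafts, an editor revises, a proofreader
# finalizes. All names and prompts here are hypothetical.
from typing import Callable

def agency_translate(text: str, target_lang: str,
                     call_llm: Callable[[str], str]) -> str:
    draft = call_llm(f"You are a translator. Translate into {target_lang}:\n{text}")
    edited = call_llm(f"You are an editor. Improve fluency, keep meaning:\n{draft}")
    final = call_llm(f"You are a proofreader. Fix remaining errors:\n{edited}")
    return final

# Toy backend that just echoes the last line of each prompt, so the
# control flow can be exercised without a real LLM.
result = agency_translate("Hello world", "French",
                          call_llm=lambda p: p.splitlines()[-1])
```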
QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Rajabzadeh, Hossein; Valipour, Mojtaba; Zhu, Tianshu; Tahaei, Marzieh; Kwon, Hyock Ju; Ghodsi, Ali; Chen, Boxing; Rezagholizadeh, Mehdi (University of Waterloo, Canada; Huawei Noah’s Ark Lab, Canada)
Finetuning large language models requires huge GPU memory, restricting the choice to acquire larger models. While the quantized version of the Low-Rank Adaptation technique, named QLoRA, significantly alleviates this ...
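A minimal sketch of the "dynamic low-rank" part of the idea, a LoRA adapter whose effective rank can be chosen at each forward pass by slicing the factor matrices, is shown below. The 4-bit quantization of the frozen base weights that QLoRA/QDyLoRA rely on is deliberately omitted, and the hyperparameters are placeholders.

```python
# Sketch of a LoRA layer whose effective rank can be chosen per forward
# pass by slicing the low-rank factors. The quantization of the frozen
# base weights is omitted for brevity.
import torch
import torch.nn as nn

class DynamicLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, max_rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False        # frozen (would be quantized in QLoRA)
        self.lora_A = nn.Parameter(torch.randn(max_rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, max_rank))
        self.alpha = alpha

    def forward(self, x: torch.Tensor, rank: int) -> torch.Tensor:
        # Slice the adapters to the requested rank instead of training a
        # separate adapter for every rank.
        A, B = self.lora_A[:rank], self.lora_B[:, :rank]
        return self.base(x) + (self.alpha / rank) * (x @ A.t() @ B.t())

# Toy usage: the same adapter evaluated at two different ranks.
layer = DynamicLoRALinear(nn.Linear(64, 64), max_rank=16)
y_small = layer(torch.randn(2, 64), rank=4)
y_full = layer(torch.randn(2, 64), rank=16)
```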
Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kubis, Marek; Skorzewski, Pawel; Sowminski, Marcin; Zietkiewicz, Tomasz (Adam Mickiewicz Univ, Ul Uniwersytetu Poznanskiego 4, PL-61614 Poznan, Poland; Samsung Res Poland, Plac Europejski 1, PL-00844 Warsaw, Poland)
In a spoken dialogue system, an NLU model is preceded by a speech recognition system that can deteriorate the performance of natural language understanding. This paper proposes a method for investigating the impact of...
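The back-transcription procedure described in the abstract can be summarised as a TTS-then-ASR round trip followed by a comparison of the NLU model's predictions on the original and back-transcribed text. The sketch below uses placeholder tts, asr, and nlu callables rather than the paper's actual tooling.

```python
# Conceptual sketch of a back-transcription evaluation loop: synthesize
# the written utterance with TTS, re-transcribe it with ASR, and compare
# the NLU prediction on the original vs. the round-tripped text.
from typing import Callable, Iterable

def back_transcription_robustness(utterances: Iterable[str],
                                  tts: Callable[[str], bytes],
                                  asr: Callable[[bytes], str],
                                  nlu: Callable[[str], str]) -> float:
    """Fraction of utterances whose NLU label survives the TTS->ASR round trip."""
    utterances = list(utterances)
    kept = sum(nlu(u) == nlu(asr(tts(u))) for u in utterances)
    return kept / len(utterances) if utterances else 1.0

# Toy stand-ins: the "ASR" step corrupts one word; the toy NLU happens
# to be unaffected, so the label survives the round trip.
score = back_transcription_robustness(
    ["set an alarm for seven am"],
    tts=lambda text: text.encode(),
    asr=lambda audio: audio.decode().replace("seven", "eleven"),
    nlu=lambda text: "set_alarm",
)
```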
Rethinking Pragmatics in Large Language Models: Towards Open-Ended Evaluation and Preference Tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Shengguang; Yang, Shusheng; Chen, Zhenglun; Su, Qi (Peking University, China; Huazhong University of Science and Technology, China)
This study addresses the challenges of assessing and enhancing social-pragmatic inference in large language models (LLMs). We first highlight the inadequacy of current accuracy-based multiple choice question answering...