
Refine Results

Document Type

  • 14,413 conference papers
  • 646 journal articles
  • 39 theses
  • 36 books
  • 1 technical report

Holdings

  • 15,134 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,934 Engineering
    • 10,275 Computer Science and Technology...
    • 5,404 Software Engineering
    • 1,460 Information and Communication Engineering
    • 953 Electrical Engineering
    • 875 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (may confer Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,413 Management
    • 1,747 Library, Information and Archives Management...
    • 754 Management Science and Engineering (may confer...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 112 Public Health and Preventive Medi...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,523 篇 natural language...
  • 1,768 篇 natural language...
  • 945 篇 computational li...
  • 736 篇 semantics
  • 676 篇 machine learning
  • 606 篇 deep learning
  • 520 篇 natural language...
  • 346 篇 computational mo...
  • 334 篇 training
  • 333 篇 sentiment analys...
  • 330 篇 accuracy
  • 327 篇 large language m...
  • 322 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 263 篇 speech recogniti...
  • 250 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 211 篇 support vector m...

Institution

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 carnegie mellon ...
  • 44 篇 tsinghua univers...
  • 42 篇 zhejiang univers...
  • 41 篇 national univers...
  • 37 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 28 篇 peking universit...
  • 27 篇 language technol...
  • 26 篇 microsoft resear...

Author

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 28 篇 lapata mirella
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,541 English
  • 481 Other
  • 104 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,135 records; showing 371-380
Sort:
Bridging the Digital Divide: Performance Variation across Socio-Economic Factors in Vision-Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Nwatu, Joan; Ignat, Oana; Mihalcea, Rada (Univ Michigan, Ann Arbor, MI 48109, USA)
Despite the impressive performance of current AI models reported across various tasks, performance reports often do not include evaluations of how these models perform on the specific groups that will be impacted by t... Details
Evaluating Large Language Models along Dimensions of Language Variation: A Systematik Invesdigatiom uv Cross-lingual Generalization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bafna, Niyati; Murray, Kenton; Yarowsky, David (Johns Hopkins University, Center for Language and Speech Processing, United States)
While large language models exhibit certain cross-lingual generalization capabilities, they suffer from performance degradation (PD) on unseen closely-related languages (CRLs) and dialects relative to their high-resou... Details
RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xu, Yige; Guo, Xu; Zeng, Zhiwei; Miao, Chunyan (Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly, Singapore; College of Computing and Data Science, Nanyang Technological University, Singapore)
Large language models (LLMs) have brought a great breakthrough to the natural language processing (NLP) community, while leading the challenge of handling concurrent customer queries due to their high throughput deman... Details
QA-NatVer: Question Answering for Natural Logic-based Fact Verification
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Aly, Rami; Strong, Marek; Vlachos, Andreas (Univ Cambridge, Dept Comp Sci & Technol, Cambridge, England)
Fact verification systems assess a claim's veracity based on evidence. An important consideration in designing them is faithfulness, i.e. generating explanations that accurately reflect the reasoning of the model... Details
Neuron-Level Knowledge Attribution in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yu, Zeping; Ananiadou, Sophia (Department of Computer Science, National Centre for Text Mining, The University of Manchester, United Kingdom)
Identifying important neurons for final predictions is essential for understanding the mechanisms of large language models. Due to computational constraints, current attribution techniques struggle to operate at neuro... Details
Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Mingyang; Lange, Lukas; Adel, Heike; Strötgen, Jannik; Schütze, Hinrich (LMU Munich, Germany; Hochschule der Medien, Stuttgart, Germany; Karlsruhe University of Applied Sciences, Germany)
To ensure large language models contain up-to-date knowledge, they need to be updated regularly. However, model editing is challenging as it might also affect knowledge that is unrelated to the new updates. State-of-the-art methods identify pa... Details
LongForm: Effective Instruction Tuning with Reverse Instructions
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Köksal, Abdullatif; Schick, Timo; Korhonen, Anna; Schütze, Hinrich (Center for Information and Language Processing, LMU Munich, Germany; Munich Center for Machine Learning, Germany; Language Technology Lab, University of Cambridge, United Kingdom)
Instruction tuning enables language models to more effectively generalize and better follow user intent. However, obtaining instruction data is costly and challenging. Prior work employs methods such as expensive huma... Details
OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Lu; Zhao, Tiancheng; Ying, Heting; Ma, Yibo; Lee, Kyusong (Om AI Research; Binjiang Institute of Zhejiang University, China)
Recent advancements in Large Language Models (LLMs) have expanded their capabilities to multimodal contexts, including comprehensive video understanding. However, processing extensive videos such as 24-hour CCTV foota... Details
Zero-Shot Cross-Lingual Named Entity Recognition via Progressive Multi-Teacher Distillation
IEEE/ACM Transactions on Audio, Speech and Language Processing, 2024, Vol. 32, pp. 4617-4630
Authors: Li, Zhuoran; Hu, Chunming; Zhang, Richong; Chen, Junfan; Guo, Xiaohui (Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China; Beihang Univ, Sch Software, Beijing 100191, Peoples R China; Beihang Univ, Hangzhou Innovat Inst, Hangzhou 310051, Peoples R China)
Cross-lingual learning aims to transfer knowledge from one natural language to another. Zero-shot cross-lingual named entity recognition (NER) tasks are to train an NER model on source languages and to identify named ... Details
M5 - A Diverse Benchmark to Assess the Performance of Large Multimodal Models Across Multilingual and Multicultural Vision-Language Tasks
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Schneider, Florian; Sitaram, Sunayana (Language Technology Group, Universität Hamburg, Germany; Microsoft Research India, Bangalore, India)
Since the release of ChatGPT, the field of natural language processing has experienced rapid advancements, particularly in Large Language Models (LLMs) and their multimodal counterparts, Large Multimodal Models (LMMs)... Details