
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Holdings

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 papers: Engineering
    • 10,359 papers: Computer Science and Technology...
    • 5,436 papers: Software Engineering
    • 1,474 papers: Information and Communication Engineering
    • 963 papers: Electrical Engineering
    • 925 papers: Control Science and Engineering
    • 446 papers: Bioengineering
    • 223 papers: Cyberspace Security
    • 220 papers: Chemical Engineering and Technology
    • 187 papers: Mechanical Engineering
    • 175 papers: Biomedical Engineering (may be conferred...
    • 144 papers: Electronic Science and Technology (may...
    • 102 papers: Instrument Science and Technology
    • 99 papers: Safety Science and Engineering
  • 2,494 papers: Science
    • 1,163 papers: Mathematics
    • 655 papers: Physics
    • 520 papers: Biology
    • 395 papers: Statistics (may be conferred in Science, ...
    • 241 papers: Systems Science
    • 235 papers: Chemistry
  • 2,427 papers: Management
    • 1,755 papers: Library, Information and Archives Manag...
    • 760 papers: Management Science and Engineering (may...
    • 241 papers: Business Administration
    • 106 papers: Public Administration
  • 1,761 papers: Literature
    • 1,709 papers: Foreign Languages and Literature
    • 184 papers: Chinese Language and Literature
  • 514 papers: Medicine
    • 303 papers: Clinical Medicine
    • 284 papers: Basic Medicine (may be conferred in Medicine...
    • 113 papers: Public Health and Preventive Med...
  • 278 papers: Law
    • 249 papers: Sociology
  • 238 papers: Education
    • 225 papers: Education
  • 100 papers: Agriculture
  • 98 papers: Economics
  • 9 papers: Art
  • 7 papers: Philosophy
  • 4 papers: Military Science

Topics

  • 3,557 papers: natural language...
  • 1,786 papers: natural language...
  • 953 papers: computational li...
  • 740 papers: semantics
  • 682 papers: machine learning
  • 613 papers: deep learning
  • 520 papers: natural language...
  • 352 papers: computational mo...
  • 343 papers: accuracy
  • 339 papers: training
  • 335 papers: large language m...
  • 335 papers: sentiment analys...
  • 325 papers: feature extracti...
  • 312 papers: data mining
  • 290 papers: speech processin...
  • 260 papers: speech recogniti...
  • 256 papers: transformers
  • 236 papers: neural networks
  • 218 papers: iterative method...
  • 212 papers: support vector m...

Institutions

  • 85 papers: Carnegie Mellon ...
  • 52 papers: University of Ch...
  • 46 papers: Tsinghua Univers...
  • 45 papers: Carnegie Mellon ...
  • 43 papers: Zhejiang Univers...
  • 43 papers: National Univers...
  • 38 papers: Nanyang Technolo...
  • 36 papers: University of Sc...
  • 36 papers: University of Wa...
  • 35 papers: Univ Chinese Aca...
  • 34 papers: Carnegie Mellon ...
  • 33 papers: Gaoling School o...
  • 33 papers: Stanford Univers...
  • 32 papers: School of Artifi...
  • 32 papers: Alibaba Grp Peop...
  • 29 papers: Tsinghua Univ De...
  • 28 papers: Harbin Institute...
  • 26 papers: Microsoft Resear...
  • 26 papers: Language Technol...
  • 26 papers: Peking Universit...

Authors

  • 55 papers: Zhou Guodong
  • 50 papers: Neubig Graham
  • 46 papers: Liu Yang
  • 39 papers: Sun Maosong
  • 36 papers: Zhang Min
  • 34 papers: Liu Qun
  • 33 papers: Smith Noah A.
  • 28 papers: Schütze Hinrich
  • 27 papers: Liu Zhiyuan
  • 26 papers: Wen Ji-Rong
  • 26 papers: Lapata Mirella
  • 24 papers: Chang Kai-Wei
  • 23 papers: Zhou Jie
  • 23 papers: Yang Diyi
  • 23 papers: Zhao Hai
  • 23 papers: Zhao Wayne Xin
  • 21 papers: Chua Tat-Seng
  • 20 papers: Dredze Mark
  • 18 papers: Biemann Chris
  • 18 papers: Fung Pascale

Language

  • 14,282 papers: English
  • 966 papers: Other
  • 113 papers: Chinese
  • 18 papers: French
  • 14 papers: Turkish
  • 2 papers: German
  • 2 papers: Spanish
  • 2 papers: Russian
Search condition: Any field = "Conference on empirical methods in natural language processing"
15,363 records; showing 981–990
TransferTOD: A Generalizable Chinese Multi-Domain Task-Oriented Dialogue System with Transfer Capabilities
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Ming; Huang, Caishuang; Wu, Yilong; Liu, Shichun; Zheng, Huiyuan; Dong, Yurui; Shen, Yujiong; Dou, Shihan; Zhao, Jun; Ye, Junjie; Zhang, Qi; Gui, Tao; Huang, Xuanjing. Affiliations: School of Computer Science, Fudan University, China; Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China; Institute of Modern Languages and Linguistics, Fudan University, China
Task-oriented dialogue (TOD) systems aim to efficiently handle task-oriented conversations, including information collection. How to utilize TOD accurately, efficiently and effectively for information collection has a...
IAEval: A Comprehensive Evaluation of Instance Attribution on Natural Language Understanding
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Gui, Peijian; Shen, Yaozong; Wang, Lijie; Wang, Quan; Wu, Hua; Mao, Zhendong. Affiliations: Univ Sci & Technol China, Hefei, Peoples R China; Baidu Inc, Beijing, Peoples R China; Beijing Univ Posts & Telecommun, MOE Key Lab Trustworthy Distributed Comp & Serv, Beijing, Peoples R China
Instance attribution (IA) aims to identify the training instances leading to the prediction of a test example, helping researchers understand the dataset better and optimize data processing. While many IA methods have...
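A common family of instance-attribution methods scores each training example by the similarity between its loss gradient and the test example's gradient. A toy, self-contained sketch of that general idea (illustrative only; the logistic model and function names are assumptions, not IAEval's actual evaluation protocol):

```python
import numpy as np

def grad_logistic(w, x, y):
    """Per-example gradient of the logistic loss, with label y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def attribute(w, train, test_point):
    """Rank training instances by gradient dot-product with the test example."""
    g_test = grad_logistic(w, *test_point)
    scores = [float(g_test @ grad_logistic(w, x, y)) for x, y in train]
    return np.argsort(scores)[::-1]  # most influential first

w = np.array([1.0, -1.0])
train = [
    (np.array([2.0, 0.0]), 1),   # gradient aligned with the test gradient
    (np.array([0.0, 2.0]), 0),   # gradient orthogonal to the test gradient
    (np.array([-1.0, 0.0]), 0),
]
test_point = (np.array([1.0, 0.0]), 1)
ranking = attribute(w, train, test_point)  # the orthogonal example ranks last
```

Real IA methods (influence functions, TracIn, and variants) refine this with Hessian corrections or checkpoint averaging, but the gradient-similarity core is the same.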
DecoMT: Decomposed Prompting for Machine Translation Between Related Languages Using Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Puduppully, Ratish; Kunchukuttan, Anoop; Dabre, Raj; Aw, Ai Ti; Chen, Nancy F. Affiliations: ASTAR Inst Infocomm Res (I2R), Singapore; CNRS CREATE, Singapore; Microsoft, Bangalore, Karnataka, India; Natl Inst Informat & Communicat Technol, Tokyo, Japan; IIT Madras, Madras, Tamil Nadu, India; AI4Bharat, Madras, Tamil Nadu, India
This study investigates machine translation between related languages, i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through...
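Decomposed prompting can be pictured as a two-stage pipeline: translate short chunks independently, then refine the stitched draft in context. The scaffold below shows only the stage-one prompt assembly; the chunk size, language pair, and prompt wording are illustrative assumptions, and the actual LLM call is omitted:

```python
def chunks(words, size=2):
    """Split a sentence into contiguous chunks of `size` words."""
    return [words[i:i + size] for i in range(0, len(words), size)]

def stage1_prompts(sentence, src="Hindi", tgt="Marathi", size=2):
    """One independent word-level translation prompt per chunk.
    A second, contextual stage (not shown) would stitch and refine
    the chunk translations using the surrounding context."""
    return [
        f"Translate {src} to {tgt}: {' '.join(c)}"
        for c in chunks(sentence.split(), size)
    ]

prompts = stage1_prompts("w1 w2 w3 w4 w5")  # three prompts for a 5-word input
```

The appeal for closely related languages is that chunk-local, near word-order-preserving translation is often already adequate, leaving less work for the contextual pass.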
Sparse Low-rank Adaptation of Pre-trained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ding, Ning; Lv, Xingtai; Wang, Qiaosen; Chen, Yulin; Zhou, Bowen; Liu, Zhiyuan; Sun, Maosong. Affiliations: Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China; Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China; Tsinghua Univ, BNRIST, IAI, Beijing, Peoples R China; Univ Chicago, Dept Stat, Chicago, IL, USA
Fine-tuning pre-trained large language models in a parameter-efficient manner is widely studied for its effectiveness and efficiency. The popular method of low-rank adaptation (LoRA) offers a notable approach, hypothe...
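For context, the abstract builds on low-rank adaptation (LoRA). A minimal NumPy sketch of the standard LoRA forward pass (not this paper's sparse variant; shapes and the alpha/r scaling follow the common convention): the frozen weight W gains a trainable low-rank product, and the up-projection starts at zero so training begins exactly at the base model.

```python
import numpy as np

def lora_forward(x, W, A_down, B_up, alpha=16.0):
    """y = x @ (W + (alpha / r) * A_down @ B_up).

    W      : frozen base weight, shape (d_in, d_out)
    A_down : trainable down-projection, shape (d_in, r), small random init
    B_up   : trainable up-projection, shape (r, d_out), zero init so the
             adapter is a no-op before any training
    """
    r = A_down.shape[1]
    delta = (alpha / r) * (A_down @ B_up)
    return x @ (W + delta)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_in, d_out))
A_down = rng.normal(scale=0.01, size=(d_in, r))
B_up = np.zeros((r, d_out))  # zero init: output matches the base model
x = rng.normal(size=(1, d_in))

out = lora_forward(x, W, A_down, B_up)  # identical to x @ W before training
```

Only A_down and B_up are updated during fine-tuning (2 * d * r parameters per layer instead of d * d), which is what makes the approach parameter-efficient.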
PaperMage: A Unified Toolkit for Processing, Representing, and Manipulating Visually-Rich Scientific Documents
2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Authors: Lo, Kyle; Shen, Zejiang; Newman, Benjamin; Chang, Joseph Chee; Authur, Russell; Bransom, Erin; Candra, Stefan; Chandrasekhar, Yoganand; Huff, Regan; Kuehl, Bailey; Singh, Amanpreet; Wilhelm, Chris; Zamarron, Angele; Hearst, Marti A.; Weld, Daniel S.; Downey, Doug; Soldaini, Luca. Affiliations: Allen Institute for AI, United States; Massachusetts Institute of Technology, United States; University of California, Berkeley, United States; University of Washington, United States; Northwestern University, United States
Despite growing interest in applying natural language processing (NLP) and computer vision (CV) models to the scholarly domain, scientific documents remain challenging to work with. They're often in difficult-to-use...
MVP-Bench: Can Large Vision-Language Models Conduct Multi-level Visual Perception Like Humans?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Guanzhen; Xie, Yuxi; Kan, Min-Yen. Affiliation: National University of Singapore, Singapore
Humans perform visual perception at multiple levels, including low-level object recognition and high-level semantic interpretation such as behavior understanding. Subtle differences in low-level details can lead to su...
Self-Influence Guided Data Reweighting for Language Model Pre-training
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Thakkar, Megh; Bolukbasi, Tolga; Ganapathy, Sriram; Vashishth, Shikhar; Chandar, Sarath; Talukdar, Partha. Affiliations: Mila Quebec AI Inst, Montreal, PQ, Canada; Google DeepMind, London, England; Google Res India, Bangalore, Karnataka, India; Indian Inst Sci, Bangalore, Karnataka, India; Polytech Montreal, Montreal, PQ, Canada; Canada CIFAR AI Chair, Montreal, PQ, Canada
Language Models (LMs) pre-trained with self-supervision on large text corpora have become the default starting point for developing models for various NLP tasks. Once the pre-training corpus has been assembled, all da...
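Self-influence is commonly operationalised (as in TracIn-style analyses) as the squared norm of an example's own loss gradient; unusually high values often flag noisy or atypical samples. A toy sketch of reweighting on that signal (the hard quantile cutoff is an illustrative choice, not the paper's exact scheme):

```python
import numpy as np

def self_influence(grads):
    """TracIn-style self-influence: squared gradient norm per example."""
    return np.array([g @ g for g in grads])

def reweight(grads, keep=0.75):
    """Down-weight the highest self-influence examples.
    Keeps the lowest `keep` fraction at weight 1.0, rest at 0.0 (hard filter);
    a soft scheme would scale weights continuously instead."""
    s = self_influence(grads)
    cutoff = np.quantile(s, keep)
    return (s <= cutoff).astype(float)

grads = [
    np.array([0.1, 0.0]),
    np.array([0.2, 0.1]),
    np.array([5.0, 5.0]),  # outlier gradient, likely a noisy example
    np.array([0.0, 0.3]),
]
weights = reweight(grads)  # the outlier example gets weight 0.0
```

In pre-training, such weights would multiply each example's loss term, steering optimisation toward cleaner data without changing the corpus itself.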
Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhou, Xin; Nie, Ping; Guo, Yiwen; Wei, Haojie; Zhang, Zhanqiu; Minervini, Pasquale; Ma, Ruotian; Gui, Tao; Zhang, Qi; Huang, Xuanjing. Affiliations: School of Computer Science, Fudan University, Shanghai, China; LightSpeed Studios, Tencent, China; Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China; Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China; School of Informatics and ELLIS, University of Edinburgh, United Kingdom
Retrieval-Augmented Generation (RAG) significantly improved the ability of Large Language Models (LLMs) to solve knowledge-intensive tasks. While existing research seeks to enhance RAG performance by retrieving higher...
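The "core experts" framing presupposes standard top-k mixture-of-experts routing: a router scores all experts, only the k best fire, and their softmax weights are renormalised. A minimal sketch of that routing step alone (expert count, k, and logits are illustrative; how the paper identifies RAG-critical experts is not shown):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def topk_gate(router_logits, k=2):
    """Top-k MoE routing: keep the k highest-scoring experts,
    renormalise their weights, zero out all other experts."""
    idx = np.argsort(router_logits)[-k:]  # indices of the k active experts
    gates = np.zeros_like(router_logits, dtype=float)
    gates[idx] = softmax(router_logits[idx])
    return gates, set(idx.tolist())

logits = np.array([0.1, 2.0, -1.0, 1.5])
gates, active = topk_gate(logits, k=2)  # experts 1 and 3 are active
```

The token's output is then the gate-weighted sum of the active experts' outputs; identifying which experts consistently activate on retrieval-augmented inputs is the kind of analysis the abstract alludes to.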
PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Gyeongman; Jang, Doohyuk; Yang, Eunho. Affiliations: Republic of Korea; AITRICS, Republic of Korea
Recent advancements in large language models (LLMs) have raised concerns about inference costs, increasing the need for research into model compression. While knowledge distillation (KD) is a prominent method for this...
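The distillation objective underneath methods like this is typically a temperature-scaled KL divergence between teacher and student token distributions; PromptKD's specific contribution (soft prompts that make the teacher student-friendly) is omitted here. A minimal sketch of the base KD loss, with illustrative logits and temperature:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student), the standard KD objective.
    Zero when the two distributions match, positive otherwise."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

same = kd_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # matching logits: loss 0
diff = kd_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])  # mismatched: loss > 0
```

Raising T softens both distributions, exposing the teacher's relative preferences over non-argmax tokens, which is the "dark knowledge" KD transfers.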
Learning to Use Tools via Cooperative and Interactive Agents with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shi, Zhengliang; Gao, Shen; Chen, Xiuyi; Feng, Yue; Yan, Lingyong; Shi, Haibo; Yin, Dawei; Ren, Pengjie; Verberne, Suzan; Ren, Zhaochun. Affiliations: Shandong University, China; University of Electronic Science and Technology of China, China; Baidu Inc., Beijing, China; University of Birmingham, Birmingham, United Kingdom; Leiden University, Leiden, Netherlands
Tool learning empowers large language models (LLMs) as agents to use external tools and extend their utility. Existing methods employ one single LLM-based agent to iteratively select and execute tools, thereafter inco...