
Refine Search Results

Document Type

  • 14,549 conference papers
  • 662 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection

  • 15,352 electronic documents
  • 1 print holding

Subject Classification

  • 11,015 Engineering
    • 10,349 Computer Science and Technology...
    • 5,460 Software Engineering
    • 1,467 Information and Communication Engineering
    • 956 Electrical Engineering
    • 892 Control Science and Engineering
    • 447 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 177 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,486 Science
    • 1,156 Mathematics
    • 654 Physics
    • 520 Biology
    • 394 Statistics (may confer Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,427 Management
    • 1,756 Library, Information and Archives Manag...
    • 759 Management Science and Engineering (may...
    • 241 Business Administration
    • 106 Public Administration
  • 1,762 Literature
    • 1,710 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 515 Medicine
    • 303 Clinical Medicine
    • 286 Basic Medicine (may confer Medicine...
    • 113 Public Health and Preventive Medi...
  • 279 Law
    • 249 Sociology
  • 239 Education
    • 226 Education
  • 100 Agriculture
  • 96 Economics
  • 10 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,552 natural language...
  • 1,789 natural language...
  • 953 computational li...
  • 741 semantics
  • 683 machine learning
  • 612 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 334 large language m...
  • 334 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 255 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 Carnegie Mellon ...
  • 51 University of Ch...
  • 46 Tsinghua Univers...
  • 45 Carnegie Mellon ...
  • 43 Zhejiang Univers...
  • 43 National Univers...
  • 38 Nanyang Technolo...
  • 36 University of Sc...
  • 36 University of Wa...
  • 35 Univ Chinese Aca...
  • 34 Carnegie Mellon ...
  • 33 Stanford Univers...
  • 32 Gaoling School o...
  • 32 Alibaba Grp Peop...
  • 31 School of Artifi...
  • 29 Tsinghua Univ De...
  • 28 Harbin Institute...
  • 27 Peking Universit...
  • 26 Microsoft Resear...
  • 26 Language Technol...

Authors

  • 55 Zhou Guodong
  • 50 Neubig Graham
  • 46 Liu Yang
  • 39 Sun Maosong
  • 36 Zhang Min
  • 34 Liu Qun
  • 33 Smith Noah A.
  • 28 Schütze Hinrich
  • 26 Wen Ji-Rong
  • 26 Liu Zhiyuan
  • 26 Lapata Mirella
  • 24 Chang Kai-Wei
  • 23 Zhou Jie
  • 23 Yang Diyi
  • 23 Zhao Hai
  • 23 Zhao Wayne Xin
  • 21 Chua Tat-Seng
  • 20 Dredze Mark
  • 18 Biemann Chris
  • 18 Fung Pascale

Language

  • 14,307 English
  • 930 Other
  • 114 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,353 records; showing 1161-1170
Revisiting Source Context in Nearest Neighbor Machine Translation
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Xuanhong; Li, Peng; Hu, Po. Affiliations: Cent China Normal Univ, Hubei Prov Key Lab Artificial Intelligence & Smar, Wuhan, Hubei, Peoples R China; Cent China Normal Univ, Sch Comp Sci, Wuhan, Hubei, Peoples R China; Cent China Normal Univ, Natl Language Resources Monitoring & Res Ctr Netw, Wuhan, Hubei, Peoples R China; Tsinghua Univ, Inst AI Ind Res (AIR), Beijing, Peoples R China
Nearest neighbor machine translation (kNN-MT), which interpolates target token probabilities with estimates derived from additional examples, has achieved significant improvements and attracted extensive interest in re...
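The interpolation described in this abstract can be written in a few lines. Below is a minimal sketch of the idea, not the paper's implementation; `mt_probs`, `knn_probs`, and `lam` are illustrative names, and the kNN distribution is assumed to have been estimated from retrieved datastore neighbors.

```python
import numpy as np

def knn_mt_interpolate(mt_probs: np.ndarray,
                       knn_probs: np.ndarray,
                       lam: float = 0.5) -> np.ndarray:
    """Blend the base translation model's next-token distribution with
    a distribution estimated from retrieved nearest-neighbor examples.

    Both inputs are probability vectors over the target vocabulary;
    `lam` is the weight placed on the kNN estimate.
    """
    return lam * knn_probs + (1.0 - lam) * mt_probs
```

How to set `lam`, and whether it should vary per token, is one of the design questions this line of work explores.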
Reasoning with Language Model is Planning with World Model
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Hao, Shibo; Gu, Yi; Ma, Haodi; Hong, Joshua Jiahua; Wang, Zhen; Wang, Daisy Zhe; Hu, Zhiting. Affiliations: Univ Calif San Diego, La Jolla, CA 92093, USA; Univ Florida, Gainesville, FL 32611, USA; Mohamed bin Zayed Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
Large language models (LLMs) have shown remarkable reasoning capabilities, particularly with chain-of-thought (CoT) prompting. However, LLMs sometimes still struggle with problems that are easy for humans, such as gen...
What Makes a High-Quality Training Dataset for Large Language Models: A Practitioners' Perspective
39th ACM/IEEE International Conference on Automated Software Engineering (ASE)
Authors: Yu, Xiao; Zhang, Zexian; Niu, Feifei; Hu, Xing; Xia, Xin; Grundy, John. Affiliations: Huawei, Hangzhou, Peoples R China; Wuhan Univ Technol, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China; Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON, Canada; Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou, Peoples R China; Monash Univ, Fac Informat Technol, Melbourne, Vic, Australia; Wuhan Univ Technol, Chongqing Res Inst, Chongqing, Peoples R China
Large Language Models (LLMs) have demonstrated remarkable performance in various application domains, largely due to their self-supervised pre-training on extensive high-quality text datasets. However, despite the imp...
Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kim, Gangwoo; Kim, Sungdong; Jeon, Byeongguk; Park, Joonsuk; Kang, Jaewoo. Affiliations: Korea Univ, Seoul, South Korea; NAVER Cloud, Seongnam Si, South Korea; NAVER AI Lab, Seongnam Si, South Korea; KAIST AI, Daejeon, South Korea; Univ Richmond, Richmond, VA 23173, USA
Questions in open-domain question answering are often ambiguous, allowing multiple interpretations. One approach to handling them is to identify all possible interpretations of the ambiguous question (AQ) and to gener...
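The disambiguate-then-answer pattern the abstract begins to describe can be sketched roughly as below. This is a schematic illustration, not the paper's method; `llm` and `retriever` are hypothetical interfaces and the prompts are invented for illustration.

```python
def answer_ambiguous_question(aq: str, llm, retriever, k: int = 5) -> str:
    """Schematic: enumerate plausible interpretations of an ambiguous
    question, answer each against retrieved evidence, then merge the
    pairs into one long-form answer."""
    passages = retriever.retrieve(aq, top_k=k)  # hypothetical retriever interface
    context = "\n".join(p.text for p in passages)

    raw = llm.generate(  # hypothetical LLM interface
        f"Context:\n{context}\n\nList, one per line, the distinct "
        f"interpretations of this ambiguous question: {aq}"
    )
    interpretations = [q.strip() for q in raw.splitlines() if q.strip()]

    qa_pairs = [(q, llm.generate(f"Context:\n{context}\n\nAnswer: {q}"))
                for q in interpretations]
    return llm.generate(
        "Merge these question-answer pairs into one comprehensive answer:\n"
        + "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    )
```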
The Benefits in Shallow: Merge Decoding Across Large Language Model Layers
13th International Conference on Natural Language Processing and Chinese Computing
Authors: Zhou, Yuechi; Zhou, Chuyue; Xie, Wenjing; Wang, Xinrui; Chen, Jiuchang; Ni, Zhenghua; Li, Juntao. Affiliations: Soochow Univ, Inst Comp Sci & Technol, Suzhou, Peoples R China
Large language models (LLMs) have become foundational to numerous natural language processing tasks; however, decoding coherent and contextually relevant text remains a complex challenge. In open-ended generation, maxim...
CONSTRUCTURE: Benchmarking CONcept STRUCTUre REasoning for Multimodal Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zha, Zhiwei; Zhu, Xiangru; Xu, Yuanyi; Huang, Chenghua; Liu, Jingping; Li, Zhixu; Wang, Xuwu; Xiao, Yanghua; Yang, Bei; Xu, XiaoXiao. Affiliations: Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China; School of Information Science and Engineering, East China University of Science and Technology, China; Alibaba Group, China; School of Information, Renmin University of China, China; Renmin University of China, China
Multimodal Large Language Models (MLLMs) have shown promising results in various tasks, but their ability to perceive the visual world with deep, hierarchical understanding similar to humans remains uncertain. To addr...
LLM-driven Instruction Following: Progresses and Concerns
2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Authors: Yin, Wenpeng; Ye, Qinyuan; Liu, Pengfei; Ren, Xiang; Schütze, Hinrich. Affiliations: Penn State, United States; USC, United States; SJTU, China; LMU Munich, Germany
The progress of natural language processing (NLP) is primarily driven by machine learning that optimizes a system on a large-scale set of task-specific labeled examples. This learning paradigm limits the ability of ma...
MIBench: Evaluating Multimodal Large Language Models over Multiple Images
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Haowei; Zhang, Xi; Xu, Haiyang; Shi, Yaya; Jiang, Chaoya; Yan, Ming; Zhang, Ji; Huang, Fei; Yuan, Chunfeng; Li, Bing; Hu, Weiming. Affiliations: MAIS, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Alibaba Group, China; University of Science and Technology of China, China; Peking University, China; School of Information Science and Technology, ShanghaiTech University, China
Built on the power of LLMs, numerous multimodal large language models (MLLMs) have recently achieved remarkable performance on various vision-language tasks. However, most existing MLLMs and benchmarks primarily focus...
RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models
2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Authors: Hoshi, Yasuto; Miyashita, Daisuke; Ng, Youyang; Tatsuno, Kento; Morioka, Yasuhiro; Torii, Osamu; Deguchi, Jun. Affiliations: Kioxia Corporation, Japan
Retrieval-augmented large language models (R-LLMs) combine pre-trained large language models (LLMs) with information retrieval systems to improve the accuracy of factual question-answering. However, current libraries ...
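The retrieve-then-generate loop that such a framework wires together looks roughly like the sketch below. This assumes duck-typed `retriever` and `llm` objects and is not RaLLe's actual API.

```python
def answer_with_retrieval(question: str, retriever, llm, k: int = 5) -> str:
    """Minimal retrieval-augmented QA: fetch supporting passages, place
    them in the prompt, and have the LLM answer grounded on them."""
    passages = retriever.retrieve(question, top_k=k)  # hypothetical interface
    context = "\n\n".join(p.text for p in passages)
    prompt = ("Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm.generate(prompt)  # hypothetical interface
```

A framework in this vein mainly adds the pieces around this loop: swappable retrievers, prompt templates, and evaluation harnesses.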
Eliciting Instruction-tuned Code Language Models' Capabilities to Utilize Auxiliary Function for Code Generation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lee, Seonghyeon; Kim, Suyeon; Jang, Joonwon; Chon, Heejae; Lee, Dongha; Yu, Hwanjo. Affiliations: Department of Computer Science and Engineering, POSTECH, Pohang, Republic of Korea; Department of Artificial Intelligence, POSTECH, Pohang, Republic of Korea; Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
We study the code generation behavior of instruction-tuned models built on top of code pre-trained language models when they could access an auxiliary function to implement a function. We design several ways to provid...
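One simple way to give a code model access to an auxiliary function, in the spirit of this abstract, is to include the helper in the prompt. A hypothetical sketch, not the paper's prompt design; `gcd` is an invented helper and `llm.generate` an assumed interface.

```python
AUX_FUNCTION = '''
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via the Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a
'''

def prompt_with_auxiliary(target_spec: str) -> str:
    """Build a prompt that shows the model an auxiliary function it may
    call while implementing the requested target function."""
    return ("You may use the following helper function:\n"
            f"{AUX_FUNCTION}\n"
            f"Now implement:\n{target_spec}\n")

# Example: request a function whose natural implementation calls the helper.
prompt = prompt_with_auxiliary(
    'def lcm(a: int, b: int) -> int:\n    """Least common multiple."""'
)
# completion = llm.generate(prompt)  # hypothetical LLM interface
```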