
Refine Search Results

Document Type

  • 14,463 conference papers
  • 654 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,258 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,944 papers in Engineering
    • 10,283 papers in Computer Science and Technology...
    • 5,408 papers in Software Engineering
    • 1,463 papers in Information and Communication Engineering
    • 954 papers in Electrical Engineering
    • 880 papers in Control Science and Engineering
    • 446 papers in Biological Engineering
    • 221 papers in Cyberspace Security
    • 220 papers in Chemical Engineering and Technology
    • 186 papers in Mechanical Engineering
    • 174 papers in Biomedical Engineering (degree may be awarded...
    • 142 papers in Electronic Science and Technology (degree may be...
    • 101 papers in Instrument Science and Technology
    • 99 papers in Safety Science and Engineering
  • 2,473 papers in Science
    • 1,150 papers in Mathematics
    • 649 papers in Physics
    • 518 papers in Biology
    • 391 papers in Statistics (degree may be awarded in science...
    • 241 papers in Systems Science
    • 232 papers in Chemistry
  • 2,416 papers in Management
    • 1,748 papers in Library, Information and Archives Management...
    • 757 papers in Management Science and Engineering (degree may be...
    • 239 papers in Business Administration
    • 104 papers in Public Administration
  • 1,761 papers in Literature
    • 1,709 papers in Foreign Languages and Literatures
    • 184 papers in Chinese Language and Literature
  • 510 papers in Medicine
    • 299 papers in Clinical Medicine
    • 283 papers in Basic Medicine (degree may be awarded in medicine...
    • 111 papers in Public Health and Preventive Medicine...
  • 276 papers in Law
    • 248 papers in Sociology
  • 237 papers in Education
    • 224 papers in Education
  • 100 papers in Agriculture
  • 96 papers in Economics
  • 9 papers in Art Studies
  • 7 papers in Philosophy
  • 4 papers in Military Science

Topics

  • 3,535 papers on natural language...
  • 1,768 papers on natural language...
  • 952 papers on computational li...
  • 740 papers on semantics
  • 681 papers on machine learning
  • 609 papers on deep learning
  • 520 papers on natural language...
  • 347 papers on computational mo...
  • 338 papers on training
  • 333 papers on accuracy
  • 331 papers on sentiment analys...
  • 329 papers on large language m...
  • 321 papers on feature extracti...
  • 311 papers on data mining
  • 290 papers on speech processin...
  • 260 papers on speech recogniti...
  • 252 papers on transformers
  • 235 papers on neural networks
  • 217 papers on iterative method...
  • 212 papers on support vector m...

Institutions

  • 85 papers from carnegie mellon ...
  • 51 papers from university of ch...
  • 45 papers from tsinghua univers...
  • 45 papers from carnegie mellon ...
  • 43 papers from zhejiang univers...
  • 43 papers from national univers...
  • 38 papers from nanyang technolo...
  • 36 papers from university of wa...
  • 35 papers from univ chinese aca...
  • 34 papers from university of sc...
  • 34 papers from carnegie mellon ...
  • 33 papers from stanford univers...
  • 32 papers from gaoling school o...
  • 32 papers from school of artifi...
  • 32 papers from alibaba grp peop...
  • 29 papers from tsinghua univ de...
  • 28 papers from harbin institute...
  • 27 papers from language technol...
  • 27 papers from peking universit...
  • 26 papers from microsoft resear...

Authors

  • 55 papers by zhou guodong
  • 50 papers by neubig graham
  • 46 papers by liu yang
  • 39 papers by sun maosong
  • 36 papers by zhang min
  • 34 papers by liu qun
  • 33 papers by smith noah a.
  • 28 papers by schütze hinrich
  • 27 papers by liu zhiyuan
  • 27 papers by lapata mirella
  • 26 papers by wen ji-rong
  • 24 papers by chang kai-wei
  • 23 papers by zhou jie
  • 23 papers by yang diyi
  • 23 papers by zhao hai
  • 23 papers by zhao wayne xin
  • 21 papers by chua tat-seng
  • 20 papers by dredze mark
  • 18 papers by biemann chris
  • 18 papers by fung pascale

Language

  • 14,662 papers in English
  • 482 papers in other languages
  • 106 papers in Chinese
  • 18 papers in French
  • 15 papers in Turkish
  • 2 papers in Spanish
  • 2 papers in Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,259 records; results 561-570 are shown below.
CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ormazabal, Aitor; Artetxe, Mikel; Agirre, Eneko (Univ Basque Country UPV EHU, HiTZ Ctr, Leioa, Spain; Reka AI, Sunnyvale, CA, USA)
Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in ...
Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Lee, Janghwan; Kim, Minsoo; Baek, Seungcheol; Hwang, Seok Joong; Sung, Wonyong; Choi, Jungwook (Hanyang Univ, Seoul, South Korea; SAPEON Korea Inc, Seongnam-si, South Korea; Seoul Natl Univ, Seoul, South Korea)
Large Language Models (LLMs) are proficient in natural language processing tasks, but their deployment is often restricted by extensive parameter sizes and computational demands. This paper focuses on post-training qu...
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Tian, Katherine; Mitchell, Eric; Zhou, Allan; Sharma, Archit; Rafailov, Rafael; Yao, Huaxiu; Finn, Chelsea; Manning, Christopher D. (Harvard Univ, Cambridge, MA 02138, USA; Stanford Univ, Stanford, CA 94305, USA)
A trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to ...
Large Language Models and Multimodal Retrieval for Visual Word Sense Disambiguation
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kritharoula, Anastasia; Lymperaiou, Maria; Stamou, Giorgos (Natl Tech Univ Athens, Sch Elect & Comp Engn, Artificial Intelligence & Learning Syst Lab, Athens, Greece)
Visual Word Sense Disambiguation (VWSD) is a novel and challenging task whose goal is to retrieve, from a set of candidates, the image that best represents the meaning of an ambiguous word within a given context. In t...
Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Araujo, Vladimir; Moens, Marie-Francine; Tuytelaars, Tinne (KU Leuven, Belgium)
Parameter-efficient fine-tuning (PEFT) methods are increasingly used with pre-trained language models (PLMs) for continual learning (CL). These methods typically involve training a PEFT module for each new task and em...
Task-Adaptive Tokenization: Enhancing Long-Form Text Generation Efficacy in Mental Health and Beyond
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Liu, Siyang; Jia, Yilin; Deng, Naihao; Huang, Minlie; Sabour, Sahand; Mihalcea, Rada (Univ Michigan, Dept Comp Sci & Engn, Language & Informat Technol Lab (LIT), Ann Arbor, MI 48109, USA; Tsinghua Univ, CoAI Grp, Beijing, Peoples R China)
We propose task-adaptive tokenization as a way to adapt the generation pipeline to the specifics of a downstream task and enhance long-form generation in mental health. Inspired by insights from cognitive science, ...
TongGu: Mastering Classical Chinese Understanding with Knowledge-Grounded Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cao, Jiahuan; Peng, Dezhi; Zhang, Peirong; Shi, Yongxin; Liu, Yang; Ding, Kai; Jin, Lianwen (South China University of Technology, China; Intsig Information Co. Ltd., Singapore; INTSIG-SCUT Joint Lab on Document Analysis and Recognition, China)
Classical Chinese is a gateway to the rich heritage and wisdom of ancient China, yet its complexities pose formidable comprehension barriers for most modern people without specialized knowledge. While Large Language M...
Towards Interpretable Sequence Continuation: Analyzing Shared Circuits in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lan, Michael; Torr, Philip; Barez, Fazl (Apart Research; Department of Engineering Sciences, University of Oxford, United Kingdom)
While transformer models exhibit strong capabilities on linguistic tasks, their complex architectures make them difficult to interpret. Recent work has aimed to reverse engineer transformer models into human-readable ...
Instructed Language Models with Retrievers Are Powerful Entity Linkers
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Xiao, Zilin; Gong, Ming; Wu, Jie; Zhang, Xingyao; Shou, Linjun; Jiang, Daxin (Rice Univ, Houston, TX 77251, USA; Microsoft STCA, Redmond, WA, USA)
Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning. Yet the generative nature still makes the generated content suffer ...
Uncertainty in Language Models: Assessment through Rank-Calibration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huang, Xinmeng; Li, Shuo; Yu, Mengxin; Sesia, Matteo; Hassani, Hamed; Lee, Insup; Bastani, Osbert; Dobriban, Edgar (University of Pennsylvania, Philadelphia, PA, United States; University of Southern California, Los Angeles, CA, United States)
Language Models (LMs) have shown promising performance in natural language generation. However, as LMs often generate incorrect or hallucinated responses, it is crucial to correctly quantify their uncertainty in respo...